"Earth itself is telling us there’s nothing to worry about in doubled, or even quadrupled, atmospheric CO2"

Readers may recall Pat Frank's excellent essay on uncertainty in the temperature record. He emailed me about this new essay he posted on the Air Vent, suggesting I cover it at WUWT; I regret that it got lost in my firehose of daily email. Here it is now.  – Anthony

Future Perfect

By Pat Frank

In my recent “New Science of Climate Change” post here on Jeff’s tAV, the cosine fits to differences among the various GISS surface air temperature anomaly data sets were intriguing. So, I decided to see what, if anything, cosines might tell us about the surface air temperature anomaly trends themselves.  It turned out they have a lot to reveal.

As a qualifier, regular tAV readers know that I’ve published on the amazing neglect of the systematic instrumental error present in the surface air temperature record. It seems certain that surface air temperatures are so contaminated with systematic error, at least (+/-)0.5 C, that the global air temperature anomaly trends have no climatological meaning. I’ve done further work on this issue and, although the analysis is incomplete, so far it looks like the systematic instrumental error may be worse than we thought. But that’s for another time.

Systematic error is funny business. In surface air temperatures it’s not necessarily a constant offset but a variable error. That means it not only biases the mean of a data set, but is also likely to have an asymmetric distribution within the data. Systematic error of that sort in a temperature series may enhance a time-wise trend or diminish it, or switch back and forth in some unpredictable way between these two effects. Since the systematic error arises from the effects of weather on the temperature sensors, it will vary continuously with the weather. The mean error bias will be different for every data set, and so will the distribution envelope of the systematic error.

For right now, though, I’d like to put all that aside and proceed with an analysis that accepts the air temperature context as found within the IPCC ballpark. That is, for the purposes of this analysis I’m assuming that the global average surface air temperature anomaly trends are real and meaningful.

I have the GISS and the CRU annual surface air temperature anomaly data sets out to 2010. In order to make the analyses comparable, I used the GISS start time of 1880. Figure 1 shows what happened when I fit these data with a combined cosine function plus a linear trend. Both data sets were well-fit.

The unfit residuals are shown below the main plots. A linear fit to the residuals tracked exactly along the zero line, to 1 part in ~10^5. This shows that both sets of anomaly data are very well represented by a cosine-like oscillation plus a rising linear trend. The linear parts of the fitted trends were: GISS, 0.057 C/decade and CRU, 0.058 C/decade.
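
As a sketch of how such a fit can be set up (in MATLAB, the language used in the comment thread below; this is not the code actually used for Figure 1), with yr and anom standing in for the year and anomaly vectors, and the starting guesses only rough assumptions:
tc    = yr - 1945;                                   % center the time axis so the fit is well scaled
model = @(p,t) p(1)*cos(2*pi*(t - p(2))./p(3)) + p(4)*t + p(5);
ssq   = @(p) sum((anom - model(p, tc)).^2);          % sum of squared residuals
p0    = [0.1, -5, 60, 0.006, 0];                     % guesses: amplitude (C), phase (yr), period (yr), slope (C/yr), offset (C)
pfit  = fminsearch(ssq, p0);                         % simple downhill fit of all five parameters
resid = anom - model(pfit, tc);                      % should scatter about zero with no net trend
rate  = 10*pfit(4);                                  % linear part of the trend, C/decade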

Figure 1. Upper: Trends for the annual surface air temperature anomalies, showing the OLS fits with a combined cosine function plus a linear trend. Lower: The (data minus fit) residual. The colored lines along the zero axis are linear fits to the respective residual. These show the unfit residuals have no net trend. Part a, GISS data; part b, CRU data.

Removing the oscillations from the global anomaly trends should leave only the linear parts of the trends. What does that look like?  Figure 2 shows this: the linear trends remaining in the GISS and CRU anomaly data sets after the cosine is subtracted away. The pure subtracted cosines are displayed below each plot.

Each of the plots showing the linearized trends also includes two straight lines. One of them is the line from the cosine plus linear fits of Figure 1. The other straight line is a linear least squares fit to the linearized trends. The linear fits had slopes of: GISS, 0.058 C/decade and CRU, 0.058 C/decade, which may as well be identical to the line slopes from the fits in Figure 1.

Figure 1 and Figure 2 show that to a high degree of certainty, and apart from year-to-year temperature variability, the entire trend in global air temperatures since 1880 can be explained by a linear trend plus an oscillation.

Figure 3 shows that the GISS cosine and the CRU cosine are very similar – probably identical given the quality of the data. They show a period of about 60 years, and an intensity of about (+/-)0.1 C. These oscillations are clearly responsible for the visually arresting slope changes in the anomaly trends after 1915 and after 1975.

Figure 2. Upper: The linear part of the annual surface average air temperature anomaly trends, obtained by subtracting the fitted cosines from the entire trends. The two straight lines in each plot are: OLS fits to the linear trends and, the linear parts of the fits shown in Figure 1. The two lines overlay. Lower: The subtracted cosine functions.

The surface air temperature data sets consist of land surface temperatures plus the SSTs. It seems reasonable that the oscillation represented by the cosine stems from a net heating-cooling cycle of the world ocean.

Figure 3: Comparison of the GISS and CRU fitted cosines.

The major oceanic cycles include the PDO, the AMO, and the Indian Ocean oscillation. Joe D’Aleo has a nice summary of these here (pdf download).

The combined PDO+AMO is a rough oscillation and has a period of about 55 years, with a 20th century maximum near 1937 and a minimum near 1972 (D’Aleo Figure 11). The combined ocean cycle appears to be close to another maximum near 2002 (although the PDO has turned south). The period and phase of the PDO+AMO correspond very well with the fitted GISS and CRU cosines, and so it appears we’ve found a net world ocean thermal signature in the air temperature anomaly data sets.

In the “New Science” post we saw a weak oscillation appear in the GISS surface anomaly difference data after 1999, when the SSTs were added in. Prior and up to 1999, the GISS surface anomaly data included only the land surface temperatures.

So, I checked the GISS 1999 land surface anomaly data set to see whether it, too, could be represented by a cosine-like oscillation plus a linear trend. And so it could. The oscillation had a period of 63 years and an intensity of (+/-)0.1 C. The linear trend was 0.047 C/decade; pretty much the same oscillation, but a warming trend slower by 0.01 C/decade. So, it appears that the net world ocean thermal oscillation is teleconnected into the global land surface air temperatures.

But that’s not the analysis that interested me. Figure 2 appears to show that the entire 130 years between 1880 and 2010 has had a steady warming trend of about 0.058 C/decade. This seems to explain the almost rock-steady 20th century rise in sea level, doesn’t it?

The argument has always been that the climate of the first 40-50 years of the 20th century was unaffected by human-produced GHGs. After 1960 or so, certainly after 1975, the GHG effect kicked in, and the thermal trend of the global air temperatures began to show a human influence. So the story goes.

Isn’t that claim refuted if the late 20th century warmed at the same rate as the early 20th century? That seems to be the message of Figure 2.

But the analysis can be carried further. The early and late air temperature anomaly trends can be assessed separately, and then compared. That’s what was done for Figure 4, again using the GISS and CRU data sets. In each data set, I fit the anomalies separately over 1880-1940, and over 1960-2010.  In the “New Science of Climate Change” post, I showed that these linear fits can be badly biased by the choice of starting points. The anomaly profile at 1960 is similar to the profile at 1880, and so these two starting points seem to impart no obvious bias. Visually, the slope of the anomaly temperatures after 1960 seems pretty steady, especially in the GISS data set.

Figure 4 shows the results of these separate fits, yielding the linear warming trend for the early and late parts of the last 130 years.
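
Again as a sketch rather than the code behind Figure 4, and continuing the placeholder names from the earlier sketch, the separate window fits are just two ordinary least-squares lines over the cosine-subtracted series:
lin_anom = anom - pfit(1)*cos(2*pi*(tc - pfit(2))./pfit(3));  % cosine removed, as in Figure 2
early = (yr >= 1880 & yr <= 1940);
late  = (yr >= 1960 & yr <= 2010);
p_early = polyfit(yr(early), lin_anom(early), 1);             % [slope intercept], slope in C/yr
p_late  = polyfit(yr(late),  lin_anom(late),  1);
rates   = 10*[p_early(1), p_late(1)];                         % C/decade, cf. Table 1
excess  = rates(2) - rates(1);                                % the ~0.03 C/decade attributed below to GHG forcing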

Figure 4: The Figure 2 linearized trends from the GISS and CRU surface air temperature anomalies showing separate OLS linear fits to the 1880-1940 and 1960-2010 sections.

The fit results of the early and later temperature anomaly trends are in Table 1.


Table 1: Decadal Warming Rates for the Early and Late Periods.

Data Set    C/d (1880-1940)    C/d (1960-2010)    (late minus early)
GISS        0.056              0.087              0.031
CRU         0.044              0.073              0.029

“C/d” is the slope of the fitted lines in Celsius per decade.

So there we have it. Both data sets show the later period warmed more quickly than the earlier period. Although the GISS and CRU rates differ by about 12%, the changes in rate (the “late minus early” column) are essentially identical.

If we accept the IPCC/AGW paradigm and grant the climatological purity of the early 20th century, then the natural recovery rate from the LIA averages about 0.05 C/decade. To proceed, we have to assume that the natural rate of 0.05 C/decade was fated to remain unchanged for the entire 130 years, through to 2010.

Assuming that, then the increased slope of 0.03 C/decade after 1960 is due to the malign influences from the unnatural and impure human-produced GHGs.

Granting all that, we now have a handle on the most climatologically elusive quantity of all: the climate sensitivity to GHGs.

I still have all the atmospheric forcings for CO2, methane, and nitrous oxide that I calculated for my Skeptic paper (http://www.skeptic.com/reading_room/a-climate-of-belief/). Together, these constitute the great bulk of new GHG forcing since 1880. Total chlorofluorocarbons add another 10% or so, but that’s not a large impact so they were ignored.

All we need do now is plot the progressive trend in recent GHG forcing against the balefully apparent human-caused 0.03 C/decade trend, all between the years 1960-2010, and the slope gives us the climate sensitivity in C/(W-m^-2).  That plot is in Figure 5.

Figure 5. Blue line: the 1960-2010 excess warming, 0.03 C/decade, plotted against the net GHG forcing trend due to increasing CO2, CH4, and N2O. Red line: the OLS linear fit to the forcing-temperature curve (r^2=0.991). Inset: the same lines extended through to the year 2100.

There’s a surprise: the trend line shows a curved dependence. More on that later. The red line in Figure 5 is a linear fit to the blue line. It yielded a slope of 0.090 C/W-m^-2.

So there it is: every Watt per meter squared of additional GHG forcing, during the last 50 years, has increased the global average surface air temperature by 0.09 C.

Spread the word: the Earth climate sensitivity is 0.090 C/W-m^-2.

The IPCC says that the increased forcing due to doubled CO2, the bug-bear of climate alarm, is about 3.8 W/m^2. The consequent increase in global average air temperature is mid-ranged at 3 Celsius. So, the IPCC officially says that Earth’s climate sensitivity is 0.79 C/W-m^-2. That’s 8.8x larger than what Earth says it is.

Our empirical sensitivity says doubled CO2 alone will cause an average air temperature rise of 0.34 C above any natural increase. This value is 4.4x to 13x smaller than the range projected by the IPCC.

The total increased forcing due to doubled CO2, plus projected increases in atmospheric methane and nitrous oxide, is 5 W/m^2. The linear model says this will lead to a projected average air temperature rise of 0.45 C. This is about the rise in temperature we’ve experienced since 1980. Is that scary, or what?
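
The arithmetic behind these comparisons can be checked directly from the numbers quoted above:
s_emp   = 0.090;              % empirical sensitivity from Figure 5, C per W/m^2
f_2xCO2 = 3.8;                % IPCC forcing for doubled CO2, W/m^2
s_ipcc  = 3.0/f_2xCO2;        % IPCC mid-range sensitivity, ~0.79 C per W/m^2
ratio   = s_ipcc/s_emp;       % ~8.8 times the empirical value
dT_2x   = s_emp*f_2xCO2;      % ~0.34 C for doubled CO2 alone
dT_all  = s_emp*5.0;          % ~0.45 C for doubled CO2 plus projected CH4 and N2O (5 W/m^2)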

But back to the negative curvature of the sensitivity plot. The change in air temperature is supposed to be linear with forcing. But here we see that for 50 years average air temperature has been negatively curved with forcing. Something is happening. In proper AGW climatology fashion, I could suppose that the data are wrong because models are always right.

But in my own scientific practice (and the practice of everyone else I know), data are the measure of theory and not vice versa. Kevin, Michael, and Gavin may criticize me for that because climatology is different and unique and Ravetzian, but I’ll go with the primary standard of science anyway.

So, what does negative curvature mean? If it’s real, that is. It means that the sensitivity of climate to GHG forcing has been decreasing all the while the GHG forcing itself has been increasing.

If I didn’t know better, I’d say the data are telling us that something in the climate system is adjusting to the GHG forcing. It’s imposing a progressively negative feedback.

It couldn’t be the negative feedback of Roy Spencer’s clouds, could it?

The climate, in other words, is showing stability in the face of a perturbation. As the perturbation is increasing, the negative compensation by the climate is increasing as well.

Let’s suppose the last 50 years are an indication of how the climate system will respond to the next 100 years of a continued increase in GHG forcing.

The inset of Figure 5 shows how the climate might respond to a steadily increased GHG forcing right up to the year 2100. That’s up through a quadrupling of atmospheric CO2.

The red line indicates the projected increase in temperature if the 0.03 C/decade linear fit model was true. Alternatively, the blue line shows how global average air temperature might respond, if the empirical negative feedback response is true.

If the climate continues to respond as it has already done, by 2100 the increase in temperature will be fully 50% less than it would be if the linear response model was true. And the linear response model produces a much smaller temperature increase than the IPCC climate model, umm, model.

Semi-empirical linear model: 0.84 C warmer by 2100.

Fully empirical negative feedback model: 0.42 C warmer by 2100.

And that’s with 10 W/m^2 of additional GHG forcing and an atmospheric CO2 level of 1274 ppmv. By way of comparison, the IPCC A2 model assumed a year 2100 atmosphere with 1250 ppmv of CO2 and a global average air temperature increase of 3.6 C.

So let’s add that: Official IPCC A2 model: 3.6 C warmer by 2100.

The semi-empirical linear model alone, empirically grounded in 50 years of actual data, says the temperature will have increased only 0.23 of the IPCC’s A2 model prediction of 3.6 C.

And if we go with the empirical negative feedback inference provided by Earth, the year 2100 temperature increase will be 0.12 of the IPCC projection.

So, there’s a nice lesson for the IPCC and the AGW modelers about GCM projections: they are contradicted by the data of Earth itself. Interestingly enough, Earth contradicted the same crew, big time, at the hands of Demetris Koutsoyiannis, too.

So, is all of this physically real? Let’s put it this way: it’s all empirically grounded in real temperature numbers. That, at least, makes this analysis far more physically real than any paleo-temperature reconstruction that attaches a temperature label to tree ring metrics or to principal components.

Clearly, though, since unknown amounts of systematic error are attached to global temperatures, we don’t know if any of this is physically real.

But we can say this to anyone who assigns physical reality to the global average surface air temperature record, or who insists that the anomaly record is climatologically meaningful: The surface air temperatures themselves say that Earth’s climate has a very low sensitivity to GHG forcing.

The major assumption used for this analysis, that the climate of the early part of the 20th century was free of human influence, is common throughout the AGW literature. The second assumption, that the natural underlying warming trend continued through the second half of the last 130 years, is also reasonable given the typical views expressed about a constant natural variability. The rest of the analysis automatically follows.

In the context of the IPCC’s very own ballpark, Earth itself is telling us there’s nothing to worry about in doubled, or even quadrupled, atmospheric CO2.

337 Comments
Bart
June 8, 2011 9:43 pm

“‘Energy’? Perhaps you mean ‘power’?”
It is average power, which is energy divided by the record interval. Conventionally, we usually refer to the result of integrating the PSD as “energy” to avoid ambiguity. How widespread that convention is, I really am not sure, so perhaps I should have explained it.
“The 22-year peak is not stable. It occurs only in the first half of the data, not in the last half.”
Nope, it’s still there. You just can’t see it because your analysis method is so lousy.
“Your 22-yr period … has an amplitude of 0.01C…”
This is a stochastic signal. Discussing “amplitude” is not really rigorous. In any case, as I explained, it is being significantly attenuated by the running average, so the actual signal is many times larger than what is observed.
“All the peaks and valleys you see below 30 years are not real.”
I’ve tried to explain it to you. Why are you insisting on something in an area in which you are not particularly proficient with someone who is? I feel like I’m arguing with Myrrh again.

Bart
June 8, 2011 11:03 pm

“Discussing “amplitude” is not really rigorous.”
Let me try to explain this a little. What we are dealing with is a distributed parameter system. Distributed parameter systems are generally characterized by partial differential equations (e.g., equations of structural dynamics, Navier Stokes equations, etc…). Via functional analysis, we can determine certain eigenmodes, i.e., certain configurations (mode shapes) of the system which oscillate at particular sinusoidal frequencies in response to exogenous inputs.
For a given system, taken in isolation, there is generally a lowest frequency mode, which we call the fundamental mode, and various higher frequency modes which require steadily escalating energy input to excite (note: I may slip from time to time and refer interchangeably to the “mode” meaning the mode shape or the modal frequency – it is part of the jargon. It should be clear what I mean by the context). In general, the “bigger” the system, the lower the fundamental mode. Interaction of the various modes can create complex dynamics which alternatingly interfere constructively and destructively with one another.
Dissipation of energy leads to eventual damping of these responses. However, if a mode is continually fed by a wideband excitation source whose bandwidth encompasses the modal frequency, it can keep getting regenerated ad infinitum. Over time, this signal grows and fades. Depending on the rate of energy dissipation and the time span under observation, it can look like a steady state sinusoid, or it can look like a (generally nonuniformly) amplitude and phase modulated sinusoidal signal.
The climate is a distributed parameter system (or, perhaps more accurately, a series of overlapping piecewise continuous ones). It has certain modes which are excited by various energy inputs, from the Sun (electromagnetic radiation), from the Moon (tidal forces), from intergalactic cosmic rays, from internal heat dissipation, etc… We know some of these modes well: The PDO, the AMO, the ENSO… These are responses of the distributed parameter system of the Earth to wideband forcing(s). If you took away the forcings, they would gradually decay and die out.
For such a huge system as the climate system of the Earth, the fundamental modes are certain to be very, very long relative to our perceptions. But, there is ample energy to excite a plethora of higher frequency modes as well. And, of course, there are additionally steady state, near perfectly sinusoidally varying diurnal, monthly, seasonal, and longer term inputs, as well.
The constructive and destructive interference of all these modes, along with the steady state periodic excitations, form what we call “climate.”
PSD analysis is an excellent way to look for the modal frequencies and, once found, they may be observed to be quasi-steady state, or they may surge and fade. But, they will be recurring, because they are part of the physical system which defines, or constrains, or begets… however you want to say it… the climate system.

J. Simpson
June 9, 2011 12:13 am

The 60 year cosine is a fair start to this sort of crude approximation, but why do you choose to fit a straight line? Because it’s straight? Not a very good start.
CO2 must have some effect according to basic radiation physics even ignoring the IPCC’s attempts to multiply it up. Such a radiative forcing will affect the rate of change of temperature, not the temperature. If we approximate the CO2 level as increasing exponentially and then account for the saturation of the blocking effect (the absorption is reduced logarithmically as CO2 goes up), we get a linear increase in the forcing. This acts to produce an increasing rate of change, i.e. accelerating warming, not a linear one. In fact this simple approximation gives a quadratic rise. It’s small, but it is increasing faster as it goes along. In fact this is why you see your increasing slopes in figure 4.
You need to redo your fits with cosine plus quadratic and see what it gives.
But be warned: your residuals here (which you have not put a scale on in figure 1) are about +/-0.2C, and the data have a total range of only about +/-0.4C over the whole dataset. Any fits you do will only be weakly correlated to the data, and the margin for error in any magnitudes (like the magnitude of the cosine or quad terms) is quite large.
You need to try to produce an error estimate for any result you find. Any result without that is not scientific.
0.009 is tiny, but you need to say something like 0.009 +/- 0.001 to give it meaning.
If it turns out to be 0.009 +/- 0.85 you get a better idea of how meaningful your answers are.
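
A cosine-plus-quadratic fit of the kind suggested here is a small change to the sketch given earlier in the post; again the variable names are placeholders, and the time axis is centered so the quadratic term stays well scaled:
model2 = @(p,t) p(1)*cos(2*pi*(t - p(2))./p(3)) + p(4)*t + p(5)*t.^2 + p(6);
ssq2   = @(p) sum((anom - model2(p, tc)).^2);
p20    = [0.1, -5, 60, 0.006, 0, 0];      % amplitude, phase, period, linear, quadratic, offset
pfit2  = fminsearch(ssq2, p20);
% A rough error bar on the quadratic coefficient pfit2(5) could come from refitting
% many series with the residuals resampled and reattached to the model curve.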

Leif Svalgaard
June 9, 2011 5:45 am

Bart says:
June 8, 2011 at 9:43 pm
“The 22-year peak is not stable. It occurs only in the first half of the data, not in the last half.”
Nope, it’s still there. You just can’t see it because your analysis method is so lousy.

Show me your analysis. ‘Nope’ doesn’t cut it.

Bart
June 9, 2011 10:05 am

It is there. The apparent energy (given the quality of the data) appears to vary, but this is in no way incompatible with the behavior which might be expected of random modal excitation. Moreover, a 21 year cycle (which is within a reasonable error bound) appears clearly in the 20th century direct measurements as well (see Spector June 5, 2011 at 12:27 pm).
Leif, your methods are poor. You use the FFT improperly. You do not understand aliasing. You do not understand transfer functions for FIR filters (the simplest of which is the sliding uniformly weighted average). You do not know what a PSD is. You do not understand stochastic processes. You are belligerent and accusatory with a guy who has been at this for over a quarter of a century, analyzing data and creating models which are employed in real world systems which you have almost certainly unwittingly used.
I see no value in continuing this conversation.

Leif Svalgaard
June 9, 2011 10:22 am

Bart says:
June 9, 2011 at 10:05 am
It is there. […] I see no value in continuing this conversation.
Show it.
Here is how Loehle describes his data:
“The present note treats the 18 series on a more uniform basis than in the original study. Data in each series have different degrees of temporal coverage. For example, the pollen-based reconstruction of Viau et al. (2006) has data at 100-year intervals, which is now assumed to represent 100 year intervals (rather than points, as in Loehle, 2007). Other sites had data at irregular intervals. This data is now interpolated to put all data on the same annual basis. In Loehle (2007), interpolation was not done, but some of the data had already been interpolated before they were obtained, making the data coverage inconsistent. In order to use data with non-annual coverage, some type of interpolation is necessary, especially when the different series do not line up in dating. This interpolation introduces some unknown error into the reconstruction but is incapable of falsely generating the major patterns seen in the results below. An updated version of the Holmgren data was obtained. Data on duplicate dates were averaged in a few of the series. Data in each series (except Viau, because it already represents a known time interval) were smoothed with a 29-year running centered mean (previously called a 30 year running mean). This smoothing serves to emphasize long term climate patterns instead of short term variability. All data were then converted to anomalies by subtracting the mean of each series from that series. This was done instead of using a standardization date such as 1970 because series date intervals did not all line up or all extend to the same ending date. With only a single date over many decades and dating error, a short interval for determining a zero date for anomaly calculations is not valid. The mean of the eighteen anomaly series was then computed for the period 16 AD to 1980 AD. When missing values were encountered, means were computed for the sites having data. Note that the values do not represent annual values but rather are based on running means.”
My poor understanding was enough to actually conclude that he used a 29-year running mean. Your mistake is to assume that the climate system respond to very many actual cycles [e.g. of 8.5 and 3.6 yrs] and that the proxy data is good enough to find anything less than 30 years.

Bart
June 9, 2011 11:55 am

I will give an example of what I am talking about. The following code is written using MATLAB. Hopefully, it should be transparent for users of other languages.
First, set up the constants governing a particular mode with a 23 year quasi-period (resonant frequency near 1/23 year^-1):
zeta = 0.001;
a=2*exp(-zeta*2*pi/23)*cos(2*pi/23);
b=exp(-2*zeta*2*pi/23);
Define a data series representing the vibration of a slightly damped oscillating mode driven by Gaussian “white” noise:
x=zeros(1,1000);
for k = 3:1000
x(k) = a*x(k-1) - b*x(k-2) + randn;
end
We want to eliminate the initial transient response, so run it a few times, replacing the starting condition with the previous end condition:
x(1)=x(999);
x(2)=x(1000);
for k = 3:1000
x(k) = a*x(k-1) - b*x(k-2) + randn;
end
Now, plot “x”. What you should see is something that looks like a fairly steady oscillation with some small amplitude modulation. Now, reevaluate zeta as zeta = 0.01 and repeat. Now, you will see a lot more variation in the amplitude. Run enough cases, and you will see periods in which the oscillation virtually vanishes, only to be stirred up again by later random inputs. Try different values of zeta, and observe what it looks like. A raw FFT will start to show apparent splitting of the frequency line as zeta becomes larger. A properly executed PSD windowed over the appropriate correlation time will resolve the ambiguity.
The time constant is tau = 23/(2*pi*zeta). zeta = 1 is critical damping, at which point you should no longer see much in the way of coherent oscillation.
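
A raw periodogram, in base MATLAB, is one crude way to check for the quasi-period in a series generated this way:
N    = numel(x);
P    = abs(fft(x - mean(x))).^2 / N;      % crude power estimate (raw periodogram)
f    = (0:N-1)/N;                         % frequency in cycles per sample ("per year")
half = 2:floor(N/2);                      % positive frequencies, skipping the zero-frequency bin
[~, imax]  = max(P(half));
period_est = 1/f(half(imax));             % should come out near the 23-"year" quasi-period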

Bart
June 9, 2011 12:19 pm

“Your mistake…”
I have made no mistakes. I suggest you study up on the subject and stop digging your hole deeper.
“…is to assume that the climate system respond to very many actual cycles [e.g. of 8.5 and 3.6 yrs]…”
Those cycles are in the MLO CO2 data. It is a bad idea to respond when flustered. You tend to miss details.
“…and that the proxy data is good enough to find anything less than 30 years.”
Quality of the data is one issue. Ability to “see” particular frequencies is completely independent. Given the transmission characteristics of a 30 (or, 29, it makes little difference) year average, it is entirely possible to detect a 23 year cycle, as I have explained to the point of exhaustion.
Are we done here? I think we should be.

Bart
June 9, 2011 12:31 pm

Just one final note: I’m not engaging in alchemy, or going off on some flight of fancy of my own here. This is all industry standard operating procedure when designing systems involving compliant structures (buildings, trusses, air frames, what have you) or fluid containment vessels (water distribution (plumbing), pumping stations, fuel tanks…). This is what Finite Element Analysis (surely, you have all heard that catchphrase) is all about: determining the modes of oscillation of distributed parameter (continuum) systems.

Bart
June 9, 2011 12:32 pm

Every continuum system ever anywhere in the universe can be described in this fashion. The Earth and its climate are no exception.

Leif Svalgaard
June 9, 2011 12:35 pm

Bart says:
June 9, 2011 at 11:55 am
I will give an example of what I am talking about.
Now smooth the data and show what you get.
Bart says:
June 9, 2011 at 12:19 pm
Those cycles are in the MLO CO2 data. It is a bad idea to respond when flustered. You tend to miss details.
Details brought up by you.
it is entirely possible to detect a 23 year cycle, as I have explained to the point of exhaustion.
But you have not shown the result. You claim to detect 23-yr in both halves of the data. Prove it.
Are we done here? I think we should be.
If you continue to evade the issue, then perhaps we should be.

Leif Svalgaard
June 9, 2011 1:26 pm

Bart says:
June 9, 2011 at 12:32 pm
Every continuum system ever anywhere in the universe can be described in this fashion. The Earth and its climate are no exception
No doubt about that, but that you can describe them in this fashion, does not mean that those cycles actually exist as physical entities [which is the only thing of interest – otherwise it would just be numerology]. Remember the old joke about fitting an elephant.

Bart
June 9, 2011 5:20 pm

“…does not mean that those cycles actually exist as physical entities…”
It would only be shocking if they did not. Along the lines of discovering that gravity is a repulsive force.
“Prove it.”
Prove it to yourself. Learn about the subject. For the record, it is undeniably visible. But, if you understood half of what I have been telling you, you would realize it makes no difference whatsoever to my thesis. It chagrins me to say it, but you’ve really gone off the deep end here, Leif.

Bart
June 9, 2011 5:49 pm

“Now smooth the data and show what you get.”
How about you try this exercise. Generate the data as instructed. Then, pass it through a 29 point sliding average, and run your FFT on it.
Or, just generate a sinusoid with a 23 point period and pass that through a 29 point sliding average and plot the result. Do you still see the sinusoid? Of course you do, with an amplitude about 1/5 of the initial amplitude. As I’ve told you over, and over, and over, and over, and over, and….
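
The roughly one-fifth figure follows from the frequency response of a uniform moving average: the gain of an n-point average at a period of P points is |sin(pi*n/P)/(n*sin(pi/P))|, which for n = 29 and P = 23 is about 0.19. A quick check:
P = 23; n = 29;
gain = abs(sin(pi*n/P)/(n*sin(pi/P)));    % about 0.19, i.e. roughly 1/5
t  = 1:1000;
s  = cos(2*pi*t/P);                       % sinusoid with a 23-point period
sm = conv(s, ones(1,n)/n, 'same');        % 29-point sliding average; its amplitude is about gain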

Leif Svalgaard
June 9, 2011 5:51 pm

Bart says:
June 9, 2011 at 5:20 pm
Prove it to yourself. Learn about the subject. For the record, it is undeniably visible.
The hole you are in is that you claim that there is a 22-year cycle in both halves of the data. I have proven to my satisfaction there is not, so show your PSDs. If you do not know how to plot the data or link to your plot, email the (x,y) point values to me and I’ll show them for you.
it makes no difference whatsoever to my thesis.
Wrong attitude.

Leif Svalgaard
June 9, 2011 6:55 pm

Bart says:
June 9, 2011 at 5:49 pm
Or, just generate a sinusoid with a 23 point period and pass that through a 29 point sliding average and plot the result. Do you still see the sinusoid? Of course you do,
I think I have isolated your problem: The Loehle data was not constructed by running a 29-point average over yearly data. The time resolution was much worse: of the order of 30 years or in some cases 100 years with data taken at irregular large intervals, interpolating between the gaps. Imagine you have 30 yearly values that are all the same [because you only have one actual data value], followed by another 30 years of equal [but likely different from the first 30 years] values, and so on. Instead of assuming a constant value, you could interpolate between the scattered points. The values have a large noise component [likely larger than the difference between adjacent 30-yr periods]. This is the data you have to deal with. You claim categorically that you have found a clear 22-yr period in the first half of the data [about a 1000 years] and also in the last half of the data [naturally with the same phase]. This is what I dispute and ask you to demonstrate.

AstroH
June 9, 2011 7:31 pm

Great analysis. Clearly there needs to be more research into feedback responses since computer models obviously couldn’t predict them all.
However, I would disagree about the continuous negative feedback having a high probability of being in place throughout the 21st century, and that the linearity in the data is likely to continue in the present fashion. Some things to consider are the possible positive feedbacks that would still work despite ongoing negative feedbacks, and although their co-interactions, if any, will likely be non-linear, the additional feedback processes are always an item to consider within any complex system. Some more important factors that could affect the future climate as it pertains to the analysis of a hypothetical negative-feedback inferred from your sinusoid-plus-trend correlation:
-Lag times between forcings and climate response. This includes both the immediate and long-term effects of various GHGs, solar forcing, oceans, oscillations, ice-melt patterns, etc. For example, the DIRECT immediate solar forcing appears to have a lag time of ~2.2 a (Scafetta and West, 2005).
-The cloud and water vapor feedbacks. This is a rather complex system: increased tropospheric WV from warmer SSTs would augment the greenhouse effect (Held and Soden, 2000), while recent higher convection in the tropical Pacific combined with a cooler stratosphere has removed this greenhouse gas from the upper levels, reducing overall warming (Rosenlof and Reid, 2008). However, this negative feedback effect only ramped up after 2000, meaning it may represent a tipping point toward negative feedbacks, or it may be inherently unstable and could reverse itself at any time.
-The CAUSE of post-1860 base warming. Since regular 60-year cycles appear to raise global temperatures by about 0.6C before hitting the peak and cooling by 0.3C, it is important to determine the underlying factor. Is it recovery from the LIA and coinciding ‘solar re-awakening’, or is something more in play here, such as some long-term ocean feedback, an extra forcing from GHGs, or a yet-undiscovered cause? If so, could this effect be weakening, and thus no longer contribute to most of the post-1970 warming, or have GHGs only begun to augment this effect? It is impractical to assume linearity, without knowing what causes it.
-Undetected positive feedbacks. This of course includes the additional release of GHGs from permafrost melting, pine beetle and fungus population growths, methane clathrate releases, peat bog fires, conflagrations in weakened forests caused by biome shifts, Arctic dipole anomalies resulting in colder winter northern hemisphere continents and thus lowered CO2 absorption in winter, and the like. Many computer models assume the positive feedbacks will outweigh the negative ones, which may be true, but we don't actually know.
-Interactions between GHG-induced forcings and other anthropogenic factors such as soot, brown clouds, Arctic haze, the ozone holes, ground-level ozone and contrails. Many of these effects will change over time, as for example the ozone hole has strengthened the Antarctic polar vortex and thus caused surface cooling over East Antarctica, while ozone recovery will have other effects, as will the Arctic polar ozone anomaly, polar stratospheric clouds, noctilucent clouds and depth changes in the Arctic troposphere. Meanwhile, soot blocks out the sun and so may be delaying the GHG-induced warming until it is removed, but soot accelerates polar ice melting when it lands. Contrails have a similar effect, causing both cooling and greenhouse-type warming under various circumstances, and additionally change the water vapor feedback. Many of these factors may simply be delaying, removing altogether or immediately increasing the effects of GHG forcing and associated warming.
The lowest quoted figure I’ve seen to date for GHG-induced 20th century warming has been a contribution of 0.1C, which this article’s analysis does not contradict, but if the baseline warming cannot explain the warming post-1970 then the effect may be much greater, on the scale of 0.5C of GHG-induced warming. This effect, present and future, will depend on anthropogenic emissions and future feedback processes. It is very likely that climate sensitivity is variable over time, depending on possible ameliorating or augmenting factors such as background CO2 levels, direction of GHG change, rate of temperature and CO2 change, presence of potential feedback factors, ocean CO2 and oxygen, solar activity, ice extent, heat contribution of the ocean, forest cover, water vapor concentrations and others. In one example, the temperature and CO2 trends became decoupled during the Cretaceous, and this may occur sometime in the future. Sea level rise and temperature correlations may also be affected.
One more thing to consider is the CO2-absorption ability of the oceans, and how it is impacted by conditions such as temperature, salinity, pH, ocean current flow, oxygen content, atmospheric GHGs, bioprocesses, etc. Most of the negative feedbacks can be explained by two things: the biosphere, and the hydrosphere. Throughout geological history, CO2 has only had a long-term effect on climate, whereas temperature change often is likely to create a positive feedback by raising CO2 levels whenever global temperature warms. Under such circumstances as increasing CO2 when the Earth is completely ice-covered, the biosphere and rock-sequestration processes no longer work, allowing the warming effect to take place more drastically than otherwise. If the oceans become acidic and stagnant, any negative feedback processes will likely, excuse the pun, be negated. Climate change also seems to have an effect on volcanoes.
Inevitably, the melting of large ice volumes increases the sequestration of CO2, by reducing both salinity and temperature and thus increasing the ocean's uptake ability for absorbing the GHGs, without necessarily having much of a positive effect on plankton populations. Both the populations of plankton and coral are decreasing drastically, and the recovery will likely be too slow to re-activate the negative feedback processes by absorbing the CO2 that they normally do. The result is likely to be an abundance of positive feedbacks, then a series of negative feedbacks taking their place, assuming that modern GHG emissions continue as usual before plateauing due to resource depletion. Some factors are likely to be linear, some exponential, and others oscillatory. Climate sensitivity may depend on the change itself. As the removal of all GHGs would require a reasonably large sensitivity to drop Earth's temperatures ~50C lower than it is today, so should it be reasonable for the rapidity of current GHG increases and associated factors to influence this sensitivity. Of course, my guess is probably no better than computer models, which fail at holistic processing when they only receive 10% of the input required for the holistic process to work.
The one major positive feature of the article is that it refers to current analysis of warming rather than some oft-quoted graph of Phanerozoic climate proxies being unaffected by assumed long-term CO2 levels. The likelihood of glaciations likely depends on factors other than CO2 and solar output.
This is most likely not my longest comment on a climate blog, so there’s no need to credit me for taking this onto a skeptic (refrain from non-sequitur River in Africa tangents, please) website.
REFERENCES
http://www.annualreviews.org/doi/abs/10.1146%2Fannurev.energy.25.1.441
http://www.agu.org/pubs/crossref/2008/2007JD009109.shtml
http://www.fel.duke.edu/~scafetta/pdf/2005GL023849.pdf

Pat Frank
June 9, 2011 7:53 pm

Leif, I made no assumption about functional forms.
The decision to try a cosine fit stemmed from the observation of a sinusoid in the GISS (land+SST) minus GISS (land-only) difference anomalies, over 1880-2010. That difference sinusoid had a period of about 60 years and showed two full cycles. It’s clearly a sign that there is an oscillation within the SST anomalies.
There’s no numerology in the difference observation, and it justifies testing a cosine function in a fit to the entire (land + SST) anomaly data set. Whatever you surmise about Craig Loehle’s work, the label of numerology does not apply to the fits in Figure 1.

Pat Frank
June 9, 2011 8:19 pm

Ryan, “Well of course you are correct that it may not be ‘proper’ to fit a sine to a dataset just because its tempting peaks and troughs more or less beg you to do so…”
By now, given my responses, you ought to know that I didn’t use a cosine fit just because there were attractive peaks and troughs in the centennial air temperature anomalies. I had “more data,” namely the net oscillation that appeared in the (land+SST) minus (land-only) difference anomalies.
Regarding your comment about the CET, I have test-fit the Central England Temperature anomaly data set. It’s very noisy, but one can get a pretty good fit using a ~60 year period, plus a longer period of 289 years, and a positive linear trend. Starting in 1650, the ~60 year period again propagates nicely into the peaks and troughs at ~1880, ~1940, and ~2005 in the CET data, just as it did in the more limited 130 year instrumental anomalies.
The line, by the way, implies a net non-cyclic warming of 1.1 C over the 355 intervening years.

Leif Svalgaard
June 9, 2011 8:33 pm

Pat Frank says:
June 9, 2011 at 7:53 pm
Whatever you surmise about Craig Loehle’s work, the label of numerology does not apply to the fits in Figure 1.
The numerology applies especially to your fits. I can fit a very nice sine wave to the Dow Jones index since 1998. It would be numerology in the same sense as yours is.

Pat Frank
June 9, 2011 8:37 pm

John H, Tamino’s critique centrally depends on invalid models. I’ll have more to say about that on his blog.

Pat Frank
June 9, 2011 8:54 pm

J. Simpson, “CO2 must have some effect according to basic radiation physics even ignoring the IPCC’s attempts to multiply it up.”
Radiation physics tells us that added CO2 will put added energy into the climate system. It tells us nothing of what the climate will do with that energy, or how the climate will respond. To suppose the IPCC’s point of view about a change of temperature specifically in the atmosphere is to impose onto an empirical analysis the very theory being tested. This is to engage in a circular analysis.
Removing the empirically-justified oscillation from the total anomaly data left a positive trend that really is linear within the noise, and extending over the entire 130 year period. There’s no valid point in making an empirical analysis more complicated than the data themselves exhibit. Even dividing the data into early and late trends is a little more than a totally conservative approach to the data would permit. The most empirically conservative view of Figure 1 is that there has been no evident increase in the warming rate of the atmosphere for 130 years.

Pat Frank
June 9, 2011 9:03 pm

Leif, “I can fit a very nice sine wave to the Dow Jones index since 1998. It would be numerology in the same sense as yours is.”
Not correct, Leif. An oscillation is apparent in the GISS (land+SST) minus (land-only) difference anomalies. Likewise in the CRU (land+SST) minus GISS (land-only) difference anomalies.
Reference to a physical observable puts my analysis distinctly outside your numerical philosophy.

Leif Svalgaard
June 9, 2011 9:07 pm

Pat Frank says:
June 9, 2011 at 7:53 pm
Whatever you surmise about Craig Loehle’s work, the label of numerology does not apply to the fits in Figure 1.
The numerology applies especially to your fits. I can fit a very nice sine wave to the Dow Jones index since 1997. It would be numerology in the same sense as yours is. Here is the DJI numerology 1997-2008: http://www.leif.org/research/DJI-1997-2008.png
A straight trend plus a sine curve. The fit is good [I only show the sine part], but has no meaning at all, pure numerology. And so is yours.
Here I have added the trend back in: http://www.leif.org/research/DJI-1997-2008-with-trend.png

Leif Svalgaard
June 9, 2011 9:16 pm

Pat Frank says:
June 9, 2011 at 9:03 pm
Reference to a physical observable puts my analysis distinctly outside your numerical philosophy.
I can build a wall in my backyard where the height of the wall [each brick horizontally] is proportional to the DJI, then I have a physical observable. Without a reason or plausible possible explanation, it would always be numerology. Balmer’s famous formula was numerology for a long time: Balmer noticed in 1885 that a single number had a relation to every line in the hydrogen spectrum that was in the visible light region. That number was 364.56 nm. When any integer higher than 2 was squared and then divided by itself squared minus 4, then that number multiplied by 364.56 gave a wavelength of another line in the hydrogen spectrum. Niels Bohr in 1913 ‘explained’ why the formula worked, but the real explanation came only in the 1920s with the advent of quantum mechanics.
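
As an aside, the rule described here does reproduce the visible hydrogen lines; in the same MATLAB shorthand used above:
m = 3:6;
lambda = 364.56 * m.^2 ./ (m.^2 - 4);   % 656.2, 486.1, 434.0, 410.2 nm: H-alpha through H-delta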
