Readers may recall Pat Frank’s excellent essay on uncertainty in the temperature record. He emailed me about this new essay he posted on the Air Vent, with a suggestion that I cover it at WUWT; I regret it got lost in my firehose of daily email. Here it is now. – Anthony
Future Perfect
By Pat Frank
In my recent “New Science of Climate Change” post here on Jeff’s tAV, the cosine fits to differences among the various GISS surface air temperature anomaly data sets were intriguing. So, I decided to see what, if anything, cosines might tell us about the surface air temperature anomaly trends themselves. It turned out they have a lot to reveal.
As a qualifier, regular tAV readers know that I’ve published on the amazing neglect of the systematic instrumental error present in the surface air temperature record. It seems certain that surface air temperatures are so contaminated with systematic error – at least (+/-)0.5 C – that the global air temperature anomaly trends have no climatological meaning. I’ve done further work on this issue and, although the analysis is incomplete, so far it looks like the systematic instrumental error may be worse than we thought. But that’s for another time.
Systematic error is funny business. In surface air temperatures it’s not necessarily a constant offset, but a variable error. That means it not only biases the mean of a data set, but is also likely to have an asymmetric distribution in the data. Systematic error of that sort in a temperature series may enhance a time-wise trend, diminish it, or switch back and forth between these two effects in some unpredictable way. Since the systematic error arises from the effects of weather on the temperature sensors, it will vary continuously with the weather. The mean error bias will be different for every data set, and so will the distribution envelope of the systematic error.
For right now, though, I’d like to put all that aside and proceed with an analysis that accepts the air temperature context as found within the IPCC ballpark. That is, for the purposes of this analysis I’m assuming that the global average surface air temperature anomaly trends are real and meaningful.
I have the GISS and the CRU annual surface air temperature anomaly data sets out to 2010. In order to make the analyses comparable, I used the GISS start time of 1880. Figure 1 shows what happened when I fit these data with a combined cosine function plus a linear trend. Both data sets were well-fit.
The unfit residuals are shown below the main plots. A linear fit to the residuals tracked exactly along the zero line, to 1 part in ~10^5. This shows that both sets of anomaly data are very well represented by a cosine-like oscillation plus a rising linear trend. The linear parts of the fitted trends were: GISS, 0.057 C/decade and CRU, 0.058 C/decade.
Figure 1. Upper: Trends for the annual surface air temperature anomalies, showing the OLS fits with a combined cosine function plus a linear trend. Lower: The (data minus fit) residual. The colored lines along the zero axis are linear fits to the respective residual. These show the unfit residuals have no net trend. Part a, GISS data; part b, CRU data.

Removing the oscillations from the global anomaly trends should leave only the linear parts of the trends. What does that look like? Figure 2 shows this: the linear trends remaining in the GISS and CRU anomaly data sets after the cosine is subtracted away. The pure subtracted cosines are displayed below each plot.
Each of the plots showing the linearized trends also includes two straight lines. One of them is the line from the cosine plus linear fits of Figure 1. The other straight line is a linear least squares fit to the linearized trends. The linear fits had slopes of: GISS, 0.058 C/decade and CRU, 0.058 C/decade, which may as well be identical to the line slopes from the fits in Figure 1.
Figure 1 and Figure 2 show that to a high degree of certainty, and apart from year-to-year temperature variability, the entire trend in global air temperatures since 1880 can be explained by a linear trend plus an oscillation.
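For readers who want to try this at home, a minimal sketch of such a cosine-plus-line fit is below; the file name, column layout, and starting guesses are placeholders, not the actual procedure behind Figure 1.

```python
# Minimal sketch of a cosine-plus-linear OLS fit to an annual anomaly series.
# The file name, column layout, and starting guesses are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def cos_plus_line(year, amp, period, phase, slope, offset):
    """A cosine oscillation plus a linear trend in the year."""
    return (amp * np.cos(2.0 * np.pi * (year - phase) / period)
            + slope * (year - 1880.0) + offset)

# Hypothetical two-column file: year, anomaly (C)
year, anom = np.loadtxt("giss_annual_anomaly.txt", unpack=True)

# Starting guesses matter: roughly 0.1 C amplitude, 60 yr period, 0.006 C/yr trend
p0 = [0.1, 60.0, 1880.0, 0.006, -0.3]
(amp, period, phase, slope, offset), _ = curve_fit(cos_plus_line, year, anom, p0=p0)

print(f"period {period:.0f} yr, amplitude {amp:+.2f} C, "
      f"linear trend {10.0 * slope:.3f} C/decade")

# The residuals should scatter about zero with no net trend if the model holds;
# subtracting only the fitted cosine gives the "linearized" series of Figure 2.
residual = anom - cos_plus_line(year, amp, period, phase, slope, offset)
linearized = anom - amp * np.cos(2.0 * np.pi * (year - phase) / period)
```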
Figure 3 shows that the GISS cosine and the CRU cosine are very similar – probably identical given the quality of the data. They show a period of about 60 years, and an intensity of about (+/-)0.1 C. These oscillations are clearly responsible for the visually arresting slope changes in the anomaly trends after 1915 and after 1975.
Figure 2. Upper: The linear part of the annual surface average air temperature anomaly trends, obtained by subtracting the fitted cosines from the entire trends. The two straight lines in each plot are: OLS fits to the linear trends and the linear parts of the fits shown in Figure 1. The two lines overlay. Lower: The subtracted cosine functions.

The surface air temperature data sets consist of land surface temperatures plus the SSTs. It seems reasonable that the oscillation represented by the cosine stems from a net heating-cooling cycle of the world ocean.
The major oceanic cycles include the PDO, the AMO, and the Indian Ocean oscillation. Joe D’Aleo has a nice summary of these here (pdf download).
The combined PDO+AMO is a rough oscillation and has a period of about 55 years, with a 20th century maximum near 1937 and a minimum near 1972 (D’Aleo Figure 11). The combined ocean cycle appears to be close to another maximum near 2002 (although the PDO has turned south). The period and phase of the PDO+AMO correspond very well with the fitted GISS and CRU cosines, and so it appears we’ve found a net world ocean thermal signature in the air temperature anomaly data sets.
In the “New Science” post we saw a weak oscillation appear in the GISS surface anomaly difference data after 1999, when the SSTs were added in. Up to and including 1999, the GISS surface anomaly data included only the land surface temperatures.
So, I checked the GISS 1999 land surface anomaly data set to see whether it, too, could be represented by a cosine-like oscillation plus a linear trend. And so it could. The oscillation had a period of 63 years and an intensity of (+/-)0.1 C. The linear trend was 0.047 C/decade; pretty much the same oscillation, but a warming trend slower by 0.01 C/decade. So, it appears that the net world ocean thermal oscillation is teleconnected into the global land surface air temperatures.
But that’s not the analysis that interested me. Figure 2 appears to show that the entire 130 years between 1880 and 2010 has had a steady warming trend of about 0.058 C/decade. This seems to explain the almost rock-steady 20th century rise in sea level, doesn’t it?
The argument has always been that the climate of the first 40-50 years of the 20th century was unaffected by human-produced GHGs. After 1960 or so, certainly after 1975, the GHG effect kicked in, and the thermal trend of the global air temperatures began to show a human influence. So the story goes.
Isn’t that claim refuted if the late 20th century warmed at the same rate as the early 20th century? That seems to be the message of Figure 2.
But the analysis can be carried further. The early and late air temperature anomaly trends can be assessed separately, and then compared. That’s what was done for Figure 4, again using the GISS and CRU data sets. In each data set, I fit the anomalies separately over 1880-1940, and over 1960-2010. In the “New Science of Climate Change” post, I showed that these linear fits can be badly biased by the choice of starting points. The anomaly profile at 1960 is similar to the profile at 1880, and so these two starting points seem to impart no obvious bias. Visually, the slope of the anomaly temperatures after 1960 seems pretty steady, especially in the GISS data set.
Figure 4 shows the results of these separate fits, yielding the linear warming trend for the early and late parts of the last 130 years.
Figure 4: The Figure 2 linearized trends from the GISS and CRU surface air temperature anomalies showing separate OLS linear fits to the 1880-1940 and 1960-2010 sections.

The fit results of the early and later temperature anomaly trends are in Table 1.
Table 1: Decadal Warming Rates for the Early and Late Periods.
| Data Set | C/d (1880-1940) | C/d (1960-2010) | C/d (late minus early) |
|----------|-----------------|-----------------|------------------------|
| GISS     | 0.056           | 0.087           | 0.031                  |
| CRU      | 0.044           | 0.073           | 0.029                  |
“C/d” is the slope of the fitted lines in Celsius per decade.
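The Table 1 slopes are ordinary least squares fits over the two windows. A minimal sketch of that step, assuming the `year` and `linearized` arrays from the earlier fitting sketch:

```python
# Separate OLS fits over the early and late windows of the linearized series.
# Assumes the `year` and `linearized` arrays from the earlier sketch.
import numpy as np

def decadal_rate(year, series, start, stop):
    """OLS slope over [start, stop], returned in C/decade."""
    mask = (year >= start) & (year <= stop)
    return 10.0 * np.polyfit(year[mask], series[mask], 1)[0]

early = decadal_rate(year, linearized, 1880, 1940)
late = decadal_rate(year, linearized, 1960, 2010)
print(f"early {early:.3f}  late {late:.3f}  difference {late - early:.3f} C/decade")
```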
So there we have it. Both data sets show the later period warmed more quickly than the earlier period. Although the GISS and CRU rates differ by about 12%, the changes in rate (the third data column) are essentially identical.
If we accept the IPCC/AGW paradigm and grant the climatological purity of the early 20th century, then the natural recovery rate from the LIA averages about 0.05 C/decade. To proceed, we have to assume that the natural rate of 0.05 C/decade was fated to remain unchanged for the entire 130 years, through to 2010.
Assuming that, then the increased slope of 0.03 C/decade after 1960 is due to the malign influences from the unnatural and impure human-produced GHGs.
Granting all that, we now have a handle on the most climatologically elusive quantity of all: the climate sensitivity to GHGs.
I still have all the atmospheric forcings for CO2, methane, and nitrous oxide that I calculated for my Skeptic paper (http://www.skeptic.com/reading_room/a-climate-of-belief/). Together, these constitute the great bulk of new GHG forcing since 1880. Total chlorofluorocarbons add another 10% or so, but that’s not a large impact so they were ignored.
All we need do now is plot the balefully apparent human-caused 0.03 C/decade warming against the progressive trend in recent GHG forcing, all between the years 1960-2010, and the slope gives us the climate sensitivity in C/(W-m^-2). That plot is in Figure 5.
Figure 5. Blue line: the 1960-2010 excess warming, 0.03 C/decade, plotted against the net GHG forcing trend due to increasing CO2, CH4, and N2O. Red line: the OLS linear fit to the forcing-temperature curve (r^2=0.991). Inset: the same lines extended through to the year 2100.

There’s a surprise: the trend line shows a curved dependence. More on that later. The red line in Figure 5 is a linear fit to the blue line. It yielded a slope of 0.090 C/W-m^-2.
So there it is: every Watt per meter squared of additional GHG forcing, during the last 50 years, has increased the global average surface air temperature by 0.09 C.
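For concreteness, the mechanics of that regression are sketched below. The forcing series here is a crude CO2-only stand-in (the Myhre et al. simplified expression applied to a linear ramp from roughly 317 ppm in 1960 to 390 ppm in 2010), so the printed slope will not reproduce the 0.090 figure, which came from the full CO2 + CH4 + N2O forcing history.

```python
# Sketch of the Figure 5 regression: excess warming vs GHG forcing, 1960-2010.
# The forcing below is a CO2-only stand-in, NOT the series used for Figure 5,
# so the printed slope will differ from the quoted 0.090.
import numpy as np

years = np.arange(1960, 2011)
excess_warming = 0.003 * (years - 1960)      # the 0.03 C/decade excess trend

co2 = np.linspace(317.0, 390.0, years.size)  # rough Mauna Loa endpoints, ppm
forcing = 5.35 * np.log(co2 / co2[0])        # W/m^2 above the 1960 level

sensitivity = np.polyfit(forcing, excess_warming, 1)[0]
print(f"illustrative sensitivity ~ {sensitivity:.2f} C per W/m^2")
```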
Spread the word: the Earth climate sensitivity is 0.090 C/W-m^-2.
The IPCC says that the increased forcing due to doubled CO2, the bug-bear of climate alarm, is about 3.8 W/m^2. The consequent increase in global average air temperature is mid-ranged at 3 Celsius. So, the IPCC officially says that Earth’s climate sensitivity is 0.79 C/W-m^-2. That’s 8.8x larger than what Earth says it is.
Our empirical sensitivity says doubled CO2 alone will cause an average air temperature rise of 0.34 C above any natural increase. This value is 4.4x to 13x smaller than the range projected by the IPCC.
The total increased forcing due to doubled CO2, plus projected increases in atmospheric methane and nitrous oxide, is 5 W/m^2. The linear model says this will lead to a projected average air temperature rise of 0.45 C. This is about the rise in temperature we’ve experienced since 1980. Is that scary, or what?
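For anyone checking the numbers in the last few paragraphs, the arithmetic is simply the following; the 1.5-4.5 C spread is the IPCC range implied by the 4.4x-13x figures, and everything else is quoted above.

```latex
\begin{align*}
  \lambda_{\mathrm{IPCC}} &= \frac{3\ \mathrm{C}}{3.8\ \mathrm{W\,m^{-2}}} \approx 0.79\ \mathrm{C/(W\,m^{-2})},
    &\qquad 0.79/0.090 &\approx 8.8 \\
  \Delta T_{2\times\mathrm{CO_2}} &= 0.090 \times 3.8 \approx 0.34\ \mathrm{C},
    &\qquad 1.5/0.34 \approx 4.4,\quad 4.5/0.34 &\approx 13 \\
  \Delta T_{2\times\mathrm{CO_2}+\mathrm{CH_4}+\mathrm{N_2O}} &= 0.090 \times 5 \approx 0.45\ \mathrm{C} &&
\end{align*}
```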
But back to the negative curvature of the sensitivity plot. The change in air temperature is supposed to be linear with forcing. But here we see that for 50 years average air temperature has been negatively curved with forcing. Something is happening. In proper AGW climatology fashion, I could suppose that the data are wrong because models are always right.
But in my own scientific practice (and the practice of everyone else I know), data are the measure of theory and not vice versa. Kevin, Michael, and Gavin may criticize me for that because climatology is different and unique and Ravetzian, but I’ll go with the primary standard of science anyway.
So, what does negative curvature mean? If it’s real, that is. It means that the sensitivity of climate to GHG forcing has been decreasing all the while the GHG forcing itself has been increasing.
If I didn’t know better, I’d say the data are telling us that something in the climate system is adjusting to the GHG forcing. It’s imposing a progressively negative feedback.
It couldn’t be the negative feedback of Roy Spencer’s clouds, could it?
The climate, in other words, is showing stability in the face of a perturbation. As the perturbation is increasing, the negative compensation by the climate is increasing as well.
Let’s suppose the last 50 years are an indication of how the climate system will respond to the next 100 years of a continued increase in GHG forcing.
The inset of Figure 5 shows how the climate might respond to a steadily increased GHG forcing right up to the year 2100. That’s up through a quadrupling of atmospheric CO2.
The red line indicates the projected increase in temperature if the 0.03 C/decade linear fit model was true. Alternatively, the blue line shows how global average air temperature might respond, if the empirical negative feedback response is true.
If the climate continues to respond as it has already done, by 2100 the increase in temperature will be fully 50% less than it would be if the linear response model was true. And the linear response model produces a much smaller temperature increase than the IPCC climate model, umm, model.
Semi-empirical linear model: 0.84 C warmer by 2100.
Fully empirical negative feedback model: 0.42 C warmer by 2100.
And that’s with 10 W/m^2 of additional GHG forcing and an atmospheric CO2 level of 1274 ppmv. By way of comparison, the IPCC A2 model assumed a year 2100 atmosphere with 1250 ppmv of CO2 and a global average air temperature increase of 3.6 C.
So let’s add that: Official IPCC A2 model: 3.6 C warmer by 2100.
The semi-empirical linear model alone, empirically grounded in 50 years of actual data, says the temperature will have increased only 0.23 of the IPCC’s A2 model prediction of 3.6 C.
And if we go with the empirical negative feedback inference provided by Earth, the year 2100 temperature increase will be 0.12 of the IPCC projection.
So, there’s a nice lesson for the IPCC and the AGW modelers, about GCM projections: they are contradicted by the data of Earth itself. Interestingly enough, Earth contradicted the same crew, big time, at the hands of Demetris Koutsoyiannis, too.
So, is all of this physically real? Let’s put it this way: it’s all empirically grounded in real temperature numbers. That, at least, makes this analysis far more physically real than any paleo-temperature reconstruction that attaches a temperature label to tree ring metrics or to principal components.
Clearly, though, since unknown amounts of systematic error are attached to global temperatures, we don’t know if any of this is physically real.
But we can say this to anyone who assigns physical reality to the global average surface air temperature record, or who insists that the anomaly record is climatologically meaningful: The surface air temperatures themselves say that Earth’s climate has a very low sensitivity to GHG forcing.
The major assumption used for this analysis, that the climate of the early part of the 20th century was free of human influence, is common throughout the AGW literature. The second assumption, that the natural underlying warming trend continued through the second half of the last 130 years, is also reasonable given the typical views expressed about a constant natural variability. The rest of the analysis automatically follows.
In the context of the IPCC’s very own ballpark, Earth itself is telling us there’s nothing to worry about in doubled, or even quadrupled, atmospheric CO2.

“So, it does not matter what the “vast majority” does.”
Which is to say, it only serves to weight how prominent it will be in the result.
I’m not saying my results are gospel truth. I realize that the data are not particularly reliable. But, the quasi-cyclic processes at 88, 62, and 23 years are significant, and reinforce the hypothesis (expectation, really) that there should be modal signatures visible in the data. In the end, this lends support to Pat Frank’s method of analysis, and the conclusion that 20th century temperatures are largely explained by natural quasi-cyclic processes reinforcing constructively to drive them up to a local peak in the first decade of the 21st century.
“Cut the data in two halves, make a PSD for each half and you’ll see. “
This is an arbitrary, capricious, and irrelevant method of analysis of a stochastic signal. I have explained this over and over and over, and it simply does not penetrate. The hump is still there, albeit at lower energy. And, this is perfectly ordinary for a modal excitation with particular energy dissipation characteristics.
Bart says:
Adjusting them for the attenuation of the 29 year filter, they should be about 0.05, 0.06, and 0.07 degC RMS, respectively.
Which all are smaller than the standard error quoted by Loehle of 0.14 degC. And you have not shown the PSDs yet. Remember to split the data into two halves. What do you mean by ‘raw’ data? The 18 series that I plotted here:
http://www.leif.org/research/Loehle-1-18.png
Please note (because I don’t want anyone to be confused) that the RMS of the 62 year process being at 0.041 degC, and the 23 year process being at 0.013 degC, means the energy ratio is (0.041/0.013)^2 = 9.95, or about 10X, as I stated above. Adjusting for the weighting of the sliding average shows these processes have comparable RMS.
On this: “I realize that the data are not particularly reliable.” I also realize that the quasi-cyclic processes I observe could be artifacts of the way the data were constructed by Loehle. But, the 62 year and 23 year processes have similar periods to those noted in the 20th century data, so I suspect they are related.
“The hump is still there, albeit at lower energy. And, this is perfectly ordinary for a modal excitation with particular energy dissipation characteristics.”
It would be perfectly ordinary if it disappeared into incoherent noise altogether for a time. This is why I provided everyone with the simple simulation model to observe how such processes might behave.
“Which all are smaller than the standard error quoted by Loehle of 0.14 degC.”
Let’s suppose I have data with 0.14 degC independent standard error masking a bias of some value, and I perform a sample mean of 1024 data points. Can I discern the bias to less than 0.14 degC?
Yes. The square root of 1024 is 32, so my sample mean standard deviation is 0.0044.
I am away for the rest of the day. Temporary non-response should not be construed as acquiescence.
Bart says:
June 11, 2011 at 12:04 pm
It would be perfectly ordinary if it disappeared into incoherent noise altogether for a time. This is why I provided everyone with the simple simulation model to observe how such processes might behave.
1000 years is just not ‘for a time’. It is a very valid procedure to test if the signal is present in subsets of the data. Now, you might have a point if 88, 62, and 23 years were the only peaks in the PSDs, but they are not, so show the PSDs. The signal is what is important, don’t inflate it by squaring it to get the power. The 18 series are here: http://www.leif.org/research/Loehle-18-series.xls calculate the PSDs for each and report on what you find.
Leif, what I’m saying is that what you’re calling numerology isn’t numerology. It’s physical phenomenology, which is entirely within the scope of science.
You’re misappropriating “numerology,” and using it as a term to disparage a completely valid process in science in general, and the analysis I did in particular.
Bart says:
June 11, 2011 at 12:17 pm
Let’s suppose I have data with 0.14 degC independent standard error masking a bias of some value, and I perform a sample mean of 1024 data points. Can I discern the bias to less than 0.14 degC?
Yes. The square root of 1024 is 32, so my sample mean standard deviation is 0.0044.
Except the data points are not independent.
Pat Frank says:
June 11, 2011 at 12:42 pm
You’re misappropriating “numerology,” and using it as a term to disparage a completely valid process in science in general, and the analysis I did in particular.
The Titius-Bode ‘law’ is still a good example:
The law relates the semi-major axis, a, of each planet outward from the Sun in units such that the Earth’s semi-major axis is equal to 10: a = 4 + n, where n = 0, 3, 6, 12, 24, 48, … and, after the first two values, each n is twice the previous one.
You would call this “physical phenomenology”. It fitted well until Neptune was discovered, but then fell as the first case outside of the defining domain was found. The Titius-Bode law was discussed as an example of fallacious reasoning by the astronomer and logician Peirce in 1898. The planetary science journal Icarus does not accept papers on the ‘law’.
As I said, it is OK to do numerology as long as you KNOW it is that. Once you believe that you can extrapolate outside of the defining domain without having another reason than just the fit, it is no longer OK.
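For readers who have not met the example, the failure described here is easy to check numerically. The sketch below writes the rule in AU rather than the Earth = 10 units above; the semi-major axes are standard approximate values.

```python
# Quick check of the Titius-Bode 'law', written in AU: a = 0.4 + 0.3 * 2**k.
# It tracks the planets out to Uranus, then misses Neptune badly, which is
# the point about extrapolating a bare fit outside its defining domain.
planets = {
    "Mercury": 0.39, "Venus": 0.72, "Earth": 1.00, "Mars": 1.52,
    "Ceres": 2.77, "Jupiter": 5.20, "Saturn": 9.58,
    "Uranus": 19.2, "Neptune": 30.1,
}
predicted = [0.4] + [0.4 + 0.3 * 2**k for k in range(len(planets) - 1)]

for (name, actual), pred in zip(planets.items(), predicted):
    flag = "  <-- fails" if abs(pred - actual) / actual > 0.2 else ""
    print(f"{name:8s} actual {actual:6.2f} AU   Titius-Bode {pred:6.2f} AU{flag}")
```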
“Except the data points are not independent.”
It does not matter if the “data” are not independent. What matters is if the errors are independent. Or, more precisely, how they are correlated.
“1000 years is just not ‘for a time’.
In geological terms, it is the blink of an eye. Moreover, it did not just disappear. It is simply near the noise floor (though still observable).
“It fitted well until Neptune was” yada, yada, yada…
But, there is no physical reason to expect that the planets would be arranged thusly. There is ample reason, as I have solidly justified, to expect sinusoidal variations in variables describing natural physical processes.
Should have said “It is simply nearer the noise floor.” Taking a second look, apparent average power is reduced by about 1/2, so RMS is about 70% of what it was. But, the PSD itself is significantly more variable, because I have half the data to smooth, so this is not, by any means, a certainty.
You are so far off base here, Leif. You really have no idea what you are talking about. If I find a way to post the graphic where you can see it, you will be embarrassed (or should be).
Bart says:
June 11, 2011 at 6:53 pm
What matters is if the errors are independent
For running means they are not.
But, there is no physical reason to expect that the planets would be arranged thusly. There is ample reason, as I have solidly justified, to expect sinusoidal variations in variables describing natural physical processes.
You have not justified that at all, and one does not expect sinusoidal variations in all natural physical processes. Many [most] are irreversible, e.g. making an omelet, a tornado going through, a solar flare going off, or a star burning its fuel.
Bart says:
June 11, 2011 at 7:10 pm
You are so far off base here, Leif. You really have no idea what you are talking about. If I find a way to post the graphic where you can see it, you will be embarrassed (or should be).
I don’t seem to be getting through to you, so perhaps as you suggested it is not even worth trying.
“For running means they are not.”
That depends on the correlation of the raw data. If they are uncorrelated sample to sample, a single average of a sliding average filter is still uncorrelated with other averages with which it does not overlap. Furthermore, while the sample mean reduces uncertainty for uncorrelated data by the inverse square root of N, where N is the number of samples, the uncertainty in estimates of other properties can go down faster than this. For example, when performing a linear trend on data with uncorrelated errors, the uncertainty in the slope estimate goes down as the inverse of N to the 3/2’s power. When the thing you are trying to estimate has a definite pattern or signature which can be easily discerned from the noise, you can get very rapid reduction in uncertainty.
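A small Monte Carlo illustration of those two scalings for uncorrelated Gaussian errors (synthetic numbers, not taken from either commenter’s analysis):

```python
# Monte Carlo sketch: for uncorrelated noise the sample-mean standard deviation
# shrinks like N**-0.5, while the OLS slope standard deviation shrinks like
# N**-1.5.  Synthetic illustration only.
import numpy as np

rng = np.random.default_rng(0)
sigma, trials = 0.14, 5000                 # 0.14 degC noise, as in the exchange

for n in (64, 256, 1024):
    x = np.arange(n, dtype=float)
    xc = x - x.mean()
    noise = rng.normal(0.0, sigma, size=(trials, n))
    mean_std = noise.mean(axis=1).std()
    slope_std = ((noise - noise.mean(axis=1, keepdims=True)) @ xc / (xc @ xc)).std()
    print(f"N={n:5d}  mean std {mean_std:.4f} (theory {sigma / np.sqrt(n):.4f})   "
          f"slope std {slope_std:.2e} (theory {sigma * np.sqrt(12 / n**3):.2e})")
```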
“Many [most] are irreversible, e.g. making an omelet, a tornado going through, a solar flare going off, or a star burning its fuel.”
Poke your omelet with a fork. Does it not tend to jiggle in response at a particular frequency? Are tornadoes not spawned cyclically? Are the Sun and other stars really exceptions to this universal rule?
“I don’t seem to be getting through to you…”
What is getting through to me is that you haven’t studied PSD estimation, but you think it can’t be done any better than just running an FFT over the data. And, your presumption is that anyone who tells you differently is unworthy of any respect or credence.
Bart says:
June 12, 2011 at 3:07 am
That depends on the correlation of the raw data.
There are 18 samples. The number quoted [0.14] is not the standard deviation, but is already divided by the square root of 17.
this universal rule?
There is no such universal rule. You are confusing ‘natural’ vibrations where the frequency depends on the vibrating matter [and which are quickly damped out] and ‘forced’ vibrations where the frequency is that of an applied cyclic force [e.g. the solar cycle].
PSD estimation, but you think it can’t be done any better than just running an FFT over the data. And, your presumption is that anyone who tells you differently is unworthy of any respect or credence.
The PSD is no more than the FT of the autocorrelation of the signal. In a more innocent age long ago, the autocorrelation function was often used to tease out periodicities, e.g. http://www.leif.org/research/Rigid-Rotation-Corona.pdf and can, of course, today be used for the same purpose.
What I’m not getting through to you is the nature of the Loehle data. Here are the FFT of series 1, 7, and 13. http://www.leif.org/research/Loehle-1-7-13.png. The peaks are artifacts of the construction of the data, e.g. the lack of power at 29, 29/2, 29/3, 29/4, etc for series 7. The data is simply not good enough for demonstrating a 22-yr peak. And may not be good enough either for a 60-yr peak, in which case I may have been guilty of overreaching when stating that there was no genuine 60-yr peak in the Loehle reconstruction.
The respect is undermined a bit by your use of words like ‘capricious’ and ‘crap’, but let that slide; your words speak for themselves.
Leif Svalgaard says:
June 12, 2011 at 9:16 am
“The number quoted [0.14] is not the standard deviation, but is already divided by the square root of 17.”
I am talking about temporal smoothing of the time series in order to pick out components of the signal.
“…which are quickly damped out… ”
They are not, in general, quickly damped out – damping is a function of energy dissipation, and energy dissipation depends on the availability of sympathetic sinks. When a lightly damped mode is excited by persistent random forcing, it gets persistently regenerated. Even the “forced” vibrations you speak of are actually excitations of extremely lightly damped overall system modes.
“The PSD is no more than the FT of the autocorrelation of the signal.”
But, a PSD estimate formed by the Fourier transform of noisy data is NOT consistent (in a statistical sense) or well behaved. There are reams of literature on this subject, on how to improve the estimation process, to which you choose to turn a blind eye. I gave you pointers on the subject here.
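A synthetic illustration of that difference (not the Loehle data): a 62-year oscillation buried in white noise, estimated once with a single raw periodogram and once with Welch’s averaged method from scipy. The averaged estimate has far less scatter and shows the peak cleanly.

```python
# Raw periodogram vs Welch's averaged PSD estimate for a 62-yr oscillation
# buried in white noise.  Synthetic data, not the Loehle reconstruction.
import numpy as np
from scipy import signal

rng = np.random.default_rng(1)
n, fs = 2000, 1.0                                  # 2000 "annual" samples
t = np.arange(n)
y = 0.1 * np.sin(2 * np.pi * t / 62.0) + rng.normal(0.0, 0.3, n)

f_raw, p_raw = signal.periodogram(y, fs=fs)        # single-FFT estimate
f_w, p_w = signal.welch(y, fs=fs, nperseg=512)     # averaged over segments

peak = f_w[np.argmax(p_w[1:]) + 1]                 # skip the zero-frequency bin
print(f"Welch peak near a period of {1.0 / peak:.0f} years")

# Scatter of each estimate over the noise-dominated high frequencies:
hi_raw, hi_w = f_raw > 0.1, f_w > 0.1
print(f"raw periodogram scatter {p_raw[hi_raw].std() / p_raw[hi_raw].mean():.2f}, "
      f"Welch scatter {p_w[hi_w].std() / p_w[hi_w].mean():.2f}")
```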
A proper PSD estimate clearly shows well defined peaks at the 88 year, 62 year, and 23 year associated frequencies as I have stated. And, your analysis is crap.
“Even the “forced” vibrations you speak of are actually excitations of extremely lightly damped overall system modes.”
Eventually, the energy of the universe will all be locked away where no more sinks are available, and the universal heat death will ensue.
“The respect is undermined a bit by your use of word like ‘capricious’ and ‘crap’…”
And, yours, by the use of words like “numerology”, and by categorical statements like “Fourier analysis on global temperatures [e.g. Loehle’s] show no power at 60 years, or any other period for that matter” when you have not performed a valid analysis.
capricious adjective 1. subject to, led by, or indicative of caprice or whim; erratic.
You seem to think it means something other than what it does.
“The data is simply not good enough for demonstrating a 22-yr peak.”
We can argue the quality of the data separately. The peak is there regardless. As I stated, I believe it is likely valid simply because a similar peak also appears in the 20th century data. This latter point on its validity is open to debate. The existence of the peak in a proper PSD estimate from the data is not.
“When a lightly damped mode is excited by persistent random forcing, it gets persistently regenerated.”
A rather dramatic example of this, taught to neophyte engineering students, is the excitation of the twisting mode of the Tacoma Narrows Bridge, which led to its ultimate collapse.
Indeed, the universality of modal excitation extends to the quantum world. The vibration modes of an atom are those of the bound states of electrons. In classical mechanics, these modes would eventually decay through loss of energy. But, they are continually excited by the quantum potential in the Hamilton-Jacobi equation, as presented by Bohm (pages 28, 29).
I’ve now posted a three-part response to Tamino’s second round of criticisms. We’ll see how it fares.
Leif, “The Titius-Bode ‘law’ … You would call this “physical phenomenology”.”
No, I wouldn’t. I’d call the Rydberg formula an example of physical phenomenology. It was made in the absence of an overriding explanatory theory, but derived to describe observations while hewing as much as possible to known physics. Other examples include linear free energy relationships in organic chemistry, including the Hammett equation.