Readers may recall Pat Frank's excellent essay on uncertainty in the temperature record. He emailed me about this new essay he posted on the Air Vent, suggesting I cover it at WUWT; I regret it got lost in my firehose of daily email. Here it is now. – Anthony
Future Perfect
By Pat Frank
In my recent “New Science of Climate Change” post here on Jeff’s tAV, the cosine fits to differences among the various GISS surface air temperature anomaly data sets were intriguing. So, I decided to see what, if anything, cosines might tell us about the surface air temperature anomaly trends themselves. It turned out they have a lot to reveal.
As a qualifier, regular tAV readers know that I've published on the amazing neglect of the systematic instrumental error present in the surface air temperature record. It seems certain that surface air temperatures are so contaminated with systematic error – at least (+/-)0.5 C – that the global air temperature anomaly trends have no climatological meaning. I've done further work on this issue and, although the analysis is incomplete, so far it looks like the systematic instrumental error may be worse than we thought. 🙂 But that's for another time.
Systematic error is funny business. In surface air temperatures it's not necessarily a constant offset but a variable error. That means it not only biases the mean of a data set, but is likely to have an asymmetric distribution in the data. Systematic error of that sort in a temperature series may enhance a time-wise trend or diminish it, or switch back-and-forth in some unpredictable way between these two effects. Since the systematic error arises from the effects of weather on the temperature sensors, the systematic error will vary continuously with the weather. The mean error bias will be different for every data set, and so will the distribution envelope of the systematic error.
For right now, though, I’d like to put all that aside and proceed with an analysis that accepts the air temperature context as found within the IPCC ballpark. That is, for the purposes of this analysis I’m assuming that the global average surface air temperature anomaly trends are real and meaningful.
I have the GISS and the CRU annual surface air temperature anomaly data sets out to 2010. In order to make the analyses comparable, I used the GISS start time of 1880. Figure 1 shows what happened when I fit these data with a combined cosine function plus a linear trend. Both data sets were well-fit.
The unfit residuals are shown below the main plots. A linear fit to the residuals tracked exactly along the zero line, to 1 part in ~10^5. This shows that both sets of anomaly data are very well represented by a cosine-like oscillation plus a rising linear trend. The linear parts of the fitted trends were: GISS, 0.057 C/decade and CRU, 0.058 C/decade.
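For readers who want to experiment, here is a minimal sketch of this kind of cosine-plus-linear fit in Python. This is not Pat Frank's actual code: the synthetic stand-in data, the model form, and the starting guesses are all illustrative assumptions.

```python
# A minimal sketch of a cosine-plus-linear OLS fit, assuming annual anomalies
# in NumPy arrays `year` and `anom`. Synthetic stand-in data roughly shaped
# like the GISS anomalies; not the real GISS or CRU series.
import numpy as np
from scipy.optimize import curve_fit

def cosine_plus_line(t, amp, period, phase, slope, intercept):
    """Cosine oscillation superimposed on a linear trend."""
    return amp * np.cos(2 * np.pi * (t - phase) / period) + slope * t + intercept

year = np.arange(1880, 2011, dtype=float)
rng = np.random.default_rng(0)
anom = (0.1 * np.cos(2 * np.pi * (year - 1940) / 60)
        + 0.0057 * (year - 1880) - 0.3 + rng.normal(0, 0.1, year.size))

# Initial guesses: ~0.1 C amplitude, ~60 yr period, modest warming slope.
p0 = [0.1, 60.0, 1940.0, 0.005, -10.0]
params, _ = curve_fit(cosine_plus_line, year, anom, p0=p0)
residual = anom - cosine_plus_line(year, *params)

print(f"period = {params[1]:.1f} yr, slope = {10 * params[3]:.3f} C/decade")
print(f"residual trend = {np.polyfit(year, residual, 1)[0]:.2e} C/yr")
```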
Figure 1. Upper: Trends for the annual surface air temperature anomalies, showing the OLS fits with a combined cosine function plus a linear trend. Lower: The (data minus fit) residual. The colored lines along the zero axis are linear fits to the respective residual. These show the unfit residuals have no net trend. Part a, GISS data; part b, CRU data.

Removing the oscillations from the global anomaly trends should leave only the linear parts of the trends. What does that look like? Figure 2 shows this: the linear trends remaining in the GISS and CRU anomaly data sets after the cosine is subtracted away. The pure subtracted cosines are displayed below each plot.
Each of the plots showing the linearized trends also includes two straight lines. One of them is the line from the cosine plus linear fits of Figure 1. The other straight line is a linear least squares fit to the linearized trends. The linear fits had slopes of: GISS, 0.058 C/decade and CRU, 0.058 C/decade, which may as well be identical to the line slopes from the fits in Figure 1.
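The linearization step is just a subtraction followed by an ordinary least-squares line. A minimal continuation of the sketch above, reusing its hypothetical variable names:

```python
# Continuing the sketch above: remove the fitted oscillation and check that
# a straight OLS line through what remains matches the linear part of the
# combined fit. Variable names carry over from the previous block.
amp, period, phase, slope, intercept = params
cosine_part = amp * np.cos(2 * np.pi * (year - phase) / period)
linearized = anom - cosine_part

ols_slope, ols_intercept = np.polyfit(year, linearized, 1)
print(f"combined-fit slope:   {10 * slope:.3f} C/decade")
print(f"linearized OLS slope: {10 * ols_slope:.3f} C/decade")  # should agree
```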
Figure 1 and Figure 2 show that to a high degree of certainty, and apart from year-to-year temperature variability, the entire trend in global air temperatures since 1880 can be explained by a linear trend plus an oscillation.
Figure 3 shows that the GISS cosine and the CRU cosine are very similar – probably identical given the quality of the data. They show a period of about 60 years, and an intensity of about (+/-)0.1 C. These oscillations are clearly responsible for the visually arresting slope changes in the anomaly trends after 1915 and after 1975.
Figure 2. Upper: The linear part of the annual surface average air temperature anomaly trends, obtained by subtracting the fitted cosines from the entire trends. The two straight lines in each plot are: OLS fits to the linear trends and, the linear parts of the fits shown in Figure 1. The two lines overlay. Lower: The subtracted cosine functions.

The surface air temperature data sets consist of land surface temperatures plus the SSTs. It seems reasonable that the oscillation represented by the cosine stems from a net heating-cooling cycle of the world ocean.
The major oceanic cycles include the PDO, the AMO, and the Indian Ocean oscillation. Joe D'Aleo has a nice summary of these here (pdf download).
The combined PDO+AMO is a rough oscillation and has a period of about 55 years, with a 20th century maximum near 1937 and a minimum near 1972 (D’Aleo Figure 11). The combined ocean cycle appears to be close to another maximum near 2002 (although the PDO has turned south). The period and phase of the PDO+AMO correspond very well with the fitted GISS and CRU cosines, and so it appears we’ve found a net world ocean thermal signature in the air temperature anomaly data sets.
In the “New Science” post we saw a weak oscillation appear in the GISS surface anomaly difference data after 1999, when the SSTs were added in. Up to and including 1999, the GISS surface anomaly data comprised only the land surface temperatures.
So, I checked the GISS 1999 land surface anomaly data set to see whether it, too, could be represented by a cosine-like oscillation plus a linear trend. And so it could. The oscillation had a period of 63 years and an intensity of (+/-)0.1 C. The linear trend was 0.047 C/decade; pretty much the same oscillation, but a slower warming trend by 0.01 C/decade. So, it appears that the net world ocean thermal oscillation is teleconnected into the global land surface air temperatures.
But that’s not the analysis that interested me. Figure 2 appears to show that the entire 130 years between 1880 and 2010 has had a steady warming trend of about 0.058 C/decade. This seems to explain the almost rock-steady 20th century rise in sea level, doesn’t it?
The argument has always been that the climate of the first 40-50 years of the 20th century was unaffected by human-produced GHGs. After 1960 or so, certainly after 1975, the GHG effect kicked in, and the thermal trend of the global air temperatures began to show a human influence. So the story goes.
Isn’t that claim refuted if the late 20th century warmed at the same rate as the early 20th century? That seems to be the message of Figure 2.
But the analysis can be carried further. The early and late air temperature anomaly trends can be assessed separately, and then compared. That’s what was done for Figure 4, again using the GISS and CRU data sets. In each data set, I fit the anomalies separately over 1880-1940, and over 1960-2010. In the “New Science of Climate Change” post, I showed that these linear fits can be badly biased by the choice of starting points. The anomaly profile at 1960 is similar to the profile at 1880, and so these two starting points seem to impart no obvious bias. Visually, the slope of the anomaly temperatures after 1960 seems pretty steady, especially in the GISS data set.
Figure 4 shows the results of these separate fits, yielding the linear warming trend for the early and late parts of the last 130 years.
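A sketch of the split-period comparison, again reusing the hypothetical `year` and `linearized` arrays from the earlier snippets:

```python
# OLS slope over each sub-period of the linearized trend, in C/decade.
# `year` and `linearized` carry over from the previous sketches.
def decadal_rate(t0, t1):
    """OLS slope (C/decade) of the linearized trend over [t0, t1]."""
    mask = (year >= t0) & (year <= t1)
    return 10 * np.polyfit(year[mask], linearized[mask], 1)[0]

early = decadal_rate(1880, 1940)
late = decadal_rate(1960, 2010)
print(f"early: {early:.3f} C/decade, late: {late:.3f} C/decade, "
      f"difference: {late - early:.3f} C/decade")
```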
Figure 4: The Figure 2 linearized trends from the GISS and CRU surface air temperature anomalies, showing separate OLS linear fits to the 1880-1940 and 1960-2010 sections.

The fit results of the early and later temperature anomaly trends are in Table 1.
Table 1: Decadal Warming Rates for the Early and Late Periods.

| Data Set | C/d (1880-1940) | C/d (1960-2010) | C/d (late minus early) |
| --- | --- | --- | --- |
| GISS | 0.056 | 0.087 | 0.031 |
| CRU | 0.044 | 0.073 | 0.029 |

“C/d” is the slope of the fitted lines in Celsius per decade.
So there we have it. Both data sets show the later period warmed more quickly than the earlier period. Although the GISS and CRU rates differ by about 12%, the changes in rate (the third data column) are essentially identical.
If we accept the IPCC/AGW paradigm and grant the climatological purity of the early 20th century, then the natural recovery rate from the LIA averages about 0.05 C/decade. To proceed, we have to assume that the natural rate of 0.05 C/decade was fated to remain unchanged for the entire 130 years, through to 2010.
Assuming that, then the increased slope of 0.03 C/decade after 1960 is due to the malign influences from the unnatural and impure human-produced GHGs.
Granting all that, we now have a handle on the most climatologically elusive quantity of all: the climate sensitivity to GHGs.
I still have all the atmospheric forcings for CO2, methane, and nitrous oxide that I calculated for my Skeptic paper (http://www.skeptic.com/reading_room/a-climate-of-belief/). Together, these constitute the great bulk of new GHG forcing since 1880. Total chlorofluorocarbons add another 10% or so, but that’s not a large impact so they were ignored.
All we need do now is plot the progressive trend in recent GHG forcing against the balefully apparent human-caused 0.03 C/decade trend, all between the years 1960-2010, and the slope gives us the climate sensitivity in C/(W-m^-2). That plot is in Figure 5.
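Frank used the forcings he calculated for his Skeptic paper; those numbers aren't reproduced here, so the sketch below substitutes the common simplified CO2-only forcing expression dF = 5.35 ln(C/C0), with an assumed CO2 history. The resulting slope is only a ballpark illustration of the regression step and will not match his 0.090 C/W-m^-2.

```python
# Sketch of the sensitivity estimate: regress the excess-warming trend against
# GHG forcing over 1960-2010. The CO2 history and CO2-only forcing formula are
# stand-in assumptions, not Frank's calculated CO2+CH4+N2O forcings.
import numpy as np

years = np.arange(1960, 2011)
co2 = 317.0 * np.exp(0.004 * (years - 1960))   # rough ppmv history (assumed)
forcing = 5.35 * np.log(co2 / co2[0])          # W/m^2 relative to 1960
excess_T = 0.003 * (years - 1960)              # the 0.03 C/decade excess trend

sensitivity = np.polyfit(forcing, excess_T, 1)[0]
print(f"apparent sensitivity ~ {sensitivity:.3f} C per W/m^2")
```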
Figure 5. Blue line: the 1960-2010 excess warming, 0.03 C/decade, plotted against the net GHG forcing trend due to increasing CO2, CH4, and N2O. Red line: the OLS linear fit to the forcing-temperature curve (r^2=0.991). Inset: the same lines extended through to the year 2100.

There’s a surprise: the trend line shows a curved dependence. More on that later. The red line in Figure 5 is a linear fit to the blue line. It yielded a slope of 0.090 C/W-m^-2.
So there it is: every Watt per meter squared of additional GHG forcing, during the last 50 years, has increased the global average surface air temperature by 0.09 C.
Spread the word: the Earth climate sensitivity is 0.090 C/W-m^-2.
The IPCC says that the increased forcing due to doubled CO2, the bug-bear of climate alarm, is about 3.8 W/m^2. The consequent increase in global average air temperature is 3 Celsius at mid-range. So, the IPCC officially says that Earth’s climate sensitivity is 0.79 C/W-m^-2. That’s 8.8x larger than what Earth says it is.
Our empirical sensitivity says doubled CO2 alone will cause an average air temperature rise of 0.34 C above any natural increase. This value is 4.4x to 13x smaller than the range projected by the IPCC.
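The arithmetic in the two preceding paragraphs is easy to verify; the 1.5-4.5 C doubled-CO2 range is my inference from the "4.4x to 13x" figure, not a number given in the post:

```python
# Checking the arithmetic of the preceding two paragraphs.
ipcc_forcing_2xco2 = 3.8      # W/m^2 for doubled CO2 (cited above)
ipcc_mid_warming = 3.0        # C, IPCC mid-range for doubled CO2
empirical_sens = 0.090        # C per W/m^2, from Figure 5

ipcc_sens = ipcc_mid_warming / ipcc_forcing_2xco2
print(f"IPCC sensitivity: {ipcc_sens:.2f} C/(W/m^2)")            # ~0.79
print(f"ratio to empirical: {ipcc_sens / empirical_sens:.1f}x")  # ~8.8x
print(f"empirical 2xCO2 warming: "
      f"{empirical_sens * ipcc_forcing_2xco2:.2f} C")            # ~0.34
print(f"range ratios: {1.5 / 0.34:.1f}x to {4.5 / 0.34:.1f}x")   # ~4.4x to ~13x
```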
The total increased forcing due to doubled CO2, plus projected increases in atmospheric methane and nitrous oxide, is 5 W/m^2. The linear model says this will lead to a projected average air temperature rise of 0.45 C. This is about the rise in temperature we’ve experienced since 1980. Is that scary, or what?
But back to the negative curvature of the sensitivity plot. The change in air temperature is supposed to be linear with forcing. But here we see that for 50 years average air temperature has been negatively curved with forcing. Something is happening. In proper AGW climatology fashion, I could suppose that the data are wrong because models are always right.
But in my own scientific practice (and the practice of everyone else I know), data are the measure of theory and not vice versa. Kevin, Michael, and Gavin may criticize me for that because climatology is different and unique and Ravetzian, but I’ll go with the primary standard of science anyway.
So, what does negative curvature mean? If it’s real, that is. It means that the sensitivity of climate to GHG forcing has been decreasing all the while the GHG forcing itself has been increasing.
If I didn’t know better, I’d say the data are telling us that something in the climate system is adjusting to the GHG forcing. It’s imposing a progressively negative feedback.
It couldn’t be the negative feedback of Roy Spencer’s clouds, could it?
The climate, in other words, is showing stability in the face of a perturbation. As the perturbation is increasing, the negative compensation by the climate is increasing as well.
Let’s suppose the last 50 years are an indication of how the climate system will respond to the next 100 years of a continued increase in GHG forcing.
The inset of Figure 5 shows how the climate might respond to a steadily increased GHG forcing right up to the year 2100. That’s up through a quadrupling of atmospheric CO2.
The red line indicates the projected increase in temperature if the 0.03 C/decade linear fit model was true. Alternatively, the blue line shows how global average air temperature might respond, if the empirical negative feedback response is true.
If the climate continues to respond as it has already done, by 2100 the increase in temperature will be fully 50% less than it would be if the linear response model was true. And the linear response model produces a much smaller temperature increase than the IPCC climate model, umm, model.
Semi-empirical linear model: 0.84 C warmer by 2100.
Fully empirical negative feedback model: 0.42 C warmer by 2100.
And that’s with 10 W/m^2 of additional GHG forcing and an atmospheric CO2 level of 1274 ppmv. By way of comparison, the IPCC A2 model assumed a year 2100 atmosphere with 1250 ppmv of CO2 and a global average air temperature increase of 3.6 C.
So let’s add that: Official IPCC A2 model: 3.6 C warmer by 2100.
The semi-empirical linear model alone, empirically grounded in 50 years of actual data, says the temperature will have increased only 0.23 of the IPCC’s A2 model prediction of 3.6 C.
And if we go with the empirical negative feedback inference provided by Earth, the year 2100 temperature increase will be 0.12 of the IPCC projection.
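To make the comparison concrete, here is a minimal sketch of the two projections. The post does not give the functional form behind the curved blue line in the Figure 5 inset, so the square-root response below, and the end-of-window forcing value, are purely illustrative assumptions:

```python
# Rough sketch of the two year-2100 projections. The "negative feedback"
# curve is modeled here as a square root scaled to match the linear model
# at the (assumed) forcing reached by the end of the 1960-2010 fit window.
import numpy as np

sens = 0.090        # C per W/m^2 (linear fit, Figure 5)
f_fit_end = 2.0     # W/m^2, assumed forcing at end of fit window
f_2100 = 10.0       # W/m^2 additional forcing by 2100 (from the post)

linear_2100 = sens * f_2100
k = sens * f_fit_end / np.sqrt(f_fit_end)   # match linear model at f_fit_end
curved_2100 = k * np.sqrt(f_2100)

print(f"linear: {linear_2100:.2f} C, curved: {curved_2100:.2f} C")
# linear: 0.90 C, curved: 0.40 C -- in the neighborhood of the 0.84 / 0.42
# figures quoted above, under these assumptions.
```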
So, there’s a nice lesson for the IPCC and the AGW modelers, about GCM projections: they are contradicted by the data of Earth itself. Interestingly enough, Earth contradicted the same crew, big time, at the hands of Demetris Koutsoyiannis, too.
So, is all of this physically real? Let’s put it this way: it’s all empirically grounded in real temperature numbers. That, at least, makes this analysis far more physically real than any paleo-temperature reconstruction that attaches a temperature label to tree ring metrics or to principal components.
Clearly, though, since unknown amounts of systematic error are attached to global temperatures, we don’t know if any of this is physically real.
But we can say this to anyone who assigns physical reality to the global average surface air temperature record, or who insists that the anomaly record is climatologically meaningful: The surface air temperatures themselves say that Earth’s climate has a very low sensitivity to GHG forcing.
The major assumption used for this analysis, that the climate of the early part of the 20th century was free of human influence, is common throughout the AGW literature. The second assumption, that the natural underlying warming trend continued through the second half of the last 130 years, is also reasonable given the typical views expressed about a constant natural variability. The rest of the analysis automatically follows.
In the context of the IPCC’s very own ballpark, Earth itself is telling us there’s nothing to worry about in doubled, or even quadrupled, atmospheric CO2.

Pat says: “…for the purposes of this analysis I’m assuming that the global average surface air temperature anomaly trends are real and meaningful.” You are totally wrong to assume that they are real and meaningful. They are not, as comparison with satellite temperature measurements will tell you. Obviously you have not read my book “What Warming?” or you would know that they fabricate temperature curves. One example is the period in the eighties and nineties they show as a steady temperature rise called the “late twentieth century warming.” I have proved that this so-called warming does not even exist. What does exist in the eighties and nineties is a temperature oscillation caused by the alternation of El Nino and La Nina phases of ENSO, up and down by half a degree for twenty years, but no rise until 1998. That is ten years after Hansen invoked global warming in front of the Senate in 1988. His testimony gave a kick-start to the present global warming craze, which turns out to have been founded on a non-existent warming. There is more – get my book from Amazon and read it. I can see why warmists want to ignore it, but there is no reason for someone who wants to learn the truth about global warming not to know what is in it.
Well done indeed – some very impressive-sounding words like “oscillation”, “residuals” and “sensitivity”, nice curvy lines that fit the data properly, and, best of all, a conclusion that confirms my beliefs. I have no scientific training, let alone understanding of climatology, but I know this must be real, empirical science (not like that IPCC rubbish). It gives the answer I want.
Anthony’s response to Joel Shore’s comment above commits the same blunder he accuses Tamino of – dismissal by rubbishing the integrity of the commenter. As I think I’ve said before here, “play the ball, not the man.”
That said, I really would love to see a response from Frank!
SteveSadlov (June 3, 2011 at 10:14 am) wrote:
“Now for a quick primer regarding the Pacific / Hawaiian High. This feature, one of the famous semi permanent Semi Tropical / Horse Latitudes Highs, is normally well up into the mid latitudes by this time of year. But not this year. It is stuck in the tropics.
Consider this. What is described here, given the relative extents and masses of the Pacific and Atlantic Oceans, is essentially a low frequency input signal being applied to the global climate circuit. Draw your own conclusions.”
–
Requires too much thought for those who think in anomalies and can’t be bothered with changes of state of water. Looks like it will be decades before people clue in. Good to see evidence that there’s at least one person thinking — much appreciated.
“… as we all know and has been demonstrated repeatedly, Grant Foster can’t tolerate any dissenting analysis/comments there.”
That has not been my experience. He allows plenty of dissension, but he does not suffer fools gladly; nor should he. If Pat Frank is so confident of his analysis, he should submit it for publication to any of the peer-reviewed climate journals, and then see where the chips fall.
I would be happy to see an exchange here or at Open Mind between Pat Frank and Tamino/Grant Foster. It seems to me that at this point, Mr. Frank has some explaining to do in responding to the critique of Mr. Foster and the others who responded in detail at Open Mind.
@Charles (June 3, 2011 at 8:57 pm)
Tamino is very heavy-handed with censorship, even of benign comments.
Matt says:
June 3, 2011 at 2:39 pm
“The problem is: the climate system is driven by the interplay of multiple natural and multiple human forcings. In order to separate human and natural forcings, you need to meticulously account for these effects.”
The problem with that is: process of elimination only works when your knowledge of all alternatives is complete. Climate Science has only been researched seriously for a very few decades, and the Earth’s climate system is immensely complex. Based on your sober writing, I doubt you would claim that every potentially significant effect which could cause a ~60 year temperature cycle has been investigated and demonstrated to be insignificant. If you did, the only effect on my perspective would be to lower my opinion of your sagacity.
“You cannot just take the difference between a slope before and after some arbitrary year. That is nonsense.”
I think you are misinterpreting the exercise. The author is performing an experiment in which he accepts the IPCC argument that significant change occurred mid-century, and follows the path where that leads. And, given the presence of a 60-ish year cycle in the data, it leads to less climate sensitivity than the IPCC claims.
Pace Tamino and his ilk, there clearly is a ~60 year cyclical process in the data over the last century, evident by inspection. Is it a phantom of measurement error, or mere coincidence in timing between an early transient and subsequent rise to significance of GHG forcing? Or, is it the excitation of a fundamental mode of the system which began a century or more ago, and has yet to damp out?
Given the third coincidence of peaking in the early part of this century right on schedule, I would tend to suspect the latter. In fact, this is precisely how the output of such a mode, coupled in series with an integrator or longer cycle mode, might look driven by white noise or any other random process within the bandwidth. I would suggest to the author trying out a fit with an amplitude modulated sinusoid, which looks to me could be contrived to give a better fit.
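For what it's worth, here is a minimal sketch of the amplitude-modulated fit Bart suggests, with a linear envelope as one possible choice of modulation. The data are synthetic stand-ins, not the GISS or CRU series:

```python
# Fit a cosine whose amplitude changes linearly with time, plus a trend.
# Envelope form, data, and starting guesses are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def am_sinusoid(t, a0, a1, period, phase, slope, intercept):
    """Cosine with a linearly time-varying amplitude, plus a linear trend."""
    envelope = a0 + a1 * (t - t.min())
    return envelope * np.cos(2 * np.pi * (t - phase) / period) \
        + slope * t + intercept

year = np.arange(1880, 2011, dtype=float)
rng = np.random.default_rng(1)
anom = (0.1 * np.cos(2 * np.pi * (year - 1940) / 60)
        + 0.0057 * (year - 1880) - 0.3 + rng.normal(0, 0.1, year.size))

p0 = [0.1, 0.0, 60.0, 1940.0, 0.005, -10.0]
params, _ = curve_fit(am_sinusoid, year, anom, p0=p0)
print(f"envelope: {params[0]:.3f} + {params[1]:.5f}/yr, "
      f"period {params[2]:.1f} yr")
```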
Leif Svalgaard says:
June 2, 2011 at 7:21 am
“Without physics, this is as much numerology as Frank’s.”
With incomplete knowledge of all significantly contributing physical processes, it’s all “numerology” at some level. When you do not know what is going on (and, don’t anyone try to tell me the climate establishment fully understands the lull in temperature rise of the last decade), you look at the data and try to tease out some order which can give you new directions in which to investigate.
Matt says:
June 3, 2011 at 2:39 pm
“I started reading the literature and I was shocked to discover that the work is very thorough.”
One last comment on this posting. No matter how brilliant the researchers or “thorough” their work, they can still be hopelessly wrong. Ptolemaic astronomers had an incredibly thorough and deeply researched methodology which, contrary to most people’s perceptions, gave a reasonably good and repeatable description of the movement of heavenly bodies with well-established predictive power. It was just completely and utterly wrong in its driving assumptions. These were not primitive cave dwellers. They were profoundly knowledgeable and intellectually vibrant men who were limited only by the state of knowledge of the day.
Climate is the rock against which the ship of 20th century reductionist-inductive (linear catholic logic) science is going to founder. This rock bears a striking resemblance to the head of Karl Popper.
Thank you very much for an excellent article! Granted, the causes of everything are not explained. But would anyone have criticized Tycho Brahe for his excellent work measuring the star positions? Perhaps some fine tuning on the numbers can be done. However, now we need a ‘Kepler’ and ‘Newton’ to explain these graphs.
Werner Brozek says:
June 4, 2011 at 11:20 am
Perhaps some fine tuning on the numbers can be done. However now we need a ‘Kepler’ and ‘Newton’ to explain these graphs.
Initially Kepler fell into the same trap as Frank. Fitting crummy [limited] data to beautiful curves: http://www.georgehart.com/virtual-polyhedra/kepler.html
Leif Svalgaard says:
June 4, 2011 at 11:59 am
“Fitting crummy [limited] data to beautiful curves…”
Again, I think this is a misinterpretation. Frank is engaging in hypothesis testing. The IPCC says the data are good. Do the data, then, take us where the IPCC says we are going?
If the data are that crummy, then what information, if any, do they hold?
Bart says:
June 4, 2011 at 12:29 pm
Frank is engaging in hypothesis testing
his hypothesis then is that the curves and trend found for the data in the fitting window are also valid outside it, for which there is no evidence [especially not for the future part]. This might be valid if there is a theory that says that it must be so. If no such theory is supplied, it is just numerology.
Bart says:
I think the IPCC has always been pretty clear in noting that the instrumental temperature record does not alone place very strong bounds on climate sensitivity. Rather, better empirical evidence is obtained from combining it with constraints from other events such as the last glacial maximum, the climate response to the Mt. Pinatubo eruption (which involves the instrumental temperature record, but just a small portion of it), … And, these empirical constraints on climate sensitivity give a similar range as is found using climate models.
So, Frank’s post is really nothing new…If you make some assumptions regarding the instrumental temperature record, you can find a very low climate sensitivity; however, if you make other assumptions, you can find a very high climate sensitivity.
Leif Svalgaard says:
June 4, 2011 at 5:07 pm
“…for which there is no evidence…”
Kind of the entire AGW brouhaha in a nutshell, that.
The data may be crummy but until we get the BEST, we will have to use what we have. Scientists have always been forced to use less than perfect data, however I will readily admit the climate data is worse than most.
With regards to explaining the graphs, unless I am mistaken, I believe Willis Eschenbach, with regards to his post http://wattsupwiththat.com/2011/05/14/life-is-like-a-black-box-of-chocolates/, may be able to go a long way toward explaining the spikes in the lower graphs of Figure 1. As for the sine or cosine curve part, I believe Bob Tisdale could take a good stab at explaining that. In terms of predicting the future climate, I believe the sine curve would have greater predictive value, although having a good estimate of sunspots over the next decades, with help from Leif Svalgaard, should provide better forecasts than the IPCC projections.
As an experiment, I tried my ad hoc cosine series approximation method on the full range of the HadCRUT3v global mean temperature data. I believe this method, using the Microsoft Excel Solver utility, attempts to explain the observed data as a discrete number of minimum-amplitude sinusoidal (actually cosine) waveforms, and as such it is likely to be incomplete, as it does not necessarily find all sinusoids or account for random forcing events (including data collection methodology changes).
I used a binary, log-periodic series of cosine periods from 5 to 1280 years. I first ran the optimization adjusting only the amplitudes (deg C) and the base dates (nearest cosine peak to the 1930.667 decimal year-date), and then I allowed it to optimize the periods as well. Each element of the series is calculated by subtracting the base date from the actual date and multiplying the result by two times pi() divided by the period (in decimal years) to create the argument for the cosine function. A temperature offset constant is also included in the sum of all elements. I used a method that forces a minimum element amplitude solution to prevent unrealistic solutions with large mutually cancelling element amplitudes over the known data interval. The Data to Error ratio is ten times the log of the sum of the squares (SUMSQ()) of the original data divided by the sum of the squares of the approximation error.
The final solution seems to predict a temperature drop of 0.4 degrees C from now to 2040, and seems to indicate temperatures dropped 0.2 degrees C from 1835 to 1845. The predictive validity of this method depends on how much our climate depends on periodic processes. I note that periods close to one and two times the sunspot period do seem to be present. The elements with periods longer than the data interval (161.250 years) probably approximate the linear slope used in the main article.
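For readers without Excel, here is one way Spector's procedure might be reconstructed in Python. Everything here (the penalty weight, the optimizer, the synthetic stand-in series) is my reading of his description, not his actual spreadsheet:

```python
# A sketch of Spector's method: a sum of cosines with binary log-periodic
# periods from 5 to 1280 years, each with its own amplitude and base date,
# plus an offset, fit by penalized least squares.
import numpy as np
from scipy.optimize import minimize

periods = 5.0 * 2.0 ** np.arange(9)   # 5, 10, 20, ..., 1280 years

def model(p, t):
    amps, bases, offset = p[:9], p[9:18], p[18]
    terms = amps * np.cos(2 * np.pi * (t[:, None] - bases) / periods)
    return terms.sum(axis=1) + offset

def cost(p, t, y):
    err = np.sum((y - model(p, t)) ** 2)
    # Minimum-amplitude penalty, discouraging large mutually-cancelling terms.
    return err * (1 + 0.1 * np.sqrt(np.sum(p[:9] ** 2)))

# Synthetic stand-in for the HadCRUT3v monthly series, 1850-2011.
t = np.arange(1850, 2011, 1 / 12)
rng = np.random.default_rng(2)
y = (0.1 * np.cos(2 * np.pi * (t - 1930.667) / 60)
     + 0.005 * (t - 1930) + rng.normal(0, 0.1, t.size))

p0 = np.concatenate([np.full(9, 0.05), np.full(9, 1930.667), [0.0]])
fit = minimize(cost, p0, args=(t, y), method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-6})
print(np.round(fit.x[:9], 3))   # fitted amplitude per period
```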
Spector says:
June 5, 2011 at 12:27 pm
“I used a binary, log-periodic series of cosine periods from 5 to 1280 years.”
That’s pretty arbitrary. If you do a PSD, you can find the periods which best describe the data for the last century. Beyond that… how to choose? Analyze proxy data?
It definitely replicates the series of the last century. And, it captures the LIA as well. But, it falls apart at the MWP. In principle, you could always find a good replication over any given interval using any functional basis, so there is no particular reason to believe this has predictive power.
It does, however, highlight the fact that everything we see and have seen could easily be the effect of many steady-state cyclical processes alternately interfering constructively and destructively.
Meant to say: “…so there is no particular reason to believe this has long term predictive power.” It’s probably not too far off for the immediate future.
RE: Bart: (June 5, 2011 at 3:47 pm)
Spector says:
June 5, 2011 at 12:27 pm
“If you do a PSD, you can find the periods which best describe the data for the last century. Beyond that… how to choose? Analyze proxy data?”
The plot is based solely on the HadCRUT3v data from Jan 1850 to Mar 2011, using the Microsoft Office 2007 Excel Solver utility to adjust the parameters for minimum square error. I forced a minimum amplitude of 0.001 deg C and required the periods to be sequential. To prevent unrealistic solutions, I also multiplied the error sum by one plus 0.1 times the square root of the sum of the squares of the trial amplitude factors. I believe that forcing a minimum energy solution reduces the likelihood that the approximation might be ill-behaved at the end points. (Which it often is if I don’t.)
Given that the data interval was 161 years, I would be surprised if any predictability extended more than 40 years on either end. It seems to be treating our current warm interval as an enhanced repetition of the peaks of 1940 and 1880.
I based this technique on the fact that an FFT will not estimate the frequency of a small fraction of a sine wave contained in a multi-sample record, but if you ask an optimization program to find the best fitting sine curve, it may give you a good answer.
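Spector's point here is easy to demonstrate: with a record spanning only a fraction of one cycle, the FFT's coarse frequency grid cannot resolve the period, while a direct least-squares fit can. A purely synthetic illustration:

```python
# A 40-year record holding one quarter of a 160-year sine. FFT bins for a
# 40-year record fall at periods of 40, 20, 13.3, ... years, so a 160-year
# cycle cannot land on any bin; a direct fit recovers it anyway.
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 40, 400)
y = (np.sin(2 * np.pi * t / 160)
     + 0.05 * np.random.default_rng(3).normal(size=t.size))

fit, _ = curve_fit(lambda t, a, P, ph: a * np.sin(2 * np.pi * (t - ph) / P),
                   t, y, p0=[1.0, 100.0, 0.0])
print(f"recovered period ~ {fit[1]:.0f} years")   # close to 160
```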
Sorry to be silent for so long. You’ve all provided intelligent commentary, and I regret not having time to participate and attempt replies.
But I did have some time today, and posted a reply at Tamino’s critique. We’ll see what happens. Those of you who put credence there are encouraged to take a look, and participate.
Why post there? AFAIK the main issues raised there were first raised here and this is a civilised uncensored forum unlike Tamino’s which doesn’t deserve patronage.
Leif, you wrote, “his hypothesis then is that the curves and trend found for the data in the fitting window are also valid outside…”
My hypothesis, first, was that the oscillation that appeared in the GISS 1999 anomalies, following addition of the SST anomalies to the land-only anomalies, reflected a net world ocean thermal oscillation. The cosine + linear fits proceeded from that hypothesis.
In the event, the cosine in the full fit had about the same period as the oscillation that appeared in the GISS data set after the SSTs were added.
Then, pace Bob Tisdale, the cosine period proved to be about the same as the PDO+AMO period noted by Joe D’Aleo, and about the same as the persistent ocean periods of ~64 years reported by Marcia Wyatt et al.
I took those correspondences — the appearance with SST, correspondence with the ocean thermal periods — to provide physical meaning to the oscillation in the global temperature anomaly data sets, represented by the cosine parts of the fits. This doesn’t seem unreasonable, and lifts the analysis above “numerology.”
Following the assignment of physical meaning, an empirical analysis such as the above must be hypothetically conservative and mustn’t ring in expectations from theory. If the early part of the 20th century showed warming generally accepted as almost entirely natural, then it is empirically unjustifiable to arbitrarily decide that the natural warming after 1950 is different than the natural warming before 1950.
That means the natural warming rate from the early part of the 20th century is most parsimoniously extrapolated into the later 20th century, absent any indicator of significant changes in the underlying climate mode.
The rest of my analysis follows directly from that. The net result, after projecting the natural warming trend in evidence from the early 20th century, is that the later 20th century, through to 2010, warmed 0.03 C/decade faster than the early 20th century.
This excess rate may turn out to be wrong, when a valid theory of climate disentangles all the 20th century drivers and forcings. However, it is presently empirically justifiable.
The trend I extrapolated to 2100 wasn’t a prediction, but merely a projection, à la the IPCC, of what could happen if nothing changes between now and then. That, of course, is hardly to be expected, but at least I put that qualifier transparently in evidence. I.e., “Let’s suppose the last 50 years are an indication of how the climate system will respond to the next 100 years of a continued increase in GHG forcing.”
And so it goes. 🙂