"Earth itself is telling us there’s nothing to worry about in doubled, or even quadrupled, atmospheric CO2"

Readers may recall Pat Frank's excellent essay on uncertainty in the temperature record. He emailed me about this new essay he posted at the Air Vent, suggesting I cover it at WUWT; I regret it got lost in my firehose of daily email. Here it is now.  – Anthony

Future Perfect

By Pat Frank

In my recent “New Science of Climate Change” post here on Jeff’s tAV, the cosine fits to differences among the various GISS surface air temperature anomaly data sets were intriguing. So, I decided to see what, if anything, cosines might tell us about the surface air temperature anomaly trends themselves.  It turned out they have a lot to reveal.

As a qualifier, regular tAV readers know that I’ve published on the amazing neglect of the systematic instrumental error present in the surface air temperature record. It seems certain that surface air temperatures are so contaminated with systematic error – at least (+/-)0.5 C – that the global air temperature anomaly trends have no climatological meaning. I’ve done further work on this issue and, although the analysis is incomplete, so far it looks like the systematic instrumental error may be worse than we thought. But that’s for another time.

Systematic error is funny business. In surface air temperatures it’s not necessarily a constant offset but a variable error. That means it not only biases the mean of a data set, but is also likely to have an asymmetric distribution in the data. Systematic error of that sort in a temperature series may enhance a time-wise trend or diminish it, or switch back and forth in some unpredictable way between these two effects. Since the systematic error arises from the effects of weather on the temperature sensors, it will vary continuously with the weather. The mean error bias will be different for every data set, and so will the distribution envelope of the systematic error.

For right now, though, I’d like to put all that aside and proceed with an analysis that accepts the air temperature context as found within the IPCC ballpark. That is, for the purposes of this analysis I’m assuming that the global average surface air temperature anomaly trends are real and meaningful.

I have the GISS and the CRU annual surface air temperature anomaly data sets out to 2010. In order to make the analyses comparable, I used the GISS start time of 1880. Figure 1 shows what happened when I fit these data with a combined cosine function plus a linear trend. Both data sets were well-fit.

The unfit residuals are shown below the main plots. A linear fit to the residuals tracked exactly along the zero line, to 1 part in ~10^5. This shows that both sets of anomaly data are very well represented by a cosine-like oscillation plus a rising linear trend. The linear parts of the fitted trends were: GISS, 0.057 C/decade and CRU, 0.058 C/decade.
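The kind of fit described here can be sketched in a few lines of Python. This is illustrative only: the synthetic series and starting guesses below are stand-ins chosen to resemble the fitted results, not the actual GISS/CRU data or the author's fitting code.

```python
# Illustrative sketch only: fit a cosine-plus-linear model to a
# synthetic anomaly series resembling the fits described above.
# The data and starting guesses are stand-ins, not the actual series.
import numpy as np
from scipy.optimize import curve_fit

def cos_plus_linear(t, a, b, amp, period, phase):
    """Linear trend plus one cosine oscillation."""
    return a + b * t + amp * np.cos(2 * np.pi * (t - phase) / period)

rng = np.random.default_rng(0)
t = np.arange(0, 131)                       # years since 1880
anom = 0.0058 * t + 0.1 * np.cos(2 * np.pi * (t - 60) / 60)
anom += rng.normal(0, 0.05, t.size)         # year-to-year scatter

popt, _ = curve_fit(cos_plus_linear, t, anom,
                    p0=[0.0, 0.005, 0.1, 60.0, 60.0])
residuals = anom - cos_plus_linear(t, *popt)
resid_slope = np.polyfit(t, residuals, 1)[0]  # ~0 when the form fits
```

A straight-line fit to the residuals of such a model has essentially zero slope when the cosine-plus-line form captures the series, which is the check applied in Figure 1.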

Figure 1. Upper: Trends for the annual surface air temperature anomalies, showing the OLS fits with a combined cosine function plus a linear trend. Lower: The (data minus fit) residual. The colored lines along the zero axis are linear fits to the respective residual. These show the unfit residuals have no net trend. Part a, GISS data; part b, CRU data.

Removing the oscillations from the global anomaly trends should leave only the linear parts of the trends. What does that look like?  Figure 2 shows this: the linear trends remaining in the GISS and CRU anomaly data sets after the cosine is subtracted away. The pure subtracted cosines are displayed below each plot.

Each of the plots showing the linearized trends also includes two straight lines. One of them is the line from the cosine plus linear fits of Figure 1. The other straight line is a linear least squares fit to the linearized trends. The linear fits had slopes of: GISS, 0.058 C/decade and CRU, 0.058 C/decade, which may as well be identical to the line slopes from the fits in Figure 1.
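The linearization step itself is simple; here is a sketch with a synthetic series and an assumed, already-known cosine (illustrative values, not the fitted ones):

```python
import numpy as np

# Illustrative: subtract the (here, known) ~60-year cosine from a
# synthetic anomaly series and refit a straight line, as in Figure 2.
t = np.arange(0, 131).astype(float)        # years since 1880
cosine = 0.1 * np.cos(2 * np.pi * (t - 60) / 60)
anom = 0.0058 * t + cosine                 # stand-in series

linearized = anom - cosine
slope = np.polyfit(t, linearized, 1)[0]
print(round(slope * 10, 3))                # → 0.058 C/decade
```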

Figure 1 and Figure 2 show that to a high degree of certainty, and apart from year-to-year temperature variability, the entire trend in global air temperatures since 1880 can be explained by a linear trend plus an oscillation.

Figure 3 shows that the GISS cosine and the CRU cosine are very similar – probably identical given the quality of the data. They show a period of about 60 years, and an intensity of about (+/-)0.1 C. These oscillations are clearly responsible for the visually arresting slope changes in the anomaly trends after 1915 and after 1975.

Figure 2. Upper: The linear part of the annual surface average air temperature anomaly trends, obtained by subtracting the fitted cosines from the entire trends. The two straight lines in each plot are: OLS fits to the linear trends and, the linear parts of the fits shown in Figure 1. The two lines overlay. Lower: The subtracted cosine functions.

The surface air temperature data sets consist of land surface temperatures plus the SSTs. It seems reasonable that the oscillation represented by the cosine stems from a net heating-cooling cycle of the world ocean.

Figure 3: Comparison of the GISS and CRU fitted cosines.

The major oceanic cycles include the PDO, the AMO, and the Indian Ocean oscillation. Joe D’Aleo has a nice summary of these here (pdf download).

The combined PDO+AMO is a rough oscillation and has a period of about 55 years, with a 20th century maximum near 1937 and a minimum near 1972 (D’Aleo Figure 11). The combined ocean cycle appears to be close to another maximum near 2002 (although the PDO has turned south). The period and phase of the PDO+AMO correspond very well with the fitted GISS and CRU cosines, and so it appears we’ve found a net world ocean thermal signature in the air temperature anomaly data sets.

In the “New Science” post we saw a weak oscillation appear in the GISS surface anomaly difference data after 1999, when the SSTs were added in. Prior and up to 1999, the GISS surface anomaly data included only the land surface temperatures.

So, I checked the GISS 1999 land surface anomaly data set to see whether it, too, could be represented by a cosine-like oscillation plus a linear trend. And so it could. The oscillation had a period of 63 years and an intensity of (+/-)0.1 C. The linear trend was 0.047 C/decade; pretty much the same oscillation, but a slower warming trend by 0.01 C/decade. So, it appears that the net world ocean thermal oscillation is teleconnected into the global land surface air temperatures.

But that’s not the analysis that interested me. Figure 2 appears to show that the entire 130 years between 1880 and 2010 has had a steady warming trend of about 0.058 C/decade. This seems to explain the almost rock-steady 20th century rise in sea level, doesn’t it?

The argument has always been that the climate of the first 40-50 years of the 20th century was unaffected by human-produced GHGs. After 1960 or so, certainly after 1975, the GHG effect kicked in, and the thermal trend of the global air temperatures began to show a human influence. So the story goes.

Isn’t that claim refuted if the late 20th century warmed at the same rate as the early 20th century? That seems to be the message of Figure 2.

But the analysis can be carried further. The early and late air temperature anomaly trends can be assessed separately, and then compared. That’s what was done for Figure 4, again using the GISS and CRU data sets. In each data set, I fit the anomalies separately over 1880-1940, and over 1960-2010.  In the “New Science of Climate Change” post, I showed that these linear fits can be badly biased by the choice of starting points. The anomaly profile at 1960 is similar to the profile at 1880, and so these two starting points seem to impart no obvious bias. Visually, the slope of the anomaly temperatures after 1960 seems pretty steady, especially in the GISS data set.

Figure 4 shows the results of these separate fits, yielding the linear warming trend for the early and late parts of the last 130 years.
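A minimal sketch of this split-period comparison, again with synthetic data whose late-period trend is deliberately steeper (the slopes are stand-ins, not the fitted values):

```python
import numpy as np

# Illustrative: separate OLS line fits over 1880-1940 and 1960-2010.
years = np.arange(1880, 2011)
anom = np.where(years < 1950,
                0.0050 * (years - 1880),                 # early rate
                0.0050 * 70 + 0.0080 * (years - 1950))   # faster late rate

early = (years >= 1880) & (years <= 1940)
late = (years >= 1960) & (years <= 2010)
early_slope = 10 * np.polyfit(years[early], anom[early], 1)[0]  # C/decade
late_slope = 10 * np.polyfit(years[late], anom[late], 1)[0]
print(round(early_slope, 3), round(late_slope, 3),
      round(late_slope - early_slope, 3))
```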

Figure 4: The Figure 2 linearized trends from the GISS and CRU surface air temperature anomalies showing separate OLS linear fits to the 1880-1940 and 1960-2010 sections.

The fit results of the early and later temperature anomaly trends are in Table 1.

 

Table 1: Decadal Warming Rates for the Early and Late Periods.

Data Set   C/d (1880-1940)   C/d (1960-2010)   (late minus early)
GISS            0.056             0.087              0.031
CRU             0.044             0.073              0.029

“C/d” is the slope of the fitted lines in Celsius per decade.

So there we have it. Both data sets show the later period warmed more quickly than the earlier period. Although the GISS and CRU rates differ by about 12%, the changes in rate (data column 3) are essentially identical.

If we accept the IPCC/AGW paradigm and grant the climatological purity of the early 20th century, then the natural recovery rate from the LIA averages about 0.05 C/decade. To proceed, we have to assume that the natural rate of 0.05 C/decade was fated to remain unchanged for the entire 130 years, through to 2010.

Assuming that, then the increased slope of 0.03 C/decade after 1960 is due to the malign influences from the unnatural and impure human-produced GHGs.

Granting all that, we now have a handle on the most climatologically elusive quantity of all: the climate sensitivity to GHGs.

I still have all the atmospheric forcings for CO2, methane, and nitrous oxide that I calculated for my Skeptic paper (http://www.skeptic.com/reading_room/a-climate-of-belief/). Together, these constitute the great bulk of new GHG forcing since 1880. Total chlorofluorocarbons add another 10% or so, but that’s not a large impact, so they were ignored.

All we need do now is plot the progressive trend in recent GHG forcing against the balefully apparent human-caused 0.03 C/decade trend, all between the years 1960-2010, and the slope gives us the climate sensitivity in C/(W-m^-2).  That plot is in Figure 5.

Figure 5. Blue line: the 1960-2010 excess warming, 0.03 C/decade, plotted against the net GHG forcing trend due to increasing CO2, CH4, and N2O. Red line: the OLS linear fit to the forcing-temperature curve (r^2=0.991). Inset: the same lines extended through to the year 2100.

There’s a surprise: the trend line shows a curved dependence. More on that later. The red line in Figure 5 is a linear fit to the blue line. It yielded a slope of 0.090 C/W-m^-2.
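The slope-taking step can be sketched as follows. The forcing series here is a stand-in built from the standard 5.35·ln(C/C0) CO2 approximation with assumed concentrations, not the post's actual multi-gas forcing numbers, so the slope it produces (~0.135) differs from the 0.090 obtained from the real series:

```python
import numpy as np

# Illustrative: regress "excess" warming against a stand-in GHG forcing
# series to get a sensitivity in C per W/m^2. CO2 values are assumed.
years = np.arange(1960, 2011)
co2 = 317.0 * np.exp(0.00415 * (years - 1960))   # ~317 -> ~390 ppmv
forcing = 5.35 * np.log(co2 / 317.0)             # W/m^2 relative to 1960
excess_T = 0.003 * (years - 1960)                # 0.03 C/decade, in C

sensitivity = np.polyfit(forcing, excess_T, 1)[0]
print(round(sensitivity, 3))                     # → 0.135 with these stand-ins
```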

So there it is: every Watt per meter squared of additional GHG forcing, during the last 50 years, has increased the global average surface air temperature by 0.09 C.

Spread the word: the Earth climate sensitivity is 0.090 C/W-m^-2.

The IPCC says that the increased forcing due to doubled CO2, the bug-bear of climate alarm, is about 3.8 W/m^2. The consequent increase in global average air temperature is mid-ranged at 3 Celsius. So, the IPCC officially says that Earth’s climate sensitivity is 0.79 C/W-m^-2. That’s 8.8x larger than what Earth says it is.

Our empirical sensitivity says doubled CO2 alone will cause an average air temperature rise of 0.34 C above any natural increase.  This value is 4.4x to 13x smaller than the range projected by the IPCC.

The total increased forcing due to doubled CO2, plus projected increases in atmospheric methane and nitrous oxide, is 5 W/m^2. The linear model says this will lead to a projected average air temperature rise of 0.45 C. This is about the rise in temperature we’ve experienced since 1980. Is that scary, or what?
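The sensitivity arithmetic in the last few paragraphs checks out directly (values as quoted in the text):

```python
# Checking the back-of-envelope sensitivity arithmetic quoted above.
ipcc_2xco2_forcing = 3.8                     # W/m^2, as quoted
ipcc_mid_warming = 3.0                       # C, IPCC mid-range for 2xCO2
ipcc_sensitivity = ipcc_mid_warming / ipcc_2xco2_forcing
print(round(ipcc_sensitivity, 2))            # → 0.79

empirical_sensitivity = 0.090                # C/(W/m^2), Figure 5 slope
print(round(ipcc_sensitivity / empirical_sensitivity, 1))    # → 8.8
print(round(empirical_sensitivity * ipcc_2xco2_forcing, 2))  # → 0.34
print(round(empirical_sensitivity * 5.0, 2))                 # → 0.45 at 5 W/m^2
```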

But back to the negative curvature of the sensitivity plot. The change in air temperature is supposed to be linear with forcing. But here we see that for 50 years average air temperature has been negatively curved with forcing. Something is happening. In proper AGW climatology fashion, I could suppose that the data are wrong because models are always right.

But in my own scientific practice (and the practice of everyone else I know), data are the measure of theory and not vice versa. Kevin, Michael, and Gavin may criticize me for that because climatology is different and unique and Ravetzian, but I’ll go with the primary standard of science anyway.

So, what does negative curvature mean? If it’s real, that is. It means that the sensitivity of climate to GHG forcing has been decreasing all the while the GHG forcing itself has been increasing.

If I didn’t know better, I’d say the data are telling us that something in the climate system is adjusting to the GHG forcing. It’s imposing a progressively negative feedback.

It couldn’t be the negative feedback of Roy Spencer’s clouds, could it?

The climate, in other words, is showing stability in the face of a perturbation. As the perturbation is increasing, the negative compensation by the climate is increasing as well.

Let’s suppose the last 50 years are an indication of how the climate system will respond to the next 100 years of a continued increase in GHG forcing.

The inset of Figure 5 shows how the climate might respond to a steadily increased GHG forcing right up to the year 2100. That’s up through a quadrupling of atmospheric CO2.

The red line indicates the projected increase in temperature if the 0.03 C/decade linear fit model was true. Alternatively, the blue line shows how global average air temperature might respond, if the empirical negative feedback response is true.

If the climate continues to respond as it has already done, by 2100 the increase in temperature will be fully 50% less than it would be if the linear response model was true. And the linear response model produces a much smaller temperature increase than the IPCC climate model, umm, model.

Semi-empirical linear model: 0.84 C warmer by 2100.

Fully empirical negative feedback model: 0.42 C warmer by 2100.

And that’s with 10 W/m^2 of additional GHG forcing and an atmospheric CO2 level of 1274 ppmv. By way of comparison, the IPCC A2 model assumed a year 2100 atmosphere with 1250 ppmv of CO2 and a global average air temperature increase of 3.6 C.

So let’s add that: Official IPCC A2 model: 3.6 C warmer by 2100.

The semi-empirical linear model, grounded in 50 years of actual data, says the year-2100 temperature will have increased by only 0.23 of the IPCC’s A2 model prediction of 3.6 C.

And if we go with the empirical negative feedback inference provided by Earth, the year 2100 temperature increase will be 0.12 of the IPCC projection.
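The two fractions just quoted follow directly from the model numbers:

```python
# Projected 2100 warming as a fraction of the IPCC A2 figure of 3.6 C.
print(round(0.84 / 3.6, 2))  # semi-empirical linear model → 0.23
print(round(0.42 / 3.6, 2))  # empirical negative-feedback model → 0.12
```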

So, there’s a nice lesson for the IPCC and the AGW modelers, about GCM projections: they are contradicted by the data of Earth itself. Interestingly enough, Earth contradicted the same crew, big time, at the hands of Demetris Koutsoyiannis, too.

So, is all of this physically real? Let’s put it this way: it’s all empirically grounded in real temperature numbers. That, at least, makes this analysis far more physically real than any paleo-temperature reconstruction that attaches a temperature label to tree ring metrics or to principal components.

Clearly, though, since unknown amounts of systematic error are attached to global temperatures, we don’t know if any of this is physically real.

But we can say this to anyone who assigns physical reality to the global average surface air temperature record, or who insists that the anomaly record is climatologically meaningful: The surface air temperatures themselves say that Earth’s climate has a very low sensitivity to GHG forcing.

The major assumption used for this analysis, that the climate of the early part of the 20th century was free of human influence, is common throughout the AGW literature. The second assumption, that the natural underlying warming trend continued through the second half of the last 130 years, is also reasonable given the typical views expressed about a constant natural variability. The rest of the analysis automatically follows.

In the context of the IPCC’s very own ballpark, Earth itself is telling us there’s nothing to worry about in doubled, or even quadrupled, atmospheric CO2.

337 Comments
meab
June 2, 2011 8:21 am

The difference in slope between the periods 1880-1940 (60 years) and 1960-2010 (50 years) doesn’t reflect AGW warming; it’s simply an artifact of the start and end times w.r.t. the 60-year sinusoidal period. 1880-1940 started and ended at the same phase of the oscillating component, both dates just before the peak – therefore the early period linear fit is valid. However, for the late period, 1960 was approaching the sinusoidal trough while 2010 was just after the peak – the late period linear fit is therefore completely invalid. Conclusions regarding any difference in the linear slope between early and late time periods are therefore completely faulty.
It would be much better to adjust the 2nd period to start at 1940 and end at 2000, a valid 60-year period starting and ending at the same phase angle. Of course, the cosine/linear fit that leaves no obvious low frequency trend in the residuals already indicates that there won’t be any significant difference.
A do-over is needed here.

June 2, 2011 8:26 am

Phil Clarke says:
June 2, 2011 at 8:07 am
and Camp & Tung 2007 found a variance of about 0.2K in global temps attributable to the solar cycle
which is the amplitude of Frank’s 60-yr cycle, but does not show up in the residuals he plots. There should be an expected 0.07 degree solar cycle effect.
And the rise in TSI 1900-1950 and subsequent flatline is shown in various places
There is no evidence for a rise in the background TSI. The flatlining starts when we began actually measuring TSI.

Dave X
June 2, 2011 8:27 am

From eyeballing the residuals in fig 1, it looks like the next lower fourier harmonic period of ~120 years would be of similar significance. It looks as least as significant as the discontinuous linear fit in fig 4 and would probably reduce the trend difference even further.

stephen richards
June 2, 2011 8:28 am

Fred Haynie has presented a very similar view of the climate cycles on his site. Worth a look. Search on his name.

tallbloke
June 2, 2011 8:54 am

I’ve just written an essay on how global warming happened too.
http://tallbloke.wordpress.com/2011/06/02/what-caused-global-warming-in-the-late-c20th/

Curt
June 2, 2011 8:56 am

Phil Clarke:
You say, “you cannot calculate sensitivity by simply dividing instantaneous delta-t by forcing as this ignores the considerable thermal inertia of the Earth system, especially the Oceans.”
In linear systems analysis, the response of a system with “inertia” to a ramp input does reach its own matching ramping slope after about one time constant of the system. Since Pat starts this part of his analysis in 1960, and the “ramp” of CO2 forcing started at least 100 years earlier, he is not ignoring ocean thermal inertia (more properly “capacitance”). And his 50-year period is hardly “instantaneous”.
Now, under this analysis, if we were able to suddenly freeze the forcing at the current level, warming would continue for a while (how long is hotly debated). But the “slope analysis” Pat does is fundamentally valid (it’s done in engineering systems analysis all the time).

bob
June 2, 2011 9:09 am

It all boils down to “we are still recovering from the little ice age and will continue to recover from the little ice age through to 2100”
I don’t buy it.
I am pretty sure that the little ice age was at least in part caused by volcanic activity, which has since subsided quite a bit, as well as several solar minima (Maunder and others).
It is the forcings that matter, and the recovery from the little ice age doesn’t count as a forcing.

juanslayton
June 2, 2011 9:11 am

Probably a dumb question from a geezer whose last trig class was in Pomona High School in 1957: Is there a reason the periodicity is expressed as a cosine, rather than a sine?

June 2, 2011 9:14 am

Leif and Phil Clarke (and others) criticize, probably correctly, for reasons physicists of various sorts understand. The question for all of us, though, is whether the criticisms lead to an incorrect conclusion; the other question for the warmists is whether, were the alleged mistakes corrected, there would be a significant difference in how the conclusion looked. Skeptics need the answer; the warmists need the appearance as well (shades of the Hockey Stick, in reverse).
Frank’s discussion is very straightforward and easy to follow regardless of errors of commission or omission. Could Leif’s and Phil’s comments be added in to the analysis? Could Leif/Phil not revise Frank’s post and show us how the revision affects the answer?
The negative comments seem like shooting ourselves (the skeptics) in the foot. Frank’s post appears to show the same results as other, more lettered posters have done. He has certainly done what many of us technically educated but not climatologically vetted citizen-scientists (love that term) are doing: looking at the data and applying basic principles and reasoning to find how close the IPCC supermen come to our technical common sense. Not close, of course.
In a previous WUWT post (I think it was) about how the IPCC various computations and adjustments can be considered a “black box”, it was said you can simulate the relationships inside bbs by looking at how the data-in compares to the data-out. The relationship, as demonstrated, is remarkably simple. As such, one doesn’t need, perhaps, all the detail analyses that Leif and Phil suggest. Most cancel out or contribute only within the error band.
I’ve done my own such analyses as Frank’s, and find the IPCC black box to not reflect simple considerations. If Leif and Phil are correct that we have to discount Frank’s work because he hasn’t got all the pieces right, and that that terminally weakens his conclusions, then all of us outside the authorities – the IPCC, Gavin and Jim – might as well roll over. But I think Frank’s approach has great value. He is not saying that the IPCC is 10% out. He is saying they are 80% or more out. So let Frank be 50% wrong, and Frank is still demonstrating that the IPCC is 40% out. That is still a deathblow as far as the non-natural theories go. No CAGW attributable to CO2. And are Leif-and-Phil’s criticisms even at the 50% error level?
The two nicely demonstrated patterns here are that a 60-year cosine function lies atop a linear function. The IPCC model does not have a cosine function. They don’t have a linear function either. And while there may be a CO2 function prior to 1965, the models are relevant to the post-1965 period, and especially the post-1980 period. Both sides can forget the prior period when looking to the near future, and act as if CO2 increases began the day Al Gore found his mission in life. So such criticism (even though true) is irrelevant.
The IPCC theme is that the past is not the predictor of the future, at least prior to 1965. The future is the product of the present. The skeptic theme is that the past is, indeed, the predictor of the future, though with some minor modification by the present. The pro- and con-CAGW arguments are rooted in this disconnect. “That was then, and this is now” is the fundamental break the warmists have from the skeptics. If the past is, indeed, a good predictor of the future, then Frank’s (mine and others’) simpler view is valid. The previous broad patterns continue into the future through the magic of our own “black box”. No, we don’t have the mechanisms, but to refute the IPCC we don’t need one, as our proof is in the observations, not in the beauty of the “projections”. The details that Leif and Phil are looking for are handled and hidden inside the box.
This is a great post in large part because it demonstrates how adherence to basic principles while using the IPCC data lead to a significantly different climate energy scenario in hard numbers. The scenario is therefore falsifiable, something that, at least in the 10-year term, the IPCC “projections” are not.
By Frank’s work, by 2022 (my estimate) the global temperature will have dropped by about 0.3C, over which time the IPCC says the temperature should have risen by another 0.22 – 0.30C. The two will then be apart by 0.5 to 0.6C, something terminal for the IPCC model. By 2015 the difference will be still within the error bands, but will be looking like 0.1C – in the wrong direction. We’re getting into the time-frame when Gavin and Jim look to retirement and accolades while they can.
There needs to be an empirical consideration to criticisms such as we have here. We need to know if errors are terminal, moderate or cosmetic. This is the stuff that the generalist can understand, maybe even the MSM journalist. So let’s build on it.

G. Karst
June 2, 2011 9:17 am

This seems to be an ideal analysis, for the good folks at Lucia’s blackboard, to sink their teeth into.
I certainly hope it gets published more widely than a few blogs… Is anyone considering a paper, so that it can be safely referred/cited to? GK

Ryan
June 2, 2011 9:42 am

Even before you do the cosine analysis, the gradient from 1910 to 1943 is just as steep as the gradient from 1970 to 2000. On that basis there is really no evidence that mankind is doing something unusual to the earth’s temperatures – you would have to demonstrate that the cause of post-war warming was different from the cause of pre-war warming. Otherwise the graphs are really showing just a 130-yr upward trend unaffected by modern technology.

James Sexton
June 2, 2011 9:50 am

bob says:
June 2, 2011 at 9:09 am
“It all boils down to “we are still recovering from the little ice age and will continue to recover from the little ice age through to 2100″
I don’t buy it………..
It is the forcings that matter, and the recovery from the little ice age doesn’t count as a forcing.”
=========================================
Bob, all that is fine, except we will never, (I repeat for emphasis) never, come to an understanding of all of the forcings and the specific weights of each forcing that go into our climatology. It’s a pipe dream and a fool’s errand to go chasing such. It would be much easier to state we don’t know and move on.

Tom_R
June 2, 2011 9:51 am

>> Phil Clarke says:
June 2, 2011 at 4:19 am
it is likely that anthropogenic forcing contributed to the early 20th-century warming evident in these records.”
Also, you cannot calculate sensitivity by simply dividing instantaneous delta-t by forcing as this ignores the considerable thermal inertia of the Earth system, especially the Oceans. … <<
These two statements combine to say that early 20th century warming was caused by anthropogenic forcing from the 19th century. Do you really buy that?

SteveSadlov
June 2, 2011 10:00 am

To those who say, only the Western part of North America is experiencing unusually cold conditions, I present:
http://www.ansa.it/web/notizie/regioni/valledaosta/2011/06/01/visualizza_new.html_842381390.html

Ammonite
June 2, 2011 10:12 am

Phil Clarke says: June 2, 2011 at 4:19 am
Post Quote: “The major assumption used for this analysis, that the climate of the early part of the 20th century was free of human influence, is common throughout the AGW literature.”
Phil Clarke response: “Er, no it is not. See figure 2.23 in IPCC AR4. Long lived Greenhouse Gas (LLGHG) forcing contributes about 0.35 W/m2 pre-1950…”
Thank you Phil. Advice for general readers. Please check any post that describes a “central AGW tenet” or “major assumption” or “fundamental prediction” etc against the relevant IPCC chapter. If the claim is not well founded it is often a sign of strawman cometh. In this post the claim above is well off the mark.

June 2, 2011 10:14 am

We are making the mistake of looking at real world data. As we keep getting told the real world data is wrong unless it matches the models, as it is only the models that are right.
Please remember that a snowball earth for the next 10 thousand years is not inconsistent with CAGW, that would just be weather not climate but a clear sunny day in summer is climate not weather /sarc

Dave X
June 2, 2011 10:36 am

@”Probably a dumb question from a geezer whose last trig class was in Pomona High School in 1957: Is there a reason the periodicity is expressed as a cosine, rather than a a sign?”
Just tradition, based on FFT and maybe tidal analysis. A cosine curve is the same as a sine curve with a 90 degree phase shift, or A*cosine(w*t+p)=A*sine(w*t+p+pi/2).
Maybe it also makes it easier to interpret the fitted phase term p in that it is the offset of the peak rather than of the zero crossing.
In either case, fitting a single trig curve fits three parameters (or fudge factors): amplitude, frequency, and phase.
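The identity in that comment is easy to confirm numerically, with arbitrary illustrative values for the three parameters:

```python
import numpy as np

# Verify A*cos(w*t + p) == A*sin(w*t + p + pi/2) pointwise.
t = np.linspace(0.0, 120.0, 241)
A, w, p = 0.1, 2 * np.pi / 60.0, 1.1
assert np.allclose(A * np.cos(w * t + p), A * np.sin(w * t + p + np.pi / 2))
print("identity holds")
```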

Paul Vaughan
June 2, 2011 10:43 am

@Leif Svalgaard (June 2, 2011 at 8:16 am)
You owe Marcia Wyatt an apology.

SteveSadlov
June 2, 2011 10:48 am

Yesterday I noted the extremely cold conditions aloft, as an upper low came ashore in Western North America. I commented that with -27 deg C at 500mB, I was worried about twisters:
http://www.sfgate.com/cgi-bin/article.cgi?f=/n/a/2011/06/01/state/n190225D61.DTL
I wonder what will happen when that package of air (now over the four corners region) hits the Eastern US?

Tim Folkerts
June 2, 2011 11:29 am

By my own quick analysis, the best fit to the HADCRUT 3 data from 1880-2010 is
T = -11.7464 + 0.00597·t + 0.1331·cos(0.1062·t + 1.1330), where t is the year.
That gives 0.0597 C/decade and a period of 59.3 years. For the years, 1880 to 2010, there is an average deviation of 0.09 C between the fit and the data (ie the average of |Fit – Actual| is 0.09), with about half above and half below.
The big problem is that before 1880, the fit is LOUSY! Every single point from 1850 – 1883 is above the fit, by an average amount of 0.31 C. (And remember, that is using 5 free parameters to fit the data — slope, intercept, amplitude, period and phase angle.)
And even at the end, I get 9/10 points in the last decade as being above the fit. 2011 will be interesting to see. It started with a huge drop in global temperatures from the 2010 levels, but is jumping back up a bit. If the drop was a fluke, then the spike down might have been an anomaly, but if the spike is a trend, then we might be on a downward trend as Pat Frank’s analysis would predict.

Greg, Spokane WA
June 2, 2011 11:33 am

Just in case anyone is still interested:
Demetris Koutsoyiannis

pochas
June 2, 2011 11:34 am

To support this analysis I reiterate my earlier comment to WUWT, wherein I calculated a sensitivity of 0.121 ℃/(W/m²). This calculation ignored negative feedbacks from clouds, etc., whereas the present analysis would include these factors, so I would consider the two to be in agreement.
http://wattsupwiththat.com/2010/10/25/sensitivity-training-determining-the-correct-climate-sensitivity/#comment-516753

June 2, 2011 11:37 am

Paul Vaughan says:
June 2, 2011 at 10:43 am
You owe Marcia Wyatt an apology.
For what?

John Finn
June 2, 2011 11:46 am

Spread the word: the Earth climate sensitivity is 0.090 C/W-m^-2.
So how do you explain the LGM or the LIA and the MWP for that matter?

June 2, 2011 11:52 am

Tim Folkerts says:
June 2, 2011 at 11:29 am
The big problem is that before 1880, the fit is LOUSY!
This is the usual problem with numerology: once you go outside of the interval on which the original fit is based, the correlation breaks down.