Guest post by Bob Tisdale
INTRODUCTION
Chapter 4-7 of If the IPCC was Selling Manmade Global Warming as a Product, Would the FTC Stop their deceptive Ads? included comparisons of the CRUTEM3 land surface temperature anomalies to the multi-model mean of the CMIP3 climate models. For those who have purchased the book, see page 99. As you will recall, CMIP3 stands for Phase 3 of the Coupled Model Intercomparison Project, which served as the source of the climate models used by the Intergovernmental Panel on Climate Change (IPCC) for its 4th Assessment Report (AR4). CRUTEM3 is the land surface temperature dataset available from the Met Office Hadley Centre and the Climatic Research Unit at the University of East Anglia.
This post compares the new and improved CRUTEM4 land surface temperature anomaly data to the same CMIP3 multi-model mean. The CRUTEM4 data were documented in the 2012 Jones et al. paper Hemispheric and large-scale land surface air temperature variations: An extensive revision and an update to 2010. I’ve used the annual time-series data, specifically the data in the second column here, changing the base years for anomalies to 1901 to 1950 to be consistent with Figure 9.5 of the IPCC’s AR4.
And, as I did with the other 20th Century model-observations comparisons, the two datasets are broken down into the four periods acknowledged by the IPCC in AR4: the early “flat temperature” period from 1901 to 1917, the early warming period from 1917 to 1938, the mid-20th Century “flat temperature” period from 1938 to 1976, and the late warming period. The late warming period in the Chapter 4-7 comparisons in If the IPCC was Selling Manmade Global Warming as a Product, Would the FTC Stop their deceptive Ads? ended in 2000. For the late warming period comparisons in this post, I’ve extended the model and CRUTEM4 data to 2010.
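For readers who want to reproduce these numbers, the calculation is straightforward: shift the annual anomalies so they average zero over 1901 to 1950, then fit a least-squares line over each period and express the slope in deg C per decade. The short Python sketch below only illustrates that idea; the filename, column choice and variable names are placeholders, not the exact files linked under SOURCES.

import numpy as np

def rebaseline(years, anoms, base_start=1901, base_end=1950):
    # Shift anomalies so their mean over the base period is zero.
    mask = (years >= base_start) & (years <= base_end)
    return anoms - anoms[mask].mean()

def trend_per_decade(years, anoms, start, end):
    # Least-squares slope over [start, end], converted to deg C per decade.
    mask = (years >= start) & (years <= end)
    return np.polyfit(years[mask], anoms[mask], 1)[0] * 10.0

# The four AR4 periods used above; the last one is extended here to 2010.
periods = [(1901, 1917), (1917, 1938), (1938, 1976), (1976, 2010)]

# Hypothetical input: a two-column text file of year and annual anomaly.
# years, anoms = np.loadtxt("crutem4_annual.txt", usecols=(0, 1), unpack=True)
# anoms = rebaseline(years, anoms)
# for start, end in periods:
#     print(start, end, round(trend_per_decade(years, anoms, start, end), 3))

The same two functions applied to the CMIP3 multi-model mean series give the model trends compared in the figures below.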
COMPARISONS
As shown in Figure 1, and as one would expect, the models do a good job of simulating the rate at which the CRUTEM4-based global land surface temperatures rose during the late warming period of 1976 to 2010.
Figure 1
But as with the CRUTEM3 data, that’s the only period in which the IPCC’s climate models came close to matching the CRUTEM4-based observed linear trends.
According to the CMIP3 multi-model mean, land surface temperatures should have warmed at a rate of 0.043 deg C per decade from 1938 to 1976, but according to the CRUTEM4 data, global land surface temperature anomalies cooled at a rate of -0.05 deg C per decade, as shown in Figure 2.
Figure 2
Figure 3 compares the models to the global CRUTEM4 data during the early warming period of 1917 to 1938. The observed rate at which global land surface temperatures warmed is almost 5 times faster than simulated by the IPCC’s climate models. 5 times faster.
Figure 3
And in Figure 4, the models are shown to be unable to simulate the very slow rate at which land surface temperatures warmed during the early “flat temperature” period.
Figure 4
According to the models, the linear trend of the global land surface temperatures during the late warming period should be 6.6 times higher than during the early warming period. See Figure 5.
Figure 5
Yet according to the new and improved CRUTEM4 land surface temperature data (Figure 6), land surface temperatures warmed during the late warming period at a rate that was only 40% higher than during the early warming period.
Figure 6
CLOSING
The models show no skill at simulating the rates at which global land surface temperatures warmed and cooled over the period of 1901 to 2010. Why should we have any confidence in their ability to project global land surface temperatures into the future?
ABOUT: Bob Tisdale – Climate Observations
ebook (pdf and Kindle formats): If the IPCC was Selling Manmade Global Warming as a Product, Would the FTC Stop their deceptive Ads?
SOURCES
The CMIP3 multi-model mean data is available through the KNMI Climate Explorer
http://climexp.knmi.nl/selectfield_co2.cgi?id=someone@somewhere
And as noted in the post, the annual CRUTEM4 data is available through the Met Office website:
http://www.metoffice.gov.uk/hadobs/crutem4/data/diagnostics/global/nh+sh/index.html
Specifically:
http://www.metoffice.gov.uk/hadobs/crutem4/data/diagnostics/global/nh+sh/global_n+s
COMMENTS
So what was the cloud cover in all these years?
We are in the Northern Hemisphere and entering spring, and a cloudy day drops the daytime temperature by 5ºC or more…
The land warming trend from 1976 to 2010 is 0.284 C/decade, while according to UAH the trend from December 1978 to the present is 0.17 C/decade. That difference over essentially the same period of time demands a very good explanation.
I’m rather puzzled by all this fuss over CRUTEM4. More data, particularly in previously under-sampled high latitudes (where ‘polar amplification’ should be most visible), and better handling of changing sampling techniques for SST…. I would have thought there would be celebration over better data.
Better data should be an improvement. Regardless of your previous outlook, better data means a more solidly based set of information going forward.
Bill Illis – The models represent ‘best guesses’ of how the physics works. And the range of results represent numerous possible paths for the climate to take if those models are anywhere near accurate.
There is absolutely _no_ physical requirement that the evolution of climate (based upon all the natural variations, influences, etc.) should pin itself strictly to the _mean_ of physically possible outcomes. Modelling indicates that decadal high or low trends are to be expected – the norm, not the exception – with the climate following the mean only over climatically significant periods of ~20-30 years.
You are asking for pin-point, short term replication, in models that make no such claim. That’s a strawman argument (http://en.wikipedia.org/wiki/Straw_man). Not at all reasonable.
—
And, as I noted before, Bob Tisdale has _neglected_ to show the range of modeled outcomes – and has therefore painted the models in an unwarranted, unfair comparison.
KR says:
March 20, 2012 at 8:59 pm
I’m rather puzzled by all this fuss over CRUTEM4. More data, particularly in previously under-sampled high latitudes (where ‘polar amplification’ should be most visible)
Why did all the polar amplification just amplify all the years from 2001 to 2010 but neglect to amplify 1998? That is what I find puzzling. See
http://www.metoffice.gov.uk/news/releases/archive/2012/hadcrut-updates
Well, I’m happy about this: it proves that “REVISIONS” have given us more than 50% of the warming in the 1976-2010 period, and guesstimating that UHI effects and station loss/disappearance/removal (where did they go, I wonder?) also account for some 50% of the warming, that means it has actually gotten COLDER !!!
Which coincides with my experience of the situation.
Certainly been a pretty yucky summer or two down here !!
It is such a pity that the main temperature databases are controlled by the AGW priesthood.
They have become meaningless because of this.
Doug Proctor says:
March 20, 2012 at 11:31 am
The defense is that the differences between observation and models are due to “natural” variations that have higher amplitudes but shorter frequencies than expected from modelling. It is not an unreasonable defense.
So during periods when temperatures are flat or declining, ‘natural variations’ in the Earth’s climate system lose heat as fast as or faster than the heat gained from increases in forcings.
Were forcings completely independent from ‘natural variations’, then this would be a reasonable position, but of course they are not independent in the Earth’s climate, although perhaps they are in the models.
The alternate view is that “natural” variations with higher amplitudes but shorter frequencies mean the Earth’s climate is determined by processes operating on shorter timescales than anthropogenic forcings, and that the Forcings Model is invalid and has no predictive value.
“It’s not possible to know what the future holds, of course, and such modelling – economic or scientific – is a highly imperfect way of making predictions. Even so, some idea is better than no idea”
http://www.theage.com.au/opinion/politics/human-cost-of-inaction-incalculable-20120320-1vhrv.html#ixzz1pijjmvlM
Well he’s right about reaching those tipping points. MINE!!!!
The evil that men do lives after them; the good is oft interred with their bones, and so let it be with HadCRUT3.
HadCRUT3 had the potential to stop the CAGW fraud dead in its tracks, as it clearly showed ZERO warming since 1998 and a definite downward trajectory from 2001 to the present, with plummeting temperatures from the start of solar cycle 24.
I’m sure that when AR5 comes out, the only images that will hit newspapers and TV screens will be this “new and improved” tortured HadCRUT4 graph (ending at 2010, of course) showing politicians and the world that the IPCC “nailed” it and that CAGW is racing towards “seas boiling away,” as Hansen so graphically and erroneously put it.
Behold Hockey Stick 2.0. A tool craftily designed not for understanding, but to beat the world into submission. Like its predecessor, HadCRUT4 will be unceremoniously airbrushed out of AR5 when it’s proven to be unrepresentative of reality.
One major flaw to this chicanery is what to do with the MASSIVE discrepancies between the laughable HadCRUT4 temperature data and the RSS and UAH satellite temperature data records. Are these nefarious “scientists” going to simply “recalibrate” the satellite data, too? To pull this scam off, they’d almost have to “deal” with the satellite evidence as it clearly shows gross manipulation of terrestrial data.
“Modelling indicates that decadal high or low trends are to be expected – the norm, not the exception – with the climate following the mean only over climatically significant periods of ~20-30 years.
You are asking for pin-point, short term replication, in models that make no such claim. That’s a strawman argument (http://en.wikipedia.org/wiki/Straw_man). Not at all reasonable.”
Since Bob’s analysis is over 30-year periods or longer, what point are you trying to make here? I can’t follow your jumbled logic, sorry.
KR says: “I’m rather puzzled by all this fuss over CRUTEM4.”
There’s no fuss, just discussion and persons venting. If this thread had 200 comments in 24 hours, that would be a fuss.
KR says: “More data, particularly in previously under-sampled high latitudes (where ‘polar amplification’ should be most visible), and better handling of changing sampling techniques for SST…. I would have thought there would be celebration over better data.”
This post was about CRUTEM4. There’s no sea surface temperature data included. Also, polar amplification is a natural process.
KR says: “And., as I noted before, Bob Tisdale has _neglected_ to show the range of modeled outcomes – and has therefore painted the models in an unwarranted, unfair comparison.”
I’ve already replied to your “unfair” comment, providing a link that explained why I presented only the model mean. I’m only interested in presenting the trends of the “forced signal,” not the “noise,” to put it in the terms used by Gavin Schmidt in the link I provided earlier. Or, to use the terminology presented by NCAR, I’m interested in showing the trends of the “best representation of a scenario” that can be derived from all of the climate models in the CMIP3 archive.
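To illustrate what averaging across the archive does, here is a minimal, purely synthetic sketch (not the CMIP3 data or the KNMI output, just placeholder runs): averaging many runs suppresses each run’s own year-to-year variability, so the trend of the mean approximates the shared forced trend.

import numpy as np

def multi_model_mean(member_series):
    # member_series: array of shape (n_members, n_years), one row per model run.
    # Averaging across members damps each run's internal variability ("noise"),
    # leaving an estimate of the shared forced response.
    return np.asarray(member_series).mean(axis=0)

# Synthetic illustration: twenty runs sharing one linear "forced" trend,
# each with its own random year-to-year noise.
years = np.arange(1901, 2011)
forced = 0.005 * (years - years[0])
rng = np.random.default_rng(0)
runs = np.array([forced + rng.normal(0.0, 0.1, years.size) for _ in range(20)])
ensemble_mean = multi_model_mean(runs)
print(np.polyfit(years, ensemble_mean, 1)[0] * 10)  # close to the 0.05 C/decade forced trend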
I am not buying this. The surface temp observations and the modeling agree very well. The agreement is not so great around the 1970 dip, where the models plateau slightly. That observation would be consistent with the common recognition that the dip was due to post-WW2 aerosol production and the general opinion that the models are not good with aerosols.
In fact, considering the difficulties inherent in the modeling and the surface temperature estimates, and the fact that they are independent of each other, the agreement is brilliant.
On the other hand, Bob’s gradient-drawing exercise has a bad smell about it. Anyone else notice that all the end points are chosen so they just happen to coincide with a local maximum/minimum? And that the early starting point of each segment is chosen to have a down tick, while the ending point is chosen to show an up tick? This has the effect of exaggerating positive gradients. And negative gradients show the same trick in reverse.
Bob must be saying “whip me Tamino, whip me!”.
Will Nitschke says:
March 20, 2012 at 7:25 pm
An interesting reminder, and the only precedent I am aware of, where citizens (“sceptics”) took the Crown Research Institute, NIWA, to court over their updated adjustments. They eventually abandoned their adjustments and the country was apparently .7C cooler as a result.
————-
Will, it seems that the temperature series shown here still shows warming:
http://www.niwa.co.nz/climate/nz-temperature-record
Juraj V. says:
March 20, 2012 at 11:28 am
Models are laboriously tuned to match the latest warm AMO period. Period.
————–
Prove it. Wishful thinking on your part is not good enough.
johnmcguire says:
March 20, 2012 at 12:07 pm
I am weary of the constant adjustment of existing data in order to make it fit the false models.
———————-
Unless you can prove this happens, I am calling this a concocted fantasy.
The fact of the matter is that anyone can take the raw data, process it, and come up with a temperature series, and none of them deviates much from what is shown in CRUTEM. No reference to any model is made.
So in this thread we now have had two mutually contradictory assertions, both by people who know nothing of the internal processes by which things are done.
One guy asserts the models are adjusted to fit the observations.
The other guy asserts the observations are adjusted to fit the models.
They can’t both be right.
More than likely they are both wrong.
And making stuff up.
Bob Tisdale – “This post was about CRUTEM4. There’s no sea surface temperature data included…”
Headslap – My apologies, I had just been reading the HadCRUT4 thread and the press releases on it…
LazyTeenager says: “I am not buying this.”
That’s because you’re gullible and you believe in the hypothesis of anthropogenic global warming, the existence of which can only be demonstrated through poorly performing climate models.
LazyTeenager says: “The surfaces temp observations and the modeling agree very well…”
Apparently you have trouble reading graphs. The only period when the models appear to perform well, based on the linear trends, is the late warming period.
LazyTeenager says: “On the other hand, Bob’s gradient-drawing exercise has a bad smell about it. Anyone else notice that all the end points are chosen so they just happen to coincide with a local maximum/minimum…”
You’re complaining about the most logical way to select break points between cooling and warming periods. Would you select other years? If so, please identify which years and present the reasons for selecting those years.
So if I’m reading this right, despite all the talk of start points and end points, the model mean was off 5-fold early in the 20th century, off 2-fold in the middle of the century, and right on for the last 30 years or so? Sounds like the model mean is just fine; the forcing inputs have simply become more accurate with better data and time. Keep up the good work, climatologists!
Bob:
“I’m only interested in presenting the trends of the ‘forced signal’, not the ‘noise’.” But the observed temps from which you’re computing trends obviously do include the noise. Apples to oranges, it seems…
LazyTeenager says:
Will it seems that the temperature series shown here still shows warming:
http://www.niwa.co.nz/climate/nz-temperature-record
==========================
Did you read the article or just look at the graph? The “sceptics” disputed adjustments that increased NZ temperatures by around 1C. Because the adjustments could not ultimately be defended, they got thrown out and temperatures declined by .7C. Leaving an actual .3C of warming over the century. Or as sceptics would characterise this, about what one might expect as a result of natural variability.
AndyG55 says:
March 20, 2012 at 10:12 pm
///////////////////////////////////
Yes.
But of more significance is that, for the most part, they do not measure energy content and are therefore useless for considering whether the Earth is gaining energy as a consequence of IR trapping by GHGs.
They should be thrown away and a new data set created from scratch, based upon the highest quality siting, proper spatial coverage, and measurements of energy content. It should be designed and set up in such a way that there is no need for any adjustments to be made to the raw data collected.
They should also track min/max energy content (including the time at which each occurs) as well as average energy content.
We might then begin to understand what is going on. At least we would have a data set which is of some relevance to the issue at hand.
LazyTeenager says:
One guy asserts the models are adjusted to fit the observations.
The other guy asserts the observations are adjusted to fit the models.
They can’t both be right.
==============
Why can’t you do both? In fact, that’s exactly what can happen. My understanding is that the RSS MSU temperature data is based in part on a model. Why wouldn’t one then take that data and try to tune a climate model to fit it? A model should do more than just tune itself to a signal, of course, but some amount of tuning is probably inevitable.
Lazy, I always find your logic baffling, if “logic” is the right word here…
Who cares? Unfortunately, almost ALL of the historical data, regardless of source, is trash, quite simply because there is no way to know how well a single non-random, unreplicated daily observation from nearly ALL recording sites (n=1) reflects the actual population of that site-day’s temperature. None of the assumptions/requirements about the data required for the stats employed are actually met. Neither Type 1 nor Type 2 errors can be estimated from n=1.
Historical land-temps are not even pseudo-science; they are simply BAD science. Your reported precision is simply absurd given the instrument limits of observability and unknown variance.
Frankly, even the more recent data is problematic, as evidenced by readings of over 400 degrees C reported from the Great Lakes, disparate values from each sat, and failure to consistently calibrate sat values with real world observations.
I certainly appreciate your efforts, even as useless as they are. Sorry, to be so rude about all this but science is all about the wet fish slap in the face that is reality.
BioBob says:
March 21, 2012 at 9:30 pm
Who cares? Unfortunately, almost ALL of the historical data, regardless of source, is trash, quite simply because there is no way to know how well a single non-random, unreplicated daily observation from nearly ALL recording sites (n=1) reflects the actual population of that site-day’s temperature. None of the assumptions/requirements about the data required for the stats employed are actually met. Neither Type 1 nor Type 2 errors can be estimated from n=1.
Historical land-temps are not even pseudo-science; they are simply BAD science. Your reported precision is simply absurd given the instrument limits of observability and unknown variance….
________________________________
Agreed.
We know it is trash thanks to Anthony’s surfacestations.org project in the USA and the more recent Australian survey, “Australian temperature records shoddy, inaccurate, unreliable.” In that study they found the readings were often only to the nearest degree.
The data from the USA and from Australia is shoddy, and the data from Africa is at times non-existent. The USSR data before the fall was artificially cooled to increase the coal allotment from the government, the New Zealand data was artificially made to show an increase, and the ocean surface temperature record is completely dependent on random bucket measurements.
Given all that, how can they claim to know the actual temperature increase over the 20th century to the nearest degree, much less add decimal points?
AJ Strata, a NASA aerospace engineer, does a good analysis of the error in the temperature data here: http://strata-sphere.com/blog/index.php/archives/11420
On 2012.03.05 and 2012.03.11 I was able to plot HADCRUT3gl data up to 2012.08, but note that, maybe in an effort to ‘hide the decline,’ on March 11, 2012 HADCRUT3 was truncated from 2012.08 to 2011.92.
Also, October, November and December 2011 were “warmed” in the data.
Can anyone explain this?
Can anyone please post the truncated data points in HADCRUT3gl; 2011.92 to 2012.08?
See my WoodForTrees graph at http://www.oarval.org/ClimateChangeBW.htm