Spencer: Spurious warming demonstrated in CRU surface data

Spurious Warming in the Jones U.S. Temperatures Since 1973

by Roy W. Spencer, Ph.D.

INTRODUCTION

As I discussed in my last post, I’m exploring the International Surface Hourly (ISH) weather data archived by NOAA to see how a simple reanalysis of original weather station temperature data compares to the Jones CRUTem3 land-based temperature dataset.

While the Jones temperature analysis relies upon the GHCN network of ‘climate-approved’ stations, whose number has been rapidly dwindling in recent years, I’m using original data from stations whose number has actually been growing over time. I use only stations operating over the entire period of record, so there are no spurious temperature trends caused by stations coming and going over time. Also, while the Jones dataset is based upon daily maximum and minimum temperatures, I am computing an average of the four temperature measurements at the standard synoptic reporting times of 06, 12, 18, and 00 UTC.
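The station selection and averaging just described can be sketched as follows. This is a minimal illustration, not the actual analysis code: the record layout and station IDs are hypothetical, and real ISH files of course require their own parsing.

```python
# Sketch: monthly station means from the four synoptic observations
# (00, 06, 12, 18 UTC), keeping only stations present for the whole record.
from collections import defaultdict

def monthly_means(obs, start_year=1973, end_year=2009):
    """obs: iterable of (station_id, year, month, hour_utc, temp_c)."""
    bucket = defaultdict(list)       # (station, year, month) -> temps
    years_seen = defaultdict(set)    # station -> years with synoptic obs
    for sid, yr, mo, hr, t in obs:
        if hr in (0, 6, 12, 18):     # synoptic reporting hours only
            bucket[(sid, yr, mo)].append(t)
            years_seen[sid].add(yr)
    # Keep only stations reporting in every year of the record,
    # so stations coming and going cannot introduce spurious trends.
    full = {s for s, yrs in years_seen.items()
            if yrs >= set(range(start_year, end_year + 1))}
    return {k: sum(v) / len(v) for k, v in bucket.items() if k[0] in full}
```

The point of the `full` filter is exactly the constraint described above: a station that appears or disappears mid-record is dropped entirely rather than partially averaged in.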

U.S. TEMPERATURE TRENDS, 1973-2009

I compute average monthly temperatures in 5 deg. lat/lon grid squares, as Jones does, and then compare the two different versions over a selected geographic area. Here I will show results for the 5 deg. grids covering the United States for the period 1973 through 2009.

The following plot shows that the monthly U.S. temperature anomalies from the two datasets are very similar (anomalies in both datasets are relative to the 30-year base period from 1973 through 2002). But while the monthly variations are very similar, the warming trend in the Jones dataset is about 20% greater than the warming trend in my ISH data analysis.

[Figure: CRUTem3 and ISH U.S. monthly temperature anomalies, 1973-2009]
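The anomaly and trend calculations behind a comparison like this can be sketched simply; the values in this illustration are made up, not the actual CRUTem3 or ISH numbers.

```python
# Sketch: anomalies relative to a base period, and a least-squares trend.

def anomalies(series, base):
    """series: {year: temp}; base: (first_year, last_year) of base period."""
    base_vals = [t for y, t in series.items() if base[0] <= y <= base[1]]
    mean = sum(base_vals) / len(base_vals)
    return {y: t - mean for y, t in series.items()}

def trend_per_decade(series):
    """Ordinary least-squares slope of temperature on year, in deg/decade."""
    ys = sorted(series)
    n = len(ys)
    xbar = sum(ys) / n
    tbar = sum(series[y] for y in ys) / n
    num = sum((y - xbar) * (series[y] - tbar) for y in ys)
    den = sum((y - xbar) ** 2 for y in ys)
    return 10.0 * num / den
```

Comparing `trend_per_decade` on two such anomaly series is the sense in which one trend can be said to be "about 20% greater" than the other.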

This is a little curious since I have made no adjustments for increasing urban heat island (UHI) effects over time, which likely are causing a spurious warming effect, and yet the Jones dataset which IS (I believe) adjusted for UHI effects actually has somewhat greater warming than the ISH data.

A plot of the difference between the two datasets is shown next, which reveals some abrupt transitions. Most noteworthy is what appears to be a rather rapid spurious warming in the Jones dataset between 1988 and 1996, with an abrupt “reset” downward in 1997 and then another spurious warming trend after that.

[Figure: CRUTem3 minus ISH difference, U.S., 1973-2009]
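One crude way to locate an abrupt "reset" like the 1997 one described above is to difference the two anomaly series and find the largest year-to-year step. A sketch, assuming annual anomaly series stored as plain dictionaries:

```python
# Sketch: difference two anomaly series and flag the largest abrupt step.

def diff_series(a, b):
    """Pointwise difference over the years the two series share."""
    return {k: a[k] - b[k] for k in a if k in b}

def largest_step(d):
    """Return (magnitude, year) of the biggest jump between adjacent years."""
    ks = sorted(d)
    steps = [(abs(d[k2] - d[k1]), k2) for k1, k2 in zip(ks, ks[1:])]
    return max(steps)
```

A proper changepoint analysis would do more than this, but even the largest single step is enough to date a discontinuity of the kind visible in the difference plot.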

While it might be a little premature to blame these spurious transitions on the Jones dataset, I use only those stations operating over the entire period of record, which Jones does not, so it is difficult to see how these effects could have arisen in my analysis. Also, the number of 5 deg. grid squares used in this comparison remained the same throughout the 37-year period of record (23 grids).

The decadal temperature trends by calendar month are shown in the next plot. We see in the top panel that the greatest warming since 1973 has been in the months of January and February in both datasets. But the bottom panel suggests that the stronger warming in the Jones dataset seems to be a warm season, not winter, phenomenon.

[Figure: CRUTem3 vs. ISH U.S. decadal temperature trends by calendar month, 1973-2009]

THE NEED FOR NEW TEMPERATURE REANALYSES

I suspect it would be difficult to track down the precise reasons why the differences in the above datasets exist. The data used in the Jones analysis has undergone many changes over time, and the more complex and subjective the analysis methodology, the more difficult it is to ferret out the reasons for specific behaviors.

I am increasingly convinced that a much simpler, objective analysis of original weather station temperature data is necessary to better understand how spurious influences might have impacted global temperature trends computed by groups such as CRU and NASA/GISS. It seems to me that a simple and easily repeatable methodology should be the starting point. Then, if one can demonstrate that the simple temperature analysis has spurious temperature trends, an objective and easily repeatable adjustment methodology should be the first choice for an improved version of the analysis.

In my opinion, simplicity, objectivity, and repeatability should be of paramount importance. Once one starts making subjective adjustments to individual stations’ data, replicating the work becomes almost impossible.

Therefore, more important than the recently reported “do-over” of a global temperature reanalysis proposed by the UK’s Met Office would be other, independent researchers doing their own global temperature analysis. In my experience, better methods of data analysis come from the ideas of individuals, not from the majority rule of a committee.

Of particular interest to me at this point is a simple and objective method for quantifying and removing the spurious warming arising from the urban heat island (UHI) effect. The recent paper by McKitrick and Michaels suggests that a substantial UHI influence continues to infect the GISS and CRU temperature datasets.

In fact, the results for the U.S. I have presented above almost seem to suggest that the Jones CRUTem3 dataset has a UHI adjustment that is in the wrong direction. Coincidentally, this is also the conclusion of a recent post on Anthony Watts’ blog, discussing a new paper published by SPPI.

It is increasingly apparent that we do not even know how much the world has warmed in recent decades, let alone the reason(s) why. It seems to me we are back to square one.

February 28, 2010 9:57 am

David Schnare (07:34:16) :
people should take great care in “cleaning up” the data and “adjusting” for UHI.
Anyway, it would be great if folks from individual states took the lead on researching their state histories.
At the bottom of all this is not climate science. At the bottom is history and record keeping. The “science” part of it is just stats: cookbook, garden-variety stats.

Ivan
February 28, 2010 9:58 am

Osborn
“The satellite data is showing record high values for January and now February this year; it looks almost as if something is incrementally adding the values.
How does the good Dr. rationalise that with the current NH weather and his own US results for 2010 in the graph above.”
This is a quite separate issue. You are comparing apples and oranges and obscuring, maybe unintentionally, the real problem. What you are saying basically boils down to the following: we in the USA have a cold winter, Spencer’s data show a warm winter for the ENTIRE world, so they must be wrong. No, they need not be.
What I am doing is something entirely different. I am comparing apples with apples: the rural station trend in the USA for 1979-2009 against the UAH satellite trend for USA tropospheric temperature over the same period. These two trends should be roughly equal or similar, but they are clearly not. Actually, the UAH trend is 3 times higher, and that requires an explanation. “Too high” January temperatures for the entire world have nothing to do with that. Even if this January was really as warm as Spencer and RSS have reported, that would not remove the problem of the inconsistency between surface and satellite data over the USA in the least.

February 28, 2010 10:04 am

Lucy Skywalker (17:44:20) :
long records in pristine spots.
That’s a great idea. Start a list, or get some of the guys who program and can pull records from GHCN and ISH to make one.
Station name, ID numbers (various), lat, lon, alt.
Start with that.

DirkH
February 28, 2010 10:14 am

mike roddy (09:14:21) :
[..]Glaciers are melting, antarctic ice is calving,[…]. Humans are the cause.”
Mike, glaciers always melt, and icebergs always calve. They don’t need humans for that. Check your logic. Even the BBC, no stranger to warnings of global meltdown, has admitted that the break-off of that glacier tongue in the Antarctic is not related to global warming. So who looks ridiculous now?

Kum Dollison
February 28, 2010 10:24 am

I’m beginning to think it’s all a con job on both sides of the issue. Dr. Spencer attempts to measure something in some place called the troposphere. God only knows what Hansen et al. are measuring.
The good folks of Mississippi have been taking the temperature on the ground there for 100+ years, and it’s been, basically, a flat line. I understand the story is the same in most other states. I wouldn’t be surprised if the same applied to most any place in Europe, Australia, China, Africa, or Russia that I picked out by throwing a dart at a map.
I feel like all I’m witnessing is massive rent-seeking from both sides of the aisle. The only ones I see actually doing ANYTHING that resembles what I would call the scientific method are ANTHONY WATTS AND HIS VOLUNTEERS. At least they are attempting to “calibrate” their instruments before getting all wound up in trying to advance some hare-brained theory.
If I were Dr. Spencer, I would be very concerned that my proxy isn’t matching up with temps “on the ground.” I don’t see how I could proceed any further until I’d managed to reconcile these differences.
All heat, no light. A pox on all houses.

son of mulder
February 28, 2010 10:25 am

3×2 (09:50:07) :
David Schnare (07:34:16) :
Why does one need more than the following?
1) All the historical raw temperature data for sites, split into 3 groupings, i.e. rural, urban, and mixed (i.e. sites that have transitioned over time from one type to the other)?
2) A piece of software (Excel) to calculate the averaged raw temperature across sites by time within each of the 3 groupings.
3) An acceptance of the law of large numbers:
http://en.wikipedia.org/wiki/Law_of_large_numbers
Then calculate the temperature trend for each set. How will that not settle whether we have had significant warming without urban heat influences?
Who is denying access to such raw data?
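The procedure this comment proposes (average the raw temperatures within each siting class, then fit a trend per class) can be sketched as follows, assuming the rural/urban/mixed classifications are already given. The function and data layout here are hypothetical.

```python
# Sketch: per-class temperature trends from raw station records.
from collections import defaultdict

def class_trends(records, classes):
    """records: (station, year, temp_c) triples;
    classes: {station: 'rural' | 'urban' | 'mixed'}.
    Returns {class: OLS trend in deg/year of the class-mean series}."""
    by_class = defaultdict(lambda: defaultdict(list))
    for sid, yr, t in records:
        by_class[classes[sid]][yr].append(t)
    out = {}
    for cls, yrs in by_class.items():
        series = {y: sum(v) / len(v) for y, v in yrs.items()}  # class mean
        ys = sorted(series)
        xbar = sum(ys) / len(ys)
        tbar = sum(series[y] for y in ys) / len(ys)
        out[cls] = (sum((y - xbar) * (series[y] - tbar) for y in ys)
                    / sum((y - xbar) ** 2 for y in ys))
    return out
```

As the replies below point out, the hard part is not this arithmetic but deciding which class each station actually belongs to.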

Peter Miller
February 28, 2010 10:27 am

Has everyone forgotten we are at the high point of a strong El Nino cycle, when global temperatures are around 0.7-1.0 degrees C higher than normal?
Not surprisingly, global temperatures for January and February this year are higher than normal.

David Alan Evans
February 28, 2010 10:48 am

In my opinion, simplicity, objectivity, and repeatability should be of paramount importance.

Isn’t that what used to be called science?
DaveE.

Rereke Whakaaro
February 28, 2010 11:04 am

Forgive me if this has been mentioned on this thread before – there are so many interesting comments, but I haven’t had time to read them all – day job pressures, you understand.
But has anybody considered identifying weather stations at airports and treating them as a separate data set?
These sites are where they are primarily for safety reasons. It is important for aircraft to have accurate information about the weather conditions *on the runway* for take-off and landing. That is their intent, and if they are calibrated, that is what they are calibrated for.
Their purpose is therefore different to sites that are intended to help farmers decide when to plant and when to harvest. Their purpose is also different to urban sites that are primarily intended to manage energy load requirements, and to help people decide what to wear today.
So, perhaps there should be three data sets: Urban, Rural, and Avionic.
Analysis can then be done, comparing like with like. It would also be possible to apply “standard” adjustments at the data set level in order to combine two or more data sets in a predictable way.
As I see it, the current practice of adjusting each site as a stand-alone entity is fraught with problems of consistency. It is also open to interpretation by the person doing the individual adjustment to each station.

Ivan
February 28, 2010 11:09 am

Kevin Kilty:
“If the UAH set mentioned is the satellite data, then it measures temperature at all sorts of different heights in the atmosphere depending on channel. Just try to measure the temperature of a glass of ice water accurately and you’ll get an impression of the problem. It ought to be 0C but is in fact different at various places inside the glass.”
======================
So, your argument is that there is nothing strange if the lower-tropospheric trend over the USA, as calculated by UAH and reported as the “USA 48” trend on their website, is 3 times higher than the surface trend in the USA as measured by the rural stations, because those two sets measure “different things”? Is there any known theory that predicts, or even allows, that over a vast portion of a very large continent the surface temperature trend over a period of 30 years should be 3 times lower than the trend at an altitude of 4.5 km above the same continent?

pkasse
February 28, 2010 11:11 am

sunsettommy (09:54:52) :
Your Surfacestations.org link does not work.
“Sorry, the page you were looking for could not be found”
Does that mean it is now gone?
REPLY: No, just being retooled to handle a traffic surge.
“…traffic surge”
Is this a hint of a forthcoming announcement?

rbateman
February 28, 2010 11:17 am

Peter Miller (10:27:49) :
Not surprisingly, global temperatures for January and February this year are higher than normal.

Where?

Manfred
February 28, 2010 11:18 am

Ivan (09:58:26) : UAH versus rural
rural and UAH are not measuring the same thing.
http://climateaudit.files.wordpress.com/2008/06/hadat43.gif
Looking at the above picture, with the lower-48 USA situated roughly between 30-48 deg latitude, the UAH-measured 600 mbar troposphere should warm approximately 2.7 / 1.6 times faster than the ground.
This is a factor of about 1.7.
0.22 deg / 1.7 is 0.13 deg.
So this fact, generally ignored by warmists, removes almost 2/3 of the difference.
The remaining 0.05 deg may well be explicable by inaccuracies in the above picture or in temperature measurements in general.
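The arithmetic in this comment checks out as stated; note that the 2.7 and 1.6 amplification values are the commenter's own readings off the linked figure, not verified here.

```python
# Checking the comment's arithmetic: tropospheric amplification and the
# implied surface trend. Inputs are the commenter's figures, not verified.
uah_trend = 0.22                 # deg/decade, UAH lower troposphere
amplification = 2.7 / 1.6        # expected troposphere-to-surface warming ratio
expected_surface = uah_trend / amplification

assert round(amplification, 1) == 1.7
assert round(expected_surface, 2) == 0.13
```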

Star
February 28, 2010 11:27 am

Lately I’ve been looking at the weather reports because I barely watch the news. I want to know if the world is coming to an end because of heat and upcoming weather events. If so, can your technology detect when the heat will destroy the world?

ClimateWatcher
February 28, 2010 12:03 pm

Independently a researcher at the University of Washington is attempting a similar analysis of the US temperature record. He is currently having roadblocks thrown in his way by a senior faculty member who in a manner eerily reminiscent of climategate is trying to deny access to an important data resource needed to properly carry out this research.

aMINO aCIDS iN mETEORITES
February 28, 2010 12:06 pm

DeNihilist (22:31:43) :
Here is the best example yet of torturing the data to get the result wanted!
……………………………………………………………………………………………………………….
There’s a problem with his math—-in 10 minutes from now USA, not Canada, must win the gold today!

Christopher Hanley
February 28, 2010 12:09 pm

The surface temperature record 1979-2010 shows a warming trend of about 0.17°C/decade while the satellite trend is about 0.13°C /decade.
http://www.woodfortrees.org/plot/gistemp/from:1979/trend/offset:-0.1/plot/uah/trend/offset:0.1
Can that discrepancy be extrapolated back in time?
This February may be the warmest in the 30 year satellite record, but where, for instance, is it in relation to the late 1930s?

February 28, 2010 12:18 pm

Guys, I have been adding the CRU data to my site, http://www.knowyourplanet.com/climate-data, for people to browse around. I read a lot of the posts and comments above, and I would say that many of you would appreciate this.
There are quite a few maps. I am still loading Russia, Europe, the Pacific, and Asia, but North and South America are more or less complete.
The Google app for the graphs is pretty cool; it can display as a line graph or a rolling animation, and you can zoom and change colours.

Alexej Buergin
February 28, 2010 12:27 pm

” mike roddy (09:14:21) :
Even according to Spencer, it’s still warming, so what’s the point? Glaciers are melting, antarctic ice is calving, and birds and plants are migrating north. Humans are the cause.”
Mike, it is much worse. Antarctic ice is not only calving, it is disappearing. We have lost about 15 million square kilometers of sea ice these last few months. That is bad, really BAD.

vigilantfish
February 28, 2010 12:39 pm

son of mulder (10:25:58) :
3×2 (09:50:07) :
David Schnare (07:34:16) :
Why does one need more than the following?
1) All the historical raw temperature data for sites, split into 3 groupings, i.e. rural, urban, and mixed (i.e. sites that have transitioned over time from one type to the other)?
2) A piece of software (Excel) to calculate the averaged raw temperature across sites by time within each of the 3 groupings.
3) An acceptance of the law of large numbers:
http://en.wikipedia.org/wiki/Law_of_large_numbers
Then calculate the temperature trend for each set. How will that not settle whether we have had significant warming without urban heat influences?
—————————
How long have you been at WUWT? Why would you want to further complicate things?
1) As various recent threads have shown, it is not always easy to discern whether sites can be classified as urban or rural or when a transition has occurred. How are the classifications made: population density vs. urban structures vs. light intensity as recorded by satellites? See for starters:
http://wattsupwiththat.com/2010/02/26/contribution-of-ushcn-and-giss-bias-in-long-term-temperature-records-for-a-well-sited-rural-weather-station/
http://wattsupwiththat.com/2010/02/26/a-new-paper-comparing-ncdc-rural-and-urban-us-surface-temperature-data/
http://wattsupwiththat.com/2010/02/21/fudged-fevers-in-the-frozen-north/
2) It has become increasingly apparent that there is no simple algorithm for figuring out how to average the raw temperatures of these sites in such a way as to take into account the variations in the increase in UHI over time. Microclimates and physical changes over time, even to rural surface station sites, introduce variations that should not be ignored. When you add this to the problem of identifying how to classify the locations of different stations, it makes a simple averaging approach pretty much useless in identifying clearly what is going on with global temperatures.
3) Trust the law of large numbers: sure, using only carefully documented raw data from rural stations. Why complicate things with urban or transitional sites?
If you want to study the effect of urbanization, that should be a separate study from a study of natural historical temperature trends. Logical, really: to study natural trends, you need natural settings. To use temperatures contaminated by urbanization (the warming trends of which have been strangely disputed by Phil Jones et al.) is like trying to understand the natural behaviour of forest raccoons by studying them in an urban setting.

February 28, 2010 12:53 pm


jorgekafkazar (00:06:29) :
re: Claude Harvey (19:56:33)
Record snowfall means record amounts of latent heat removed from water vapor to produce ice. The ice falls to the ground; the heat remains in the atmosphere. Somewhere else, ocean heat went into vaporizing seawater. The vapor went up; the ocean cooled. Everything would balance, but high atmospheric temperatures result in increased heat loss to space. Net result: lower actual global heat content.

Bingo!
We had a veritable conveyor belt set up a few weeks ago over Texas (with our record snow event), and we have had overcast for weeks (that spells NO insolation, cloud-top albedo being what it is), with nearly constant precip of one form or another (a wringing-out of latent heat energy from terrestrially sourced water vapor, through evaporation)…
And similar weather events have been occurring across the states to our east and north as well… how would one inventory/audit the change in heat content given these events (compared to, say, these events not occurring: no snow or precip events, insolation occurring with clear skies, etc.)?

February 28, 2010 12:55 pm

Star (11:27:03) :
Lately I’ve been looking at the weather reports because I barely watch the news. I want to know if the world is coming to an end because of heat and upcoming weather events. If so, can your technology detect when the heat will destroy the world?

Yes, I think it will happen 4 to 5 billion years from now.

Wren
February 28, 2010 1:04 pm

Manfred (11:18:32) :
Ivan (09:58:26) : UAH versus rural
rural and UAH are not measuring the same thing.
http://climateaudit.files.wordpress.com/2008/06/hadat43.gif
Looking at above picture with the L48 USA situated roughly between 30-48 deg latitude, the UAH measured 600 mbar troposphere should warm by approx: 2.7 / 1.6 faster than the ground.
This is a factor of 1.7.
0.22 deg / 1.7 is 0.13 deg.
———————–
I believe Ivan’s question was about the difference between UAH and rural ground records for the U.S. over the 1979-2009 period. Do you mean UAH “should” warm 1.6 times faster than rural stations in the U.S. over this 30-year period?
I know UAH global records show about the same 1979-2009 warming trend as GISS, despite the latter including ground records of both rural stations and the warming-biased urban stations. If UAH should be warming faster globally, why are the trends so much alike?

Peter Miller
February 28, 2010 1:09 pm

rbateman (11:17:27) :
Peter Miller (10:27:49) :
Not surprisingly, global temperatures for January and February this year are higher than normal.
Where?
Everywhere in Australia for one – they are close to the Pacific El Nino.
Also, look at UAH daily temperatures at: discover.itsc.uah.edu/amsutemps/
What I don’t understand about the UAH figures is: Why are the high altitude temperatures decreasing, while the low altitude ones are increasing during the El Nino phenomenon?

Channon
February 28, 2010 1:17 pm

Spurious transitions or step changes, because they occur over such short sections of the data set, can generate several plausible alternative models.
This makes using them as a predictive platform very difficult.
Since the whole data set is quite small and the variance large, the possibilities for error caused by a spurious observation are large too.
Not much of a foundation to build on.