Spencer: Spurious warming demonstrated in CRU surface data

Spurious Warming in the Jones U.S. Temperatures Since 1973

by Roy W. Spencer, Ph.D.

INTRODUCTION

As I discussed in my last post, I’m exploring the International Surface Hourly (ISH) weather data archived by NOAA to see how a simple reanalysis of original weather station temperature data compares to the Jones CRUTem3 land-based temperature dataset.

While the Jones temperature analysis relies upon the GHCN network of ‘climate-approved’ stations, whose number has been rapidly dwindling in recent years, I’m using original data from stations whose number has actually been growing over time. I use only stations operating over the entire period of record, so there are no spurious temperature trends caused by stations coming and going. Also, while the Jones dataset is based upon daily maximum and minimum temperatures, I am computing an average of the four temperature measurements at the standard synoptic reporting times of 00, 06, 12, and 18 UTC.
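
To make the station-selection and averaging rules concrete, here is a minimal sketch in Python (pandas). The file name and column names are illustrative assumptions, not the actual ISH archive layout:

```python
import pandas as pd

# Hypothetical input: one row per ISH report, already parsed from the raw
# archive. Assumed columns: station_id, time (UTC), temp_c.
obs = pd.read_csv("ish_reports.csv", parse_dates=["time"])

# Keep only the four standard synoptic hours (00, 06, 12, 18 UTC).
synoptic = obs[obs["time"].dt.hour.isin([0, 6, 12, 18])]

# Monthly mean of the 4x-daily readings, per station.
monthly = (synoptic.set_index("time")
                   .groupby("station_id")["temp_c"]
                   .resample("MS").mean())

# Keep only stations with readings in every month of 1973-2009, so that
# station turnover cannot introduce spurious trends.
n_months = (2009 - 1973 + 1) * 12
counts = monthly.groupby(level="station_id").count()
complete_stations = counts[counts == n_months].index
monthly = monthly.loc[complete_stations]
```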

U.S. TEMPERATURE TRENDS, 1973-2009

I compute average monthly temperatures in 5 deg. lat/lon grid squares, as Jones does, and then compare the two different versions over a selected geographic area. Here I will show results for the 5 deg. grids covering the United States for the period 1973 through 2009.
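
A sketch of the gridding step, continuing from the station-level monthly means above; the station-location file and its columns are my assumptions:

```python
import numpy as np
import pandas as pd

# Assumed metadata table with columns: station_id, lat, lon.
meta = pd.read_csv("ish_station_locations.csv")

df = monthly.reset_index().merge(meta, on="station_id")

# Label each station with the SW corner of its 5 deg. grid box.
df["grid_lat"] = np.floor(df["lat"] / 5.0) * 5.0
df["grid_lon"] = np.floor(df["lon"] / 5.0) * 5.0

# Grid-box monthly mean = average over the stations inside the box.
gridded = df.groupby(["grid_lat", "grid_lon", "time"])["temp_c"].mean()
```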

The following plot shows that the monthly U.S. temperature anomalies from the two datasets are very similar (anomalies in both datasets are relative to the 30-year base period 1973 through 2002). But while the month-to-month variations track closely, the warming trend in the Jones dataset is about 20% greater than the warming trend in my ISH data analysis.

[Figure: CRUTem3 vs. ISH monthly U.S. temperature anomalies, 1973-2009]
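
For readers who want to reproduce the anomaly and trend arithmetic, here is a small self-contained sketch; the function names and array layout are mine, not from the post:

```python
import numpy as np

def monthly_anomalies(temps, years, months, base=(1973, 2002)):
    """Anomalies relative to each calendar month's mean over the base period."""
    temps, years, months = (np.asarray(a) for a in (temps, years, months))
    anoms = np.empty(temps.shape, dtype=float)
    for m in range(1, 13):
        in_month = months == m
        in_base = in_month & (years >= base[0]) & (years <= base[1])
        anoms[in_month] = temps[in_month] - temps[in_base].mean()
    return anoms

def trend_per_decade(anoms, years, months):
    """Least-squares linear trend, in degrees per decade."""
    t = years + (months - 0.5) / 12.0        # decimal-year time axis
    return 10.0 * np.polyfit(t, anoms, 1)[0]
```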

This is a little curious, since I have made no adjustments for increasing urban heat island (UHI) effects over time, which likely cause a spurious warming effect, and yet the Jones dataset, which IS (I believe) adjusted for UHI effects, actually shows somewhat greater warming than the ISH data.

A plot of the difference between the two datasets is shown next, which reveals some abrupt transitions. Most noteworthy is what appears to be a rather rapid spurious warming in the Jones dataset between 1988 and 1996, with an abrupt “reset” downward in 1997 and then another spurious warming trend after that.

[Figure: CRUTem3 minus ISH difference in U.S. temperature anomalies, 1973-2009]

While it might be a little premature to blame these spurious transitions on the Jones dataset, I use only stations operating over the entire period of record, which Jones does not, so it is difficult to see how these effects could have arisen in my analysis. Also, the number of 5 deg. grid squares used in this comparison remained the same (23 grids) throughout the 37-year period of record.

The decadal temperature trends by calendar month are shown in the next plot. We see in the top panel that the greatest warming since 1973 has been in the months of January and February in both datasets. But the bottom panel suggests that the stronger warming in the Jones dataset is a warm-season, not winter, phenomenon.

[Figure: CRUTem3 vs. ISH U.S. decadal temperature trends by calendar month, 1973-2009]
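
The by-calendar-month breakdown is straightforward with the helpers above: fit a separate line through all the Januaries, all the Februaries, and so on. A sketch, reusing anoms, years, and months from the previous block:

```python
month_trends = {}
for m in range(1, 13):
    sel = months == m
    slope = np.polyfit(years[sel], anoms[sel], 1)[0]   # deg per year
    month_trends[m] = 10.0 * slope                     # deg per decade
```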

THE NEED FOR NEW TEMPERATURE REANALYSES

I suspect it would be difficult to track down the precise reasons why the differences in the above datasets exist. The data used in the Jones analysis has undergone many changes over time, and the more complex and subjective the analysis methodology, the more difficult it is to ferret out the reasons for specific behaviors.

I am increasingly convinced that a much simpler, objective analysis of original weather station temperature data is necessary to better understand how spurious influences might have impacted global temperature trends computed by groups such as CRU and NASA/GISS. It seems to me that a simple and easily repeatable methodology should be the starting point. Then, if one can demonstrate that the simple temperature analysis has spurious temperature trends, an objective and easily repeatable adjustment methodology should be the first choice for an improved version of the analysis.

In my opinion, simplicity, objectivity, and repeatability should be of paramount importance. Once one starts making subjective adjustments to individual stations’ data, the work becomes almost impossible to replicate.

Therefore, more important than the recently reported “do-over” of a global temperature reanalysis proposed by the UK’s Met Office would be for other, independent researchers to perform their own global temperature analyses. In my experience, better methods of data analysis come from the ideas of individuals, not from the majority rule of a committee.

Of particular interest to me at this point is a simple and objective method for quantifying and removing the spurious warming arising from the urban heat island (UHI) effect. The recent paper by McKitrick and Michaels suggests that a substantial UHI influence continues to infect the GISS and CRU temperature datasets.

In fact, the results for the U.S. I have presented above almost seem to suggest that the Jones CRUTem3 dataset has a UHI adjustment that is in the wrong direction. Coincidentally, this is also the conclusion of a recent post on Anthony Watts’ blog, discussing a new paper published by SPPI.

It is increasingly apparent that we do not even know how much the world has warmed in recent decades, let alone the reason(s) why. It seems to me we are back to square one.

richard
February 28, 2010 5:41 am

Starting from scratch sounds like a particularly good idea. When I hear that the Royal Society is granting up to £100M for research into ‘geo-engineering’ prototypes it makes me feel extremely uneasy.

February 28, 2010 5:42 am

Cal;
The thought is that as the temperature drops the urban energy consumption would increase, as people heat their homes>>
I have no doubt that is part of it. Another part is the buildings themselves, even if they are not heated. In winter the Sun is low to the horizon, so its rays strike the ground at a very oblique angle. But a building presents a vertical surface at almost a right angle to the Sun (and hence much higher absorption), so I would expect UHI to be more pronounced in winter than in summer. This would also, I suspect, result in a larger UHI on the south side of the downtown core than on the north side.
Some winter cities try and arrange their streets so that the houses are built with their largest vertical surface area facing south with the intent of reducing

mark
February 28, 2010 5:43 am

Carsten Arnholm, Norway (04:36:34)
Here’s a link to an article about it:
http://cat.inist.fr/?aModele=afficheN&cpsidt=16372328
Otherwise, when I was doing the climatology for NAS Keflavik as the Head Observer / Data Collector (in house), I would take the whole day’s worth of obs and take the max and min, then take the average of all the obs, find the average min, average max, etc. But the synoptic obs were used by Asheville for whatever magic they used them for… which I’m guessing was entry into some database. I’ll look and see if they still have any of that data at the Navy / Air Force Joint Climatology detachment.

February 28, 2010 5:44 am

ooops, cut MYSELF off there
Some winter cities try and arrange their streets so that the houses are built with their largest vertical surface area facing south with the intent of reducing energy consumption in winter. Who knew they were causing UHI and global warming at the same time? 🙂

Ed Scott
February 28, 2010 5:56 am

Bali-Hoo: U.N. Still Pushing for Global Environmental Control
http://www.foxnews.com/story/0,2933,587426,00.html

john
February 28, 2010 6:04 am

It would be much more helpful to try to consider what reasons they may have to deliberately falsify the data and then present it as correct.
Obviously the political content of climate change/global warming data is going to colour results from those funded by government.

February 28, 2010 6:35 am

Thanks for all the comments. I agree that my use of “spurious” sounds a little disparaging; “anomalous” would be better.
I’m using 4x per day measurements because the hourly dataset does not contain max/min data. As someone mentioned, Tmax/Tmin isn’t necessarily the best…especially since Tmin is so sensitive to cloud and wind conditions around sunrise.
Remember, the primary reason for hourly weather measurements has been aviation support, not climate monitoring. I used the synoptic reporting time of 00, 06, 12, and 18 UTC because those are the times that have the most stations reporting routinely.
I agree that the *absolute* temperatures should be analyzed… this is why my next task (if time allows) is to quantify the UHI effect based upon 1 km population density data from the U.S. decadal census. From what I can tell, a proper UHI analysis of all of the temperature data has yet to be done. If we can get estimates of spurious warming as a function of population density, then adjustments to the temperature record over time can be made.
Of course, if there happens to be enough good rural thermometer data, then you could just throw out all of the temperature data that have UHI effects. Unfortunately, there will always be ‘experts’ who claim there are no UHI effects, and then use the UHI-affected data to demonstrate global warming.
So, in either case it becomes necessary to quantify the UHI effect. Given the huge amount of US data, this should be possible to do fairly convincingly. But then, I have a bad habit of being overly optimistic before I analyze a new dataset.
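
One way to operationalize the population-density idea described above is to regress each station’s trend on the logarithm of local population density, a common first-order model in the UHI literature. The sketch below uses clearly labeled synthetic placeholder inputs; a real analysis would use per-station trends from the ISH data and gridded census densities:

```python
import numpy as np

rng = np.random.default_rng(0)

# SYNTHETIC placeholder inputs, one entry per station (illustration only).
pop_density = rng.lognormal(mean=3.0, sigma=2.0, size=300)    # persons/km^2
trends = (0.15 + 0.03 * np.log10(pop_density + 1.0)
          + rng.normal(0.0, 0.05, size=300))                  # deg/decade

# Regress trend on log population density.
log_pop = np.log10(pop_density + 1.0)   # +1 avoids log(0) for empty cells
slope, intercept = np.polyfit(log_pop, trends, 1)

# 'slope' estimates the extra warming trend per tenfold increase in
# density; removing it is one crude, objective UHI adjustment.
uhi_adjusted = trends - slope * log_pop
```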

Ivan
February 28, 2010 6:37 am

Alan:
“Roy, as you are starting out looking at thermometers, you naturally start with a number of the best thermometers in unchanged rural locations with long records. They, after all, are your best data sets and would be relatively free of UHI effects. You would then immediately see that some show slight warming and others a slight cooling with little warming overall.”:
However, that’s not what Dr. Spencer has done. He did not start with rural stations. He used all stations and tried to construct a “better” index than Jones out of them. Actually, if he had used only the rural stations for the USA, he would have discovered that they show a trend three times lower than his own satellite data (as Dr. Long demonstrated)! What he did instead was repackage the old Jones analysis so as to validate the UAH data. The slight warming bias he discovered in Jones’s data is exactly the amount by which Jones and UAH diverge over the USA 48. The real question I asked, and nobody answered, is: HOW IS IT POSSIBLE THAT THE RURAL NETWORK IN THE USA SHOWS A THREE TIMES LOWER RATE OF WARMING THAN UAH???

BarryW
February 28, 2010 6:41 am

George E. Smith (23:29:15) :
I agree that replicating Jones is not the intent. What I thought might be interesting was whether there is a change in the pattern of diurnal warming/cooling occurring over time. That might be a clue as to what else is going on physically. For example, how would changes in cloud cover change the temperature cycle, as opposed to UHI issues?

sleeper
February 28, 2010 6:45 am

Re: Steve Case (Feb 28 04:37),
We have thousands of climate scientists around the world who actually believe it is possible to measure the GAT to within a few tenths of a degree C per decade. Ever notice that you rarely see error bars in any of these graphs?

Pamela Gray
February 28, 2010 7:03 am

re: distribution of temp sensors. All temp sensors are situated in microclimates, of which there are many. Climate zones, altitude, and proximity to large bodies of water have significant effects on microclimate response to weather-pattern forcings. Sensor placement must therefore be carefully considered to result in a randomized sample. Station drop-out, or any other reduction in the number of sensors used for analysis, must be done carefully to guard against a non-random, biased sample, which can lead to erroneous conclusions about trends and causes.
re: verification vs. replication. Replication is always the first step. But verification (I like the term substantiation) must follow. If an effect is robust, it should show up in more than just one kind of analysis: averaging the entire daily set, averaging max and min, averaging only max, averaging only min, analyzing record daily max lows and highs as well as record daily min lows and highs, dividing the data set into climate zones and repeating the analysis within each set, etc. Any good scientist will look for any areas where the hypothesis does not hold up.
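
Re-running the analysis under several aggregation rules, as suggested here, is easy to mechanize. A sketch, assuming `hourly` is a pandas Series of temperatures indexed by timestamp:

```python
import pandas as pd

def monthly_versions(hourly: pd.Series) -> pd.DataFrame:
    """Monthly means computed four different ways from the same raw obs."""
    daily = hourly.resample("D")
    return pd.DataFrame({
        "all_obs":  hourly.resample("MS").mean(),
        "max_min":  ((daily.max() + daily.min()) / 2).resample("MS").mean(),
        "max_only": daily.max().resample("MS").mean(),
        "min_only": daily.min().resample("MS").mean(),
    })
```

A robust warming signal should survive all four columns; divergence between them points at something other than climate.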

SaskMike
February 28, 2010 7:12 am

Ongoing discussion here:
http://www.theglobeandmail.com/news/world/global-warming-panel-to-get-independent-review/article1484168/
Feel free to comment, as it would appear that this article misses more than a few important points.

harrywr2
February 28, 2010 7:21 am

Dr Roy,
“In fact, the results for the U.S. I have presented above almost seem to suggest that the Jones CRUTem3 dataset has a UHI adjustment that is in the wrong direction.”
I think if you investigate the methodology you will find that Jones et al. believe the UHI effect maxes out; hence the adjustments are all made pre-1973.
Of course this is ludicrous on its face. The first Boeing 747 wasn’t built until 1969, and the great airport expansion didn’t occur until the advent of the 747.

Paddy
February 28, 2010 7:29 am

It appears that the tsunami models overstated the magnitude of the waves everywhere. Is predicted v actual data for all earthquakes that models have predicted tsunami events being accumulated? Are those responsible for the model design and construction going to recalibrate these models to correct for measured differences of wave magnitude? Should they? If not, why not?

David Schnare
February 28, 2010 7:34 am

I think Lucy is on target:
Lucy Skywalker (17:44:20) :
“I would like to see a century of global mean temperature changes estimated from individual stations which all have long and checkable track records, with individual corrections for UHI and other site factors, rather than the highly contaminated gridded soup made from hugely varying numbers of stations.”
That’s what we plan to do in Virginia.
However, the excellent comments made by many on this site suggest to me that we are not going to find a clean, perfect record at any site, no matter how well sited.
Those of us who have done measurements as the basis for our scientific papers know the lengths we go to in order to ensure accuracy, precision and completeness in those measurements. The temperature records were not kept to support scientific studies made 100 years later.
So, we have an imperfect data set. Lucy gets it right in that we need to clean up the “original” data in a manner that highlights what is missing, what changed, and what may obviously have influenced the measurements. Until we have that database, I don’t believe GISS, CRU or NCDC has enough knowledge to discuss the uncertainty surrounding the data, much less any data projections made thereon.
Here’s another tidbit that ought to whet someone’s appetite for hard work. The one long-term rural site I have found so far in Virginia that does not appear to have been adjusted by GISS is Bremo Bluff. Yet the Bremo Bluff station is, and always has been, placed inside the fence of the transformer yard, less than 100 feet from the wall of a coal-fired electric generating unit consisting, originally, of four boilers. In the mid-1970s the nearby river flooded, destroying the utility of two boilers. These were never rebuilt (because expensive pollution controls would have been required). Thus, the actual placement of the station has a built-in UHI, its record demands careful consideration of the halving of the major UHI influence in the mid-1970s, and its use as a contribution to grouped data should be reexamined.
This begins with the efforts of our host, but we are going to need to go well beyond that, as Lucy suggests.
Basically, NCDC needs to do more than hit the reset button. It needs to make available in easy to use form all the actually reported data from each station, and the MMS system needs to be amplified to determine whether there were any micro-climate implications of local land use change, going so far as to examine the kind of thing Pielke, Sr. has discussed.
It’s a long slog, but considering the economic consequences of going haphazardly forward, it is worth doing and is doable.

bruce ryan
February 28, 2010 7:44 am

Well, if the urban heat island effect is human-caused climate change, it only makes sense to increase its stature relative to unbiased readings. After all, are we not trying to show man’s impact on the climate?
As spurious as that sounded, I have an inkling there is a bit of definition involved.

February 28, 2010 7:48 am

David Hagen, 18:01:37 on 27 02,
If you collect site data sets for Arctic regions, as I do, you will be quite familiar with the almost complete dearth of data for Dec, Jan and Feb, and sometimes for Nov and Mar, often over many years, for many sites. Associated with this is the near-total absence of data values less than -20 (C). Histograms of monthly data show this skew readily. It follows that analyses using sites of this type will misrepresent the effects (if any) of unusual occurrences or long-term trends that might take place during these deep winter months. As has been pointed out, it is very easy to sympathise with the “ethical” and physical problems faced by the human observers. This does not help in elucidating what actually happened to local weather in the cold season.
My opinion is that it may be very misleading to accept records of sites with this type of “missing value” as being worthy of including in serious analyses. “Annual averages” will be very misleading indeed, and should not be included in multi-site analyses if the important “very cold” data have been lost or never gathered.
Robin

February 28, 2010 7:48 am

Carsten Arnholm, Norway (04:36:34) :
“I have just set up my own weather station and record data every 10 minutes. I am guessing that recording only min/max per day or 4 times per day as above might produce different results than averaging 6*24=144 daily values.”
I have no idea what ‘scientists’ do. As an engineer, I think the length of time a temperature range persists is far more significant than a spurious peak or trough; e.g., 2 h at 20 C is far more significant than 5 min at 22 C.
I suggest, since you are going to do frequent recording, that as an exercise you integrate and chart separately 3-4 hours of the peak daytime data and 3-4 hours of the bottom night-time data, ignoring the rest. That way you would have a good record for comparison in future years.
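
A sketch of the suggested persistence-weighted comparison, assuming one day of 10-minute readings (144 samples); the 3-hour window length (18 samples) is my assumption:

```python
import numpy as np

def extreme_window_means(temps, win=18):
    """Means of the warmest and coolest win-sample windows."""
    temps = np.asarray(temps, dtype=float)
    running = np.convolve(temps, np.ones(win) / win, mode="valid")
    return running.max(), running.min()

# Toy diurnal cycle, for demonstration only.
day = 15.0 + 5.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, 144))
warm, cool = extreme_window_means(day)
print(f"all-sample mean {day.mean():.2f}, "
      f"warmest 3 h {warm:.2f}, coolest 3 h {cool:.2f}")
```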

rbateman
February 28, 2010 7:57 am

Ivan (20:28:06) :
The answer to that question may lie in exactly what UAH has been calibrated to.
If we use UAH to throw out our historical dataset (and we do see the internal agenda that has attempted to drop the rural stations and/or move them), we lose the continuity of the last 100-150 years of data.
Where climate cycles are concerned, that is a fatal blow.
So, has UAH been calibrated to a raw dataset, or to one of the CRU/GISS datasets which have been subjected to highly questionable alterations which make no sense?

Josh
February 28, 2010 8:02 am

Al Gore returns with an opinion piece in the Old York Times: http://www.nytimes.com/2010/02/28/opinion/28gore.html
In the first sentence he writes “…the science of global warming…” which made me LOL.

wayne
February 28, 2010 8:02 am

Maybe OT, but prompted by the temp graphs:
IPCC Bali – Major topics:
— Global system of governance
— Radical transformation of the world economies
— Radical transformation of the world social order
— Vast sums of money to flow to developing nations
Bali-Hoo: U.N. Still Pushing for Global Environmental Control.
http://www.foxnews.com/story/0,2933,587426,00.html

latitude
February 28, 2010 8:09 am

Paddy (07:29:08) :
“It appears that the tsunami models overstated the magnitude of the waves everywhere.”
Yes, but there was a run on popcorn about 3 pm EDT yesterday.

Stephen Wilde
February 28, 2010 8:13 am

wayne (05:22:33)
An excellent post and just what we need.
Or would satellite sensors do the job well enough ?
Probably not, because satellite sensing would be limited to measuring the energy content of the atmosphere at various levels AFTER energy has already left the oceans or the ground, whereas it is the energy content of the oceans and the near-surface ground that most accurately reflects the actual Earth system temperature.
Furthermore the energy content of the atmosphere would be skewed by any variations in the rate of energy loss to space.
Given that land surface energy retention is only brief and small in quantity compared to oceanic energy retention, I think we would get close enough by measuring global ocean energy content more accurately; and, as wayne says, for climate purposes the most important measurement is the one taken a fraction below the ocean surface.
I have stated elsewhere that the ideal location would be at or just below the point at which the ocean skin gives way to the bulk ocean below.
Changes in temperature at that specific location would be the critical controlling factor affecting the rate of the major part of the Earth system energy transfer from oceans to air and thence to space.
If that specific location warms up or cools down even fractionally on a globally averaged basis then the speed of the flow of energy through the entire system changes and all climate phenomena must shift with it.

February 28, 2010 8:18 am

George E. Smith (23:14:49)
Thanks for the past info about the work of Mr. Hendriksen.
If the George E. Smith of these web pages is the same as Dr. George Elwood Smith of CCD fame, then I am ever more grateful for the ending of 15 tedious years of my working life spent setting operating parameters of plumbicon, leddicon and saticon tubes.
My thanks, gratitude and greatest respect for Dr. George Elwood Smith.

rbateman
February 28, 2010 8:21 am

“Basically, NCDC needs to do more than hit the reset button. It needs to make available in easy to use form all the actually reported data from each station.”
Yes, the pdf download from NCDC is an arduous process. There is no station download option with the documents marked in chronological order.
But there are even more problems at NCDC. I suspect that a lot of the original forms have been transcribed by a computerized process, not humans. That leads to numbers coming off the forms in error, depending on the noise present in the image.
I have found some of these. And last, but not least, there are missing months (as E.M.Smith has documented) in the original forms when you come forward to the 1990’s and 2000’s.
Are these missing months in dusty boxes?
Where’s the docs on this?
