Spencer: Spurious warming demonstrated in CRU surface data

Spurious Warming in the Jones U.S. Temperatures Since 1973

by Roy W. Spencer, Ph.D.

INTRODUCTION

As I discussed in my last post, I’m exploring the International Surface Hourly (ISH) weather data archived by NOAA to see how a simple reanalysis of original weather station temperature data compares to the Jones CRUTem3 land-based temperature dataset.

While the Jones temperature analysis relies upon the GHCN network of ‘climate-approved’ stations, whose number has been rapidly dwindling in recent years, I’m using original data from stations whose number has actually been growing over time. I use only stations operating over the entire period of record, so there are no spurious temperature trends caused by stations coming and going over time. Also, while the Jones dataset is based upon daily maximum and minimum temperatures, I am computing an average of the 4 temperature measurements at the standard synoptic reporting times of 06, 12, 18, and 00 UTC.
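In outline, those two steps look something like the following (a minimal Python sketch under assumed column names, not the actual ISH processing code):

import pandas as pd

def daily_means(obs, start_year=1973, end_year=2009):
    # obs: one row per report, with assumed columns station, date (datetime), hour_utc, temp_c.
    # Keep only the 00, 06, 12 and 18 UTC synoptic reports.
    syn = obs[obs["hour_utc"].isin([0, 6, 12, 18])].copy()
    # Keep only stations reporting in both the first and last year, so that
    # no station "comes or goes" within the period of record.
    span = syn.groupby("station")["date"].agg(["min", "max"])
    keep = span[(span["min"].dt.year <= start_year) & (span["max"].dt.year >= end_year)].index
    syn = syn[syn["station"].isin(keep)]
    # Daily mean = simple average of the four synoptic temperatures.
    return syn.groupby(["station", "date"])["temp_c"].mean().reset_index()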

U.S. TEMPERATURE TRENDS, 1973-2009

I compute average monthly temperatures in 5 deg. lat/lon grid squares, as Jones does, and then compare the two different versions over a selected geographic area. Here I will show results for the 5 deg. grids covering the United States for the period 1973 through 2009.

The following plot shows that the monthly U.S. temperature anomalies from the two datasets are very similar (anomalies in both datasets are relative to the 30-year base period from 1973 through 2002). But while the monthly variations are very similar, the warming trend in the Jones dataset is about 20% greater than the warming trend in my ISH data analysis.

[Figure: CRUTem3 and ISH U.S. monthly temperature anomalies, 1973-2009]
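In outline, the gridding and anomaly steps look something like this (a minimal Python sketch under assumed column names, not the actual CRUTem3 or ISH processing):

import numpy as np
import pandas as pd

def grid_anomalies(monthly):
    # monthly: assumed columns station, year, month, lat, lon, temp_c (monthly station means).
    m = monthly.copy()
    m["glat"] = np.floor(m["lat"] / 5.0) * 5.0   # 5 deg box edges
    m["glon"] = np.floor(m["lon"] / 5.0) * 5.0
    cell = (m.groupby(["glat", "glon", "year", "month"])["temp_c"]
             .mean().rename("t_cell").reset_index())
    # Each cell's anomaly is relative to its own 1973-2002 monthly mean.
    base = cell[cell["year"].between(1973, 2002)]
    clim = (base.groupby(["glat", "glon", "month"])["t_cell"]
                .mean().rename("clim").reset_index())
    out = cell.merge(clim, on=["glat", "glon", "month"])
    out["anomaly"] = out["t_cell"] - out["clim"]
    return out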

This is a little curious, since I have made no adjustments for increasing urban heat island (UHI) effects over time, which likely are causing a spurious warming effect, and yet the Jones dataset, which IS (I believe) adjusted for UHI effects, actually has somewhat greater warming than the ISH data.

A plot of the difference between the two datasets is shown next, which reveals some abrupt transitions. Most noteworthy is what appears to be a rather rapid spurious warming in the Jones dataset between 1988 and 1996, with an abrupt “reset” downward in 1997 and then another spurious warming trend after that.

[Figure: CRUTem3 minus ISH difference, U.S., 1973-2009]

While it might be a little premature to blame these spurious transitions on the Jones dataset, I use only those stations operating over the entire period of record, which Jones does not do, so it is difficult to see how these effects could have been caused in my analysis. Also, the number of 5 deg. grid squares used in this comparison remained the same throughout the 37-year period of record (23 grids).

The decadal temperature trends by calendar month are shown in the next plot. We see in the top panel that the greatest warming since 1973 has been in the months of January and February in both datasets. But the bottom panel suggests that the stronger warming in the Jones dataset is a warm-season, not a winter, phenomenon.

[Figure: CRUTem3 vs. ISH U.S. decadal temperature trends by calendar month, 1973-2009]
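A by-calendar-month trend of the kind plotted above can be computed as an ordinary least-squares fit of each month's anomalies against time (an illustrative sketch, not the code behind the figure):

import numpy as np

def monthly_decadal_trends(years, months, anomalies):
    # Parallel 1-D arrays: year, calendar month (1-12), anomaly (deg C).
    years, months, anomalies = map(np.asarray, (years, months, anomalies))
    trends = {}
    for m in range(1, 13):
        sel = months == m
        slope = np.polyfit(years[sel], anomalies[sel], 1)[0]  # deg C per year
        trends[m] = 10.0 * slope                              # deg C per decade
    return trends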

THE NEED FOR NEW TEMPERATURE REANALYSES

I suspect it would be difficult to track down the precise reasons why the differences in the above datasets exist. The data used in the Jones analysis has undergone many changes over time, and the more complex and subjective the analysis methodology, the more difficult it is to ferret out the reasons for specific behaviors.

I am increasingly convinced that a much simpler, objective analysis of original weather station temperature data is necessary to better understand how spurious influences might have impacted global temperature trends computed by groups such as CRU and NASA/GISS. It seems to me that a simple and easily repeatable methodology should be the starting point. Then, if one can demonstrate that the simple temperature analysis has spurious temperature trends, an objective and easily repeatable adjustment methodology should be the first choice for an improved version of the analysis.

In my opinion, simplicity, objectivity, and repeatability should be of paramount importance. Once one starts making subjective adjustments of individual stations’ data, the work becomes almost impossible to replicate.

Therefore, more important than the recently reported “do-over” of a global temperature reanalysis proposed by the UK’s Met Office would be other, independent researchers doing their own global temperature analysis. In my experience, better methods of data analysis come from the ideas of individuals, not from the majority rule of a committee.

Of particular interest to me at this point is a simple and objective method for quantifying and removing the spurious warming arising from the urban heat island (UHI) effect. The recent paper by McKitrick and Michaels suggests that a substantial UHI influence continues to infect the GISS and CRU temperature datasets.

In fact, the results for the U.S. I have presented above almost seem to suggest that the Jones CRUTem3 dataset has a UHI adjustment that is in the wrong direction. Coincidentally, this is also the conclusion of a recent post on Anthony Watts’ blog, discussing a new paper published by SPPI.

It is increasingly apparent that we do not even know how much the world has warmed in recent decades, let alone the reason(s) why. It seems to me we are back to square one.

259 Comments
Bruce
February 28, 2010 3:03 am

[if you’re actually serious, try commenting again without using insulting terminology ~ ctm]

Geoff Sherrington
February 28, 2010 3:09 am

janama (19:24:53) : “So I went back to the Bureau of Meteorology site and sure enough there was a second listing for Casino Airport but it only had data from 1995.”
The Casino NSW site BoM 058063 started recording in 1858. However, as part of quality control, the BoM have cut out the period to 1965. You might be able to get earlier data from them, but it might not be so useful.

DirkH
February 28, 2010 3:19 am

scienceofdoom (21:34:32) :
“[…] than spatial coverage of urban surfaces. In fact, the population of Tokyo has been almost unchanged or has even decreased since the 1960s (from 8.9 million in 1965 to 8.5 million in 2005), in which most of its domain had already been covered by urban surfaces, but still there has been a substantial increase in cars and tall buildings in the central business area accompanied by an intensifying heat island (Figure 1; Kawamura, 1985). For longer time span […]
Interesting stuff, and maybe worth a follow-up post”
The intensity of the UHI probably rises with the energy conversion rate per volume unit of a city, as more waste heat is created. Same for rural, of course. Watch out for Google/Amazon/whatever data centers in the countryside. Should be visible as very bright spots in infrared.

Geoff Sherrington
February 28, 2010 3:20 am

For Roy Spencer,
By coincidence, the last line in my submission to the Russell Inquiry was “In other words, is there hard evidence for the whole global warming hypothesis?”
You refer to the sudden break between 1988 and 1996. In some countries (Australia included) this was the main period of changeover from daily thermometer observations to half-hourly thermocouple/thermistor types. Just as your 4-times-a-day method shows differences to CRUTem3, I would expect that part of the explanation for that break lies in the adjustments needed with instrument change. Whether the adjustments required the hand of man to splice a neat transition remains unclear to me.

DirkH
February 28, 2010 3:32 am

“Mr Lynn (21:26:02) :
c james (18:26:46) :
Slightly OT….Have you seen Al Gore’s article in the New York Times where he calls us a “criminal generation” if we ignore AGW? This was published today.
http://www.nytimes.com/2010/02/28/opinion/28gore.html?hp
Well, the Goracle has broken his silence. […]”
He also doesn’t fail to mention tobacco. Does he mention blogs? No. Well, given that he was the prime enabler of the Internet in his time as vice president, he doesn’t seem to hold it in high regard these days.
Does anyone still take him seriously?

Geoff Sherrington
February 28, 2010 3:33 am

Zeke Hausfather (23:25:48) :
I can almost exactly replicate his graph by comparing GHCN v2.mean (raw data) with v2.mean_adj (adjusted data):
Not sure I agree. Have a close look at Dr Spencer’s first graph, above, starting about 1990 to now. You will notice most peaks are red on top and blue on bottom. Not the same as your graph. This means, as the trend lines and difference graph show, that ISH is cooler in this period, in both the cool and the warm months.

Martin Brumby
February 28, 2010 3:41 am

Yet another really interesting posting from a real scientist. Thanks, Roy, and thanks, as ever, to Anthony & his team for making it possible.
It is clear that, without a lot more hard (and honest!) work, we don’t have a very reliable idea what global temperatures have been up to even in the recent past, although the more recent satellite data should be less putrid than the massaged and cherry picked surface data. Let alone the infamous “proxy” data sets.
This is a sorry state of affairs after the investment of billions of tax payers’ money.
If I could just ask a really naive question, however, (accepting all the uncertainties about where we are now):- How about the future?
OK, I note that such a luminary as David Adam of the Grauniad says that:-
“I used to think sceptics were bad and mad but now the bad people (lobbyists for fossil fuel industries) had gone, leaving only the mad.”
http://bishophill.squarespace.com/blog/2010/2/27/how-to-report-climate-change-after-climategate.html
But as a mad denier, I’d like to be so bold as to ask Dr. Roy the following:-
Do you think that climate is / will eventually be predictable, or is it probably just a complicated random walk?
If you were offered, say 3 to one odds on a $5 bet that you could predict what the climate will be like in twenty years (Or ten. Or five), would you be tempted to have a flutter? You can be as precise or as vague as you like, (within reason).
This is just a hypothetical question and I don’t even expect you to reveal your prediction – but I’d like to have a feel for how confident a scientist (that I can respect) can be, that with the current state of scientific knowledge, any very meaningful prediction can be made.
Obviously, I’m aware of the work Piers Corbyn and Joe Bastardi and others do. They seem to be able to make a living predicting weather a few months ahead and it is clear that they leave the enormously resourced MET Office for dead.
But how far away are we from being able to state with ANY meaningful confidence what the climate will be in even five years?
I would be fascinated to see the response (if he has time) from Dr.Roy or any of the other genuine scientists who grace Anthony’s blog with their attention.
Anthony, you might even consider setting up a simple poll to ask this question?

cal
February 28, 2010 3:44 am

Most of the warming appears to have occurred in the winter months and it has already been postulated that this could be due to the UHI.
Has anyone plotted the difference between an urban and a closely sited rural station in January as a function of temperature?
The thought is that as the temperature drops the urban energy consumption would increase, as people heat their homes, and the UHI error would increase.
It might even be possible to detect a difference in this relationship at different times of day if the 06, 12, 18, and 24 hour readings that Dr Spencer used were analysed separately. Since most heating is reduced at midnight, my guess is that the 06 error would not be as high as that at 18 or 24 hours.
The same would be true in summer, where the increase in air conditioning would increase the UHI error, but this time the relationship with temperature would be the reverse.
Because it will have different signs in the winter and summer months, this temperature relationship should be unambiguous. It would also mean that if you compared average annual temperatures against UHI you might miss the signal because the two slopes might cancel out.
If one did this for a number of urban sites of different sizes one could estimate how the error varies with both temperature and urban size and then be able to calculate a proper UHI correction factor applicable to each reading.
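A sketch of the test cal describes (illustrative Python over hypothetical paired urban/rural series, not anything from the post): regress the urban-minus-rural difference on the rural temperature, separately for winter and summer, and look for opposite-signed slopes.

import numpy as np

def uhi_slope(rural_t, urban_t):
    # Slope of the (urban - rural) difference vs rural temperature, deg C per deg C.
    rural_t = np.asarray(rural_t)
    urban_t = np.asarray(urban_t)
    return np.polyfit(rural_t, urban_t - rural_t, 1)[0]

# Expected signs under cal's hypothesis:
#   January: slope < 0 (colder months -> more heating -> larger UHI error)
#   July:    slope > 0 (hotter months -> more air conditioning -> larger UHI error)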

AlanG
February 28, 2010 4:00 am

Roy, as you are starting out looking at thermometers, you naturally start with a number of the best thermometers in unchanged rural locations with long records. They, after all, are your best data sets and would be relatively free of UHI effects. You would then immediately see that some show slight warming and others a slight cooling with little warming overall. However, if you are running an AGW agenda, you have a eureka moment. You realize you can fix this ‘problem’ in one of three ways.
1. Average the rural and urban temperatures to show a warming bias from UHI.
2. Selectively drop out thermometers over time making sure there is a steady march of the thermometers to lower altitude, latitude and towards cities.
3. Use gridding and interpolation for missing thermometers. There will always be fewer thermometers up mountains and nearer the poles (which would be cooler) so you are automatically introducing a warming bias.
There you are. Problem solved and your AGW hypothesis survives. Dr. Spencer is finding out what the AGW advocates worked out years ago. Who were the original AGW advocates? Hansen and Wigley. Who are the keepers of the temperature averages? Hansen and Wigley.

Gareth
February 28, 2010 4:11 am

1988 – IPCC came into being.
1997 – Kyoto protocol.
Just sayin’.

Tenuc
February 28, 2010 4:15 am

Good piece of work, Dr Spencer; the more the various temperature data-sets are examined, the more flaws are revealed. No surprise there are differences between GISS, HadCRUt, GHCN, RSS and UAH temperature anomalies when so many assumptions have to be made to process the data and produce a result.
Even accurate global mean temperature measurements wouldn’t show the direction climate is taking, as it is the amount of energy held that is key. In non-linear systems trends are useless for predicting future behaviour and are a ‘cherry pickers’ delight!

joannD
February 28, 2010 4:22 am

There is a serious problem with your monthly warming trends.
Because the monthly binning of the data stream is periodic in time, warming trends determined for Nov/Dec must be reasonably continuous with those in Jan/Feb. They are not so in your third figure.
So either:
– there is a fundamental problem in the dataset, or
– there is a problem with your binning algorithm, or
– the error bars on the extracted monthly trends (which you don’t give) are so large as to make any differences meaningless
After rechecking your code, you might do three other runs with the monthly bin boundaries shifted to the 7th, 15th, and 22nd of each month to see if the monthly trends extracted from all four runs lead to a robust [ 🙂 ] conclusion.
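joannD's shifted-bin check can be sketched as follows (illustrative Python; a daily series with a DatetimeIndex is assumed, and trend_by_month stands in for whatever routine produced the third figure):

import pandas as pd

def shifted_monthly_bins(daily, shift_day):
    # Re-bin a daily series into "months" whose boundaries fall on shift_day
    # (1, 7, 15 or 22) rather than on the 1st.
    s = daily.copy()
    s.index = s.index - pd.Timedelta(days=shift_day - 1)
    return s.resample("MS").mean()  # month-start bins on the shifted dates

# for d in (1, 7, 15, 22):
#     compare trend_by_month(shifted_monthly_bins(series, d)) across the four runs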

AlanG
February 28, 2010 4:31 am

scienceofdoom (18:02:24) : the “correlation” between temperature rise and population density [in Japan] was relatively low (0.4)
Population density wouldn’t be my starting point. UHI is partly from microclimate effects as you say but I reckon what probably matters is energy consumption per unit area. A better starting point might be comparing the calorific value of the electricity, gas and fuel consumption in cities most of which ends up as waste heat except for the light that escapes. So far I’ve seen no paper that has tried this approach.
Interestingly, the temperature records (as published) show a flattening of the temperature rise in the last decade or so. This might be because of a cooler sun or whatever but it is what you should expect in the US, Japan and Europe. The energy consumption in those regions has flattened out over the same period. Once built, the centers of cities only change slowly and energy usage/GDP has fallen.

Carsten Arnholm, Norway
February 28, 2010 4:36 am

I am computing an average of the 4 temperature measurements at the standard synoptic reporting times of 06, 12, 18, and 00 UTC.

Can someone point to an internationally accepted standard procedure for calculating an average daily temperature at a defined location?
I have just set up my own weather station and record data every 10 minutes. I am guessing that recording only min/max per day or 4 times per day as above might produce different results than averaging 6*24=144 daily values.
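Whatever the official practice, the size of this sampling effect is easy to quantify on 10-minute logger data (a sketch assuming a pandas Series of temperatures indexed by timestamp):

import pandas as pd

def daily_mean_three_ways(t):
    # t: pandas Series of temperatures with a DatetimeIndex (e.g. 10-minute data).
    d = t.resample("D")
    minmax = (d.min() + d.max()) / 2.0           # (Tmin + Tmax) / 2
    at_synoptic = t[t.index.hour.isin([0, 6, 12, 18]) & (t.index.minute == 0)]
    synoptic = at_synoptic.resample("D").mean()  # mean of the 4 synoptic obs
    full = d.mean()                              # mean of all 144 samples
    return pd.DataFrame({"minmax": minmax, "synoptic": synoptic, "full": full})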

February 28, 2010 4:37 am

We go to great lengths to remove and argue constantly about the Urban Heat Island effect. Maybe …
http://antwrp.gsfc.nasa.gov/apod/image/0810/earthlights2_dmsp_big.jpg
we ought to consider it as a fact of life.
What percentage of the earth’s surface is in fact warmer because of it?
Adjusting the raw urban weather station temperatures downward is just as much a lie as claiming those elevated temperatures are caused by CO2.
stacase@hotmail.com

AlanG
February 28, 2010 5:00 am

Claude Harvey (19:56:33) : …the dismal numbers may simply be the oceans puking up stored heat as they periodically do
That’s my interpretation. The oceans are dumping heat out to space via the atmosphere. All warming periods are followed by cooling periods and vice versa.

William D.
February 28, 2010 5:00 am

Very interesting.
Would you consider re-doing your analysis using daily max and min temps, thus replicating Jones’s work without the UHI correction?

February 28, 2010 5:01 am

Carsten Arnholm, Norway (04:36:34) :
I have just set up my own weather station and record data every 10 minutes. I am guessing that recording only min/max per day or 4 times per day as above might produce different results than averaging 6*24=144 daily values.

You can actually do a little “back of the envelope” scenario and see for yourself that, yes, how often you measure will affect the average. It’s simplistic but very visual. Calculate the average of a max/min, say 20°C and 10°C. Then do an average where most of the temps are weighted toward the high end and another with most of the temps weighted toward the low end. Compare the three averages.
That said, it can be argued that over a long period of time, everything evens out and those differences become insignificant. That’s why ideally one wants to use a collection of stations that have been around a very long time and have a large record set to calculate the so-called global temperature.
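Worked with made-up numbers (any samples spanning the same 10-20°C range will do), the three averages come out like this:

min_max = (10 + 20) / 2.0                        # -> 15.0
warm_day = sum([19, 20, 19, 18, 17, 10]) / 6.0   # samples hover near the max -> about 17.2
cool_day = sum([11, 10, 11, 12, 13, 20]) / 6.0   # samples hover near the min -> about 12.8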

Dave Mullen
February 28, 2010 5:02 am

Where has the surfacestations.org website gone? Is this a temporary glitch in the web server setup or have you moved this gold mine of information and data somewhere else?

David L
February 28, 2010 5:05 am

Martin Brumby (3:41)
I agree with you. I believe it will be impossible to predict the climate. Too many factors and interactions, and even worse, the effects are nonlinear. In quantum mechanics it’s known as the “many body problem”. The exact solution cannot be derived due to the massive scale of interactions between all the electrons and protons of a molecule. Only approximations of the solution can be made. One will argue that approximations are good enough, but remember that quantum mechanics is based on precisely known and quantifiable physical forces. The forces of nature (climate) are poorly understood and most are only weak correlations. You cannot build a predictive model on correlated factors and effects. This is what astrology attempted to do: correlate the movements of the heavens with things on earth.
We’ll have the same ability to predict climate as we can predict the stock market or the exact point of landfall a hurricane will make from the instant a storm is seen brewing off the coast of Africa.

wayne
February 28, 2010 5:22 am

Dr. Spencer, Anthony & Ken Hatfield (23:36:55) from T&N:

Ken: … Why not just stop with the UHI and urban stations and limit measurement data sources to data from areas with as little UHI effect as is possible? No adjustments to discuss and no reason to waste time attempting to solve a problem that has several unresolvable variables.
One should not need to measure temperature in cities to obtain data series that would produce statistically significant measurements of changes over time.


Ken, I came upon your comment and you are correct and seem on the right track. That’s proper analysis; I came up with the same and any proper physicist should come up with something nearly identical.
Follow this for a starter of a system. Try to concentrate on the logical aspect, not the specific implementation.
If you are going to measure how the temperature (energy) is affected in a complex system you measure only at the energy sinks. On the Earth the sinks are the oceans and rural land areas. All other sources of heat, including urban areas, must be dropped from being measured. To add an energy source and then attempt to compensate for its effect only increases the noise in the measurements. It’s rather simple physics.
And to carry that to an extreme, to get high accuracy, you only measure once per day along the longitude line that is located just before sunrise, pre-dawn I will call it, from the Arctic to the Antarctic every 5 degrees, including in the measurement only rural land area temperatures and sea temperatures on that pre-dawn longitude line.
Of course every 20 minutes there is a new longitude line at that point of measurement (pre-dawn) 5 degrees to the west, so measurements would be continuous in time, summed down the longitude line, every 20 minutes to give 5 degree resolution. Only then are you actually measuring how temperature (energy) is being affected day by day, year by year, and decade by decade, untouched, as much as possible, by the weather that constantly mixes and re-distributes the energy present in the Earth system. The sum of any contiguous 72 measurements of the longitude line would be the true base temperature of the Earth, limited by the resolution and measurement accuracy. The daily heat-up and cool-down would be ignored by design. Along the longitude line, stations would have to be spaced as close to 5 degrees (300 nm) apart as possible, away from all cities and heat sources. Buoys would have to be designed to hold their position.
And to get the ultimate accuracy, all sensors would need to measure not the air temperature but the temperature a fraction of an inch below the surface of either soil or seawater. Then you are truly measuring the temperature of the Earth. The air only reflects the surface’s temperature due to thermal inertia. The small amount of energy held by the air is totally ignored, as it is limited, and small compared to the system’s total surface energy. Any deviations should be tiny across day time spans. A smooth roll on a graph as the seasonal cycles repeat during a year, but any 72-measurement (full revolution) sum would be basically a horizontal line on a graph, only deviating slowly across months or years with minimal variations due to reasonable system noise.
Everyone worries about the daily warming in the day and cooling at night, the warming in summer and cooling in winter, the chill of highs in winter and sweltering heat of highs in summer, the frontal pressure line storms and snow, and that worry is understandable; that is weather and is what we experience daily. But weather is only the mixing and re-distribution of existing energy above the immediate surface of Earth. All of that noise needs to be totally ignored by a proper temperature sensing system by design if you are measuring the globe’s temperature across long periods. The energy actually entering into the Earth climate system only changes slowly over months and years, and that is controlled primarily by the sun, the albedo, and variations in the rates of LW radiation. Factors that are rather constant, such as heat from the core and fission of various isotopes in the soil, are rather small and shouldn’t affect the accuracy.
And amazingly such a system would only require 1652 sensors across the globe for 5 degree resolution, or 3302 for 2.5 degree resolution. And what makes you sick is looking at all of the money spent on the current system that basically doesn’t work, flawed at the logical level. We are currently taking a system that measures weather (air) and trying to back out all of the noise to get a base temperature of the globe, which is totally backwards.
A sequence of satellites could feasibly handle the same logical system, but their polar orbit would have to be heliostationary. Knowing gravity fairly well, I am pretty sure such an orbit is not possible due to Earth’s oblateness. But assuming it were possible, the orbit would have to maintain a pre-dawn and pre-dusk orientation with satellites spaced to perform a scan pass every 20 minutes (for 5 degree resolution, that is). The onboard instrument would have to accurately measure the true surface temperature: soil, seawater, and ice, not the air. That would do exactly the same logical thing as the ground-based system above, only from space. The night-side half-orbit scan would be the only one actually used. But since it seems that orbit is not possible, a large number of satellites in staggered orbits would be required, and ground-based is hugely more economical; still, that gives you a logically equivalent system based in space.
The question is how do we actually get proper science to be performed? One starting step using the current system would be the public availability of raw hourly per-station temperature data with no adjustments; especially needed are the rural stations. Also, all Argo data needs to be untouched and public. That needs to be mandatory. Without it you are stuck in the flawed system, measuring weather, not the globe’s temperature.
That’s a mouthful, but what are your thoughts on that hypothetical system?
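The scheduling arithmetic behind the pre-dawn line is simple to sketch (illustrative Python; it ignores the seasons and the equation of time, and the 5 a.m. "pre-dawn" hour is an assumption):

def predawn_meridian(utc_hours, local_predawn_hour=5.0):
    # The Earth turns 15 degrees of longitude per hour, so the meridian at a
    # given local solar time moves 5 degrees west every 20 minutes.
    lon = 15.0 * (local_predawn_hour - utc_hours)  # east-positive longitude
    lon = (lon + 180.0) % 360.0 - 180.0            # wrap into [-180, 180)
    return 5.0 * round(lon / 5.0)                  # snap to the 5 deg grid

# At 00:00 UTC the ~5 a.m. meridian is 75 deg E; by 00:20 UTC it is 70 deg E.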

mark
February 28, 2010 5:35 am

Do you have a link to the website or data set that you used to work up this data? Thanks.

mark
February 28, 2010 5:37 am

Mr. Spencer,
Can you post the link to the website or data set that you used to work up this data? I’m still new to this and don’t know all the secret data websites.
Thank you.

Pamela Gray
February 28, 2010 5:39 am

Doh! I missed the error bars! My statistics prof (took a grad level course so I could be a research audiologist – we did the stats by hand, no computer program allowed) would not be happy with me right now. When I finally got into a lab, the first thing I did was buy Statview SE for our little macs. Nice critique joannD.

February 28, 2010 5:41 am

The fact that Dr Spencer used four temperatures per day might be the reason for the differences in overall trend. Since UHI affects mostly Tmin, half of the daily data are contaminated. In the case of four measurements, UHI is somewhat diluted, though even at the 18:00 observation it should be present as well.
