Spurious Warming in the Jones U.S. Temperatures Since 1973
by Roy W. Spencer, Ph.D.
INTRODUCTION
As I discussed in my last post, I’m exploring the International Surface Hourly (ISH) weather data archived by NOAA to see how a simple reanalysis of original weather station temperature data compares to the Jones CRUTem3 land-based temperature dataset.
While the Jones temperature analysis relies upon the GHCN network of ‘climate-approved’ stations, whose number has been rapidly dwindling in recent years, I’m using original data from stations whose number has actually been growing over time. I use only stations operating over the entire period of record, so there are no spurious temperature trends caused by stations coming and going over time. Also, while the Jones dataset is based upon daily maximum and minimum temperatures, I am computing an average of the 4 temperature measurements at the standard synoptic reporting times of 06, 12, 18, and 00 UTC.
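For concreteness, here is a minimal sketch of the kind of four-times-daily averaging described (not the actual ISH processing); it assumes the hourly observations were already loaded into a pandas DataFrame, and the column names "station", "time", and "temp_c" are hypothetical:

```python
# Minimal sketch of the four-times-daily averaging described above
# (not the actual ISH processing). Assumes a pandas DataFrame `obs`
# with hypothetical columns: "station", "time" (UTC timestamps),
# and "temp_c".
import pandas as pd

def synoptic_daily_mean(obs: pd.DataFrame) -> pd.DataFrame:
    """Average the 00, 06, 12 and 18 UTC reports into one value per
    station per day, keeping only days with all four reports."""
    synoptic = obs[obs["time"].dt.hour.isin([0, 6, 12, 18])].copy()
    synoptic["date"] = synoptic["time"].dt.date
    grouped = synoptic.groupby(["station", "date"])["temp_c"]
    daily = grouped.agg(["mean", "count"]).reset_index()
    # Require all four synoptic reports so missing observations
    # don't bias the daily mean.
    return daily[daily["count"] == 4].rename(columns={"mean": "tavg_c"})
```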
U.S. TEMPERATURE TRENDS, 1973-2009
I compute average monthly temperatures in 5 deg. lat/lon grid squares, as Jones does, and then compare the two different versions over a selected geographic area. Here I will show results for the 5 deg. grids covering the United States for the period 1973 through 2009.
The following plot shows that the monthly U.S. temperature anomalies from the two datasets are very similar (anomalies in both datasets are relative to the 30-year base period from 1973 through 2002). But while the monthly variations are very similar, the warming trend in the Jones dataset is about 20% greater than the warming trend in my ISH data analysis.
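As a rough illustration of the anomaly-and-trend calculation being compared here, a hedged sketch follows; the pandas Series `monthly` of grid-averaged monthly temperatures (with a DatetimeIndex) is hypothetical, and this is not the actual CRUTem3 or ISH code:

```python
# Hedged sketch: anomalies against a 1973-2002 base period, plus a
# least-squares trend. `monthly` is a hypothetical pandas Series.
import numpy as np
import pandas as pd

def anomalies_and_trend(monthly: pd.Series):
    """Anomalies vs. the 1973-2002 base period and trend in C/decade."""
    base = monthly.loc["1973":"2002"]
    climatology = base.groupby(base.index.month).mean()  # 12 monthly means
    anom = monthly - climatology.reindex(monthly.index.month).to_numpy()
    # Fractional years for the regression, e.g. Jan 1973 -> 1973.04
    years = monthly.index.year + (monthly.index.month - 0.5) / 12.0
    slope_per_year = np.polyfit(years, anom.to_numpy(), 1)[0]
    return anom, slope_per_year * 10.0
```

Running this on both series and differencing the two sets of anomalies is the kind of comparison shown in the next plot.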
This is a little curious, since I have made no adjustments for increasing urban heat island (UHI) effects over time, which likely impart a spurious warming effect, and yet the Jones dataset, which IS (I believe) adjusted for UHI effects, actually shows somewhat greater warming than the ISH data.
A plot of the difference between the two datasets, shown next, reveals some abrupt transitions. Most noteworthy is what appears to be a rather rapid spurious warming in the Jones dataset between 1988 and 1996, with an abrupt “reset” downward in 1997 and then another spurious warming trend after that.
While it might be a little premature to blame these spurious transitions on the Jones dataset, I use only those stations operating over the entire period of record, which Jones does not do, so it is difficult to see how these effects could have arisen in my analysis. Also, the number of 5 deg. grid squares used in this comparison remained the same throughout the 37-year period of record (23 grids).
The decadal temperature trends by calendar month are shown in the next plot. We see in the top panel that the greatest warming since 1973 has been in the months of January and February in both datasets. But the bottom panel suggests that the stronger warming in the Jones dataset seems to be a warm season, not winter, phenomenon.
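A sketch of how the by-calendar-month trends in that plot could be computed, reusing the hypothetical anomaly series from the previous sketch:

```python
# Sketch of the by-calendar-month trend calculation, reusing the
# hypothetical anomaly Series `anom` from the sketch above.
import numpy as np
import pandas as pd

def monthly_decadal_trends(anom: pd.Series) -> pd.Series:
    trends = {}
    for month, vals in anom.groupby(anom.index.month):
        # Separate linear trend for each calendar month's anomalies
        slope_per_year = np.polyfit(vals.index.year, vals.to_numpy(), 1)[0]
        trends[month] = slope_per_year * 10.0  # deg C per decade
    return pd.Series(trends).sort_index()
```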
THE NEED FOR NEW TEMPERATURE REANALYSES
I suspect it would be difficult to track down the precise reasons why the differences in the above datasets exist. The data used in the Jones analysis has undergone many changes over time, and the more complex and subjective the analysis methodology, the more difficult it is to ferret out the reasons for specific behaviors.
I am increasingly convinced that a much simpler, objective analysis of original weather station temperature data is necessary to better understand how spurious influences might have impacted global temperature trends computed by groups such as CRU and NASA/GISS. It seems to me that a simple and easily repeatable methodology should be the starting point. Then, if one can demonstrate that the simple temperature analysis has spurious temperature trends, an objective and easily repeatable adjustment methodology should be the first choice for an improved version of the analysis.
In my opinion, simplicity, objectivity, and repeatability should be of paramount importance. Once one starts making subjective adjustments to individual stations’ data, replicating the work becomes almost impossible.
Therefore, more important than the recently reported “do-over” of a global temperature reanalysis proposed by the UK’s Met Office would be other, independent researchers doing their own global temperature analysis. In my experience, better methods of data analysis come from the ideas of individuals, not from the majority rule of a committee.
Of particular interest to me at this point is a simple and objective method for quantifying and removing the spurious warming arising from the urban heat island (UHI) effect. The recent paper by McKitrick and Michaels suggests that a substantial UHI influence continues to infect the GISS and CRU temperature datasets.
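One simple diagnostic in this spirit, though not the McKitrick and Michaels method itself, would be to regress station temperature trends on a development proxy such as log population; a slope clearly above zero would suggest the trends covary with urbanization. A hedged sketch, with hypothetical input arrays:

```python
# Sketch of one simple UHI diagnostic (NOT the McKitrick & Michaels
# method itself): regress station trends on log10(population). The
# two input arrays are hypothetical and assumed station-aligned.
import numpy as np

def uhi_slope(trend_c_per_decade: np.ndarray,
              population: np.ndarray) -> float:
    x = np.log10(population)
    slope, intercept = np.polyfit(x, trend_c_per_decade, 1)
    # deg C/decade of extra trend per factor-of-10 more population;
    # a slope clearly above zero hints at an urbanization signal.
    return slope
```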
In fact, the results for the U.S. I have presented above almost seem to suggest that the Jones CRUTem3 dataset has a UHI adjustment that is in the wrong direction. Coincidentally, this is also the conclusion of a recent post on Anthony Watts’ blog, discussing a new paper published by SPPI.
It is increasingly apparent that we do not even know how much the world has warmed in recent decades, let alone the reason(s) why. It seems to me we are back to square one.



“”” Nick (22:12:25) :
If you want to keep things simple, why did you not compare apples with apples, and use max/min data, Dr Spencer? “””
Insanity has often been defined as doing the same thing over and over and expecting to get different results.
One reason not to use min/max data is that we know it fails to satisfy the Nyquist criterion, even for recovery of the daily average (a toy simulation below illustrates the point).
Besides, was it Einstein who said: “scientific theories should be as simple as possible, but no simpler”?
The same goes for scientific data gathering or processing.
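Here is that toy simulation. It is entirely synthetic, with an arbitrary skewed diurnal shape, and shows that (Tmax + Tmin)/2 need not equal the true daily mean while four evenly spaced samples can come closer:

```python
# Synthetic illustration of the min/max sampling point above.
import numpy as np

n = 24 * 60                       # one sample per minute over one day
t = np.arange(n) / 60.0           # time of day in hours
# Arbitrary skewed diurnal shape (peak in mid-afternoon):
temp = 10.0 + 8.0 * np.sin(np.pi * (t / 24.0) ** 1.5)

true_mean = temp.mean()
minmax_mean = 0.5 * (temp.max() + temp.min())
synoptic_mean = temp[::6 * 60].mean()   # samples at 00, 06, 12, 18 h

print(f"true mean     {true_mean:.2f} C")
print(f"(max+min)/2   {minmax_mean:.2f} C")
print(f"4-obs average {synoptic_mean:.2f} C")
```

With this particular shape, the four-observation average lands noticeably closer to the true mean than (max+min)/2 does, which is the aliasing worry in a nutshell.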
Oops, my fault. I see now that the graph does not exactly equal 0 in December. Still, I find its proximity to zero a little weird. It almost looks like a bias from too many normal distributions. 🙂
Anthony (and Dr. Spencer, if you read this),
I think what Dr. Spencer is showing here is the USHCN adjustments: http://www.ncdc.noaa.gov/img/climate/research/ushcn/ts.ushcn_anom25_diffs_pg.gif , or something very close.
I can almost exactly replicate his graph by comparing GHCN v2.mean (raw data) with v2.mean_adj (adjusted data): http://i81.photobucket.com/albums/j237/hausfath/Picture61-1.png
A better test for his new temperature data would be to compare it to raw GHCN data (e.g. v2.mean), which would make sense since the data he is using is also raw. I suspect based on the chart above that they would be nearly identical. If he has concerns with the way U.S. temp data is adjusted by GHCN/USHCN, I understand, but I’m not sure how this is new news per se.
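For anyone who wants to try that comparison, a rough sketch of the raw-versus-adjusted differencing follows. The loader is a placeholder: the real GHCN v2 files are fixed-width records, so this assumes they were first converted to CSVs with columns station, date, temp_c.

```python
# Rough sketch of the raw-vs-adjusted differencing described above.
import pandas as pd

def load_v2(path: str) -> pd.DataFrame:
    # Placeholder loader: assumes pre-converted CSVs, not the real
    # fixed-width v2 format.
    df = pd.read_csv(path, parse_dates=["date"])
    return df.set_index(["station", "date"])

raw = load_v2("v2.mean.csv")        # hypothetical pre-converted files
adj = load_v2("v2.mean_adj.csv")

both = raw.join(adj, lsuffix="_raw", rsuffix="_adj", how="inner")
both["delta"] = both["temp_c_adj"] - both["temp_c_raw"]

# Adjusted-minus-raw, averaged across stations for each month:
adjustment_signal = both.groupby(level="date")["delta"].mean()
print(adjustment_signal.head())
```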
“”” BarryW (19:35:54) :
Dr Spencer, could your difference be partly attributable to the difference between using a max/min average and your synoptic average? A faster rate of cooling, for example, might cause the actual high and low to be about the same, but the intermediate values might be depressed, causing your average to be lower. If this is changing over time (more radiative cooling?), could that not be what you’re seeing? “””
BarryW, I suspect that if Dr Spencer did exactly the same thing that Dr Phil Jones (or his crew) did, we would find that Dr Spencer would get exactly the same results as Phil Jones.
That is NOT the assigned task here.
I think the idea, and possibly why Roy is doing this, IS TO TRY AND GET THE RIGHT ANSWER. Well, to the extent that there is a right answer for that data set.
Duplicating Jones’ results is not the purpose here; using his data wisely may be why Roy did this exercise.
I did a lot of family history research 15 years ago.
It took me five years to identify the parents of my paternal grandmother, as family members had taken great care to hide her true story and available historic records were sparse.
I made no progress at all until I disproved ALL the family stories.
At that time, I realised with horror that I had absolutely no knowledge of her parentage.
That was the big breakthrough.
I was then at the starting line and was able to reasonably quickly tease out the few valid clues left in the official records.
We may be at the true starting point with global temperature.
More to the point, it would seem that any valid global temperature database will only contain records of rural stations.
scienceofdoom.
Love your site and your writing. very clear.
There are a variety of studies on UHI ( my fav is the bubble study)
The presence of tall structures causes two issues: changes to the boundary layer and turbulent mixing, and radiative canyons (think of it like a corner reflector for IR).
The key, I think, is NOT to adjust for UHI, but rather to pick sites where it is less likely to occur. How many of those are there? dunno.
An Inquirer (21:14:46) :
Yes, on my reading he makes NO adjustment. Weird. He calculated the bias and left it in, nudging the error bars asymmetrically. In the emails, Susan Solomon has a hard time keeping this straight; Jones’ explanation is not lucid or clear.
Adjusting for UHI in the wrong direction? Why the surprise? GISS do it all the time in Australia (in 5 out of 8 sites I have analysed).
Dr Spencer,
Thanks for your ‘open society’ investigation/ presentation.
The grey clouds are shifting,
The blue sky is lifting!
Interesting that US winters look to warm the most. Here in Central Europe, April through August show visible cooling from 1960-1980 and equal warming from 1980-2006; many other months show almost no trend during the whole 20th century.
http://climexp.knmi.nl/data/tsicrutem3_17.5-22.5E_47.5-50N_nmonth.png
However, it is encouraging that it looks possible to replicate the datasets in a relatively simple way.
Kum Dollison (21:47:46) :
Someone really needs to answer Ivan’s question.
……………………………………………………………………………………………………………..
If these seem important to you, then you should answer them, Kum. You can be the someone.
Claude Harvey (19:56:33) : “Perhaps we should focus on the satellite-measured global average temperature at 14,000 feet… It’s setting ‘high’ records again for the month of February.
“I’m a skeptic of AGW theory but not a denier of measured data that has not been unduly ‘adjusted’ by unknown algorithms. I’m currently comforted that the dismal numbers may simply be the oceans puking up stored heat as they periodically do, but those numbers cannot be ignored.”
Record snowfall means record amounts of latent heat removed from water vapor to produce ice. The ice falls to the ground; the heat remains in the atmosphere. Somewhere else, ocean heat went into vaporizing seawater. The vapor went up; the ocean cooled. Everything would balance, but high atmospheric temperatures result in increased heat loss to space. Net result: lower actual global heat content.
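A back-of-envelope version of that latent-heat bookkeeping, using textbook constants and a purely hypothetical snowfall mass:

```python
# Back-of-envelope latent-heat bookkeeping (textbook constants;
# the snowfall mass is purely hypothetical).
L_VAP = 2.50e6          # J/kg released when vapor condenses to liquid
L_FUS = 3.34e5          # J/kg released when liquid freezes
L_SUB = L_VAP + L_FUS   # J/kg for vapor straight to ice (deposition)

snow_kg = 1.0e12        # hypothetical: one billion tonnes of snow
heat_to_air_j = snow_kg * L_SUB
print(f"heat left in the atmosphere: {heat_to_air_j:.2e} J")
# Roughly the same heat was earlier drawn FROM the ocean to evaporate
# that water, which is the balancing term in the comment above.
```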
pat (17:16:38) :
“Well I think we all know what this means. Scientific fraud. How many other disciplines have been contaminated by spurious, agenda-driven analysis and data alteration?”
Probably more than we realise. That said, the situation for climate is at best (i.e., thinking the best of the situation) post-normalism or plain incompetence, or a combination thereof. I’d vote for the former; too many scientists are becoming too complacent and/or arrogant to listen to their critics.
WAKE UP WUWT. If you don’t see what the IOP has done you are dense!
REPLY: The IOP was covered in this thread: http://wattsupwiththat.com/2010/02/27/16772/
steven mosher (23:39:17) :
scienceofdoom.
Love your site and your writing. very clear.
There are a variety of studies on UHI ( my fav is the bubble study)
The presence of tall structures causes two issues: changes to the boundary layer and turbulent mixing, and radiative canyons (think of it like a corner reflector for IR).
Ha! I love it! Now imagine the effects of enough windmills to power the Eastern seaboard. Talk about disturbing atmospheric conditions…
Still, I feel we should find a way to pollute less; I just don’t feel the emergency my fellow humans attach to the CO2 issue.
>Mindbuilder (18:28:01) :
>We’ve been calling for this since 2007. It’s formally called reproducible results
My suggestion goes farther than just calling for reproducible results. It’s a call for a specific method that will guarantee the results are easily reproducible. By calling for an actual package and a single command that runs scripted calculations, it can be quickly and easily verified that the data and code actually match the results before the paper is published and before an involved analysis of the paper is required by a reviewer. Authors couldn’t just claim that they had provided everything necessary to reproduce the results; everything would have to actually be there.
Another benefit is that it would be easier to tweak a study by minor modifications of the script.
I might be willing to relent on the requirement that graphs be generated by scripts if that is impractical, but I think even that wouldn’t be too hard. And if proprietary software packages are considered indispensable, then maybe hashes of the ISOs of the software install disks could be allowed (a sketch of such a check follows). Eventually statistical software manufacturers might even start releasing standard versions of their software with a published hash.
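One way to pin the inputs as suggested: publish SHA-256 hashes of every data file (or install ISO) alongside the scripted analysis, so anyone can confirm the bytes match before rerunning anything. A minimal sketch, with placeholder file names and hashes:

```python
# Minimal sketch of hash-pinning the inputs. File names and hashes
# below are placeholders, not real published values.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

MANIFEST = {
    "v2.mean": "replace-with-published-hash",
    "analysis.py": "replace-with-published-hash",
}

for name, expected in MANIFEST.items():
    status = "OK" if sha256_of(Path(name)) == expected else "MISMATCH"
    print(f"{name}: {status}")
```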
I expect skeptics could get everyone to do this if they would lead by example. Maybe we could call it “Fully scripted calculations”. Every paper should make the claim that it has fully scripted calculations for reproducibility.
If you support this idea, speak up. If you don’t support it, why not?
I place this and my previous post in the public domain so you can use them to promote the idea elsewhere if you like.
Amino, why the snark? I’m just a “civilian” trying to learn.
BUT, doesn’t it appear to you that there is a Glaring Contradiction in those numbers?
Let’s put it real simple. Everyone Raved over those Rural numbers. Then Everyone “Raved” over the UAH numbers; BUT they are in Direct Contradiction.
Ivan is right to ask, Which Is It?
Roy Spencer’s comments about measuring the Earth’s thermal state from satellites are based on the underlying physics of a radiating sphere immersed in a vacuum.
Another approach is via the Plasma Universe model, which considers the Earth as an electrically connected object encapsulated by (possibly) cascading Langmuir sheaths, or plasma double layers.
If so, then what are the satellite sensors measuring?
The thermal state of…….what, the Earth’s physical surface?
The thermal state of…….what else?
The point I make here is that, like it or not, present day tests are based on confirming the science, not disproving it.
AGW theory is looking more and more like socialism.
In other words, both make pretty good sense if you don’t examine the facts too hard.
If you do, neither makes any sense at all.
In the case of AGW, the ‘facts’ – the data used by the climate establishment – are becoming ever more suspect. The degree of manipulation of the ‘facts’ in support of a false dogma is becoming daily more apparent. This post is yet another blow to the credibility of the climate ‘facts’, which all too many people have accepted as being gospel truth for far too long.
Not only is there widespread data manipulation, but there is also massive data omission (UHI, equipment location, etc.); basically, we need to start all over again, using trusted raw data and universally agreed temperature adjustment formulae.
The present cabal of ‘climate scientists’ cannot be trusted to police this vitally needed process.
Both AGW and socialism, if put into practice, have a common thread of being an incredible waste of resources to produce near universal poverty in an environment where no dissent is tolerated.
By my own rough-and-ready calculation, based on average energy received at the Earth’s surface from the sun of 8.9E+16 W and the energy consumption of the planet of 1.504E+13 W (all sources: fossil, nuclear, renewables), we are making heat, light and movement at 1/6000 the rate of natural solar energy.
I think it would be fair to say that burning a fire at a rate of 15 TW would warm the atmosphere more than the smoke and particulates would. However, the energy from our anthropogenic fire is only 0.017% of the work being done by the sun. We would have to increase our fuel burn roughly five-fold to bring it up to 0.10%.
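Checking that arithmetic, taking the quoted figures at face value:

```python
# Checking the arithmetic above with the figures as quoted.
solar_at_surface_w = 8.9e16      # W, as quoted in the comment
human_consumption_w = 1.504e13   # W, all sources, as quoted

ratio = human_consumption_w / solar_at_surface_w
print(f"human / solar = {ratio:.3%}")          # about 0.017%
print(f"solar / human = 1/{1 / ratio:,.0f}")   # about 1/5,900
# Factor needed to reach 0.10% of the solar figure (about 6x, so the
# five-fold figure above is in the right ballpark):
print(f"factor to reach 0.10%: {0.001 / ratio:.1f}")
```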
It’s hard to see the creation of heat out of the forcing referred to by others, but if they are correct then it looks like some kind of perpetual motion machine, and those are few and far between.
Surely those who say that the sun is the dominant factor in our global temperature must have a good point, while those who claim it’s the soot and ashes of our global bonfire must be barking up the wrong tree. Mind you, nobody likes soot and ashes much, but that’s not because they are changing the weather; it’s because they are ugly.
I suspect that the wish to reach the goal intended by the alarmists is the cause of all these divergences. New efforts by the Met Office etc. to redo the analyses are therefore not useful at all. What is needed is a thorough investigation of the disparity between the two methodologies (alarmists vs. sceptics) by the two parties together, to ferret out the big WHY. Now you see two sides digging in their heels: alarmists just affirming they are right, the sceptics ever more insistently coming with evidence that the right is on their side. Unfortunately, the big capital is on the alarmists’ side. And I am afraid that the alarmists will bank on time, letting the sceptics tire and the public interest die down, so that they can quietly continue implementing the economic side of the affair. For this is not only about science.
Once again, Dr Spencer, an excellent analysis, and thank you for it.
You state:
“Of particular interest to me at this point is a simple and objective method for quantifying and removing the spurious warming arising from the urban heat island (UHI) effect.”
Surely the only effective way of eliminating the UHI effect is to use rural station data. Of course this would reduce the number of stations available, but it would produce a more honest and reliable result.
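A minimal sketch of such rural-only screening, assuming the station metadata were put into a simple table with an "urban_flag" column following the GHCN v2 rural/small-town/urban (R/S/U) convention; the file name and layout here are hypothetical:

```python
# Minimal sketch of rural-only screening; hypothetical metadata file.
import pandas as pd

stations = pd.read_csv("station_metadata.csv")
rural = stations[stations["urban_flag"] == "R"]   # keep rural only
print(f"{len(rural)} of {len(stations)} stations are flagged rural")
```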
The problem this novice minor brain sees is that we don’t know what happened in year X to prove or disprove a temperature increase or decrease.
Someone could have had a barbecue nearby and the smoke was blown over the sensor. There are so many variables that are not recorded, but recording them all is impossible.
I don’t say models are irrelevant, but they should make a model of real life, use it to see what happens, and then, if real life is any different, alter the model to fit real life. Only that way could you be as accurate as possible. No forcings.
Location, weather patterns, human contacts, urban heat sinks, and so on: while the temperature is a good indication, it’s not all that the AGW crowd should be looking at.
[try again. this time with respect and courtesy or be banned. ~ ctm]