Spencer: Spurious warming demonstrated in CRU surface data

Spurious Warming in the Jones U.S. Temperatures Since 1973

by Roy W. Spencer, Ph. D.

INTRODUCTION

As I discussed in my last post, I’m exploring the International Surface Hourly (ISH) weather data archived by NOAA to see how a simple reanalysis of original weather station temperature data compares to the Jones CRUTem3 land-based temperature dataset.

While the Jones temperature analysis relies upon the GHCN network of ‘climate-approved’ stations, whose number has been rapidly dwindling in recent years, I’m using original data from stations whose number has actually been growing over time. I use only stations operating over the entire period of record, so there are no spurious temperature trends caused by stations coming and going over time. Also, while the Jones dataset is based upon daily maximum and minimum temperatures, I am computing an average of the 4 temperature measurements at the standard synoptic reporting times of 06, 12, 18, and 00 UTC.
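For concreteness, here is a minimal Python sketch of those two rules (the data layout is hypothetical; the actual ISH archive format differs):

```python
from statistics import mean

SYNOPTIC_HOURS = (0, 6, 12, 18)  # standard synoptic reporting times, UTC

def station_monthly_mean(obs):
    """obs: iterable of (hour_utc, temp_c) tuples for one station-month.
    Only the four standard synoptic reporting times are averaged."""
    synoptic = [t for h, t in obs if h in SYNOPTIC_HOURS]
    return mean(synoptic) if synoptic else None

def full_record_only(stations, years=range(1973, 2010)):
    """stations: {station_id: {year: data}}. Keep only stations with
    data in every year, so station churn cannot produce spurious trends."""
    return {sid: yearly for sid, yearly in stations.items()
            if all(y in yearly for y in years)}
```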

U.S. TEMPERATURE TRENDS, 1973-2009

I compute average monthly temperatures in 5 deg. lat/lon grid squares, as Jones does, and then compare the two different versions over a selected geographic area. Here I will show results for the 5 deg. grids covering the United States for the period 1973 through 2009.
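A minimal sketch of the gridding step, assuming each station has already been reduced to a monthly mean (illustrative only; cell definitions and weighting in the real analyses differ):

```python
import math
from collections import defaultdict
from statistics import mean

def grid_cell(lat, lon, size=5.0):
    """Index of the 5 deg lat/lon box containing a station."""
    return (math.floor(lat / size), math.floor(lon / size))

def gridded_monthly_average(station_means):
    """station_means: [(lat, lon, temp_c)] for one month. All stations
    falling in the same 5 deg box are averaged together."""
    cells = defaultdict(list)
    for lat, lon, t in station_means:
        cells[grid_cell(lat, lon)].append(t)
    return {cell: mean(temps) for cell, temps in cells.items()}
```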

The following plot shows that the monthly U.S. temperature anomalies from the two datasets are very similar (anomalies in both datasets are relative to the 30-year base period from 1973 through 2002). But while the monthly variations are very similar, the warming trend in the Jones dataset is about 20% greater than the warming trend in my ISH data analysis.

[Figure: CRUTem3 and ISH U.S. monthly temperature anomalies, 1973-2009]
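The anomaly and trend arithmetic behind a comparison like this is simple; a hedged sketch (`jones` and `ish` are placeholder arrays for the two gridded monthly series, not real variables from either analysis):

```python
import numpy as np

def anomalies(monthly, base_years=slice(0, 30)):
    """monthly: array of shape (n_years, 12). Subtract each calendar
    month's mean over the 1973-2002 base period (the first 30 rows)."""
    return monthly - monthly[base_years].mean(axis=0)

def trend_per_decade(series):
    """Least-squares slope of a monthly anomaly series, deg C per decade."""
    t = np.arange(series.size) / 120.0   # months -> decades
    return np.polyfit(t, series, 1)[0]

# e.g. trend_per_decade(anomalies(jones).ravel())
#      / trend_per_decade(anomalies(ish).ravel())   -> about 1.2
```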

This is a little curious, since I have made no adjustments for increasing urban heat island (UHI) effects over time, which are likely causing a spurious warming effect; and yet the Jones dataset, which IS (I believe) adjusted for UHI effects, actually shows somewhat greater warming than the ISH data.

A plot of the difference between the two datasets is shown next, which reveals some abrupt transitions. Most noteworthy is what appears to be a rather rapid spurious warming in the Jones dataset between 1988 and 1996, with an abrupt “reset” downward in 1997 and then another spurious warming trend after that.

[Figure: CRUTem3 minus ISH, U.S., 1973-2009]

While it might be a little premature to blame these spurious transitions on the Jones dataset, I use only those stations operating over the entire period of record, which Jones does not, so it is difficult to see how these effects could have arisen in my analysis. Also, the number of 5 deg grid squares used in this comparison remained the same throughout the 37-year period of record (23 grids).

The decadal temperature trends by calendar month are shown in the next plot. We see in the top panel that the greatest warming since 1973 has been in the months of January and February in both datasets. But the bottom panel suggests that the stronger warming in the Jones dataset is a warm-season phenomenon, not a winter one.

[Figure: CRUTem3 vs. ISH U.S. decadal trends by calendar month, 1973-2009]

THE NEED FOR NEW TEMPERATURE REANALYSES

I suspect it would be difficult to track down the precise reasons why the differences in the above datasets exist. The data used in the Jones analysis has undergone many changes over time, and the more complex and subjective the analysis methodology, the more difficult it is to ferret out the reasons for specific behaviors.

I am increasingly convinced that a much simpler, objective analysis of original weather station temperature data is necessary to better understand how spurious influences might have impacted global temperature trends computed by groups such as CRU and NASA/GISS. It seems to me that a simple and easily repeatable methodology should be the starting point. Then, if one can demonstrate that the simple temperature analysis has spurious temperature trends, an objective and easily repeatable adjustment methodology should be the first choice for an improved version of the analysis.

In my opinion, simplicity, objectivity, and repeatability should be of paramount importance. Once one starts making subjective adjustments to individual stations’ data, replicating the work becomes almost impossible.

Therefore, more important than the recently reported “do-over” of a global temperature reanalysis proposed by the UK’s Met Office would be other, independent researchers doing their own global temperature analysis. In my experience, better methods of data analysis come from the ideas of individuals, not from the majority rule of a committee.

Of particular interest to me at this point is a simple and objective method for quantifying and removing the spurious warming arising from the urban heat island (UHI) effect. The recent paper by McKitrick and Michaels suggests that a substantial UHI influence continues to infect the GISS and CRU temperature datasets.

In fact, the results for the U.S. I have presented above almost seem to suggest that the Jones CRUTem3 dataset has a UHI adjustment that is in the wrong direction. Coincidentally, this is also the conclusion of a recent post on Anthony Watts’ blog, discussing a new paper published by SPPI.

It is increasingly apparent that we do not even know how much the world has warmed in recent decades, let alone the reason(s) why. It seems to me we are back to square one.

a jones
February 27, 2010 6:36 pm

As I have commented elsewhere, even if the data were carefully recorded and logged with everything done shipshape and Bristol fashion, I suspect the real experts (physicists, the experts in precision measurement, and statisticians, the experts in reconciling large data sets; note that few self-styled climatologists are expert in either discipline) would regard the resulting figure as anything but a statistical artifact, which might or might NOT bear some relationship to Global Temperature, if indeed that term itself has any meaning.
Since there is no way to know whether any such relationship exists, or to find whether one might, it seems to me that, however laudable the idea of cleaning up the surface temperature data might seem, it is a futile exercise that can tell us nothing except how badly the original work was done by these selfsame self-styled climatologists.
And in referring to these rogues as climatologists I mean no disrespect to the many genuine scientists who toil in the field, including Dr. Spencer.
The fact is we don’t need this data; we are in the satellite era, which can provide all the data we need. We have the ARGO buoy system. Although we are still learning how to use them, we have satellite sea level measurements and even gravitational measurements. And we can measure from space both TSI and the radiation reflected from the earth. In short, all the tools we need.
For if there is any lesson to learn from this mess, it is that nothing much happened to the global climate in the 20th century, and rather than trying to analyse this non-event with inadequate tools, it is far better to see for ourselves what is really happening, if anything, now and in the future, so that in the next few decades we really will have a better, if imperfect, understanding of what is going on.
And be assured, despite alarmist urgings to the contrary, there is no urgency about this; we can take our time, because nothing cataclysmic in climate terms is going to happen in the next few hundred years or so. However much fossil fuel we burn. Or however many babies are born.
There is a wonderful word which I discovered in the Times of London today, Plunderbund. It is credited as German 1949 and means a corrupt political, commercial and financial alliance.
Well, now that the AGW plunderbund is collapsing, perhaps we can get back to doing some real science again.
Kindest Regards

February 27, 2010 6:39 pm

I’ve thought often about UHI and how to get around it. Surface stations are simply subject to too many variables. A tree grows too tall. A tree falls down. Someone puts up a building. I came up with one odd idea, which was to stop trying to avoid the UHI and use it instead.
In every urban centre, stick a weather station at the top of the tallest building, right downtown, up on a pole or something so that air conditioners and other things on the roof are eliminated as much as possible. Since it is the tallest building, it can’t get shade from another building; nearby buildings don’t just fall down on their own; and if someone builds a new and taller building, you will know well in advance. Then you build two or three concentric rings of weather stations, each sited on the top of the tallest building in its area, right out to the edge of the ‘burbs. That should allow you to measure the temperature gradient between city centre and city edge. Now here’s the interesting part.
Every urban centre that has a handful of properly sited weather stations in the surrounding area now becomes a hub. The “UHI free” temperature data can then be compared to the “UHI included” data and the UHI gradient for each hub calculated. In every hub where we are lucky enough to have historical data from both downtown and rural weather stations, we should be able to extract the UHI signal from the downtown data and extrapolate the downtown record backward without the UHI signal. Going forward in time, we then have trend-line information both on fluctuations in UHI (which ought to be interesting all on their own) and on temperature trends from downtown stations from which the temperature without UHI can be derived.
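Something like this rough Python sketch is what I have in mind (`downtown` and `edge` are placeholder anomaly series for one hub; the constant-offset assumption is the crudest possible model of the UHI signal):

```python
import numpy as np

def mean_uhi_offset(downtown_overlap, edge_overlap):
    """Mean UHI signal over the years when both the downtown station
    and the outer 'UHI-free' ring were reporting."""
    return np.mean(np.asarray(downtown_overlap) - np.asarray(edge_overlap))

def backcast_without_uhi(downtown_full, offset):
    """Extrapolate a UHI-removed record over the earlier years when
    only the downtown station existed, by removing the mean offset."""
    return np.asarray(downtown_full) - offset
```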
Thoughts?

Editor
February 27, 2010 6:47 pm

Hello David
I am trying to understand all of the potential drivers of Earth’s climate system;
http://www.physicalgeography.net/fundamentals/7y.html
http://oceanservice.noaa.gov/education/pd/climate/factsheets/whatfactors.pdf
and determine which ones are primarily responsible for recent and forthcoming changes in Earth’s climate system.
There seems to be reasonable evidence of a significant ocean component based on the cycles of the Pacific Decadal Oscillation and Atlantic Multidecadal Oscillation;
http://icecap.us/docs/change/ocean_cycle_forecasts.pdf
http://www.appinsys.com/GlobalWarming/PDO_AMO.htm
http://www.atmos.washington.edu/~mantua/REPORTS/PDO/PDO_egec.htm
http://www.atmos.washington.edu/~mantua/REPORTS/PDO/PDO_cs.htm
And there also seems to be reasonable evidence for a significant volcanic component based historical observation:
http://www.geology.sdsu.edu/how_volcanoes_work/climate_effects.html
http://www.longrangeweather.com/global_temperatures.htm
http://adsabs.harvard.edu/abs/1991vci..nasa…..R
How significant a factor do you consider solar variability as a driver of recent and forthcoming changes in Earth’s climate system as compared to the impact of ocean cycles, volcanic activity, natural variability and other factors?

Larry
February 27, 2010 6:48 pm

Good work, Roy. I hope another follow-up book on the AGW subject is forthcoming for the layman, to help better explain all the new information you have found. Well, maybe sometime soon, anyway. I know you’re busy.

Doug in Seattle
February 27, 2010 6:58 pm

Dr. Spencer:
Why the insistence on using grids? It would seem, given the irregular placement of stations, that a TIN (triangulated irregular network) would be a better choice. It would also allow better separation of land and ocean, since so many land stations (lighthouses/marinas) and ocean stations (buoys/oil platforms) are close to shorelines.
I see no reason why the TIN polygons would be more difficult to work with, and I think it would be easier to eliminate hot spots.

John Blake
February 27, 2010 7:02 pm

Valid data, evaluated with integrity, conclusions not in 180-degree opposition to manifest results – too much to ask? Only because “climate studies” is not an empirical, experimental discipline but a classification exercise akin to botany, dealing only in hindsight because linear extrapolation of complex dynamic systems is mathematically and physically impossible, has this Green Gang of AGW propagandists foisted GIGO to the extent they have.
Start from scratch, by all means… but remember, since 1988 if not before, the world’s entire climatology establishment has gotten away with serial scientific murder, complicit in so many areas as to render every aspect of their endeavor suspect for a generation. By (say) 2040, as Earth enters a probable Dalton or even 70-year Maunder Minimum presaging an overdue end to our current Holocene Interglacial Epoch, this extraordinary episode will be seen for what it is: A frontal assault on industrial/technological civilization by nihilistic Luddite sociopaths (see Ehrlich, Holdren, recently Keith Farnish) bent on sabotaging, subverting, global energy economies in furtherance of an extreme radical anti-humanist agenda.
For such as these, we truly lack a word. Would “thanatocists” be apropos?

February 27, 2010 7:20 pm

Dr. Spencer, an alternative explanation for the warming trend from 1973 to 2009 could be that January and February are not all that much warmer in recent years; rather, they were quite a bit colder in the late 1970s. That alone would create the impression of a warming trend. I refer to this on my blog as the Abilene Effect, after the small town in Texas where it is very clearly demonstrated.
http://sowellslawblog.blogspot.com/2010/01/cold-winters-created-global-warming.html

janama
February 27, 2010 7:24 pm

I really question the accuracy of the whole system. Yesterday I visited that airport near me that I found to have no discernible warming in its 1908-2009 temperature record, and found the Stevenson screen sited in an open lawn area which would qualify as perfect under Anthony’s criteria. It was locked and there was a cable out the rear, so I assume it’s automatic. A few yards away was an automatic rainfall gauge.
As I left the park I asked the park manager if anyone came and read the thermometer, and he said that someone came twice a day, but he read the other meter. Other meter? Yes: there was another Stevenson screen approx 200 yards from the original, set up on a nice lawn but with more buildings around it, though nothing I could see as a problem.
So I went back to the Bureau of Meteorology site and sure enough there was a second listing for Casino Airport, but it only had data from 1995. So I downloaded that data and put it up against the original 1908-2009 data; in 1995 they were identical but diverged afterward, with the new station running around 0.5-0.7 C cooler.
Now I’ve been told we’ve warmed 0.7 C/century, but judging by this it’s +/- 0.7 C.

February 27, 2010 7:26 pm

Dr. Spencer,
I’m not sure how the satellite data are calibrated. Are they calibrated using “ground truthing” or do you use on-board calibrators? Forgive me, if you have already addressed this on your website or in earlier posts.
In any case, if you use the former, I wouldn’t use any ground-based thermometers that could conceivably be contaminated by microclimatic effects, no matter how “rural” their location. Anyone who has walked on an asphalt or concrete patch knows they don’t have the same temp as, say, grass, no matter how small the patch. This is certainly true in the daytime, as well as at night. Think, e.g., about where dew or frost or snow dusting is likely to be observed in the morning.
If you use on-board calibration devices (or whatever), is there any likelihood that they can be systematically biased (one way or the other), considering that the instruments are in a pretty harsh environment (or are they air conditioned)?
Perhaps you can direct me to a primer on satellite temp measurements.
Thanks for your postings, BTW.
REPLY: The AMSU calibration method has been covered here: http://wattsupwiththat.com/2010/01/12/how-the-uah-global-temperatures-are-produced/
– Anthony

Squidly
February 27, 2010 7:26 pm

Sorry, OT, but has anyone caught the newest revelations from Gore, published today in the New York Times?
http://www.nytimes.com/2010/02/28/opinion/28gore.html?ref=opinion
Someone evidently found Gore, dug his ass out of a snowbank, and wouldn’t you know it, he picks up right where he left off. Amazing BS in this Op-Ed.

hotrod ( Larry L )
February 27, 2010 7:28 pm

Due to the complexity involved in teasing possible temperature trends out of historical temperature data, perhaps the KISS (Keep It Simple Stupid) principle is a very good place to start.
Find a geographically uncomplicated area that has multiple high quality rural stations which have long uninterrupted records.
Work out an objective, well documented methodology to compute the important characteristics of those stations and their temperature data (individual trends, the trend of the whole group, etc.), and apply it to that small set of stations. Then test what that methodology does when you drop a single station, or multiple stations, out of the set, so you can characterize the behavior to expect from the data as stations drop out and are added.
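As a rough Python sketch of that dropout test (`regional_trend` is a stand-in for whatever trend calculation the methodology settles on, not an existing function):

```python
def dropout_test(stations, regional_trend):
    """stations: {station_id: station_data}. Recompute the regional
    trend with each station withheld in turn; a wide spread in the
    results means the method is sensitive to station churn."""
    return {sid: regional_trend({s: d for s, d in stations.items() if s != sid})
            for sid in stations}
```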
Once you are satisfied the processing method does not introduce odd or unreasonable behavior, figure out a realistic error budget for your output data.
Once you have a reliable, objective and well documented and well behaving process, try it on other more complicated areas.
Repeat as necessary to find the weaknesses in the process, and develop rational methods to work with different types of data problems, such as station moves and gradual urbanization.
Do all this in an open-source model process where the wisdom of the crowd can help refine it: a process that allows independent verification and validation of the methods by those who have the special skills and experience in statistics, instrumentation and measurement precision, weather, basic physics, micro-climate effects, etc., producing a set of well documented code modules to perform the necessary steps of the method.
Let individuals apply those code modules to various small subsets of temperature data from around the world, to verify that the code modules are flexible enough and well behaved enough to handle real-world data in a predictable manner.
Then expand the process to country sized analysis, then hemisphere size analysis etc.
My personal feeling is that this sort of walk-before-you-can-run verification was not done in the existing dataset processing methods, and as a result I suspect some of the analysis methods do odd things, like that unexplained discontinuity in 1997.
Without well documented and validated code blocks that everyone agrees behave reasonably with real data, I think it is an exercise in futility to try to process even good data and produce a trustworthy output.
I know I have been surprised more than once when what I thought was a relatively trivial programming problem had a hidden bug that did something totally unexpected in certain specific situations. Does the code complain if some station, due to an input error, shows a 72 deg high for the day when it should have been entered as 27, or does it just blindly process that value and bury it in multiple steps of processing? What does it do if the high for the day is lower than the supposed low for the day at a station?
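Checks like those are cheap to write down explicitly. An illustrative Python sketch (temperatures in deg F; the thresholds are invented for the example, not taken from any real QC standard):

```python
def sanity_check(tmax, tmin, prev_tmax=None):
    """Flag the kinds of input errors described above."""
    problems = []
    if tmax < tmin:
        problems.append("daily max below daily min")
    if not (-80.0 <= tmin <= 135.0 and -80.0 <= tmax <= 135.0):
        problems.append("value outside plausible range")
    if prev_tmax is not None and abs(tmax - prev_tmax) > 40.0:
        problems.append("implausible day-to-day jump (transposed digits?)")
    return problems

# A 72 mistyped for 27 the day after a 29 deg high:
# sanity_check(tmax=72, tmin=15, prev_tmax=29)
# -> ['implausible day-to-day jump (transposed digits?)']
```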
As I read through some of the studies and reports related to climate, I see simple statements about how the data was handled, but without knowing the actual code that processed the data you have no way of knowing whether the intended processing stated in the study actually occurred, or whether, unknown to everyone involved including the author, some computational artifact was introduced into the processed data.
We also have no idea what error checking, if any, was done on the input data to ensure that it was not corrupted at some point by hardware, software, or even data-entry errors.
Larry

BarryW
February 27, 2010 7:35 pm

Dr Spencer, could your difference be partly attributable to the difference between using a max/min average and your synoptic average? A faster rate of cooling, for example, might leave the actual high and low about the same while depressing the intermediate values, lowering your average. If this is changing over time (more radiative cooling?), could that not be what you’re seeing?
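To illustrate what I mean, a toy example (Python/NumPy, invented numbers): two diurnal cycles with identical highs and lows, where faster evening cooling lowers the synoptic mean but not the max/min mean:

```python
import numpy as np

hours = np.arange(24)
# A smooth diurnal cycle: min of 10 C at 03 UTC, max of 30 C at 15 UTC.
slow = 20 + 10 * np.sin((hours - 9) * np.pi / 12)
fast = slow.copy()
fast[18:] -= 4                 # faster cooling in the evening only
fast = np.clip(fast, 10, 30)   # the high and low are unchanged

for name, day in (("slow cooling", slow), ("fast cooling", fast)):
    minmax = (day.max() + day.min()) / 2
    synoptic = day[[0, 6, 12, 18]].mean()
    print(f"{name}: (Tmax+Tmin)/2 = {minmax:.1f} C, synoptic mean = {synoptic:.1f} C")
# slow cooling: 20.0 vs 20.0; fast cooling: 20.0 vs 19.0
```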

John Whitman
February 27, 2010 7:37 pm

Dr Spencer,
I apologize for misspelling your name in my comment “John Whitman (18:17:19)”

John

crosspatch
February 27, 2010 7:38 pm

“Therefore, more important than the recently reported “do-over” of a global temperature reanalysis proposed by the UK’s Met Office would be other, independent researchers doing their own global temperature analysis.”
I reached that same conclusion, Dr. Spencer, a couple of years back. It would seem that trying to keep track of all the adjustments would be a ball of snakes. One might think that some prestigious academic institution would want to create some standard repository of climate data from which research could be performed over the years.

February 27, 2010 7:43 pm

Dave McK (17:13:09) :
I would like to see the raw data for each station plotted on a map and animated.
Forget homogenizing, gridding and all the other manipulations until first you show what you have to work with.
When each data point is color coded by absolute temperature (forget anomalies), it will be readily apparent which stations behave oddly, and which effects are daily, monthly, seasonal, or annual;
it will be possible to examine the month of January 100 years ago alongside the same month this year, visually, at any scale and over any time period.
This sort of representation will reveal everything: the quality of the data at each station over any time span, at the native resolution of the data. And from there you can speed it up, zoom, split screen for comparison, anything.
You don’t know what you’ve even got yet – the very first part of the job has yet to be finished.
My reply:
The basic concept you suggest seems simple enough, if it were not for the month-to-month variation from year to year that results from ignoring long-period cyclic influences on weather patterns.
First, the 27.32-day period of the lunar declinational atmospheric tides slews through the 30-31 day months, and also has a fourfold pattern with a 109-day period that repeats in the four types of Rossby wave patterns that occur.
On top of that there is the 18.6-year Mn signal of variation between the maximum and minimum culmination angles, which shows up as a very complex set of shifts in the background patterns of meridional flow surges; this has made the approach impossible in past studies where it was tried.
I think that trying to show shifts in the temps from the same season of different years has always been adversely affected by this lack of consideration of the natural patterns of atmospheric response to lunar declinational tides and their several periods of effects.
Dr. Roy has compensated well for this effect by using the same time period as the original study, effectively negating the pattern problems. I think what he has done here is valid because of this, and he is to be commended. Thanks.
The maps shown on my site reflect the similarity of the sample periods, by lunar declination pattern, season, and the 18.6-year Mn period; if you have any questions on how I have applied this method, or how it could add QA to the type of study you are suggesting, feel free to contact me.
Richard Holle

Claude Harvey
February 27, 2010 7:56 pm

Re: scienceofdoom (18:02:24) :
“Perhaps as Roger Pielke Sr says we should really focus on ocean heat content and not on measuring the temperature 6ft off the ground in random locations around the world.”
Perhaps we should focus on the satellite-measured global average temperature at 14,000 feet, as Spencer has for some time now in his monthly report. The past 9 months will bring tears to the eyes. Stand by for another “ugh” month in the midst of blizzard conditions closer to earth. It’s setting “high” records again for the month of February.
I’m a skeptic of AGW theory but not a denier of measured data that has not been unduly “adjusted” by unknown algorithms. I’m currently comforted that the dismal numbers may simply be the oceans puking up stored heat as they periodically do, but those numbers cannot be ignored.

Enginear
February 27, 2010 8:08 pm

My thanks to Dr Spencer,
As bad as this sounds, it appears to me there is a consensus. We need to redo the temperature data, all the data, including paleo. The problem lies in who should do this work and what the rules are. For the surface station historical data, I think it would be wise to contract an auditing firm (or firms), give them a set of rules and methods, and let the data do its work. All the work needs to be explained plainly, in language that someone without a master’s degree can understand, and without all the acronyms.
All of the research uses the “fact” of the unprecedented warming as the basis for its findings. The problem is we don’t know how much it really warmed, so we can’t be sure it’s unprecedented. Don’t tell me there isn’t time. None of the dire predictions have even hinted at becoming true. And, given the choice, I’ll take warmer vs. colder any day, assuming we’re influencing the climate. I vote for a start-over.
Sorry for the rant,
Barry Strayer

DR
February 27, 2010 8:09 pm

OK, maybe I’m missing something, or it’s because I didn’t read the previous post.
Is Roy Spencer saying the U.S. record is way off but the global record is in agreement with Jones?

suricat
February 27, 2010 8:09 pm

scienceofdoom (18:02:24) :
“Perhaps as Roger Pielke Sr says we should really focus on ocean heat content and not on measuring the temperature 6ft off the ground in random locations around the world.”
Yes. Most of Earth’s surface is water, so why make land-based observations ‘prima facie’ for global obs?
Perhaps this is because we live on land and not water! However, I also put more weight on OHC than on the surface record.
Best regards, suricat.

February 27, 2010 8:14 pm

Claude Harvey:
I’m with you on the satellite measurements. They add a much-needed check on surface temperatures and are perhaps more reliable, because the micro-climate impacts on a relatively small number (a few thousand) of weather stations could be significant.
Whereas it’s much harder for those changes to impact the whole of the lower troposphere.
Also, perhaps more to the point, as I think you are suggesting – the land temperatures can be significantly affected by the oceans “puking up” stored heat.
It’s all about energy. The oceans store 1000x more energy than the atmosphere.
So a few months where deeper water (which is colder) gets turned over to the surface will result in colder land and sea surface temperatures.
But it hasn’t actually meant that the earth has cooled. And the reverse is true as well.
It’s supposed to be harder to measure OHC, but every time I see another one of these articles I think it must be easier to measure OHC. And seeing OHC instead of temperature is much more meaningful.

David L. Hagen
February 27, 2010 8:17 pm

Re: Jack Wedel (Feb 27 18:33),

Mercury freezes solid at -40 F

Amazing. Mercury actually freezes!
NIST reports a triple point of 234.3156 K (~ -38.8344 deg C, or about -37.90 deg F).
Now I wonder how they measured -45 F to -55 F with a mercury thermometer on the DEW line? (The difficulties of selective citation and/or memory!)

In the winter most stateside thermometers would be useless – they don’t go low enough. Temperatures usually range between 40° and 50° below zero, but 60° and 65° below are not uncommon. The record low recorded at one site was a frigid 86° below zero. In summer the mercury rises to the 60° level, but seldom higher.

The Distant Early Warning (DEW) Line: A Bibliography and Documentary Resource List
Maybe they used “Spirit Filled” thermometers?
(Wonder if those were developed in Wales?)

G.L. Alston
February 27, 2010 8:25 pm

Graeme W — Unless the word “spurious” has a specific meaning in climate research, I found the use of the word here to indicate a strong bias of “I’m right and the other is wrong”.
Spurious data is generally that which is false and ultimately caused by an outside factor. You should be able to look at a data plot and see that which is not natural.
BarryW — If this is changing over time (more radiative cooling?), could that not be what you’re seeing?
I don’t see that this is important. You could record once a day, and as long as the temp was recorded at the same time each day, regardless of min/max, this would still yield enough information to detect an overall trend when viewed at a long enough timescale. The actual temp isn’t meaningful; only the derivative signal has meaning in this case.
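A quick simulation makes the point (NumPy, invented numbers): one fixed-time reading per day, with a seasonal cycle and weather noise on top, still recovers the underlying trend:

```python
import numpy as np

rng = np.random.default_rng(0)
days = np.arange(365 * 30)                         # 30 years of daily readings
trend_per_day = 0.02 / 365                         # i.e. 0.2 C per decade
temps = (15 + trend_per_day * days
         + 10 * np.sin(2 * np.pi * days / 365.25)  # seasonal cycle
         + rng.normal(0, 3, days.size))            # weather noise

slope = np.polyfit(days, temps, 1)[0] * 365 * 10   # back to C per decade
print(f"recovered trend: {slope:.2f} C/decade (true value: 0.20)")
```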
****
Dr. Spencer —
It seems to me that if we have reliable nighttime ground based temps of desert areas then looking at these would be the best indicator re whether CO2 has any effect at all — i.e. since deserts lack water vapour, wouldn’t a warming signal tell us if what warming exists is based on CO2 or other GHG’s that are not water vapour? Or am I missing something?
Thanks!

Editor
February 27, 2010 8:27 pm

Just The Facts (18:47:57) :
Retracted, posted on wrong thread, D’oh!

Ivan
February 27, 2010 8:28 pm

USA 48 RURAL 1979-2009 – WARMING 0.08 degrees K PER DECADE
USA 48 URBAN 1979-2009 – WARMING 0.25 degrees K PER DECADE
USA 48 UAH 1979-2009 – WARMING 0.22 degrees K PER DECADE
So: UAH and URBAN WRONG??????
Or RURAL WRONG?????
Any thoughts?

steven mosher
February 27, 2010 8:30 pm

scienceofdoom
Population is only a PROXY for UHI.
UHI results from changes to the GEOMETRY at the surface, changes to the MATERIAL PROPERTIES, and finally waste heat from human activity. Now, typically, more people means more waste heat, tall buildings (radiative canyons), disturbed boundary layers, and surfaces that act like heat sinks.
But population is only a proxy for UHI.