Spencer: Spurious warming demonstrated in CRU surface data

Spurious Warming in the Jones U.S. Temperatures Since 1973

by Roy W. Spencer, Ph.D.

INTRODUCTION

As I discussed in my last post, I’m exploring the International Surface Hourly (ISH) weather data archived by NOAA to see how a simple reanalysis of original weather station temperature data compares to the Jones CRUTem3 land-based temperature dataset.

While the Jones temperature analysis relies upon the GHCN network of 'climate-approved' stations, whose number has been rapidly dwindling in recent years, I'm using original data from stations whose number has actually been growing over time. I use only stations operating over the entire period of record, so there are no spurious temperature trends caused by stations coming and going over time. Also, while the Jones dataset is based upon daily maximum and minimum temperatures, I am computing an average of the 4 temperature measurements at the standard synoptic reporting times of 06, 12, 18, and 00 UTC.
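
In outline, that selection-and-averaging step might look like the following minimal Python sketch (the column names and data layout here are illustrative, not the actual ISH format):

    import pandas as pd

    SYNOPTIC_HOURS = [0, 6, 12, 18]  # standard synoptic reporting times (UTC)

    def full_record_stations(obs: pd.DataFrame, first_year=1973, last_year=2009):
        """Keep only stations reporting in every year of the record, so that
        stations coming and going cannot introduce spurious trends."""
        years = obs.assign(year=obs["date"].dt.year).groupby("station_id")["year"].nunique()
        return years[years == (last_year - first_year + 1)].index

    def daily_means(obs: pd.DataFrame) -> pd.Series:
        """Average the 00, 06, 12 and 18 UTC readings into one value per
        station per day, rather than averaging the daily max and min."""
        synoptic = obs[obs["utc_hour"].isin(SYNOPTIC_HOURS)]
        return synoptic.groupby(["station_id", "date"])["temp_c"].mean()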

U.S. TEMPERATURE TRENDS, 1973-2009

I compute average monthly temperatures in 5 deg. lat/lon grid squares, as Jones does, and then compare the two different versions over a selected geographic area. Here I will show results for the 5 deg. grids covering the United States for the period 1973 through 2009.
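
The gridding step can be sketched the same way, continuing from the daily station means above (names again illustrative):

    def monthly_grid_means(daily: pd.Series, locations: pd.DataFrame) -> pd.Series:
        """Daily station means -> monthly means per 5-deg lat/lon box.
        locations maps station_id to lat/lon columns (hypothetical layout)."""
        df = daily.reset_index().merge(locations, on="station_id")
        df["grid_lat"] = (df["lat"] // 5) * 5   # SW corner of the 5-deg box
        df["grid_lon"] = (df["lon"] // 5) * 5
        df["month"] = df["date"].dt.to_period("M")
        # average each station's days first, then the stations within a box
        per_station = df.groupby(["grid_lat", "grid_lon", "station_id", "month"])["temp_c"].mean()
        return per_station.groupby(["grid_lat", "grid_lon", "month"]).mean()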

The following plot shows that the monthly U.S. temperature anomalies from the two datasets are very similar (anomalies in both datasets are relative to the 30-year base period from 1973 through 2002). But while the monthly variations are very similar, the warming trend in the Jones dataset is about 20% greater than the warming trend in my ISH data analysis.
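
The anomaly and trend computations behind this comparison are equally simple. A minimal sketch, assuming a monthly temperature series indexed by month, with anomalies taken against the 1973-2002 base period described above:

    import numpy as np
    import pandas as pd

    def anomalies(monthly: pd.Series) -> pd.Series:
        """Subtract each calendar month's 1973-2002 mean from a series
        indexed by a pd.PeriodIndex of months."""
        base = monthly[(monthly.index.year >= 1973) & (monthly.index.year <= 2002)]
        climatology = base.groupby(base.index.month).mean()
        return monthly - climatology.reindex(monthly.index.month).to_numpy()

    def trend_per_decade(anom: pd.Series) -> float:
        """Ordinary least-squares slope of monthly anomalies, deg C per decade."""
        t = np.arange(len(anom)) / 120.0               # months -> decades
        slope, _intercept = np.polyfit(t, anom.to_numpy(), 1)
        return slope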

[Figure: CRUTem3 and ISH U.S. temperature anomalies, 1973-2009]

This is a little curious since I have made no adjustments for increasing urban heat island (UHI) effects over time, which likely are causing a spurious warming effect, and yet the Jones dataset which IS (I believe) adjusted for UHI effects actually has somewhat greater warming than the ISH data.

A plot of the difference between the two datasets is shown next, which reveals some abrupt transitions. Most noteworthy is what appears to be a rather rapid spurious warming in the Jones dataset between 1988 and 1996, with an abrupt “reset” downward in 1997 and then another spurious warming trend after that.

[Figure: CRUTem3 minus ISH difference, U.S., 1973-2009]

While it might be a little premature to blame these spurious transitions on the Jones dataset, I use only those stations operating over the entire period of record, which Jones does not do. So, it is difficult to see how these effects could have been caused in my analysis. Also, the number of 5 deg grid squares used in this comparison remained the same throughout the 37 year period of record (23 grids).

The decadal temperature trends by calendar month are shown in the next plot. We see in the top panel that the greatest warming since 1973 has been in the months of January and February in both datasets. But the bottom panel suggests that the stronger warming in the Jones dataset seems to be a warm season, not winter, phenomenon.
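
The per-calendar-month trends plotted below can be computed with the same ingredients; a sketch, continuing the assumptions above:

    import numpy as np
    import pandas as pd

    def trends_by_calendar_month(anom: pd.Series) -> pd.Series:
        """Fit a separate decadal trend to each calendar month's anomalies
        (anom indexed by a pd.PeriodIndex of months)."""
        result = {}
        for m in range(1, 13):
            sub = anom[anom.index.month == m]
            decades = sub.index.year.to_numpy() / 10.0
            result[m] = np.polyfit(decades, sub.to_numpy(), 1)[0]
        return pd.Series(result)   # deg C per decade, keyed Jan=1 .. Dec=12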

[Figure: CRUTem3 vs. ISH U.S. temperature trends by calendar month, 1973-2009]

THE NEED FOR NEW TEMPERATURE REANALYSES

I suspect it would be difficult to track down the precise reasons why the differences in the above datasets exist. The data used in the Jones analysis has undergone many changes over time, and the more complex and subjective the analysis methodology, the more difficult it is to ferret out the reasons for specific behaviors.

I am increasingly convinced that a much simpler, objective analysis of original weather station temperature data is necessary to better understand how spurious influences might have impacted global temperature trends computed by groups such as CRU and NASA/GISS. It seems to me that a simple and easily repeatable methodology should be the starting point. Then, if one can demonstrate that the simple temperature analysis has spurious temperature trends, an objective and easily repeatable adjustment methodology should be the first choice for an improved version of the analysis.

In my opinion, simplicity, objectivity, and repeatability should be of paramount importance. Once one starts making subjective adjustments of individual stations’ data, the ability to replicate work becomes almost impossible.

Therefore, more important than the recently reported “do-over” of a global temperature reanalysis proposed by the UK’s Met Office would be other, independent researchers doing their own global temperature analysis. In my experience, better methods of data analysis come from the ideas of individuals, not from the majority rule of a committee.

Of particular interest to me at this point is a simple and objective method for quantifying and removing the spurious warming arising from the urban heat island (UHI) effect. The recent paper by McKitrick and Michaels suggests that a substantial UHI influence continues to infect the GISS and CRU temperature datasets.

In fact, the results for the U.S. I have presented above almost seem to suggest that the Jones CRUTem3 dataset has a UHI adjustment that is in the wrong direction. Coincidentally, this is also the conclusion of a recent post on Anthony Watts’ blog, discussing a new paper published by SPPI.

It is increasingly apparent that we do not even know how much the world has warmed in recent decades, let alone the reason(s) why. It seems to me we are back to square one.

Ian H

If anyone were to question the need for data to be freely available, it is work like this that makes the case clear.

Allan M

“In my opinion, simplicity, objectivity, and repeatability should be of paramount importance.”
Isn’t this the case in most things in life.
“Once one starts making subjective adjustments of individual stations’ data, the ability to replicate work becomes almost impossible.”
This is probably the motive.

Mr Lynn

Dr. Spencer:
. . .It is increasingly apparent that we do not even know how much the world has warmed in recent decades, let alone the reason(s) why. It seems to me we are back to square one.

Or even whether the world has warmed in recent decades?
That’s the real ‘square one’.
/Mr Lynn

Dirk

Well done, nice work; falling eyelids, see you soon.

suricat

“It is increasingly apparent that we do not even know how much the world has warmed in recent decades, let alone the reason(s) why. It seems to me we are back to square one.”
I wholeheartedly concur with you Roy.
Not only do we have a lack of station resolution; we can't tell if the effects of clouds (and rain) are evident with only four samples per day. We also have network resolution problems: any station node within a network can only accurately report temperature within a ~500 m radius.
Needless to say, without a full resolution it is only too easy to end up with a false result! Your adherence to only ‘surviving stations’ shows this.
Best regards, suricat.

pat

Well, I think we all know what this means: scientific fraud. How many other disciplines have been contaminated by spurious, agenda-driven analysis and data alteration?

ROM

The climate warming onion is being peeled and the closer to the core it is peeled, the more rotten that core seems to be.
The first layer was the principal advocates of the CO2 based climate warming, the CRU scientists who by their own words were shown to have massaged and corrupted and possibly deleted relevant data to achieve their personal agendas.
Then the single most important supposedly science-based climate organisation in the whole climate warming scam, the IPCC, was shown, again by its own writings, to be rotten and corrupt and to have deliberately taken on an advocacy role in advising and attempting to influence the world's governments to alter the very social structure of the way most of the world's peoples live.
Now, right down near the onion's increasingly rotten core, it seems that the very data on which the supposed catastrophic rising of global temperatures is based, and from which all the claims of global warming emanate, is being shown to have been either accidentally distorted and compromised due to complete incompetence on the part of the advocate climate "scientists", or deliberately and wantonly twisted, massaged and altered by those same "scientists" to achieve a preordained result.
From this it appears that nearly all of the papers, articles and opinions which relied entirely on the veracity of the supposedly science based data supporting the concept of catastrophic global warming are no longer worth the paper they are printed on or the gigabytes of electrons that flowed from their publication.
Why should we ever trust in any way these “scientists” ever again?
If these are the standards of honesty and integrity that so many of the world's scientists apparently accepted of climate science for nearly two decades, why should we, the public who pay the salaries and the often lavish grants that fund science, ever again place any trust in any science until science itself is openly seen to be cleaning out its filthy Augean stables?

Carbon Dioxode

Thank you Roy.
Once again it would seem that Prof Jones reached his conclusion and then selected data to support it.
If this is allowed to stand, the Enlightenment and the Age of Reason may as well never have happened, and we may as well go back to using Aristotle as the fount of all wisdom and pigs' bladders as a means of predicting earthquakes.

Graeme W

A plot of the difference between the two datasets is shown next, which reveals some abrupt transitions. Most noteworthy is what appears to be a rather rapid spurious warming in the Jones dataset between 1988 and 1996, with an abrupt “reset” downward in 1997 and then another spurious warming trend after that.

Unless the word "spurious" has a specific meaning in climate research, I found its use here to indicate a strong bias of "I'm right and the other is wrong". I would personally prefer a more neutral term, such as "anomalous", which indicates that there is a strange difference without any indication as to what it means or which dataset is correct. After all, it is stated that the actual figures being compared are not the same measures. It could be that the difference is due to that fact alone.
Having said that, I found it very interesting. I’ve wondered several times recently what temperature anomaly graphs would look like purely off raw data without any modification.

J.Peden

It is increasingly apparent that we do not even know how much the world has warmed in recent decades, let alone the reason(s) why. It seems to me we are back to square one
It’s been painfully apparent for quite some time that the elitist Climate Scientists themselves did not care about their “science” enough to start from the beginning in deciding what they were measuring and how best to measure it.
It is perhaps even more astonishing that many other people who think they are Climate Scientists and also feed off “Climate Science” didn’t care enough about their all important basic claim – that “it” is warming “globally” – to personally look into what “it” is and how “it” is measured.
They’ve only had over 20 years to find out.

Looks like a straightforward approach to me, as it follows the KISS principle.
And the findings are interesting, to say the least.
It would be great if it could be the start of a genuine discussion with the people that created the CRUT data. And by discussion I mean a focus on content, not on trying to discredit the other party.
As a newcomer to the climate discussion I am surprised how the two camps talk (badly) about each other instead of with each other.

John Whitman

Dr Spencer,
No, the achievement of your article above is not just developing a dataset yourself and comparing it to the CRU dataset, thereby opening questions about their adjustments . . . though thank you for that.
The truly significant achievement of your article is your contribution to the art of scientific communication. The clearness of your writing strongly illuminates the topic.
I am most grateful for your clear professional style, secondarily grateful for your contribution to the knowledge of the US Surface Temperature Records.
Let there be light . . . . on the temperature dataset.
John

oh excellent post Roy on several counts. Thank you.
I would like to see a century of global mean temperature changes estimated from individual stations which all have long and checkable track records, with individual corrections for UHI and other site factors, rather than the highly contaminated gridded soup made from hugely varying numbers of stations.
The January spike in warming trends suggests UHI to me. Exactly the same is seen in the Salehard (Yamal) record over recent years.

Kevin Kilty

Well, ask and ye shall receive. I wrote about using first-order stations to have a look at temperature trends on a thread earlier today, and now find Dr. Spencer has already done something similar.

Pamela Gray

It is important to know what kind of research is being done on temperature data. And since we cannot yet replicate Jones' research, we are left to verify the null hypothesis, or in this case, not verify it. This is a good example of verification research (done a different way, with different analysis, a different data set, etc.) supporting the null hypothesis (i.e. the CO2 increase is not greatly warming the atmosphere), in contrast to Jones' work, which rejects it. The design is simple, straightforward, transparent, and leaves the ball in the other court to replicate your work and attempt to verify it or not.

Pamela Gray

And by the way, the paper Leif cited re: forcing left 25% of the warming unexplained. Could a calculation error in Jones' temperature enhancements be that 25%?

Ivan

Wouldn’t be much easier and fruitful for dr Spencer to compile the rural stations in the USA and calculate the trend from this data set, without trying to correct Jones’s mistakes in constructing his temperature index. And then maybe to compare this rural trend with his own UAH trend? For the USA 48, UAH finds decadal trend of 0.22 degrees C. Preliminary analysis of Dr Long based on just rural stations in the USA shows approximately 0.07 or 0.08 per decade. Isn’t that a peculiar inconsistency worth of exploring a little bit more (especially when coupled with an even more peculiar CONSISTENCY between Spencer’s data and NOAA urban, adjusted trend)???

David L. Hagen

Very insightful explorations.
Good to have US “anthropogenic warming” confirmed!
Could any of the January/February "warming" be due to a "daylight saving time" impact on temperature reading times?
Or could the change in heating period affect the UHI?
Other examples of "anthropogenic" influence on temperatures are shown in:
Fabricating Temperatures on the DEW Line

For numerous reasons many reports were fabricated. No one imagined their fabrications would comprise a data set that would, in future years, be used to detect minor global warming trends and trigger a panic in the world.
Some of the reasons why the reports were fabricated: . . .
(The significance of the difference between -55F and -45F was not appreciated. Both temperatures would freeze your balls off. So why split hairs?) . . .
(a.) physical discomfort of leaving a warm environment and venturing out into the extreme weather conditions to read mercury thermometers located about 200 ft. from the living modules.
(b.) fear of frost bite, getting disoriented by limited visibility, or being mauled by marauding polar bears. . . .
Missing data happens even when polar bears aren’t prowling between you and the thermometer.

The more you read about land-based temperature measurements the more confused you get (well I do).
What was interesting reading the 2009 paper about UHI in Japan (which was reposted on this blog) was that there were a large number of high quality stations with hourly measurement and yet the “correlation” between temperature rise and population density was relatively low (0.4).
At the same time there was a clear trend showing increasing population density caused higher temperature measurements to a 99% significance.
What that means is that there is definitely a UHI effect in Japan. And also that the variation is huge – microclimate effects probably.
(The paper also showed that there had definitely been a significant real warming in Japan over three decades).
Perhaps as Roger Pielke Sr says we should really focus on ocean heat content and not on measuring the temperature 6ft off the ground in random locations around the world.

John Whitman

Anthony,
Thank you for posting Dr Spencer’s article.
Anthony, are you (and Dr Spencer) thinking what I am thinking?
You did it with thermometers. It is time to do it one level up: this time, a SURVEY OF THE 23 GRIDS OF THE US SURFACE TEMPERATURE DATASET. Do it by assigning one of the 23 grids (5 deg grid squares) to a volunteer to survey the dataset grid cell by grid cell. I can help, though I am no statistician.
John

kim

OK, new contest: find something correct in An Inconvenient Truth.
=================================

Keith Minto

I am computing an average of the 4 temperature measurements at the standard synoptic reporting times of 06, 12, 18, and 00 UTC.
Are these reported manually, by reading bulb thermometers, and are maxima and minima calculated from these events? These reporting times are presumably convenient and standard, and would not necessarily coincide with daily maxima/minima.
This has always puzzled me; even the old bulb thermometers had maximum/minimum markers that could be read and reset once each day, and this information would be even more readily available from the newer thermistors. So why have 'reporting times'?
Good thoughtful article, Dr Spencer.

c james

Slightly OT….Have you seen Al Gore’s article in the New York Times where he calls us a “criminal generation” if we ignore AGW? This was published today.
http://www.nytimes.com/2010/02/28/opinion/28gore.html?hp

Mindbuilder

I'd like to propose a new standard for climate research. I propose that every paper include a zip file containing all data and a script, initiated by a single command, that will automatically carry out all calculations of every number AND graph in the paper. Any needed manual modification of raw data should be carried out by explicit lines in the script, along with an explanation. All needed software should be included in the zip file if possible; therefore, open source software should be strongly encouraged. In order to save download bandwidth, it may be permissible for the script to specify separate data packages or software packages by cryptographic hash, if those packages would be frequently used in many papers. This way we would not only have the data and the code, we would know that both the data and the code were the ones used in the calculations, and we could easily check that the calculations were repeatable. This procedure would add a small burden, especially to the plotting of graphs, which would have to be scripted, but it would dramatically increase the credibility of the research. Skeptics may be able to almost force climate researchers to use this method by using it themselves, and thus establish it as a required best practice.
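
To illustrate the cryptographic hash idea, a minimal Python sketch (the file names and hashes are placeholders, not from any actual paper):

    import hashlib

    def sha256_of(path):
        """Hash a package so readers can confirm it is byte-for-byte
        the one the paper's calculations actually used."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    EXPECTED = {
        "raw_station_data.zip": "<sha256 published with the paper>",
        "analysis_scripts.tar.gz": "<sha256 published with the paper>",
    }

    for name, published in EXPECTED.items():
        print(name, "OK" if sha256_of(name) == published else "MISMATCH")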

Jack Wedel

Hey David. Mercury freezes solid at -40 F

a jones

As I have commented elsewhere, even if the data were carefully recorded and logged, with everything done shipshape and Bristol fashion, I suspect that few physicists (the experts in precision measurement) and few statisticians (the experts in reconciling large data sets) [and note few self-styled climatologists are expert in either discipline] would regard the resulting figure as anything but a statistical artifact, which might or might NOT bear some relationship to Global Temperature, if indeed that term itself has any meaning.
Since there is no way either to know whether there is any relationship or to find whether one might exist, it seems to me that, however laudable the idea of cleaning up the surface temperature data might seem, it is a futile exercise that can tell us nothing except how badly the original work was done by these selfsame self-styled climatologists.
And in referring to these rogues as climatologists I mean no disrespect to the many genuine scientists who toil in the field including Dr. Spencer.
The fact is we don't need this data; we are in the satellite era, which can provide all the data we need. We have the ARGO buoy system. Although we are still learning how to use them, we have satellite sea level measurements and even gravitational measurements. And we can measure from space both TSI and the reflected radiation from the earth. In short, all the tools we need.
For if there is any lesson to learn from this mess, it is that nothing much happened to the Global climate in the 20th century, and rather than trying to analyse this non-event with inadequate tools, it is far better to see for ourselves what is really happening, if anything, now and in the future, so that in the next few decades we really will have a better, if imperfect, understanding of what is going on.
And be assured despite alarmist urgings to the contrary there is no urgency about this, we can take our time because nothing cataclysmic in climate terms is going to happen in the next few hundred years or so. However much fossil fuel we burn. Or how many babies are born.
There is a wonderful word which I discovered in the Times of London today, Plunderbund. It is credited as German 1949 and means a corrupt political, commercial and financial alliance.
Well now the AGW plunderbund is collapsing perhaps we can get back to doing some real science again.
Kindest Regards

I've thought often about UHI and how to get around it. Surface stations are simply subject to too many variables. Tree grows too tall. Tree falls down. Someone puts up a building. I came up with one odd idea, which was to stop trying to avoid the UHI and use it instead.
In every urban centre stick a weather station at the top of the tallest building, right downtown. Up on a pole or something so that air conditioners and other things on the roof are eliminated as much as possible. Since it is the tallest building, it can’t get shade from another building, nearby buildings don’t just fall down on their own, and if someone builds a new and taller building, you will know well in advance. Then you build two or three concentric rings of weather stations, all sited on the top of the tallest building in that area, right out to the edge of the ‘burbs. That should allow you to measure the temperature gradient between city centre and city edge. Now here’s the interesting part.
Every urban centre that has a handful of properly sited weather stations in the surrounding area now becomes a hub. The "UHI free" temperature data can now be compared to the "UHI included" data and the UHI gradient for each hub calculated. In every hub where we are lucky enough to have historical data from both right-downtown weather stations and rural weather stations, we should be able to "extract" the UHI signal from the downtown weather station data and extrapolate the downtown data backward without the UHI signal. Going forward in time we now have trend line information on both fluctuations in UHI (which ought to be interesting all on their own) and temperature trends from downtown weather stations from which the temperature without UHI can be derived.
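
As a rough sketch of that arithmetic (hypothetical names; monthly pandas series assumed):

    import numpy as np

    def uhi_gradient(downtown, ring_stations):
        """Monthly downtown-minus-ring difference over their overlap:
        the UHI signal for this hub. downtown is a Series, ring_stations
        a DataFrame with one column per edge-of-suburbs station."""
        return (downtown - ring_stations.mean(axis=1)).dropna()

    def uhi_removed(downtown_full, gradient):
        """Fit a linear trend to the gradient and subtract it from the whole
        downtown record, extrapolating the UHI growth backward in time."""
        t = gradient.index.year.to_numpy() + (gradient.index.month.to_numpy() - 1) / 12.0
        slope, intercept = np.polyfit(t, gradient.to_numpy(), 1)
        t_full = downtown_full.index.year.to_numpy() + (downtown_full.index.month.to_numpy() - 1) / 12.0
        return downtown_full - (slope * t_full + intercept)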
Thoughts?

Hello David
I am trying to understand all of the potential drivers of Earth’s climate system;
http://www.physicalgeography.net/fundamentals/7y.html
http://oceanservice.noaa.gov/education/pd/climate/factsheets/whatfactors.pdf
and determine which ones are primarily responsible for recent and forthcoming changes in Earth’s climate system.
There seems to be reasonable evidence of a significant ocean component based on the cycles of the Pacific Decadal Oscillation and Atlantic Multidecadal Oscillation;
http://icecap.us/docs/change/ocean_cycle_forecasts.pdf
http://www.appinsys.com/GlobalWarming/PDO_AMO.htm
http://www.atmos.washington.edu/~mantua/REPORTS/PDO/PDO_egec.htm
http://www.atmos.washington.edu/~mantua/REPORTS/PDO/PDO_cs.htm
And there also seems to be reasonable evidence for a significant volcanic component based historical observation:
http://www.geology.sdsu.edu/how_volcanoes_work/climate_effects.html
http://www.longrangeweather.com/global_temperatures.htm
http://adsabs.harvard.edu/abs/1991vci..nasa…..R
How significant a factor do you consider solar variability as a driver of recent and forthcoming changes in Earth’s climate system as compared to the impact of ocean cycles, volcanic activity, natural variability and other factors?

Larry

Good work, Roy. I hope another follow-up book on the AGW subject is forthcoming for the layman, to help better explain all the new information you have found. Well, maybe sometime soon, anyway. I know you’re busy.

Doug in Seattle

Dr. Spencer:
Why the insistence on using grids? It would seem, given the irregular placement of stations, that a TIN would be a better choice. It would also allow better separation of land and ocean, since so many land stations (lighthouses/marinas) and ocean stations (buoys/oil platforms) are close to shorelines.
I see no reason why the TIN polygons would be any more difficult to work with, and I think it would be easier to eliminate hot spots.

John Blake

Valid data, evaluated with integrity, conclusions not in 180-degree opposition to manifest results: too much to ask? Only because "climate studies" is not an empirical, experimental discipline but a classification exercise akin to botany, dealing only in hindsight because linear extrapolation of complex dynamic systems is mathematically and physically impossible, has this Green Gang of AGW propagandists foisted GIGO to the extent they have.
Start from scratch, by all means… but remember, since 1988 if not before, the world’s entire climatology establishment has gotten away with serial scientific murder, complicit in so many areas as to render every aspect of their endeavor suspect for a generation. By (say) 2040, as Earth enters a probable Dalton or even 70-year Maunder Minimum presaging an overdue end to our current Holocene Interglacial Epoch, this extraordinary episode will be seen for what it is: A frontal assault on industrial/technological civilization by nihilistic Luddite sociopaths (see Ehrlich, Holdren, recently Keith Farnish) bent on sabotaging, subverting, global energy economies in furtherance of an extreme radical anti-humanist agenda.
For such as these, we truly lack a word. Would “thanatocists” be apropos?

Dr. Spencer, an alternative explanation for the warming trend from 1973 to 2009 could be that January and February are not all that much warmer in recent years; rather, they were quite a bit colder in the late 1970's. That alone would create an impression of a warming trend. I refer to this on my blog as the Abilene Effect, after the small town in Texas where it is very clearly demonstrated.
http://sowellslawblog.blogspot.com/2010/01/cold-winters-created-global-warming.html

janama

I really question the accuracy of the whole system. Yesterday I visited the airport near me that I found to have no discernible warming in its 1908–2009 temperature record, and found the Stevenson screen sited in an open lawn area which would qualify as perfect under Anthony's criteria. It was locked and there was a cable out the rear, so I assume it's automatic. A few yards away was an automatic rainfall gauge.
As I left the park I asked the park manager if anyone came and read the thermometer, and he said that someone came twice a day, but he read the other meter. Other meter? Yes: there was another Stevenson screen approx. 200 yards from the original, set up on a nice lawn but with more buildings around it, though nothing I could see as a problem.
So I went back to the Bureau of Meteorology site and sure enough there was a second listing for Casino Airport, but it only had data from 1995. So I downloaded that data and put it up against the original 1908–2009 data: in 1995 they were identical, but they diverged afterwards, and it appeared that the new station ran around 0.5C–0.7C cooler.
Now I've been told we've warmed 0.7C/century, but judging by this it's +/- 0.7C.

Dr. Spencer,
I’m not sure how the satellite data are calibrated. Are they calibrated using “ground truthing” or do you use on-board calibrators? Forgive me, if you have already addressed this on your website or in earlier posts.
In any case, if you use the former, I wouldn’t use any ground-based thermometers that could conceivably be contaminated by microclimatic effects, no matter how “rural” their location. Anyone who has walked on an asphalt or concrete patch knows they don’t have the same temp as, say, grass, no matter how small the patch. This is certainly true in the daytime, as well as at night. Think, e.g., about where dew or frost or snow dusting is likely to be observed in the morning.
If you use on-board calibration devices (or whatever), is there any likelihood that they could be systematically biased (one way or the other), considering that the instruments are in a pretty harsh environment (or are they air-conditioned)?
Perhaps you can direct me to a primer on satellite temp measurements.
Thanks for your postings, BTW.
REPLY: The AMSU calibration method has been covered here: http://wattsupwiththat.com/2010/01/12/how-the-uah-global-temperatures-are-produced/
– Anthony

Squidly

Sorry, OT, but has anyone caught the newest revelations from Gore, published today in the New York Times?
http://www.nytimes.com/2010/02/28/opinion/28gore.html?ref=opinion
Someone evidently found Gore, dug his ass out of a snowbank, and wouldn’t you know it, he picks up right where he left off. Amazing BS in this Op-Ed.

hotrod ( Larry L )

Due to the complexity involved in teasing possible temperature trends out of historical temperature data, perhaps the KISS (Keep It Simple Stupid) principle is a very good place to start.
Find a geographically uncomplicated area that has multiple high quality rural stations which have long uninterrupted records.
Work out an objective, well-documented methodology to compute important characteristics of those stations and their temperature data, figuring individual trends, the trends of the whole group, etc., and apply it to that small set of stations. Then test what that methodology does when you drop a single station, or multiple stations, out of the set, so you can characterize the behavior to expect from the data as you have station dropouts and additions.
Once you are satisfied the processing method does not introduce odd or unreasonable behavior, figure out a realistic error budget for your output data.
Once you have a reliable, objective, well-documented and well-behaved process, try it on other more complicated areas.
Repeat as necessary to find the weaknesses in the process, and develop rational methods to work with different types of data problems, such as station moves and gradual urbanization.
Do all this in an open-source-model process where the wisdom of the crowd can help refine it: a process that allows independent verification and validation of the methods by those who have the special skills and experience in statistics, instrumentation and measurement precision, weather, basic physics, micro-climate effects, etc., to produce a set of well-documented code modules to perform the necessary process steps to follow this method.
Let individuals apply those code modules to various small sub sets of temperature data from around the world to verify the code modules are flexible enough and well behaved to handle real world data in a predictable manner.
Then expand the process to country sized analysis, then hemisphere size analysis etc.
My personal feeling is that this sort of walk-before-you-can-run verification was not done in the existing data set processing methods, and as a result I suspect some of the analysis methods do odd things, like that unexplained discontinuity in 1997.
Without well-documented and validated code blocks that everyone agrees behave reasonably with real data, I think it is an exercise in futility to try to process even good data and produce a trustworthy output.
I know I have been surprised more than once when what I thought was a relatively trivial programming problem had a hidden bug that did something totally unexpected in certain specific situations. Does the code complain if some station, due to an input error, shows a 72 deg high for the day when it should have been entered as 27, or does it just blindly process that value and bury it in multiple steps of processing? What does it do if the high for the day is lower than the supposed low for the day at a station?
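
A minimal sketch of such input screening (hypothetical column names; pandas DataFrame input assumed):

    def screen_daily(records):
        """Flag impossible or suspicious station-day records rather than
        letting them be buried in later processing steps."""
        # a daily high below the daily low is physically impossible
        bad_order = records["tmax_f"] < records["tmin_f"]
        # crude spike test against each station's own median, to catch
        # things like 72 entered where 27 was meant
        station_median = records.groupby("station_id")["tmax_f"].transform("median")
        spike = (records["tmax_f"] - station_median).abs() > 40.0  # illustrative threshold
        return records[~(bad_order | spike)], records[bad_order | spike]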
As I read through some of the studies and reports related to climate, you have simple statements about how the data was handled, but without knowing the actual code that processed the data you have no way of knowing whether the intended processing stated in the study actually occurred, or whether, unknown to everyone involved including the author, some computational artifact was introduced into the processed data.
We also have no idea what error checking, if any, was done on the input data to ensure that it was not corrupted at some point by hardware, software or even data entry errors.
Larry

BarryW

Dr Spencer, could your difference be partly attributable to the difference between using a max/min average and your synoptic average? A faster rate of cooling, for example, might leave the actual high and low about the same while the intermediate values are depressed, making your average lower. If this is changing over time (more radiative cooling?), could that not be what you're seeing?
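
A toy illustration of the effect described above (made-up numbers):

    # Two days with the same max (18) and min (8), hence the same max/min
    # average, but different averages over the four synoptic readings.
    slow_cooling = [12.0, 8.0, 18.0, 16.0]   # temps at 00, 06, 12, 18 UTC
    fast_cooling = [9.0, 8.0, 18.0, 13.0]

    for name, temps in [("slow cooling", slow_cooling), ("fast cooling", fast_cooling)]:
        maxmin_mean = (max(temps) + min(temps)) / 2   # 13.0 on both days
        synoptic_mean = sum(temps) / len(temps)       # 13.5 vs 12.0
        print(name, maxmin_mean, synoptic_mean)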

John Whitman

Dr Spencer,
I apologize for misspelling your name in my comment “John Whitman (18:17:19)”

John

crosspatch

“Therefore, more important than the recently reported “do-over” of a global temperature reanalysis proposed by the UK’s Met Office would be other, independent researchers doing their own global temperature analysis.”
I reached that same conclusion, Dr. Spencer, a couple of years back. It would seem that trying to keep track of all the adjustments would be a ball of snakes. One might think that some prestigious academic institution would want to create some standard repository of climate data from which research could be performed over the years.

Dave McK (17:13:09) :
I would like to see the raw data for each station plotted on a map and animated.
Forget homogenizing, gridding and all the other manipulations until first you show what you have to work with.
When each data point is color coded by absolute temperature (forget anomalies) it will be readily apparent what stations behave oddly, what are daily effects, monthly, seasonal, annual –
it will be possible to examine the month of January 100 years ago alongside the same month this year – visually, at any scale and over any time period.
This sort of representation will reveal everything: the quality of the data at each station over any time span, at the native resolution of the data; and from there you can speed it up, zoom, split-screen for comparison, anything.
You don’t know what you’ve even got yet – the very first part of the job has yet to be finished.
My reply:
The basic concept you suggest seems simple enough, were it not for the "monthly" variation from year to year that results from the lack of consideration of long-term periods of cyclic influences on weather patterns.
First, the 27.32-day periods of lunar declinational atmospheric tides slew through the 30-31-day months, as well as having a four-fold pattern with a 109-day period that repeats in the four types of Rossby wave patterns occurring.
On top of that there is the 18.6-year Mn signal of variation between the max and min culmination angles, which shows up as a very complex set of shifts in the background patterns of meridional flow surges; this has made this approach impossible in past studies where it was tried.
I think that trying to show shifts in the temps from the same season of different years has always been adversely affected by this lack of consideration of the natural patterns of atmospheric response to lunar declinational tides and their several periods of effects.
Dr. Roy has compensated well for this effect by using the same time period as the original study, effectively negating the pattern problems. I think what he has done here is valid because of this, and he is to be commended. Thanks.
The maps shown on my site reflect the similarity of the sample periods, by lunar declination patterns, season and the 18.6-year Mn period. If you have any questions on how I have applied this method, or how it could help add QA to the type of study you are suggesting, feel free to contact me.
Richard Holle

Claude Harvey

Re: scienceofdoom (18:02:24) :
“Perhaps as Roger Pielke Sr says we should really focus on ocean heat content and not on measuring the temperature 6ft off the ground in random locations around the world.”
Perhaps we should focus on the satellite measured, global average temperature at 14,000 feet as Spencer has for some time now in his monthly report. The past 9 months will bring tears to the eyes. Stand by for another “ugh” month in the midst of blizzard conditions closer to earth. It’s setting “high” records again for the month of February.
I’m a skeptic of AGW theory but not a denier of measured data that has not been unduly “adjusted” by unknown algorithms. I’m currently comforted that the dismal numbers may simply be the oceans puking up stored heat as they periodically do, but those numbers cannot be ignored.

Enginear

My thanks to Dr Spencer,
As bad as this sounds to me, it appears there is a consensus: we need to redo the temperature data, all the data, including paleo. The problem lies in who should do this work and what the rules are. For the surface station historical data I think it would be wise to contract an auditing firm (or firms), give them a set of rules and methods, and let the data do its work. All the work needs to be explained plainly, in language that someone without a master's degree can understand and without all the acronyms.
All of the research uses the "fact" of the unprecedented warming as the basis for their findings. The problem is we don't know how much it really warmed, so we can't be sure it's unprecedented. Don't tell me there isn't time: none of the dire predictions have even hinted at becoming true. And, given the choice, I'll take warmer vs. colder any day, assuming we're influencing the climates. I vote for a start-over.
Sorry for the rant,
Barry Strayer

DR

Ok, maybe I'm missing something, or it's because I didn't read the previous post.
Is Roy Spencer saying the U.S. record is way off but the global record is in agreement with Jones?

suricat

scienceofdoom (18:02:24) :
“Perhaps as Roger Pielke Sr says we should really focus on ocean heat content and not on measuring the temperature 6ft off the ground in random locations around the world.”
Yes. Most of Earth's surface is water, so why make land-based observations 'prima facie' for global obs?
Perhaps this is because we live on land and not water! However, I also put more pertinence into OHC than the surface record.
Best regards, suricat.

Claude Harvey:
I'm with you on the satellite measurements. They add a much-needed check on surface temperatures and are perhaps more reliable, because the micro-climate impacts on a relatively small number (a few thousand) of weather stations could be significant.
Whereas it’s much harder for those changes to impact the whole of the lower troposphere.
Also, perhaps more to the point, as I think you are suggesting – the land temperatures can be significantly affected by the oceans “puking up” stored heat.
It’s all about energy. The oceans store 1000x more energy than the atmosphere.
So a few months where deeper water (which is colder) gets turned over to the surface will result in colder land and sea surface temperatures.
But it hasn’t actually meant that the earth has cooled. And the reverse is true as well.
It's supposed to be harder to measure OHC, but every time I see another one of these articles I think it must be easier to measure OHC. And seeing OHC instead of temperature is much more meaningful.

David L. Hagen

Re: Jack Wedel (Feb 27 18:33),

Mercury freezes solid at -40 F

Amazing. Mercury actually freezes!
NIST reports a triple point of 234.3156 K (~ -38.8344 deg C, or -37.9019 deg F).
Now I wonder how they measured -45 F to -55 F with a mercury thermometer on the DEW line? (The difficulties of selective citation and/or memory!)

In the winter most stateside thermometers would be useless – they don’t go low enough. Temperatures usually range between 40° and 50° below zero, but 60° and 65° below are not uncommon. The record low recorded at one site was a frigid 86° below zero. In summer the mercury rises to the 60° level, but seldom higher.

The Distant Early Warning (DEW) Line: A Bibliography and Documentary Resource List
Maybe they used “Spirit Filled” thermometers?
(Wonder if those were developed in Wales?)

G.L. Alston

Graeme W — Unless the word “spurious” has a specific meaning in climate research, I found the use of the word here to indicate a strong bias of “I’m right and the other is wrong”.
Spurious data is generally that which is false and ultimately caused by an outside factor. You should be able to look at a data plot and see that which is not natural.
BarryW — If this is changing over time (more radiative cooling?), could that not be what you’re seeing?
I don’t see that this is important. You could record once a day and as long as the temp was recorded at the same time each day regardless of min/max this would still yield enough information to detect an overall trend when viewed at a long enough timescale. The actual temp isn’t meaningful; only the derivative signal has meaning in this case.
****
Dr. Spencer —
It seems to me that if we have reliable nighttime ground based temps of desert areas then looking at these would be the best indicator re whether CO2 has any effect at all — i.e. since deserts lack water vapour, wouldn’t a warming signal tell us if what warming exists is based on CO2 or other GHG’s that are not water vapour? Or am I missing something?
Thanks!

Just The Facts (18:47:57) :
Retracted, posted on wrong thread, D’oh!

Ivan

USA 48 RURAL 1979-2009: warming 0.08 deg K per decade
USA 48 URBAN 1979-2009: warming 0.25 deg K per decade
USA 48 UAH 1979-2009: warming 0.22 deg K per decade
So: UAH and URBAN wrong? Or RURAL wrong?
Any thoughts?

scienceofdoom
Population is only a PROXY for UHI.
UHI results from changes to the GEOMETRY at the surface, changes to the MATERIAL PROPERTIES, and finally waste heat from human activity. Now typically more people means more waste heat, tall buildings (radiative canyons), disturbed boundary layers, and surfaces that act like heat sinks.
But population is only a proxy for UHI.