Spencer: Spurious warming demonstrated in CRU surface data

Spurious Warming in the Jones U.S. Temperatures Since 1973

by Roy W. Spencer, Ph.D.

INTRODUCTION

As I discussed in my last post, I’m exploring the International Surface Hourly (ISH) weather data archived by NOAA to see how a simple reanalysis of original weather station temperature data compares to the Jones CRUTem3 land-based temperature dataset.

While the Jones temperature analysis relies upon the GHCN network of ‘climate-approved’ stations, whose number has been rapidly dwindling in recent years, I’m using original data from stations whose number has actually been growing over time. I use only stations operating over the entire period of record, so there are no spurious temperature trends caused by stations coming and going over time. Also, while the Jones dataset is based upon daily maximum and minimum temperatures, I am computing an average of the 4 temperature measurements at the standard synoptic reporting times of 06, 12, 18, and 00 UTC.
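To make the recipe concrete, here is a minimal sketch in Python/pandas of the two rules just described: average the four synoptic reports per day, and keep only stations present for every year of the record. This is not Dr. Spencer’s actual code; the table layout and column names are illustrative assumptions.

```python
# Sketch only -- not the actual analysis code. Assumes a DataFrame `obs`
# with columns: station, date, hour_utc, temp_c.
import pandas as pd

def synoptic_daily_mean(obs: pd.DataFrame) -> pd.DataFrame:
    """Average the 00, 06, 12, and 18 UTC reports into a daily mean."""
    synoptic = obs[obs["hour_utc"].isin([0, 6, 12, 18])]
    # Require all four reports so missing hours don't bias the daily mean.
    n = synoptic.groupby(["station", "date"])["temp_c"].transform("size")
    complete = synoptic[n == 4]
    return complete.groupby(["station", "date"], as_index=False)["temp_c"].mean()

def stations_with_full_record(daily: pd.DataFrame, years=range(1973, 2010)):
    """Keep only stations reporting in every year, so none come and go."""
    yr = pd.to_datetime(daily["date"]).dt.year
    per_station = daily.assign(year=yr).groupby("station")["year"].nunique()
    return per_station[per_station == len(list(years))].index
```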

U.S. TEMPERATURE TRENDS, 1973-2009

I compute average monthly temperatures in 5 deg. lat/lon grid squares, as Jones does, and then compare the two different versions over a selected geographic area. Here I will show results for the 5 deg. grids covering the United States for the period 1973 through 2009.

The following plot shows that the monthly U.S. temperature anomalies from the two datasets are very similar (anomalies in both datasets are relative to the 30-year base period 1973 through 2002). But while the month-to-month variations agree closely, the warming trend in the Jones dataset is about 20% greater than the warming trend in my ISH data analysis.

[Figure: CRUTem3 and ISH U.S. monthly temperature anomalies, 1973-2009]
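For readers who want to experiment, here is a sketch of the gridding, anomaly, and trend steps described above, in Python/pandas. Again, this is not the actual analysis code; it assumes the `daily` frame from the previous snippet plus per-station `lat`/`lon` columns.

```python
# Sketch only: 5 deg. gridding, 1973-2002 anomaly baseline, and OLS trend.
import numpy as np
import pandas as pd

def grid_monthly_anomalies(daily: pd.DataFrame) -> pd.DataFrame:
    d = daily.copy()
    d["date"] = pd.to_datetime(d["date"])
    d["lat_bin"] = (d["lat"] // 5) * 5          # 5 deg. grid cells
    d["lon_bin"] = (d["lon"] // 5) * 5
    d["ym"] = d["date"].dt.to_period("M")
    cell = (d.groupby(["lat_bin", "lon_bin", "ym"], as_index=False)["temp_c"]
             .mean())
    cell["month"] = cell["ym"].dt.month
    # Climatology from the 1973-2002 base period, per cell and calendar month.
    base = cell[cell["ym"].dt.year.between(1973, 2002)]
    clim = base.groupby(["lat_bin", "lon_bin", "month"])["temp_c"].mean()
    key = pd.MultiIndex.from_frame(cell[["lat_bin", "lon_bin", "month"]])
    cell["anom"] = cell["temp_c"].to_numpy() - clim.reindex(key).to_numpy()
    return cell

def decadal_trend(year_frac: np.ndarray, anom: np.ndarray) -> float:
    """OLS slope of the area-averaged anomaly series, in deg C per decade."""
    return np.polyfit(year_frac, anom, 1)[0] * 10.0
```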

This is a little curious since I have made no adjustments for increasing urban heat island (UHI) effects over time, which likely are causing a spurious warming effect, and yet the Jones dataset, which IS (I believe) adjusted for UHI effects, actually has somewhat greater warming than the ISH data.

A plot of the difference between the two datasets is shown next, which reveals some abrupt transitions. Most noteworthy is what appears to be a rather rapid spurious warming in the Jones dataset between 1988 and 1996, with an abrupt “reset” downward in 1997 and then another spurious warming trend after that.

[Figure: CRUTem3 minus ISH, U.S., 1973-2009]

While it might be a little premature to blame these spurious transitions on the Jones dataset, I use only those stations operating over the entire period of record, which Jones does not do, so it is difficult to see how these effects could have arisen in my analysis. Also, the number of 5 deg. grid squares used in this comparison remained the same (23 grids) throughout the 37-year period of record.

The decadal temperature trends by calendar month are shown in the next plot. We see in the top panel that the greatest warming since 1973 has been in the months of January and February in both datasets. But the bottom panel suggests that the stronger warming in the Jones dataset seems to be a warm season, not winter, phenomenon.

[Figure: CRUTem3 vs. ISH U.S. decadal temperature trends by calendar month, 1973-2009]
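The by-calendar-month trends behind a plot like this take only a few lines to compute; here is a hedged sketch assuming a single national-average anomaly series stored in plain NumPy arrays (hypothetical inputs, not the actual data files):

```python
# Sketch: fit a separate OLS trend for each calendar month of a national
# anomaly series. `years`, `months`, `anoms` are assumed NumPy arrays.
import numpy as np

def monthly_decadal_trends(years, months, anoms):
    """Return {calendar month: trend in deg C per decade}."""
    return {m: 10.0 * np.polyfit(years[months == m], anoms[months == m], 1)[0]
            for m in range(1, 13)}
```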

THE NEED FOR NEW TEMPERATURE REANALYSES

I suspect it would be difficult to track down the precise reasons why the differences in the above datasets exist. The data used in the Jones analysis has undergone many changes over time, and the more complex and subjective the analysis methodology, the more difficult it is to ferret out the reasons for specific behaviors.

I am increasingly convinced that a much simpler, objective analysis of original weather station temperature data is necessary to better understand how spurious influences might have impacted global temperature trends computed by groups such as CRU and NASA/GISS. It seems to me that a simple and easily repeatable methodology should be the starting point. Then, if one can demonstrate that the simple temperature analysis has spurious temperature trends, an objective and easily repeatable adjustment methodology should be the first choice for an improved version of the analysis.

In my opinion, simplicity, objectivity, and repeatability should be of paramount importance. Once one starts making subjective adjustments of individual stations’ data, the ability to replicate work becomes almost impossible.

Therefore, more important than the recently reported “do-over” of a global temperature reanalysis proposed by the UK’s Met Office would be other, independent researchers doing their own global temperature analysis. In my experience, better methods of data analysis come from the ideas of individuals, not from the majority rule of a committee.

Of particular interest to me at this point is a simple and objective method for quantifying and removing the spurious warming arising from the urban heat island (UHI) effect. The recent paper by McKitrick and Michaels suggests that a substantial UHI influence continues to infect the GISS and CRU temperature datasets.

In fact, the results for the U.S. I have presented above almost seem to suggest that the Jones CRUTem3 dataset has a UHI adjustment that is in the wrong direction. Coincidentally, this is also the conclusion of a recent post on Anthony Watts’ blog, discussing a new paper published by SPPI.

It is increasingly apparent that we do not even know how much the world has warmed in recent decades, let alone the reason(s) why. It seems to me we are back to square one.

Ian H
February 27, 2010 4:44 pm

If anyone were to question the need for data to be freely available, it is work like this that makes the need clear.

Allan M
February 27, 2010 4:53 pm

“In my opinion, simplicity, objectivity, and repeatability should be of paramount importance.”
Isn’t this the case in most things in life?
“Once one starts making subjective adjustments of individual stations’ data, the ability to replicate work becomes almost impossible.”
This is probably the motive.

February 27, 2010 5:05 pm

Dr. Spencer:
. . .It is increasingly apparent that we do not even know how much the world has warmed in recent decades, let alone the reason(s) why. It seems to me we are back to square one.

Or even whether the world has warmed in recent decades?
That’s the real ‘square one’.
/Mr Lynn

Dirk
February 27, 2010 5:06 pm

Well done, nice work, falling eyelids, see you soon.

suricat
February 27, 2010 5:06 pm

“It is increasingly apparent that we do not even know how much the world has warmed in recent decades, let alone the reason(s) why. It seems to me we are back to square one.”
I wholeheartedly concur with you Roy.
Not only do we have a lack of station resolution; we can’t tell if the effects of clouds (and rain) are evident with only four samples per day. We also have network resolution problems; any station node within a network can only accurately report temp within a ~500 m radius.
Needless to say, without a full resolution it is only too easy to end up with a false result! Your adherence to only ‘surviving stations’ shows this.
Best regards, suricat.

pat
February 27, 2010 5:16 pm

Well I think we all know what this means. Scientific fraud. How many other disciplines have been contaminated by spurious, agenda driven, analysis and data alteration?

ROM
February 27, 2010 5:18 pm

The climate warming onion is being peeled and the closer to the core it is peeled, the more rotten that core seems to be.
The first layer was the principal advocates of the CO2 based climate warming, the CRU scientists who by their own words were shown to have massaged and corrupted and possibly deleted relevant data to achieve their personal agendas.
Then the single most important supposedly science-based climate organisation in the whole climate warming scam, the IPCC, was shown, again by its own writings, to be rotten and corrupt and to have deliberately taken on an advocacy role in advising and attempting to influence the world’s governments to alter the very social structure of the way most of the world’s peoples live.
Now right down near the onion’s increasingly rotten core, it seems that the very data that the supposed catastrophic rising of global temperatures is based on and from which all the claims of global warming emanate is being shown to have been either accidentally distorted and compromised due to complete incompetence on the part of the advocate climate “scientists” or deliberately and wantonly twisted, massaged and altered by those same “scientists” to achieve a preordained result.
From this it appears that nearly all of the papers, articles and opinions which relied entirely on the veracity of the supposedly science based data supporting the concept of catastrophic global warming are no longer worth the paper they are printed on or the gigabytes of electrons that flowed from their publication.
Why should we ever trust in any way these “scientists” ever again?
If these are the standards of honesty and integrity that so many of the world’s scientists apparently accepted of climate science for nearly two decades, why should we the public, who pay the salaries and the often lavish grants that fund science, ever again place any trust in any science until science itself is openly seen to be cleaning out its filthy Augean stables?

Carbon Dioxode
February 27, 2010 5:23 pm

Thank you Roy.
Once again it would seem that Prof Jones reached his conclusion and then selected data to support it.
If this is allowed to stand, the Enlightenment and the Age of Reason might as well never have happened, and we may as well go back to using Aristotle as the font of all wisdom and pigs’ bladders as a means of predicting earthquakes.

Graeme W
February 27, 2010 5:23 pm

A plot of the difference between the two datasets is shown next, which reveals some abrupt transitions. Most noteworthy is what appears to be a rather rapid spurious warming in the Jones dataset between 1988 and 1996, with an abrupt “reset” downward in 1997 and then another spurious warming trend after that.

Unless the word “spurious” has a specific meaning in climate research, I found its use here to indicate a strong bias of “I’m right and the other is wrong”. I would personally prefer a more neutral term, such as anomalous, which indicates that there is a strange difference without specifying what it means or which dataset is correct. After all, it is stated that the actual figures being compared are not the same measures. It could be that the difference is due to that fact alone.
Having said that, I found it very interesting. I’ve wondered several times recently what temperature anomaly graphs would look like purely off raw data without any modification.

J.Peden
February 27, 2010 5:24 pm

It is increasingly apparent that we do not even know how much the world has warmed in recent decades, let alone the reason(s) why. It seems to me we are back to square one
It’s been painfully apparent for quite some time that the elitist Climate Scientists themselves did not care about their “science” enough to start from the beginning in deciding what they were measuring and how best to measure it.
It is perhaps even more astonishing that many other people who think they are Climate Scientists and also feed off “Climate Science” didn’t care enough about their all important basic claim – that “it” is warming “globally” – to personally look into what “it” is and how “it” is measured.
They’ve only had over 20 years to find out.

February 27, 2010 5:31 pm

Looks like a straight forward approach to me, as it follows the KISS principle.
And the findings are interesting, to say the least.
It would be great if it could be the start of a genuine discussion with the people that created the CRUT data. And by discussion I mean a focus on content, not on trying to discredit the other party.
As a newcomer to the climate discussion I am surprised how the two camps talk (badly) about each other instead of with each other.

February 27, 2010 5:35 pm

Dr Spencer,
No, the achievement of your article above is not developing a dataset yourself and comparing it to the CRU dataset, thereby opening questions about their adjustments . . . though thank you for that.
The truly significant achievement of your article is your contribution to the art of scientific communication. The clearness of your writing strongly illuminates the topic.
I am most grateful for your clear professional style, secondarily grateful for your contribution to the knowledge of the US Surface Temperature Records.
Let there be light . . . . on the temperature dataset.
John

February 27, 2010 5:44 pm

oh excellent post Roy on several counts. Thank you.
I would like to see a century of global mean temperature changes estimated from individual stations which all have long and checkable track records, with individual corrections for UHI and other site factors, rather than the highly contaminated gridded soup made from hugely varying numbers of stations.
The January spike in warming trends suggests UHI to me. Exactly the same is seen in the Salehard (Yamal) record over recent years.

Kevin Kilty
February 27, 2010 5:44 pm

Well, ask and ye shall receive. I wrote about using first-order stations to have a look at temperature trends on a thread earlier today, and now find Dr. Spencer has already done something similar.

Pamela Gray
February 27, 2010 5:56 pm

It is important to know what kind of research is being done on temperature data. And since we cannot yet replicate Jones’ research, we are left to verify the null hypothesis, or in this case, not verify it. Good example of verification research (done a different way with different analysis, different data set, etc.) supporting the null hypothesis (i.e. the CO2 increase is not greatly warming the atmosphere) in contrast to Jones’ work, which rejects it. The design is simple, straightforward, transparent, and leaves the ball in the other court to replicate your work, and attempt to verify it or not.

Pamela Gray
February 27, 2010 5:59 pm

And by the way, the paper Leif cited re: forcing left 25% of the warming unexplained. Could a calculation error in Jones’ temperature enhancements be that 25%?

Ivan
February 27, 2010 5:59 pm

Wouldn’t it be much easier and more fruitful for Dr. Spencer to compile the rural stations in the USA and calculate the trend from that data set, without trying to correct Jones’s mistakes in constructing his temperature index? And then maybe to compare this rural trend with his own UAH trend? For the USA 48, UAH finds a decadal trend of 0.22 degrees C. Preliminary analysis by Dr. Long based on just rural stations in the USA shows approximately 0.07 or 0.08 per decade. Isn’t that a peculiar inconsistency worth exploring a little bit more (especially when coupled with an even more peculiar CONSISTENCY between Spencer’s data and the NOAA urban, adjusted trend)???

David L. Hagen
February 27, 2010 6:01 pm

Very insightful explorations.
Good to have US “anthropogenic warming” confirmed!
Could any of the January/February “warming” be due to a “daylight saving time” impact on temperature reading times?
Or could the change in heating period affect the UHI?
Other examples of “anthropogenic” influence on temperatures are shown in:
Fabricating Temperatures on the DEW Line

For numerous reasons many reports were fabricated. No one imagined their fabrications would comprise a data set that would, in future years, be used to detect minor global warming trends and trigger a panic in the world.
Some of the reasons why the reports were fabricated: . . .
(The significance of the difference between -55F and -45F was not appreciated. Both temperatures would freeze your balls off. So why split hairs?) . . .
(a.) physical discomfort of leaving a warm environment and venturing out into the extreme weather conditions to read mercury thermometers located about 200 ft. from the living modules.
(b.) fear of frost bite, getting disoriented by limited visibility, or being mauled by marauding polar bears. . . .
Missing data happens even when polar bears aren’t prowling between you and the thermometer.

February 27, 2010 6:02 pm

The more you read about land-based temperature measurements the more confused you get (well I do).
What was interesting reading the 2009 paper about UHI in Japan (which was reposted on this blog) was that there were a large number of high quality stations with hourly measurement and yet the “correlation” between temperature rise and population density was relatively low (0.4).
At the same time there was a clear trend showing increasing population density caused higher temperature measurements to a 99% significance.
What that means is that there is definitely a UHI effect in Japan. And also that the variation is huge – microclimate effects probably.
(The paper also showed that there had definitely been a significant real warming in Japan over three decades).
Perhaps as Roger Pielke Sr says we should really focus on ocean heat content and not on measuring the temperature 6ft off the ground in random locations around the world.

February 27, 2010 6:17 pm

Anthony,
Thank you for posting Dr Spencer’s article.
Anthony, are you (and Dr Spencer) thinking what I am thinking?
You did it with thermometers. It is time to do it one level up, this time a SURVEY OF THE 23 GRIDS OF THE US SURFACE TEMPERATURE DATASET. Do it by assigning each of the 23 grids (5 deg. grid squares) to a volunteer to survey the dataset grid cell by grid cell. I can help, though I am no statistician.
John

kim
February 27, 2010 6:18 pm

OK, new contest; find something correct in An Inconvenient Truth.
=================================

Keith Minto
February 27, 2010 6:22 pm

I am computing an average of the 4 temperature measurements at the standard synoptic reporting times of 06, 12, 18, and 00 UTC.
Are these reported manually, reading bulb thermometers, and are maxima and minima calculated from these events? These reporting times are presumably convenient and standard and would not necessarily coincide with daily maxima/minima.
This has always puzzled me: even the old bulb thermometers had maximum/minimum markers that could be read and reset once each day, and this information would be even more readily available from the newer thermistors. So why have ‘reporting times’?
Good thoughtful article, Dr Spencer.

c james
February 27, 2010 6:26 pm

Slightly OT….Have you seen Al Gore’s article in the New York Times where he calls us a “criminal generation” if we ignore AGW? This was published today.
http://www.nytimes.com/2010/02/28/opinion/28gore.html?hp

Mindbuilder
February 27, 2010 6:28 pm

I’d like to propose a new standard for climate research. I propose that every paper include a zip file containing all data and a script initiated by a single command that will automatically carry out all calculations of every number AND graph in the paper. Any needed manual modification of raw data should be carried out by explicit lines in the script, along with an explanation. All needed software should be included in the zip file if possible, therefore open source software should be strongly encouraged. In order to save download bandwidth, it may be permissible for the script to specify separate data packages or software packages by cryptographic hash, if those packages would be frequently used in many papers. This way we would not only have the data and the code, we would know that both the data and the code were the ones used in the calculations, and we could easily check that the calculations were repeatable. This procedure would add a small burden, especially to the plotting of graphs, which would have to be scripted, but it would dramatically increase the credibility of the research. Skeptics may be able to almost force climate researchers to use this method by using it themselves, and thus establishing it as a required best practice.
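As a concrete illustration of this proposal (a sketch under assumed file names, not an existing tool), the single verification command could be as simple as:

```python
# Sketch of the proposed reproducibility check: verify every data and
# software file against a published SHA-256 manifest before the analysis
# script runs. "manifest.json" is a hypothetical file name.
import hashlib
import json
import pathlib
import sys

def verify_manifest(manifest_path="manifest.json"):
    manifest = json.loads(pathlib.Path(manifest_path).read_text())
    for fname, expected in manifest["sha256"].items():
        digest = hashlib.sha256(pathlib.Path(fname).read_bytes()).hexdigest()
        if digest != expected:
            sys.exit(f"{fname}: hash mismatch; inputs are not the published ones")
    print("all inputs verified; every number and graph can now be recomputed")

if __name__ == "__main__":
    verify_manifest()
```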

Jack Wedel
February 27, 2010 6:33 pm

Hey David. Mercury freezes solid at -40 F

a jones
February 27, 2010 6:36 pm

As I have commented elsewhere, even if the data were carefully recorded and logged with everything done shipshape and Bristol fashion, I suspect that few physicists (the experts in precision measurement) and few statisticians (the experts in reconciling large data sets) would regard the resulting figure as anything but a statistical artifact, which might or might NOT bear some relationship to Global Temperature, if indeed that term itself has any meaning. (And note that few self-styled climatologists are expert in either discipline.)
Since there is no way to know whether any such relationship exists, or to find whether one might exist, it seems to me that however laudable the idea of cleaning up the surface temperature data might seem, it is a futile exercise that can tell us nothing except how badly the original work was done by these selfsame self-styled climatologists.
And in referring to these rogues as climatologists I mean no disrespect to the many genuine scientists who toil in the field including Dr. Spencer.
The fact is we don’t need this data; we are in the satellite era, which can provide all the data we need. We have the ARGO buoy system. Although we are still learning how to use these tools, we have satellite sea level measurement and even gravitational measurements. And we can measure from space both TSI and reflected radiation from the earth. In short, all the tools we need.
For if there is any lesson to learn from this mess it is that nothing much happened to the Global climate in the 20th century, and rather than trying to analyse this non-event with inadequate tools it is far better to see for ourselves what is really happening, if anything, now and in the future, so that in the next few decades we really will have a better if imperfect understanding of what is going on.
And be assured despite alarmist urgings to the contrary there is no urgency about this, we can take our time because nothing cataclysmic in climate terms is going to happen in the next few hundred years or so. However much fossil fuel we burn. Or how many babies are born.
There is a wonderful word which I discovered in the Times of London today, Plunderbund. It is credited as German 1949 and means a corrupt political, commercial and financial alliance.
Well now the AGW plunderbund is collapsing perhaps we can get back to doing some real science again.
Kindest Regards

davidmhoffer
February 27, 2010 6:39 pm

I’ve thought often about UHI and how to get around it. Surface stations are just simply subject to too many variables. Tree grows too tall. Tree falls down. Someone puts up a building. I came up with one odd idea which was to stop trying to avoid the UHI and use it instead.
In every urban centre stick a weather station at the top of the tallest building, right downtown. Up on a pole or something so that air conditioners and other things on the roof are eliminated as much as possible. Since it is the tallest building, it can’t get shade from another building, nearby buildings don’t just fall down on their own, and if someone builds a new and taller building, you will know well in advance. Then you build two or three concentric rings of weather stations, all sited on the top of the tallest building in that area, right out to the edge of the ‘burbs. That should allow you to measure the temperature gradient between city centre and city edge. Now here’s the interesting part.
Every urban centre that has a handful of properly sited weather stations in the surrounding area now becomes a hub. The “UHI free” temperature data can now be compared to the “UHI included” data and the UHI gradient for each hub calculated. In every hub where we are lucky enough to have historical data from both right-downtown weather stations and rural weather stations, we should be able to “extract” the UHI signal from the downtown weather station data and extrapolate the downtown data backward without the UHI signal. Going forward in time we now have trend line information on both fluctuations in UHI (which ought to be interesting all on their own) and temperature trends from downtown weather stations from which the temperature without UHI can be derived.
Thoughts?
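One rough way to sketch the arithmetic behind this hub idea (purely illustrative Python; the linear-growth assumption for the UHI gradient is an added simplification, not part of the comment):

```python
# Sketch: treat downtown minus outer ring as the UHI signal, fit its growth
# over time, and back out a "UHI-free" downtown series.
import numpy as np

def fit_uhi_signal(years, downtown, outer_ring):
    """Linear fit to the downtown-minus-edge difference (the UHI gradient)."""
    slope, intercept = np.polyfit(years, downtown - outer_ring, 1)
    return slope, intercept

def remove_uhi(years, downtown, slope, intercept):
    """Subtract the fitted UHI signal; where only the downtown record exists,
    this extrapolates the correction backward in time."""
    return downtown - (slope * years + intercept)
```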

Editor
February 27, 2010 6:47 pm

Hello David
I am trying to understand all of the potential drivers of Earth’s climate system;
http://www.physicalgeography.net/fundamentals/7y.html
http://oceanservice.noaa.gov/education/pd/climate/factsheets/whatfactors.pdf
and determine which ones are primarily responsible for recent and forthcoming changes in Earth’s climate system.
There seems to be reasonable evidence of a significant ocean component based on the cycles of the Pacific Decadal Oscillation and Atlantic Multidecadal Oscillation;
http://icecap.us/docs/change/ocean_cycle_forecasts.pdf
http://www.appinsys.com/GlobalWarming/PDO_AMO.htm
http://www.atmos.washington.edu/~mantua/REPORTS/PDO/PDO_egec.htm
http://www.atmos.washington.edu/~mantua/REPORTS/PDO/PDO_cs.htm
And there also seems to be reasonable evidence for a significant volcanic component based on historical observation:
http://www.geology.sdsu.edu/how_volcanoes_work/climate_effects.html
http://www.longrangeweather.com/global_temperatures.htm
http://adsabs.harvard.edu/abs/1991vci..nasa…..R
How significant a factor do you consider solar variability as a driver of recent and forthcoming changes in Earth’s climate system as compared to the impact of ocean cycles, volcanic activity, natural variability and other factors?

Larry
February 27, 2010 6:48 pm

Good work, Roy. I hope another follow-up book on the AGW subject is forthcoming for the layman, to help better explain all the new information you have found. Well, maybe sometime soon, anyway. I know you’re busy.

Doug in Seattle
February 27, 2010 6:58 pm

Dr. Spencer:
Why the insistence on using grids? It would seem, given the irregular placement of stations, that a TIN would be a better choice. It would also allow better separation of land and ocean, since so many land stations (lighthouses/marinas) and ocean stations (buoys/oil platforms) are close to shorelines.
I see no reason why the TIN polygons would be more difficult to work with, and I think it would be easier to eliminate hot spots.

John Blake
February 27, 2010 7:02 pm

Valid data, evaluated with integrity, conclusions not in 180-degree opposition to manifest results: too much to ask? Only because “climate studies” is not an empirical, experimental discipline but a classification exercise akin to botany, dealing only in hindsight because linear extrapolation of complex dynamic systems is mathematically and physically impossible, has this Green Gang of AGW propagandists foisted GIGO to the extent they have.
Start from scratch, by all means… but remember, since 1988 if not before, the world’s entire climatology establishment has gotten away with serial scientific murder, complicit in so many areas as to render every aspect of their endeavor suspect for a generation. By (say) 2040, as Earth enters a probable Dalton or even 70-year Maunder Minimum presaging an overdue end to our current Holocene Interglacial Epoch, this extraordinary episode will be seen for what it is: A frontal assault on industrial/technological civilization by nihilistic Luddite sociopaths (see Ehrlich, Holdren, recently Keith Farnish) bent on sabotaging, subverting, global energy economies in furtherance of an extreme radical anti-humanist agenda.
For such as these, we truly lack a word. Would “thanatocists” be apropos?

February 27, 2010 7:20 pm

Dr. Spencer, an alternative explanation for the warming trend from 1973 to 2009 could be that January and February are not all that warmer in recent years, however, they were quite a bit colder in the late 1970’s. That alone would create an impression of a warming trend. I refer to this on my blog as the Abilene Effect, after the small town in Texas where it is very clearly demonstrated.
http://sowellslawblog.blogspot.com/2010/01/cold-winters-created-global-warming.html

janama
February 27, 2010 7:24 pm

I really question the accuracy of the whole system. Yesterday I visited that airport near me that I found to have no discernible warming in its 1908-2009 temperature record, and found the Stevenson screen sited in an open lawn area which would qualify as perfect under Anthony’s criteria. It was locked and there was a cable out the rear, so I assume it’s automatic. A few yards away was an automatic rainfall gauge.
As I left the park I asked the park manager if anyone came and read the thermometer, and he said that someone came twice a day, but he read the other meter. Other meter? Yes: there was another Stevenson screen approx. 200 yards from the original, set up on a nice lawn but with more buildings around it, though nothing I could see as a problem.
So I went back to the Bureau of Meteorology site and sure enough there was a second listing for Casino Airport, but it only had data from 1995. So I downloaded that data and put it up against the original 1908-2009 data; in 1995 they were identical but diverged afterwards, with the new station running around .5C-.7C cooler.
Now I’ve been told we’ve warmed .7C/century but judging by this it’s +/- .7C.

February 27, 2010 7:26 pm

Dr. Spencer,
I’m not sure how the satellite data are calibrated. Are they calibrated using “ground truthing” or do you use on-board calibrators? Forgive me, if you have already addressed this on your website or in earlier posts.
In any case, if you use the former, I wouldn’t use any ground-based thermometers that could conceivably be contaminated by microclimatic effects, no matter how “rural” their location. Anyone who has walked on an asphalt or concrete patch knows they don’t have the same temp as, say, grass, no matter how small the patch. This is certainly true in the daytime, as well as at night. Think, e.g., about where dew or frost or snow dusting is likely to be observed in the morning.
If you use on-board calibration devices (or whatever), is there any likelihood that they can be systematically biased (one way or the other), considering that the instruments are in a pretty harsh environment (or are they air conditioned)?
Perhaps you can direct me to a primer on satellite temp measurements.
Thanks for your postings, BTW.
REPLY: The AMSU calibration method has been covered here: http://wattsupwiththat.com/2010/01/12/how-the-uah-global-temperatures-are-produced/
– Anthony

Squidly
February 27, 2010 7:26 pm

Sorry, OT, but has anyone caught the newest revelations from Gore, published today in the New York Times?
http://www.nytimes.com/2010/02/28/opinion/28gore.html?ref=opinion
Someone evidently found Gore, dug his ass out of a snowbank, and wouldn’t you know it, he picks up right where he left off. Amazing BS in this Op-Ed.

hotrod ( Larry L )
February 27, 2010 7:28 pm

Due to the complexity involved in teasing possible temperature trends out of historical temperature data, perhaps the KISS (Keep It Simple Stupid) principle is a very good place to start.
Find a geographically uncomplicated area that has multiple high quality rural stations which have long uninterrupted records.
Work out an objective well documented methodology to compute important characteristics of those stations and their temperature data. Figuring individual trends, and the trends of the whole group etc., and apply it to that small set of stations. Then test what that methodology does when you drop a single station out of the set, or multiple stations out of the set, so you can characterize the behavior to expect from the data as you have station dropouts and additions.
Once you are satisfied the processing method does not introduce odd or unreasonable behavior, figure out a realistic error budget for your output data.
Once you have a reliable, objective and well documented and well behaving process, try it on other more complicated areas.
Repeat as necessary to find the weaknesses in the process, and develop rational methods to work with different types of data problems, such as station moves and gradual urbanization.
Do all this in an open source model process where the wisdom of the crowd can help refine the process. A process that allows independent verification and validation of the methods by those who have the special skills and experience in statistics, instrumentation and measurement precision, weather, basic physics and micro climate effects etc. to produce an set of well documented code modules to perform the necessary process steps to follow this method.
Let individuals apply those code modules to various small sub sets of temperature data from around the world to verify the code modules are flexible enough and well behaved to handle real world data in a predictable manner.
Then expand the process to country sized analysis, then hemisphere size analysis etc.
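A sketch of the station-dropout check described above (illustrative Python, not an established package): recompute the regional trend with each station withheld in turn and look at the spread.

```python
import numpy as np

def dropout_sensitivity(station_series, years):
    """station_series: {name: anomaly array}; years: matching float array."""
    def regional_trend(series_list):
        regional = np.mean(np.vstack(series_list), axis=0)
        return np.polyfit(years, regional, 1)[0] * 10.0  # deg C per decade
    full = regional_trend(list(station_series.values()))
    jackknife = {name: regional_trend(
                     [s for n, s in station_series.items() if n != name])
                 for name in station_series}
    return full, jackknife  # a large |full - jackknife[name]| flags a station
```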
My personal feeling is that this sort of walk before you can run verification was not done, in the existing data set processing methods, and as a result I suspect some of the analysis methods do odd things like that unexplained discontinuity in 1997.
Without well documented and validated code blocks that everyone agrees behave reasonably with real data, I think it is an exercise in futility to try to process even good data and produce a trustworthy output.
I know I have been surprised more than once when what I thought was a relatively trivial programming problem had a hidden bug that did something totally unexpected in certain specific situations. Does the code complain if some station, due to an input error, shows a 72 deg high for the day when it should have been entered as 27, or does it just blindly process that value and bury it in multiple steps of processing? What does it do if the high for the day is lower than the supposed low for the day at a station?
As I read through some of the studies and reports related to climate you have simple statements about how the data was handled but without knowing the actual code that processed the data you have no way of knowing if the intended processing stated in the study actually occurred, or unknown to everyone involved including the author, that perhaps some computational artifact was introduced into the processed data.
We also have no idea what if any error checking was done on the input data to ensure that it was not corrupted at some point by hardware, software or even data entry errors.
Larry

BarryW
February 27, 2010 7:35 pm

Dr Spencer, could your difference be partly attributable to the difference in using the max/min average vs your synoptic average? A faster rate of cooling, for example, might cause the actual high and low to be about the same but the intermediate values might be depressed, causing your average to be lower. If this is changing over time (more radiative cooling?), could that not be what you’re seeing?
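The question is easy to probe on synthetic data; here is a quick, purely illustrative sketch comparing the two averaging schemes on an idealized diurnal cycle, with and without faster evening cooling:

```python
import numpy as np

hours = np.arange(24)
base = 15 + 8 * np.sin((hours - 9) * np.pi / 12)  # idealized diurnal cycle
skewed = base.copy()
skewed[18:] -= 1.5                                # faster evening cooling

for name, day in [("symmetric", base), ("fast-cooling", skewed)]:
    minmax_avg = (day.max() + day.min()) / 2
    synoptic_avg = day[[0, 6, 12, 18]].mean()
    print(f"{name}: min/max avg = {minmax_avg:.2f}, "
          f"synoptic avg = {synoptic_avg:.2f}")
# The min/max average is unchanged by the evening skew, while the synoptic
# average drops, showing how the two measures can diverge over time.
```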

February 27, 2010 7:37 pm

Dr Spencer,
I apologize for misspelling your name in my comment “John Whitman (18:17:19)”

John

crosspatch
February 27, 2010 7:38 pm

“Therefore, more important than the recently reported “do-over” of a global temperature reanalysis proposed by the UK’s Met Office would be other, independent researchers doing their own global temperature analysis.”
I reached that same conclusion, Dr. Spencer, a couple of years back. It would seem that trying to keep track of all the adjustments would be a ball of snakes. One might think that some prestigious academic institution would want to create some standard repository of climate data from which research could be performed over the years.

February 27, 2010 7:43 pm

Dave McK (17:13:09) :
I would like to see the raw data for each station plotted on a map and animated.
Forget homogenizing, gridding and all the other manipulations until first you show what you have to work with.
When each data point is color coded by absolute temperature (forget anomalies) it will be readily apparent what stations behave oddly, what are daily effects, monthly, seasonal, annual –
it will be possible to examine the month of January 100 years ago alongside the same month this year – visually, at any scale and over any time period.
This sort of representation will reveal everything- quality of the data at each station on over any time span – at the native resolution of the data- and from there you can speed it up, zoom, split screen for comparison- anything.
You don’t know what you’ve even got yet – the very first part of the job has yet to be finished.
My reply;
The basic concept you suggest seems simple enough, if it were not for the “monthly” variation from year to year as a result of the lack of consideration of long term periods of cyclic influences on weather patterns.
First, the 27.32-day periods of lunar declinational atmospheric tides slew through the 30-31 day months, as well as having a four-fold pattern with a 109-day period that repeats in the four types of Rossby wave patterns that occur.
On top of that there is the 18.6-year Mn signal of variation between the Max and Min culmination angle, which shows up as a very complex set of shifts in the background patterns of meridional flow surges; this has made this approach impossible in past studies where it was tried.
I think that to try to show shifts in the temps from the same season of different years, has always been adversely affected because of this lack of consideration of the Natural Patterns of atmospheric response to Lunar declinational tides, and their several periods of effects.
Dr. Roy has compensated well for this effect by using the same time period as the original study to effectively negate the pattern problems. I think what he has done here is valid because of this, and he is to be commended. Thanks.
The maps shown on my site reflect the similarity of the sample periods, by Lunar declination patterns, season and the 18.6 Mn period, if you have any questions on how I have applied this method or how it could be helpful to add QA to the type of study you are suggesting feel free to contact me.
Richard Holle

Claude Harvey
February 27, 2010 7:56 pm

Re: scienceofdoom (18:02:24) :
“Perhaps as Roger Pielke Sr says we should really focus on ocean heat content and not on measuring the temperature 6ft off the ground in random locations around the world.”
Perhaps we should focus on the satellite measured, global average temperature at 14,000 feet as Spencer has for some time now in his monthly report. The past 9 months will bring tears to the eyes. Stand by for another “ugh” month in the midst of blizzard conditions closer to earth. It’s setting “high” records again for the month of February.
I’m a skeptic of AGW theory but not a denier of measured data that has not been unduly “adjusted” by unknown algorithms. I’m currently comforted that the dismal numbers may simply be the oceans puking up stored heat as they periodically do, but those numbers cannot be ignored.

Enginear
February 27, 2010 8:08 pm

My thanks to Dr Spencer,
As bad as this sounds to me, it appears there is a consensus. We need to redo the temperature data, all the data, including paleo. The problem lies in who should do this work and what the rules are. For the surface station historical data I think it would be wise to contract an auditing firm (or firms), give them a set of rules and methods, and let the data do its work. All the work needs to be explained plainly, in language that someone without a master’s degree can understand and without all the acronyms.
All of the research uses the “fact” of the unprecedented warming as the basis for their findings. The problem is we don’t know how much it really warmed, so we can’t be sure it’s unprecedented. Don’t tell me there isn’t time. None of the dire predictions have even hinted at becoming true. And, given the choice, I’ll take warmer vs. colder any day, assuming we’re influencing the climates. I vote for a start over.
Sorry for the rant,
Barry Strayer

DR
February 27, 2010 8:09 pm

Ok, maybe I’m missing something, or it’s because I didn’t read the previous post.
Is Roy Spencer saying the U.S. record is way off but the global record is in agreement with Jones?

suricat
February 27, 2010 8:09 pm

scienceofdoom (18:02:24) :
“Perhaps as Roger Pielke Sr says we should really focus on ocean heat content and not on measuring the temperature 6ft off the ground in random locations around the world.”
Yes. Most of Earth’s surface is water, so why make land-based observations ‘prima facie’ for global obs!
Perhaps this is because we live on land and not water! However, I also put more pertinence into OHC than the surface record.
Best regards, suricat.

February 27, 2010 8:14 pm

Claude Harvey:
I’m with you on the satellite measurements. Adds a well-needed check on surface temperatures and is perhaps more reliable – because the micro-climate impacts on a relatively small (few thousand) number of weather stations could be significant.
Whereas it’s much harder for those changes to impact the whole of the lower troposphere.
Also, perhaps more to the point, as I think you are suggesting – the land temperatures can be significantly affected by the oceans “puking up” stored heat.
It’s all about energy. The oceans store 1000x more energy than the atmosphere.
So a few months where deeper water (which is colder) gets turned over to the surface will result in colder land and sea surface temperatures.
But it hasn’t actually meant that the earth has cooled. And the reverse is true as well.
It’s supposed to be harder to measure OHC, but every time I see another one of these articles I think it must be easier to measure OHC. And seeing OHC instead of temperature is much more meaningful.

David L. Hagen
February 27, 2010 8:17 pm

Re: Jack Wedel (Feb 27 18:33),

Mercury freezes solid at -40 F

Amazing. Mercury actually freezes!
NIST reports a triple point of 234.3156 K (~ -38.8344 deg C, or -37.9019 deg F).
Now I wonder how they measured -45 F to -55 F with a mercury thermometer on the DEW line? (The difficulties of selective citation and/or memory!)

In the winter most stateside thermometers would be useless – they don’t go low enough. Temperatures usually range between 40° and 50° below zero, but 60° and 65° below are not uncommon. The record low recorded at one site was a frigid 86° below zero. In summer the mercury rises to the 60° level, but seldom higher.

The Distant Early Warning (DEW) Line: A Bibliography and Documentary Resource List
Maybe they used “Spirit Filled” thermometers?
(Wonder if those were developed in Wales?)

G.L. Alston
February 27, 2010 8:25 pm

Graeme W — Unless the word “spurious” has a specific meaning in climate research, I found the use of the word here to indicate a strong bias of “I’m right and the other is wrong”.
Spurious data is generally that which is false and ultimately caused by an outside factor. You should be able to look at a data plot and see that which is not natural.
BarryW — If this is changing over time (more radiative cooling?), could that not be what you’re seeing?
I don’t see that this is important. You could record once a day, and as long as the temp was recorded at the same time each day, regardless of min/max, this would still yield enough information to detect an overall trend when viewed at a long enough timescale. The actual temp isn’t meaningful; only the derivative signal has meaning in this case.
****
Dr. Spencer —
It seems to me that if we have reliable nighttime ground based temps of desert areas then looking at these would be the best indicator re whether CO2 has any effect at all — i.e. since deserts lack water vapour, wouldn’t a warming signal tell us if what warming exists is based on CO2 or other GHG’s that are not water vapour? Or am I missing something?
Thanks!

Editor
February 27, 2010 8:27 pm

Just The Facts (18:47:57) :
Retracted, posted on wrong thread, D’oh!

Ivan
February 27, 2010 8:28 pm

USA 48 RURAL 1979-2009 – WARMING 0.08 degrees K PER DECADE
USA 48 URBAN 1979-2009 – WARMING 0.25 degrees K PER DECADE
USA 48 UAH 1979-2009 – WARMING 0.22 degrees PER DECADE
So: UAH and URBAN WRONG??????
Or RURAL WRONG?????
Any thoughts?

steven mosher
February 27, 2010 8:30 pm

scienceofdoom
Population is only a PROXY for UHI.
UHI results from changes to the GEOMETRY at the surface and changes to the MATERIAL PROPERTIES, and finally to waste heat from human activity. Now typically more people means more waste heat and tall buildings (radiative canyons) and disturbed boundary layers and surfaces that act like heat sinks.
But population is only a proxy for uhi

tokyoboy
February 27, 2010 8:31 pm

scienceofdoom (18:02:24) :
“there is definitely a UHI effect in Japan. And also that the variation is huge – microclimate effects probably.”
Doom, yes you’re right. Our MET Office publishes this graph (sorry for the accompanying language):
http://www.data.kishou.go.jp/climate/cpdinfo/temp/an_jpn.html
and says that a temp rise of +1.13 degC is noted for past 100 years or so. However, this graph has been drawn using data from 17 stations, and most of them exhibit conspicuous warming due to urbanization, especially from the 70s. The claim by MET, that they selected sites with minimal urbanization, is utter nonsense.

steven mosher
February 27, 2010 8:33 pm

Mindbuilder (18:28:01) :
We’ve been calling for this since 2007. It’s formally called reproducible results

RockyRoad
February 27, 2010 8:53 pm

Maybe Al Gore would have a comment or two on this:
http://www.foxnews.com/scitech/2010/02/26/inconvenient-truth-for-al-gore/
I guess not.

February 27, 2010 8:55 pm

Most of the adverse things that can happen to an originally well-sited surface station will raise the average temperature, and yet the adjustments applied by GISS and now by USHCN v2 run in just the opposite direction, increasing the warming trend. Fully a third and maybe a half of the claimed warming this last century is “adjustments.”
I understand the world has six times as many people now as it did when the 20th century began, so that alone implies an overall UHI effect that really isn’t compensated for anywhere.
Recomputing with new algorithms on the adjusted data, as Dr Peterson did when he compared surfacestations.org’s best stations to the overall gov’t homogenized record, is just a waste of time. All you find is minute detail of what the government did to the real numbers.

February 27, 2010 8:56 pm

I’m not sure of the significance of pointing out the difference of a few hundredths of a degree in two different data sets.

February 27, 2010 8:59 pm

The decision of GISS to classify every station with a population of less than 10,000 as rural appears to be an error.
If one uses just the USHCN data for Missouri and Kansas there is a significant trend in temperature as a function of community size, but it is a logarithmic relationship, and works all the way down to the smallest size, which shows a greater change with growth than you see in the larger communities.
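That logarithmic relationship is straightforward to fit; here is a sketch with placeholder numbers (illustrative values, not the actual USHCN data):

```python
# Illustrative only: regress station warming trend on log10(population).
import numpy as np

population = np.array([200, 1_000, 5_000, 25_000, 120_000, 600_000])
trend = np.array([0.05, 0.08, 0.11, 0.14, 0.17, 0.20])  # deg C/decade, made up

slope, intercept = np.polyfit(np.log10(population), trend, 1)
print(f"trend ~ {intercept:.3f} + {slope:.3f} * log10(population)")
```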

Frank
February 27, 2010 9:00 pm

“simplicity, objectivity, and repeatability should be of paramount importance”
Isn’t this what people used to refer to as science? I’m getting old.

Ed
February 27, 2010 9:03 pm

I don’t understand why the first two data sets are represented with different scales. It would be nice to highlight this with more than “20% difference” between the data sets. At least for us older people with poor eyes . . .

steven mosher
February 27, 2010 9:12 pm

Dr. Spencer
If you read Brohan 06 I think you can see that CRUTEM3 is not adjusted for UHI. The text is not terribly clear, as they cite a variety of studies that give contradictory findings. It’s my belief (supported by things Jones says in the mails) that here is how CRUTEM3 handles UHI:
Jones argues in effect that his previous study showed that the UHI effect was .05C per century starting in 1900 (1 sd). He refers to papers that have figures below this and one paper with a figure of .3C per century.
He then argues that they don’t have all the metadata to assess the issue properly and consequently they use the .05C figure. This value is NOT subtracted from the series but rather is reflected in the error bars; a one-sided adjustment is applied to the error.
But I could be wrong of course

An Inquirer
February 27, 2010 9:14 pm

From the Spencer article: “yet the Jones dataset which IS (I believe) adjusted for UHI effects actually has somewhat greater warming than the ISH data.”
While the several papers from Jones have lacked clarity on this point, in the last couple of years I have become convinced that Jones’s message is that he does not make a UHI adjustment in his baseline estimate of temperature trends, but he does increase his error bars by a minuscule amount in consideration of UHI.


February 27, 2010 9:26 pm

c james (18:26:46) :
Slightly OT….Have you seen Al Gore’s article in the New York Times where he calls us a “criminal generation” if we ignore AGW? This was published today.
http://www.nytimes.com/2010/02/28/opinion/28gore.html?hp

Well, the Goracle has broken his silence. In addition to repeating all the well-worn canards of AGW alarmism, he is now explicitly attacking capitalism, “market triumphalism,” “unrestrained markets,” and “market fundamentalism.”
Except, of course, the market in ‘carbon’ trading, created by Cap and Trade legislation, where he is invested.
His true colors are showing.
/Mr Lynn

February 27, 2010 9:34 pm

steven mosher:

Population is only a PROXY for UHI.
UHI results from changes to the GEOMETRY at the surface and changes
to the MATERIAL PROPERTIES, and finally to waste heat from human
activity.

I agree it’s only one proxy. The Japan UHI paper had some more extensive analysis and discussion that I didn’t post. The paper also looked at land surface properties as well and found similar results.

As an alternative index of urbanization, an analysis based on the areal coverage of urban surface was performed.
There is a positive signal of about 0.1 °C/decade for categories 5–6, and 0.02–0.03 °C/decade for the category 3. Thus the overall feature of the relationship between U3 (land use) and δT mean is quite similar to that between D3 (population density) and δT mean.

In the conclusion:

A related problem in our result is the lack of correlation between temperature trends and the rate of changes in the areal coverage of urban surfaces. This fact may imply that urban warming is more closely related to internal changes, such as increase in business activity and building height, rather than spatial coverage of urban surfaces. In fact, the population of Tokyo has almost unchanged or even decreased since the 1960s (from 8.9 million in 1965 to 8.5 million in 2005), in which most of its domain had already been covered by urban surfaces, but still there have been substantial increase in cars and tall buildings in the central business area accompanied by an intensifying heat island (Figure 1; Kawamura, 1985). For longer time span tracing back to the early 20th century, however, urban landscapes have so changed that there may be closer relationship between changes in geographical parameters and urban temperature.

Interesting stuff, and maybe worth a follow-up post.

February 27, 2010 9:34 pm

Mike McMillan (20:55:15): “. . . compared surfacestations.org’s best stations to the overall gov’t homogenized record, is just a waste of time. All you find is minute detail of what the government did to the real numbers.”
Does anyone find it increasingly disturbing, as I do, that it is our government that is doing these data adjustments? [I did not say manipulation but it is increasingly starting to appear more and more that way to me] How did it come to be in the first place that our government has taken this role at all? What part couldn’t be done better/cheaper with more integrity by the voluntary/private sector?
I don’t have confidence that any of the current governmental processes that led us into this situation with the surface temperature datasets are capable of leading us out of it. My distrust is evolving from a background hum to irritation.
John

Kum Dollison
February 27, 2010 9:47 pm

Ivan (20:28:06) :
USA 48 RURAL 1979-2009 – WARMING 0.08 degrees K PER DECADE
USA 48 URBAN 1979-2009 – WARMING 0.25 degrees K PER DECADE
USA 48 UAH 1979-2009 – WARMING 0.22 degrees PER DECADE
So: UAH and URBAN WRONG??????
Or RURAL WRONG?????
Any thoughts?

Someone really needs to answer Ivan’s question.

KW
February 27, 2010 9:51 pm

Interesting. So winter warms a tad. Big deal. Looks like there will be non-spurious cold the next 6-14 days, eh?
http://www.cpc.noaa.gov/products/predictions/610day/index.php
http://www.cpc.ncep.noaa.gov/products/predictions/814day/

Nick
February 27, 2010 10:12 pm

If you want to keep things simple, why did you not compare apples with apples and use max/min data, Dr Spencer?

rbateman
February 27, 2010 10:29 pm

I finished up the semi-rural station of Grants Pass, Oregon.
http://www.robertb.darkhorizons.org/TempGr/GrPass1889_2009.GIF
The years of 2002-4 were a mess, with one of them missing 4 months of data
(good grief !). I used Ashland, Or. to match up the pattern and fill in.
Still, from 1920 – 2009, the median temp stays on a level plane, though the high temps dropped and the lows rose.
What’s interesting is the diurnal (bottom line) which is the difference between the median yearly high and median yearly low. It looks to be independent of warming or cooling cycles, doing more of a job on moisture content.
I’ll get around to trying it sometime with a UHI afflicted station, unless someone wants to beat me to it.

DeNihilist
February 27, 2010 10:31 pm

Here is the best example yet of torturing the data to get the result wanted!
🙂

Apu
February 27, 2010 10:32 pm

[Also, while the Jones dataset is based upon daily maximum and minimum temperatures, I am computing an average of the 4 temperature measurements at the standard synoptic reporting times of 06, 12, 18, and 00 UTC.]
You are sampling mostly at night. Could that affect your trend?

February 27, 2010 10:53 pm

rbateman (22:29:27): “I finished up the semi-rural station of Grants Pass, Oregon.”
Robert B,
I agree with your analysis that the upward (warming) lo avg trend is driving the median trend.
Nice work. Thanks.
John

rbateman
February 27, 2010 11:05 pm

KW (21:51:14) :
I have to wonder why Canada isn’t on NOAA’s radar screen.

George E. Smith
February 27, 2010 11:08 pm

Well, a tiny chink of daylight shining through. Dr Roy, I am proud of you; a whole four temperatures per day. At last we can claim to satisfy Nyquist as to the question of temporal aliasing noise, at least as it affects the daily average, which is after all what you claim to compute from that data. So no allowance for cloud variations; but hey, I’ll take any improvement at all, and 4 times daily is a step forward.
I am curious though, Dr Spencer: if I understand you correctly, your four reporting times are set to UTC, meaning you read ALL station thermometers at exactly the same time, which would be a different diurnal time for each station, at least as far as longitude shift.
I like your process; my mind asks what the local time spread does to such data (if anything). But I’ll worry about that as soon as I digest what else you are doing.
Yes, it helps to have other people reading the thermometers, or at least twiddling with the same set of numbers. Good hunting there, Dr Spencer.

Dave F
February 27, 2010 11:09 pm

May I proffer that in financial auditing we would find it odd that the ends of the graph (Jones minus ISH) show no difference, and that this would warrant further investigation? The endpoints being the only spots close to 0 is strange, and maybe coincidental, but certainly deserves a further look. Maybe there is a seasonal bias?

George E. Smith
February 27, 2010 11:14 pm

“”” Nick (22:12:25) :
If you want to keep things simple,why did you not compare apples with apples,and use max/min data, Dr Spencer? “””
Insanity has often been defined as doing the same thing over and over and expecting to get different results.
One reason not to use min/max data is that we know it fails to satisfy the Nyquist criterion, even for recovery of the daily average.
Besides, was it Einstein who said “scientific theories should be as simple as possible, but no simpler”?
The same goes for scientific data gathering, or processing.

Dave F
February 27, 2010 11:24 pm

Oops, my fault. I see now that the graph does not exactly equal 0 in Dec. Still, I find its proximity a little weird. Almost looks like a bias from too many normal distributions. 🙂

February 27, 2010 11:25 pm

Anthony (and Dr. Spencer, if you read this),
I think what Dr. Spencer is showing here is the USHCN adjustments: http://www.ncdc.noaa.gov/img/climate/research/ushcn/ts.ushcn_anom25_diffs_pg.gif , or something very close.
I can almost exactly replicate his graph by comparing GHCN v2.mean (raw data) with v2.mean_adj (adjusted data): http://i81.photobucket.com/albums/j237/hausfath/Picture61-1.png
A better test for his new temperature dataset would be to compare it to raw GHCN data (e.g. v2.mean), which would make sense since the data he is using is also raw. I suspect, based on the chart above, that they would be nearly identical. If he has concerns with the way U.S. temp data is adjusted by GHCN/USHCN, I understand, but I’m not sure how this is new news per se.
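For anyone who wants to try Zeke’s raw-vs-adjusted comparison themselves, here is a minimal sketch. It assumes the GHCN v2 fixed-width layout (a 12-character station/duplicate id, a 4-character year, then twelve 5-character monthly values in tenths of °C, with -9999 for missing) and the v2 country prefix for the US; verify both against the v2 README and inventory file before trusting the column positions.

```python
def read_v2(path, country_prefix="425"):  # "425" = USA in GHCN v2 (check inventory)
    """Parse a GHCN v2 file into {(station_id, year): [12 monthly degC or None]}."""
    records = {}
    with open(path) as f:
        for line in f:
            if not line.startswith(country_prefix):
                continue
            key = (line[0:12], int(line[12:16]))   # station+duplicate id, year
            vals = [int(line[16 + 5 * i: 21 + 5 * i]) for i in range(12)]
            records[key] = [None if v == -9999 else v / 10.0 for v in vals]
    return records

raw = read_v2("v2.mean")
adj = read_v2("v2.mean_adj")

# Mean raw-minus-adjusted difference per year, over stations present in both.
# No gridding or anomalies, so this only shows the sign and rough size of
# the adjustments, not a faithful replication of the post's gridded analysis.
for year in range(1973, 2010):
    diffs = [r - a
             for key in raw.keys() & adj.keys() if key[1] == year
             for r, a in zip(raw[key], adj[key])
             if r is not None and a is not None]
    if diffs:
        print(year, round(sum(diffs) / len(diffs), 3))
```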

George E. Smith
February 27, 2010 11:29 pm

“”” BarryW (19:35:54) :
Dr Spencer, could your difference be partly attributable to the difference in using a max/min average vs your synoptic average? A faster rate of cooling, for example, might cause the actual high and low to be about the same but the intermediate values might be depressed, causing your average to be lower. If this is changing over time (more radiative cooling?), could that not be what you’re seeing? “””
BarryW, I suspect that if Dr Spencer did exactly the same thing that Dr Phil Jones did (or his crew), we would find that Dr Spencer would get exactly the same results as Phil Jones.
That is NOT the assigned task here.
I think the idea, and possibly why Roy is doing this, IS TO TRY AND GET THE RIGHT ANSWER. Well, to the extent that there is a right answer for that data set.
Duplicating Jones’ results is not the purpose here; using his data wisely may be why Roy did this exercise.

AusieDan
February 27, 2010 11:33 pm

I did a lot of family history research 15 years ago.
It took me five years to identify the parents of my paternal grandmother, as family members had taken great care to hide her true story and the available historic records were sparse.
I made no progress at all until I disproved ALL the family stories.
At that time, I realised with horror that I had absolutely no knowledge of her parentage.
That was the big breakthrough.
I was then at the starting line and was able to reasonably quickly tease out the few valid clues left in the official records.
We may be at the true starting point with global temperature.

AusieDan
February 27, 2010 11:36 pm

More to the point, it would seem that any valid global temperature database will only contain records of rural stations.

steven mosher
February 27, 2010 11:39 pm

scienceofdoom.
Love your site and your writing. Very clear.
There are a variety of studies on UHI (my fav is the bubble study).
The presence of tall structures causes two issues: changes to the boundary layer and turbulent mixing, and radiative canyons (think of it like a corner reflector for IR).
The key, I think, is NOT to adjust for UHI, but rather to pick sites where it is less likely to occur. How many of those are there? Dunno.

steven mosher
February 27, 2010 11:44 pm

An Inquirer (21:14:46) :
Yes, on my reading he makes no adjustment. Weird. He calculated the bias and left it in, nudging the error bars asymmetrically. In the emails Susan Solomon has a hard time keeping this straight; Jones’s explanation is not lucid.

February 27, 2010 11:45 pm

Adjusting for UHI in the wrong direction? Why the surprise? GISS do it all the time in Australia (in 5 out of 8 sites I have analysed).

Beth Cooper
February 27, 2010 11:46 pm

Dr Spencer,
Thanks for your ‘open society’ investigation/ presentation.
The grey clouds are shifting,
The blue sky is lifting!

February 27, 2010 11:56 pm

Interesting that US winters look to warm the most. Here in Central Europe, April-August show visible cooling 1960-1980 and equal warming 1980-2006, many other months show almost no trend during the whole 20th century.
http://climexp.knmi.nl/data/tsicrutem3_17.5-22.5E_47.5-50N_nmonth.png
However, it is encouraging that it looks possible to replicate the datasets in a relatively simple way.

aMINO aCIDS iN mETEORITES
February 28, 2010 12:01 am

Kum Dollison (21:47:46) :
Someone really needs to answer Ivan’s question.
……………………………………………………………………………………………………………..
If these seem important to you, then you should answer them, Kum. You can be the someone.

jorgekafkazar
February 28, 2010 12:06 am

Claude Harvey (19:56:33) : “Perhaps we should focus on the satellite measured, global average temperature at 14,000 feet…It’s setting ‘high’ records again for the month of February.
“I’m a skeptic of AGW theory but not a denier of measured data that has not been unduly ‘adjusted’ by unknown algorithms. I’m currently comforted that the dismal numbers may simply be the oceans puking up stored heat as they periodically do, but those numbers cannot be ignored.”
Record snowfall means record amounts of latent heat removed from water vapor to produce ice. The ice falls to the ground; the heat remains in the atmosphere. Somewhere else, ocean heat went into vaporizing seawater. The vapor went up; the ocean cooled. Everything would balance, but high atmospheric temperatures result in increased heat loss to space. Net result: lower actual global heat content.
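Rough numbers for the latent-heat argument (standard textbook values and my own arithmetic, not figures from the comment):

```python
# Approximate latent heats near 0 degC, in J per kg
L_VAPORIZATION = 2.50e6                   # released when vapor condenses to liquid
L_FUSION = 3.34e5                         # released when liquid freezes to ice
L_DEPOSITION = L_VAPORIZATION + L_FUSION  # vapor straight to ice

# The ice falls out; roughly this much heat stays aloft per kg of snow formed.
print(f"{L_DEPOSITION:.2e} J per kg of snow")   # ~2.83e6 J
```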

Dave N
February 28, 2010 12:09 am

pat (17:16:38) :
“Well I think we all know what this means. Scientific fraud. How many other disciplines have been contaminated by spurious, agenda driven, analysis and data alteration”
Probably more than we realise. That said, the situation for climate is at best (i.e. thinking the best of the situation) post-normalism or plain incompetence, or a combination thereof. I’d vote for the former; too many scientists are becoming too complacent and/or arrogant to listen to their critics.

stan stendera
February 28, 2010 12:24 am

WAKE UP WUWT. If you don’t see what the IOP has done you are dense!
REPLY: The IOP was covered in this thread: http://wattsupwiththat.com/2010/02/27/16772/

Dave F
February 28, 2010 12:32 am

steven mosher (23:39:17) :
scienceofdoom.
Love your site and your writing. very clear.
There are a variety of studies on UHI ( my fav is the bubble study)
The presence of tall structures causes two issues.. change to the boundary layer and turbulent mixing and radiative canyons ( think of it like a corner reflector for IR)

Ha! I love it! Now imagine the effects of enough windmills to power the Eastern seaboard. Talk about disturbing atmospheric conditions…
Still, I feel we should find a way to pollute less, I just don’t feel the emergency given by my fellow humans on the CO2 issue.

Mindbuilder
February 28, 2010 12:33 am

>Mindbuilder (18:28:01) :
>We’ve been calling for this since 2007. It’s formally called reproducible results
My suggestion goes farther than just calling for reproducible results. It’s a call for a specific method that will guarantee the results are easily reproducible. By calling for an actual package and a single command running scripted calculations, it can be quickly and easily verified that the data and code actually match the results before the paper is published, and before an involved analysis of the paper is required of a reviewer. Authors couldn’t just claim that they had provided everything necessary to reproduce the results; everything would actually have to be there.
Another benefit is that it would be easier to tweak a study by minor modifications of the script.
I might be willing to relent on the requirement that graphs be generated by scripts if that is impractical. But I think even that wouldn’t be too hard. And if proprietary software packages are considered indispensable, then maybe hashes of the ISO’s of the software install disks could be allowed. Eventually statistical software manufacturers might even start releasing standard versions of their software with a published hash.
I expect skeptics could get everyone to do this if they would lead by example. Maybe we could call it “Fully scripted calculations”. Every paper should make the claim that it has fully scripted calculations for reproducibility.
If you support this idea, speak up. If you don’t support it, why not?
I place this and my previous post in the public domain so you can use them to promote the idea elsewhere if you like.
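As one possible concrete form of Mindbuilder’s idea (a sketch; every file name here is hypothetical), a single top-level script could verify the frozen input data by hash and then regenerate every result, so a reviewer either gets the paper’s numbers or an immediate, explicit failure:

```python
#!/usr/bin/env python
"""run_all.py -- reproduce every number and figure in the paper with one command."""
import hashlib
import subprocess
import sys

# Expected SHA-256 of the frozen input archive (hypothetical placeholder value).
EXPECTED = "0000000000000000000000000000000000000000000000000000000000000000"

def sha256(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

if sha256("station_data.tar.gz") != EXPECTED:
    sys.exit("Input data does not match the published hash; refusing to run.")

# Each stage is itself a script, so reviewers can rerun or tweak any one of them.
subprocess.run([sys.executable, "01_extract.py"], check=True)
subprocess.run([sys.executable, "02_gridded_anomalies.py"], check=True)
subprocess.run([sys.executable, "03_figures.py"], check=True)
print("All results regenerated from raw data.")
```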

Kum Dollison
February 28, 2010 12:48 am

Amino, why the snark? I’m just a “civilian” trying to learn.
BUT, doesn’t it appear to you that there is a Glaring Contradiction in those numbers?
Let’s put it real simple. Everyone Raved over those Rural numbers. Then Everyone “Raved” over the UAH numbers; BUT they are in Direct Contradiction.
Ivan is right to ask, Which Is It?

Louis Hissink
February 28, 2010 1:14 am

Roy Spencer’s comments about the measurement of the Earth’s thermal state, from satellite measurements, are based on the underlying physics, a radiating sphere immersed in a vacuum.
Another approach is via the Plasma Universe model, which considers the Earth as an electrically connected object encapsulated by (possibly) cascading Langmuir sheaths, or plasma double layers.
If so, then what are the satellite sensors measuring?
The thermal state of…….what, the Earth’s physical surface?
The thermal state of…….what else?
The point I make here is that, like it or not, present day tests are based on confirming the science, not disproving it.

February 28, 2010 1:25 am

AGW theory is looking more and more like socialism.
In other words, both make pretty good sense if you don’t examine the facts too hard.
If you do, neither makes any sense at all.
In the case of AGW, the ‘facts’ – the data used by the climate establishment – are becoming ever more suspect. The degree of manipulation of the ‘facts’ in support of a false dogma is becoming daily more apparent. This post is yet another blow to the credibility of the climate ‘facts’, which all too many people have accepted as being gospel truth for far too long.
Not only is there widespread data manipulation, but there is also massive data omission (UHI, equipment location etc) – basically, we need to start all over again, using trusted raw data and universally agreed temperature adjustment formulae.
The present cabal of ‘climate scientists’ cannot be trusted to police this vitally needed process.
Both AGW and socialism, if put into practice, have a common thread of being an incredible waste of resources to produce near universal poverty in an environment where no dissent is tolerated.

thethinkingman
February 28, 2010 1:30 am

By my own rough-and-ready calculation, based on average energy received at the earth’s surface from the sun of 8.9E+16 W and the energy consumption of the planet of 1.504E+13 W (includes all sources: fossil, nuclear, renewables), we are making heat, light and movement at about 1/6000 the rate of natural solar energy.
I think it would be fair to say that burning a fire at a rate of 15 TW warms the atmosphere more than the smoke and particulates do. However, the energy from our anthropogenic fire is only 0.017% of the work being done by the sun. We would have to increase our fuel burn nearly six-fold to bring it up to 0.10%.
It’s hard to see much heat coming out of the forcing referred to by others, but if they are correct then it looks like some kind of perpetual motion machine, and those are few and far between.
Surely those who say that the sun is the dominant factor in our global temperature must have a good point, while those who claim it’s the soot and ashes of our global bonfire must be barking up the wrong tree. Mind you, nobody likes soot and ashes much, but that’s not because they are changing the weather but because they are ugly.
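Checking that arithmetic (the two input wattages are the comment’s figures; the rest is simple division):

```python
solar_at_surface = 8.9e16   # W, average solar input at the surface (comment's figure)
human_output = 1.504e13     # W, all anthropogenic energy use (comment's figure)

print(solar_at_surface / human_output)        # ~5900, i.e. roughly "1/6000"
print(100 * human_output / solar_at_surface)  # ~0.017 percent
print(0.10 * solar_at_surface / (100 * human_output))  # ~5.9x burn to reach 0.10 %
```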

Boncoeur
February 28, 2010 1:44 am

I suspect that the wish to reach the goal intended by the alarmists is the cause of all these divergences. New efforts by the Met Office etc. to redo the analyses are therefore not useful at all. What is needed is a thorough investigation of the disparity between the two methodologies (alarmists vs sceptics) by the two parties together, to ferret out the big WHY. Now you see two sides digging in their heels: alarmists just affirming they are right, the sceptics ever more insistently coming up with evidence that right is on their side. Unfortunately, big capital is on the alarmists’ side. And I am afraid that the alarmists will bank on time, getting the sceptics tired and letting the public interest die down, so that they can quietly continue implementing the economic side of the affair. For this is not only about science.

KeithGuy
February 28, 2010 2:31 am

Once again, Dr Spencer, an excellent analysis, and thank you for that.
You state:
“Of particular interest to me at this point is a simple and objective method for quantifying and removing the spurious warming arising from the urban heat island (UHI) effect.”
Surely the only effective way of eliminating the UHI effect is to use rural station data. Of course this would reduce the number of stations available, but it would produce a more honest and reliable result.

mercurior
February 28, 2010 2:35 am

The problem this novice brain sees is that we don’t know what happened in a given year to prove or disprove a temperature increase or decrease.
Someone could have had a barbecue nearby and the smoke was blown over the sensor. There are so many variables that are not recorded, and recording them all is impossible.
I don’t say models are irrelevant, but they should make a model of real life, use it to see what happens, then if real life is any different alter the model to fit real life. Only that way could you be as accurate as possible. No forcings.
Location, weather patterns, human contacts, urban heat sinks, and so on. While the temperature is a good indication, it’s not all that the AGW crowd should be looking at.

Bruce
February 28, 2010 2:56 am

[try again. this time with respect and courtesy or be banned. ~ ctm]

Bruce
February 28, 2010 3:03 am

[if you’re actually serious, try commenting again without using insulting terminology ~ ctm]

Geoff Sherrington
February 28, 2010 3:09 am

janama (19:24:53) : “So I went back to the Bureau of Meteorology site and sure enough there was a second listing for Casino Airport but it only had data from 1995.”
The Casino NSW site BoM 058063 started recording in 1858. However, as part of quality control, the BoM have cut out the period to 1965. You might be able to get earlier data from them, but it might not be so useful.

DirkH
February 28, 2010 3:19 am

scienceofdoom (21:34:32) :
[…]
than spatial coverage of urban surfaces. In fact, the population of Tokyo has almost unchanged or even decreased since the 1960s (from 8.9 million in 1965 to 8.5 million in 2005), in which most of its domain had already been covered by urban surfaces, but still there have been substantial increase in cars and tall buildings in the central business area accompanied by an intensifying heat island (Figure 1; Kawamura, 1985). For longer time span
[…]
Interesting stuff, and maybe worth a follow up post”
The intensity of the UHI probably rises with the energy conversion rate per volume unit of a city, as more waste heat is created. Same for rural, of course. Watch out for Google/Amazon/whatever data centers in the countryside; they should be visible as very bright spots in infrared.

Geoff Sherrington
February 28, 2010 3:20 am

For Roy Spencer,
By coincidence, the last line in my submission to the Russell Inquiry was “In other words, is there hard evidence for the whole global warming hypothesis?”
You refer to the sudden break between 1988 and 1996. In some countries (Australia included) this was the main period of changeover from daily thermometer observations to half-hourly thermocouple/thermistor types. Just as your 4-times-a day method shows differences to CRUTem3, I would expect that part of the explanation for that break lies in the adjustments needed with instrument change. Whether the adjustments required the hand of man to splice a neat transition remains unclear to me.

DirkH
February 28, 2010 3:32 am

“Mr Lynn (21:26:02) :
c james (18:26:46) :
Slightly OT….Have you seen Al Gore’s article in the New York Times where he calls us a “criminal generation” if we ignore AGW? This was published today.
http://www.nytimes.com/2010/02/28/opinion/28gore.html?hp
Well, the Goracle has broken his silence. […]”
He also doesn’t fail to mention tobacco. Does he mention blogs? No. Well, given that he was the prime enabler of the Internet in his time as vice president, he doesn’t seem to hold it in high regard these days.
Does anyone still take him seriously?

Geoff Sherrington
February 28, 2010 3:33 am

Zeke Hausfather (23:25:48) :
I can almost exactly replicate his graph by comparing GHCN v2.mean (raw data) with v2.mean_adj (adjusted data):
Not sure I agree. Have a close look at Dr Spencer’s first graph, above, from about 1990 to now. You will notice most peaks are red on top and blue on bottom; not the same as your graph. This means, as the trend lines and the difference graph show, that ISH is cooler in this period, arising from lower values in both cool and warm months.

Martin Brumby
February 28, 2010 3:41 am

Yet another really interesting posting from a real scientist. Thanks, Roy, and thanks as ever to Anthony and his team for making it possible.
It is clear that, without a lot more hard (and honest!) work, we don’t have a very reliable idea what global temperatures have been up to even in the recent past, although the more recent satellite data should be less putrid than the massaged and cherry picked surface data. Let alone the infamous “proxy” data sets.
This is a sorry state of affairs after the investment of billions of tax payers’ money.
If I could just ask a really naive question, however, (accepting all the uncertainties about where we are now):- How about the future?
OK, I note that such a luminary as David Adam of the Grauniad says that:-
“I used to think sceptics were bad and mad but now the bad people (lobbyists for fossil fuel industries) had gone, leaving only the mad. ”
http://bishophill.squarespace.com/blog/2010/2/27/how-to-report-climate-change-after-climategate.html
But as a mad denier, I’d like to be so bold as to ask Dr. Roy the following:-
Do you think that climate is / will eventually be predictable, or is it probably just a complicated random walk?
If you were offered, say 3 to one odds on a $5 bet that you could predict what the climate will be like in twenty years (Or ten. Or five), would you be tempted to have a flutter? You can be as precise or as vague as you like, (within reason).
This is just a hypothetical question and I don’t even expect you to reveal your prediction – but I’d like to have a feel for how confident a scientist (that I can respect) can be, that with the current state of scientific knowledge, any very meaningful prediction can be made.
Obviously, I’m aware of the work Piers Corbyn and Joe Bastardi and others do. They seem to be able to make a living predicting weather a few months ahead and it is clear that they leave the enormously resourced MET Office for dead.
But how far away are we from being able to state with ANY meaningful confidence what the climate will be in even five years?
I would be fascinated to see the response (if he has time) from Dr. Roy or any of the other genuine scientists who grace Anthony’s blog with their attention.
Anthony, you might even consider setting up a simple poll to ask this question?

cal
February 28, 2010 3:44 am

Most of the warming appears to have occurred in the winter months, and it has already been postulated that this could be due to the UHI.
Has anyone plotted the difference between an urban and a closely sited rural station in January as a function of temperature?
The thought is that as the temperature drops, urban energy consumption would increase, as people heat their homes, and the UHI error would increase.
It might even be possible to detect a difference in this relationship at different times of day if the 06, 12, 18, and 00 hour readings that Dr Spencer used were analysed separately. Since most heating is reduced at midnight, my guess is that the 06 error would not be as high as that at 18 or 00 hours.
The same would be true in summer, where the increase in air conditioning would increase the UHI error, but this time the relationship with temperature would be reversed.
Because it will have different signs in the winter and summer months, this temperature relationship should be unambiguous. It would also mean that if you compared average annual temperatures against UHI you might miss the signal, because the two slopes might cancel out.
If one did this for a number of urban sites of different sizes, one could estimate how the error varies with both temperature and urban size, and then be able to calculate a proper UHI correction factor applicable to each reading.
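cal’s test is straightforward to run once paired urban/rural series exist. A sketch (the readings below are made-up numbers, and the pairing of stations is hypothetical): regress the urban-minus-rural difference on temperature, separately for each synoptic hour, and look for slopes of opposite sign in winter and summer.

```python
import numpy as np

def uhi_slope(urban, rural, temps):
    """Least-squares slope of the (urban - rural) difference vs. temperature."""
    diff = np.asarray(urban) - np.asarray(rural)
    slope, intercept = np.polyfit(np.asarray(temps), diff, 1)
    return slope

# Hypothetical January readings at one synoptic hour, urban vs. nearby rural.
rural_t = np.array([-8.0, -4.0, -1.0, 2.0, 5.0])
urban_t = np.array([-6.1, -2.7, -0.1, 2.6, 5.4])  # gap widens as it gets colder

print(uhi_slope(urban_t, rural_t, rural_t))  # negative slope: more UHI when colder
```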

AlanG
February 28, 2010 4:00 am

Roy, as you are starting out looking at thermometers, you naturally start with a number of the best thermometers in unchanged rural locations with long records. They, after all, are your best data sets and would be relatively free of UHI effects. You would then immediately see that some show slight warming and others a slight cooling with little warming overall. However, if you are running an AGW agenda, you have a eureka moment. You realize you can fix this ‘problem’ in one of three ways.
1. Average the rural and urban temperatures to show a warming bias from UHI.
2. Selectively drop out thermometers over time, making sure there is a steady march of the thermometers to lower altitude, lower latitude and towards cities.
3. Use gridding and interpolation for missing thermometers. There will always be fewer thermometers up mountains and nearer the poles (which would be cooler), so you are automatically introducing a warming bias.
There you are. Problem solved and your AGW hypothesis survives. Dr. Spencer is finding out what the AGW advocates worked out years ago. Who were the original AGW advocates? Hansen and Wigley. Who are the keepers of the temperature averages? Hansen and Wigley.

Gareth
February 28, 2010 4:11 am

1988 – IPCC came into being.
1997 – Kyoto protocol.
Just sayin’.

Tenuc
February 28, 2010 4:15 am

Good piece of work, Dr Spencer; the more the various temperature datasets are examined, the more flaws are revealed. No surprise there are differences between GISS, HadCRUt, GHCN, RSS and UAH temperature anomalies when so many assumptions have to be made to process the data and produce a result.
Even accurate global mean temperature measurements wouldn’t show the direction climate is taking, as it is the amount of energy held that is key. In non-linear systems trends are useless for predicting future behaviour and are a ‘cherry pickers’ delight!

joannD
February 28, 2010 4:22 am

There is a serious problem with your monthly warming trends.
Because the monthly binning of the data stream is periodic in time, warming trends determined for Nov/Dec must be reasonably continuous with those for Jan/Feb. They are not so in your third figure.
So either:
– there is a fundamental problem in the dataset, or
– there is a problem with your binning algorithm, or
– the error bars on the extracted monthly trends (which you don’t give) are so large as to make any differences meaningless
After rechecking your code, you might do three other runs with the monthly bin boundaries shifted to the 7th, 15th, and 22nd of each month to see if the monthly trends extracted from all four runs lead to a robust [ 🙂 ] conclusion.
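joannD’s robustness check is easy to script. A sketch, assuming a daily series of (date, anomaly) pairs is available; shifting the nominal month boundary and refitting each calendar month’s trend should leave a robust result essentially unchanged:

```python
import numpy as np
from datetime import timedelta

def monthly_trends(dates, anomalies, shift_days=0):
    """Decadal trend per calendar month, with the month boundary shifted.

    shift_days=6 puts the bin boundary on the 7th, 14 on the 15th, 21 on the 22nd.
    """
    bins = {m: ([], []) for m in range(1, 13)}
    for d, a in zip(dates, anomalies):
        s = d - timedelta(days=shift_days)          # shift the binning boundary
        years, anoms = bins[s.month]
        years.append(s.year + s.timetuple().tm_yday / 365.25)
        anoms.append(a)
    return {m: 10.0 * np.polyfit(y, a, 1)[0]        # degrees per decade
            for m, (y, a) in bins.items() if len(a) > 2}

# e.g., given parallel lists `dates` and `anoms` from the dataset:
# for s in (0, 6, 14, 21):
#     print(s, monthly_trends(dates, anoms, shift_days=s))
```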

AlanG
February 28, 2010 4:31 am

scienceofdoom (18:02:24) : the “correlation” between temperature rise and population density [in Japan] was relatively low (0.4)
Population density wouldn’t be my starting point. UHI is partly from microclimate effects, as you say, but I reckon what probably matters is energy consumption per unit area. A better starting point might be comparing the calorific value of the electricity, gas and fuel consumption in cities, most of which ends up as waste heat except for the light that escapes. So far I’ve seen no paper that has tried this approach.
Interestingly, the temperature records (as published) show a flattening of the temperature rise in the last decade or so. This might be because of a cooler sun or whatever but it is what you should expect in the US, Japan and Europe. The energy consumption in those regions has flattened out over the same period. Once built, the centers of cities only change slowly and energy usage/GDP has fallen.

February 28, 2010 4:36 am

I am computing an average of the 4 temperature measurements at the standard synoptic reporting times of 06, 12, 18, and 00 UTC.

Can someone point to an internationally accepted standard procedure for calculating an average daily temperature at a defined location?
I have just set up my own weather station and record data every 10 minutes. I am guessing that recording only min/max per day, or 4 times per day as above, might produce different results than averaging 6×24 = 144 daily values.
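For a logger like Carsten’s, comparing the candidate daily-mean definitions takes a few lines. A sketch assuming one day’s worth of 10-minute readings in a list:

```python
def daily_averages(readings):
    """Compare three common daily-mean definitions for one day of data.

    readings: 144 temperatures, one every 10 minutes, index 0 at 00:00 UTC.
    """
    full_mean = sum(readings) / len(readings)
    minmax_mean = (max(readings) + min(readings)) / 2.0
    # Samples nearest the 00, 06, 12, 18 UTC synoptic hours (6 samples per hour):
    synoptic_mean = sum(readings[i] for i in (0, 36, 72, 108)) / 4.0
    return full_mean, minmax_mean, synoptic_mean
```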

February 28, 2010 4:37 am

We go to great lengths to remove and argue constantly about the Urban Heat Island effect. Maybe …
http://antwrp.gsfc.nasa.gov/apod/image/0810/earthlights2_dmsp_big.jpg
we ought to consider it as a fact of life.
What percentage of the earth’s surface is in fact warmer because of it?
Adjusting the raw urban weather station temperatures downward is just as much a lie as claiming those elevated temperatures are caused by CO2.
stacase@hotmail.com

AlanG
February 28, 2010 5:00 am

Claude Harvey (19:56:33) : …the dismal numbers may simply be the oceans puking up stored heat as they periodically do
That’s my interpretation. The oceans are dumping heat out to space via the atmosphere. All warming periods are followed by cooling periods, and vice versa.

William D.
February 28, 2010 5:00 am

Very interesting.
Would you consider re-doing your analysis using daily max and min temps, thus replicating Jones’s work without the UHI correction?

February 28, 2010 5:01 am

Carsten Arnholm, Norway (04:36:34) :
I have just set up my own weather station and record data every 10 minutes. I am guessing that recording only min/max per day or 4 times per day as above might produce different results than averaging 6*24=144 daily values.

You can actually do a little “back of the envelope” scenario and see for yourself that, yes, how often you measure will affect the average. It’s simplistic but very visual. Calculate the average of a max/min pair, say 20°C and 10°C. Then do an average where most of the temps are weighted toward the high end, and another with most of the temps weighted toward the low end. Compare the three averages.
That said, it can be argued that over a long period of time everything evens out and those differences become insignificant. That’s why ideally one wants to use a collection of stations that have been around a very long time and have a large record set to calculate the so-called global temperature.
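JLKrueger’s envelope exercise, written out (the numbers are his illustrative 20°C / 10°C example, with made-up intermediate values):

```python
print(sum([20, 10]) / 2)              # 15.0  plain max/min midpoint
print(sum([20, 19, 18, 17, 10]) / 5)  # 16.8  samples weighted toward the high end
print(sum([20, 13, 12, 11, 10]) / 5)  # 13.2  samples weighted toward the low end
```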

Dave Mullen
February 28, 2010 5:02 am

Where has surfacestations.org website gone ? Is this a temporary glitch in the web server setup or have you moved this gold mine of information and data somewhere else ?

February 28, 2010 5:05 am

Martin Brumby (3:41)
I agree with you. I believe it will be impossible to predict the climate: too many factors and interactions, and even worse, the effects are nonlinear. In quantum mechanics it’s known as the “many body problem”. The exact solution cannot be derived due to the massive scale of interactions between all the electrons and protons of a molecule; only approximations of the solution can be made. One can argue that approximations are good enough, but remember that quantum mechanics is based on precisely known and quantifiable physical forces. The forces of nature (climate) are poorly understood, and most are only weak correlations. You cannot build a predictive model on correlated factors and effects. This is what astrology attempted to do: correlate the movements of the heavens with things on earth.
We’ll have the same ability to predict climate as we can predict the stock market or the exact point of landfall a hurricane will make from the instant a storm is seen brewing off the coast of Africa.

wayne
February 28, 2010 5:22 am

Dr. Spencer, Anthony & Ken Hatfield (23:36:55) from T&N:

Ken: … Why not just stop with the UHI and urban stations and limit measurement data sources to data from areas with as little UHI effect as is possible? No adjustments to discuss and no reason to waste time attempting to solve a problem that has several unresolvable variables.
One should not need to measure temperature in cities to obtain data series that would produce statistically significant measurements of changes over time.


Ken, I came upon your comment and you are correct and seem on the right track. That’s proper analysis; I came up with the same and any proper physicist should come up with something nearly identical.
Follow this for a starter of a system. Try to concentrate on the logical aspect, not the specific implementation.
If you are going to measure how the temperature (energy) is affected in a complex system, you measure only at the energy sinks. On the Earth the sinks are the oceans and rural land areas. All other sources of heat, including urban areas, must be dropped from the measurements. To add an energy source and then attempt to compensate for its effect only increases the noise in the measurements. It’s rather simple physics.
And to carry that to an extreme, to get high accuracy, you only measure once per day along the longitude line that is located just before sunrise (pre-dawn, I will call it), from the Arctic to the Antarctic every 5 degrees, including in the measurement only rural land area temperatures and sea temperatures on that pre-dawn longitude line.
Of course, every 20 minutes there is a new pre-dawn longitude line 5 degrees to the west, so measurements would be continuous in time, summed down the longitude line, every 20 minutes to give 5 degree resolution. Only then are you actually measuring how temperature (energy) is being affected day by day, year by year, and decade by decade, untouched, as much as possible, by the weather that constantly mixes and re-distributes the energy present in the Earth system. The sum of any contiguous 72 measurements of the longitude line would be the true base temperature of the Earth, limited by the resolution and measurement accuracy. The daily heat-up and cool-down would be ignored by design. Along the longitude line, stations would have to be spaced as close to 5 degrees (300 nm) apart as possible, and away from all cities and heat sources. Buoys would have to be designed to hold their position.
And to get the ultimate accuracy, all sensors would need to measure not the air temperature but the temperature a fraction of an inch below the surface of either soil or seawater. Then you are truly measuring the temperature of the Earth. The air only reflects the surface’s temperature, due to thermal inertia. The small amount of energy held by the air is totally ignored, as it is small compared to the system’s total surface energy. Any deviations should be tiny across day-long time spans: a smooth roll on a graph as the seasonal cycles repeat during a year, but any 72-measurement (full revolution) sum would be basically a horizontal line, only deviating slowly across months or years, with minimal variations due to reasonable system noise.
Everyone worries about the warming in the day and cooling at night, the warming in summer and cooling in winter, the chill of highs in winter and the sweltering heat of highs in summer, the frontal pressure-line storms and snow, and that worry is understandable; that is weather, and it is what we experience daily. But weather is only the mixing and re-distribution of existing energy above the immediate surface of the Earth. All of that noise needs to be totally ignored, by design, by a proper temperature sensing system if you are measuring the globe’s temperature across long periods. The energy actually entering the Earth’s climate system only changes slowly over months and years, and that is controlled primarily by the sun, the albedo, and variations in the rates of LW radiation. The remaining factors, such as heat from the core and fission of various isotopes in the soil, are small and rather constant, and shouldn’t affect the accuracy.
And amazingly, such a system would only require 1652 sensors across the globe for 5 degree resolution, or 3302 for 2.5 degree resolution. And what makes you sick is looking at all of the money spent on the current system that basically doesn’t work, flawed at the logical level. We are currently taking a system that measures weather (air) and trying to back out all of the noise to get a base temperature of the globe; totally backwards.
A sequence of satellites could feasibly handle the same logical system, but their polar orbit would have to be heliostationary. Knowing gravity fairly well, I am pretty sure such an orbit is not possible due to Earth’s oblateness. But assuming it were possible, the orbit would have to maintain a pre-dawn and pre-dusk orientation, with satellites spaced to perform a scan pass every 20 minutes (for 5 degree resolution, that is). The onboard instrument would have to accurately measure the true surface temperature: soil, seawater, and ice, not the air. That would do exactly the same logical thing as the ground-based system above, only from space; the night-side half-orbit scan would be the only one actually used. But since it seems that orbit is not possible, a large number of satellites in staggered orbits would be required, and ground-based is hugely more economical; still, that gives you a logically equivalent system based in space.
The question is: how do we actually get proper science to be performed? One starting step, using the current system, would be the public availability of raw hourly per-station temperature data with no adjustments; especially needed are the rural stations. Also, all Argo data needs to be untouched and public. That needs to be mandatory. Without it you are stuck in the flawed system, measuring weather, not the globe’s temperature.
That’s a mouthful, but what are your thoughts on that hypothetical system?
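One concrete piece of wayne’s scheme, sketched under a crude assumption of my own (sunrise fixed at 06:00 local solar time, ignoring season and latitude): which longitude band is pre-dawn at a given UTC time, stepping 5 degrees west every 20 minutes as he describes.

```python
def predawn_longitude(utc_hour, lead_hours=1.0):
    """Longitude (deg, -180..180) where local solar time is just before sunrise.

    Local solar time = UTC + longitude / 15.  With sunrise taken as 06:00 local
    solar time (an equinox approximation), pre-dawn is 06:00 minus lead_hours.
    """
    lon = (6.0 - lead_hours - utc_hour) * 15.0
    return (lon + 180.0) % 360.0 - 180.0   # wrap into -180..180

print(f"{predawn_longitude(12.0):.1f}")        # -105.0
print(f"{predawn_longitude(12.0 + 1/3):.1f}")  # -110.0: 5 deg west after 20 min
```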

mark
February 28, 2010 5:35 am

Do you have a link to the website or data set that you used to work up this data? Thanks.

mark
February 28, 2010 5:37 am

Mr. Spencer,
Can you post the link to the website or data set that you used to work up this data? I’m still new to this and don’t know all the secret data websites.
Thank you.

Pamela Gray
February 28, 2010 5:39 am

Doh! I missed the error bars! My statistics prof (took a grad level course so I could be a research audiologist – we did the stats by hand, no computer program allowed) would not be happy with me right now. When I finally got into a lab, the first thing I did was buy Statview SE for our little macs. Nice critique joannD.

February 28, 2010 5:41 am

The fact that Dr Spencer used four temperatures per day might be the reason for differences in the overall trend. Since UHI affects mostly Tmin, half of the daily min/max data are contaminated. In the case of four measurements, UHI is somewhat diluted, though even at 18:00 it should be present as well.

richard
February 28, 2010 5:41 am

Starting from scratch sounds like a particularly good idea. When I hear that the Royal Society is granting up to £100M for research into ‘geo-engineering’ prototypes it makes me feel extremely uneasy.

davidmhoffer
February 28, 2010 5:42 am

Cal;
The thought is that as the temperature drops the urban energy consumption would increase, as people heat their homes>>
I have no doubt that is part of it. Another part is the buildings themselves, even if they are not heated. In winter the Sun is low to the horizon and its rays strike the ground at a very shallow angle. But a building presents a vertical surface at almost a right angle to the Sun’s rays (and hence much higher absorption), so I would expect UHI to be more pronounced in winter than in summer. This would also, I suspect, result in a larger UHI on the south side of the downtown core than on the north side.
Some winter cities try and arrange their streets so that the houses are built with their largest vertical surface area facing south with the intent of reducing

mark
February 28, 2010 5:43 am

Carsten Arnholm, Norway (04:36:34)
Here’s a link to a article about it:
http://cat.inist.fr/?aModele=afficheN&cpsidt=16372328
Otherwise, when I was the Head Observer / Data Collector (in house) doing the climatology for NAS Keflavik, I would take the whole day’s worth of obs, take the max and min, then take the average of all the obs, find the average min, average max, etc. But the synoptic obs were used by Asheville for whatever magic they used them for… which I’m guessing was entry into some database. I’ll look and see if they still have any of that data at the Navy / Air Force Joint Climatology detachment.

davidmhoffer
February 28, 2010 5:44 am

ooops, cut MYSELF off there
Some winter cities try to arrange their streets so that the houses are built with their largest vertical surface area facing south, with the intent of reducing energy consumption in winter. Who knew they were causing UHI and global warming at the same time? 🙂
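The geometry behind davidmhoffer’s point, in numbers (20° is just an illustrative low winter solar elevation, my own choice):

```python
import math

def beam_fraction(solar_elevation_deg):
    """Fraction of direct-beam intensity intercepted per unit surface area."""
    e = math.radians(solar_elevation_deg)
    horizontal = math.sin(e)   # flat ground
    vertical = math.cos(e)     # wall directly facing the sun
    return horizontal, vertical

ground, wall = beam_fraction(20.0)   # low winter sun
print(f"ground {ground:.2f}, south wall {wall:.2f}")   # ~0.34 vs ~0.94
```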

Ed Scott
February 28, 2010 5:56 am

Bali-Hoo: U.N Still Pushing for Global Environmental Control
http://www.foxnews.com/story/0,2933,587426,00.html

john
February 28, 2010 6:04 am

It would be much more helpful to try to consider what reasons they may have to deliberately falsify the data and then present it as correct.
Obviously the political content of climate change/global warming data is going to colour results from those funded by government.

Roy Spencer
February 28, 2010 6:35 am

Thanks for all the comments. I agree that my use of “spurious” sounds a little disparaging… “anomalous” would be better.
I’m using 4x per day measurements because the hourly dataset does not contain max/min data. As someone mentioned, Tmax/Tmin isn’t necessarily the best…especially since Tmin is so sensitive to cloud and wind conditions around sunrise.
Remember, the primary reason for hourly weather measurements has been aviation support, not climate monitoring. I used the synoptic reporting time of 00, 06, 12, and 18 UTC because those are the times that have the most stations reporting routinely.
I agree that the *absolute* temperatures should be analyzed… this is why my next task (if time allows) is to quantify the UHI effect based upon 1 km population density data from the U.S. decadal census. From what I can tell, a proper UHI analysis of all of the temperature data has yet to be done. If we can get estimates of spurious warming as a function of population density, then adjustments to the temperature record over time can be made.
Of course, if there happens to be enough good rural thermometer data, then you could just throw out all of the temperature data that have UHI effects. Unfortunately, there will always be ‘experts’ who claim there are no UHI effects, and then use the UHI-affected data to demonstrate global warming.
So, in either case it becomes necessary to quantify the UHI effect. Given the huge amount of US data, this should be possible to do fairly convincingly. But then, I have a bad habit of being overly optimistic before I analyze a new dataset.

Ivan
February 28, 2010 6:37 am

Alan:
“Roy, as you are starting out looking at thermometers, you naturally start with a number of the best thermometers in unchanged rural locations with long records. They, after all, are your best data sets and would be relatively free of UHI effects. You would then immediately see that some show slight warming and others a slight cooling with little warming overall.”:
However, that’s not what Dr Spencer has done. He did not start with rural stations. He used all stations and tried to construct a “better” index than Jones out of them. Actually, if he had used only the rural stations for the USA, he would discover that they show a trend 3 times lower than his own satellite data (as Dr Long demonstrated)! What he did instead was to repackage the old Jones analysis so as to validate the UAH data. The slight warming bias he discovered in Jones’s data is exactly how much Jones and UAH diverge over the USA 48. The real question I asked, and nobody answered, is: HOW IS IT POSSIBLE THAT THE RURAL NETWORK IN THE USA SHOWS 3 TIMES LOWER RATE OF WARMING THAN UAH???

BarryW
February 28, 2010 6:41 am

George E. Smith (23:29:15) :
I agree that replicating Jones is not the intent. What I thought might be interesting was whether there is a change in the pattern of diurnal warming/cooling occurring over time. That might be a clue as to what else is going on physically. For example, how would changes in cloud cover change the temperature cycle, as opposed to UHI issues?

sleeper
February 28, 2010 6:45 am

Re: Steve Case (Feb 28 04:37),
We have thousands of climate scientists around the world who actually believe it is possible to measure the GAT to within a few tenths of a degree C per decade. Ever notice that you rarely see error bars on any of these graphs?

Pamela Gray
February 28, 2010 7:03 am

re: distribution of temp sensors. All temp sensors are situated in microclimates, of which there are many. Climate zone, altitude, and proximity to large bodies of water have significant effects on a microclimate’s response to weather-pattern variations. Sensor placement must therefore be carefully considered to result in a randomized sample. Station drop-out, or any other reduction in the number of sensors used for analysis, must be carefully done so as to guard against a non-random, biased sample, which can lead to erroneous conclusions about trends and causes.
re: verification vs replication. Replication is always the first step. But verification (I like the term substantiation) must follow. If an effect is robust, it should show up in more than just one kind of analysis: averaging the entire daily set, averaging max and min, averaging only max, averaging only min, analyzing record daily max lows and highs as well as record daily min lows and highs, dividing the data set into climate zones and repeating the analysis within each set, etc. Any good scientist will look for any areas where the hypothesis does not hold up.

SaskMike
February 28, 2010 7:12 am

Ongoing discussion here:
http://www.theglobeandmail.com/news/world/global-warming-panel-to-get-independent-review/article1484168/
Feel free to comment, as it would appear that this article misses more than a few important points.

harrywr2
February 28, 2010 7:21 am

Dr Roy,
“In fact, the results for the U.S. I have presented above almost seem to suggest that the Jones CRUTem3 dataset has a UHI adjustment that is in the wrong direction.”
I think if you investigate the methodology you will find that Jones et al believe the UHI maxes out; hence the adjustments are all made pre-1973.
Of course this is ludicrous on its face. The first Boeing 747 wasn’t built until 1969, and the great airport expansion didn’t occur until the advent of the 747.

Paddy
February 28, 2010 7:29 am

It appears that the tsunami models overstated the magnitude of the waves everywhere. Is predicted v actual data for all earthquakes that models have predicted tsunami events being accumulated? Are those responsible for the model design and construction going to recalibrate these models to correct for measured differences of wave magnitude? Should they? If not, why not?

David Schnare
February 28, 2010 7:34 am

I think Lucy is on target:
Lucy Skywalker (17:44:20) :
“I would like to see a century of global mean temperature changes estimated from individual stations which all have long and checkable track records, with individual corrections for UHI and other site factors, rather than the highly contaminated gridded soup made from hugely varying numbers of stations.”
That’s what we plan to do in Virginia.
However, the excellent comments made by many on this site suggest to me that we are not going to find a clean, perfect record at any site, no matter how well sited.
Those of us who have done measurements as the basis for our scientific papers know the lengths we go to to ensure accuracy, precision and completeness in those measurements. The temperature records were not kept to support scientific studies made 100 years later.
So, we have an imperfect data set. Lucy gets it right in that we need to clean up the “original” data in a manner that highlights what is missing, what changed and what may obviously have influenced the measurements. Until we have that database, I don’t believe GISS, CRU or NCDC has enough knowledge to discuss the uncertainty surrounding the data, much less any data projections based thereon.
Here’s another tidbit that ought to whet someone’s appetite for hard work. The one long-term rural site I have found so far in Virginia that does not appear to have been adjusted by GISS is Bremo Bluff. Yet the Bremo Bluff station is, and always has been, placed inside the fence of the transformer yard, less than 100 feet from the wall of a coal-fired electric generating unit consisting, originally, of four boilers. In the mid-1970s the nearby river flooded, destroying the utility of two boilers. These were never rebuilt (because they would have had to install expensive pollution controls). Thus, the actual placement of the station has a built-in UHI, its record demands careful consideration of the reduction by half of the major UHI influence in the mid-1970s, and its use as a contribution to grouped data should be reexamined.
This begins with the efforts of our host, but we are going to need to go well beyond that, as Lucy suggests.
Basically, NCDC needs to do more than hit the reset button. It needs to make available, in easy-to-use form, all the actually reported data from each station, and the MMS system needs to be amplified to determine whether there were any micro-climate implications of local land-use change, going so far as to examine the kind of thing Pielke, Sr. has discussed.
It’s a long slog, but considering the economic consequences of going haphazardly forward, it is worth doing and is doable.

bruce ryan
February 28, 2010 7:44 am

Well, if the urban heat island effect is human-caused climate change, it only makes sense to increase its stature relative to unbiased readings. After all, are we not trying to show man’s impact on the climate?
As spurious as that sounded, I have an inkling there is a bit of definition involved.

February 28, 2010 7:48 am

David Hagen, 18:01:37 on 27 02,
If you collect site data sets for Arctic regions, as I do, you will be quite familiar with the almost complete dearth of data for Dec, Jan and Feb, and sometimes for Nov and Mar, often over many years, for many sites. Associated with this is the near-total absence of data values less than −20 °C. Histograms of monthly data show this skewed information readily. It follows that analyses using sites of this type will misrepresent the effects (if any) of unusual occurrences or long-term trends that might take place during these deep winter months. As has been pointed out, it is very easy to sympathise with the “ethical” and physical problems faced by the human observers. This does not help in elucidating what actually happened to local weather in the cold season.
My opinion is that it may be very misleading to accept records of sites with this type of “missing value” as being worthy of inclusion in serious analyses. “Annual averages” will be very misleading indeed, and should not be included in multi-site analyses if the important “very cold” data have been lost or were never gathered.
Robin

February 28, 2010 7:48 am

Carsten Arnholm, Norway (04:36:34) :
“I have just set up my own weather station and record data every 10 minutes. I am guessing that recording only min/max per day or 4 times per day as above might produce different results than averaging 6*24=144 daily values.”
I have no idea what ‘scientists’ do. As an engineer, I think the length of time a temperature range persists is far more significant than a spurious peak or trough; e.g. 2 h of 20°C is far more significant than 5 min of 22°C.
I suggest, since you are going to do frequent recording, that you could as an exercise integrate and chart separately 3-4 hours of the peak daytime and 3-4 h of the bottom nighttime data, ignoring the rest. That way you would have a good record for future years’ comparison.
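Vuk’s suggestion as code, a sketch assuming the same 144-sample day as in the earlier comment: average a window of about 3 hours centred on the daily maximum, and another centred on the minimum, ignoring the rest.

```python
def window_mean(readings, centre, half_width=9):
    """Mean of readings within +/- half_width samples of centre (9 = 1.5 h)."""
    lo = max(0, centre - half_width)
    hi = min(len(readings), centre + half_width + 1)
    return sum(readings[lo:hi]) / (hi - lo)

def peak_and_trough_means(readings):
    """3-hour integrated daytime peak and nighttime trough, per Vuk's scheme."""
    i_max = readings.index(max(readings))
    i_min = readings.index(min(readings))
    return window_mean(readings, i_max), window_mean(readings, i_min)
```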

rbateman
February 28, 2010 7:57 am

Ivan (20:28:06) :
The answer to that question may lie in exactly what UAH has been calibrated to.
If we use UAH to throw out our historical dataset (and we do see the internal agenda that has attempted to both drop the rural stations and/or move them), we lose the continuity of the last 100-150 years of data.
Where climate cycles are concerned, that is a fatal blow.
So, has UAH been calibrated to a raw dataset, or to one of the CRU/GISS datasets, which have been subjected to highly questionable alterations that make no sense?

Josh
February 28, 2010 8:02 am

Al Gore returns with an opinion piece in the Old York Times: http://www.nytimes.com/2010/02/28/opinion/28gore.html
In the first sentence he writes “…the science of global warming…” which made me LOL.

wayne
February 28, 2010 8:02 am

Maybe OT, but relevant to the temp graphs:
IPCC Bali – Major topics:
— Global system of governance
— Radical transformation of the world economies
— Radical transformation of the world social order
— Vast sums of money to flow to developing nations
Bali-Hoo: U.N Still Pushing for Global Environmental Control.
http://www.foxnews.com/story/0,2933,587426,00.html

latitude
February 28, 2010 8:09 am

Paddy (07:29:08) :
“It appears that the tsunami models overstated the magnitude of the waves everywhere.”
Yes, but there was a run on popcorn about 3 pm EDT yesterday.

Stephen Wilde
February 28, 2010 8:13 am

wayne (05:22:33)
An excellent post and just what we need.
Or would satellite sensors do the job well enough?
Probably not because satellite sensing would be limited to measuring the energy content of the atmosphere at various levels AFTER energy has already left the oceans or the ground whereas it is the energy content of the oceans and the near surface ground that most accurately reflect the actual Earth system temperature.
Furthermore the energy content of the atmosphere would be skewed by any variations in the rate of energy loss to space.
Given that land surface energy retention is only brief and small in quantity as compared to oceanic energy retention I think we would get close enough by measuring global ocean energy content more accurately and as wayne says for climate purposes the most important measurement is that taken a fraction below the ocean surface.
I have stated elsewhere that the ideal location would be at or just below the point at which the ocean skin gives way to the bulk ocean below.
Changes in temperature at that specific location would be the critical controlling factor affecting the rate of the major part of the Earth system energy transfer from oceans to air and thence to space.
If that specific location warms up or cools down even fractionally on a globally averaged basis then the speed of the flow of energy through the entire system changes and all climate phenomena must shift with it.

February 28, 2010 8:18 am

George E. Smith (23:14:49)
Thanks for past info about work of Mr. Hendriksen.
If George E. Smith of these web pages is the same as Dr. George Elwood Smith of CCD fame, then I am ever more grateful to him for ending 15 tedious years of my working life spent setting operating parameters of plumbicon, leddicon and saticon tubes.
My thanks, gratitude and greatest respect for Dr. George Elwood Smith.

rbateman
February 28, 2010 8:21 am

Basically, NCDC needs to do more than hit the reset button. It needs to make available in easy to use form all the actually reported data from each station,
Yes, the pdf download from NCDC is an arduous process. There is no station download option with the documents marked in chronological order.
But there are even more problems at NCDC. I suspect that a lot of the original forms have been transcribed by a computerized process, not humans. That leads to numbers coming off the forms in error, depending on the noise present in the image.
I have found some of these. And last, but not least, there are missing months (as E.M.Smith has documented) in the original forms when you come forward to the 1990’s and 2000’s.
Are these missing months in dusty boxes?
Where are the docs on this?

DirkH
February 28, 2010 8:22 am

“Josh (08:02:34) :
[…]
In the first sentence he writes “…the science of global warming…” ”
Oh. Now we can call Hansen et al. global warmologists. Al Gore gave his blessing. Nice.

A C Osborn
February 28, 2010 8:23 am

Ivan (20:28:06) :
Kum Dollison (21:47:46) :
aMINO aCIDS iN mETEORITES (00:01:22) :
Kum Dollison (00:48:00) :
aMINO, I agree with Ivan & Kum; we really need an answer, and it needs to come from Dr Roy.
The satellite data is showing record high values for January and now February this year; it looks almost as if something is incrementally adding to the values.
How does the good Dr rationalise that with the current NH weather and his own US results for 2010 in the graph above?

Ivan
February 28, 2010 8:24 am

rbateman:
“So, has UAH been calibrated to a raw dataset, or to one of the CRU/GISS datasets which have been subjected to highly questionable alterations which make no sense?”
That is the million dollar question. I have no clue; I am a complete layman.
Dr Spencer claims that UAH is not calibrated to any ground-based data set. However, if this is so, then it is pretty weird that the UAH trend over the USA is 3 times higher than the rural trend as measured by ground-based thermometers. Is it really possible that the lower troposphere over the USA 48 warmed 3 times as much as the ground? What is the explanation for this? Or maybe the rural network in the USA has some unexplained “cooling bias” that leads to such a vast underestimation of the real temperature trend?

beng
February 28, 2010 8:26 am

*******
Ivan (20:28:06) :
USA 48 RURAL 1979-2009 – WARMING 0.08 degrees K PER DECADE
USA 48 URBAN 1979-2009 – WARMING 0.25 degrees K PER DECADE
USA 48 UAH 1979-2009 – WARMING 0.22 degrees PER DECADE
So: UAH and URBAN WRONG??????

********
As I stated in another thread, satellite & surface temps are apples and oranges to some extent. Standard, bare-bones GHG theory says mid-tropospheric temperature trends (satellites) should be magnified almost 2X relative to surface trends.
Bare-bones GHG theory is certainly incomplete, though.

wayne
February 28, 2010 8:34 am

Robin Edwards (07:48:12) :
“Associated with this is the near total absence of data values that are less then -20 (C).”
Makes you wonder if the missing data is due to thermometers only going to −20 °C, as many older thermometers actually were limited, or whether it was just too darn cold for someone to be there, or they just didn’t want to go out and risk frostbite.

Pascvaks
February 28, 2010 8:53 am

Does anyone know if there’s a daily record (graph) for the temperature at the bottom center, down inside the Great Pyramid at Giza? It would seem that it would be, at the very least, a good “control” temp:-)
Personally, I think that adding up all those local temps around the world and dividing by “x” is too micro. I still think that one (or two) sputniks could do the job better if they were about 50,000,000 km out, taking a group shot of us here on Water World.
PS: Is it warmer? Is it cooler? Watch the Ice!

February 28, 2010 8:54 am

Anthony,
You led the way in this. Congratulations: as more research is done into the surface station datasets, you are going to be vindicated. We would not want your reputation to be besmirched by a Nobel Prize – another prize will have to be invented to award those who risk ridicule and condemnation in the quest for scientific truth. You and Stephen McIntyre should be the first recipients.

February 28, 2010 9:04 am

wayne (05:22:33) : “If you are going to measure how the temperature (energy) is affected in a complex system you measure only at the energy sinks. On the Earth the sinks are the oceans and rural land areas. All other sources of heat, including urban areas, must be dropped from being measured. To add an energy source and then attempt to compensate for its effect only increases the noise in the measurements. It’s rather simple physics.”
I second your proposition and approach. As a former professional wx observer, and as a wx forecaster during the advent of computers/computer modeling for wx forecasting (1970s), now dealing with ground water sampling and remediation: it’s time to move beyond the smoke screens and recognize that, generally speaking, weather is what we experience in the atmosphere, which is a manifestation of the planet’s response towards achieving thermal equilibrium. Climate trends will be better determined by what’s going on with the heat sinks that ultimately drive the weather.

Pamela Gray
February 28, 2010 9:11 am

UHI is not climate change (in particular, with reference to climate zones). It simply means that you will be hotter standing next to a building bathed in direct afternoon sunlight. The climate hasn’t changed; just the temperature where you are standing has changed. The minute you leave the area where the UHI is in effect you will be in a cooler spot, but your overall “climate”, i.e. the zone you live in, has not changed. To wit, no seed catalog will re-issue its climate-zone hardiness ratings just because you might grow grapes in town next to that hot building instead of on your outlying farm. Changes in temperature, or for that matter changes in weather-pattern variation, do not mean climate change. Not unless they are severe enough to cause a change in your climate-zone rating. An ice-age, one-season frozen Astoria with an ice-blocked Columbia River, where once stood a temperate-climate beach, is a climate change.

mike roddy
February 28, 2010 9:14 am

Even according to Spencer, it’s still warming, so what’s the point? Glaciers are melting, antarctic ice is calving, and birds and plants are migrating north. Humans are the cause.
Deal with it, wattsupwiththat readers, or risk becoming increasingly ridiculous.

February 28, 2010 9:16 am

First, thanks to Dr. Spencer for an interesting article.
Regarding my question on calculating daily average:

JLKrueger (05:01:08)

Thank you, I will consider those “back of the envelope” ideas. Perhaps I will build them into my software.

mark (05:43:06) :

Thank you for the reference! Looks like one has some freedom to explore various scenarios with such frequent recording (every 10 minutes).

Vuk etc. (07:48:40) :

I am an engineer too, so I like engineering approaches. Perhaps 3 derived curves might be useful in addition to the raw data: a running 24h mean, a 3-4 hour average around the daytime peak, and a similar one for the night.

Pamela Gray
February 28, 2010 9:22 am

Oh good heavens Mike. Citations? Mechanisms? At least provide links. Otherwise, I can give you links to web pages with colorful pictorial explanations and interactive learning games related to science. You can even have an adult set the difficulty level for you.

Steve Goddard
February 28, 2010 9:23 am

I’d like to hear Dr. Spencer explain why January UAH temps had a very large spike which was not seen in GISS temperatures.

February 28, 2010 9:32 am

rbateman (22:29:27) :
I finished up the semi-rural station of Grants Pass, Oregon.
http://www.robertb.darkhorizons.org/TempGr/GrPass1889_2009.GIF
The years of 2002-4 were a mess, with one of them missing 4 months of data
(good grief !). I used Ashland, Or. to match up the pattern and fill in.
rbateman, I have lived in Grants Pass, Oregon recently, as in the last year. There are multiple stations available within the local land features for this information.
Ashland is a completely different microclimate 45 miles away, on the side of a mountain, with different wind patterns; it’s where we go snow skiing.
Grants Pass is inside a bowl with high mountains around it, and it has its own weather features due to a lack of air circulation.
The airport of Grants Pass, which is actually in Merlin, has a weather station named after it that is actually about 6-8 miles away from the airport, at the top of a mountain pass: a location significantly colder than the town itself.
I wouldn’t use Ashland for Grants Pass.
Grants Pass regularly gets air-inversion events. I lived 250 ft above Grants Pass, on the side of one of the hills overlooking town. Our temps changed by staggering amounts compared to the valley floor, where my office was.
I could be 5 degrees warmer, or 5 degrees colder, than the valley floor, depending on the fog/snow conditions inside the bowl. 250 ft was enough to have snow stay for days at my house, or melt in hours on the valley floor, for example.
Best Regards,
Jack

John Peter
February 28, 2010 9:35 am

I don’t think that anyone has drawn attention to Christopher Booker’s latest frontal attack on the IPCC and AGW supporters in general headed:
“A perfect storm is brewing for the IPCC” here on
http://www.telegraph.co.uk/comment/7332803/A-perfect-storm-is-brewing-for-the-IPCC.html
dated 27 February 2010. As usual he is not mincing his words.

steven mosher
February 28, 2010 9:36 am

For people who have concerns about how a “daily” temperature reading is taken, there is a treasure trove of highly accurate data (either 5-minute or 1-hour) from the USCRN. Every station has three sensors. You can download that data and compare these measures over time:
1. Average the temperature by integration.
2. Average the temperature by (tmin+tmax)/2
3. Average the temperature by taking 4 measurements as Spencer has.
Then you can compare all three.
Then you can see whether the methods give different answers for the computed trends. Chances are that when you average 30 days or so for a month, and 12 months for a year, you will see no substantial difference in the trend over decades.
The issues since at least 2007 have been the data sources, the metadata, and the adjustments: adjustments made prior to GISS or HadCRU.
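For anyone who wants to try Mosher’s comparison, here is a minimal sketch (my own, not his) in Python, assuming you already have a day’s hourly temperatures in hand; the function names and the hourly-array layout are assumptions for illustration:

import numpy as np

def daily_means(hourly):
    # hourly: 24 temperature readings for one day, index 0 = 00 UTC (assumed layout)
    t = np.asarray(hourly, dtype=float)
    m_integral = t.mean()                    # 1. approximate the integrated mean
    m_minmax = (t.min() + t.max()) / 2.0     # 2. (tmin + tmax) / 2
    m_synoptic = t[[0, 6, 12, 18]].mean()    # 3. four synoptic readings, as Spencer does
    return m_integral, m_minmax, m_synoptic

def decadal_trend(annual_means, years):
    # least-squares slope, converted from deg per year to deg per decade
    return 10.0 * np.polyfit(years, annual_means, 1)[0]

Averaging each method up to months and years, and feeding the results to decadal_trend, lets you test Mosher’s expectation that the three methods converge on essentially the same multi-decade trend.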

rbateman
February 28, 2010 9:37 am

Ivan (08:24:45) :
If the UHI is going up because of an increase in buildings that absorb heat, then I would suspect changes in the atmosphere that absorb heat (UAH going up); and since the rural dataset is headed down, a lot of that tropospheric absorption is not getting down to where we live. We live where it counts: on the ground.

Ivan
February 28, 2010 9:40 am

“As I stated in another thread, satellite and surface temps are apples and oranges to some extent. Standard, bare-bones GHG theory says mid-tropospheric temperature trends (satellites) should be magnified almost 2X relative to surface trends.”
This is not correct. A two-times (or even slightly higher) amplification rate is expected to occur only in the tropics, while in the extra-tropical latitudes the trends at the surface and aloft should be roughly equal. In the polar regions there should be a higher trend at the surface.
So, a 3-times-higher rate of warming in the atmosphere over the USA than at the surface is inconsistent even with the GHG models. Something really big must be wrong with at least one of these data sets.

steven mosher
February 28, 2010 9:42 am

rbateman (08:21:00) :
There are three main sources of data that go into USHCN. Not so sure about the GHCN stuff. I haven’t looked at dailies since 2007 or so, when I first got interested in this; anyway, just follow your nose.
SOMEBODY needs to do the big old flowchart on this stuff and make it a permanent resource for the community of people who want to comment or investigate. It’s dataset hell.
Quoting:
The three sources of daily observations included DSI-3200, DSI-3206 and DSI-3210. Daily maximum and minimum temperature values that passed the evaluation checks were used to compute monthly average values. However, no monthly temperature average or total precipitation value was calculated for station-months in which more than 9 [daily values] were missing or flagged as erroneous. Monthly values calculated from the three daily data sources were then merged with two additional sources of monthly data values to form a comprehensive dataset of serial monthly temperature and precipitation values for each HCN station. Duplicate records between data sources were eliminated. Following the merging procedure, the monthly values from all stations were subject to an additional set of quality evaluation procedures, which removed between 0.1 and 0.2% of monthly temperature values and less than 0.02% of monthly precipitation values.

steven mosher
February 28, 2010 9:49 am

Vuk etc. (07:48:40) :
Carsten Arnholm, Norway (04:36:34) :
“I have just set up my own weather station and record data every 10 minutes. I am guessing that recording only min/max per day or 4 times per day as above might produce different results than averaging 6*24=144 daily values.”
Look, this kind of data already exists in good quantities over periods of years. This work has already been done.
Just start with the hourly data (AND LEARN ABOUT TOBS):
http://www.john-daly.com/tob/TOBSUM.HTM
http://www.john-daly.com/tob/TOBSUMC.HTM
Hat tip to jerryB, who taught me way more than I ever wanted to know about TOBS.

rbateman
February 28, 2010 9:49 am

mike roddy (09:14:21) :
Even according to Spencer, it’s still warming, so what’s the point? Glaciers are melting, antarctic ice is calving, and birds and plants are migrating north. Humans are the cause.

I have not observed birds and plants migrating north, but I have observed birds migrating increasingly south.

3x2
February 28, 2010 9:50 am

David Schnare (07:34:16) :
However, the excellent comments made by many on this site suggest to me that we are not going to find a clean, perfect record at any site, no matter how well sited. […The temperature records were not kept to support scientific studies made 100 years later.]
Have to agree; I’m not sure we will ever get some kind of perfect site(s). Who, when the instrumentation was set up, could have known that there would be arguments over a few tenths of a degree here and there one hundred and fifty years later?
So, we have an imperfect data set. Lucy gets it right in that we need to clean up the “original” data.
I cannot see that “cleaning up” will help. All of the original source information should be freely available to all (as far as I can see much is already digitised over at the NOAA). That source should be locked as and when it is digitised. From that point, be you Dr. Jones or “Lucy”, you are all working from one single inviolate source. Under this kind of system you simply “SQL” a sub-set of the raw source and work on it as you will. Given your SQL source and an adequate description of your subsequent actions anybody can replicate your work and your results. End of “conspiracy theories” and end of “you are not qualified to know”.
Until we have that data base, I don’t believe GISS, CRU or NCDC has enough knowledge to discuss the uncertainty surrounding the data, much less any data projections made there on.
Couldn’t agree more. There are always going to be arguments about “Darwin” and “Matanuska”. I still cannot agree with some that the minutiae don’t matter. If there is no confidence in the trend for an individual site, what confidence can there be in “regional” or “global” trends? The suggestion (by some) seems to be that it is OK if the “accounts” are fabricated at the “unit level”; we can still trust that the corporate-level accounts [and your pension fund] are safe. That makes absolutely no sense, and would be illegal, if it were accounting we were discussing.
Basically, NCDC needs to do more than hit the reset button. It needs to make available in easy to use form all the actually reported data from each station (…)
Its a long slog, but considering the economic consequences of going haphazardly forward, it is worth doing and is doable.

Not sure what else can be done other than a “reset”, given the current level of trust in “climate science”. The “climate science” community needs to think long and hard about openness and reproducibility on an international level and scale. Get with the 21st century. It takes me a couple of minutes to reproduce a graph that Willis (for example) has created to support his post; welcome to the information age.
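As an illustration of 3x2’s “single inviolate source” idea, here is a minimal sketch using Python’s built-in sqlite3 module; the file name, table and column names are hypothetical, invented for illustration only:

import sqlite3

# Open the locked raw archive read-only, so nobody can alter the source.
con = sqlite3.connect("file:raw_stations.db?mode=ro", uri=True)
rows = con.execute(
    """SELECT station_id, year, month, temp_c
       FROM raw_monthly
       WHERE country = 'US' AND year BETWEEN 1973 AND 2009
       ORDER BY station_id, year, month"""
).fetchall()
con.close()

Publishing that query alongside your results would let anyone pull the exact same subset and replicate your work, which is the end of the “conspiracy theories” 3x2 has in mind.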

Kevin Kilty
February 28, 2010 9:51 am

Kum Dollison (21:47:46) :
Ivan (20:28:06) :
USA 48 RURAL 1979-2009 – WARMING 0.08 degrees K PER DECADE
USA 48 URBAN 1979-2009 – WARMING 0.25 degrees K PER DECADE
USA 48 UAH 1979-2009 – WARMING 0.22 degrees PER DECADE
So: UAH and URBAN WRONG??????
Or RURAL WRONG?????
Any thoughts?
Someone really needs to answer Ivan’s question.

OK, I’ll give it a shot. Temperature is a lot harder to measure accurately than most people think. The above data sets do not all measure the same temperature. The rural and urban stations are different sets and are affected by different influences. Corrections applied to the data, as we now see, are problematic. If the UAH set mentioned is the satellite data, then it measures temperature at all sorts of different heights in the atmosphere depending on channel. Just try to measure the temperature of a glass of ice water accurately and you’ll get an impression of the problem. It ought to be 0C but is in fact different at various places inside the glass.
Someone on a different thread yesterday said “we are trying to make a silk purse out of a sow’s ear.” Prosaic, but true. The data may never be capable of the sort of resolution people are hoping for, and that some claim it already has. I actually have doubts that the global mean temperature is all that useful in the first place.

A C Osborn
February 28, 2010 9:52 am

mike roddy (09:14:21) :
Only an idiot would come onto this science-oriented site and make a statement like that.
You obviously need to do some more reading here.

February 28, 2010 9:54 am

Your Surfacestations.org link does not work.
“Sorry, the page you were looking for could not be found”
Does that mean it is now gone?
REPLY: No, just being retooled to handle a traffic surge.

steven mosher
February 28, 2010 9:57 am

David Schnare (07:34:16) :
people should take great care in “cleaning up” the data and “adjusting” for UHI.
Anyway, it would be great if folks from individual states took the lead on researching their state histories.
At the bottom of all this is not climate science; at the bottom is history and record keeping. The “science” part of it is just stats: cookbook, garden-variety stats.

Ivan
February 28, 2010 9:58 am

Osborn
“The satellite data is showing record high values for January and now February this year; it looks almost as if something is incrementally adding to the values.
How does the good Dr rationalise that with the current NH weather and his own US results for 2010 in the graph above?”
This is quite a separate issue. You are comparing apples and oranges and obscuring, maybe unintentionally, the real problem. What you are saying basically boils down to the following: we in the USA have a cold winter, Spencer’s data show a warm winter for the ENTIRE world, so they must be wrong. No, they need not be.
What I am doing is something entirely different. I am comparing apples with apples: the rural-station trend in the USA 1979-2009 against the UAH satellite trend for USA tropospheric temperature over the same period. These two trends should be roughly equal or similar, but they are clearly not. Actually, the UAH trend is 3 times higher, and that requires an explanation. “Too high” January temperatures for the entire world have nothing to do with that. Even if this January really was as warm as Spencer and RSS have reported, that would not remove the problem of the inconsistency between surface and satellite data over the USA in the least.

steven mosher
February 28, 2010 10:04 am

Lucy Skywalker (17:44:20) :
long records in pristine spots.
That’s a great idea. Start a list, or get some of the guys who program and can pull records from GHCN and ISH to make a list.
Station name, ID numbers (various), lat, lon, alt.
Start with that.

DirkH
February 28, 2010 10:14 am

mike roddy (09:14:21) :
[..]Glaciers are melting, antarctic ice is calving,[…]. Humans are the cause.”
Mike, glaciers always melt, icebergs always calve. They don’t need humans for that. Check your logic. Even the BBC, no stranger to warnings of global meltdown, has admitted that the break-off of that glacier tongue in the Antarctic is not related to global warming. So who looks ridiculous now?

Kum Dollison
February 28, 2010 10:24 am

I’m beginning to think it’s all a con job on both sides of the issue. Dr. Spencer attempts to measure something in some place called the troposphere. God only knows what Hansen et al. are measuring.
The good folks of Mississippi have been taking the temperature on the ground in Mississippi for 100+ years, and it’s been, basically, a flat line. I understand the story is the same in most other states. I wouldn’t be surprised if the same applied to almost any place in Europe, Australia, China, Africa, or Russia that I picked out by throwing a dart at a map.
I feel like all I’m witnessing is massive rent-seeking from both sides of the aisle. The only ones I see actually doing ANYTHING that resembles what I would call the scientific method are ANTHONY WATTS AND HIS VOLUNTEERS. At least they are attempting to “calibrate” their instruments before they get all wound up in trying to advance some hare-brained theory.
If I were Dr. Spencer I would be very concerned that my proxy isn’t matching up with temps “on the ground.” I don’t see how I could proceed any further until I’d managed to reconcile these differences.
All Heat – No Light. A Pox on all houses.

February 28, 2010 10:25 am

3×2 (09:50:07) :
David Schnare (07:34:16) :
Why does one need more than the following?
1) all the historical raw temperature data for sites, split into 3 groupings, i.e. rural, urban and mixed (i.e. sites that have transitioned over time from one type to the other)?
2) a piece of software (Excel) to calculate the averaged raw temperature across sites by time within each of the 3 groupings.
3) an acceptance of the law of large numbers
http://en.wikipedia.org/wiki/Law_of_large_numbers
Then calculate the temperature trend for each set. How will that not settle whether we have had significant warming without urban heat influences?
Who is denying access to such raw data?
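A minimal sketch of son of mulder’s three-group calculation, assuming records of the form (station_id, class, year, annual_mean_temp) with class being “rural”, “urban” or “mixed”; all names here are mine, for illustration:

from collections import defaultdict
import numpy as np

def class_trends(records):
    # Pool station temperatures per class and per year.
    by_class = defaultdict(lambda: defaultdict(list))
    for sid, cls, year, temp in records:
        by_class[cls][year].append(temp)
    trends = {}
    for cls, yearly in by_class.items():
        years = sorted(yearly)
        means = [np.mean(yearly[y]) for y in years]   # law of large numbers does the smoothing
        trends[cls] = 10.0 * np.polyfit(years, means, 1)[0]  # deg C per decade
    return trends

Comparing trends["rural"], trends["mixed"] and trends["urban"] is exactly the three-way comparison proposed above.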

February 28, 2010 10:27 am

Has everyone forgotten we are at the high point of a strong El Nino cycle, when global temperatures are around 0.7-1.0 degrees C higher than normal?
Not surprisingly, global temperatures for January and February this year are higher than normal.

David Alan Evans
February 28, 2010 10:48 am

In my opinion, simplicity, objectivity, and repeatability should be of paramount importance.

Isn’t that what used to be called science?
DaveE.

Rereke Whakaaro
February 28, 2010 11:04 am

Forgive me if this has been mentioned on this thread before – there are so many interesting comments, but I haven’t had time to read them all – day job pressures, you understand.
But has anybody considered identifying weather stations at airports and treating them as a separate data set?
These sites are where they are primarily for safety reasons. It is important for aircraft to have accurate information about the weather conditions *on the runway* for take-off and landing. That is their intent, and if they are calibrated, that is what they are calibrated for.
Their purpose is therefore different from that of sites intended to help farmers decide when to plant and when to harvest. Their purpose is also different from that of urban sites primarily intended to manage energy-load requirements, and to help people decide what to wear today.
So, perhaps there should be three data sets: Urban, Rural, and Avionic.
Analysis can then be done, comparing like with like. It would also be possible to apply “standard” adjustments at the data set level in order to combine two or more data sets in a predictable way.
As I see it, the current practice of adjusting each site as a stand-alone entity is fraught with problems of consistency. It is also open to interpretation by the person doing the individual adjustment to each station.

Ivan
February 28, 2010 11:09 am

Kevin Kilty:
“If the UAH set mentioned is the satellite data, then it measures temperature at all sorts of different heights in the atmosphere depending on channel. Just try to measure the temperature of a glass of ice water accurately and you’ll get an impression of the problem. It ought to be 0C but is in fact different at various places inside the glass.”
======================
So, your argument is that there is nothing strange if the lower-tropospheric trend over the USA, as calculated by UAH and reported as the “USA 48 trend” on their website, is 3 times higher than the surface trend in the USA as measured by the rural stations, because those two sets measure “different things”? Is there any known theory that predicts, or even allows, that over a vast portion of a very large continent the surface temperature trend over a period of 30 years should be 3 times lower than the trend at an altitude of 4.5 km above the same continent?

pkasse
February 28, 2010 11:11 am

sunsettommy (09:54:52) :
Your Surfacestations.org link does not work.
“Sorry, the page you were looking for could not be found”
Does that mean it is now gone?
REPLY: No, just being retooled to handle a traffic surge
“…traffic surge”
Is this a hint of a forthcoming announcement?

rbateman
February 28, 2010 11:17 am

Peter Miller (10:27:49) :
Not surprisingly, global temperatures for January and February this year are higher than normal.

Where?

Manfred
February 28, 2010 11:18 am

Ivan (09:58:26) : UAH versus rural
rural and UAH are not measuring the same thing.
http://climateaudit.files.wordpress.com/2008/06/hadat43.gif
Looking at the above picture, with the L48 USA situated roughly between 30-48 deg latitude, the UAH-measured 600 mbar troposphere should warm approximately 2.7/1.6 times faster than the ground.
That is a factor of about 1.7.
0.22 deg / 1.7 is 0.13 deg.
So this fact, generally ignored by warmists, removes almost two-thirds of the difference.
The remaining 0.05 deg may well be explicable by inaccuracies in the above picture or in temperature measurements in general.
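Spelling out the arithmetic in Manfred’s comment (a sketch that assumes his 1.7 amplification factor, read off the linked figure, is correct):

\[
\frac{0.22\ \mathrm{K/decade}}{1.7} \approx 0.13\ \mathrm{K/decade},
\qquad 0.13 - 0.08 = 0.05\ \mathrm{K/decade\ left\ unexplained}.
\]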

Star
February 28, 2010 11:27 am

Lately I’ve been looking at the weather reports because I barely watch the news. I want to know if the world is coming to an end because of heat and upcoming weather events. If so, can y’all’s technology detect when the heat will destroy the world?

ClimateWatcher
February 28, 2010 12:03 pm

Independently, a researcher at the University of Washington is attempting a similar analysis of the US temperature record. He is currently having roadblocks thrown in his way by a senior faculty member who, in a manner eerily reminiscent of Climategate, is trying to deny him access to an important data resource needed to carry out this research properly.

aMINO aCIDS iN mETEORITES
February 28, 2010 12:06 pm

DeNihilist (22:31:43) :
Here is the best example yet of torturing the data to get the result wanted!
……………………………………………………………………………………………………………….
There’s a problem with his math: in 10 minutes from now the USA, not Canada, must win the gold today!

Christopher Hanley
February 28, 2010 12:09 pm

The surface temperature record 1979-2010 shows a warming trend of about 0.17°C/decade while the satellite trend is about 0.13°C /decade.
http://www.woodfortrees.org/plot/gistemp/from:1979/trend/offset:-0.1/plot/uah/trend/offset:0.1
Can that discrepancy be extrapolated back in time?
This February may be the warmest in the 30 year satellite record, but where, for instance, is it in relation to the late 1930s?

February 28, 2010 12:18 pm

Guys, I have been adding the CRU data to my site, http://www.knowyourplanet.com/climate-data for people to browse around. I read a lot of the posts and comments above, and I would say that many of you would appreciate this.
There are quite a few maps; I am still loading Russia, Europe, the Pacific and Asia, but North and South America are more or less complete.
The Google app for the graphs is pretty cool: it can display a line graph or a rolling animation, and you can zoom and change colours.

Alexej Buergin
February 28, 2010 12:27 pm

” mike roddy (09:14:21) :
Even according to Spencer, it’s still warming, so what’s the point? Glaciers are melting, antarctic ice is calving, and birds and plants are migrating north. Humans are the cause.”
Mike, it is much worse. Antarctic ice is not only calving, it is disappearing. We have lost about 15 million square kilometers of sea ice these last few months. That is bad, really BAD.

February 28, 2010 12:39 pm

son of mulder (10:25:58) :
3×2 (09:50:07) :
David Schnare (07:34:16) :
Why does one need more than the following?
1) all the historical raw temperature data for sites split into 3 groupings ie rural, urban and mixed (ie sites that have transitioned over time from one type to the other?
2) a piece of software (excel) to calculate the averaged raw temperarture across sites by time within each of the 3 groupings.
3) an acceptance of the law of large numbers
http://en.wikipedia.org/wiki/Law_of_large_numbers
Then calculate the temperature trend for each set. How will that not settle whether we have had significant warming without urban heat influences?
—————————
How long have you been at WUWT? Why would you want to further complicate things?
1) As various recent threads have shown, it is not always easy to discern whether sites can be classified as urban or rural or when a transition has occurred. How are the classifications made: population density vs. urban structures vs. light intensity as recorded by satellites? See for starters:
http://wattsupwiththat.com/2010/02/26/contribution-of-ushcn-and-giss-bias-in-long-term-temperature-records-for-a-well-sited-rural-weather-station/
http://wattsupwiththat.com/2010/02/26/a-new-paper-comparing-ncdc-rural-and-urban-us-surface-temperature-data/
http://wattsupwiththat.com/2010/02/21/fudged-fevers-in-the-frozen-north/
2) It has become increasingly apparent that there is no simple algorithm for figuring out how to average the raw temperatures of these sites in such a way as to take into account the variations in the increase in UHI over time. Microclimates and physical changes over time, even to rural surface-station sites, introduce variations that should not be ignored. When you add this to the problem of identifying how to classify the locations of different stations, it makes a simple averaging approach pretty much useless in identifying clearly what is going on with global temperatures.
3) Trust the acceptance of the law of large numbers: sure – using only carefully documented raw data from rural stations. Why complicate things with urban or transitional sites?
If you want to study the effect of urbanization, that should be a separate study from a study of natural historical temperature trends. Logical, really: to study natural trends, you need natural settings. To use temperatures contaminated by urbanization (the warming trends of which have been strangely disputed by Phil Jones et al.) is like trying to understand the natural behaviour of forest raccoons by studying them in an urban setting.

February 28, 2010 12:53 pm


jorgekafkazar (00:06:29) :
re: Claude Harvey (19:56:33)
Record snowfall means record amounts of latent heat removed from water vapor to produce ice. The ice falls to the ground; the heat remains in the atmosphere. Somewhere else, ocean heat went into vaporizing seawater. The vapor went up; the ocean cooled. Everything would balance, but high atmospheric temperatures result in increased heat loss to space. Net result: lower actual global heat content.

Bingo!
We had a veritable conveyor belt set up a few weeks ago over Texas (with our record snow event), and we have had overcast skies for weeks (that spells NO insolation, cloud-top albedo being what it is), with nearly constant precip of one form or another (a wringing-out of latent heat energy from terrestrially-sourced water vapor, through evaporation)…
And similar situations/weather events have been occurring across the states to our east and north as well… how would one inventory/audit the change in heat content given these events (compared to, say, these events not occurring: no snow or precip events, insolation with clear skies, etc.)?

February 28, 2010 12:55 pm

Star (11:27:03) :
Lately I’ve been looking at the weather reports because I barely watch the news. I want to know if the world is coming to an end because of heat and upcoming weather events. If so, can y’all’s technology detect when the heat will destroy the world?

Yes, I think it will happen 4 to 5 billion years from now.

Wren
February 28, 2010 1:04 pm

Manfred (11:18:32) :
Ivan (09:58:26) : UAH versus rural
rural and UAH are not measuring the same thing.
http://climateaudit.files.wordpress.com/2008/06/hadat43.gif
Looking at above picture with the L48 USA situated roughly between 30-48 deg latitude, the UAH measured 600 mbar troposphere should warm by approx: 2.7 / 1.6 faster than the ground.
This is a factor of 1.7.
0.22 deg / 1.7 is 0.13 deg.
———————–
I believe Ivan’s question was about the difference between the UAH and rural ground records for the U.S. over the 1979-2009 period. Do you mean UAH “should” warm 1.6 times faster than rural stations in the U.S. over this 30-year period?
I know UAH global records show about the same 1979-2009 warming trend as GISS, despite the latter including ground records of both rural stations and the warming-biased urban stations. If UAH should be warming faster globally, why are the trends so much alike?

February 28, 2010 1:09 pm

rbateman (11:17:27) :
Peter Miller (10:27:49) :
Not surprisingly, global temperatures for January and February this year are higher than normal.
Where?
Everywhere in Australia for one – they are close to the Pacific El Nino.
Also, look at UAH daily temperatures at: discover.itsc.uah.edu/amsutemps/
What I don’t understand about the UAH figures is: Why are the high altitude temperatures decreasing, while the low altitude ones are increasing during the El Nino phenomenon?

Channon
February 28, 2010 1:17 pm

Spurious transitions or step changes, because they occur over such short sections of the data set, can generate several plausible alternative models.
This makes using them as a predictive platform very difficult.
Since the whole data set is quite small and the variance is large, the possibilities for error caused by a spurious observation are large too.
Not much of a foundation to build on.

Ivan
February 28, 2010 1:34 pm

Manfred,
the picture you linked shows the pattern of warming in the case where GHGs are the primary driver of warming. So, you assume that the IPCC is basically right in attributing the warming to GHGs.
Second, Dr Spencer in his analysis in this comment asserts that a correctly calculated surface trend for the USA should be roughly equal to the currently reported UAH satellite trend. If this is so, then, according to your hypothesis, the predicted tropospheric warming should be about 0.37 or 0.38 degrees K per decade (1.7 x surface), not 0.22, as reported by Spencer and Christy. Are you suggesting that Dr Spencer actually meant, by this analysis, that his own satellite data set UNDERESTIMATED the real tropospheric trend by almost half?

old construction worker
February 28, 2010 1:39 pm

‘mike roddy (09:14:21)
Even according to Spencer, it’s still warming, so what’s the point? Glaciers are melting, antarctic ice is calving, and birds and plants are migrating north. Humans are the cause.
Deal with it, wattsupwiththat readers, or risk becoming increasingly ridiculous’
I’m assuming that you are assuming that “CO2 Drives the Climate”?

David Alan Evans
February 28, 2010 1:47 pm

Politicians couldn’t give a rats!
Sorry, it’s pointless, it’s just another tax until the revolution.
DaveE.

wayne
February 28, 2010 2:13 pm

Stephen Wilde (08:13:07) :
“Or would satellite sensors do the job well enough ?”
Hey Stephen, I should have mentioned you in there (on oceans). I agree, satellites would be the last choice.
I included satellites just to give a hypothetical system an alternate view, in case someone didn’t understand the one described on the ground. I wanted everyone to think of the theory, not a specific implementation. How is it best to measure this globe’s temperature? Is it even possible? If so, how? How with the least error, so that no adjustments are needed?
I had to jump out of the box. The whole point of that was to have everyone stop and say: wait, what in the world are we doing? We are going in circles! Another way to say it is that we are performing adjustments on top of adjustments. Crazy! This whole process has bothered me of late, and I just had to come up with a possible alternative.
It’s not meant to be perfect. Add to it. Change it. It’s just a start, so we don’t keep thinking of the same broken system with layers upon layers of adjustments!
telecorder (09:04:23) :
Thanks. I appreciate that at least someone stopped and read it. I didn’t know if it was a hare-brained idea or not; well, yes, I think it’s close. I enjoy theoretical physics; it is a design created this morning for the public to modify and, most of all, to stop and think about. It may never be physical, but in theory it’s what we need to approach.

Daniel
February 28, 2010 2:45 pm

May I suggest a very simple way to avoid any urban heat island effect? It would be to exclude every urban station and focus only on purely rural sites!
Perhaps too simple?
daniel

David Alan Evans
February 28, 2010 3:20 pm

Quite simply.
Temperature alone has NO relevance
Never has & never will!
My tuppence.
DaveE.

Pamela Gray
February 28, 2010 3:45 pm

Daniel, you would have to control for climate-zone effects. These can cause your “random” sample to be not random at all. One of the main pitfalls in data collection is failing to ensure you are taking a random sample. GPS location would become a key measure of randomness if you selected only rural stations. It would tell you whether your data set is randomized for longitude, latitude, altitude, and proximity to local micro-climate parameters such as large bodies of water or mountain shadows. Not to mention proximity to homemade truck-sized BBQs for roasting quarters on a spit.

Pamela Gray
February 28, 2010 3:49 pm

Remember, a sensor’s GPS location can attenuate or accentuate weather-pattern variation drivers (such as greenhouse gases, humidity, clouds, El Nino/La Nina, jet stream influences, pressure cells, topography, etc.). So you must randomize through the climate zones and microclimates.
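One way to act on Pamela Gray’s point is a quick stratification check. Here is a minimal sketch (the field names are mine, assumed for illustration) that bins a candidate rural-station list by latitude band and altitude band, so thin or empty strata stand out:

from collections import Counter

def stratum(station):
    # station: dict with "lat" (degrees) and "alt_m" (metres); names assumed
    lat_band = int(station["lat"] // 10) * 10     # 10-degree latitude bands
    alt_band = "high" if station["alt_m"] > 1000 else "low"
    return (lat_band, alt_band)

def coverage(stations):
    # Counts stations per (latitude, altitude) stratum; empty or thin strata
    # flag a sample that is not randomized the way Pamela describes.
    return Counter(stratum(s) for s in stations)

Proximity to water, topography and the like would need extra fields, but the same counting idea applies.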

Ivan
February 28, 2010 3:52 pm

Daniel: “May I suggest a very simple way to avoid any urban heat island effect? It would be to exclude every urban station and focus only on purely rural sites!
Perhaps too simple?”
Or perhaps too dangerous for the many vested interests in the climate-science industry…

dr.bill
February 28, 2010 3:56 pm

Hello Everyone,
This is my first time posting at WUWT. I live in Montreal, and have been an “avid lurker” on WUWT for a long time (since the NorCal days, actually). I’ve sometimes had the urge to submit a comment, but someone else has always said more or less what I would have, and without much delay, so there was generally no need. I greatly appreciate the existence of this blog and the work that it does. In some sense, I feel that I already “know” some of you, and I find myself looking forward to posts from various individuals. That would make a fairly long list. In particular, though, I have liked almost everything written by Willis Eschenbach, including his recent semi-rants regarding Drs. Ravetz and Curry, and I religiously “click” all of Smokey’s links. 🙂 I am greatly impressed by the thought processes exhibited by many of the contributors, to say nothing of their concern for the integrity of Science. The only mild criticism I might make is that people sometimes spend way too much time feeding trolls, but I suppose that’s rooted in good intentions. (It would also be nice if we could search the site for more than the words contained in post titles, but I seem to remember that Anthony is somewhat at the mercy of other parties in that regard, so I won’t enter a full-blown complaint.)
OK then, introductions done, I am responding to the request by Carsten Arnholm, Norway (04:36:34) regarding the proper way of finding an average temperature. Before doing that, I would like to say that I’m a physicist, and I am thus not entirely comfortable with the notion of finding an average temperature for anything beyond one thermometer at one location. Everybody seems to be doing it anyway, however, and I’ve been known to say things like “It’s been hot today.”, which is tantamount to making a comment on the temperature across a non-infinitesimal region, so if you want to define such a thing and calculate it, here is my take on how it should be done (for that one thermometer).
::
The Lagrange Approach:
To calculate the mean value of any quantity over some interval, the ideal situation is to have a continuous function, which would be integrated over the full interval and divided by its “length”. For daily temperatures, that would mean integrating over 24 hours and dividing by 24. In practice, if only a finite number of equally-spaced readings are available, the integration then becomes a weighted sum of the individual values, a form of numerical quadrature, as it is called.
– The weights are determined by the number of points available and the function used to represent the entire set of values for that day. If we use a polynomial fit, the order of the polynomial can be anything up to one less than the number of measurements used. We can fit a straight line to two points, a parabola to three, and so on. This leads to well-known results such as the Trapezoidal Rule, Simpson’s Rule, etc.
– In Dr. Spencer’s case, there are readings every six hours, starting at Midnight, which effectively gives 5 points per day (or 4 intervals) since the two Midnights would define the start and end of the data set for an individual day. In this case, a quartic (4th order) polynomial can be used. The procedure for doing this is to find the coefficients of the Lagrange Polynomial that exactly reproduces the original five values. There are many ways to do this, some more efficient than others.
– This polynomial is then used in place of the actual continuous function that would give the temperature at any instant, but because it (the polynomial) is an explicit function, it can be integrated over the whole day, and the weights that should be assigned to each of the five readings can be found. If we use Dr. Spencer’s every-six-hours approach, the weighting formula (starting at 00hrs and going to 24hrs in steps of 6hrs) works out to:
T(avg) = {7T(00) + 32T(06) + 12T(12) + 32T(18) + 7T(24)}/90
::
The Chebyshev Approach:
The Lagrange procedure described above is perfectly workable and sound. There is, however, another method that allows for simple arithmetical averaging of the temperature values, and which gives equally valid results. In this case, however, the weighting is accomplished by taking the readings at non-equally-spaced times within the 24-hour period. In other words, instead of unequal weights, we use unequal intervals, but it accomplishes the same thing.
– The time-values at which the readings should be taken are found from the roots of a group of functions called Chebyshev Polynomials (or variant spellings). With four readings, the 4th-order polynomial is appropriate, and is given by f(x) = x^4 – (2/3)x^2 + (1/45). Note that this function is not intended to represent the temperature itself, but rather its roots are used to determine the times at which measurements should be made. As given, it is defined on an interval from x = -1 to +1, so adapting it to a 24-hour time-period would put t = 0 at Noon, and the end-points at -12 and +12hrs, thus covering a “self-contained” day centered on Noon. Note that there would be no reading at Midnight. All four readings would be made “inside of the day”.
– The roots of this polynomial are approximately: +/- 0.18759 and +/- 0.79465, which when translated into time-values would result in the following optimal choices for observation times:
(2:28AM, 9:45AM, 2:15PM, and 9:32PM) or (02:28, 09:45, 14:15, and 21:32)
Readings taken at these times can simply be averaged (just add them up and divide by 4), and would give the best accuracy available with four readings per day. Whether it might be practical to obtain readings on such a precise schedule is, of course, another matter.
::
Try It At Home:
If you want to check the accuracy of these procedures, you can make up functions yourself and try them out. Both will give exact results if the actual function is a polynomial of order lower than five. They can also be used as approximations for any function you like, including non-symmetric transcendental ones, as long as they aren’t “pathologically wiggly” and don’t contain singularities. If you pick something that can be integrated exactly by some analytical method, you can compare the three outcomes, and you will find that they give very similar results.
Example: Find the average value of f(x) = exp(x) + cos(x) on the interval (-1,1).
2.0166706 – Chebyshev Method
2.0166722 – Analytical Result
2.0166745 – Lagrange Method
::
Final Comment:
I would not personally consider such calculations to be valid with fewer than four readings per day because of the Nyquist sampling/aliasing issues that have been previously pointed out by E.M. Smith, “jordan”, and others. And, of course, “if I were God”, I would ban the “two-readings max/min thing” to oblivion. But that’s just me. 🙂
dr.bill
PS: To 10 decimals, the roots of the Chebyshev 4th order polynomial are: +/- 0.1875924741 and +/- 0.7946544723. Any standard textbook on Numerical Analysis intended for physicists, engineers, chemists, or geologists will have full explanations of these methods. Look in the Interpolation and Numerical Quadrature sections. Several other sets of special polynomials (Legendre, Hermite, Laguerre, …) can also be used for such purposes, and they are sometimes collectively referred to as Gaussian Quadrature methods.
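To check dr.bill’s numbers yourself, here is a short Python sketch using his own test function f(x) = exp(x) + cos(x) on (-1, 1); the node values are the ones he quotes:

import numpy as np

f = lambda x: np.exp(x) + np.cos(x)

# Lagrange route: 5 equally spaced readings with his 7-32-12-32-7 weights
x5 = np.linspace(-1.0, 1.0, 5)
w5 = np.array([7, 32, 12, 32, 7]) / 90.0
lagrange = np.dot(w5, f(x5))                  # ~ 2.0166745

# Chebyshev route: equal weights at the roots of x^4 - (2/3)x^2 + 1/45
xc = np.array([-0.7946544723, -0.1875924741, 0.1875924741, 0.7946544723])
chebyshev = f(xc).mean()                      # ~ 2.0166706

exact = (np.exp(1) - np.exp(-1) + 2 * np.sin(1)) / 2.0   # ~ 2.0166722
print(lagrange, chebyshev, exact)

The three outputs reproduce the comparison in his example to the digits shown.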

February 28, 2010 4:37 pm

” vigilantfish (12:39:39) :
Why would you want to further complicate things?”
Forget about global average temperature and focus on average temperature changes for the global population of measuring sites.
I’m trying to simplify, as I’m sure it’s possible to find enough site characteristics that some could argue each site is unique, which would get us nowhere.
May I suggest that if a site is rural now, it be classed as always having been rural; worldwide, the number of such sites would be large enough to calculate a reasonably accurate average warming/cooling trend for these sites (forget about grid squares).
Say one then found that rural sites averaged a small increase, transition sites were steepest, and always-urban sites averaged between the other 2; then I believe it reasonable that UHI has been demonstrated, measured and characterised, and most importantly such an average measure of rural temperature change would give a reasonable bound on anthropic CO2 warming. Of course other results might be found and would need to be explained, but what odds would you give me that rural sites weren’t the lowest growth?
If a site is urban now, then it would be known whether it used to be rural at some point, and if so it’s a transition site.
If a site is known to have always been urban, then class it as urban.
By averaging each class separately you’d get 3 measurable comparative behaviours.
Of course one could just look at sites that are currently rural, as Daniel (14:45:17) suggests, which I’d be happy with as a yardstick for AGW growth. But I’d suggest the urban evidence is needed to help fully explain the results currently provided by UEA, GISS etc.
And I ask again, Who is denying access to such raw data, as certainly some seems to be available judging from
http://wattsupwiththat.com/2010/02/26/a-new-paper-comparing-ncdc-rural-and-urban-us-surface-temperature-data/

DeNihilist
February 28, 2010 4:52 pm

Amino: “There’s a problem with his math: in 10 minutes from now the USA, not Canada, must win the gold today!”
Hmmm, looks like The Team may be onto something here, eh? Torture, torture, ah, there…
Sorry, we won!

wayne
February 28, 2010 4:54 pm

dr.bill (15:56:52) :
Excellent! Excellent! You just cleared up a problem I have had for a long time concerning Chebyshev polynomials. Thanks.

February 28, 2010 4:59 pm

Apparently you have reproduced Menne, Williams and Palecki’s finding that electronic temperature sensors have a slight cooling bias.
REPLY: Josh, knowing you, apparently you’ll just take that wrong impression and shout it from the rooftops. Let me be clear. Menne et al used an incomplete dataset against my wishes, denying my right to publish first. At 88% the network looks a lot different. If the situation were reversed, you and your cronies would be all over me, telling the world how terrible I am for doing such a thing. Yet you and your band of anonymous bunny trolls give Menne a free pass for his actions because you side with the findings. Such integrity. Now back to your hole, bunny boy. Watch out for flying cabbages. – Anthony Watts

harrywr2
February 28, 2010 5:04 pm

mike roddy (09:14:21) :
“Even according to Spencer, it’s still warming, so what’s the point?”
The question isn’t whether it is warming or cooling, the question is whether it is warming or cooling in an historically unprecedented way.
Scenario A)
Total global oil reserves are burned up in 50 years. Total world coal reserves burned up in 113 years. CO2 emissions problem solved. There won’t be any more oil or coal to burn.
Somewhere between now and the time oil and coal run out Clean Nuclear or Affordable Solar become a reality. The world switches.
Scenario B)
The world will burn up if we don’t invest in expensive technology now.
Somewhere in the middle is probably reality…how fast we are warming dictates which road to take. We will get off of fossil fuels in the next 100 years no matter what.

February 28, 2010 5:07 pm

” Ivan (15:52:24) :
Or perhaps too dangerous for so many vested interests in climate science industry…”
A good headline would be ‘Insignificant global manmade CO2 effect on rural thermometers’. From that, most folk would understand that AGW was dead.

Pamela Gray
February 28, 2010 5:27 pm

Trolls are like Furbies. If you pay attention to them they do funny things. It’s like having a hamster in a roller ball cage meandering about the classroom. They continue to be entertaining if you feed and water them now and then.

wayne
February 28, 2010 5:42 pm

Pamela Gray (15:45:05) :
Daniel, you would have to control for climate-zone effects. These can cause your “random” sample to be not random at all. One of the main pitfalls in data collection is failing to ensure you are taking a random sample. GPS location would become a key measure of randomness if you selected only rural stations. It would tell you whether your data set is randomized for longitude, latitude, altitude, and proximity to local micro-climate parameters such as large bodies of water or mountain shadows. Not to mention proximity to homemade truck-sized BBQs for roasting quarters on a spit.
Pamela, respectfully I must disagree here. You are talking of random samples. I assume you are saying cities must be included in the sample because, if they were left out, you would not have a random sample. Exaggerate the example: think in your mind that the UHI is twenty degrees, a UHI pictorial much like a big bump over the city. Anthony has a couple of good illustrations of this. Now, when wind is blowing there is little UHI effect; the heat doesn’t create the bubble. When there is no wind, you have the full UHI effect. If your objective is to measure the world’s temperature as accurately as possible, why would you include cities in your measurements? The heat from the city will continuously be dispersed to the surrounding rural locations, spread out and smoothed. If you leave them in, the displacement of the measurement would never be more than the excess heat you see in the bubble over the city, but it would add large amounts of error, now depending on the wind and its speed. Error and complexity have sneaked in because the cities were included.
Now, the rural, generated-heat-free sites: they must be randomly distributed, or at least form an evenly covered grid as closely as possible. Am I missing something?
Now about the micro-climates: unlike the UHI effects of the cities, where extraneous heat is created, they are part of the world we are measuring and do not create heat of themselves. Every point on this Earth is part of some micro-climate. I don’t see why you are treating them as something to be specially handled.
This is a good example of how most have seemingly accepted as real and necessary errors and deviations that could be eliminated by looking at the problem in a new light. I think Daniel is correct. You seem to be in the mind-set of controlling, not thinking of a measurement system that needs no controlling, a system that handles itself; and of course, all of the above is only proper physics and science.

Noelene
February 28, 2010 5:52 pm

Pamela Gray
The hamsters are also distracting the students; the trolls know what they are doing, the intelligent ones anyway.
Furbies-hehe
Remember the subliminal messages implanted by the Japanese?
Some people actually believed that.

wayne
February 28, 2010 6:05 pm

son of mulder (16:37:55) :
“And I ask again, Who is denying access to such raw data, as certainly some seems to be available judging from ”
For per-station daily raw data, I have only found sites (NCDC/NOAA) that want you to purchase the data or order CDs, or that cover only recent years. If you can find an explicit link to a page or ftp directory, please let us know. Some now limit access to .gov, .edu and .org domains, but I can’t recall exactly where.

DeNihilist
February 28, 2010 10:17 pm

This comment – “Eli Rabett (16:59:39) :
Apparently you have reproduced Menne, Williams and Palecki’s finding that electronic temperature sensors have a slight cooling bias.”
is a bit off-putting, for it can be taken in more than one way. Anthony, if you have submitted your paper for review, it could be taken that, once again, The Team has been discussing others’ work before publication! If so, then the S**** had better hit the fan fast and HARD!
Another way of looking at it: if the electronic sensors actually showed a cooler temp when put into operation, like the satellite data, I personally would trust the thermistors. Maybe, mr. rabbit, the temps were actually cooler…

Patrick Davis
February 28, 2010 10:57 pm

Peter Miller (13:09:46) :
rbateman (11:17:27) :
Peter Miller (10:27:49) :
Not surprisingly, global temperatures for January and February this year are higher than normal.
Where?
Everywhere in Australia for one – they are close to the Pacific El Nino.”
Where in Australia exactly are temperatures higher than normal? Certainly not where I am: today, 1 day after the end of summer, it rose to a “scorching” 19C in Sydney, possibly 22C in the inner west. That’s NOT usual summer/autumn temperatures. But if you want to believe KRudd747, Mzzzz W(r)ong, Mzzzz Gillard, the now demoted Environment Minister Mr Garrett, and the heavily biased Australian MSM that “our beds are burning”, you are free to do so. Spring was warm; summer was a lot cooler than usual but was horridly sticky, with humidity up to 95% at times. Last summer was pretty cool too and, just like last summer, there were almost no flies. I think if “summers” continue to be this cold I’ll forget how to do the “Aussie wave”!!!
Let’s see where this winter heads. I’ll predict that we’ll see an early start to the snow season in Victoria and New South Wales, possibly up to 4-6 weeks early, maybe earlier.
“mike roddy (09:14:21) :
Even according to Spencer, it’s still warming, so what’s the point? Glaciers are melting, antarctic ice is calving, and birds and plants are migrating north. Humans are the cause.”
As we say in Aus, yeah right!!!!

Orkneygal
February 28, 2010 11:19 pm

Dear Dr. Spencer-
I am a student in Conservation Biology at Victoria University, Wellington New Zealand and I just wanted to let you know how much I appreciate the work that you have done here.
Thank you.
Orkneygal

Manfred
February 28, 2010 11:29 pm

Wren (13:04:31) :
“I believe Ivan’s question was a about the difference between UAH and rural ground records for the U.S. over the 1979-2009 period. Do you mean UAH “should” warm 1.6 times faster than rural stations in the U.S. over this 30-year period?”
It should have warmed 1.7 times faster. Actually, UAH has warmed even more slowly, which is another strong indication that ground-based measurements have a strong warming bias.

February 28, 2010 11:48 pm

Response to Patrick Davis re Australia – Average February 2010 temperatures:
City Min Max
Alice Springs* +0.6 -0.8
Adelaide +1.9 +2.1
Canberra +1.9 +2.1
Darwin +1.1 +1.2
Melbourne +1.8 +2.2
Perth +0.6 +0.5
Sydney +2.4 +1.7
All figures in degrees C
* Almost 5 times average rainfall in February. Source: Weatherzone

David W
February 28, 2010 11:48 pm

Finally, some research based on pure data without any adjustments, fixes, translations, or one-off increases/decreases… And what does this study show? A conclusion at odds with the supposedly basic analysis of “raw” temperature data from the AGW sect (which turns out not to be raw at all, but massaged several times). This sort of back-to-basics analysis is going to be critical in unraveling the eco-political mess that is global warming science. More power to Roy W Spencer.

Manfred
February 28, 2010 11:51 pm

Ivan (13:34:07) :
“Manfred,
the picture you linked shows the pattern of warming in the case where GHGs are the primary driver of warming. So, you assume that the IPCC is basically right in attributing the warming to GHGs.
Second, Dr Spencer in his analysis in this comment asserts that a correctly calculated surface trend for the USA should be roughly equal to the currently reported UAH satellite trend. If this is so, then, according to your hypothesis, the predicted tropospheric warming should be about 0.37 or 0.38 degrees K per decade (1.7 x surface), not 0.22, as reported by Spencer and Christy. Are you suggesting that Dr Spencer actually meant, by this analysis, that his own satellite data set UNDERESTIMATED the real tropospheric trend by almost half?”
1. realclimate claims, that the enhanced tropospheric warming should occur not only for greenhouse gases. they write:
“The basis of the issue is that models produce an enhanced warming in the tropical troposphere when there is warming at the surface. This is true enough. Whether the warming is from greenhouse gases, El Nino’s, or solar forcing, trends aloft are enhanced.”
http://www.realclimate.org/index.php/archives/2007/12/tropical-troposphere-trends/
here is their comparison for 2*CO2 and solar forcing:
http://www.realclimate.org/images/solar_tropical_enhance.gif
(Though the models are most likely wrong concerning feedback, they may be correct on this more fundamental point, which doesn’t require (garbage) assumptions.)
2. I can’t speak for Dr. Spencer, but in my opinion the above trend enhancement is not negligible. The warming bias of the ground-based measurements is then even worse.
3. Assuming that ground-based measurements are not reliable (due to criticism by Watts, McKitrick, Pielke, missing UHI adjustments, dozens of case studies…), and assuming the open-source-code, open-data satellite measurements are correct, land-based measurements then roughly overstate warming by a factor of 2. This is in very good accordance with other peer-reviewed literature, such as McKitrick’s.

Manfred
March 1, 2010 12:08 am

Actually, CRU does NOT correct for UHI; they only increase the uncertainty a little.
http://climateaudit.org/2009/01/20/realclimate-and-disinformation-on-uhi/
GISS does some correction, but outside the US there are almost as many downward (!) as upward corrections for UHI, making their algorithm appear useless.
By far the worst of the pack is Tom Karl’s NOAA, which does no correction at all. This is particularly disturbing because NOAA controls several other most-important data sets as well, and because Karl has been appointed chief of the new, influential government agency.

E.M.Smith
Editor
March 1, 2010 12:42 am

I would wager that the further back in time you extend your series, the greater the divergence will be… The most extreme data oddities lie further in the past…

March 1, 2010 1:43 am

” wayne (17:42:15) :
Now the rural, generated-heat-free sites, they must be randomly distributed or at least form an evenly covered grid as close as possible. Am I missing something?”
The best we’ll have is the global set of rural stations. So what is happening to their average raw temperature measurement over time?

March 1, 2010 1:46 am

” wayne (18:05:24) :
For per-station daily raw data, I have only found sites (NCDC/NOAA) wanting to purchase the data or order CDs or are only of recent years. If you can find an explicit link to a page or ftp directory, please let us know. Some now have .gov, .edu, .org domain limits for access but can’t recall exactly where.”
Do you believe we are not allowed to see the elephant in the room?

Espen
March 1, 2010 2:24 am

An interesting exercise is to use the GISS map tool to try to pinpoint where the extra winter warming is located. I compared the 1921-1950 period to the 1980-2009 period for Dec-Jan-Feb and for the summer months. The 1921-1950 period is similar to the recent period in that the Arctic shows a similar positive anomaly. One of the biggest differences is that the 1980-2009 winters have a very warm interior of Siberia; in the 1921-1950 period only the Arctic sea coast of Siberia was warmer than normal.
I tried to look up individual station data from the warm interior of Siberia, and quickly ended up with Krasnoyarsk (Krasnojarsk in GISS), one of Russia’s largest cities. And (not very surprisingly…) the GISS adjustments ADD warming to the trend of this city instead of correcting for UHI:
http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=222295700006&data_set=1&num_neighbors=1
http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=222295700006&data_set=2&num_neighbors=1
For an even larger city, Omsk, the homogeneity adjustment only seems to strip all pre-1930 data, but does not correct for what is almost certainly a huge UHI.

Patrick Davis
March 1, 2010 2:58 am

“Peter Miller (23:48:02) :
Response to Patrick Davis re Australia – Average February 2010 temperatures:
City Min Max
Alice Springs* +0.6 -0.8
Adelaide +1.9 +2.1
Canberra +1.9 +2.1
Darwin +1.1 +1.2
Melbourne +1.8 +2.2
Perth +0.6 +0.5
Sydney +2.4 +1.7
All figures in degrees C
* Almost 5 times average rainfall in February. Source: Weatherzone”
Not sure what your point is. To me this seems to be rather normal variability for Australian cities, certainly not proof that “global temperatures for January and February this year are higher than normal” (your words!), and certainly NOT the warmest ever recorded in Australia in modern history. But also consider that the NH is the other half of the globe, and they’ve had record cold. Some parts of the SH have had record cold too; some parts of the globe had snow last year for the first time in living memory.
Consider also that data from 75% of the land-based thermometers have been removed from the official database, and many of those still in use are badly sited or even at airports.
As for the rainfall, so? How far do records go back: 50, 100, 150 years? Five times what average? So, let me get this straight: rainfall was different in Feb 2010 from what it was in Feb 2009 and in Feb 2008… uh huh, I hear ya!

Dinjo
March 1, 2010 3:57 am

mike roddy (09:14:21) :

Even according to Spencer, it’s still warming, so what’s the point? Glaciers are melting, Antarctic ice is calving, and birds and plants are migrating north. Humans are the cause.
Deal with it, wattsupwiththat readers, or risk becoming increasingly ridiculous.

Love your sense of humour Mike, y’almost had me fooled there for a minute! (wink)
Hey up! Just seen a flock of dwarf bamboos flying in formation, heading north… wattsupwiththat!???!

Gareth
March 1, 2010 7:04 am

Peter Miller (13:09:46) :
What I don’t understand about the UAH figures is: Why are the high altitude temperatures decreasing, while the low altitude ones are increasing during the El Nino phenomenon?
Is El Nino a cause or a symptom? A slowdown in convection moving energy upwards could appear to us as a warming of the ocean due to a lack of cooling, a warming of the lower atmosphere (same as oceans) and a cooling of the upper atmosphere (a lack of warming). The timing of the changes would be key to working that one out.

Ivan
March 1, 2010 7:13 am

Manfred,
don’t you see what the problem is here? If you are right, it is quite puzzling why Dr. Spencer doesn’t accept the rural record over the USA as the best approximation of the real climatic trend, instead of trying to slightly correct Jones’s calculations based upon the urban and upward-adjusted rural network (which is exactly what he does in his article).
If the rural record is consistent with his satellite data (as you posit), and he still, as we clearly see, rejects that rural record and argues that the “real” surface trend is 2 or 2.5 times higher than that, that can only mean that he assumes his own satellite record to be a wild UNDERESTIMATE of the real tropospheric trend. Do you really believe that Spencer considers his own work to be so fatally flawed?
You cannot have it both ways. Either Spencer rejects your amplification theory, or he rejects his own satellite data and assumes that the real tropospheric warming in the USA is 0.37-0.38 deg per decade. Tertium non datur.
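(For the arithmetic behind those figures: an amplification factor of 1.7 applied to the reported UAH trend of 0.22 deg per decade gives 1.7 x 0.22 ≈ 0.37 deg per decade, the tropospheric value quoted above.)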

George E. Smith
March 1, 2010 9:59 am

“”” Carsten Arnholm, Norway (04:36:34) :
I am computing an average of the 4 temperature measurements at the standard synoptic reporting times of 06, 12, 18, and 00 UTC.
Can someone point to an internationally accepted standard procedure for calculating an average daily temperature at a defined location?
I have just set up my own weather station and record data every 10 minutes. I am guessing that recording only min/max per day or 4 times per day as above might produce different results than averaging 6*24=144 daily values. “””
Well Carsten, there’s no question that recording every ten minutes will yield a better average than the min/max, or even the four-times-daily readings.
My mathematics is showing a lot of rust; but if I am not mistaken, the average of min and max is the true mean of a continuous function if and only if the function is cyclic and time-symmetric.
Meaning that f(t) = f(T-t) where T is the period of the cyclic variation. Now since you mentioned for a “defined location”, one might argue that one can model that location as a fixed object having a certain spectral emissivity, and also absorptance, over the range of wavelengths encompassed by solar radiation, and surface thermal radiation. Well already I am ignoring other thermal processes like conduction and convection; not to mention evaporation.
But the radiation-only model is in principle solvable; and I think if you do that, just assuming black-body conditions to start with, you will find that the diurnal heating and cooling are not symmetrical. Cooling after sundown should be slower than heating after sunup, bearing in mind that the surface will be cooling fastest when it is at its highest temperature. In any case, I believe that your ten-minute data probably plot to show faster warming than cooling. In that case, the min/max average must be in error compared to the true average of the continuous function.
If the function is not a simple sinusoid, then the min/max strategy is already in violation of the Nyquist sampling theorem, even for recovery of the average, since the function must contain at least a second-harmonic component (assuming it is a repetitive cyclic function); in which case four times a day is the minimum sampling required to recover the average.
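A quick numerical check of both points, using a made-up two-harmonic diurnal cycle (all amplitudes and phases hypothetical), takes only a few lines of Python:

import numpy as np

# Hypothetical diurnal cycle: a 24-hour fundamental plus a 12-hour
# (second harmonic) component, which makes the cycle time-asymmetric.
t = np.arange(0, 24, 1/6)                        # 10-minute steps, in hours
T = 15 + 8*np.sin(2*np.pi*t/24) + 2*np.sin(2*np.pi*t/12 + 1.0)

dense_mean = T.mean()                 # "true" average from 144 readings
mid_range  = 0.5*(T.min() + T.max())  # what a min/max record gives you
four_point = T[::36].mean()           # 4 equally spaced readings (every 6 h)

print(dense_mean, mid_range, four_point)
# The mid-range differs from the dense mean once the cycle is asymmetric,
# while the 4 equally spaced readings recover the mean of this
# two-harmonic cycle exactly, as the Nyquist argument predicts.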
The real rub comes in when you ask what the total thermal effect of the diurnal temperature cycling is (still restricting ourselves to the radiative component).
The rate of energy loss is not linear with temperature, but varies roughly as the 4th power of the temperature; so at the higher temperatures the loss rate is higher than at the lower temperatures, and to get the average loss rate you really need to average the 4th power of the temperature, rather than the temperature itself. Some very simple math, assuming a repetitive cyclic daily function, will demonstrate that the average of the 4th power always has a positive offset compared to the 4th power of the average temperature; so the daily average temperature does not yield the correct daily average (radiative) energy loss.
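That positive offset is Jensen's inequality applied to the 4th power; a minimal sketch, again with assumed numbers:

import numpy as np

# Compare the daily average of T^4 with the 4th power of the daily
# average temperature, for an assumed sinusoidal cycle (kelvin).
t = np.linspace(0, 24, 1441)[:-1]        # 1-minute steps over one day
T = 288 + 10*np.sin(2*np.pi*t/24)        # 288 K mean, +/-10 K swing

print((T**4).mean())      # proportional to the average radiative loss
print(T.mean()**4)        # 4th power of the average temperature
# The first value is the larger of the two for any non-constant cycle,
# which is the positive offset described above.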
It gets worse than that if one considers the effect of a GHG like CO2, which absorbs at about 15 microns (13.5 to 16.5).
At the global mean of about 288K, the peak of the surface thermal spectrum is about 10.1 microns, where we have an atmospheric window interrupted only by the Ozone 9-10 micron hole.
The surface emittance (assumed BB) at the peak of the spectrum varies as the 5th power of the temperature, not the 4th; so at higher surface temperatures the peak radiant emission grows even faster than the total, and the wavelength moves further away from the 15-micron CO2 band, so the influence of CO2 is diminished over the hottest deserts during the day. Now, to be fair to the CO2, we need to note that the mean (atmospheric) particle velocity varies as the square root of the temperature (K), so the Doppler broadening of the CO2 line will be greater at higher temperatures; but if you do all the math, you will see that the CO2 still loses out at higher temperatures.
You could do us all a favor, Carsten, by plotting some of your 10-minute daily data, so we can all see just what it really looks like. It would be nice if it could be done on a cloudless day (or days), so as to not run into the additional complication of cloud variation, which of course really screws up the daily average temperature obtained from min/max.
I’m happy to have the four-times-daily method that Dr Spencer has used for this study as an advance over the min/max, which clearly is quite wrong.
I’d love to see real cloud coverage properly included; but any step forward is better than the status quo.

George E. Smith
March 1, 2010 10:29 am

Well, all this beautiful statistrickery is well and good; and there seems to be a fascinating fixation on the condimental details of that methodology.
Does anybody really believe that somebody is suddenly going to stumble on the CORRECT long-term trend, and the CORRECT standard deviation, etc., etc., and suddenly we will have the final answer to whether there is MMGWAGWCC or not?
I commend to your study; the 600 million year history contained here:-
http://www.geocraft.com/WVFossils/PageMill_Images/image277.gif
Now of course it is a proxy study, since Hansen and Mann weren’t around 600 million years ago.
The first thing I would like you to note about this global temperature and atmospheric CO2 abundance data is how beautifully logarithmic the relationship between the temperature and the CO2 is; thereby confirming the wisdom of Dr Steven Schneider’s concept of “Climate Sensitivity”, which is to climate “scientists” the equivalent of the velocity of light (c).
The second thing I would like you to note about this 600 million years of data is that impenetrable temperature ceiling of 22 deg C. If anybody has an explanation for those clearly fraudulent anomalies at -248 million years, at the Permian/Triassic boundary, and the other one at about -50 million years in the early Tertiary, I would like to hear it.
Now I am sure that all the electronic circuit engineers here know exactly how voltage regulators work. You need a fixed and known reference standard voltage, such as a semiconductor band-gap reference, or even a superconducting quantum reference. Then you compare your system’s output voltage to that reference, and you apply a (negative) feedback loop to force the output voltage error from the reference to zero.
It is quite clear that an exactly analogous process has been operating for the last 600 million years to prohibit the earth’s mean temperature from ever going above 22 deg C.
Through all the geologic changes, meteorite collisions, and volcanic anomalies that have happened, along with orbital shifts, plate tectonics, and continental drift, SOMETHING has been acting as an absolute temperature reference, and powerful feedback processes have acted to keep the earth’s mean temperature at 22 deg C for most of that 600 million years. Now it would appear that during the Carboniferous period, and at the boundaries of that era, something really powerful was stopping the earth from warming; yet it didn’t have the same effect during the Mesozoic.
So what is the absolute temperature reference that has acted to maintain 22 deg C for most of this history? The one thing we know has been there all that time is the earth’s oceans, aka WATER, H2O, which has very specific physical properties as to freezing and boiling temperatures, specific heats, latent heats of vaporization and freezing, and on and on; many of them attributable in some way to that 104-degree bend angle in the water molecule and its resultant electrostatic polar moment. Throw in the unique dielectric constant of about 81, which enables water to dissolve most anything, and you have the makings of a universal temperature reference, capable of marshalling the properties of water in its three phases to prohibit the earth’s mean temperature from ever exceeding 22 deg C.
So as I have said many times: “IT’S THE WATER, SILLY !”

George E. Smith
March 1, 2010 10:36 am

Well danged if I know what happened to my post, that just vanished off the face of the earth. And when I tried posting it again, since it was still in the comment window, I got a duplicate post error message.
So I not only got my post scrubbed; but got told off for having it scrubbed twice.

Manfred
March 1, 2010 10:58 am

Ivan (07:13:19)
Spencer doesn’t argue “that the ‘real’ surface trend is 2 or 2.5 times higher than rural.”
He clearly states that his result is not yet adjusted for UHI (and is even so lower than CRU’s).
“This is a little curious since I have made no adjustments for increasing urban heat island (UHI) effects over time, which likely are causing a spurious warming effect, and yet the Jones dataset which IS (I believe) adjusted for UHI effects actually has somewhat greater warming than the ISH data.”

George E. Smith
March 1, 2010 12:00 pm

“”” Manfred (10:58:55) :
Ivan (07:13:19)
Spencer doesn’t argue “that the ‘real’ surface trend is 2 or 2.5 times higher than rural.”
He clearly states that his result is not yet adjusted for UHI (and is even so lower than CRU’s). “””
Well, when I read about “adjustments for UHI”, I immediately hear alarm bells go off.
There should be NO need to adjust for UHI. UHI are real places, that have real temperatures that can be read, just as easily as the temperature of Foggy Bottom Swamp can be read. The real measured temperature of FBS affects the global mean temperature just as does that of UHI.
The big problem, and the apparent need for “adjustment” (read: ‘fake data’), lies not in the temperature value measured at FBS or UHI, but in the quite unwarranted assumption that that temperature reading is a good one to use for some place other than FBS or UHI. It is not; and it especially is not a good temperature to use for some place that is 1200 km away, or even 900 km away.
If “adjustments” for UHI are deemed necessary, then clearly the function being “adjusted” does not correctly represent the average temperature of the earth or its surface; if it did, adjustments would not be necessary.
So once again the problem is in the sampling, and not in the data.
The temperature read in a UHI, whether or not the obligatory barbecue is running, is the correct temperature to use for that place; it is NOT the correct temperature to use to represent someplace else.
The basic problem is quite trivial. You multiply each measured temperature sample by the total area for which that temperature is a good sample; you add those products all up, and divide by the total earth surface area, to get the global mean temperature.
If you aren’t doing that then you aren’t reading the mean earth temperature.
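A sketch of that bookkeeping, with entirely hypothetical readings and representative areas:

import numpy as np

# Weight each reading by the area it is taken to represent, then divide
# by the total area covered, per the recipe above.
temps = np.array([14.2, 25.1, -3.8, 18.0])      # deg C, hypothetical readings
areas = np.array([1.2e6, 0.4e6, 2.5e6, 0.9e6])  # km^2 each reading represents

mean_temp = (temps * areas).sum() / areas.sum()
print(mean_temp)
# If a city thermometer is assigned the area of 1200 km of surrounding
# countryside, the weight is wrong even though the reading itself is
# correct -- a sampling problem, not a data problem.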

sturat
March 1, 2010 2:08 pm

wrt:
“Menne et al used an incomplete dataset against my wishes, denying my right to publish first. At 88% the network looks a lot different.”
So, when does the world get to take a look at your conclusions? This week? Next week? Next month?
Others are “publishing” their code and results. For example:
http://rankexploits.com/musings/2010/a-simple-model-for-spatially-weighted-temp-analysis/
and
http://clearclimatecode.org/the-1990s-station-dropout-does-not-have-a-warming-effect/
REPLY: When it comes out in a journal, just like Menne et al. Assuming it makes it past the gauntlet of peer review, that should be in a few months. I can’t tell you exactly when, since I have no control over publication. An SI will be published online with all data, and anyone can then engage in any sort of analysis desired. – Anthony

sturat
March 1, 2010 2:46 pm

Fair enough. What’s your estimate of when you will be able to wrap up your analysis, complete the paper, and send it off for review?
The SI you mention, to be published online with all data: would that include the code also? (Sorry, a little pedantic here.)
REPLY: Data, spreadsheets, code, everything needed to replicate. – Anthony

Stephen Wilde
March 1, 2010 2:54 pm

George E Smith (10:29:44)
Well spotted George. And of course to this day the tropical ocean surfaces never go over 22C because that is the temperature set by the sun/ocean/atmospheric density and pressure interaction.
So how must that be maintained ?
A constantly changing speed of the hydrological cycle mediated by the size, intensity and latitudinal position of all the global air circulation systems.
Is this all ever going to ‘click’ in the heads of the climate establishment ?
Unless the extra CO2 ever becomes sufficient to significantly alter total atmospheric density and pressure, it will have no significant effect. The work of Miskolczi suggests that even as more CO2 enters the atmosphere, the system reduces total water vapour to compensate and thereby retains a constant optical depth; that neutralises the effects of albedo changes, changes in the quantity of cosmic rays, and ENSO phenomena as well, so quite a few sceptical viewpoints bite the dust along with the idea of CO2 ‘forcing’.
Thus, again, the speed of the hydrological cycle is the fundamental governor, continually adjusting to move back towards an equilibrium in the troposphere despite ever-changing energy flows from the oceans below and from stratosphere to space above.

sturat
March 1, 2010 3:39 pm

Just noticed you didn’t reply with an estimate of current progress and expected paper submittal date.
Can you provide these estimates?
Thanks

Keith Minto
March 1, 2010 4:47 pm

George E. Smith (10:29:44)
Now it would appear that during the Carboniferous period, and at the boundaries of that era, something really powerful was stopping the earth from warming…..
The answer must have to do with the ‘carboniferous’ part: the dynamics of land- and aquatic-based flora and fauna interacting with the oceans/atmosphere to nurture the biota.
We are so fortunate to have had liquid water on this planet for so long.

DeNihilist
March 1, 2010 7:29 pm

George E Smith – thank-you!

Manfred
March 1, 2010 9:42 pm

George E. Smith (12:00:31)
You are right with your points, but I think this doesn’t matter in this context.
As I understood it, Spencer’s analysis did not aim to compute the “correct” temperature trend.
He showed, however, that with an open and straightforward approach, and with a worst-case maximum-warming assumption (no UHI corrections at all), he computed a lower trend than CRU.
This is sufficient to falsify a hypothesis; he doesn’t have to provide the “correct” answer as well.

George E. Smith
March 2, 2010 10:54 am

“”” Manfred (21:42:42) :
George E. Smith (12:00:31)
You are right with your points, but I think this doesn’t matter in this context. “””
Well I agree with you Manfred; my purpose was to make a basic point; and not so much to comment specifically on Dr Spencer’s Essay. I will have to digest his paper much more thoroughly, before I would be able to comment usefully on it (if at all).
I’m just trying to point out that some of the holy grail tenets of standard climate science, as it still is taught in schools, simply don’t hold water, in the light of day (pun intended).
Now I don’t know beans about what causes all the ocean circulations, and the ENSO and other cycles; so I’ll gladly leave that to those who study such things. But I am quite sure, that you can’t explain the stable range of comfortable temperatures on earth, without invoking the remarkable Physical and Chemical properties of H2O in all its three phases; and I really don’t think CO2 has very much to do with anything.

George E. Smith
March 2, 2010 11:15 am

“”” Stephen Wilde (14:54:01) :
George E Smith (10:29:44)
Well spotted George. And of course to this day the tropical ocean surfaces never go over 22C because that is the temperature set by the sun/ocean/atmospheric density and pressure interaction.
So how must that be maintained ? “””
Well Stephen, you don’t really want to go stepping out there on thin ice.
22 deg C is only 71.6 deg F, and ocean surface waters easily exceed that temperature all the time. I’ve done enough fishing in tropical ocean waters to know that 22 C is not any real limit to surface temperatures.
As to how the equilibrium is maintained; cloud modulation, is my answer.
H2O is the only GHG that exists permanently in earth’s atmosphere in all three phases. As a vapor, it has both cooling and warming properties; the first by absorbing incoming solar energy in the near IR range from about 760 nm wavelength; perhaps as much as 20% of the total solar spectrum energy. That warms the atmosphere, but cools the surface, by lowering ground level insolation.
In the LWIR thermal radiation region, water vapor absorbs in many bands across a wide spectral range, becoming almost totally opaque beyond about 15-16 microns, and that too warms the atmosphere, but blocks very little solar spectrum energy.
But it is in the liquid and solid phases, where H2O forms clouds, that we get the greatest cooling influence on the surface.
When a cloud moves between the sun, and the surface, and casts a shadow, it ALWAYS cools the surface in the shadow zone; it is NEVER observed to warm the surface in the shadow zone.
On the other hand the LWIR thermal emissions from the surface, radiate in a very diffuse radiation pattern, that is at least Lambertian (cosine theta intensity), and more likely near isotropic, since the emitting surface is seldom an optically flat surface (well the ocean surface sometimes can be).
As a result, the same cloud that casts a penumbral edged shadow on the ground, can only intercept a small fraction of the diffuse LWIR emission from that surface, so with broken or scattered clouds, a whole lot of surface IR escapes interception (by the clouds).
With more CO2 or other GHGs, the equilibrium fraction of cloud cover simply increases, to maintain a robustly stable state.

Stephen Wilde
March 2, 2010 12:09 pm

George E Smith (11:15:32)
Whoops. I think it was 28C, or 82F, as a general maximum for sea surface temperatures, not 22C. Still, any maxing out of SSTs would also put a lid on what the global air temperature can achieve.
As for increased cloud cover, I would just say that it would be the obvious first step in a speeding up of the hydrological cycle, wouldn’t it?
As for thin ice, I think we are all on it all the time until the problem has been solved. Even the most expert here are expert in limited fields only.

Ed S
March 2, 2010 1:09 pm

Thank you, Dr. Spencer. I have been making these same arguments with regard to data sets: the need to use the same sites over time, and to factor in the effects of increasing urbanization. The bottom line is that we probably do not have the data to draw any conclusion, and therefore cannot make any educated or conclusive summary of the actions needed. We are still at square one with regard to the entire theory, when the scientific method is rigorously applied.

Ed S
March 2, 2010 1:14 pm

I might add that it is not anyone’s duty to prove the theory wrong; it is the obligation of the proposers to provide irrefutable, firm data to prove it right, and to date that proof is not here.

sky
March 2, 2010 5:26 pm

Carsten Arnholm (04:36:34):
If you’re interested in generating a meaningful daily time-series, the standard procedure for calculating the mean when 4 or more (equi-spaced) readings are available is simply to average those readings. Classical “numerical quadrature” methods do NOT provide mean values with superior accuracy. On the contrary, their frequency response function is even further from ideal integration (i.e., the reciprocal of i*omega) than that of simple averaging, with its inevitable negative side-lobes. The Lagrangian quadrature operator suggested by dr. bill (15:56:52) for 6-hourly data does not even remove the diurnal cycle completely, and has a horrendous negative side lobe in its response function that reaches a value of -0.42222222 at the Nyquist frequency (1/12 cycles per hour). With simple averaging of the 4 daily readings, the side lobe fades to zero there.
Because the diurnal temperature cycle contains appreciable harmonics, the average of Tmax and Tmin will usually be appreciably different from the average reading, especially if 144 data points per day are averaged. The former is actually the mid-range value, which ordinarily is found several percent above the temporal mean. But since the extreme values occur at irregular times of day, there is no aliasing problem as with equally spaced readings. I hope you use a sensor with a time constant of a few minutes, to avoid minor fluctuations from turbulent wind eddies aliasing into lower frequencies.
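sky's side-lobe numbers are easy to check numerically. Assuming the Lagrangian quadrature operator in question is the 5-point Newton-Cotes (Boole) rule with weights (7, 32, 12, 32, 7)/90 (an assumption, though it reproduces the quoted -0.42222222 exactly), a short Python sketch:

import numpy as np

def response(weights, f):
    """Frequency response of a symmetric filter at f cycles per sample."""
    n = np.arange(len(weights)) - (len(weights) - 1)/2.0   # centered taps
    return np.sum(np.asarray(weights) * np.cos(2*np.pi*f*n))

boole   = np.array([7, 32, 12, 32, 7]) / 90.0   # assumed quadrature weights
average = np.array([1, 1, 1, 1]) / 4.0          # simple 4-reading average

# Nyquist for 6-hourly sampling: 0.5 cycles/sample, i.e. a 12-hour period.
print(response(boole, 0.5))     # -0.4222..., the negative side lobe
print(response(average, 0.5))   # ~0: simple averaging nulls the 12-hour cycle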

dr.bill
March 2, 2010 11:59 pm

sky (17:26:00) :
Perhaps I didn’t make the intention of my note to Carsten clear enough. For the Lagrange procedure, I was speaking of the case of 5 readings per day, taken every 6 hours, not the 144 values that his device is capable of generating. As I understood him, he was interested in comparing the average of the measurements taken every 10 minutes with the results of those taken every 6 hours, but wasn’t sure of how to calculate the average. I wasn’t trying to provide him with a smoothing formula, just a way to find the daily average of the every-six-hour temperature values.
::
If the intent had been smoothing, then I would have suggested a series of formulas that can be applied to a time series so as to remove the various frequency components. If we stay with the 6-hour intervals, there are three frequencies that could measurably contribute to the series, namely those corresponding to periods of 12, 18, and 24 hours. These can be removed sequentially by applying the following “moving weights” expressions:
Stage 1: (-1, 4, 10, 4, -1)/16
Stage 2: (-1, 4, 3, 4, -1)/9
Stage 3: (-1, 4, -2, 4, -1)/4
The “stage 1” weights are moved along the original series to generate a new series. This new series will have the 12-hour variation completely removed. Likewise, the 18-hour components are removed by applying the second formula to the “stage 1” output, and the 24-hour components are removed by applying the third formula to the “stage 2” output. The “stage 3” output will have zero spectral amplitude at all three of those frequencies.
What you are left with is the underlying trend, which will be identified exactly if it is no more complicated than a cubic polynomial over a range of five points.
::
At each stage, you “lose” two points from the beginning and end of the series, so a total of six plus six after the three operations have been performed. If you’re using 6-hour readings, of course, that means that you’re just losing a day and a half at the ends, but you could also apply end-correction formulas to avoid that if it mattered enough.
You can test this procedure yourself without trouble. Just make up a function consisting of any cubic polynomial plus three sine or cosine functions with non-zero phase constants, and with periods of 12, 18, and 24 hours. The coefficients can be anything you want, and when you apply the three-step process to your series, the final result will be exactly the same as the polynomial alone, with all the trig stuff removed.
I hope that clarifies things,
dr.bill
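dr.bill's three-stage procedure is easy to verify numerically. A minimal sketch of the test he describes, with arbitrary made-up coefficients and phases:

import numpy as np

t = np.arange(0, 30*24, 6.0)                  # 30 days of 6-hourly samples (hours)
cubic = 1.0 + 0.02*t - 1e-5*t**2 + 2e-9*t**3  # an arbitrary cubic "trend"
series = (cubic
          + 3*np.sin(2*np.pi*t/12 + 0.3)      # 12-hour component
          + 2*np.sin(2*np.pi*t/18 + 1.1)      # 18-hour component
          + 5*np.sin(2*np.pi*t/24 + 2.0))     # 24-hour component

stages = [np.array([-1, 4, 10, 4, -1]) / 16.0,   # stage 1: nulls the 12-hour cycle
          np.array([-1, 4,  3, 4, -1]) / 9.0,    # stage 2: nulls the 18-hour cycle
          np.array([-1, 4, -2, 4, -1]) / 4.0]    # stage 3: nulls the 24-hour cycle

out = series
for w in stages:
    out = np.convolve(out, w, mode='valid')   # lose 2 points at each end

# Six points are lost at each end after the three stages; what remains
# should match the cubic alone, since each stage has unit gain at zero
# frequency and zero 2nd moment (so cubics pass through unchanged).
print(np.max(np.abs(out - cubic[6:-6])))      # ~0 (machine precision): the trig terms are gone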

George E. Smith
March 3, 2010 3:00 pm

Well that seems like a lot of effort to me. I didn’t get the idea that the intent was one of smoothing. That implies a noisy sequence whose individual data points are each suspect because of noise.
In fact they are readings on a thermometer, and any noise in the reading must be small compared to the actual change in temperature itself. So the aim is to obtain the true average of the measured values, not to smooth them and create a completely fictitious function which includes none of the original measured values. That average is simply the total area under the plotted function divided by the total time of observation.
The reason to use four equally spaced daily measurements (of temperature) is that this is the minimum sampling rate that satisfies the Nyquist criterion for a repetitive cyclic function consisting of a 24-hour periodic function plus a 12-hour (second harmonic) component, and still allows an average to be extracted uncorrupted by aliasing noise (barely).
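A short check of that sampling-rate claim, with an assumed 24-hour-plus-12-hour cycle (amplitudes and phase made up):

import numpy as np

# A cycle with a 24-hour fundamental plus a 12-hour second harmonic.
def temp(t_hours):
    return (15 + 8*np.sin(2*np.pi*t_hours/24)
               + 2*np.sin(2*np.pi*t_hours/12 + 0.7))

two_a_day  = temp(np.array([0.0, 12.0]))             # below the Nyquist rate
four_a_day = temp(np.array([0.0, 6.0, 12.0, 18.0]))  # the minimum adequate rate

print(two_a_day.mean())    # biased: the 12-hour term aliases to a constant offset
print(four_a_day.mean())   # 15.0 exactly: the true average is recovered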

dr.bill
March 3, 2010 4:25 pm

George E. Smith (15:00:21) :

Well that seems like a lot of effort to me. I didn’t get the idea that the intent was one of smoothing. That implies a noisy sequence whose individual data points are each suspect because of noise.
I agree. I didn’t think it was about smoothing either.

In fact they are readings on a thermometer, and any noise in the reading must be small compared to the actual change in temperature itself. So the aim is to obtain the true average of the measured values, not to smooth them and create a completely fictitious function which includes none of the original measured values. That average is simply the total area under the plotted function divided by the total time of observation.
I agree with this as well.

The reason to use four equally spaced daily measurements (of temperature) is that this is the minimum sampling rate that satisfies the Nyquist criterion for a repetitive cyclic function consisting of a 24-hour periodic function plus a 12-hour (second harmonic) component, and still allows an average to be extracted uncorrupted by aliasing noise (barely).
In this case, I would have a quibble. If the four measurements were taken at 03:00, 09:00, 15:00, and 21:00, I don’t think any harm would be caused, or spurious frequency responses introduced, by simply averaging the four values. If the values are recorded three hours earlier or later, however, the resulting average would not be for a self-contained day, but for a “day” shifted forward or backward, depending on “which midnight” you used. My original recommendation (not the smoothing thing) was designed to cope with having 5 values per day, starting at 00:00 and ending at 24:00. If you simply averaged those five, however, you would be overstating the importance of the two midnight values.
dr.bill
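The midnight double-counting is easy to see with explicit weights; a sketch using trapezoid-style half-weights at the two ends (one standard fix, not necessarily the Lagrange formula dr.bill refers to):

import numpy as np

# Five readings spanning one day: 00:00, 06:00, 12:00, 18:00, 24:00.
readings = np.array([10.0, 14.0, 20.0, 16.0, 10.5])   # hypothetical deg C

plain = readings.mean()                               # both midnights at full weight
trap  = np.dot([0.5, 1, 1, 1, 0.5], readings) / 4.0   # half-weight at the ends

print(plain, trap)
# The plain 5-point average overweights the two midnight values relative
# to the day they bound; the trapezoid weights restore the balance.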

sky
March 3, 2010 7:40 pm

dr. bill (23:59:31):
Your intention did not elude me in the slightest, and I never suggested that you were advocating Lagrangian quadrature to obtain the mean of 144 daily readings. On the contrary, I argue that classical quadrature formulae based on fitting low-order polynomials to data are not the best way of obtaining the mean in any realistic case, because of their miserable frequency response characteristics. My discussion of the negative side-lobe features is entirely in the realm of 6-hourly temperature data.
Integration, the prelude to establishing the mean in the continuous, analog case, is always a smoothing operation. The distinction you try to draw seems teleological rather than mathematical. Furthermore, the idea that “there are three frequencies that could measurably contribute” to the temperature series shows a lack of acquaintance with real-world data, which almost never follow textbook preconceptions. And the smoothing filters you prescribe are truly effective only if you have spectral lines at precisely those frequencies, rather than a spectral density over a broad continuum that encompasses those frequencies.
Instead of forcing preconceptions upon data, modern methods of signal analysis and discrete-time processing cope with the spectral structure of real-world data. I would urge everyone to get acquainted with them.

dr.bill
March 4, 2010 10:02 am

sky (19:40:04) :
With respect, I would suggest that you are being overly dogmatic in a case where it isn’t warranted.
With a limited number of values, there is a limited amount of information that can be extracted from the data. Choosing polynomials, Fourier components, or any other set of functions will not change that fact, unless you actually KNOW something specific about the data beyond the actual measurements. If you do, then you effectively have more information than the simple measurements themselves, and it would be sensible to let this influence your choices.
With only four or five points, however, and no a priori knowledge, any set of functions capable of describing the maximum observable number of “wiggles” is as good as any other, and there is nothing inherently better about one choice than another. No method of analysis can overcome this limitation.
dr.bill