NOTE: An update to the compendium has been posted. Now has bookmarks. Please download again.
I have a new paper out with Joe D’Aleo.
First I want to say that without E.M. Smith, aka “Chiefio” and his astounding work with GISS process analysis, this paper would be far less interesting and insightful. We owe him a huge debt of gratitude. I ask WUWT readers to visit his blog “Musings from the Chiefio” and click the widget in the right sidebar that says “buy me a beer”. Trust me when I say he can really use a few hits in the tip jar more than he needs beer.
The report is over 100 pages, so if you are on a slow connection, it may take a while to download.
For the Full Report in PDF Form, please click here or the image above.
As many readers know, a number of interesting analyses of surface data have been posted on various blogs over the past couple of months. But they’ve been widely scattered. This document was created to pull that collective body of work together.
Of course there will be those who say “but it is not peer reviewed” in the way formal scientific papers are. But the sections in it were reviewed by thousands of readers before being combined into this new document. We welcome constructive feedback on this compendium.
Oh, and I should mention: the word “robust” appears only once, on page 89, and its use is somewhat in jest.
The short read: The surface record is a mess.

E.M.Smith, have you heard of this project? If so, what do you think of it?
The Clear Climate Code project writes and maintains software for climate science, with an emphasis on clarity and correctness.
The results of some climate-related software are used as the basis for important public policy decisions. If the software is not clearly correct, decision-making will be obscured by debates about it. The project goals are to clear away that obscurity, to increase the clarity and correctness of climate science software.
http://clearclimatecode.org/
Hi!
Where’s the surface station project at? Is any report going out soon, or any US temperature trend using only the “best” stations?
Anthony, I was just taking a look at the article at http://www.skepticalscience.com/On-the-reliability-of-the-US-Surface-Temperature-Record.html which discusses the Menne paper.
There is someone in the comments going by the name Kforestcat who is thoroughly debunking the Menne paper. I suggest you have a read, and if possible, try to contact him, as he can likely help with any future endeavours you may have.
On pages 36/37, the max/min temperature graph could be interpreted in two different ways:
1) Are you displaying the CURRENT max/min temperature records by the decade they were set in? That reading shows the 30s still retaining most of the records, despite the 90s/2000s.
2) Are you displaying EVERY max/min temperature record by the decade it was set in, even if it was later eclipsed by a new record? That reading shows the 90s/2000s eclipsing the records of the 30s.
I thought the Menne et al. paper was quite fair, and actually complimentary to surfacestations.org. That being said, the surface temperature set is the only thing that ties the hypothesis to reality, so people heavily invested in the hypothesis must believe deeply in the data; they have little objectivity. The deception they practice is largely self-deception, but unfortunately it has broad implications. Menne et al. may have a flaw in their logic, as they cannot exclude the possibility that the two data sets look similar not because they measure the same thing, but because confounding factors make the two separate data samples look similar. The confounding factors could be in their own data processing. Besides, there are many issues in the surface data, not just internal consistency.
This is great! Why don’t people “jump all over” this evidence? The AGW folks will marginalize the work even though it’s more scientific, thorough and logical than anything they’ve come up with. Thinking about it, AGW has all the hallmarks of “Pathological Science” as defined decades ago by Langmuir. Look it up on Wikipedia.
E.M.Smith (02:36:52) :
“Nick Stokes (23:21:59) :
‘This just propagates a misunderstanding of what GHCN is. 6000 stations were not active (for GHCN) in the 1970’s. GHCN was a historical climatology project of the 1990’s. V1 came out in 1992, V2 in 1997. As part of that, they collected a large number of archives, sifted through them, and over time put historic data from a very large number in their archives.’
Those stations were, in fact, active in 1970; 5,997 of them in that year. The exact date the data got into GHCN is not particularly important.”
Michael,
I can state for a fact that Mister Stokes’s claim is a complete falsehood. The GHCN’s own reports say the following:
http://www.ncdc.noaa.gov/oa/climate/ghcn-monthly/images/ghcn_temp_overview.pdf
We then see the real culprit behind the reduction in the number of stations, and how it artificially imposed a 0.5°C increase on the temperature record WITHOUT ANY ACTUAL WARMING:
http://www.ncdc.noaa.gov/oa/climate/ghcn-monthly/images/ghcn_temp_qc.pdf
This is the smoking gun: in 1998, GHCN decided to abandon most of the stations alleged to be reporting synoptically in favor of CLIMAT-reporting stations only, whittling the network from 8,000 stations down to a mere 1,650. Rather than adjusting all the older synoptic data upward by 0.5°C, they kept it as it was and thereby imposed a false and artificial 0.5°C warming on the climate record.
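To make the arithmetic behind this claim concrete, here is a toy sketch in Python with invented numbers. It shows only that an average of absolute temperatures over a shrinking station mix steps upward even when no individual station warms at all; whether the published analyses are actually exposed to this effect is disputed further down this thread, since they are said to average anomalies rather than absolutes.

    # Toy sketch (invented numbers): averaging ABSOLUTE temperatures over
    # a shrinking station mix produces a warm step without any warming.
    cold_stations = [-5.0, -2.0, 0.0]    # e.g. high-latitude/altitude sites
    warm_stations = [12.0, 15.0, 18.0]   # e.g. lower-latitude sites

    full_network = cold_stations + warm_stations
    reduced_network = warm_stations      # the colder sites are dropped

    mean_before = sum(full_network) / len(full_network)        # ~6.3 C
    mean_after = sum(reduced_network) / len(reduced_network)   # 15.0 C
    print(f"apparent step from mix change: {mean_after - mean_before:+.1f} C")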
I would like to thank Nick Stokes for his input. His comments are probably worth more than all the rest of us combined. Whether they are valid or not is unimportant. They are coming from a SKEPTIC. The rest of us suffer from various degrees of group-think. I actually hope to see other warmers comment as well.
Next, I too appreciate the work of EM Smith. However, I am a bit worried. I am also a software engineer and have commented more than once on the sad shape of the GISS software. This problem makes me concerned about having a single source of examination for exactly the same reasons I question the correctness of the GISS code itself. I know I wouldn’t trust myself and, although I would say without hesitation that EM Smith appears much more meticulous than I, it still leaves me uncomfortable.
Finally, I would suggest placing the word “DRAFT” in the title. I’ve already seen lots of needed corrections, and you should plan on version-dating the document to avoid confusion in the future.
A must-listen:
http://news.bbc.co.uk/today/hi/today/newsid_8480000/8480314.stm
Anthony & E.M. Smith…
I just wandered over to RealClimate for the first time in a while to see what they’ve been saying about this report.
I found this post…
……………
Could a response to “Surface Temperature Records: Policy Driven Deception?” by D’Aleo and Watts be written? Their paper can be found at http://scienceandpublicpolicy.org/images/stories/papers/originals/surface_temp.pdf
I know it takes time to write a thoughtful response to a paper over 100 pages long, but maybe a near-term response could deal with a few of the more understandable allegations of improper method. What sticks in my mind is the reduction of the number of surface temperature sites, with a claimed bias toward deleting locations at higher altitudes and latitudes (i.e. those reporting lower temperatures) yet leaving historical reports from these locations in the average. Surely the researchers involved realized that deleting a colder station from today’s average temperature yet leaving its input in older average temperatures would show an artificial increase in average temperature?
[Response: This is, was, and forever will be, nonsense. The temperature analyses are not averages of all the stations’ absolute temperatures. Instead, they calculate how much warmer or colder a place is compared to the long term record at that location. This anomaly turns out to be well correlated across long distances – which serves as a check on nearby stations and as a way to credibly fill in data poor regions. There has been no deliberate reduction in temperature stations, rather the change over time is simply a function of how the data set was created in the first place (from 31 different datasets, only 3 of which update in real time). Read Peterson and Vose (1997) or NCDC’s good description of their procedures or Zeke Hausfather’s very good explanation of the real issues on the Yale Forum. – gavin]
………….
My brain started hurting as I tried to make sense of all this. However, I was left in no doubt that Gavin is saying ‘Bulls**t!’
I suspect that you’re both probably up to your eyes in it at the moment, but if you do get the time, it would be great to hear your thoughts on Gavin’s response.
quote: “I am also a software engineer and have commented more than once on the sad shape of the GISS software.”
Richard M, have you heard of this project? If so, what do you think of it?
The Clear Climate Code project writes and maintains software for climate science, with an emphasis on clarity and correctness.
The results of some climate-related software are used as the basis for important public policy decisions. If the software is not clearly correct, decision-making will be obscured by debates about it. The project goals are to clear away that obscurity, to increase the clarity and correctness of climate science software.
http://clearclimatecode.org/
On page 43,
“As usual, the warmers want to have it both ways.”
I am not keen on labelling people. They might do it, but we shouldn’t.
Bouncing around Twitterdom:
http://earthobservatory.nasa.gov/Newsroom/view.php?id=42382
Q&A between Gavin and NASA.
@PaulH
Paul, this issue of dropping stations in the later years is of interest to me as well. The way I understand Gavin’s explanation, the change in temperature from one period to the next is much the same for nearby stations. In calculus terms, I guess we would say the first derivative of temperature with respect to time is the same or similar for nearby stations. I would dearly like to see the code that is used to calculate this (assuming I have their explanation correct). This approach seems like it would have many challenges to model correctly – so many variables to account for.
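Since the commenter asks to see the code, here is a minimal sketch of the anomaly method as Gavin describes it. To be clear, this is NOT the GISTEMP code: the function names and data layout are invented for illustration, and it omits the gridding, area weighting, and long-distance correlation checks the real analysis is said to apply.

    from statistics import mean

    BASELINE_YEARS = range(1951, 1981)   # the cited 1951-1980 baseline

    def station_anomalies(records):
        """records: {year: mean temperature in C} for a single station."""
        base = mean(records[y] for y in BASELINE_YEARS if y in records)
        return {y: t - base for y, t in records.items()}

    def mean_anomaly(stations, year):
        """Unweighted average of per-station anomalies for one year."""
        return mean(station_anomalies(r)[year] for r in stations if year in r)

Because each station is compared with its own baseline, the large absolute offsets between warm and cold sites cancel out before any averaging happens.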
The baseline period for computation of anomalies is cited as 1951-1980.
Who got to decide on that particular time period?
Would it not be better to use three or four sunspot cycles (either min-min or max-max) as a baseline period?
Unless, that is, one does not grant that solar radiation has any connection to temperature on the Earth.
Or, perhaps, if there are cycles of sunspot cycles, use one of the larger cycles.
The choice of one time period for computation of anomalies is highly suspect.
Why use anomalies in the first place? Why not just look at the raw data, weight it according to both siting compliance and the urbanization effect, and go from there?
And when one puts in error bars, which (as far as I can tell) has not been done on a uniform basis, it is evident that we still don’t know enough about Earth’s energy budget.
We are certainly fortunate that engineers, by and large, rely less upon the manipulation, smoothing, cherry-picking, and homogenizing of data. Oh, I forgot. The USSR did a lot of that. Their engineers were required by their political leaders to come up with the desired conclusions. That did not work out so well.
The thanks are richly deserved!
As someone who has worked in IT for quite a while (I wrote my first program on 80-column punch cards in 1968), your work analyzing the data and methods, seasoned with the revelations from the HARRY_READ_ME file about how big a mess the code and databases were in, was pretty much the final nail in the coffin as far as I was concerned.
The model output is simply nonsense, the false precision is a joke, the error bands swamp the signal, and the so-called causal relationship between CO2 and global average temperature is wishful thinking.
Thanks for your efforts to uncover the problems and put them in language that the average citizen can understand.
Larry
Chris D. (08:16:49) :
Really funny:
NASA: It is wintertime.
Gavin: No, it’s summertime; people think it is winter, but that is a wrong perception of reality.
That summarizes the dialogue from the link you gave.
Neven (08:07:06),
I had heard about it but had no time to investigate. Hopefully, I will have some time in the future.
From Gavin’s response above:
There has been no deliberate reduction in temperature stations, rather the change over time is simply a function of how the data set was created in the first place …
This response shows some ignorance. It doesn’t matter whether the reduction was “deliberate” or not. If the reduction creates bias then there is a problem. I think this demonstrates a little group-think problem with Gavin. He just assumes that it’s OK simply because he trusts the person who did it. Sorry Gavin, that’s just plain unscientific.
If you look at the temperature correlation with dropouts on page 11, it’s clear there is an overall warmer set of thermometers in the new dataset. Now, if you assume there has been any warming at all (e.g. 2%), then that percentage applied to a higher value will create a bias all by itself. It may be small, and certainly there are other factors, but you can’t ignore it as Gavin wants to do.
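Richard M’s percentage argument can be put in two lines of arithmetic. The numbers are purely illustrative, and note that applying a percentage to Celsius values is the commenter’s own framing; on an absolute kelvin scale the same 2% would give a far more uniform result.

    # The same proportional warming (2%) yields a larger absolute trend
    # at a warmer station. Numbers are invented for illustration.
    pct = 0.02                          # the 2% warming assumed above
    warm_site, cold_site = 15.0, 5.0    # station means in deg C (invented)
    print(pct * warm_site)              # 0.30 C at the warmer site
    print(pct * cold_site)              # 0.10 C at the colder site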
Thank you Joe, Anthony and Michael. You all bring the light of day to this serious issue. I’ve only read the first 16 pages, and was amused by the graph on page 14 of the GHCN network from 1701-2008. Amusing, because since 1871 we see about 5°C of variation, allegedly CO2-driven, yet from 1741-1871 we see 7°C of variation sans the CO2 forcing. Everywhere one looks, there are far greater ‘forcings’ than the magically endowed CO2.
Anthony,
Still no response to my post (MJK 6:26:30) regarding your failure to provide a supporting reference for the assertion in your report that there has been cooling since 2001.
I suspect part of the problem is that this assertion cannot be supported and is incorrect. The RSS and UAH data sets (the only temperature records that you trust) do not show that the globe has cooled since 2001. Perhaps if your paper had been written in 2008, you might have been able to make such a claim based on cherry-picking a cooler 2008 as the end point. But in January 2010 (the date of your report), the claim no longer holds water, if it ever did.
Could I kindly suggest you retract this incorrect statement from your report wherever it appears, or point to evidence in the RSS and UAH data sets that supports your claim that the globe has cooled since 2001.
REPLY: Joe suggests you have a look at this:
http://scienceandpublicpolicy.org/originals/policy_driven_deception.html
-A
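MJK’s underlying point about endpoint selection is easy to demonstrate numerically. The sketch below uses invented anomaly values, not RSS or UAH data; it shows only that over a window this short, a single cool year at the end of the series (2008 in this made-up example) can flip the sign of a least-squares trend.

    import numpy as np

    # Endpoint sensitivity of a short trend. The anomaly values are
    # INVENTED for illustration; they are not RSS or UAH data.
    years = np.arange(2001, 2011)
    anoms = np.array([0.40, 0.46, 0.42, 0.44, 0.48, 0.43,
                      0.41, 0.30, 0.45, 0.50])

    slope_to_2008 = np.polyfit(years[:8], anoms[:8], 1)[0]  # ends on cool 2008
    slope_to_2010 = np.polyfit(years, anoms, 1)[0]          # full series
    print(f"2001-2008 trend: {slope_to_2008:+.4f} C/yr")    # negative here
    print(f"2001-2010 trend: {slope_to_2010:+.4f} C/yr")    # slightly positive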
Is it only me getting an error message from Acrobat? “The file is damaged and could not be repaired.” If anyone else got this message but figured out a fix or workaround, please share!
I would like us to meditate a bit on the other kind of data, the ice core data.
Though the argument that the surface data are in a muddle and cannot be used to assert warming is made clear by this post, I think one should acknowledge that there are other data that show the medieval warming and the emergence from the Little Ice Age.
Some of those changes come out of historical written records too (where is TonyB?).
This is just to touch base and not throw the baby out with the bathwater, unless someone has shown that the ice core data have been tampered with/averaged/homogenized, etc.
I believe the reported global temperature anomaly is not the anomaly of the average, but the average of the anomalies of the individual stations. Therefore, it makes no difference if you eliminate cooler stations, unless their anomalies are lower. I believe global warming theory predicts larger anomalies towards the poles, so eliminating higher-latitude stations would be counterproductive.
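The distinction this commenter draws is easy to see with a minimal sketch using invented numbers: two stations warm by exactly the same 0.5 C, and the cold one is then dropped from the modern record only.

    from statistics import mean

    baseline = {"cold": -5.0, "warm": 15.0}   # station means, invented
    today    = {"cold": -4.5, "warm": 15.5}   # both warmed by 0.5 C

    # Average of anomalies: unaffected by the dropout, since each
    # station is compared with its own baseline.
    avg_of_anoms_full = mean(today[s] - baseline[s] for s in today)    # 0.5
    avg_of_anoms_drop = today["warm"] - baseline["warm"]               # 0.5

    # Anomaly of the averages: dropping the cold station from today's
    # mean while keeping it in the baseline mean creates a spurious jump.
    anom_of_avg_full = mean(today.values()) - mean(baseline.values())  # 0.5
    anom_of_avg_drop = today["warm"] - mean(baseline.values())         # 10.5

So the dropout question reduces to whether the dropped stations’ anomalies, not their absolute temperatures, differ systematically from those retained, which is exactly the commenter’s point about the poles.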
Richard M (08:43:11)
“I had heard about it but had no time to investigate. Hopefully, I will have some time in the future.”
Yes, do so. If I’ve understood correctly, they’re cleaning up the GISTEMP software issues. It should be very interesting for people who want to check the datasets themselves.