NOTE: An update to the compendium has been posted. Now has bookmarks. Please download again.
I have a new paper out with Joe D’Aleo.
First I want to say that without E.M. Smith, aka “Chiefio” and his astounding work with GISS process analysis, this paper would be far less interesting and insightful. We owe him a huge debt of gratitude. I ask WUWT readers to visit his blog “Musings from the Chiefio” and click the widget in the right sidebar that says “buy me a beer”. Trust me when I say he can really use a few hits in the tip jar more than he needs beer.
The report is over 100 pages, so if you are on a slow connection, it may take a while.
For the Full Report in PDF Form, please click here or the image above.
As many readers know, there have been a number of interesting analysis posts on surface data that have been on various blogs in the past couple of months. But, they’ve been widely scattered. This document was created to pull that collective body of work together.
Of course there will be those who say "but it is not peer reviewed" the way formal scientific papers are. But the sections in it were reviewed by thousands of readers before being combined into this new document. We welcome constructive feedback on this compendium.
Oh, and I should mention: the word "robust" only appears once, on page 89, and its use is somewhat in jest.
The short read: The surface record is a mess.
GHCN stations are not moving to warmer climates, contrary to the underlying thesis of much of this report. I did a simple calculation of the average temperature of GHCN stations in each year since 1950. The results are on my blog here. The trend is down.
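The check described in this comment can be sketched in a few lines. This is a hypothetical illustration, not the commenter's actual code: it assumes the GHCN data has already been flattened into (station, year, temperature) tuples, whereas the real v2.mean file is a fixed-width format with twelve monthly values per line.

```python
from collections import defaultdict

def yearly_station_mean(records):
    """Average the absolute temperatures of all stations reporting in each year.

    records: iterable of (station_id, year, temp_c) tuples (a hypothetical
    flattened form of GHCN v2.mean). Returns {year: mean across stations}.
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for station, year, temp in records:
        sums[year] += temp
        counts[year] += 1
    return {y: sums[y] / counts[y] for y in sums}

# Toy data: a cold station stops reporting after 1990, so the 1991 mean
# is biased warm even though no individual station warmed at all.
toy = [
    ("warm1", 1990, 15.0), ("warm1", 1991, 15.0),
    ("warm2", 1990, 14.0), ("warm2", 1991, 14.0),
    ("cold1", 1990, -5.0),  # reports only in 1990
]
means = yearly_station_mean(toy)
```

If station attrition favored warm sites, this yearly mean would rise; the comment's point is that on the real data it falls.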
Watts for president
D’Aleo for VP
Campaign motto: "Honesty and integrity. What else do you need?"
luminous beauty (11:57:01) :
Maybe this map from NCDC of the GHCN temperature Anomalies from Jan-Dec 2008 will help:
http://www.ncdc.noaa.gov/img/climate/research/2008/dec/map-land-sfc-mntp-200801-200812-pg.gif
Notice there is no data for most of Canada, and that big gaping hole in South America. Once you've taken that in, look at Africa and notice that it looks like someone rolled a giant bowling ball through the continent, taking out every thermometer in the way. Then we move to tiny New Zealand, where mysteriously no one seems to have read a thermometer in 2008. Greenland: no thermometers there either. And look at those giant gaps in Russia.
For a better idea, I made a 1 min 23 sec animation using the GISTemp anomaly map maker, set to 250 km infill, to show how the thermometer coverage has changed since 1880.
I’m still struggling with #1 of the Summary for Policymakers:
“Instrumental temperature data for the pre-satellite era (1850-1980) have been so widely, systematically, and unidirectionally tampered with that it cannot be credibly asserted there has been any significant ‘global warming’ in the 20th century.”
I can't find anything in the report that seems to support this, and some observations that contradict it. Unless putting "global warming" in quotes was really meant to mean "anthropogenic global warming". The authors clearly recognize that the 20th century ended much warmer than it began, don't question the existence of the 1900-1940 warming, and acknowledge (though as "cyclic") the 1979-1998 warming.
So how was there no global warming in the 20th century?
Don't get me wrong: I enjoyed the rest of it, and appreciated the surfacestations.org status update that is included as well (through October 2009, but that's the good "station collecting" period of the year anyway).
Also, I must admit to being confused on the point about “only 4 stations” in California are being used. For what data set is that true? What have ss.org volunteers been taking pictures of in CA (i.e. what dataset are they included in)? And if that data does exist, isn’t there “still time” to analyze/process it and compare to the denuded data sets?
And do we have any idea how many of those stations “not included” in whichever dataset is being pointed at still exist with continuous records that could be processed now?
It would be nice if the executive summary were added to the WUWT article here, particularly for those with slower computers or very little memory. Especially since the article itself is so short (if it were already long, I could see not wanting to add to it, but as it stands, adding an executive summary would be fine, I'd think).
Thanks much for considering adding it here!
Murray,
Your two points are somewhat incorrect.
1) It doesn't matter much what the absolute temperature of any given station is, since the global anomaly is calculated from local anomalies, not absolute temps. So if stations with discontinuous records in GHCN tend to have a colder absolute temperature than stations with continuous records, it will have no real effect on the global anomaly, as long as the change in temps over time is unrelated to the baseline temp. If anything, I'd imagine colder places would warm faster than already hot places, all things being equal. That said, I haven't looked enough into the altitude of continuous vs. discontinuous stations to comment on whether the general claim is correct.
Still, as I mention in the article, the proper way to approach the problem is to compare anomalies in continuous and discontinuous stations. I make a first pass at it in my article, and I’d welcome folks to improve it by adding some sort of geographic weighting, such that you don’t start getting weird effects when the number of discontinuous stations gets small.
2) The adjustment graph doesn’t show the magnitude of the adjustments per se, but rather it shows the distribution of how the adjustments modify the trends. This means that if the data was adjusted down in the past and up in the present, it would have a large effect on the temperature trend over the full period.
Now, where GG’s graph does have shortcomings is that it does not account for the timeframe of adjustments, but rather it looks at the net effect on the trend over the full period of measurements for each individual station. This means that the net effect of adjustments for any discrete time period (say, 1970 to present or 1900-1950) could be different.
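The anomaly point in (1) can be made concrete with a toy sketch. This is illustrative only: real products use a fixed baseline period such as 1961-1990 and impose minimum-coverage rules, none of which is modeled here.

```python
def station_anomalies(series, base_years):
    """Anomalies for one station relative to its own baseline mean.

    series: {year: temp_c}; base_years: years defining the baseline.
    """
    base = [series[y] for y in base_years if y in series]
    baseline = sum(base) / len(base)
    return {y: t - baseline for y, t in series.items()}

# Two stations with very different absolute temperatures but
# identical warming: their anomalies come out the same, so the
# absolute coldness of a station drops out of the global anomaly.
cold = {1990: -10.0, 1991: -9.5, 1992: -9.0}
hot = {1990: 25.0, 1991: 25.5, 1992: 26.0}
a_cold = station_anomalies(cold, [1990, 1991])
a_hot = station_anomalies(hot, [1990, 1991])
```

Because each station is differenced against its own baseline, dropping the cold station leaves the mean anomaly unchanged here; a bias can only enter if the dropped stations had different *trends*, which is why the comparison of continuous vs. discontinuous anomalies is the right test.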
Let's put it this way: if it had said no evidence of global warming from 1940 (or 1934), then I could support that with the rest of the report. Or if it had said "we can't be sure *how much*" because of data problems, I could support that.
But if 1900-1934 is in there, I can’t make “20th century” work from the report itself. Unless a lot of unexplained (in the rest of the report) weight is being put on “significant”.
Murray (12:45:28) :
Paul K2, regardless of funding, the Yale report you reference has 2 obvious problems
1) a misunderstanding of d’aleo/Smith. They have not said that stations with a lower warming trend have been dropped. They are saying that the percentage of cooler stations (regardless of the trend) in the average has dropped, causing the average to warm.
My response: Let me get this straight; you believe that an average temperature is being calculated from all the station data? That all the temperatures are being added up and divided by the number of stations? This is apparently what D'Aleo and Watts seem to think is happening; the statements and methods in this report show that.
This seems like an extraordinarily difficult task. In order to get the average temperature for Pennsylvania, we would need to estimate the temperature for every square kilometer (for example) and divide by the area of Pennsylvania. Tough to do, since there is going to be a lot of temperature variation across the state, from the Delaware Bay and Lake Erie influenced areas to the icebox section in the north-central part.
Wouldn't it be easier if we simply looked at each station's data, identified a trend, and calculated an anomaly for that station? Then we could grid the state, and use a formula to assign the anomaly to the grids that are near that station? Then over time, if a station changes, say from morning to afternoon readings, we could put in an adjustment, similar to other stations with AM-to-PM changes?
Then the actual absolute temperature distribution shouldn’t have a major impact on the anomaly calculated for the state. Since this report clearly says the distribution of cooler and warmer stations is important, then the authors seem to think that an average temperature is being calculated.
Hope to see your answers soon.
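The grid-and-interpolate scheme described in the comment above can be sketched roughly as follows. This is purely illustrative: the distances are in degrees on a flat plane and the linear weighting function is made up, whereas GISTEMP's published method weights stations linearly with great-circle distance out to 1200 km.

```python
import math

def interp_anomaly(grid_pt, stations, radius=10.0):
    """Weight nearby station anomalies onto one grid point.

    grid_pt: (lat, lon); stations: list of (lat, lon, anomaly).
    Weight falls off linearly to zero at `radius` (degrees here,
    purely illustrative). Returns None if no station is in range.
    """
    num = den = 0.0
    for lat, lon, anom in stations:
        d = math.hypot(grid_pt[0] - lat, grid_pt[1] - lon)
        if d < radius:
            w = 1.0 - d / radius
            num += w * anom
            den += w
    return num / den if den else None

# A grid point midway between two stations gets the mean of their anomalies.
stations = [(40.0, -75.0, 0.5), (42.0, -80.0, 0.1)]
a = interp_anomaly((41.0, -77.5), stations)
```

The point of contention in the thread is what happens when the stations feeding a grid cell change over time: if the survivors carry UHI or other local trends, those trends get spread onto cells whose own stations were dropped.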
Regarding hunter (12:40:44) :
Of course they say that; they said the same thing about the hockey stick. Give it some time, let the minor errors get corrected, let it force some much-needed revelations on still-undisclosed adjustments, let the process continue. Read the critical comments and the responses. Time will tell, but in science, blanket statements like this are, well, simply unscientific.
regarding Nick Stokes (14:05:39) :
“I did a simple calculation. Just the average temperature of all stations in the GHCN set v2.mean, for any year. You might expect a small rise reflecting global warming. But if there is nett movement of stations to warmer climes, that should show as a bigger effect.”
Is this two sets combined to show the difference: one with the current stations, one with all the stations?
The chart heading does not make it clear. Are you saying the reduced number of stations currently used are actually cooler than the stations dropped?
jonk (12:58:46) :
The follow up article is slightly humorous, especially Gavin’s contribution.
http://www.vancouversun.com/technology/Incomplete+data+mean+warming+w
Well, Gavin's claim is easy to check. It might make a nice blog post for somebody.
Just look at RSS from 60N to 82.5N.
Then compare with the land.
Also, gavin makes an interesting argument about only using one station.
Hmm I wonder if somebody could pick a station with a cooling trend.
Paul K2 (15:04:12):
Yeah, we could do a lot of things. But the problem isn’t on this end. You could ring up James Hansen over at GISS, and ask him how he “adjusts” past temperatures. Here are some Illinois stations. Notice the shenanigans: click
Ray (13:43:17) :
"The big issue will be that all researchers that linked their research and publications to IPCC or CRU or GISS data sets can now and should be removed from peer-literatures."
Or buried in peat.
Anthropogenic Global Warming Virus Alert
http://www.thespoof.com/news/spoof.cfm?headline=s5i64103
Regarding Paul K2 (15:04:12) :
Could it not also be possible that they are saying that if the remaining stations have an existing or increasing UHI effect, and/or genuine warming relative to other regions, then those anomalies, legitimate or not, would show warming, and the anomaly estimates transposed from those stations to the no-longer-used rural stations could artificially raise that anomaly as well?
Additionally, if the number of dropped stations increased in the recent past, then the remaining UHI effects could potentially raise the overall anomaly trend, especially if the past was adjusted down, and this would be further emphasised if the biggest downward adjustments were during the late-1930s warm period.
Thanks for a great job in putting this all together.
You mentioned that this has not been peer-reviewed. Have the GISS, CRU and NCDC climate records been peer-reviewed? I am aware that some facets, such as UHI effects, have been published, but have the overall records and methodology been reviewed?
Paul K2 (15:04:12):
“Wouldn’t it be easier if we simply looked at each station’s data, identify a trend, and calculate an anomaly for that station? Then we could grid the state, and use a formula to assign the anomaly to the grids that are near that station? Then over time, if a station changes, say from morning to afternoon readings, we could put an adjustment in, similar to other stations with AM to PM changes?”
Hmm. Here is the thing: a formula to assign the anomaly to the grids from nearby stations? Maybe; it depends on the formula. Maybe it also depends on which stations you drop relative to recent changes in all the stations currently in the average anomaly. If you drop the stations with less anomaly and keep the ones with more warming as gauged by their anomaly, UHI or not, and use their anomaly, now transposed to the grid cells of the dropped stations, you may have problems and a biased warming. Just theoretical, of course.
I mentioned in an earlier comment that a series with fewer high latitude stations vis-a-vis low latitude stations would show less warming than a series with more high latitude stations since colder places seem to be warming faster than warm places. The folks at NCDC told me something similar:
“By the way – the absence of any high elevation or high latitude stations would likely only serve to create a cold bias in the global temperature average because we calculate the gridded and global averages using anomalies – not absolute station temperatures – as I explained in the information in my earlier e-mail to you. Anomalies for stations in areas of high latitudes and high elevations are typically some of the largest anomalies in the world because temperatures are warming at the greatest rates in those areas. So the suggestion that the absence of station data in these areas creates an artificial warm bias is completely false.”
However, not wanting to rely on their word alone, I figured I’d do the analysis myself, looking at the mean annual anomaly across the raw data from all stations at > 60 degrees latitude (both north and south) and <= 60 degrees latitude. You can find the source code here: http://drop.io/2pqk4vg (see the lat lon version of the do file).
The results? http://i81.photobucket.com/albums/j237/hausfath/Picture67.png
Looks like higher latitude stations do show a significantly larger warming trend (0.28 C per decade vs 0.18 C per decade since 1960).
I expect altitude will have similar effects, but I still need to check.
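The latitude split described above can be sketched like this. Toy numbers, not GHCN data; a real version would also need gridding or area weighting so that a few prolific stations don't dominate a band.

```python
from collections import defaultdict

def mean_anomaly_by_band(stations, split_lat=60.0):
    """Mean anomaly per year, split into high- and low-latitude bands.

    stations: list of (lat, {year: anomaly}) pairs.
    Returns {"high": {year: mean}, "low": {year: mean}} using
    |lat| > split_lat as the high-latitude criterion.
    """
    bands = {"high": defaultdict(list), "low": defaultdict(list)}
    for lat, series in stations:
        band = "high" if abs(lat) > split_lat else "low"
        for year, anom in series.items():
            bands[band][year].append(anom)
    return {b: {y: sum(v) / len(v) for y, v in d.items()}
            for b, d in bands.items()}

# Toy data: polar stations warming faster than a tropical one,
# as in the NCDC quote above.
toy = [
    (70.0, {2000: 0.4, 2001: 0.6}),   # Arctic station
    (-65.0, {2000: 0.2, 2001: 0.4}),  # Antarctic station
    (10.0, {2000: 0.1, 2001: 0.2}),   # tropical station
]
res = mean_anomaly_by_band(toy)
```

With warming concentrated at high latitudes, removing those stations from an anomaly-based average biases the global trend *cold*, not warm, which is the NCDC argument the latitude comparison set out to test.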
” David (15:22:35) :
Is this two sets combined to show the difference: one with the current stations, one with all the stations?
The chart heading does not make it clear. Are you saying the reduced number of stations currently used are actually cooler than the stations dropped?”
No, it’s just the average temperature of all the GHCN stations being used in each year, plotted by years. If this report is right, that should be increasing as cooler stations are “dropped”. It isn’t; it is decreasing.
David, yes of course! I am so uninformed. Clearly I don’t understand this.
So help me out: which pages in this report show a comparison of the anomalies for the dropped stations versus the stations that were kept? Then we can all get on the same page, so to speak.
But maybe good news, I stumbled across some work that has been done on the continental US stations, comparing subsets of the stations, especially the bad ones and the good ones, so we can look there to see the big differences that are caused by station selection.
http://www1.ncdc.noaa.gov/pub/data/ushcn/v2/monthly/menne-etal2010.pdf
I am having trouble seeing your point by looking at the graphs at the end of that article. Perhaps you can point out the graph that shows the big change due to bad stations being retained over good stations?
As always, your humble servant…
Nick Stokes (15:57:48) :
If, after lopping off all the cooler stations, the GHCN averages continue to fall, it can only mean one thing:
There ain’t no Global Warming going on, and even more to the point, there’s a whole lot of Global Cooling.
Seems to be a lot of trolls about. The publication must have struck a nerve.
They didn’t have to write all that to convince me, because after over a year of following along and going over station data myself, it’s all too obvious.
I have a “Harry_Read_Me” headache.
What a mess !!
You may want to have a look at some scatter plots and animations I created from raw GHCN V2 temperature data, anomalies and station locations here:
http://globaltemps.wordpress.com
While they sure didn’t uncover any scandals, they might be useful to expose the complete dataset in its chaotic variability.
I agree with Pamela Gray (06:31:29). The message of the article is very powerful. But a dispassionate style would be much more effective, along with use of the scientific passive. The polemical tone of the article is unfortunate, and will give partisans a hook to discredit the message.
Is it possible that the Briffa decline that was hidden was correct? It visually appears similar to the climate-model runs without AGW presented in IPCC AR4. (My suspicion is that Briffa knows he was right, but was browbeaten by Mann et al., was then embarrassed when he finally had to release his data, and that he is the Deep Throat behind the release of the CRU data. Now he is not available because he is ill. How's that for a wild conspiracy theory?)
Second question: Is it possible that the lack of warming in the last decade is due to the lack of “adjustments” that can be made to the data or lack of stations to eliminate?
RE Bruce (15:43:14) :
The basic GISTEMP, with references, is described here. The programs are also available on that site, but it’s also good to check Clear Climate Code by Nick Barnes, et al.