New Compendium Paper on Surface Temperature Records

NOTE: An update to the compendium has been posted. Now has bookmarks. Please download again.

I have a new paper out with Joe D’Aleo.

First I want to say that without E.M. Smith, aka “Chiefio” and his astounding work with GISS process analysis, this paper would be far less interesting and insightful. We owe him a huge debt of gratitude. I ask WUWT readers to visit his blog “Musings from the Chiefio” and click the widget in the right sidebar that says “buy me a beer”. Trust me when I say he can really use a few hits in the tip jar more than he needs beer.

[Report cover image]

The report is over 100 pages, so if you are on a slow connection, it may take a while.

For the Full Report in PDF Form, please click here or the image above.

As many readers know, a number of interesting analyses of surface temperature data have been posted on various blogs in the past couple of months. But they’ve been widely scattered. This document was created to pull that collective body of work together.

Of course there will be those who say “but it is not peer reviewed,” as some scientific papers are. But the sections in it were reviewed by thousands of readers before being combined into this new document. We welcome constructive feedback on this compendium.

Oh, and I should mention: the word “robust” only appears once, on page 89, and its use is somewhat in jest.

The short read: The surface record is a mess.


280 Comments
Editor
January 28, 2010 12:28 am

E.M.Smith (22:23:48) :
Thanks for the lovely comments about the database and maps. Graphics envy, eh? Well I have offered…. And you are right that this would not have even got off the ground without your efforts.

E.M.Smith
Editor
January 28, 2010 2:08 am

rbateman (22:58:49) : A question for you:
In attempting to come up with values to plug holes in raw data sets, should I
1.) take the slope between the previous and following data points,
2.) use the average hi/low values for the date in the station’s history, or
3.) your suggestion?

Oh Dear. I really hate the “right way to make up data” question….
Whenever possible, I’d rather just leave the hole. It is all you really “know”. But, if you MUST fill in: It depends a lot on the particular data sets, the particular processes they will be used to support, and the particular goals of the analysis. FWIW, this “issue” often comes up in stock trading. You have discontinuities for most data between each market day and it’s worse over weekends and holidays…
OK. The “default” is a straight line connecting the two dots you do have. So I’m assuming you are talking about temperatures, not stock prices. You draw the line between them and look for the time intercept. That’s your data.
(Tue: 20 F, Wed: 25 F, Thu: blank, Fri: 30 F – fill in Thu with 27.5 F)
That’s your #1 I think.
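A minimal sketch of that straight-line fill, in plain Python with the made-up numbers from the example above (the helper name is mine, for illustration only):

```python
# Fill gaps by interpolating between the nearest known neighbors.
# Leading/trailing gaps are left alone, since nothing anchors them.

def linear_fill(series):
    filled = list(series)
    known = [i for i, v in enumerate(filled) if v is not None]
    for a, b in zip(known, known[1:]):
        step = (filled[b] - filled[a]) / (b - a)
        for i in range(a + 1, b):
            filled[i] = filled[a] + step * (i - a)
    return filled

# Tue 20 F, Wed 25 F, Thu missing, Fri 30 F -> Thu becomes 27.5 F
print(linear_fill([20.0, 25.0, None, 30.0]))  # [20.0, 25.0, 27.5, 30.0]
```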
Now your #2 implies not a 25 F on Wed and a 30 F on Friday but a H/L set for each. Now you have lots of choices… Or does your #2 say you have a H/L for Thu? …
If you have a H/L for Thursday, then you average them (that’s what NOAA or any other provider does to get the “daily mean”) – even though it isn’t a daily mean… Imagine a station at the bottom of a steep canyon, where the solar-heated high might last all of 2 hours, not 12; or imagine a 50 F drop in about an hour as a front moves through, just before the clock changes to the next day: the shape of the daily curve is ignored, but it does change the actual mean as compared to the H/L average. But “everybody does it” and I doubt that the difference between an actual area-under-the-curve mean and a H/L average will matter too much most of the time… (bald assumption…) So, for all practical purposes, and to be in conformance with the OTHER NOAA products, I’d do the H/L average to calculate the daily “mean”… if I have the daily H/L data for a date that is missing a “mean”.
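To put a toy number on that canyon example (hourly readings invented for illustration):

```python
# Invented hourly temperatures for a canyon-floor station: cold most of the
# day, with roughly two hours of solar heating. The H/L average ignores the
# shape of the curve and lands well above the true area-under-the-curve mean.

hourly = [40.0] * 22 + [60.0, 55.0]

true_mean = sum(hourly) / len(hourly)            # ~41.5 F
hl_average = (max(hourly) + min(hourly)) / 2.0   # 50.0 F

print(round(true_mean, 1), hl_average)           # 41.5 50.0
```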
But what if you meant “H/L for Wed and Fri” …
Now you get some interesting choices:
You can average the H/L for each day and put the slope between those two averages.
You can put a slope from H to H and from L to L and get a synthetic H and a synthetic L for Thu. Then average the sH and sL to get a sMean…
(you could do one thing at one end of the line and another thing at the other end. Probably only useful if you have different data missing from Wed than from Fri… W high to F mean. W low to F mean. s1/2H s1/2L averaged… )
And you could also get really fancy and do longer-term things, like looking at delta slope H vs. delta slope L over longer periods and projecting the likely delta slope during the individual day. So, for example, you might have a dead flat low with nighttime fog, but a H that was decelerating toward that low as the fog filled in during the daytime. That would imply (if Fri H was almost the same as Fri L) that the Thu H ought to be a bit closer to the Thu L than a straight-line fit from Wed H to Fri H would have given (i.e. more of the drop would have happened in the last half of the prior day…).
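A minimal sketch of the H-to-H / L-to-L variant described above (numbers invented; for a single missing day the “interpolation” is just the midpoint):

```python
# Fill Thursday by interpolating highs and lows separately, then averaging
# the synthetic pair (sH, sL) into a synthetic "mean". All numbers invented.

def synthetic_mean(wed_hi, wed_lo, fri_hi, fri_lo):
    thu_sh = (wed_hi + fri_hi) / 2.0   # straight line Wed H -> Fri H
    thu_sl = (wed_lo + fri_lo) / 2.0   # straight line Wed L -> Fri L
    return (thu_sh + thu_sl) / 2.0     # sH and sL averaged into a sMean

# Wed 32/18, Fri 28/22 -> synthetic Thu H 30, L 20, "mean" 25
print(synthetic_mean(32.0, 18.0, 28.0, 22.0))  # 25.0
```

Worth noting: because everything here is linear, a single filled day comes out identical to interpolating the two daily means directly; the approaches only diverge when different data are missing at each end, or when the H and L are kept separate downstream.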
Do all of these minutiae of tea leaves really matter? I doubt it. For most money issues a straight line interpolation is fine. I have trouble thinking of what would be so dramatic in temperatures (but a “weather guy” would be consulted before I’d just pick one and not tell the customer…)
For #3, what would I do? I think I’d choose a straight line from Wed H to Fri H for a Thu sH and a straight line from Wed L to Fri L for a Thu sL, then average those two. I REALLY don’t like this slamming together of highs and lows into a single ball of goo. It hides too much information in that single daily average. In fact, were I doing a “GIStemp like” temperature series, I’d do it with Highs and Lows kept through the whole thing. Why?
IF, for example, we had “global warming” that had summers holding steady at just about the same temperature highs as they always had; but the winter lows were being clipped so that -30 F days were fewer and we only had -10 F days instead, well, frankly “Bring it on!”… And if the “warming” were such that daytime highs stayed at about, oh, 70 F where I am, but nighttime lows were being raised from 25 F to 35 F I’d again say “Bring it on!” (you can grow more stuff if you dodge the nightly frost…)
And that is one of the things that I find distressing about this whole Global Average Temperature number. It is just so … so… “useless”. It doesn’t tell me if I’m going to have a 120 F August afternoon (instead of 100F) or if I’m going to have a 15 F January night (instead of -5 F …) and one of those I’d be more than happy to have while the other, not so much!
BTW, the base data shows an interesting pattern. I’ve not done a look at the H and L yet, but the pattern of the averages over time shows a warming of the coldest parts of winters, but summers Do Not Warm. Personally, if it were shown to not be thermometer location driven, I think that it would have to be that 4th power radiation thing… No, I have no evidence for it. It’s just a self delusion at this point (but a pleasant one 😉 ). So while I think the whole CO2 thing is bogus, IFF there is ever an effect that actually happens, I think it would show up as a 4th power driven lid at about 20 C to 25 C ‘global average’ (GAK!); and with bottoms being raised as the blanket keeps a bit of heat in at night and in cold winters. All in all a very beneficial effect. Basically, a slightly warmer winter low would be FINE with me, and my plants… (But just as real in the data is the fact that the base data for individual places show plenty of stability but not much else… so that ‘CO2 warming winters’ fantasy is just personal speculation and not supported by the data…)
Oh, and the naive case of just averaging all the data for the globe and finding a trend. I did that early on. It’s not very interesting. You end up averaging a N. Hemisphere site that moves nearer to water (moderating temps) with a S. Hemisphere added station (say, at an airport) and masking the changes in both regions.
Where aggregate averages are interesting is in how they inform your ignorance. So you look at, oh, Africa. And you find a dramatic increase in temperatures in the early years. Then you look ‘by latitude’ and find that they move from the two ends (Mediterranean coast and South African coast) toward the Sahara. Then temps stabilize and not much changes. The actual pattern of the change of the average is what matters. And it clearly matches thermometer movements. THAT is the bias signal that a ‘temperature series’ must remove. It’s there. It can not be denied. And it is enormously larger than any supposed CO2 signal.
Lots of folks at that point want to say that I believe that average means something. I don’t. I think it tells you the problem you are trying to solve OR that there is no problem to solve.
So, Africa. It just is not warming. Sorry. Once the thermometer locations stabilize near the Sahara, it just sits there with a bit of ‘ripple’. Same thing for New Zealand. And Argentina. And on and on… (Canada is interesting because the basic data do show a cooling trend, yet GIStemp makes this nice rosy red somehow… but I digress…) The basic story told by the thermometers is that they move. And when they stop, the temperatures stabilize. WHEN they stabilize changes from country to country. So for the CO2 thesis to ‘work’ it must somehow have a differential impact on each country, working in different years in each. So this looking at subset averages has value. It tells you that you are trying to hear a CO2 whisper in a jet airport hurricane…
But average them all together and you hide too much. That station that moves from the mountains to the lake coast has the L rise, but also has H clipped. What does the “average” do? Does it matter? But look at the temperature profile by months of the year and you see the summers not moving up (moderation) and sometimes moving down, while the winters DO move up (moderation). And that is seen again and again in the base data.
So at the one extreme: an average of everything is useless. Yet smaller groups averaged together can show trends and inform our ignorance about where there is information carried in the base data. (August NOT warming, but January does? Hey, I call that a good thing; but it also is what happens when you move a location from an inland mountain to the beach… And an August that NEVER rises over 100+ years says that the CO2 ‘tipping point’ is just a fantasy. And THAT is what you see in the base data again and again.)
The more you look, the more you find that there is a ‘hard lid’ not a ‘tipping point’. I don’t really care if the lid is from moving thermometers to the beach or from a 4th power function of IR radiation. I still know that it isn’t a CO2 induced runaway feedback tipping point. (You can have amplification and a runaway tipping point, or you can have stability and dampening with added temps; but you can’t have dampening with a tipping point…)
So why all this long exposition? Because it points out that you need to know WHY you are averaging. What is the impact? What does it HIDE? (Every average hides something. That is what they are used for.)
So, you want to fill in a missing data point. What do you want to hide? The “probable High”? The “probable Low”? The likely change of the gap between them? (acceleration of one toward the other) The shape of the daily temperature profile? (Think of a sine wave vs. a nearly flat temp with a daily spike in the bottom of a canyon, like uuuu, vs. a series of nnnn shaped days with a brief nighttime cold moment in the desert heat. You hide those shapes with a H/L average.) What information are you willing to lose? What information do you WANT to lose? Does daily shape just distract from what you really want to see? Then hide it with a H/L average. Similarly, do you want “monthly shape” data? A closer approximation of the actual area under the monthly curve? Or will a monthly H/L average do? Is 30 days at 100 F and one at 50 F best represented with 75 F or 99 F for your purposes? What about if 28 days are missing and you have one day at 100 F and the other at 50 F? Did you use one answer to the first question and a different one for the second? How will you keep those two answers playing well with each other?
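Worked out, the two candidates in that 30-days question look like this (the day-weighted mean comes to about 98 F, close to the 99 in the question):

```python
# 30 days at 100 F and one at 50 F: two ways to summarize the month.

days = [100.0] * 30 + [50.0]

monthly_hl = (max(days) + min(days)) / 2.0   # 75.0 F -- extremes only
monthly_mean = sum(days) / len(days)         # ~98.4 F -- weights every day

print(monthly_hl, round(monthly_mean, 1))
# Now flip it: 28 days missing, one day at 100 F and one at 50 F, and the
# mean of what's left is back to 75.0 F -- same number, very different story.
```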
So in almost all cases a naive straight line fit of W ave to F ave will be just fine. In some cases you might want a Wed H to Fri H slope. In others you might want to have a Wed L to Fri L slope. In a very few, the sH and sL from them. And it all comes down to what do you want that ‘in fill’ to do and what do you want the averages used to hide?
What happens when Thursday had a Canada express run through in 12 hours and be gone? Or a hot tornado cook through, moving a lot of air? Your “in fill” and your average hide that you are ignorant of those events… If you know you want that ignorance and it’s a “feature”, then go ahead and average…

Editor
January 28, 2010 3:16 am

globaltemps (16:28:35) :
Your animations are really cool – and show there is a cap on temperature.
Great work!

January 28, 2010 5:52 am

E.M.Smith (22:23:48)
Thank you for the links to my web server and the ‘interactive maps’ that show the different warming/cooling trends (in the raw and adjusted data) for different time periods separately for BOTH the NOAA GHCN and GISS datasets (and at some point the CRU dataset also). In particular, thanks for encouraging visitors here to read the ‘Mapping global warming’ thread on ‘diggingintheclay’.
I’ve recently been trying to answer the question that someone called Andy asked on the ‘diggingintheclay’ (DITC) blog as to exactly how the adjustments affect the warming/cooling trends. Vjones and I are just in the process of preparing a series of threads which link to the previous analyses done by GG and RomanM (and others), and as with the other maps, I’m attempting to show the effects the adjustments have on the various temporal warming/cooling trends. If anyone has read the ‘Mapping global warming’ thread on DITC, you’ll have seen the cooling trend from 1880 to 1909, followed by the clear warming trend from 1910 to 1939, followed by the clear cooling trend from 1940 to 1969, followed finally by the ‘current’ warming period (CWP) from 1970 to 2010.
Just so that everyone is clear as to exactly what I’ve done to produce these maps: the maps show trends (for the different time periods) in individual station raw and adjusted temperature data (i.e. there isn’t a single anomaly chart in sight!), shown as ‘coloured dots’ based on what range their warming or cooling trend (during the given time period) falls into. So for example if a station shows a warming trend of 7 degC/century then it will be shown as a ‘dark red’ dot. Vice versa, if it shows a cooling trend of -7 degC/century then it will be shown as a ‘dark blue’ dot.
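For readers who want the mechanics, here is a minimal sketch of how a station gets its coloured dot (the bin edges and colour names are my assumptions for illustration, not the map’s actual palette):

```python
# Fit a least-squares trend to one station's annual means, then bin it.

def trend_degc_per_century(years, temps):
    n = len(years)
    my, mt = sum(years) / n, sum(temps) / n
    num = sum((y - my) * (t - mt) for y, t in zip(years, temps))
    den = sum((y - my) ** 2 for y in years)
    return 100.0 * num / den   # degC/year scaled to degC/century

def colour_bin(trend):
    if trend >= 5.0:
        return "dark red"
    if trend >= 1.0:
        return "red"
    if trend > -1.0:
        return "neutral"
    if trend > -5.0:
        return "blue"
    return "dark blue"

years = list(range(1970, 2010))
temps = [10.0 + 0.07 * (y - 1970) for y in years]   # invented 7 degC/century station
print(colour_bin(trend_degc_per_century(years, temps)))  # dark red
```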
Now please go and look at the ‘interactive maps’ and/or the snapshots of them in the ‘Mapping global warming’ thread on DITC. In particular please contrast the 1880 to 1909 cooling period with the 1940 to 1969 cooling period, and most importantly the 1910 to 1939 warming period with the 1970 to 2010 period. Also when contrasting these periods please bear in mind the ‘station drop out’ problem, namely that global station coverage is much sparser prior to 1950 and after about 1992. Please read the ‘Station drop out problem’ thread on DITC for much more detail.
If you contrast the 1910 to 1939 warming period with the 1970 to 2010 CWP, and allow for the fact that there is significantly greater global station coverage for the 1970 to 2010 period, you’ll see that the maps aren’t that different. Indeed it’s arguable that the 1910 to 1939 warming period is more severe in the US than it is shown to be during the 1970 to 2010 CWP. In particular note the Northern Hemisphere versus Southern Hemisphere differences. Global warming during the 1970 to 2010 CWP is clearly not ‘global’ but rather is largely Northern Hemisphere warming. If you look at the 1970 to 2010 DJF and JJA seasonal maps you’ll also see that it is largely Northern Hemisphere winter warming. Most importantly, note that these warming trends are largely evident in the RAW data trend maps as well as the ADJUSTED data trend maps. In other words, the adjustments don’t have that much effect on the warming/cooling trends over and above those evident in the raw data.
There are nonetheless some significant differences between the 1910 to 1939 and 1970 to 2010 trend maps. Look at the 1970 to 2010 map, for example, at all the Canadian stations at and above the 49th parallel. They are all ‘dark red’ dots, i.e. they show greater than 5 degC/century warming trends over the 1970 to 2010 CWP. Look also at the Icelandic, Northern Norway and Northern Russia stations for 1910 to 1939. These also show ‘dark red’ dots, i.e. greater than 5 degC/century warming trends – something going on with the AMO here, perhaps? Finally, look at the central US during the 1910 to 1939 and 1970 to 2010 time periods. The warming trend in many of the central US stations is greater during 1910 to 1939 than it is during the 1970 to 2010 CWP. This is perhaps clear evidence that in the central US (at least) the 1930s was a somewhat warmer decade than the 1990s.
Also please bear in mind (as with E.M. Smith) that I’m producing these ‘interactive maps’ on low spec hardware. In fact the web server is an ex-Compaq Evo desktop PC that is at least 6 years old and only has 512 MB of RAM and an 80 GB hard drive. As E.M. Smith and I have shown, (unless you are from NOAA or GISS) you really don’t need a large amount of computing power to do this type of analysis of the NOAA/GISS/CRU datasets. Because the hardware is not that powerful, and largely because the ‘interactive maps’ use a Flash component (which in turn loads the data from an XML file), the maps can take some time to load. If you are prompted that it is ‘taking a while for Adobe Flash Player to load the data’, please click ‘No’ (maybe several times) and eventually the map will be fully displayed – it’s well worth the wait. Just a quick tip! It’s best to open the different maps in separate ‘tabs’ in your browser and switch between them to see the differences. After the maps are fully loaded you’ll then be able to ‘zoom in’ to a particular country and click on a particular ‘dot’ to see a full chart of the raw/adjusted data and warming/cooling trends for that station. Enjoy!!

Jeff Alberts
January 28, 2010 7:31 am

I don’t find Menne’s paper a surprise. If one is looking for a trend, it does not matter that station A is pure whereas station B is contaminated by noise, provided that the contamination of station B remains constant throughout the period over which the trend is being examined. In countries such as the US, most urban development/growth predates the period considered by Menne, and hence, when looking for temperature trends (rather than absolute accuracy in the temperature measurement) during the period considered by Menne, one would not expect to see substantial differences between well-sited and badly-sited stations, or between urban and rural stations.

You can’t really categorically say that. You’d have to take each station on a case by case basis.
I grew up in Manassas, VA (yeah, Battle of Bull Run and all that). As of the mid 1970s, it was a small town of about 20k population. In the 80s it saw an explosion of strip malls and housing as DC area commuters started moving further out into the suburbs. The same can be said of Gainesville, VA, Haymarket, Warrenton, and many otherwise rural towns in that area. Places that were forests and open fields where I tromped around as a young teen in the 70s became housing developments, parking lots, etc.

Richard M
January 28, 2010 8:05 am

EM Smith, thanks for your reply. I now feel more comfortable.
As for the remarks by Tom Graney and others that siting changes are a one time deal … wrong! Once again one must think about the situation. If you really have a warming signal in climate (and I believe there is one), then that means air conditioners will be on more often, asphalt will absorb more heat and hold it longer, and maybe even the barbecues will be used more often. Hence, your statements are only true if there is no warming signal at all.
Now, for a little conjecture. If one looks at the temperature anomalies post 1998 there appears to be a real balance between La Ninas and El Ninos around the .2C mark. Maybe the Canadian researcher was right about the pre-1998 timeframe and ozone depletion was a major player in warming the planet. Now it has stabilized.
Of course, my long time position is that I am skeptical of anything and everything related to climate. It is very complex. In fact, I’ve coined a new term to describe those who think they understand climate to the Nth degree. They are complex climate deniers. 😉

January 28, 2010 8:16 am

Smokey (15:24:26) :
…You could ring up James Hansen over at GISS, and ask him how he “adjusts” past temperatures. Here are some Illinois stations. Notice the shenanigans: click [ http://www.rockyhigh66.org/stuff/USHCN_revisions.htm ]

Actually, that’s ALL the Illinois USHCN stations, so no cherry picking. Just compare the number of stations where the warming increased with the number where the warming decreased. Same with the Wisconsin stations page, all included, great majority adjusted to more warming.
http://www.rockyhigh66.org/stuff/USHCN_revisions_wisconsin.htm
I’ve started on an Iowa page. So far, same story.

Doug S
January 28, 2010 8:33 am

E.M.Smith (23:21:21) :
Very fine work E.M. and thanks so much for the pointers to the “fill-in” code. This is an unbelievable process for managing the data. I’m inclined to think the primary problem is the legacy nature of the temperature collecting methods and a failure of the research community to address the fundamental data management issues. I would urge the people spending our tax dollars to halt everything they’re doing and start again with a new data structure. It would be a big job to go back to the paper records and hand enter the data once again in a modern enterprise database system, but the cost would be minuscule compared to the waste, fraud and abuse we’ve suffered to date. Keep the faith and thanks again for all your efforts on behalf of the taxpayers around the world.

rbateman
January 28, 2010 8:52 am

E.M.Smith (02:08:45) :
Thank you for all that information, it helps to know the thought process and the pitfalls with each decision.
I’m only doing 1 station, my hometown. I have 2 data sets. One is the NCDC, which goes back to 1913 uninterrupted but has holes in it, especially in the later years. The 2nd data set goes from the 50’s to present, has a lot fewer holes, is from a hardcopy printed source (same rural town) and overlaps the first data set an average of 4 out of 7 days perfectly.
I’m really not keen on hiding anything, but do want to get the best possible representation of how the high, low and high-low(average) change in the time span.
This:
http://www.robertb.darkhorizons.org/TempGr/Wv1913_2009avs.GIF
will change as I continue to transcribe the 2nd set of data from hard copy (about 75% complete), but I can see where the warming is: nighttime lows.
All of the data I have preserved at a glance on the base page:
http://www.robertb.darkhorizons.org/WeavervilleClimate1.htm
There is a 3rd way to do the averages (dri.edu does it) and that is take a particular calendar day [say Jan 26th] and for 1913 to 2009 the average high is 49F and the average low is 29F. Plug those into Jan 26th holes.
I do want to limit any bias that comes from ‘making up’ missing data.
At the same time, the NCDC historical data is choppy, with many runs of years shot full of holes (some holes are plugged with its B-91s), so I have to do something with it, or throw it away.
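For what it’s worth, a minimal sketch of that third (dri.edu-style) approach – the record layout here is invented for illustration:

```python
from collections import defaultdict

# Build long-term average high/low for each calendar day, then plug those
# normals into holes for that day. Record layout invented for illustration.

def day_of_year_normals(records):
    """records: iterable of (month, day, hi, lo) across all years on file."""
    sums = defaultdict(lambda: [0.0, 0.0, 0])
    for month, day, hi, lo in records:
        s = sums[(month, day)]
        s[0] += hi
        s[1] += lo
        s[2] += 1
    return {k: (s[0] / s[2], s[1] / s[2]) for k, s in sums.items()}

# e.g. if every Jan 26 on record averages 49 F high / 29 F low:
normals = day_of_year_normals([(1, 26, 48.0, 28.0), (1, 26, 50.0, 30.0)])
print(normals[(1, 26)])  # (49.0, 29.0) -- the values plugged into Jan 26 holes
```

One caveat for trend work: a climatological fill pulls the filled days toward the long-term average, which damps any real trend in exactly the quantity being studied – worth weighing against the bias concern above.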

Jeff Alberts
January 28, 2010 9:31 am

Richard M (08:05:05) :
EM Smith, thanks for your reply. I now feel more comfortable.
As for the remarks by Tom Graney and others that siting changes are a one time deal … wrong! Once again one must think about the situation. If you really have a warming signal in climate (and I believe there is one), then that means air conditioners will be on more often, asphalt will absorb more heat and hold it longer, and maybe even the barbecues will be used more often. Hence, your statements are only true if there is no warming signal at all.

I dunno. I use my BBQ even when it’s 40f outside…

rbateman
January 28, 2010 11:15 am

Doug S (08:33:39) :
By hand, yes, that is what is needed.
If we are going to get our money’s worth from the new dollars poured into climate change study, collecting and disseminating a better foundation is the way to go.
So what if it takes a whole army of individuals to do the dirty work?
Fine. Pony up with the grants. The President and Congress are talking job stimulus: Let them put their money where their mouths are.

January 28, 2010 11:24 am

Mike McMillan,
Your pages of blink charts are really excellent. We can see at a glance what is being done to the temperature record. And if they do this to individual stations, there’s not much doubt that they do it throughout GISS [and NOAA, which does the same thing].
Also, I’d suggest putting your name on those chart pages – in case someone stupidly forgets to note the provenance. Something like these guys do at the bottom of their chart: click

Jeff
January 28, 2010 1:19 pm

This report is embarrassing and is going to destroy the position of AGW critics. Are Joe D’Aleo and Watts really AGW critics, or are they really AGW alarmists posing as critics and working to undermine criticism by AGW non-believers?
Temperature trend isn’t calculated by averaging recorded temperatures from the surface stations across the world. Anomalies are created for each individual station, and then the anomalies are averaged to determine the average temperature trend. And this means that what’s relevant isn’t how the recorded temperature from one station differs from another, but rather how the changes in temperature at different stations differ. And on the televised news report you see D’Aleo and Smith talking about how “cold thermometers” were removed in a systematic fashion.
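A toy illustration of that anomaly-first averaging (numbers invented): once each station is expressed against its own baseline, its absolute level drops out, so removing a “cold” station does not, by itself, warm the average.

```python
# Each station is converted to anomalies against its own base-period mean
# before any cross-station averaging. Data invented for illustration.

def station_anomalies(temps, n_base):
    baseline = sum(temps[:n_base]) / n_base
    return [t - baseline for t in temps]

cold_station = [0.0, 0.1, 0.2, 0.3]     # absolute level is irrelevant
warm_station = [20.0, 20.1, 20.2, 20.3]

anoms = [station_anomalies(s, 2) for s in (cold_station, warm_station)]
global_anom = [sum(col) / len(col) for col in zip(*anoms)]
print([round(a, 2) for a in global_anom])  # [-0.05, 0.05, 0.15, 0.25]
# Drop the cold station entirely and the averaged anomalies are unchanged.
```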

Jeff
January 28, 2010 2:04 pm

I just noticed the comments above where Smith responded to someone else who was also pointing out that temperature anomalies are first created for each individual station and then averaged. According to Smith, this isn’t what’s really going on in the code, even though that’s what everyone is being told. And if that’s the case, I apologize for my remarks above.

January 28, 2010 4:24 pm

Jeff (13:19:01) :
Jeff, I think your first post had it mostly right. You described the Climate Anomaly Method. Some modification of it is needed when stations do not have enough data in the reference period, and as E.M.Smith describes, GISS calculates anomalies at grid points, which involves a bit of local aggregating. But it is still much closer to the CAM than what seems to be envisaged in this report.
BTW, there’s nothing secret about the GISS method. It was described in a paper in 1987 by Hansen and Lebedeff.

Richard M
January 28, 2010 4:44 pm

Jeff Alberts (09:31:51) :
“I dunno. I use my BBQ even when it’s 40f outside…”
I used my barbecue last week when it was around 20F, a nice mid-winter day here in Minnesota. While my remark about barbecues was somewhat tongue in cheek, I imagine the tendency is to use barbecues more often the hotter it gets so one doesn’t heat up the house by turning on the oven.

Richard M
January 28, 2010 4:49 pm

Jeff (14:04:32):
I think that is the crux of a lot of the confusion. The warmists at skeptical science were making the same assumption and throwing out ad homs left and right.

January 28, 2010 8:31 pm

E.M.Smith (02:36:52) :
A retraction and apology is due here. I discovered an error in my R program, which I published on my blog, which calculated the station averages in v2.mean. There was a memory overflow which had unexpected effects. The revised graph now looks like the one on p 14 of the report (and like that, different to the one on p 11), which shows that the site averages in the GHCN collection have been warming since 1950, though not uniformly.
However, it is still true that this should not lead to warming of the average anomaly.

Steve Keohane
January 29, 2010 4:46 am

Richard M (16:44:13) : “I imagine the tendency is to use barbecues more often the hotter it gets so one doesn’t heat up the house by turning on the oven.”
The only meat that doesn’t go on my grill is the Thanksgiving turkey, and bacon (it flares up badly).

clique2
January 29, 2010 6:37 am

Hi! Tried to buy him a beer but you need to put a website address in the submit form! Can WUWT provide?

Ken MacLauchlan
January 29, 2010 7:36 am

A nice compendium of many concerns about the surface temperature record. A few weaknesses I have noted that are worth sorting:
1) CASE 5: NO WARMING TREND IN THE 351-YEAR CENTRAL ENGLAND TEMPERATURE RECORD
This comparison only works if you limit yourself to only the summer season figures. The other three seasons do show warming when charted in the same way. Probably worth binning this case study.
2) Klotzbach et al. (2009) show how RSS and UAH disagree with HadCRUT3v and NCDC for land surface temperature measurements. However the satellite and terrestrial measurements are much closer for ocean surface measurements and for the global total. Has anyone explained why such large disagreements for land based measurement disappear when we look at the global totals? Is it just because most of the world is water or is there another fiddle going on?

Baa Humbug
January 29, 2010 9:01 am

Anthony, I thought you might be interested in this paper I found by John Christy of UAH from 2001, titled
WHEN WAS THE HOTTEST SUMMER?
A State Climatologist Struggles for an Answer
BY JOHN R. CHRISTY

Richard M
January 29, 2010 9:41 am

Steve Keohane (04:46:27) :
“The only meat that doesn’t go on my grill is the Thanksgiving turkey, and bacon (it flairs up badly).”
I use my grill quite a bit but not nearly as much as that. However, I do plan on grilling tonight while the temperature hovers just above 0F.
Has anyone else noticed this:
http://www.chron.com/commons/readerblogs/atmosphere.html?plckController=Blog&plckBlogPage=BlogViewPost&newspaperUserId=54e0b21f-aaba-475d-87ab-1df5075ce621&plckPostId=Blog%3a54e0b21f-aaba-475d-87ab-1df5075ce621Post%3a1602a720-b2a5-47de-bf2d-3b62afcf88a6&plckScript=blogScript&plckElementId=blogDest
The author makes the same assumption about the calculation of anomalies that the warmers have been repeating. I’d suggest EM Smith or Joseph D’Aleo post a rebuttal.

January 29, 2010 3:38 pm

E. M. Smith –
I had blogged here (http://tinyurl.com/ykfy8aa) that the GISS temperatures were computed from anomalies, but your comment on these pages that such a statement was “bull pucky” required me to dig further.
Thanks to your posting of the code, I now see that the process is just as documented in Hansen and Lebedeff (1987): after individual station adjustments and corrections are applied, an average temperature is constructed by starting with the longest-period record near a grid point and doing a successive weighted average with all the other nearby stations. The biases (or “shifts”) are computed separately for each station and removed before the data are averaged together. Then anomalies are computed relative to the base period on the grid point value.
This is certainly not averaging anomalies together, but it is equally effective at correcting for changes in network configuration. No bias in the trends is introduced simply by losing mostly cool stations after 1990. Of course, you can have problems if the lost stations have different TRENDS, but their absolute temperatures make no difference, contrary to the D’Aleo and Watts report. I discuss this more fully here: http://tinyurl.com/yec3ads .
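To make the mechanics concrete, here is a hedged sketch of that successive combination step as just described (simplified equal weighting and invented numbers – not the actual GISS code):

```python
# Combine a nearby station into a running reference record: compute the
# station's offset ("shift") over the overlap, remove it, then blend.
# Equal weighting is a simplification of the paper's weighted average.

def combine(reference, station):
    """reference, station: dicts of year -> temperature."""
    overlap = sorted(set(reference) & set(station))
    shift = (sum(reference[y] for y in overlap) / len(overlap)
             - sum(station[y] for y in overlap) / len(overlap))
    combined = dict(reference)
    for y, t in station.items():
        shifted = t + shift                              # bias removed first
        if y in combined:
            combined[y] = (combined[y] + shifted) / 2.0  # crude equal weight
        else:
            combined[y] = shifted                        # extend the record
    return combined

ref = {1950: 10.0, 1951: 10.2, 1952: 10.4}
st = {1951: 12.2, 1952: 12.4, 1953: 12.6}   # 2.0 warmer in absolute terms
print(combine(ref, st))  # offset removed; the warming trend is what survives
```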
I would welcome your thoughts. If you wish to post a comment on my blog but do not want to register, just email it to me. Anthony, the same goes for you.

January 29, 2010 3:40 pm

Just noticed the 9:41:35 by Richard M just a bit above mine. I didn’t intend my comment as a response, but it seems to serve that purpose.