New Compendium Paper on Surface Temperature Records

NOTE: An update to the compendium has been posted. Now has bookmarks. Please download again.

I have a new paper out with Joe D’Aleo.

First I want to say that without E.M. Smith, aka “Chiefio” and his astounding work with GISS process analysis, this paper would be far less interesting and insightful. We owe him a huge debt of gratitude. I ask WUWT readers to visit his blog “Musings from the Chiefio” and click the widget in the right sidebar that says “buy me a beer”. Trust me when I say he can really use a few hits in the tip jar more than he needs beer.

[Surface temperature report cover image]

The report is over 100 pages, so if you are on a slow connection, it may take a while.

For the Full Report in PDF Form, please click here or the image above.

As many readers know, there have been a number of interesting analysis posts on surface data on various blogs in the past couple of months. But they’ve been widely scattered. This document was created to pull that collective body of work together.

Of course there will be those who say “but it is not peer reviewed” as some scientific papers are. But the sections in it have been reviewed by thousands before being combined into this new document. We welcome constructive feedback on this compendium.

Oh, and I should mention, the word “robust” only appears once, on page 89, and its use is somewhat in jest.

The short read: The surface record is a mess.

Ruhroh
January 26, 2010 8:34 pm

Anthony;
This is Huge!
Excellent work by all involved.
As a person who heard Richard Feynman deliver that famous Cargo Cult Science talk in 1974, I’m very sure he would be a huge fan of your work.
He loved to skewer stuffed shirt science, and you guys really did a great job on this one.
On behalf of my children, THANK YOU.
RR
REPLY: I appreciate that, but please thank Joe D’Aleo and in particular E.M. Smith, they did more work on this than I. – Anthony

Methow Ken
January 26, 2010 8:37 pm

Downloaded; saved; scanned (full read not doable tonight).
Already I’ve seen enough to say:
Every member of the US Congress should be required to read this.

January 26, 2010 8:38 pm

Mega congratulations to Anthony and Joe. This is the most important climate realist report written to date. A blockbuster. Kudos to all involved, including E.M. Smith, the surface station crew, and SPPI.
This report marks the tipping point. From here on, the AGW scare will steadily decline and disappear. The whole world is in your debt.

Peter of Sydney
January 26, 2010 8:42 pm

Excellent piece of work. Certainly a far cry from the IPCC’s standard of research. I’d like to see the IPCC provide a response – a scientific one, not their usual political spin.

January 26, 2010 8:45 pm

Would you object to my quoting extensively from this in an upcoming article?
REPLY: No but since this is a large compendium, do try to attribute who said what. Ask if needed. -A

Clive
January 26, 2010 8:52 pm

WOW!
And Thank you!!
Copy off to the Canadian Minister of Environment.
Cheers!
Clive
A grateful Canuck! ☺

Leon Brozyna
January 26, 2010 8:53 pm

You have a gift for understatement:

“The surface record is a mess.”

Bruce Hall
January 26, 2010 8:56 pm

Anthony, this will make good reading on my trip to SF. Joe is not one to fly off the handle, so it ought to be interesting to see where this leads.

Ruhroh
January 26, 2010 8:59 pm

Anthony;
I found the linked website to be an extremely insightful distillation of the unstated assumptions that you are up against with this debunking effort.
I think that the first 3 are pivotal in this case;
1. Authority confers virtue.
2. Authority bestows wisdom.
3. Authority implies benevolence.
4. Authority creates wealth.
I expect that the non-sceptics that you might seek to reach will, without question, assume that
the NCDC is intrinsically virtuous, (more so than 2 guys on a website), wiser (than 2 guys on a website), and more benevolently trustworthy (than 2 guys on a website).
I don’t know how to get those unstated presumptions to be ‘in play’ for the folks you seek to reach, but perhaps it would be by explicitly discussing your pathway from initially sharing those presumptions, (as opposed to having rejected them from the git-go).
You might even note (along the way) in passing your surprise at the situation;
That despite your assumption of (NCDC) virtue, wisdom and benevolence, you nonetheless have found contrary evidence.
Maybe this would be relevant if you find yourself reprising this Augean effort for a TV camera.
Just an idea from decades of marriage counselling…
RR

Andrew30
January 26, 2010 9:02 pm

Great Work!
I’ve skimmed it twice and will do a full read tomorrow.
This just in:
“Andrew Weaver, probably Canada’s leading climate scientist, is calling for replacement of IPCC leadership and institutional reform.
If Andrew Weaver is heading for the exits, it’s a pretty sure sign that the United Nations agency is under monumental stress. Mr. Weaver, after all, has been a major IPCC science insider for years. He is Canada Research Chair in Climate Modelling and Analysis at the University of Victoria, mastermind of one of the most sophisticated climate modelling systems on the planet, and lead author on two recent landmark IPCC reports.”
http://network.nationalpost.com/np/blogs/fpcomment/archive/2010/01/26/terence-corcoran-heat-wave-closes-in-on-the-ipcc.aspx

Doug in Seattle
January 26, 2010 9:03 pm

Anthony,
Do you still plan on releasing the journal paper you spoke about last summer? Or is this the one?
REPLY: Well this isn’t a journal paper, just a compendium of important issues discovered about the surface temperature record, and yes a full journal “peer reviewed” article is being worked on. – A

January 26, 2010 9:04 pm

Strong stuff!
“…NOAA and NASA in the manipulation of global temperature records…”

Richard Wakefield
January 26, 2010 9:07 pm

Yes, the surface temperature record is a mess. I’ve gotten all of Southern Ontario’s data from 110 stations from Environment Canada and started to compile it, first to see how good the data is. It’s pathetic. Only 76 have temperature data at all; the rest from Ontario have only precipitation data, no temperatures. The best coverage was in the mid-1980s, with 72% data, but it drops off dramatically to a mere 10% today (up until the 1950s it was less than 5%). This is because most stations started their data collection after the 1950s, but for some reason many of them stopped data collection (budget cuts?). Only 10 stations are still collecting measurements, and all of them started after the 1960s. Only 4 have a long data range starting in 1900, but all of them ended before 2005.
What I have been able to get out of this data is very similar to what I found with the one location in Belleville (http://www.scribd.com/doc/25338819/What-Does-Averge-Temperature-Actually-Mean). The increase in the average mean temperature for all of southern Ontario is due to a narrowing of the variation: the number of days above 30C has dropped since the 1920s, and the number of days below -20C has also dropped since the 1920s. The length of the growing season has also been increasing since the 1920s.
Thus I have more evidence that this “warming” is nothing more than a narrowing of the extreme temperature ranges of earlier years to the more moderate ranges of today. British Columbia is next on my list.
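
(For anyone wanting to reproduce that kind of tally from a daily station file, a minimal Python sketch follows. The file name and the Date/MaxTemp/MinTemp column names are assumptions for illustration, not Environment Canada’s actual export format.)

import pandas as pd

# Count days per year above 30 C and below -20 C from a daily station
# file. "Date", "MaxTemp" and "MinTemp" are hypothetical column names.
df = pd.read_csv("station_daily.csv", parse_dates=["Date"])
df["Year"] = df["Date"].dt.year

hot = df[df["MaxTemp"] > 30.0].groupby("Year").size()
cold = df[df["MinTemp"] < -20.0].groupby("Year").size()

summary = pd.DataFrame({"days_above_30C": hot,
                        "days_below_minus20C": cold}).fillna(0)
print(summary)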

January 26, 2010 9:09 pm

Congratulations Anthony. I am pleased to see the fruition of your and others’ hard work. Keep it up, as you *are* much appreciated.

Stephan
January 26, 2010 9:10 pm

Anthony I thought this was the one that needs an answer:
http://www.skepticalscience.com/On-the-reliability-of-the-US-Surface-Temperature-Record.html
Does your book cover this? Or are they too thick to understand? He/they say you only show pics while they show data, and their conclusion is that there is no significant effect. I would doubt that very strongly…

REPLY:
This was well along when the Menne paper came out, but I do touch on it in this compendium. I had to get this wrapped up before I could do any substantive replies here. I have a Paper with Pielke Sr. and others we are working on, and it is a fully detailed analysis. That will be the best rebuttal. – A

Tom G(ologist)
January 26, 2010 9:10 pm

Cinderella – The short story – The shoe fit
Just love stories you can sum up in a few words

JB
January 26, 2010 9:10 pm

Just what the doctor ordered!
A copy is being sent to my MP.
Thank you!

George E. Smith
January 26, 2010 9:11 pm

Well I just downloaded it and got it safely saved in my Climate file. Haven’t had time to do other than look at the table of contents, but it looks like a substantial piece of work.
Very nice effort there Joe and E. M. Smith and Anthony.
I’ve already stated many times that I have little confidence in the surface data prior to the age of polar orbit satellites, and the oceanic buoys, which I date from about 1979/80.
So I am happy to read that you too feel that early record is highly contaminated.
Jolly good show chaps.

wayne
January 26, 2010 9:27 pm

Gentlemen, hats off to you!

crosspatch
January 26, 2010 9:30 pm

Anthony, is this true:

Global terrestrial temperature data are gravely compromised because more than three-quarters of the 6,000 stations that once existed are no longer reporting.

Or is it more a case of something like:
Global terrestrial temperature data are gravely compromised because more than three-quarters of the 6,000 stations that once existed are no longer having their reports included in the database.
I know that many of the stations that have been dropped from the GHCN are still there and still reporting, it is just that their reports are no longer included in the data.
REPLY: It is a combination of both, but I agree that could be worded for improved comprehension – A

Ric Werme
Editor
January 26, 2010 9:36 pm

REPLY: I appreciate that, but please thank Joe D’Aleo and in particular E.M. Smith, they did more work on this than I. – Anthony

So it seems to me that E.M. Smith deserves authorship credit along with Joe and you. Smith may not have written much of the text, but he was certainly a major contributor.
REPLY: Most certainly he is.

henry
January 26, 2010 9:37 pm

You’re being mentioned over at Weather Underground:
http://www.wunderground.com/blog/JeffMasters/show.html
Primarily as an aside to a report given by Dr. Matthew Menne and co-authors at NOAA’s National Climatic Data Center (NCDC). In a talk at last week’s 90th Annual Meeting of the American Meteorological Society, Dr. Menne reported the results of their new paper just accepted for publication in the Journal of Geophysical Research titled, On the reliability of the U.S. Surface Temperature Record (http://www1.ncdc.noaa.gov/pub/data/ushcn/v2/monthly/menne-etal2010.pdf)
You really need to read Dr. Jeff Masters’ take on this paper.
At the end, he states:
“The surfacestations.org effort coordinated by Anthony Watts has made a valuable contribution to science, helping us better understand the nature of the errors in the U.S. historical temperature data set. In his talk last week at the AMS conference, and in the credits of his paper, Dr. Menne had some genuinely grateful comments on the efforts of Anthony Watts and the volunteers of surfacestations.org. However, as of this writing, Watts has made no mention on surfacestations.org or on wattsupwiththat.com of Dr. Menne’s study.”
Looking forward to your take on this paper.
REPLY: Been a little busy, see above, that’s my next project. Besides, Menne’s paper was a surprise. I’ll explain in a future post. – A

JEM
January 26, 2010 9:39 pm

It’s Chile, not Chili.
Just a quibble. Still reading and digesting.
REPLY: Damn spell checkers 😉

January 26, 2010 9:42 pm

Anthony,
I downloaded a copy and sent one to Congressman McClintock and to California Assemblyman Dan Logue, who is sponsoring an initiative to rescind AB32, which calls for a California cap-and-trade scheme. I hope this will help them make the case that the surface record is an unreliable mess, and any cap-and-trade bill would be based on faulty data.
Congratulations to you, Joe and E.M. Smith for a very interesting document.
I am pleased to hear that you are working on a rebuttal to the Menne paper. It seems to be more of a cover-their-(you know what) paper.

January 26, 2010 9:44 pm

Good work!
One relatively minor criticism: You have only one bookmark in the PDF, for “CASE 4: CANADA’S WEATHER NETWORK”. A fully-fleshed out set of bookmarks based on the TOC (and clickable links in the TOC itself) would make it much more reference-friendly.

Gary Crough
January 26, 2010 9:48 pm

Thank you Joseph D’Aleo and Anthony Watts for all this good work.
From day 1 it seemed to me that a bunch of weather stations was a poor means of determining global temperature. They did not cover the whole Earth, omitting whole oceans and focusing on urban locations. Still, prior to the satellite-based measurements they were the best proxy for global temperature, and if used just to measure trends they could have been a very good proxy if:
· Measurements were taken from the same set of global positions each year
· Adjustments for creeping urbanization, equipment upgrades and equipment movements were made honestly
Sadly, neither condition was met.
I ignored land-based climate data in favor of RSS and UAH data. But evaluating claims about the “warmest decade” in 150 years requires one to look into surface station data. I knew it would be a mess and researchers having to make close decisions would be biased towards global warming … if only to make their jobs seem more important. But this report (and others) makes this look more like fraud than simple bias.
Original sources claim they sent data into NOAA (or the Hadley CRU) and it was simply not used. If so, it should be possible for GISS etc. to re-do their calculations with all the data. Why not simply demand they do so? What excuse is provided for omitting existing data?
Alternatively, GISS could run their calculations using only the subset of stations (1500 ?) used in the latest calculations. Seems like a simple request since GISS has both the software and the data. This is the less attractive alternative as the existing stations are biased towards heat-island locations.
I have a question: Is it possible for a 3rd party to re-calculate the global temperature trend using data that was reported to NOAA but not used? If so, the output of such a project would trump anything done by GISS or Hadley CRU. As I understand it GISS is now responding to FOI requests (rather than face a court trial) and I would expect NOAA to do the same? If so, the software and data to create a much more honest climate trend should be in the public domain?

John F. Hultquist
January 26, 2010 9:49 pm

Question, Anthony. Is the file for this document correctable and do you want to know of things? I was doing a real quick look-thru and found this:
page 62, 4th line under NONSENSE! there is a (the) that needs to be removed.
If you want to know of these please say so and I will make a list if I find any more – and I will read for them as well as reading for content. I’m impressed with the first look at the first 62 pages.
I also bought the paperback of Climategate and advise others to do so but I have read even less of it. Of course I’ve read a lot of both of these as these stories played out over the last weeks and months. It is great to see this stuff in one place and spruced up, though. Thanks to all.
REPLY: We plan to keep this as a living document, with version numbers as we update, so yes such things are welcome. We are depending on all of you for our “peer review”. Check your inbox for an email, send there. – A

John F. Hultquist
January 26, 2010 9:59 pm

Ric Werme (21:36:17) : authorship credit
Not to try to refute your notion, but a book is known by its authors and not by all the contributors or even the main ones. Consider the King James Version of the Bible. Did King James write any of it?

Dave F
January 26, 2010 10:02 pm

Getting ready to download this. I won’t get to read it for a while. At least not until the semester ends. I have too much reading to do before then to commit to 100 pages, but I am going to slowly leaf through in between cram sessions. If I find anything wrong, I will let you know, per your comment to John F. Hultquist (21:49:39) :, but it may be a month or two later than you hear from others.
You know, unless I get hooked and decide to skip an assignment to finish your paper. 🙂

geo
January 26, 2010 10:02 pm

Wheee!
“Instrumental temperature data for the pre-satellite era (1850-1980) have been so widely, systematically, and unidirectionally tampered with that it cannot be credibly asserted there has been any significant “global warming” in the 20th century.”
I’d like to think I’ve done my part in my own small way for surfacestations.org, mostly in the upper Midwest, but that one is a hard one to swallow. Almost as hard as “MWP? What MWP?” from the other side. Overstated? Yes, I can believe that easily. “Cannot be credibly asserted”? A close reading of your argument is as far as I can go right now.
I look forward to reading the report carefully to see if I am convinced this is the case. The rest of the “Summary for Policymakers” seems to me at least easily defensible from what I know up to now, but #1 is going to take some convincing.

rbateman
January 26, 2010 10:03 pm

I shall read this, and hopefully I can get some ideas on making the most of the horrid mess my poor town’s temperature records are in.
And are they ever in a mess.
Thank you E.M. Smith, Joe D’Aleo and Anthony Watts.

jorgekafkazar
January 26, 2010 10:05 pm

[snip – for the same reason I don’t allow the Hitler parody videos here]

Norm in Calgary
January 26, 2010 10:07 pm

Glad to see that there appear to be many Canadians on this blog. Is there anything comparable to the surface stations report on how Canadian surface stations are sited?

D. King
January 26, 2010 10:08 pm

It is mind-numbing the level of deception that went into this. I don’t know how you guys stayed sane. Your methodical and relentless unraveling of this should get you a medal. I suspect that we will see other, more sophisticated methods of manipulating the sensors themselves (sea ice).
Good work gentlemen.

Doug in Seattle
January 26, 2010 10:10 pm

Thanks for your answer up above.
As for this paper today, great summary of the issues. It provides a much better background for the station drop out argument and the UHI presentation is well laid out in all its gruesome details. Nice final bit too using McIntyre’s “Hide it” piece.

January 26, 2010 10:21 pm

Thank you, thank you, thank you!
Keeping the spotlight of public attention on the climategate iceberg is probably the most kind and loving action you could take on behalf of everyone living on planet Earth today.
With deep gratitude,
Oliver K. Manuel

Hilary Ostrov (aka hro001)
January 26, 2010 10:25 pm

Have just been skimming and this is a delight to read … First Mosher’s post and now this … how’s a part-time blogger ever to keep up, eh?!
This has been a *very* good day. First there was Corcoran’s piece on Andrew Weaver jumping ship (Weaver must be feeling really silly about his attempt to bolster the CRU crew when he was getting media mileage from the claim that his office had been broken into – without disclosing that there had been a UVic-wide alert regarding potential theft).
Now the U.K. Times reports that John Beddington, U.K.’s chief scientific advisor has stated:
There is fundamental uncertainty in climate change, science tsar says
“The impact of global warming has been exaggerated by some scientists and there is an urgent need for more honest disclosure of the uncertainty of predictions about the rate of climate change, according to the Government’s chief scientific adviser. John Beddington was speaking to The Times in the wake of an admission by the Intergovernmental Panel on Climate Change (IPCC) that it grossly overstated the rate at which Himalayan glaciers were receding. Professor Beddington said that climate scientists should be less hostile to sceptics who questioned man-made global warming. He condemned scientists who refused to publish the data underpinning their reports. He said that public confidence in climate science would be improved if there were more openness about its uncertainties, even if that meant admitting that sceptics had been right on some hotly-disputed issues.”
http://www.timesonline.co.uk/tol/news/environment/article7003622.ece

richard verney
January 26, 2010 10:37 pm

Anthony.
I have quickly read your paper (which has obviously taken much work) with interest and there are many good points made. Hopefully, a copy will be submitted to the UK Parliamentary Committee which is looking into some of the issues arising from Climategate.
I don’t find Menne’s paper a surprise. If one is looking for a trend, it does not matter that station A is pure whereas station B is contaminated by noise, provided that the contamination of station B remains constant throughout the period over which the trend is being examined. In countries such as the US, most urban development/growth predates the period considered by Menne, and hence when looking for temperature trends (rather than absolute accuracy in the temperature measurement) during that period, one would not expect to see substantial differences between well and badly sited stations, or between urban and rural stations. Materially, his paper does not deal with station drop-out (which appears to have resulted in a warming bias), nor does it deal with homogenising the pre-1960 temperatures downwards. As such, it does not deal with the major flaws in the temperature records which your recent paper deals with.
Since global warming is not a global problem (the amount of warming and its effect are often dictated by microclimatic conditions/sensitivities), I would have thought that rather than seeking to create some artificial global temperature record, it would be more useful to compile individual records for each country (and if the country is large or has special and significant geological/ecological features then to compile a number of records as appropriate). When seeking to compile such a record, one would seek to identify station data that requires no form of adjustment/manipulation so that the record can be compiled exclusively on raw data (wherever possible). No doubt, the use of only well sited rural stations would give the most accurate temperature record.
An advantage of compiling temperature records in this manner is that one could more easily see whether various areas of the globe (northern or southern hemisphere, equatorial or polar) are warming at different rates, which would give further insight into whether the warming is anthropogenic.

Deadman
January 26, 2010 10:43 pm

I thank you for your exhaustive work.
If you would like a comprehensive list of the pardonable typographical slips, grammatical errors and syntactical infelicities, please send me the appropriate e-mail address.

George Turner
January 26, 2010 10:49 pm

Great work!
I found it interesting that most of the automated reporting stations for aviation only have to be accurate to plus or minus 0.9 F, and may be biased. If I were setting up an airport thermometer with a bias, I’d be sure to make it read high, because reading low could be dangerous in terms of aircraft performance on takeoff (density altitude and such). Then we’d have an aircraft that shoots off the end of the runway, clipping trees in half, and a pilot who climbs out and starts measuring the trees’ growth rings to check airport thermometer calibration, since tree rings can be read to a hundredth or so… 😉
Then again, maybe we should go with Briffa’s post-1960 data!

George Turner
January 26, 2010 10:52 pm

Oh, my point was that it might be interesting to talk to the people who set up the airport reporting stations to see if they make sure the thermometer reads accurately or high, but never low, for safety reasons, despite what the published specification says.

crosspatch
January 26, 2010 10:54 pm

Another thing: SteveM noted something about missing months in Siberia. Apparently many of the stations NOAA reported as having missing values for various months actually had monthly values that were available from other sources. I believe that was reported on this site as well. The discussion came up, if you remember, at about the time that there was a very odd anomaly in Russia that turned out to be a month where the previous month’s value was repeated in the NOAA data. A quick look at sources available online showed the correct value was available for the same station. Further checking showed that months “missing” values from NOAA’s data set were actually available.
This is important because a “missing” value allows a new “fill” value to be calculated to take its place. This “fill” value is never replaced even if a correct real observation is discovered. This, in turn, is important because these “fill” values are based on an average over time. That causes positive feedback and an even greater warming bias. If this month is a warm month, then all other fill values in past years for this month get bumped up a little, because this month’s warm observation increased the average over time. That is one reason why you can see current temperatures influencing the past.
The more of these fill values you have, the more you get the opportunity to influence the past. I find it odd that these calculated fill values are never replaced with actual observational data when it is discovered.
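
(A toy illustration of the mechanism described above, a fill value defined as the mean of the observed months, showing how one new warm observation shifts the infilled past. This is just the arithmetic of the scheme as described, not NOAA’s actual infilling code.)

# Toy example of the fill-value feedback described above: the missing
# January is filled with the mean of observed Januaries, so a warm new
# January warms the recomputed fill for the past year too.
januaries = [-5.0, -6.2, -4.8, None, -5.5]  # year 4 is missing

def fill_value(series):
    observed = [t for t in series if t is not None]
    return sum(observed) / len(observed)

print(fill_value(januaries))   # -5.375 fills the missing year

januaries.append(-2.0)         # a warm new January arrives
print(fill_value(januaries))   # recomputed fill warms to -4.7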

Paul Metcalf
January 26, 2010 10:58 pm

Thanks for all of this. As a returning student to the halls of higher education I have already singled this subject out as one of my go-to subjects for those dreaded term papers.
I’ve already managed to garner sideways glances from teachers at Butte and CSU.
After scanning this I can see that my good times are just beginning.
Thanks for all the hard work.

kwik
January 26, 2010 11:03 pm

I have read it. It’s really hard to accept, but what can you say?
One thing I think would be enlightening: a flowchart of the data path between the different organisations, with small explanatory boxes showing where data adjustments are done.
That would make it easier for people who have never heard of this before to understand where the “adjustments” happen, and who is using common adjusted data.
REPLY: I’ve been wanting to do a flowchart for quite awhile. -A

Konrad
January 26, 2010 11:11 pm

O/T
I just missed out on getting in to see Lord Monckton in Sydney. The venue was filled to capacity and a large number of people had to be turned away with Lord Monckton’s apologies. Interesting times…

January 26, 2010 11:18 pm

D. King (22:08:44): Your methodical and relentless unraveling of this should get you a medal.
The Nobel Peace Prize, for starters. NASA and NOAA have some explaining to do. I demand a full Congressional investigation. We cannot trust our scientific institutions any longer. Without immediate Congressional action to repair them, those agencies are useless and should be defunded.
On a related note, tomorrow (Jan 27, 2010) the Securities and Exchange Commission (SEC) plans to issue a new rule requiring corporations to explain how they are “alleviating global warming.” The “interpretive release” will have the force of law, with no hearings, no testimony taken, and no statutory authority to exercise jurisdiction over global warming “abatement.”
The SEC failed (with disastrous consequences for the entire world) to rein in investment banks and their credit default swaps that undermined the financial sector worldwide in 2008. The SEC failed to respond to repeated warnings about the Madoff Ponzi scheme, resulting in $65 billion in losses to investors. Now the SEC has turned their defective attentions to the global warming hoax, not with the intention to protect investors from the fraud, but just the opposite — to force corporations to further the hoax.
As a result of the SEC’s ill-considered action tomorrow, corporations will be subject to civil lawsuits and criminal penalties if they do NOT participate in the greatest hoax in history.
http://tinyurl.com/ycnkfm4

Nick Stokes
January 26, 2010 11:21 pm

A big compendium of nonsense here. I’ll try to make a start.

More than 6000 stations were active in the mid-1970s. 1500 or less are in use today.

This just propagates a misunderstanding of what GHCN is. 6000 stations were not active (for GHCN) in the 1970s. GHCN was a historical climatology project of the 1990s. V1 came out in 1992, V2 in 1997. As part of that, they collected a large number of archives, sifted through them, and over time put historic data from a very large number in their archives.
After 1997, it was decided to continue to update the archive. But it wasn’t possible to continue to regularly update monthly all the sites that had provided batches of historic data to the original collection. That’s a different kind of operation. They could only, on a regular basis, maintain a smaller number. This notion of a vast swag of sites being discontinued about 1992 is very misleading. 1992 is about when regular reporting started.

It is only when data from the more southerly, warmer locations is used in the interpolation to the vacant grid boxes that an artificial warming is introduced

A constantly repeated, way-off meme. Firstly, there’s little quantification of such a drift. But the main thing is, all the GMST calcs are done with anomaly data. Station temps are measured with respect to their own mean over a period, or at most their own mean supplemented with some nearby station data. It doesn’t matter if stations are replaced with other stations of higher mean.
What could matter is if stations are replaced by others with a higher warming trend. And that’s where this argument gets really silly. The stations with the higher warming trend are at higher latitudes. Shifting stations away from the poles (to whatever extent it may have happened) would impart a cooling bias to the trend, not a warming one.

Interestingly, the very same stations that have been deleted from the world climate network were retained for computing the average-temperature base periods

Misunderstanding of how anomalies are actually calculated underlies a lot of the argument about station shifts. They do not calculate a global average and then subtract it. The basic method is the Climate Anomaly Method, which NOAA uses. Each station has an anomaly calculated with respect to its own average.
Gistemp uses the same method, but applied to grid points (Sec 4.2), rather than individual stations. Again, this is very little affected by any general drift in stations – the grid points don’t move.
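
(A minimal sketch of that per-station anomaly calculation, with made-up numbers and the 1951-1980 GISS reference window; illustrative only, not NOAA’s or GISS’s actual code.)

# Climate Anomaly Method idea, as described above: each station's anomaly
# is its value minus that station's own mean over a reference period.
# Numbers are made up for illustration.
station = {1951: 14.2, 1960: 14.5, 1975: 14.4, 1980: 14.6, 1998: 15.1}

base = [t for yr, t in station.items() if 1951 <= yr <= 1980]
base_mean = sum(base) / len(base)   # this station's own 1951-1980 mean

anomalies = {yr: t - base_mean for yr, t in station.items()}
print(anomalies)   # 1998 comes out about +0.7 relative to the station's own mean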

January 26, 2010 11:31 pm

Great work. I have a particular gripe with Salinger and his manipulation of NZ temperature data. He is a witness for Mighty River Power in the Called-In Turitea wind farm hearing in Palmerston North. The AGW lobby are out to sacrifice this city to their global warming god (note I used a small g). Salinger is totally discredited and has been outed here.
http://www.palmerston-north.info

UK Sceptic
January 26, 2010 11:31 pm

O/T but worthy of a mention. The warmistas are getting very, very desperate.
http://www.thestar.com.my/news/story.asp?file=/2010/1/27/nation/20100127092338&sec=nation

tallbloke
January 26, 2010 11:37 pm

I’ve been waiting impatiently for this. Well done to Anthony, Joe, E. Michael Smith and Willis and others for exposing the climate liars and thank you for your dedication to the search for truth.

Michael
January 26, 2010 11:39 pm

I just can’t stop ROTFLMAO at everything that’s been going on lately. I’m not posting much these days; everyone and their grandmother is jumping in on exposing the fraud, and all the fraud being exposed is just completely overwhelming me. Thanks, it takes a lot of pressure off me to do the job, although I do enjoy it so.
Did you hear? Everyone is turning the Obama State of Confusion Address into a drinking game. Every time he says all the typical blather, somebody gets to drink. How drunk are we going to get every time BO says “Green Jobs” and “Climate Change”? Everyone is just going to give a big belly laugh every time BO lies this time.

tallbloke
January 26, 2010 11:42 pm

Nick Stokes (23:21:59) :
“It doesn’t matter”.

I think you’ll find it does. Quite soon.

Konrad
January 26, 2010 11:49 pm

Quite a comprehensive study. The section on GHCN adjustments is eye-opening; Darwin Zero was just the tip of the iceberg.
A small note – in the summary for the Argo buoy section, a quotation mark appears out of place in the last paragraph, p59 (unless I am looking at an older version).

Patrick Davis
January 26, 2010 11:57 pm

Interesting reading indeed and thanks to all contributors. Apart from poor device siting, if this doesn’t represent unidirectional manipulation of temperature data, I don’t know what does.
“Konrad (23:11:16) :
O/T
I just missed out on getting in to see Lord Monckton in Sydney. The venue was filled to capacity and a large number of people had to be turned away with Lord Monckton’s apologies. Interesting times…”
Indeed. And due to other commitments (today, the 27th, for me means I qualify for Australian citizenship – been a hard road getting here) I was also unable to attend; however, a colleague of mine is there for Lord Monckton’s presentation tonight in Sydney. We’ll have an interesting smoko tomorrow.
When my finances are healthier I’ll certainly contribute to the tip jars of Messrs Smith and Watts.
Well done people.

supercritical
January 27, 2010 12:10 am

Nick Stokes,
I really take exception to your over-written post. Rhetoric that appeals to sensibilities raises the suspicion that you are attempting to obscure the sense of your thoughts.
While such a style may be essential to writers of romantic fiction and certain types of politician, it has no place in discussing matters of scientific accuracy, because it obscures more than clarifies.
If you have no natural ability to write concisely, please could you repost using bullet-points to state your thoughts?

DirkH
January 27, 2010 12:17 am

“Nick Stokes (23:21:59) :
[…]
After 1997, it was decided to continue to update the archive. But it wasn’t possible to continue to regularly update monthly all the sites that had provided batches of historic data to the original collection. That’s a different kind of operation. They could only, on a regular basis, maintain a smaller number”
yeah, because as we all know, all things climate science are so UNDERFUNDED they couldn’t pay a grad student… because the darn supercomputer time is so expensive… puuulease Mr. Politician, can I have a million more????

January 27, 2010 12:17 am

“Of course there will be those who say “but it is not peer reviewed” as some scientific papers are.”
So what? Neither are many of the ‘peer reviewed’ reports documented in the IPCC reports.

Geoff Keane
January 27, 2010 12:17 am

Great work.
small quibble – page 33, I doubt that the population increased from 1.5B to 6.7B in 2010…

E.M.Smith
Editor
January 27, 2010 12:45 am

To all who have given thanks and compliments, my humble gratitude.
Per “authorship”: There are sufficient references to “source” to suit me. A collection of poems is authored by the editor and collector. My “poems” are suitably designated. (Besides, with a name like “Smith” it’s hard to get worked up about attribution… Just try “Mr. Smith, your table is ready!” in a restaurant… 1/4 of the place will rush the table 😉 So I leave such choices to Anthony and Joseph. And for folks who’ve spent months wondering what “E.M.” is, you know now that I’m “Michael” when informal; E. Michael Smith when being more formal. E.M.Smith when something will be typed a hundred times 😉
D. King (22:08:44) : It is mind-numbing the level of deception that went into this. I don’t know how you guys stayed sane.

How does that line go? “I think you may be jumping to conclusions from facts not yet in evidence.” 😉
But yes, at times it was ‘a challenge’ not to throw things at the TV… Especially when some ‘talking head’ would be saying one thing and the data were saying another…

Your methodical and relentless unraveling of this should get you a medal. […] Good work gentlemen.

Medal enough… “Public Review”.
When you are working without a safety net, in full public view, all hands dealt face up, that is “Public Review”. It’s a bit more draining than “peer review”, but a lot more honest and a lot more effective. I think it is the way of the future. (And, oddly, the way Science was done in the past… the circle turns…)
Per the ‘relentless’ part: Well, I must admit that the last 2 weeks or so I’ve taken a rest. Then again, I think it was a suitable time. Today I finally got all my “stuff” onto a newer bigger box. Only 10 years out of date 8-0 now! Honestly. Has Windows 2000 Pro on it. Still need to find “Office” somewhere (it has “viewer” versions). I’ve now got a 40 GB 2nd disk in it with Linux dual boot and the GIStemp code; so I have room to do more analysis runs. (I’d filled up the old machine 10 GB disk – GIStemp does not take much CPU, but it’s a real disk hog…)
Already found one thing. The temperature data for Madagascar end at an odd time. While the other places get killed off about 1990, Madagascar struggles on to about 2005 or 2006 IIRC, then dies. I’m pondering using it as the “poster child posting” for the rebuttal to the “it was a batch in 1990” claim… (Wonder if the anomaly maps still show Madagascar … )
So having taken a week to play with new hardware and get the software all installed on a new home and get everything working again, I guess it’s time to start back in. So maybe not completely ‘relentless’… just mostly 😉
E. Michael Smith

George Turner
January 27, 2010 12:53 am

Nick Stokes,
You talk of trends and the replacement of stations.
So if you’re just looking at trends and discarding stations, how do you calculate an average global temperature, or compare one year to another, based on trends? Do you say that since the cooler stations that were dropped were last seen trending with slope M, you can assume the trend (which is probably fractal noise imposed on a complex quasi-periodic signal) can be straight-lined to infinity? Do you say that two stations that have different temperatures but were last seen with the same trend must always have the same trend? And do you adjust a rural grid point between two remaining urban stations with the historic lower temperatures of the missing point, or do you make the missing point track the average of the two urban points? Can you just wait till two temperature trends happen to cross and then lock things in forever?
In short, how can you claim to be measuring temperatures in N places if you’re not actually taking the measurements, just extrapolating to make it look like you’re taking more measurements than you actually are? If such a technique is accepted, then why did we ever bother with taking so many temperature measurements in the first place?
Finally, we’ve seen what kind of computer code and comments accompany such calculations. Can you provide us with source code so we can see if the original temperature offsets are being carried in or whether it just calculates a weighted average of remaining stations?

Konrad
January 27, 2010 12:57 am

Patrick Davis (23:57:38)
If you are using the term “smoko” then I’m guessing the citizenship ceremony is a mere formality 🙂

Scott
January 27, 2010 12:59 am

I have sent this to the senators that I have email addresses for here in Australia. I suggest everyone does the same in their own countries; if we saturate them with this information, they will not have much choice but to do something about it.
Scott

Fred Kupferroth
January 27, 2010 1:07 am

Isn’t it all Garbage In -> Gorebage Out ?

January 27, 2010 1:38 am

Great paper, Anthony, Joe, and E.M.
My poor high school reunion web site has had more hits on the Illinois and Wisconsin USHCN charts than it ever got from my classmates.
http://www.rockyhigh66.org/stuff/USHCN_revisions.htm
http://www.rockyhigh66.org/stuff/USHCN_revisions_wisconsin.htm

Andrew W
January 27, 2010 2:13 am

Well, you can bin the Case 2 study in your compendium; members of the NZ CSC have admitted that the NIWA adjustments, which were a result of station site changes, are justified. And other stations, where the data was not adjusted because there was no reason for adjustment, show the same warming trend.

Nick Stokes
January 27, 2010 2:19 am

George Turner (00:53:27) :
So if you’re just looking at trends and discarding stations, how do you calculate an average global temperature, or compare one year to another, based on trends?

Well, first, you don’t usually calculate an average global temperature, for good reason. You calculate an anomaly. That’s what all those famous temp plots show. For each station, the anomaly is basically the difference, for each month say, between the current value and the mean value for some reference period (1951-1980 for GISS). That’s what the Climate Anomaly Method returns – GISS uses a slightly different method which returns a gridpoint anomaly, but anyway, it’s a local value.
Then in any month, you average over a region (globe, NH etc) the anomalies in your current dataset. It’s a weighted average to account for area etc. It’s probably an average of grid values, but these in turn are obtained by averaging underlying station values.
Then the trend comes out as the trend of these monthly averages (annually averaged if necessary).
The point of the anomaly method is that you don’t need to worry a lot about whether one year’s stations include a few that are warmer or cooler relative to another year, because you are looking at deviation from station means. GISS explains this here. Scroll down to “Anomalies and Absolute Temperatures”, and follow the link to SAT.
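
(A sketch of that area-weighting step, using the common cos(latitude) weight for equal-angle grid cells. The weighting scheme here is an assumption for illustration; the real GISS gridding differs in detail.)

import math

# Toy gridded anomalies for one month: (latitude in degrees, anomaly in C).
# Equal-angle cells shrink toward the poles, so each cell is weighted by
# cos(latitude). Illustrative values only.
cells = [(75.0, 1.2), (45.0, 0.4), (15.0, 0.1), (-15.0, 0.2)]

weighted = sum(math.cos(math.radians(lat)) * anom for lat, anom in cells)
weights = sum(math.cos(math.radians(lat)) for lat, anom in cells)
print(round(weighted / weights, 3))   # area-weighted mean anomaly for the region
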
supercritical (00:10:31) :
I didn’t use bullet points – I discussed three quotes from the paper, as blockquotes. These didn’t show quite as prominently as I expected, but that’s the structure of what I wrote.

Patrick Davis
January 27, 2010 2:26 am

In Australia today, climate change minister Penny W(R)ong announced Australia’s emission cut commitments: 5%, in line with every other country present at Copenhagen.
And as usual, Lord Monckton’s arrival in Sydney today wasn’t covered in the MSM, but apparently we had our warmest night in 4 years last night. Don’t know about that; it was certainly humid, ~87%.
“Konrad (00:57:48) :
If you are using the term “smoko” then I’m guessing the citizenship ceromony is a mere formality :)”
I’ve been downunder for quite some time now and “antipodean speak” has rubbed off a bit 😉

brc
January 27, 2010 2:31 am

Patrick Davis (23:57:38)
Congrats on becoming a citizen of the lucky country. You can look forward to a lifetime of smokos with a hot cuppa and a natter with some mates. I assume you’ll use your new vote wisely at the next election!

E.M.Smith
Editor
January 27, 2010 2:36 am

Nick Stokes (23:21:59) : A big compendium of nonsense here. I’ll try to make a start.
Thought you ‘tried to make a start’ back on the other posting where we already hashed this over.

“More than 6000 stations were active in the mid-1970s. 1500 or less are in use today.”
This just propagates a misunderstanding of what GHCN is. 6000 stations were not active (for GHCN) in the 1970s. GHCN was a historical climatology project of the 1990s. V1 came out in 1992, V2 in 1997. As part of that, they collected a large number of archives, sifted through them, and over time put historic data from a very large number in their archives.

Those stations were, in fact, active in 1970 – 5997 of them in that year. The exact date the data get into GHCN is not particularly important. And, BTW, data neatly archived but unavailable is functionally useless. (Like that warehouse scene in Raiders of the Lost Ark…) I’d hope you are not asserting that GHCN is only usable as an archival location…

After 1997, it was decided to continue to update the archive. But it wasn’t possible to continue to regularly update monthly all the sites that had provided batches of historic data to the original collection. That’s a different kind of operation. They could only, on a regular basis, maintain a smaller number. This notion of a vast swag of sites being discontinued about 1992 is very misleading. 1992 is about when regular reporting started.

So you are saying that the data set is 1/2 obsolete archive and 1/4 usable data (and 1/4 misc who knows what… like Madagascar that gets sort of updated sometimes… maybe… until 2005 or so). OK, fine with me. Means that ALL the work based on it is based on a horridly botched data set design. Sure you want to “go there”? Broken by design? Obsolete archive?
“It is only when data from the more southerly, warmer locations is used in the interpolation to the vacant grid boxes that an artificial warming is introduced”
A constantly repeated, way-off meme.
Nope. An accurate statement of what the data say.
Firstly, there’s little quantification of such a drift.
Try looking at the data. I did. It’s easy to see and well characterized:
http://chiefio.wordpress.com/2009/11/07/gistemp-ghcn-selection-bias-measured-0-6-c/
http://chiefio.wordpress.com/2009/08/13/gistemp-quartiles-of-age-bolus-of-heat/
(I’m especially fond of the “Bonus Round” top 10% table at the very bottom. The more stabilized the thermometer set, the less drift of the average temperature. In that set of ‘over 100 years in the same place’ stations, “Global Warming” is effectively non-existent. I’d love to know how the globe can be warming when the best, longest-lived thermometers are not, but only the new ones at tropical airports are…)
The first of those links looks, in particular, at the impact of leaving out of GHCN the USHCN stations. GIStemp provided a convenient vehicle to do this since it uses both, but neatly dropped the USHCN stations on the ground from May 2005 to November 2009 (when they finally put in the USHCN.v2 data). So we can MEASURE the impact. And it is 0.6 C for those stations. That is the warming bias in the base data from those locations being left out of GHCN.
BTW, this also illustrates another silly thing you keep asserting. Those stations that are in the USHCN and were dropped from the GHCN were not due to some archival unavailability of the data or similar lack of reporting. NOAA / NCDC produce both data sets. They would have to move the data all the way from their right pocket into their left… It was a decision not an unfortunate accident of reporting circumstances. So asserting otherwise is, at best, disingenuous.
The second of those links lets you see directly how much the different groups of records carry a warming signal. All the warming is in short lived records. I have a whole series of “by latitude” reports as well. They clearly show the migration of the average thermometer location toward the equator.
Though I must grant you that the “southerly” reference is a bit broad. Yes, most thermometers drift south, but in Australia we found them drifting north… An early look here:
http://chiefio.wordpress.com/2009/08/17/thermometer-years-by-latitude-warm-globe/
just shows the southern drift. It was later in the detailed ‘by country’ and ‘by continent’ looks that I saw the more subtle patterns:
“Most” of them can be reached through this link:
http://chiefio.wordpress.com/2009/11/03/ghcn-the-global-analysis/
though the full list is probably here:
http://chiefio.wordpress.com/category/ncdc-ghcn-issues/
And here is that Australia trend:
http://chiefio.wordpress.com/2009/10/29/ghcn-pacific-basin-lies-statistics-and-australia/
Now, for all the folks who look at these (the results of lots of hours of computer time, full of charts of numbers), please remember that Nick thinks these are “little quantification”…
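
(For readers who want to try the same kind of cut themselves, here is a rough sketch of the comparison: split stations by record length and compare decadal means. The file name and the station_id/year/temp layout are assumptions for illustration, not the actual GHCN format.)

import pandas as pd

# Rough sketch of the long-lived vs. all-stations comparison described
# above. Assumes a hypothetical file of annual station means with
# columns station_id, year, temp.
df = pd.read_csv("ghcn_annual_means.csv")

span = df.groupby("station_id")["year"].agg(lambda y: y.max() - y.min())
long_lived = span[span >= 100].index   # stations with 100+ year records

df["decade"] = (df["year"] // 10) * 10
for label, sub in [("long-lived only", df[df["station_id"].isin(long_lived)]),
                   ("all stations", df)]:
    print(label)
    print(sub.groupby("decade")["temp"].mean().round(2))
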
But the main thing is, all the GMST calcs are done with anomaly data. Station temps are measured with respect to their own mean over a period, or at most their own mean supplemented with some nearby station data. It doesn’t matter if stations are replaced with other stations of higher mean.
And this, frankly, is bull pucky. Station temps are run through a meat grinder of processes long before the “anomaly map” is calculated in STEP3. We have UHI “corrections” that go the wrong way in about 1/4 of the cases. We have lots and lots of “in-fill” and “homogenizing” and who knows what, then, at the very end, the station data is compared to an average of a bunch of other stations to compute an anomaly, NOT just to itself. I posted the code comments on the other thread (I’ll not put all of them here, too, folks who care can go see what the code says it does here:)
http://wattsupwiththat.com/2010/01/22/american-thinker-on-cru-giss-and-climategate
down near the very bottom (at least, right now).
What could matter is if stations are replaced by others with a higher warming trend.
Say, like Airports?
http://chiefio.wordpress.com/2009/12/08/ncdc-ghcn-airports-by-year-by-latitude/
Where we find a persistent increase over time in the percentage of thermometers sited at what are now airports. Like, oh, 92% in the USA. Good luck finding a ‘rural reference station’ in that lot…
And that’s where this argument gets really silly. The stations with the higher warming trend are at higher latitudes. Shifting stations away from the poles (to whatever extent it may have happened) would impart a cooling bias to the trend, not a warming one.
Bald-faced assertion with NOTHING in the way of data to back it up. All hypothesis, no cattle.
So: No, that’s just where you are ‘sucking your own exhaust’ a bit too much. If you look at the actual DATA from Canada, you find it cooling. It’s only when you compare it to thermometers from different places over time that the “north” is warming. Same thing in New Zealand. No warming if you use the stable set. The warming only comes in because one very southerly island is in the baseline (AND used to fill in grid boxes… I’ve run the code…) but taken out recently (so grid boxes must look elsewhere for ‘in fill’ and elsewhere is airports closer to the equator…) IIRC, Campbell Island, about 68 S. Oh, and in Canada they use ONE thermometer in “The Garden Spot of the Arctic” to get that warming trend north of 65 N.
“Interestingly, the very same stations that have been deleted from the world climate network were retained for computing the average-temperature base periods”

Misunderstanding of how anomalies are actually calculated underlies a lot of the argument about station shifts.

Yes, they do. And almost universally from the “warmers” side, where folks assert anomalies are calculated in some nice neat “self to self” same-station way when they are not. The code averages baskets of thermometers together (and different baskets at different time intervals) and compares a station to the baskets. Read The Code. An excerpt from comments in the other thread:
from:
http://chiefio.wordpress.com/2009/03/07/gistemp-step345_tosbbxgrid/
C**** The spatial averaging is done as follows:
C**** Stations within RCRIT km of the grid point P contribute
C**** to the mean at P with weight 1.- d/1200, (d = distance
C**** between station and grid point in km). To remove the station
C**** bias, station data are shifted before combining them with the
C**** current mean. The shift is such that the means over the time
C**** period they have in common remains unchanged (individually
C**** for each month). If that common period is less than 20(NCRIT)
C**** years, the station is disregarded. To decrease that chance,
C**** stations are combined successively in order of the length of
C**** their time record. A final shift then reverses the mean shift
C**** OR (to get anomalies) causes the 1951-1980 mean to become
C**** zero for each month.
C****
C**** Regional means are computed similarly except that the weight
C**** of a grid box with valid data is set to its area.
C**** Separate programs were written to combine regional data in
C**** the same way, but using the weights saved on unit 11.

So not exactly like you’ve been asserting. LOTS of weighting going on.
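
(To make those comments concrete, here is a stripped-down sketch of just the two steps they describe: the 1 - d/1200 distance weight and the bias shift over the common period. This simplifies the real GIStemp logic considerably: no NCRIT minimum-overlap check, annual values instead of monthly ones, and it is not the actual Fortran.)

# Stripped-down sketch of the combining step those comments describe:
# a nearby station contributes with weight 1 - d/1200 (d in km), after
# being shifted so its mean over the common period matches the running
# mean at the grid point.
def combine(current, current_weight, station, distance_km):
    overlap = [yr for yr in current if yr in station]
    if not overlap:
        return current, current_weight
    # Shift so the two records agree, on average, over their common years.
    shift = (sum(current[yr] for yr in overlap) / len(overlap)
             - sum(station[yr] for yr in overlap) / len(overlap))
    weight = max(0.0, 1.0 - distance_km / 1200.0)
    combined = dict(current)
    for yr, temp in station.items():
        shifted = temp + shift
        if yr in combined:
            combined[yr] = ((current_weight * combined[yr] + weight * shifted)
                            / (current_weight + weight))
        else:
            combined[yr] = shifted
    return combined, current_weight + weight

grid_series = {1950: 13.0, 1960: 13.2}          # record already at the grid point
nearby = {1950: 15.0, 1960: 15.4, 1970: 15.6}   # warmer station 600 km away
print(combine(grid_series, 1.0, nearby, 600.0))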

They do not calculate a global average and then subtract it. The basic method is the Climate Anomaly Method, which NOAA uses. Each station has an anomaly calculated with respect to its own average.

Flat out WRONG. The data from NOAA arrive as temperatures at GIStemp, not anomalies. An error you made in the other thread too.
In GIStemp, station data is carried AS station data through STEP2 (they do produce a couple of “zonal averages” along the way, but the temp data are carried forward); THEN that process noted above is applied. Notice that a basket of stations is averaged based on a scaling factor and then compared. But only after adjusting their mean and some other changes.

Gistemp uses the same method, but applied to grid points (Sec 4.2), rather than individual stations. Again, this is very little affected by any general drift in stations – the grid points don’t move.

BTW, many of those “grid boxes” have exactly NO stations in them and many have exactly ONE. Good luck with that whole “it’s a grid so individual station bias won’t matter” thing… ( 8000 boxes, 1500 stations… do the math…)
The anomalies are calculated in STEP3 (STEP4_5 just blends in a pre-fab sea anomaly map from HadCRUT). So GIStemp carries temperature data to the end, then makes an anomaly map out of it after most of the damage was already done to the temperature data. And does NOT do it by comparing that thermometer data to an earlier self.
Frankly, it is blatantly obvious that it can’t. The “record” is largely made up of disjoint segments of too few years to be usable if they did. Only 10% of it is over 100 years and a huge chunk of thermometers are less than 25 years. And with all of 1500 stations surviving, and many of THEM short lived, they would be hard pressed to find anything against which to compare. From an analysis of the “best” thermometers representing the top quartile (a bit over 3000 thermometers and about 1/2 the total data in the data set) we have a report that shows not many survive into the present DECADE (and we know more of them die off during that decade…):
This is a set of monthly averages of the temperature data, then the annual average, and finally the thermometer count. I’ve deleted most decades so you can focus on what matters:

DecadeAV: 1879
 1.8  2.7  5.8 10.4 14.7 18.8 20.8 20.1 16.7 12.1  6.6  2.6 11.1  575
DecadeAV: 1889
 0.4  2.0  5.3 10.7 15.4 19.0 21.1 20.3 17.2 12.0  6.5  2.7 11.1 1137
...
DecadeAV: 1959
 0.2  1.7  4.8 10.4 15.0 18.7 20.9 20.3 17.0 12.0  5.7  2.0 10.7 3179
DecadeAV: 1969
-0.6  1.0  4.9 10.4 14.8 18.6 20.8 20.1 16.7 12.0  6.1  1.0 10.5 3207
DecadeAV: 1979
...
DecadeAV: 2009
 1.8  2.8  7.0 12.0 16.3 20.2 22.6 22.1 18.3 12.7  7.0  2.1 12.1  304

That middle chunk with about 3000 is the “baseline”. Our present decade has 304 survivors.
That’s right. 304 for the whole world. The rest (1200) are all fairly short lived records and mostly at warm low latitude and low altitude locations.
So unless you want to say that you are somehow comparing those other 1200 to an average bucket, you have to accept that they are not being compared to much at all. They just are not long enough lived.
So, you pick it: Compared to a composite bucket (as the code claims) or not compared to anything at all and we’re just wasting our time talking about ‘anomalies’…
I apologize for the length and detail of this reply, but I have gone through all the code and all the data, and when folks just want to hand wave that away with “the anomaly will save us!”, well, let’s just say they really need to look at what is really DONE and not what they would like to imagine is done. We’ve had enough imagination applied to the data already…
Oh, and BTW, I did a benchmark on the anomaly. It DOES change when you leave out the thermometers GHCN left out. This is a crude benchmark in that the anomaly report is for the whole N. Hemisphere while the data are only changed in the USA. In theory, this means a 25 X uplift is needed to adjust for the area dilution ( 50% / 2% = 25 ). The anomalies change by 1/100, 2/100. Heck even some 4/100 C. Scaled for the small number of grid boxes of the hemisphere that are shifted, that implies about a 1/4 C to 1 C shift in the anomaly in those boxes…
http://chiefio.wordpress.com/2009/11/12/gistemp-witness-this-fully-armed-and-operational-anomaly-station/
So you can take your theoreticals and smoke ’em. I’ve run a benchmark with the actual GIStemp code on real data and the anomaly map changes. By a very significant amount. Now we’re just haggling over the price…
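
(Spelling out the area-dilution arithmetic in that benchmark, taking the stated figures at face value:)

# The scaling arithmetic from the benchmark above: the altered USHCN
# stations cover roughly 2% of the globe, the anomaly report covers the
# Northern Hemisphere (50% of the globe), so a hemispheric change
# understates the local change by about 25x.
uplift = 0.50 / 0.02   # = 25, the "50% / 2%" figure quoted above
for dT_hemisphere in (0.01, 0.02, 0.04):
    print(f"{dT_hemisphere:.2f} C hemispheric -> ~{dT_hemisphere * uplift:.2f} C in the affected boxes")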

Nick Stokes
January 27, 2010 2:39 am

George Turner (00:53:27) :
Forgot your last question. The Gistemp source code is here.
I didn’t say much about your discussion of local spatial averaging, because I’m not sure what you are saying it is. There is some local averaging with homogenisation in GISS (GHCN is different), but it doesn’t have the sort of effects you speak of. It’s still pretty much the gridding of individual station values. And spatial averaging as in USHCN’s Filnet has a small effect on the total region averaging.

Rhys Jaggar
January 27, 2010 2:40 am

Mr Watts/Dr D’Aleo
To say that your paper represents a ‘smoking gun’ in the refutation of AGW would be akin to saying that Omaha Beach was recaptured by one man and his dog………..
Many congratulations on synthesising a coherent, wide-ranging, global argument base which should, as a matter of principle, priority and moral rectitude, be urgent reading material for all Public Politicians, functionaries and policy makers globally.
It is not for me to make judgements on your paper, but I suspect that this synthesis will act as an urgent spur to global efforts both to return climate science to its rightful place within the pantheon of subjects for critical analysis, and to start to return meteorological measurement to its needed place within the warning systems, prediction mechanisms and public planning inputs that have been so shamefully abused in the past 20 years.
I would strongly suggest that you email a copy to the three leaders of the UK political parties, whose email addresses, publicly accessible on the internet, are:
browng@parliament.uk; camerond@parliament.uk; and cleggn@parliament.uk.
All three are in urgent need of education in this arena and it may be helpful in the weeks and months ahead for them and their colleagues to be insightfully stimulated at HOC to change their somewhat unscientific, incoherent and inaccurate positions expressed lucidly, if not with due attention to scientific fact, in the months and years recently past……
YF
Rhys Jaggar

Mark Fawcett
January 27, 2010 2:59 am

Anthony, Joe D’Aleo and E.M. Smith,
To paraphrase Churchill: “Clucking Bell!”.
Hope this gets the wider coverage it deserves; how refreshing to have real data, clearly presented and free from hyperbole.
Keep up the good work; I’m not entirely sure how you chaps are managing the work-rate at the moment – have you given up sleep entirely?
[Yes. ~dbs]
Cheers
Mark

E.M.Smith
Editor
January 27, 2010 3:29 am

Mark Fawcett (02:59:52) : Anthony, Joe D’Aleo and E.M. Smith,
To paraphrase Churchill: “Clucking Bell!”.

I always loved Churchill… My Mum lived in England during that time. Left at the end of WWII (with Dad & kid). I was raised with stories of the English Bulldog. (Guess where some of my ‘persistence’ comes from 😉 ) Though, thankfully, my Dad persuaded her to swap from cold baths to warm baths as we kids were not going to sea when we grew up… (Grandad was in HM merchant marine and his brother moved to Australia after a few voyages..)

Hope this gets the wider coverage it deserves; how refreshing to have real data, clearly presented and free from hyperbole.

My belief is that you ought to ask the data what they have to say, politely, then shut up and listen… You hear more truth that way…

Keep up the good work; I’m not entirely sure how you chaps are managing the work-rate at the moment – have you given up sleep entirely?

Well, I’m in California. It’s now 3:25 AM…
[Yes. ~dbs]
You too, eh? …
(Do the math. An hour of time is worth what? Coffee per pound is much less, tea even better. Both is best ;-0 But I do sleep sometimes. Today it will likely be from 4 AM until 15 after… Unless I do a new posting… )

Scott H.
January 27, 2010 3:33 am

I’ve been following the global warming/climate change controversy for many years now, having been always skeptical of the “science is settled” claims. My hat is off to so many that have worked so diligently to cut through the lies, manipulations and misinformation.
One would think that the evidence accruing to the skeptical camp is becoming overwhelming and that the long nightmare of environmental propaganda is ending.
I hope that the skeptics keep the pressure on and keep documenting the fraud that went into the claims of a network of Luddites and misanthropes. We must keep hammering the nails into the coffin of this beast so that it is laid to rest for good.
I’m a longtime reader of this website as well as many of the linked sites on the WUWT blogroll. I’ve never submitted a post on this site, but am compelled to do so this morning to congratulate all involved on this magnificent piece of work. It’s a BLOCKBUSTER.

jimbo
January 27, 2010 3:43 am

I’d like to state at the outset that I consider myself a lukewarmer, am convinced that the Hockey Stick has basically been proven to be an outright fraud, that Climategate shows much of current climate science to lack any credibility, and acknowledge that there does appear to have been a lot of tampering with the surface temp record, always aimed at getting the same (warming) result. The case against the warmistas is strong, which is why I’m concerned about how weak some of this paper appears to be. There is enough strong evidence against the CRU crew and their fellow travellers without diluting it with some pretty underwhelming points. I have so far only skimmed down to page 20 or so, but already have a few questions and (possibly mistaken) assertions on the content, specifically:
Page 12: “Number of stations by category”. In this chart, whilst it is true that the greatest number of stations were cut from the rural category, as there were many more rural stations in the first place, the actual share of rural stations stayed pretty constant, so rural areas should be as well represented in any final averages as they originally were, maybe even a bit more. It looks to me that in 1985, there were around 2600 each of urban and suburban stations, and maybe 6800 rural, so shares of approx. 22%, 22%, and 56% respectively. In 2000 it looks like about 800 Urban, 1200 suburban, and 3000 rural, giving 16%, 24% and 60%. This is very rough eyeballing, but I think those numbers are about right.
Page 17, Statement accompanying map of Russia, that the majority of stations added since 2003 have been in the warm bits (I assume white is cold, green warm), is true, but it is equally obvious that the majority of stations dropped since 2003 were also in the green. I think that it will be the relative shares of stations in the warm and cold bits that will matter rather than the total numbers, and I’m not convinced that has changed much.
Page 18, Graph of Canada’s Temps and total station count. Is the temp. axis missing a leading 0, or does the simple average really show a drop of about 8 degrees C since 1865? If so, I think that the only thing the simple average shows is that it is useless for any purpose, unless it is actually being proposed that a real change of this size has taken place without anyone really noticing!
Page 19, “The dropout in Europe as a whole was almost 65%. In the Nordic countries it was 50%”. I’m reading this as only 35% of the original stations remain in Europe as a whole, with 50% remaining in the Nordic countries. This sentence partially contradicts the asserted shift to the Mediterranean, although it doesn’t undercut the statement about a shift to lower altitudes.
Perhaps I’m missing something, but that stuff really doesn’t seem very strong to me at all.

Mark Fawcett
January 27, 2010 3:45 am

E.M.Smith (03:29:01) :
I always loved Churchill… My Mum lived in England during that time. Left at the end of WWII (with Dad & kid). I was raised with stories of the English Bulldog.

In the current “climate” a couple of rather pertinent quotes from the old dog:
However beautiful the strategy, you should occasionally look at the results.
and
Men occasionally stumble over the truth, but most of them pick themselves up and hurry off as if nothing ever happened.
How apt :o)
Cheers
Mark

E.M.Smith
Editor
January 27, 2010 4:05 am

Nick Stokes (02:39:00) : There is some local averaging with homogenisation in GISS (GHCN is different),
“Some”? Try heaping pots full. Oh, and which GHCN? The “GHCN unadjusted” monthly mean temperatures that are used by GIStemp are ‘almost raw’ data and are NOT anomalies nor the product of anomalies. If you are talking about the NOAA / NCDC analysis product, please say so. And if that is what is shipped as “GHCN Adjusted”, please explain how it is shipped as temperatures and not anomalies…
but it doesn’t have the sort of effects you speak of. It’s still pretty much the gridding of individual station values.
See the benchmark. GIStemp is a filter that TRIES to remove the data biases and is overwhelmed by the massive bias. The anomalies do change with the station changes. It most certainly DOES have ‘the sort of effects’ in question. (BTW, your statement that it is ‘pretty much the gridding of individual station values’ is in conflict with your earlier assertion that it was anomalies…) Yes, it does ‘grid individual station values’ and that is the problem. That it does so after various broken homogenizing, in-filling, and ‘often wrong’ UHI adjustments is even worse. (BTW, I’ve benchmarked the change profile of the first 2 steps of GIStemp and they impart a warming trend to the temperature data. STEP3 must not only overcome the bias in the data but also the added bias from the earlier steps. It fails at that task.)
A summary finds about 1/2 C of added ’tilt’ to the temperature data (resulting ONLY from the GIStemp processing through STEP1):
http://chiefio.wordpress.com/2009/08/12/gistemp-step1-data-change-profile/
And here are some detailed ‘by country or continent’ blocks of summary data that you can study yourself. Eyeballing it says to me that GIStemp is adding tilt:
http://chiefio.wordpress.com/2009/11/05/gistemp-sneak-peek-benchmarks-step1/
And we’ve already seen the link to the “STEP3” benchmark showing that the anomaly processing does not save you…
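[Ed. note: the "change profile" benchmark described here can be sketched in a few lines. Both series below are invented; the point is only the method — fit a least-squares trend to (processed minus raw), and any nonzero slope is added tilt:

years = list(range(1900, 2010))
raw = [10.0 + 0.005 * (y - 1900) for y in years]                 # invented raw series
adjusted = [t + 0.004 * (y - 1900) for t, y in zip(raw, years)]  # invented adjusted series

def slope(xs, ys):
    # ordinary least-squares slope
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

diff = [a - r for a, r in zip(adjusted, raw)]
print("tilt added by processing: %.2f C/century" % (slope(years, diff) * 100))

On these invented series this prints 0.40 C/century; the ~1/2 C claim above rests on the benchmarks linked in the comment, not on this sketch.]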

Nick Stokes
January 27, 2010 4:38 am

E.M.Smith (02:36:52) :
“Those stations were, in fact, active in 1970. 5997 of them in that year.”

Of course, if a station is collecting data it is active. And most of your “dropped” stations are active now. But I said “active (for GHCN)”. They weren’t regularly supplying data to GHCN, because it didn’t exist.
“So you are saying that the data set is 1/2 obsolete archive and 1/4 usable data “
No. There’s nothing obsolete about the pre-1992 data – it performs exactly the (historic) function that it was intended to have. And there’s no particular reason why new data shouldn’t be added to whatever extent it can be obtained. It’s no more or less usable. It’s just that obtaining and checking large batches of historic data, as was done in the early 90’s, is a very different process to maintaining and checking a flow of monthly data.
Did it never occur to you to mention, in all this flourishing of the early 90’s drop in your graph, that this is where you change from historic to recurrent data collection?
“Firstly, there’s little quantification of such a drift.”
Try looking at the data…
There’s no quantification in this report. And I did look at your website, but it’s a little, er, diffuse and anecdotal. Where are the numbers that say this is how much drift there was, and here’s the effect? There are a lot of numbers, but when you try to pin it down, it goes away.
For example, you give this table to show movement of GHCN sites from cold/temperate to tropical zones:
YEAR Warm Cold
1839 2.8 97.2
1889 8.3 91.5
1939 15.5 83.8
1989 25.4 73.2
But there’s a lot of gap there – what happened between 1939 and 1989, and then to 2009. Well:
YEAR Warm Cold
1939 15.5 83.8
1949 17.8 81.4
1959 26.8 71.9
1969 29.9 68.6
1979 29.1 69.3
1989 25.4 73.2
1999 24.9 74.2
2009 26.1 72.9
Different picture – after 1959, the movement is, if anything, the other way.
“And this, frankly, is bull pucky. Station temps are run through a meat grinder of processes long before the “anomaly map” is calculated in STEP3.”
The fact that missing values are filled from neighboring sites does little to alter the fact that the anomalies are locally based. GISTEMP calculates them at grid points, as I said in my previous post. But the grid points don’t move.
“Bald faced assertion with NOTHING in the way of data to back it up. “
The “assertion” is that high latitude sites are warming more rapidly, so reducing them would have a cooling effect. It’s not just my assertion – but I’m not the one issuing a glossy, highly distributed report. If it is to show that moving stations has a warming effect, then it should do so – in the report.
“And does NOT do it by comparing that thermometer data to an earlier self.
Frankly, it is blatantly obvious that it can’t. The “record” is largely made up of disjoint segments of too few years to be usable if they did. Only 10% of it is over 100 years and a huge chunk of thermometers are less than 25 years.”

You don’t need 100 years of data – you only need a reasonable coverage of the base period (1951-1980). NOAA does now use the climate anomaly method. But indeed, as I said above, GISS still uses the Hansen/Lebedeff method of calculating anomalies at grid points. That’s still local.
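[Ed. note: for readers trying to follow this exchange, the "climate anomaly method" is simple to state: each station is compared to its own mean over a base period. A minimal Python sketch with invented records; the 20-of-30-years coverage rule is illustrative, not NOAA's exact criterion:

BASE = range(1951, 1981)   # the 1951-1980 base period

def station_anomalies(record, min_base_years=20):
    # record: {year: annual mean temp} for one station
    base = [record[y] for y in BASE if y in record]
    if len(base) < min_base_years:
        return None        # too little base-period overlap: no self-comparison
    base_mean = sum(base) / len(base)
    return {y: t - base_mean for y, t in record.items()}

long_station = {y: 11.0 + 0.01 * (y - 1950) for y in range(1950, 2010)}
short_station = {y: 24.0 for y in range(1995, 2010)}  # 15 years, all post-baseline
print(round(station_anomalies(long_station)[2009], 2))  # ~0.43 C vs its own baseline
print(station_anomalies(short_station))                 # None

This is the nub of the disagreement above: a long station anomalizes against itself, a short one cannot, and the two sides differ over what the code then does with the short ones.]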

Jimbo
January 27, 2010 4:49 am

Small question Anthony and maybe obvious:
Are you correcting typos etc., as comments come in?
You don’t want to give the warmists ‘issues’ with which to attack you with, no matter how small or irrelevant.
[Reply: Ever since we lost Sisyphus as a moderator, typos are corrected when there’s time, or when they’re pointed out. ~dbs]

rbateman
January 27, 2010 4:52 am

crosspatch (22:54:00) :
Another thing that SteveM noted about missing months in Siberia. Apparently many of the stations NOAA reported as having missing values for various months actually had monthly values that were available from other sources.

That is what my focus has been. Digging up the missing values for my own area. What are the other sources?
It would be of great benefit for someone to put together a history of station reporting. Far as I know, it all started with ordinary people and a need. In the US, Army Signal Corps to Weather Bureau to NOAA to NCDC…something like that.
Where can original documents be found other than the final destination?
Along the way, there may be archives and copies to be found. How did AMS get its data to make its Monthly Weather Reviews? Dept. of Interior, USDA/USFS maintained and operated stations, so where in those agencies can copies of records be found?
My own contact attempts with agencies have been rather fruitless, as I suspect any individual’s would be with government agencies that don’t have any compelling reason to play along.
NCDC is not the only game in anybody’s town, though they may have the main body of what exists.
Where do we go from here?

Nick Stokes
January 27, 2010 4:55 am

The report makes a very misleading comparison on p 12. It shows a GISS global temp map for April 1978, and a purported corresponding map for April 2008, to show how coverage has shrunk.
But that is an incomplete map for 2008. GISS brings out an early version with the data reported to date. Bob Tisdale showed this very map on May 20, 2008. It was what was available on that date. But it’s not the final version. You can see this here. Or you can generate it by going to this GISS page. There are a lot more stations than the preliminary map showed.

Michael D Smith
January 27, 2010 4:58 am

More gates than a giant slalom course…
Garbage in, Gospel out.
Great stuff guys, bravo!

jack mosevich
January 27, 2010 5:05 am

Anthony or E.M.Smith or Roy Spencer: Naive question: Are UAH satellites calibrated using these surface temperatures? If so is that a potential problem in view of this work?

January 27, 2010 5:54 am

Less than 15 seconds on my broadband.

Ed
January 27, 2010 5:54 am

Have been following WUWT and CA for a couple of years. I continue to be amazed by the people that request Anthony and others to e-mail this to such and such a politician, do that etc.
Please people. It is useless for a Canadian or American to e-mail a European politician. Politics responds to the local man/woman who votes for them. If you support the efforts of this blog, then do the footwork, send it off to YOUR politician at the local, province/state and federal level.
The constituency offices get lots of e-mails that are sorted by the office staff and hopefully summarized as for or against before automatic deletion. It would be much more effective for you to print off this report, or at least the summary, highlight what you see as the significant parts and mail it to your MPs. Yes, more use of dead trees, but at least you are helping to capture the carbon in the paper.

Douglas DC
January 27, 2010 6:22 am

Ruhroh (20:34:16) – Feynman was one of my science heroes, as are Anthony and the rest who challenge the status quo of “the science is settled.” Good science is _never_ settled…

MJK
January 27, 2010 6:26 am

Anthony,
Congratulations on releasing this paper. I am sure it will get people talking. On page 62 and in other places in the report you continuously make the point that global temperatures have cooled since 2001. This claim does not appear to be supported by a reference of any kind. Are you able to point to such a reference in the paper (I may have missed it) or at least provide such a reference, particularly one that takes into account RSS/UAH data for 2009?

January 27, 2010 6:28 am

I’m very glad to see someone looking into the disappearing station problem. I work in physics, and my colleagues all agree that when 85% of your sensors disappear, the onus is on you to measure the magnitude of the resulting systematic error.
It is true that the average temperature of the selected stations increases with time. But the global surface trend is calculated not by taking the average temperature, but by taking the average change in temperature for all stations each year. The fact that most stations are now in the tropics does not mean that these stations will, compared to themselves, be getting warmer. For a more detailed explanation, see here.
Thus your plots of number of stations are compelling, but the plots of average temperature are a let-down. If you want to show the systematic effect of disappearing stations, you will have to apply the annual-changes method, like this.
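[Ed. note: a minimal sketch of the "annual changes" (first-difference) method the commenter points to. Station records here are invented; note the cold station's absolute level never enters, only its year-on-year changes:

def first_difference_index(stations):
    # stations: list of {year: temp} dicts; returns {year: index}
    years = sorted({y for s in stations for y in s})
    index = {years[0]: 0.0}
    for prev, cur in zip(years, years[1:]):
        deltas = [s[cur] - s[prev] for s in stations
                  if prev in s and cur in s]
        index[cur] = index[prev] + (sum(deltas) / len(deltas) if deltas else 0.0)
    return index

warm = {y: 25.0 + 0.01 * (y - 1990) for y in range(1990, 2010)}  # tropical station
cold = {y: -5.0 + 0.01 * (y - 1990) for y in range(1990, 2000)}  # last reports in 1999
idx = first_difference_index([warm, cold])
print("%.2f C over 19 years" % (idx[2009] - idx[1990]))

Run as-is this prints 0.19 C over 19 years: dropping the cold station causes no jump, because only changes, never absolute levels, are averaged.]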

Pamela Gray
January 27, 2010 6:31 am

I realize that you may have other valid reasons for the personal writing style; however, when I submitted my first draft of my research thesis, I was told to get rid of the Howard Cosell-ish “color commentary”, and he didn’t mean “magenta and cyan yellow”. This is especially true of the literature review.
Two points especially:
Disparaging remarks of other research and researchers are, even if done with a very light hand, completely unnecessary. A simple review of what others have done and the conclusion they have drawn, followed by your opposing research and results will speak for and stand by itself.
First person singular is a strange form of writing for a technical report. May I suggest that in the future your reports take on a dry, distant style of writing, using phrasing such as “This author recommends yada yada yada..” versus the pronoun “I”. It paints the issue in a less agenda driven, more scientifically advanced, and more palatable form for those who are still making policy decisions.
Just my teacher view.

Leon Brozyna
January 27, 2010 6:31 am

Finally did a speed read through this.
Wow!
++++++++++++++
From my working decades my experience is that sleep is an overrated commodity. I can say that now that I’m catching up on it.

January 27, 2010 6:45 am

“chaired and paneled by mutually agreed to climate scientists who do not have a vested interest in the outcome of the evaluations.”
are there enough of these guys left to actually constitute a panel? 🙂

January 27, 2010 6:50 am

On Page 7. “Meanwhile NASA showed it was the 9th-coldest June in the 30 years of its record.”
Is that correct? Did you really mean RSS or UAH?

Tom t
January 27, 2010 6:54 am

As we now know, peer review is always peer reviewed.

January 27, 2010 7:01 am

MJK (06:26:30) :
…On page 62 and in other places in the report you continuously make the point that global temperatures have cooled since 2001. This claim does not appear to be supported by a reference of any kind. Are you able to point to such a reference in the paper (I may have missed it) or at least provide such a reference, particularly one that takes into account RSS/UAH data for 2009?

http://www.woodfortrees.org/plot/hadcrut3vgl/from:2001/plot/uah/from:2001/plot/rss/from:2001/plot/hadsst2gl/from:2001

Vincent
January 27, 2010 7:01 am

Excellent post Chiefio.
You sure shredded his lame duck defense of the undefendable.

January 27, 2010 7:04 am

On Page 17, I couldn’t make sense of the map of the Russian snow pack.
Could you graph average station latitude versus time?

Richard M
January 27, 2010 7:10 am

On page 7 you use GISS when you mean RSS (referring to satellite data).

January 27, 2010 7:11 am

On Page 21, I couldn’t make sense of the following paragraph:
“Smith found that in New Zealand the only stations remaining had the words “water” or “warm” in the descriptor code. Some 84% of the sites are at airports, with the highest percentage in southern cold latitudes.”
What does that last sentence mean? It reads as if colder sites were kept, but I think you mean that the colder sites that were kept were airports, which are normally warmer?

kenboldt
January 27, 2010 7:14 am

Hi Anthony, I have been following WUWT for about a month now since I discovered it, and am so happy to have found a reliable place for information, as well as so many like-minded individuals who are more interested in the truth than they are in advancing an opinion.
I read some comments above that you are happy to accept info on any errors or typos in the document. I have not been able to read it through yet, but one jumped out at me in the Summary for Policymakers. In point 3. you say “…have skewed the data so as greatly to overstate observed …” Should this not be “…have skewed the data so as to greatly overstate observed …” notice the movement of the word “to”.
Keep up the good work, and I look forward to your peer reviewed article which I am sure will have warmers everywhere in tears.

Neven
January 27, 2010 7:16 am

E.M.Smith, have you heard of this project? If so, what do you think of it?
The Clear Climate Code project writes and maintains software for climate science, with an emphasis on clarity and correctness.
The results of some climate-related software are used as the basis for important public policy decisions. If the software is not clearly correct, decision-making will be obscured by debates about it. The project goals are to clear away that obscurity, to increase the clarity and correctness of climate science software.

http://clearclimatecode.org/

Rob Vermeulen
January 27, 2010 7:29 am

Hi!
Where’s the surface station project at? Any report going out soon, or any US temperature trend using only the “best” stations?

kenboldt
January 27, 2010 7:30 am

Anthony, I was just taking a look at the article at http://www.skepticalscience.com/On-the-reliability-of-the-US-Surface-Temperature-Record.html which discusses the Menne paper.
There is someone in the comments going by the name Kforestcat who is thoroughly debunking the Menne paper. I suggest you have a read, and if possible, try to contact him as he can likely help with any future endeavours you may have.

January 27, 2010 7:33 am

On pages 36/37. You could interpret the max/min temperature graph in two different ways.
1) Are you trying to display CURRENT max/min temperature records with reference to the decade they were set in? Which shows the 30s still retain most of the records, despite the 90s/2000s.
2) Are you displaying EVERY max/min temperature record and what decade they were set in, even if they were later eclipsed by a new record? Which shows the 90s/2000s still eclipsed the records of the 30s.

Kevin Kilty
January 27, 2010 7:42 am

Stephan (21:10:09) :
Anthony I thought this was the one that needs an answer:
http://www.skepticalscience.com/On-the-reliability-of-the-US-Surface-Temperature-Record.html
Does your book cover this? or are they too thick to understand. He/they say you only show pics, they show data and their conclusion: there is no significant effect. I would doubt that very strongly…..
REPLY: This was well along when the Menne paper came out, but I do touch on it in this compendium. I had to get this wrapped up before I could do any substantive replies here. I have a Paper with Pielke Sr. and others we are working on, and it is a fully detailed analysis. That will be the best rebuttal. –

I thought the Menne, et al, paper was quite fair, and actually complimentary to surfacestations.org. That being said, the surface temperature set is the only thing that ties a hypothesis to reality, and so people heavily invested in the hypothesis must believe deeply in the data – they have little objectivity. The deception they practice is largely self-deception, but unfortunately it has broad implications. Menne et al may have a flaw in their logic, as they cannot exclude the possibility that the two data sets look similar not because they measure the same thing, but because confounding factors make the two separate data samples look similar. The confounding factors could be in their own data processing. Besides, there are many issues in the surface data, not just internal consistency.

Dave
January 27, 2010 7:53 am

This is great! Why don’t people “jump all over” this evidence? The AGW folks will marginalize the work even though it’s more scientific, thorough and logical than anything they’ve come up with. Thinking about it, AGW has all the hallmarks of “Pathological Science” defined decades ago by Langmuir. Look it up on Wikipedia.

Editor
January 27, 2010 7:54 am

E.M.Smith (02:36:52) :
“Nick Stokes (23:21:59) :
‘This just propagates a misunderstanding of what GHCN is. 6000 stations were not active (for GHCN) in the 1970’s. GHCN was a historical climatology project of the 1990’s. V1 came out in 1992, V2 in 1997. As part of that, they collected a large number of archives, sifted through them, and over time put historic data from a very large number in their archives.’
Those stations were, in fact, active in 1970. 5997 of them in that year. The exact date the data get into GHCN is not particularly important.”
Michael,
I can state for a fact that Mister Stokes is stating a complete falsehood. The GHCN’s own reports say the following:
http://www.ncdc.noaa.gov/oa/climate/ghcn-monthly/images/ghcn_temp_overview.pdf

“In the early 1990s, climatologists from NCDC and the Carbon Dioxide Information Analysis Center (CDIAC) undertook a new initiative aimed at creating a dataset appropriate for the study of climate change at both global and regional scales. Building upon the fine efforts of its predecessors, this database, known as the Global Historical Climatology Network (GHCN), was released in 1992 (Vose et al. 1992). It contains quality-controlled monthly climatic time series from 6039 land-based temperature stations worldwide. Compared to most datasets of this type (e.g., Jones 1994), this initial release of GHCN was larger and had more detailed spatial coverage.”

We then see the real culprit in reducing the number of stations and how this artificially imposed a 0.5 C increase in temperature records WITHOUT ANY ACTUAL WARMING:
http://www.ncdc.noaa.gov/oa/climate/ghcn-monthly/images/ghcn_temp_qc.pdf

“2.2. Synoptically derived source data
Another consideration in the compilation of GHCN is the origin of the data. The most reliable monthly data come from sources that have serially complete data for every monthly report. Monthly data derived from synoptic reports transmitted over the Global Telecommunication System (GTS) are not as reliable as CLIMAT type monthly reports. This may be due to missing data or the orders of magnitude more digitization and corresponding greater likelihood of keypunch errors. Schneider (1992) showed that synoptically derived monthly precipitation typically differs from CLIMAT monthly precipitation by 20–40%. A similar analysis performed on temperature found synoptically derived monthly temperatures differ by as much as 0.5°C from CLIMAT temperatures (M. Halpert, personal communication, 1992). Therefore, GHCN does not include monthly data that were derived from transmitted synoptic reports. While this decision does not significantly impact the quantity of historical data available for GHCN, it does decrease the quantity of near real time data available because many more stations currently report synoptically (ca. 8000) than send in CLIMAT reports (ca. 1650).”

This is the smoking gun: GHCN in 1998 decided to abandon most stations alleged to be synoptically reporting in favor of CLIMAT reporting stations only, thus whittling from 8000 stations down to a mere 1650 stations. Rather than adjusting all the older synoptic data upward by 0.5C, they kept it and imposed on the climate record a false and artificial 0.5C warming.
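[Ed. note: the splice effect being alleged here can be illustrated with a toy series. All numbers are invented except the 0.5C offset, which is the figure quoted from the Halpert comparison above: if early values come from one source and later values from a source reading 0.5C warmer, the combined record acquires a step that looks like warming:

early = [10.0 for _ in range(1950, 1990)]       # source A, e.g. synoptic-derived
late = [10.0 + 0.5 for _ in range(1990, 2010)]  # source B, reading 0.5 C warmer
mean_early = sum(early) / len(early)
mean_late = sum(late) / len(late)
print("apparent warming from the splice alone: %.1f C" % (mean_late - mean_early))

Whether the real GHCN transition behaves like this toy splice is exactly what this comment asserts and what others in the thread dispute.]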

Richard M
January 27, 2010 7:58 am

I would like to thank Nick Stokes for his input. His comments are probably worth more than all the rest of us combined. Whether they are valid or not is unimportant. They are coming from a SKEPTIC. The rest of us suffer from various degrees of group-think. I actually hope to see other warmers comment as well.
Next, I too appreciate the work of EM Smith. However, I am a bit worried. I am also a software engineer and have commented more than once on the sad shape of the GISS software. This problem makes me concerned about having a single source of examination for exactly the same reasons I question the correctness of the GISS code itself. I know I wouldn’t trust myself and, although I would say without hesitation that EM Smith appears much more meticulous than I, it still leaves me uncomfortable.
Finally, I would suggest placing the word “DRAFT” in the title. I’ve already seen lots of needed corrections and you should plan on version-dating the document to avoid confusion in the future.

PaulH from Scotland
January 27, 2010 8:01 am

Anthony & E.M. Smith…
I just wandered over to RealClimate for the first time in a while to see what they’ve been saying about this report.
I found this post…
……………
Could a response to “Surface Temperature Records: Policy Driven Deception?” by D’Aleo and Watts be written? Their paper can be found at http://scienceandpublicpolicy.org/images/stories/papers/originals/surface_temp.pdf
I know it takes time to write a thoughtful response to a paper over 100 pages long, but maybe a near-term response dealing with a few of the more understandable allegations of improper method. What sticks in my mind is the reduction of the number of surface temperature sites, with a claimed bias toward deleting locations at higher altitudes and latitudes (i.e. those reporting lower temperatures) yet leaving historical reports from these locations in the average. Surely the researchers involved realized that deleting a colder station from today’s average temperature yet leaving its input in older average temperatures would show an artificial increase in average temperature?
[Response: This is, was, and forever will be, nonsense. The temperature analyses are not averages of all the stations absolute temperature. Instead, they calculate how much warmer or colder a place is compared to the long term record at that location. This anomaly turns out to be well correlated across long distances – which serves as a check on nearby stations and as a way to credibly fill in data poor regions. There has been no deliberate reduction in temperature stations, rather the change over time is simply a function of how the data set was created in the first place (from 31 different datasets, only 3 of which update in real time). Read Peterson and Vose (1997) or NCDC’s good description of their procedures or Zeke Hausfather’s very good explanation of the real issues on the Yale Forum. – gavin]
………….
My brain started hurting trying to make sense of all this. However, I was left in no doubt that Gavin is saying ‘Bulls**t!’
I suspect that you’re both probably up to your eyes in it at the moment, but if you do get the time, it would be great to hear your thoughts on Gavin’s response.

Neven
January 27, 2010 8:07 am

quote: “I am also a software engineer and have commented more than once on the sad shape of the GISS software.”
Richard M, have you heard of this project? If so, what do you think of it?
The Clear Climate Code project writes and maintains software for climate science, with an emphasis on clarity and correctness.
The results of some climate-related software are used as the basis for important public policy decisions. If the software is not clearly correct, decision-making will be obscured by debates about it. The project goals are to clear away that obscurity, to increase the clarity and correctness of climate science software.

http://clearclimatecode.org/

January 27, 2010 8:07 am

On page 43,
“As usual, the warmers want to have it both ways.”
I am not keen on labelling people. They might do it, but we shouldn’t.

Chris D.
January 27, 2010 8:16 am

Bouncing around Twitterdom:
http://earthobservatory.nasa.gov/Newsroom/view.php?id=42382
Q&A between Gavin and NASA.

Doug S
January 27, 2010 8:29 am

@PaulH
Paul, this issue of dropping stations in the later years is of interest to me as well. The way I understand Gavin’s explanation is that the difference in temperatures from one day to the next for nearby stations is the same. In calculus terms I guess we would say the first derivative of temperature measurements with respect to time is the same or similar for nearby stations. I would dearly like to see the code that is used to calculate this (assuming I have their explanation correct). This approach seems like it would have many challenges to model correctly – so many variables to account for.
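[Ed. note: the check Doug S asks about can be sketched quickly. This is a toy version of the idea with invented data, not the actual Hansen/Lebedeff code: difference each station year-on-year, then correlate the changes between neighbours:

import random
random.seed(2)

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

regional = [random.gauss(0.0, 1.0) for _ in range(50)]  # shared climate signal
site1 = [r + random.gauss(0.0, 0.3) for r in regional]  # two invented neighbours
site2 = [r + random.gauss(0.0, 0.3) for r in regional]
d1 = [b - a for a, b in zip(site1, site1[1:])]          # year-on-year changes
d2 = [b - a for a, b in zip(site2, site2[1:])]
print("correlation of changes: %.2f" % pearson(d1, d2))

Hansen & Lebedeff (1987) argue real stations behave like this out to large separations; how far that carries a sparse modern network is what this thread is arguing about.]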

mathman
January 27, 2010 8:32 am

The baseline period for computation of anomalies is cited as 1951-1980.
Who got to decide on that particular time period?
Would it not be better to use three or four sunspot cycles (either min-min or max-max) as a baseline period?
Unless, that is, one does not grant that solar radiation has any connection to temperature on the Earth.
Or, perhaps, if there are cycles of sunspot cycles, use one of the larger cycles.
The choice of one time period for computation of anomalies is highly suspect.
Why use anomalies in the first place? Why not just look at the raw data, weight the raw data depending on both siting compliance and the urbanization effect, and go from there?
And when one puts in error bars, as has not (as far as I can tell) been done on a uniform basis, it is evident that we still don’t know about Earth’s energy budget.
We are certainly fortunate that engineers, by and large, rely less upon the manipulation, smoothing, cherry-picking, and homogenizing of data. Oh, I forgot. The USSR did a lot of that. Their engineers were required by the political leaders to come up with conclusions. That did not work out so well.

hotrod ( Larry L )
January 27, 2010 8:35 am

E.M.Smith (00:45:08) :
To all who have given thanks and compliments, my humble gratitude.

The thanks are richly deserved!
As someone who has worked in IT for quite a while (wrote my first program on 80-column punch cards in 1968), your work analyzing the data and methods, seasoned with the revelations from the Harry readme file about how big a mess the code and databases were in, was pretty much the final nail in the coffin as far as I was concerned.
The model output is simply nonsense, the false precision is a joke, the error bands swamp the signal, and the so called causal relationship between CO2 and global average temperature is wishful thinking.
Thanks for your efforts to uncover the problems and put them in language that the average citizen can understand.
Larry

JonesII
January 27, 2010 8:37 am

Chris D. (08:16:49) :
Really funny:
Nasa: It is wintertime
Gavin: No! It’s summertime; people think it is winter but that is a wrong perception of reality.
Summarizes the dialogue from the link you gave.

Richard M
January 27, 2010 8:43 am

Neven (08:07:06),
I had heard about it but had no time to investigate. Hopefully, I will have some time in the future.

Richard M
January 27, 2010 8:54 am

From Gavin’s response above:
There has been no deliberate reduction in temperature stations, rather the change over time is simply a function of how the data set was created in the first place …
This response shows some ignorance. It doesn’t matter whether the reduction was “deliberate” or not. If the reduction creates bias then there is a problem. I think this demonstrates a little group-think problem with Gavin. He just assumes that it’s OK simply because he trusts the person who did it. Sorry Gavin, that’s just plain unscientific.
If you look at the temperature correlation with dropouts on page 11, it’s clear there is an overall warmer set of thermometers in the new dataset. Now, if you assume there has been any warming at all (e.g. 2%) then that percentage applied to a higher value will create bias all by itself. It may be small and certainly there are other factors, but you can’t ignore it as Gavin wants to do.

Steve Keohane
January 27, 2010 8:57 am

Thank you Joe, Anthony and Michael. You all bring the light of day to this serious issue. I’ve only read the first 16 pages, and was amused by the graph on page 14, the GHCN network from 1701-2008. Amusing because since 1871 we see about 5°C variation, allegedly CO2 driven, yet 1741-1871 we see 7°C variation sans the CO2 forcing. Everywhere one looks, there are far greater ‘forcings’ than the magically endowed CO2.

MJK
January 27, 2010 8:59 am

Anthony,
Still no response to my post (MJK 6:26:30) regarding your failure to provide a supporting reference for the assertion in your report that there has been cooling since 2001.
I suspect part of the problem is that this assertion cannot be supported and is incorrect. The RSS and UAH data sets (the only temperature records that you trust) do not show that the globe has cooled since 2001. Perhaps if your paper had been written in 2008 you may have been able to make such a claim based on cherry-picking of a cooler 2008 as the end point. But in January 2010 (the date of your report) the claim no longer holds water – if it ever did.
Could I kindly suggest you retract this incorrect statement from your report wherever it appears or point to evidence in the RSS and UAH data sets that supports your claim the globe has cooled since 2001.
REPLY: Joe suggests you have a look at this:
http://scienceandpublicpolicy.org/originals/policy_driven_deception.html
-A

Paddy Barrett
January 27, 2010 9:03 am

Is it only me getting an error message from Acrobat? “The file is damaged and could not be repaired.” If anyone else got this message but figured out a fix or workaround, please share!

anna v
January 27, 2010 9:07 am

I would like us to meditate a bit on the other kind of data, the ice core data:
Now, though the argument that surface data are in a muddle and cannot be used to assert warming is made clear with this post, I think that one should acknowledge that there are other data that show the medieval warming and the emergence from the little ice age.
Some of those changes come out of historical written records too (where is TonyB?).
This is just to touch base and not throw the baby out with the bathwater, unless someone has shown that the ice core data have been tampered with/averaged/homogenized etc.

January 27, 2010 9:09 am

I believe the reported global temperature anomaly is not the anomaly of the average, but the average of the anomalies of the individual stations. Therefore, it makes no difference if you eliminate cooler stations, unless their anomalies are lower. I believe global warming theory predicts larger anomalies towards the poles, so eliminating higher latitude stations would be counterproductive.
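[Ed. note: the commenter's distinction can be made concrete with invented numbers: dropping a cold station raises the average temperature but not the average anomaly, provided the anomalies themselves are similar:

stations = [
    {"temp": 25.5, "baseline": 25.0},  # tropical station, anomaly +0.5
    {"temp": 0.5, "baseline": 0.0},    # polar station, anomaly +0.5
]

def avg_temp(stns):
    return sum(s["temp"] for s in stns) / len(stns)

def avg_anom(stns):
    return sum(s["temp"] - s["baseline"] for s in stns) / len(stns)

print(avg_temp(stations), avg_anom(stations))          # 13.0 0.5
print(avg_temp(stations[:1]), avg_anom(stations[:1]))  # 25.5 0.5

The proviso is the commenter's own: if polar anomalies really are larger, dropping polar stations changes the average anomaly too, just in the cooling direction.]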

Neven
January 27, 2010 9:10 am

Richard M (08:43:11)
“I had heard about it but had no time to investigate. Hopefully, I will have some time in the future.”
Yes, do so. If I’ve understood correctly they’re clearing up the GISTEMP software issues. It should be very interesting for people who want to check the datasets themselves.

Ron de Haan
January 27, 2010 9:13 am

Thanks Anthony, Joseph D’Aleo and E. M. Smith.
Even the Dutch Minister Kramer now is asking questions!
Keep up the pressure.
Great work.

Paul K2
January 27, 2010 9:21 am

Anthony, I just did a brief overview of the paper, and I must warn you that you have made a lot of well-known mistakes in the paper. It appears that you and D’Aleo are out of the loop when it comes to understanding how the data for the temperature anomalies are collected and analyzed. I have been reading about the “missing station” reports, and this claim has been debunked.
The report needs substantial revision, and all 15 of your conclusions are inaccurate, and are clear distortions of the current published science papers. The climate scientists are going to slice and dice this report.

Paul K2
January 27, 2010 9:24 am

Tom Graney, you are correct. Contrary to what is asserted in the report, the impacts of AGW on higher latitude temperature anomalies are greater than on lower latitude anomalies. This is just one of a myriad of easily spotted mistakes in this report.

Randi S
January 27, 2010 9:24 am

As a science professional myself, I have witnessed many times that it takes a HECK of a lot more work for someone to try to fake up a ‘defensible’ set of data than it does to do the diligent work to generate a good data set to begin with.
All that effort just to try and fool the world about a laughable 3 ppm of carbon dioxide plant food. And people were actually buying it?? Unbelievable.
Very nice work. Very readable. Thank you. You should be proud of this work.

January 27, 2010 9:27 am

MJK (08:59:27),
This should get you started: click

Saaad
January 27, 2010 9:28 am

As with some other commenters, I’ve just read Gavin’s typically haughty response to the paper over at RC. I’m guessing that Roger Pielke Snr would have something very interesting to say about Gavin’s assertions concerning the homogenisation of temperature records from different locations.
Perhaps at last we can really start to get back to science-based ecology, ie: forget about meaningless global temperature trends which are completely beyond our scope to quantify whilst at the same time being utterly useless to us in terms of our local micro-environments….and focus on how local climate variations, UHI and genuinely dangerous atmospheric pollutants can be tackled in practical ways that might actually affect our everyday lives.
For instance, here in Australia we’re all set to re-debate the failed “Carbon Pollution Reduction Scheme” next week, despite everything that’s happened in recent times – at the same time, kids in Mount Isa (a remote Queensland Mining Town) are still breathing in air polluted with lead levels which are way above safe levels….trouble is they just don’t seem to rate a mention when faced with the behemoth of AGW. And that is truly a tragedy, one I suspect is being repeated around the globe.
Like everyone else, I’ve been drawn into this almost Orwellian debate about reason versus fundamentalism……hopefully, guys like Messrs Watts, D’Aleo and Pielke (Sr.) can at last nudge the debate back into an area that actually means something real.

MJK
January 27, 2010 9:36 am

Smokey (09:27:51) :
The trend line you provide relates to 1998 to 2009 – not 2001 to 2009. Try again.

JohnH
January 27, 2010 9:41 am

This is the layman’s version of the Nick Stokes anomaly argument, from the SkepticalScience site.
You still don’t get it, do you?
A thermometer could be placed in a frying pan and yet as long as it has dynamic range available it’ll still be able to register a trend in temperature and that trend will be separable from the frying pan component.
I doubt a thermometer at 160C in a pot of frying oil would respond to a change in temp in the surrounding room going from 16C to 17C. Even if it did, it still does not cover the point that if the heat source were an A/C unit, it would only operate for part of the year, or may not have existed at that site in, say, 1975, and on its introduction would generate a large jump upwards.
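[Ed. note: JohnH's objection, that an intermittent heat source is neither a constant nor a one-time offset, can be made quantitative with a toy example (all numbers invented): an A/C unit installed in year 10 that only runs in the warm months puts a step into the annual mean that no constant-offset correction removes:

years, summer_bias = 20, 1.0   # invented: 1.0 C of A/C exhaust, warm months only
annual_means = []
for yr in range(years):
    months = []
    for m in range(12):
        t = 15.0                          # flat "true" climate
        if yr >= 10 and m in (5, 6, 7, 8):
            t += summer_bias              # A/C runs four summer months only
        months.append(t)
    annual_means.append(sum(months) / 12)

print("spurious step in annual mean: %.2f C"
      % (annual_means[-1] - annual_means[0]))

Run as-is this prints a 0.33 C spurious step, even though the contamination is zero for two-thirds of every year.]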

January 27, 2010 9:45 am

Great job Anthony. After reading it, you are right, it’s a mess — and to ask people to give trillions in tax money for this shaky science is a complete joke.
Thanks to all involved, now we need to follow up until we have the complete picture and can reduce it down to a few short lines … For now, ‘it’s a mess’ will have to do.
Seek the truth …

PaulH from Scotland
January 27, 2010 9:57 am

The scientific debate aside, ‘anomaly’ is a beautiful word, IMHO.

John B (TX)
January 27, 2010 10:07 am

I haven’t made it through the entire report, but the handling of Russian data seems to be an area ripe for research because only 25% of station data submitted was used by Hadley. It would be interesting to see a temperature reconstruction based on 100% of the available data. Is the raw data available? Any known work on that type of project?

January 27, 2010 10:08 am

Steve Mosher’s “CRUtape Letters” just arrived in the mail and now D’Aleo and Watts have released their long-awaited work. Congratulations to you all (and the Chiefio of course)! Between these books and keeping up with breaking news on WUWT the rest of my week is gone… Good thing I’m on sabbatical this year but keeping up with Climategate and the IPCC scam revelations has really hurt my productivity. I am sure many readers here are as awed as I am by your total dedication to unearthing the truth and helping us to understand both the science and pseudoscience of climate change.

Steve in SC
January 27, 2010 10:10 am

Things are beginning to unravel.
Way to tug on that string Joe, Anthony, and E.M.
Keep pulling!

January 27, 2010 10:11 am

John H- Not being familiar with the Nick Stokes argument, I can’t tell if you are agreeing or disagreeing with what I said.
I would also like to point out that the UHI and hotspot effects are going to be one-time effects, so they cannot really contribute to a long term upward trend in the global temperature anomaly. The influence of a barbecue grill or a parking lot will show a temperature effect only for the year it was built, but will not itself have a long term trend effect. This is not the case if existing sites are continually gaining heat sources, but that could only go on so long and then the effect on the trend disappears.
I point this out as someone who constantly wishes for the alarmists to fall flat on their face.

January 27, 2010 10:14 am

Congrats to all.
I’m getting ready for a job interview and don’t have time to even start it. Could someone take Nick Stokes’ comment containing the criticisms he asserts, and make a bullet-point rebuttal of each? E. M. Smith sort of did it, but it would be nice to have something that shows the rebuttal line by line.
Have to go iron now….. WOW! I’m actually ironing!!! 🙂

Boris
January 27, 2010 10:15 am

“This response shows some ignorance. It doesn’t matter whether the reduction was “deliberate” or not. If the reduction creates bias then there is a problem.”
This matters because E. M. Smith and others are accusing NASA/NOAA of fraud.

rbateman
January 27, 2010 10:25 am

Paul K2 (09:24:03) :
And what makes AGW in the 80’s-2000 any different than the Coming Ice Age in the 40’s to 1970’s, the Global Warming in the 20’s-1930’s and the New Ice Age of the 1880’s-1910’s?
End of the Worlders: Serving you since 1884. Billions served with nightmares.
I should ask: How many are going back to church with Gore?
Big Al has seen the Light. According to him, environmentalism can co-exist with religious belief systems.
The science of AGW building has settled: Termite wood (that’s holy wood) was used in the walls, floor and roof.

A C Osborn
January 27, 2010 10:26 am

Re
MJK (08:59:27) :

Anthony,
Still no reponse to my post (MJK 6:26:30) reagrding your failure to provide a supportinf reference for the assertion in your report that there has been cooling since 2001.
I suspect part of the problem is that this assertion cannot be supported and is incorrect. The RSS and UAH data sets (the only temperature records that you trust) do not show that the globe has cooled since 2001. Perhaps if your paper had been written in 2008 you may have been able to make such a claim based on cherry picking of a cooler 2008 as the end point. But in January 2010, the date of your report) the claim no longer holds water–if it ever did.
Could I kindly suggest you retract this incorrect statement from your report wherever it appears or point to evidence in the RSS and UAH data sets that supports your claim the globe has cooled since 2001.

Try reading the threads; it has been answered.

Chris D.
January 27, 2010 10:31 am

Steve in SC got me going.
http://www.youtube.com/watch?v=L7Dw60SVXQ4

January 27, 2010 10:33 am

Tom Graney (10:11:57),
You might be forgetting the fact that most temperature stations have been eliminated: click
The majority of those eliminated stations are rural. Gradually removing the rural stations produces an artificial upward bias. More info here: click

Paul K2
January 27, 2010 10:34 am

Here is a good discussion that summarizes D’Aleo’s work and much of the information in this report:
http://www.yaleclimatemediaforum.org/2010/01/kusi-noaa-nasa/
Key conclusion:
A San Diego TV station’s mid-January one-hour broadcast reporting that two key federal climate research centers deliberately manipulated temperature data appears to have been based on a fundamental misunderstanding of the nature of the key climatology network used in calculating global temperatures.
Independent TV news station KUSI in San Diego aired a story challenging current scientific understanding of climate science and offering “breaking news” of government wrongdoing based on work of Joseph D’Aleo, a meteorologist, and E.M. Smith, a computer programmer.

Jan
January 27, 2010 10:37 am

* MJK (08:59:27) Still no reponse to my post (MJK 6:26:30) reagrding your failure to provide a supportinf reference for the assertion in your report that there has been cooling since 2001….Could I kindly suggest you retract this incorrect statement from your report wherever it appears or point to evidence in the RSS and UAH data sets that supports your claim the globe has cooled since 2001*
What an insistence…there indeed was a response to you (Juraj V. (07:01:11))
So, again, just for you MJK, and with trends from 2001 only – as you wish – and adding the comparison with GISTEMP:
http://www.woodfortrees.org/plot/uah/from:2001/plot/uah/from:2001/trend/plot/rss/from:2001/plot/rss/from:2001/trend/plot/gistemp/from:2001/plot/gistemp/from:2001/trend
(also note the .2°C+ difference in absolute values and of course the divergence of the UAH+RSS vs. GISTEMP trends in opposite directions)
Now, could you kindly retract your claims to Anthony?

January 27, 2010 10:44 am

@Smokey;
I agree that a rural station is less likely to have extraneous heat effects, and if you shift away from stations with no heat effects to a population of sites that is gradually gaining heat effects then this will cause the system to exhibit an upward bias in temperature over time. But, the heat effects are not cumulative; a parking lot, once constructed, is not going to continue to influence the trend so over time the impact of these heat affected sites is going to peter out.

P Gosselin
January 27, 2010 10:59 am

I just printed it out and skimmed over it. Looks impressive!
A great resource to have.

January 27, 2010 11:11 am

MJK (09:36:38) :
“The trend line you provide relates to 1998 to 2009–not 2001 to 2009. try again.”
OK: click. Sorry that it doesn’t go to 2009. But you get the idea.
Here are a couple more: click1, click2.
Here’s a century of unremarkable temperatures: click
Here’s the thirty year global temp record: click. And another: click. One more: click. Those all cover 2001 — plus the twenty prior years.
And how good are the climate models? They suck: click
Paul K2 (10:34:39),
I guess you’re new here. That Yale blog has been discussed numerous times. It is funded by the Grantham Foundation, which has a heavy pro-AGW agenda. In fact, in one comment Zeke Hausfather claimed he didn’t know about Grantham’s funding – even though on the blog’s home page it says:

The Yale Forum on Climate Change & The Media is grateful for the generous financial support of the Grantham Foundation for the Protection of the Environment…

Unlike Hausfather and the Yale blog, Joe D’Aleo and Mike Smith are not paid for their work here.
Grantham funds the Yale blog — and he who pays the piper calls the tune. Keep those things in mind when trying to decide who to believe.

Paul K2
January 27, 2010 11:41 am

MJK, you reject the Yale analysis because of the funding sources. Do you know who is funding the Heartland Institute and SPPI, the organizations who published AW’s recent publications? (Hint: Think of things that make smoke when they are burned.) I would rather stick to the science and the facts about the number of stations, and how the data from the stations is gathered and analyzed.
I have been reading quite a few blogs where knowledgeable scientists discuss this issue. The posters have been discussing D’Aleo’s conclusions, and there are problems with how he thinks the process of collecting temperature data occurs, and with the conclusions in this report.

luminous beauty
January 27, 2010 11:57 am

D’Aleo & Smith are widely publicized as making the claim that there are only 35 Canadian stations and only 1 above 65°N for 2009 in the GHCN data set. I’ve tried looking at Smith’s webpage, but between the hand-waving assertions and computer gibberish, I have no idea where he is getting this information. According to Deutscher Wetterdienst, part of the GHCN, there are more than 80 Canadian stations (#s 71017 – 71990) reporting monthlies and 20 or more above 65°N for 2009.
Most of the 10 stations in Bolivia are there, too.
WUWT?

kadaka
January 27, 2010 11:59 am

Paddy Barrett (09:03:12) :
Is it only me getting an error message from Acrobat? “The file is damaged and could not be repaired.” If anyone else got this message but figured out a fix or workaround, please share!

That is the “normal” message when a pdf download terminates as “finished” before the entire file is really downloaded. You go to view the file and get the message since it’s not a complete file.
Do you have a separate “Downloads” window with your browser that monitors such? Some progress indicator showing xx of xx downloaded? See what the total file size is supposed to be, and compare it to what actually came through to see if it quit early.
It may take a few times to get a full and successful download. Just keep trying, see what happens.

steven mosher
January 27, 2010 12:00 pm

I look forward to reading it, when I come up for breath

January 27, 2010 12:02 pm

Paul K2 (11:41:10),
That wasn’t MJK. I made that comment.
You misunderstand the difference between scientific skeptics, and advocates of the AGW hypothesis.
Skeptics simply say, prove it. Or at least provide solid, measurable, empirical [ie: real world] evidence that human emissions of CO2 cause global warming – because those at the Yale blog are simply trying to sell AGW. Their continued funding depends on it.
You need to keep in mind that skeptics have nothing to prove. Skeptics simply question the AGW hypothesis, which must either withstand scrutiny, or go down in flames.
AGW is going down in flames because it cannot withstand scrutiny. Proponents of the AGW hypothesis refuse to show their data and methodologies. Why? Because if they did their hypothesis would be quickly falsified. There is no other reason; there are only hastily made up excuses for stonewalling requests for information.
I pointed out that the Yale blog is funded by a group with a heavy pro-AGW agenda, compared with unpaid volunteers. In order to try and salvage your argument, you re-framed the debate into Yale vs SPPI, Heartland, etc.
If you feel inclined to respond, try to keep your response to the original point: a heavily subsidized blog with an agenda, vs volunteers who are only interested in seeing where the facts lead.

Tim Clark
January 27, 2010 12:03 pm

I’m assuming you did a terrific job. I didn’t read it. As a rule, I only read publications that are greater than just “one robust” ;~P

Richard Sharpe
January 27, 2010 12:06 pm

Hmmm, I see a larger than usual number of trolls.

Kay
January 27, 2010 12:07 pm

Nice job, guys. I found another typo:
p 33, you say: “The Climate reference network was capped at 114 stations but will not provide meaningful trend assessment for about 10 years.” It should be “The Climate Reference Network […]”

Tom_R
January 27, 2010 12:15 pm

>> Tom Graney (10:44:23) :
But, the heat effects are not cumulative; a parking lot, once constructed, is not going to continue to influence the trend so over time the impact of these heat affected sites is going to peter out. <<
There will be a discrete change in the average temperature level before and after the construction. All subsequent measurements will show a false warming when compared to pre-construction temperatures. As time goes on the false warming trend will decrease, since it is a constant temperature difference divided by an ever-longer analysis period, but it will never go away.
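To make that concrete, here is a minimal sketch in Python (step size and record lengths invented) of how the spurious trend from a single step shrinks as the record lengthens while the offset itself persists:

import numpy as np

step = 0.5  # assumed one-time warming from the new parking lot, in C
for years in (20, 40, 80):
    t = np.arange(years, dtype=float)
    temps = np.zeros(years)
    temps[years // 2:] += step           # construction at the record midpoint
    slope = np.polyfit(t, temps, 1)[0]   # fitted OLS trend, C per year
    print("%2d-yr record: spurious trend %+.3f C/decade, offset still %.1f C"
          % (years, slope * 10, step))

The fitted trend falls with each longer window, but the before/after offset never does.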

Kay
January 27, 2010 12:18 pm

p 41: “In a conversation during Anthony Watts invited presentation about the surfacestations projects to NCDC, on 4/24/2008, he was briefed on USHCN2’s algorithms and how they operated by Matt Menne, lead author of the USHCN2 project.”
Huh?
How about:
On April 24, 2008, at a presentation about the surfacestations projects to NCDC, Matt Menne, lead author of the USHCN2 project, briefed Anthony Watts on USHCN2’s algorithms and how they operated.
Or: On April 24, 2008, Matt Menne, lead author of the USHCN2 project, briefed Anthony Watts on USHCN2’s algorithms and how they operated.
I think you guys did a wonderful job, but you really need a proofreader!
Keep up the good work!

Tom in Florida
January 27, 2010 12:29 pm

Tom Graney (10:44:23) : “But, the heat effects are not cumulative; a parking lot, once constructed, is not going to continue to influence the trend so over time the impact of these heat affected sites is going to peter out.”
I have been wanting to ask a question, this seems like a good place:
The article about the Menne paper and the statement above all say that it doesn’t matter what the actual temperature is, it is the anomaly that counts. But this is based only on permanent changes as stated above. However, when it comes to air conditioners, that is a different story. No one knows what the inside thermostat setting was at any given time (day, month or year), no one knows when the A/C unit kicked on or off, no one knows how efficient the A/C unit is over time, no one knows anything about the daily use of the A/C unit at all. Therefore, can any temperature sensor near an A/C unit report data that is consistent? If it cannot, then the data from all those stations must be questioned.

hunter
January 27, 2010 12:40 pm

I urge you and Joe to read Dr. Nielsen-Gammon’s review of your work.
http://www.chron.com/commons/readerblogs/atmosphere.html?plckController=Blog&plckBlogPage=BlogViewPost&newspaperUserId=54e0b21f-aaba-475d-87ab-1df5075ce621&plckPostId=Blog%3a54e0b21f-aaba-475d-87ab-1df5075ce621Post%3a1602a720-b2a5-47de-bf2d-3b62afcf88a6&plckScript=blogScript&plckElementId=blogDest
Dr. Nielsen-Gammon attacks the basic concept of your work and says it is flawed, not showing what you claim at all.
Since he is the climatologist who was responsible for driving home the IPCC glacier scam, I think he is worth reading and responding to.
Additionally, Eric Berger, the science reporter for the Houston Chronicle, moderated a debate today in Houston between Dr. North of A&M and Dr. Lindzen of MIT.
There will be reporting on this later.
When I find a link to that, I will post it.

wmsc
January 27, 2010 12:42 pm

Eh, OK, I still have one minor question that completely eludes me.
If the temperature sensors have a ±1 C resolution, how in the world are they saying that there is a measured 0.2 C change? I apologize if that’s a dumb question, but if it’s never asked…

Murray
January 27, 2010 12:45 pm

Paul K2, regardless of funding, the Yale report you reference has 2 obvious problems:
1) A misunderstanding of D’Aleo/Smith. They have not said that stations with a lower warming trend have been dropped. They are saying that the percentage of cooler stations (regardless of the trend) in the average has dropped, causing the average to warm.
2) The fact that adjustments up or down are about equal fails to take time into account. When the down adjustments fall in older data and the up adjustments in newer data, the adjustments also introduce a warm bias, regardless of how many of each there are, and this is what has happened (a toy sketch follows below).
Point 2 has been dealt with extensively on the blogs. The Yale report authors are either thick, out of touch, or intentionally misleading readers like yourself.
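A toy version of point 2, with made-up magnitudes: one down adjustment and one up adjustment, equal in size, still manufacture a warming trend when the down adjustment lands in the older half of the record.

import numpy as np

years = np.arange(1900, 2000, dtype=float)
raw = np.zeros(years.size)              # assume a perfectly flat raw record
adjusted = raw.copy()
adjusted[years < 1950] -= 0.3           # down adjustment in the older data
adjusted[years >= 1950] += 0.3          # up adjustment in the newer data
for name, series in (("raw", raw), ("adjusted", adjusted)):
    print("%-8s trend: %+.3f C/decade" % (name, np.polyfit(years, series, 1)[0] * 10))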

steven mosher
January 27, 2010 12:48 pm

Dang Anthony, that’s a great read. I couldn’t help but put everything down and get right to it.

jonk
January 27, 2010 12:49 pm

Grats to the authors. You all did an amazing job of pulling all this together. The time and effort is greatly appreciated by so many out in the world who don’t have the time or skills to investigate in depth, but knew instinctively that there was something fishy going on.
Looks like you even made it into a newspaper, although Anthony was not credited here
http://www.vancouversun.com/technology/Scientists+using+selective+temperature+data+skeptics/2468634/story.html

hunter
January 27, 2010 12:53 pm

Do I have a post that failed to post?

Jan
January 27, 2010 12:54 pm

Just for fun, and for MJK’s understanding, I also made this 2001-2010 chart:
http://www.woodfortrees.org/plot/uah/from:2001/plot/gistemp/from:2001/trend/plot/rss/from:2001/plot/rss/from:2001/trend/plot/gistemp/from:2001/plot/uah/from:2001/trend/plot/sidc-ssn/from:2001/scale:0.0015/plot/esrl-co2/from:2001/scale:0.0015/trend/plot/sidc-ssn/from:2001/scale:0.0015/trend/plot/esrl-co2/from:2001/scale:0.0015
There is the GISTEMP+CO2 slightly ascending trend, the UAH+RSS descending trend and of course the sunspot number descending trend.
I didn’t make it up – it’s the woodfortrees engine which makes such funny charts… 😉
BTW if you add HadCRUT3 it has the same trend as the satellites…

Paul Coppin
January 27, 2010 12:55 pm


Tom Graney (10:44:23) :
@Smokey;
[…]
But, the heat effects are not cumulative; a parking lot, once constructed, is not going to continue to influence the trend so over time the impact of these heat affected sites is going to peter out.

If my understanding is correct, that’s an unwarranted assumption. Changing the environment of the siting changes the dynamics uniquely. Each station sits in a unique microclimate; its report on each occasion is an instantaneous net of the current microclimate.
The anomaly purports to measure the trend in the delta occurring at that site (or fleet of sites…). But the population of microclimates is neither randomly nor uniformly distributed or perturbed, being chaotic instead. Hence, the anomaly is based on a chaotic reference that we can’t characterise. Further, surfacestations.org demonstrates that the sitings themselves suffer from an intrinsic competency bias that hasn’t been accounted for. The margin of error that is implicit in all of this is not nulled by the process of determining the anomaly.

jonk
January 27, 2010 12:58 pm

The follow up article is slightly humorous, especially Gavin’s contribution.
http://www.vancouversun.com/technology/Incomplete+data+mean+warming+worse/2475762/story.html

steven mosher
January 27, 2010 1:07 pm

Anthony,
The flow chart is a must. WRT the submission to the UK, I’m thinking the 3rd question (independence of the 3 records) may require 3k words on its own.
Anyway, a good read. I love all the copy editors here. They are such a pleasant and diligent lot. Everybody can play a role. I love it.

steven mosher
January 27, 2010 1:14 pm

Nick Stokes (23:21:59) :
Correct WRT the anomaly approach. I think I made this same mistake a while back on CA and Dr. Hu corrected me. Looking at the GISS code and seeing exactly how they did it made it perfectly clear. It would be interesting to look at the distribution of trends before and after culling.

hunter
January 27, 2010 1:30 pm

An update from the Lindzen v North debate in Houston:
Lindzen apparently did very well.

Ray
January 27, 2010 1:43 pm

The big issue will be that all researchers that linked their research and publications to IPCC or CRU or GISS data sets can now, and should, be removed from the peer-reviewed literature. If they found any relationship to forged temperature data, that relationship is now invalidated.

January 27, 2010 2:02 pm

Paul,
If my understanding is correct…
Your understanding is not correct. If I place a hot object next to a temperature instrument it will cause a step change in the measured temperature. This effect will occur once and will not affect the long term temperature trend. It does not affect or alter the local climate. If more hot objects are added over time then it could have the appearance of contributing to a trend, but how many hot objects can you place in the proximity of one temperature instrument? I believe this growth in the UHI effect has been studied extensively. The people responsible for all of this may be dishonest, but they are not stupid.

Nick Stokes
January 27, 2010 2:05 pm

GHCN stations are not moving to warmer climates, as the underlying thesis of much of this report says. I did a simple calc of the average temperature of GHCN stations in each year since 1950. The results are on my blog here. The trend is down.

David S
January 27, 2010 2:10 pm

Watts for president
D’Aleo for VP
Campaign motto – “Honesty and integrity – what else do you need?”

boballab
January 27, 2010 2:10 pm

luminous beauty (11:57:01) :
Maybe this map from NCDC of the GHCN temperature Anomalies from Jan-Dec 2008 will help:
http://www.ncdc.noaa.gov/img/climate/research/2008/dec/map-land-sfc-mntp-200801-200812-pg.gif
Notice there is no data for most of Canada and that big gaping hole in South America. Once you’ve got that, take a look at Africa and notice that it looks like someone rolled a giant bowling ball through the continent, taking out every thermometer in the way. Then we move to tiny New Zealand, where mysteriously no one seems to have read a thermometer in 2008. Greenland – no thermometers there either – and look at those giant gaps in Russia.
For a better idea I made a 1 min 23 sec animation using the GISTemp anomaly map maker, set to 250Km infill, to show how the thermometer coverage has changed since 1880.

geo
January 27, 2010 2:14 pm

I’m still struggling with #1 of the Summary for Policymakers:
“Instrumental temperature data for the pre-satellite era (1850-1980) have been so widely, systematically, and unidirectionally tampered with that it cannot be credibly asserted there has been any significant ‘global warming’ in the 20th century.”
I can’t find anything in the report that seems to support this, and some observations that do not. Unless putting “global warming” in quotes was meant to really mean “anthropogenic global warming”. The authors clearly recognize that the 20th century ended much warmer than it began, don’t question the existence of the 1900-1940 warming, and acknowledge (tho as “cyclic”) the 1979-1998 warming.
So how was there no global warming in the 20th century?
Don’t get me wrong, I enjoyed the rest of it, and appreciated the surfacestations.org status update that is included as well (thru October 2009, but that’s the good “station collecting” period of the year anyway).
Also, I must admit to being confused on the point about “only 4 stations” in California being used. For what data set is that true? What have ss.org volunteers been taking pictures of in CA (i.e. what dataset are they included in)? And if that data does exist, isn’t there “still time” to analyze/process it and compare to the denuded data sets?
And do we have any idea how many of those stations “not included” in whichever dataset is being pointed at still exist with continuous records that could be processed now?

Rational Debate
January 27, 2010 2:19 pm

It would be nice if the executive summary were added to the WUWT article here – particularly for those with slower computers or very little computer memory. Especially as the article itself is so short (e.g., if it were already long I could see not wanting to add to it, but with it being very short, the addition of an exec. summary would be fine I’d think…).
Thanks much for considering adding it here!

January 27, 2010 2:27 pm

Murray,
Your two points are somewhat incorrect.
1) It doesn’t matter that much what the absolute temperature of any given station is, since the global anomaly is calculated with respect to local anomalies, not absolute temps. So if stations with discontinuous records in GHCN tend to have a colder absolute temperature than stations with continuous records, it will have no real effect on the global anomaly as long as the change in temps over time is unrelated to the baseline temp (see the sketch at the end of this comment). If anything, I’d imagine colder places would warm faster than already hot places, all things being equal. That said, I haven’t looked enough into the altitude of continuous vs. discontinuous stations to comment more on whether the general claim is correct or not.
Still, as I mention in the article, the proper way to approach the problem is to compare anomalies in continuous and discontinuous stations. I make a first pass at it in my article, and I’d welcome folks to improve it by adding some sort of geographic weighting, such that you don’t start getting weird effects when the number of discontinuous stations gets small.
2) The adjustment graph doesn’t show the magnitude of the adjustments per se, but rather it shows the distribution of how the adjustments modify the trends. This means that if the data was adjusted down in the past and up in the present, it would have a large effect on the temperature trend over the full period.
Now, where GG’s graph does have shortcomings is that it does not account for the timeframe of adjustments, but rather it looks at the net effect on the trend over the full period of measurements for each individual station. This means that the net effect of adjustments for any discrete time period (say, 1970 to present or 1900-1950) could be different.
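Here is the promised sketch of point 1 – all numbers invented, and this is emphatically not the actual GISS/NCDC code – showing why a station that is cold in absolute terms can drop out without shifting an anomaly-based average, so long as it shares its neighbors’ trend:

import numpy as np

years = np.arange(1951, 2011, dtype=float)
trend = 0.02                                  # assumed shared trend, C per year
warm = 15.0 + trend * (years - years[0])      # warm-climate station
cold = -5.0 + trend * (years - years[0])      # cold-climate station
base = slice(0, 30)                           # 1951-1980 baseline period

for label, w, c in (("absolute", warm, cold),
                    ("anomaly", warm - warm[base].mean(),
                     cold - cold[base].mean())):
    mean = np.where(years < 1990, (w + c) / 2, w)   # cold station drops out in 1990
    jump = mean[years == 1990][0] - mean[years == 1989][0]
    print("%8s average: 1989-to-1990 jump = %+.2f C" % (label, jump))

The absolute average jumps by about 10 C the year the cold station disappears; the anomaly average barely moves.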

geo
January 27, 2010 2:30 pm

Let’s put it this way – if it had said no evidence of global warming from 1940 (or 1934), then I could support that with the rest of the report. Or if it had said “we can’t be sure *how much*” because of data problems, I could support that.
But if 1900-1934 is in there, I can’t make “20th century” work from the report itself. Unless a lot of unexplained (in the rest of the report) weight is being put on “significant”.

Paul K2
January 27, 2010 3:04 pm

Murray (12:45:28) :
Paul K2, regardless of funding, the Yale report you reference has 2 obvious problems
1) a misunderstanding of d’aleo/Smith. They have not said that stations with a lower warming trend have been dropped. They are saying that the percentage of cooler stations (regardless of the trend) in the average has dropped, causing the average to warm.
My response: Let me get this straight; you believe that an average temperature is being calculated from all the station data? That all the temperatures are being added up, and divided by the number of stations? This is apparently what D’Aleo and Watts seem to think is happening. The statements and methods in this report show that.
This seems like an extraordinarily difficult task. In order to get the average temperature for Pennsylvania, we would need to estimate the temperature for every square kilometer (for example), and divide by the area of Pennsylvania. Tough to do, since there is going to be a lot of temperature variation across the state, from the Delaware Bay and Lake Erie influenced areas, to the icebox section in the northcentral.
Wouldn’t it be easier if we simply looked at each station’s data, identified a trend, and calculated an anomaly for that station? Then we could grid the state, and use a formula to assign the anomaly to the grids that are near that station? Then over time, if a station changes, say from morning to afternoon readings, we could put an adjustment in, similar to other stations with AM to PM changes? (A sketch of that gridding step follows below.)
Then the actual absolute temperature distribution shouldn’t have a major impact on the anomaly calculated for the state. Since this report clearly says the distribution of cooler and warmer stations is important, then the authors seem to think that an average temperature is being calculated.
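A rough sketch of that gridding step (the stations, anomalies, and inverse-distance formula are all invented for illustration; this is not the actual GISS or NCDC algorithm):

from math import hypot

# lat, lon, anomaly in C -- all made up for illustration
stations = {"Erie": (42.1, -80.1, +0.4),
            "Philadelphia": (40.0, -75.2, +0.7),
            "Bradford": (41.8, -78.6, +0.1)}

def cell_anomaly(lat, lon):
    """Inverse-distance-weighted anomaly for one grid cell."""
    w_sum = a_sum = 0.0
    for s_lat, s_lon, anom in stations.values():
        d = hypot(lat - s_lat, lon - s_lon) + 1e-6  # crude flat-map distance
        w_sum += 1.0 / d
        a_sum += anom / d
    return a_sum / w_sum

for lat in (40.0, 41.0, 42.0):   # a toy three-cell "grid" of the state
    print("cell at %.1fN 77.5W: %+.2f C" % (lat, cell_anomaly(lat, -77.5)))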
Hope to see your answers soon.

David
January 27, 2010 3:06 pm

Regarding hunter (12:40:44) :
Of course they say that; they said the same thing about the hockey stick. Give it some time, let the minor errors get corrected, let it force some much needed revelations on still undisclosed adjustments, let the process continue. Read the critical comments and the responses. Time will tell, but in science, blanket statements like the ones this man made are, well, simply unscientific.

David
January 27, 2010 3:22 pm

regarding Nick Stokes (14:05:39) :
“I did a simple calculation. Just the average temperature of all stations in the GHCN set v2.mean, for any year. You might expect a small rise reflecting global warming. But if there is nett movement of stations to warmer climes, that should show as a bigger effect.”
Is this two sets combined showing the difference – one with the current stations, one with all the stations?
The chart heading does not make it clear. Are you saying the reduced number of stations currently used are actually cooler than the stations dropped?

steven mosher
January 27, 2010 3:24 pm

jonk (12:58:46) :
The follow up article is slightly humorous, especially Gavin’s contribution.
http://www.vancouversun.com/technology/Incomplete+data+mean+warming+w
Well, Gavin’s claim is easy to check. It might make a nice blog post for somebody.
Just look at RSS from 60N to 82.5N.
Then compare with the land.
Also, Gavin makes an interesting argument about only using one station.
Hmm I wonder if somebody could pick a station with a cooling trend.

January 27, 2010 3:24 pm

Paul K2 (15:04:12):

Wouldn’t it be easier if we simply looked at each station’s data, identified a trend, and calculated an anomaly for that station? Then we could grid the state, and use a formula to assign the anomaly to the grids that are near that station? Then over time, if a station changes, say from morning to afternoon readings, we could put an adjustment in, similar to other stations with AM to PM changes?

Yeah, we could do a lot of things. But the problem isn’t on this end. You could ring up James Hansen over at GISS, and ask him how he “adjusts” past temperatures. Here are some Illinois stations. Notice the shenanigans: click

Andrew30
January 27, 2010 3:25 pm

Ray (13:43:17) :
“The big issue will be that all researchers that linked their research and publications to IPCC or CRU or GISS data sets can now, and should, be removed from the peer-reviewed literature.”
Or buried in peat.
Anthropogenic Global Warming Virus Alert
http://www.thespoof.com/news/spoof.cfm?headline=s5i64103

David
January 27, 2010 3:34 pm

Regarding Paul K2 (15:04:12) :
Could it not also be possible that they are saying that if the remaining stations have an existing or increasing UHI effect, or an existing true warming relative to other regions, then those anomalies, legitimate or not, would show a warming, and the anomaly estimates from those stations, transposed to the no-longer-used rural stations, could artificially raise that anomaly as well?
Additionally, if the dropped stations increased in the recent past, then the remaining UHI effects could potentially raise the overall anomaly trend, especially if the past was adjusted down, and this would be further emphasised if the biggest downward adjustments were during the late 1930s warm period.

January 27, 2010 3:43 pm

Thanks for a great job in putting this all together.
You mentioned that this has not been peer-reviewed. Have the GISS, CRU and NCDC climate records been peer-reviewed? I am aware that some facets, such as UHI effects, have been published, but have the overall records and methodology been reviewed?

David
January 27, 2010 3:44 pm

Paul K2 (15:04:12):
“Wouldn’t it be easier if we simply looked at each station’s data, identified a trend, and calculated an anomaly for that station? Then we could grid the state, and use a formula to assign the anomaly to the grids that are near that station? Then over time, if a station changes, say from morning to afternoon readings, we could put an adjustment in, similar to other stations with AM to PM changes?”
Hmm? Here is the thing: a formula to assign the anomaly to the grids from nearby stations?? Maybe – it depends on the formula. Maybe it also depends on which stations you drop relative to recent changes in all the stations currently in the average anomaly. If you drop the stations with a smaller anomaly, keep the ones with more warming as gauged by their anomaly, UHI or not, and transpose their anomaly to the grids of the dropped stations, you may have problems and a biased warming. Just theoretical, of course.

January 27, 2010 3:57 pm

I mentioned in an earlier comment that a series with fewer high latitude stations vis-a-vis low latitude stations would show less warming than a series with more high latitude stations since colder places seem to be warming faster than warm places. The folks at NCDC told me something similar:
“By the way – the absence of any high elevation or high latitude stations would likely only serve to create a cold bias in the global temperature average because we calculate the gridded and global averages using anomalies – not absolute station temperatures – as I explained in the information in my earlier e-mail to you. Anomalies for stations in areas of high latitudes and high elevations are typically some of the largest anomalies in the world because temperatures are warming at the greatest rates in those areas. So the suggestion that the absence of station data in these areas creates an artificial warm bias is completely false.”
However, not wanting to rely on their word alone, I figured I’d do the analysis myself, looking at the mean annual anomaly across the raw data from all stations at > 60 degrees latitude (both north and south) and <= 60 degrees latitude. You can find the source code here: http://drop.io/2pqk4vg (see the lat lon version of the do file).
The results? http://i81.photobucket.com/albums/j237/hausfath/Picture67.png
Looks like higher latitude stations do show a significantly larger warming trend (0.28 C per decade vs 0.18 C per decade since 1960).
I expect altitude will have similar effects, but I still need to check.

Nick Stokes
January 27, 2010 3:57 pm

” David (15:22:35) :
Is this two sets combined showing the difference – one with the current stations, one with all the stations?
The chart heading does not make it clear. Are you saying the reduced number of stations currently used are actually cooler than the stations dropped?”

No, it’s just the average temperature of all the GHCN stations being used in each year, plotted by years. If this report is right, that should be increasing as cooler stations are “dropped”. It isn’t; it is decreasing.
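For anyone who wants to reproduce it, here is a condensed sketch of that calculation in Python. The fixed-column slices are my reading of the v2.mean layout (11-char station id, 1-char duplicate flag, 4-char year, then twelve 5-char monthly values in tenths of a degree, with -9999 marking missing months) – check them against the GHCN format notes before trusting the output:

from collections import defaultdict

total = defaultdict(float)   # year -> sum of monthly temps, C
count = defaultdict(int)     # year -> number of monthly values

with open("v2.mean") as f:
    for line in f:
        year = int(line[12:16])
        for m in range(12):
            v = int(line[16 + 5 * m : 21 + 5 * m])
            if v != -9999:                 # skip missing months
                total[year] += v / 10.0    # values stored in tenths of a degree C
                count[year] += 1

for year in sorted(total):
    print(year, round(total[year] / count[year], 2))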

Paul K2
January 27, 2010 4:04 pm

David, yes of course! I am so uninformed. Clearly I don’t understand this.
So help me out; which pages in this report show a comparison of the anomalies for the dropped stations versus the stations that were kept? Then we can all get on the same page, so to speak.
But maybe there’s good news: I stumbled across some work that has been done on the continental US stations, comparing subsets of the stations, especially the bad ones and the good ones, so we can look there to see the big differences that are caused by station selection.
http://www1.ncdc.noaa.gov/pub/data/ushcn/v2/monthly/menne-etal2010.pdf
I am having trouble seeing your point by looking at the graphs at the end of that article… Perhaps you can point out the graph that shows the big change due to bad stations being retained over good stations?
As always, your humble servant…

rbateman
January 27, 2010 4:09 pm

Nick Stokes (15:57:48) :
If, after lopping off all the cooler stations, the GHCN averages continue to fall, it can only mean one thing:
There ain’t no Global Warming going on, and even more to the point, there’s a whole lot of Global Cooling.

rbateman
January 27, 2010 4:12 pm

Seems to be a lot of trolls about. The publication must have struck a nerve.
They didn’t have to write all that to convince me, because after over a year of following along and going over station data myself, it’s all too obvious.
I have a “Harry_Read_Me” headache.
What a mess !!

January 27, 2010 4:28 pm

You may want to have a look at some scatter plots and animations I created from raw GHCN V2 temperature data, anomalies and station locations here:
http://globaltemps.wordpress.com
While they sure didn’t uncover any scandals, they might be useful to expose the complete dataset in its chaotic variability.

January 27, 2010 4:45 pm

I agree with Pamela Gray (06:31:29). The message of the article is very powerful. But a dispassionate style would be much more effective, along with use of the scientific passive. The polemical tone of the article is unfortunate, and will give partisans a hook to discredit the message.

January 27, 2010 4:49 pm

Is it possible that the Briffa decline that was hidden was correct? It visually appears to be similar to the climate models without AGW that were presented in IPCC AR4. (My suspicion is that Briffa knows he was right, but was brow-beaten by Mann et al., then embarrassed when he finally had to release his data, and that he is the deep throat behind the release of the CRU data. Now he is not available because he is ill. How’s that for a wild conspiracy theory?)
Second question: Is it possible that the lack of warming in the last decade is due to the lack of “adjustments” that can be made to the data or lack of stations to eliminate?

Deech56
January 27, 2010 4:50 pm

RE Bruce (15:43:14) :

Thanks for a great job in putting this all together.
You mentioned that this has not been peer-reviewed. Have the GISS, CRU and NCDC climate records been peer-reviewed? I am aware that some facets, such as UHI effects, have been published, but has the overall records & methodology been reviewed?

The basic GISTEMP, with references, is described here. The programs are also available on that site, but it’s also good to check Clear Climate Code by Nick Barnes, et al.

tokyoboy
January 27, 2010 4:50 pm

Great work Anthony! Hats off to your really perseverant efforts.
Just one point. I expected inclusion of more straightforward graphics, such as those Peter the 6th Grader and his Papa presented comparing rural and urban temp trends in a video of 9 Dec 2009, but could not find one, though enough information surrounding that issue is given in your Compendium…

Deech56
January 27, 2010 5:00 pm

RE Jan (12:54:02) :

Just for fun, and for MJK’s understanding, I also made this 2001-2010 chart.
There is the GISTEMP+CO2 slightly ascending trend, the UAH+RSS descending trend and of course the sunspot number descending trend.
I didn’t make it up – it’s the woodfortrees engine which makes such funny charts… 😉
BTW if you add HadCRUT3 it has the same trend as the satellites…

But are those trends from UAH and RSS significant? I doubt it.
I hope you aren’t trying to make some kind of solar-temp correlation based on only 10 years worth of data.

tokyoboy
January 27, 2010 5:08 pm

Dave McK (17:02:03) :

Yeah, that’s the video I referred to at 16:50:32.

dlr
January 27, 2010 5:12 pm

I think you’ve got a typo on page 7. It says “This divergence is not new and has been growing. NOAA proclaimed June 2008 to be the eighth-warmest for the globe in 129 years. Meanwhile NASA showed it was the 9th-coldest June in the 30 years of its record.”
I think you meant to say “RSS and UAH” not “NASA” in the second sentence.

Deech56
January 27, 2010 5:13 pm

RE boballab (14:10:34) :

luminous beauty (11:57:01) :
Maybe this map from NCDC of the GHCN temperature Anomalies from Jan-Dec 2008 will help:
http://www.ncdc.noaa.gov/img/climate/research/2008/dec/map-land-sfc-mntp-200801-200812-pg.gif

That’s a map of grids, not stations. LB’s question still stands.

Tom in Texas
January 27, 2010 5:17 pm

“Paul K2 (16:04:06) : I am having trouble seeing your point by looking at the graphs at the end of that article…”
I think you are having trouble for several reasons:
First, for some reason, Menne only used 40% of the USHCN stations in his “article”.
Since about 90% of the stations have been surveyed, I’d call this cherry picking #1.
Yes, I know, that’s all the information he was able to purloin at the time.
Second, the graphs start at 1980. USHCN sites were specifically chosen because their datasets extend back to the 1880’s. Why 1980? Cherry pick #2.

Jan
January 27, 2010 5:38 pm

“I hope you aren’t trying to make some kind of solar-temp correlation based on only 10 years worth of data.”
There is always a solar-temp correlation – a causal correlation – the sun is undoubtedly one of the crucial climate drivers; no doubt about it. Or do you contest the assumption that 99.999…% of radiant energy comes from the sun? Good luck.

January 27, 2010 5:39 pm

I think the video is actually more persuasive than the paper we’re talking about here. Good job.

Phil Clarke
January 27, 2010 5:46 pm

Apart from the worrying lack of understanding of the way global anomalies are calculated – something of a prerequisite, one would have thought – this document is littered with factual errors… here are three I found on a cursory read-through.
1. The CRU email quote is out of context. To quote John Nielsen-Gammon: “Mann, in his quote, was accusing Steve McIntyre of having little regard for the truth. By taking it out of context, D’Aleo and Watts are intentionally making it look like Mann is admitting to having little regard for the truth.”
2. “Ian “Harry” Harris, a programmer at the Climate Research Unit, kept extensive notes of the defects he had found in the data and computer programs that the CRU uses in the compilation of its global mean surface temperature anomaly dataset”. That would be CRU’s flagship product – HadCRUT – but Harris was working on upgrading CRU TS 2.1 (to v3.0), a completely different and much less widely-used product.
3. “Jones used data by Wang which Keenan has shown was fabricated.” Keenan did indeed allege scientific fraud by Wang, but Wang was investigated and cleared. In any case the fraud case was about station selection and not data fabrication.
Clarification, retractions and corrections are required if this is to be taken seriously.

David
January 27, 2010 6:59 pm

Paul, my response to you was mainly because of this post…
” Since this report clearly says the distribution of cooler and warmer stations is important, then the authors seem to think that an average temperature is being calculated.”
Attributing the misunderstanding of a blog commenter to the authors of the paper, as well as your intimating that anomalies could not be transposed via the methods you outlined, was the point of my post.
So at least I got you back on track with the link you posted. Tom in Texas (17:17:24) was an acceptable response.
And by the way, in non-peer-reviewed papers they do throw out mean averages, and there are any number of ways those could be affected.
Nick Stokes (15:57:48) : see rbateman (16:09:03) : It truly is amazing: if the “average temperature of all the GHCN stations being used in each year”, plotted by year, shows a decline, and the stations the faithful claim had the higher warm anomalies (high altitude) were dropped, where is the warming?
Phil Clarke (17:46:49) : assigning the confusion of bloggers to the authors is way not cool. Your other three assertions are just that – assertions. “Wang was investigated and cleared.” This was not via a statute of limitations, like in Climategate, so your assigning double jeopardy is also way not cool.

John from MN
January 27, 2010 7:27 pm

Anthony, Joe and Michael: Great work. But if you rework the paper, leave out the name calling and political undertone. Just stick to the facts. Don’t follow the other side’s mistake of embellishing the facts with name calling or disparaging, grandstanding adjectives……… Keep up the good work. Sincerely, John

Nick Stokes
January 27, 2010 8:00 pm

rbateman (16:09:03) :
Nick Stokes (15:57:48) :
If, after lopping off all the cooler stations, the GHCN averages continue to fall, it can only mean one thing:
There ain’t no Global Warming going on, and even more to the point, there’s a whole lot of Global Cooling.

Ah yes, the style of argument here:
“It can be shown that they systematically and purposefully, country by country, removed higher-latitude, higher-altitude and rural locations, all of which had a tendency to be cooler. “
Reality: GHCN station averages have declined.
Global Cooling! Yay! Told you so!

E.M.Smith
Editor
January 27, 2010 9:26 pm

Rhys Jaggar (02:40:36) : Mr Watts/Dr D’Aleo
To say that your paper represents a ‘smoking gun’ in the refutation of AGW would be akin to saying that Omaha beach was recaptured by one man and his dog…

Woof! 😉
Nick Stokes (04:55:40) : The report makes a very misleading comparison on p 12. It shows a GISS global temp map for April 1978, and a purported corresponding map for April 2008, to show how coverage has shrunk.
But that is an incomplete map for 2008. […]. Or you can generate it by going to this GISS page. There are a lot more stations than the preliminary map showed.

Oh please. Talk about misleading. You know better than most that 2 months ago GIStemp revamped their processing and now include USHCN.v2 stations (which they did not until 15 November 2009; since USHCN ‘cut off’ in May of 2007, there were, effectively, a few thousand stations ‘put back in’ via that move). So it is accurate to show 2008 as it looked in 2008 and not as reimagined 2 months ago. When making a document you stick a stake in the ground, gather your data, and write.
So pointing at the “now” version of GIStemp is just saying the walnut shell moved, and we’d all better keep an eye on it or we won’t find the pea. Yeah, right /sarcoff>
BTW, USHCN.v2 has the warming moved into the historical record. It’s “warmer” than USHCN. So the swap from USHCN (minus the most recent couple of years) to USHCN.v2 (with a colder past) is a nice “shell game” but not very good science. Trying to get folks to confound “what it was as written” with “what it is with the new improved 4 walnut shell monte” is just… oh. (SELF SNIP!).
There are nice ‘blink charts’ showing this at the links in this article:
http://chiefio.wordpress.com/2010/01/15/ushcn-vs-ushcn-version-2-more-induced-warmth/
So no, I don’t think we need to keep our eye on the walnut shells, no matter how many of them you trot out. We need to clear the board of such misdirections and look directly at the data. And a key part of that is stabilizing the constant changes over time (stop the walnut shells from moving) so we can see how this game is really being played.
Karl B. (07:11:41) : “Smith found that in New Zealand the only stations remaining had the words “water” or “warm” in the descriptor code. Some 84% of the sites are at airports, with the highest percentage in southern cold latitudes.”
What does that last sentence mean? It reads as if colder sites were kept, but I think you mean that the colder sites that were kept were airports, which are normally warmer?

If I remember the context of the original correctly, I was pointing out that the warming bias of excess airports was being placed in larger part in what ought to have been the colder places. i.e. Want to warm up ‘the cold bits’? Put the thermometer closer to the jet exhaust… Basically, the “most bang for the buck” with the least “visible oddity” (a very hot place up north, like one of the tropical islands, would stand out if the added warmth was too much).
From the original report here:
http://chiefio.wordpress.com/2009/12/08/ncdc-ghcn-airports-by-year-by-latitude/
This chart has the percentage of sites that are airports on the very far right. That is also shown as percentage of airports in each latitude band.
the part about New Zealand says:

First, New Zealand:
This chart is by latitude bands, from “South Pole up to 44 S” latitude to “above 36 S”, the last labeled “NP” (as in “everything to the North Pole…”)

      Year SP -44  -43  -42  -41  -40  -39  -38  -37  -36  -NP
DArPct: 1869  0.0 21.4 14.3  0.0  0.0  0.0  0.0 21.4  0.0  0.0 57.1
DArPct: 1879  0.0 23.1 19.2  0.0  0.0  0.0  0.0 19.2  0.0  0.0 61.5
DArPct: 1889  0.0 26.2  2.4  0.0  0.0  0.0  0.0 23.8  0.0  0.0 52.4
DArPct: 1899  0.0 21.7 13.0  0.0  0.0  0.0  0.0 21.7  0.0  0.0 56.5
DArPct: 1909  0.0 37.5 15.6  0.0  0.0  0.0  0.0 15.6  0.0  0.0 68.8
DArPct: 1919  0.0 33.3 16.7  0.0  0.0  0.0  0.0 16.7  0.0  0.0 66.7
DArPct: 1929  0.0 20.0 20.0  0.0  0.0  0.0  0.0 20.0  0.0  0.0 60.0
DArPct: 1939  0.0 23.1 19.2  0.0  0.0  0.0  0.0 19.2  0.0  0.0 61.5
DArPct: 1949  0.0 27.8  9.3  0.0  0.0  0.0  0.0  9.3  0.0  0.0 46.3
DArPct: 1959  6.1 23.5  4.7  4.2  0.0  4.2  0.0  4.7  4.2  0.0 51.6
DArPct: 1969  9.8 20.6  3.5  3.5  0.0  3.5  2.8  4.9  3.5  3.1 55.2
DArPct: 1979 11.6 20.3  5.7  3.0  0.0  5.7  3.0  6.0  3.0  5.7 63.9
DArPct: 1989 11.3 24.8  5.0  0.9  0.0  5.0  4.5 10.4  0.5  4.5 66.7
DArPct: 1999 12.3 23.6  9.4  0.0  0.0  9.4  9.4  6.6  0.0  9.4 80.2
DArPct: 2009 12.0 24.1 12.0  0.0  0.0 12.0 12.0  0.0  0.0 12.0 84.3
For COUNTRY CODE: 507
From source ./vetted/v2.inv.id.withlat

Clearly the 50 percent early values are places that started life as “flat but not an airport” fields and later got tarmac.
(I explained this earlier in the posting. Due to the, frankly, ignorant data structure design, you get ONE status flag for many things that change constantly over time. So any place that is an airport today is called an airport for all prior history. Even in the 1800’s… So this report is, by nature, very conservative. Those early years of 50%+ airports are clearly NOT airports in those years; there were really none then. But since we KNOW it was zero airports, I went ahead and included it as an interesting indication of how broken the GHCN data structure is, and because it does inform about land use changes over time, in an amusing sort of way. OK, caveat out of the way, back to the quote.)

The startling value is that ending value of 84.3% Airports in New Zealand. This is one of the highest I’ve seen so far. The temperature in New Zealand IS the temperature at the airports. Especially the most southern, cold, latitudes. In fact, if we zoom in on that last year:

LATpct: 2009 12.5 25.0 12.5  0.0  0.0 12.5 12.5  0.0  0.0 25.0 100.0
AIRpct:      12.5 25.0 12.5  0.0  0.0 12.5 12.5  0.0  0.0 12.5 87.5

That LATpct is the percentage of total thermometers in a given latitude band. There is only one value, the most northerly, that has a higher percentage than the airports percentage. It looks like there is ONE non-airport thermometer in New Zealand. Looking at those stations still active in 2009, we find it is Raoul Island:
So the whole point here is that the one thermometer that is not subject to jet turbine heating pollution is on a nice more tropical island… but those colder more southern locations, well, they are snuggled up to the airplanes.
FWIW, a look at New Zealand temperatures is here:
http://chiefio.wordpress.com/2009/11/01/new-zealand-polynesian-polarphobia/
That includes an exercise of taking Campbell Island out of the whole data set (i.e. not leaving that cold station in the baseline) so a more consistent set of places is averaged to see what’s going on. The result is that New Zealand shows no warming in the base data. The “warming signal” is largely carried by leaving Campbell Island in the baseline but taking it out of the present… Couple this with the Airports Percentage and I’d even hazard a guess that New Zealand may well be actually cooling over time.
That would take a more fine grained study, but it is clear that the base data do not support any warming hypothesis. (Unless, of course, you run them through the meat grinder and cook them in the oven… then hide the process with a final ‘anomaly’ at the end…)
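The Campbell Island arithmetic is simple enough to show in a few lines (temperatures invented, both stations assumed dead flat, so any “anomaly” that appears is pure artifact):

warm_station = 15.0   # invented flat mean temp, C
cold_station = 7.0    # invented flat mean temp, C (the "Campbell Island")

baseline = (warm_station + cold_station) / 2   # cold station IS in the baseline
recent = warm_station                          # cold station is NOT in the present
print("baseline mean   : %.1f C" % baseline)
print("recent mean     : %.1f C" % recent)
print("apparent anomaly: %+.1f C, with zero real warming" % (recent - baseline))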

E.M.Smith
Editor
January 27, 2010 9:51 pm

MJK (08:59:27) : Still no response to my post (MJK 6:26:30) regarding your failure to provide a supporting reference for the assertion in your report that there has been cooling since 2001.
Well, last December was certainly cool:
http://chiefio.files.wordpress.com/2010/01/ghcn_giss-dec_250km_anom12_2009_2009_1991_2006.gif?w=500&h=294
as described in this posting:
http://chiefio.wordpress.com/2010/01/27/temperatures-now-compared-to-maintained-ghcn/
That chart is the December “anomaly map” with a ‘currently used data’ baseline of 1991-2006 and a 250 km “spread” to the data (the smallest GISS lets you make). Note that the Arctic comes from optimal interpolations of temperature estimates from ice estimates from satellites… so the Arctic red is basically a permanent fantasy and is not in any way a temperature or a temperature anomaly… Notice the deep purple over North America and Asia?

rbateman
January 27, 2010 10:15 pm

Bruce (16:49:14) :
Briffa could certainly have motive if indeed he was fed to the dogs.
In which case he who laughs last…
Your 2nd point: There is the lament that ‘it is a travesty we cannot account for the recent decline’, meaning that if they pushed it any further, the wheels would come off. CRU leaked; IPCC and NOAA have pushed it too far.
The wheels have now come off.

E.M.Smith
Editor
January 27, 2010 10:23 pm

mikelorrey (07:54:54) :
Thank you! I knew something had happened. I knew there had to have been a meeting and a decision made. I had not had the time to chase down exactly who and when. You have. Bravo!
Richard M (07:58:21) : Next, I too appreciate the work of EM Smith. However, I am a bit worried. I am also a software engineer and have commented more than once on the sad shape of the GISS software. This problem makes me concerned about having a single source of examination for exactly the same reasons I question the correctness of the GISS code itself. I know I wouldn’t trust myself and, although I would say without hesitation that EM Smith appears much more meticulous than I, it still leaves me uncomfortable.
Thanks for the positive evaluation of work quality. BTW, I publish methods and software specifically so that folks can double check my homework. I’ve had several folks duplicate key parts (often with entirely different hardware and software platforms).
Frankly, nothing I’ve done is all that complicated. It’s just “meat and potatoes”: characterize the data (originally for benchmarking GIStemp; then I got pulled off into looking at GHCN based on what the benchmark evaluation said about the pattern of the input data…). Most of it can be done in Excel if folks wanted to. (In fact, some of the confirmation I’ve had is from folks using MS and Excel… though at least 2 groups have now done database implementations.)
There is an interesting investigation based on one of those databases going on here:
http://diggingintheclay.blogspot.com/2010/01/climate-database-development.html
With a look at the cool map interface to the data under development here:
http://82.42.138.62/GISSMaps/stationtrendsraw.asp
and with some rather interesting graphics in their report here:
http://diggingintheclay.blogspot.com/2010/01/mapping-global-warming.html
with some, IMHO, spectacular conclusions from the visualization.
While I’m happy to think I might have in some small way helped them to do what they are doing, my decision to focus on ‘minimal change to GIStemp FORTRAN’ for maximal validity of benchmarking has left the field clear for folks to do this much more “sexy” and much more valuable approach.
It’s what I would have liked to have done had I not been doing “other things”… A real database with a real graphical interface.
So, want confirmation? Go “knock yourself out”… and enjoy the much more visual approach they have built. (Sigh… I have “graphics envy” again 😉
But back to what I’ve done:
Further, I’ve done some of this on two different platforms (Mac and Linux) with two different methods (FORTRAN and things like grep / wc) with the same result. Best I could do at turning up any ‘pathological failure mode’ on my own. I also tend to ‘spot check’ the program results. So when I run a bit of FORTRAN that sums, oh, air stations by latitude, then I’ll pick a place like New Zealand where the absolute number is small enough to do by hand, and do it by hand. The two results have to match. (And I usually do this for more than 2 test cases… belt and suspenders…)
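As one example, a New Zealand tally can be cross-checked against the inventory file in a couple of lines (country code 507 per the table above; I’m assuming a standard v2.inv in the working directory):

with open("v2.inv") as f:
    nz = [line for line in f if line.startswith("507")]
print(len(nz), "inventory entries carry country code 507")

If that count doesn’t match what the FORTRAN said, one of the two is wrong.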

rbateman
January 27, 2010 10:58 pm

E.M.Smith (21:26:23) :
A question for you:
In attempting to come up with values to plug holes in raw data sets should I
1.) take the slope between the previous and following data points or
2.) use the average hi/low values for the date in the stations history.
3.) Your suggestion

E.M.Smith
Editor
January 27, 2010 11:06 pm

PaulH from Scotland (08:01:58) :
I’m going to interleave my comments with Gavin’s non-response response. The summary is “the anomaly will save us”, along with the usual mythology about a station anomaly being computed early and only against itself, when that is NOT what the code does. It is, in essence, a self delusion. What the published papers showed may have validity, but it is NOT what the code does.

[Response: This is, was, and forever will be, nonsense.

Nice slammed shut mind. It will never find the error staring it in the face.
The temperature analyses are not averages of all the stations absolute temperature. Instead, they calculate how much warmer or colder a place is compared to the long term record at that location.
The PAPERS that support the Reference Station Method and the anomaly process may well have done “selfing” but that is NOT what is done in GIStemp. This is a common delusion among warmers and one they cherish dearly.
The reality is that a thermometer anomaly IS calculated against a “basket of others”. It is also a reality that this is done long after all the ‘in-fill’, homogenizing and UHI calculations. Basically, the “anomaly” cannot protect you from all the broken bits done before it is calculated (a bare-bones illustration is below).
Until they get past the fantasy of what they believe is being done and look at what the code actually does do, they will get nowhere.
This “Belief in the Anomaly” is just that. A “Faith Based Belief” in how they think the world works. It is not based on an inspection of what the GIStemp code actually does. Since they then go on to describe the miracles worked by the fantasy anomaly (that is not the one the code does) I’m not going to bother commenting on those bits…
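The point is easy to show with a bare-bones example (series invented): push a spurious drift into the data before the anomaly step, and the anomaly step passes it straight through.

import numpy as np

years = np.arange(1951, 2011, dtype=float)
raw = np.full(years.size, 14.0)               # assume a flat true record
adjusted = raw + 0.01 * (years - years[0])    # upstream 'in-fill/homogenize' drift
anom = adjusted - adjusted[:30].mean()        # anomaly vs a 1951-1980 baseline
print("trend surviving the anomaly step: %+.3f C/decade"
      % (np.polyfit(years, anom, 1)[0] * 10))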

This anomaly turns out to be well correlated across long distances – which serves as a check on nearby stations and as a way to credibly fill in data poor regions.

This, of course, will fail when the “nearby stations” are up to 1000 km away (and things like the PDO were on one phase during the baseline 30 years; and a different phase now, breaking the old pattern of correlation…) and have changed over time (that is, the baseline was made between 2 rural areas and the survivor bias shows up when the survivor is, oh, an Airport and it fills in a cow pasture…). But why look at reality…
Me, I prefer existence proofs:
1) Pisa Italy gets a “wrong way anomaly” UHI correction of 1.4 C in the wrong direction. Busted…
2) The benchmark shows that the actual stations used DOES matter. Busted…
3) Airports now make up 92% of the GHCN for the USA (and as we saw above, ALL of New Zealand other than Raoul Island). Good luck getting a nice pristine “rural reference station” for use in that UHI correction… and that then means that the “anomaly boxes” made from those stations are ALL comparing an airport today to an open area not filled with jet exhaust and tarmac in the past. Your paper no longer applies… Busted… (HOW can you do a “reference station method” where there is NO rural reference station? The GIStemp code just passes a station through if it can’t find enough ‘reference stations’. So your choices are a) Wrong way via airports or b) NO UHI correction due to not enough reference stations.)
Theory, meet reality. Reality wins.
There has been no deliberate reduction in temperature stations, rather the change over time is simply a function of how the data set was created in the first place (from 31 different datasets, only 3 of which update in real time).
See above about Wunderground having no problem getting real time data. But this basically is just saying “sin of omission not commission”. Frankly, HOW the sin was done does not interest me that much (unless I’m hired as a forensics investigator… then it becomes much fun!) It also ignores things like USHCN where they clearly DID have the ‘real time data’ and chose not to use it.
Read Peterson and Vose (1997) or NCDC’s good description of their procedures or Zeke Hausfather’s very good explanation of the real issues on the Yale Forum. – gavin]
Appeal to authority and deflection to a set of procedures that do NOT address the actual facts on the ground.
Frankly, what is written in some paper is only useful for showing what fantasy someone THINKS is happening. It’s what is really done to the data that matters. So somebody has a written procedure. Good for them. I’ve run the data (their data) through the code (their code) and the anomaly changes. Reality just is.
It’s about as useful as an annual report from Lehman Brothers or Bernie Madoff. Nice fantasy. I’d like to look in the vault now…
So look, these folks REALLY do believe this stuff. It’s a simple pat answer and they don’t have to bother reading FORTRAN or doing benchmarks. I’ve got NO problem with that. Leave me and the economy alone and you can indulge in that all you want.
Where I have a problem is when the guys who wrote and run the code did not do any QA benchmarks and have not bothered to show that the code does what the paper validated. (it doesn’t) Then they want to take their computer fantasies and tell me what car I can drive, what food I can eat, and how much my heating bill is going to be. Sorry, but “no”.
FWIW, this is entirely normal.
I’ve never met a computer programmer who didn’t think “this time for sure” after a “one line fix”. I’ve never met a researcher who did not believe they had found a basic truth that could now be applied to all sorts of places beyond where it was demonstrated. But that does not make them right.
SIDEBAR: The Devil’s Data Processing Dictionary has a definition for “one line fix”. It’s a single line change of code that will fix a bug with certainty. It will also have the bug it introduces fixed by the NEXT “one line fix”…

Nick Stokes
January 27, 2010 11:20 pm

E.M.Smith (21:26:23) :
“2 months ago GIStemp revamped their processing and now include USHCN.v2 stations”

That’s an absurd excuse for the difference between this purported plot from the report, and this current GISS plot. Both have the US pretty much covered. The differences are in places like Africa, Australia and Canada. No USHCN there.

Editor
January 27, 2010 11:21 pm

0220 here. Sleep is definitely overrated.

E.M.Smith
Editor
January 27, 2010 11:21 pm

Doug S (08:29:08) : I would dearly like to see the code that is used to calculate this (assuming I have their explanation correct). This approach seems like it would have many challenges to model correctly – so many variables to account for.
The code is “up” on my site for casual observation. I also have a link to the NASA download site if you want a full set. The version I have up is from “prior to 15 Nov 2009” and the download link is a newer version that I’ve not inspected yet (been building a new machine to run it on…).
See: http://chiefio.wordpress.com/gistemp/
as a general entry point. It is more ‘human oriented’ but has a “geek corner” down at the bottom. A general technical brief look is in this link, along with a link to the download location at NASA here:
http://chiefio.wordpress.com/2009/02/25/inside-gistemp-an-overview/
Be advised. It’s pretty messy code…

Patrick Davis
January 27, 2010 11:24 pm

Tut tut tut. NIWA in New Zealand will have lots of tricky questions to answer in the coming years. (Good, I say – it’s about time their bad science was exposed. NZ$800k to move some native worms from one place to another so a road could be extended is one extreme, IMO, example.)
“brc (02:31:41) :
I assume you’ll use your new vote wisely at the next election!”
I’ve made it crystal clear to the main (Australian) parties that if they want my vote, they’ll need to earn it (after all, I qualified for and earned citizenship; I now hold 3). And, as I am sure you know, *ALL* the main parties are in on the ground floor and support an ETS in one form or another. Labour and KRudd747 are one-hit wonders – that was my prediction when they came to power – and they won’t win this year if an election is called. The Greens are less than useless IMO. The Liberals are slimy so-and-so’s and were the main party to think about an ETS in the 1990’s (Howard trying to capture the “Green Vote”; in fact Labour’s CPRS is the Liberals’ ETS, just with different shorts on). They are all as bad as one another IMO.
February will be an interesting month for the Senate.

Patrick Davis
January 27, 2010 11:46 pm

Sorry, it was DoC in New Zealand not NIWA re: NZ$800k wormgate (I couldn’t help myself). Regardless, NIWA have manipulated temperature data, ignoring site issues etc.

Andrew P.
January 27, 2010 11:49 pm

Tom Graney (10:44:23) :
@Smokey;
I agree that a rural station is less likely to have extraneous heat effects, and if you shift away from stations with no heat effects to a population of sites that is gradually gaining heat effects then this will cause the system to exhibit an upward bias in temperature over time. But, the heat effects are not cumulative; a parking lot, once constructed, is not going to continue to influence the trend so over time the impact of these heat affected sites is going to peter out.

Yes but if development / land use change continues in the vicinity the effect will be cumulative. The magnitude of the UHI effect is dependent on a number of factors but one of them is the size of the settlement. In my village (pop. 2500) it is only about 1C but in Edinburgh (pop. 500,000) it is at least 2C, and in London (c. 8,000,000) it is typically about 5 or 6C. So as settlements grow, and buildings become more developed with central heating / air conditioning systems (which has unquestionably been the global trend) I would argue that the UHI effect has become cumulative. You also have to bear in mind that UHI isn’t just about having the potential to artificially raise maximum temperatures, but more significant is the effect it has on reducing the extremes of night time minimums; sun-warmed asphalt/concrete, radiation from warm buildings and warm air from AC vents are the key to the UHI effect. This graph showing the seasonal trend in Salehard in Russia shows a dramatic rise in winter temperatures in the last 10 years – which I would suggest correlates with the economic recovery of the region after the hardships and subsequent collapse of the Soviet empire:
http://www.neutralpedia.com/wiki/File:Salehard_seasonal.gif

Editor
January 28, 2010 12:04 am

E.M.Smith (23:06:45) :
Ah, now I REALLY get it. And from that huge change in my understanding I can now see why the anomaly is so flawed – it magnifies all the bias you have seen and I am starting to see (by a different method). Steven Mosher (this thread/another thread?) is right – we really need a flow chart of what is done by each organisation that does its own adjustment.

Editor
January 28, 2010 12:28 am

E.M.Smith (22:23:48) :
Thanks for the lovely comments about the database and maps. Graphics envy, eh? Well I have offered…. And you are right that this would not have even got off the ground without your efforts.

E.M.Smith
Editor
January 28, 2010 2:08 am

rbateman (22:58:49) : A question for you:
In attempting to come up with values to plug holes in raw data sets should I
1.) take the slope between the previous and following data points or
2.) use the average hi/low values for the date in the stations history.
3.) Your suggestion

Oh Dear. I really hate the “right way to make up data” question….
Whenever possible, I’d rather just leave the hole. It is all you really “know”. But, if you MUST fill in: It depends a lot on the particular data sets, the particular processes they will be used to support, and the particular goals of the analysis. FWIW, this “issue” often comes up in stock trading. You have discontinuities for most data between each market day and it’s worse over weekends and holidays…
OK. The “default” is a straight line connecting the two dots you do have. So I’m assuming you are talking about temperatures, not stock prices. You draw the line between them and look for the time intercept. That’s your data.
(Tuesday: 20 F Wed: 25F Th: blank Fri: 30F – fill in Th with 27.5 )
That’s your #1 I think.
Now your #2 implies not a 25 F on Wed and a 30 F on Friday but a H/L set for each. Now you have lots of choices… Or does your #2 say you have a H/L for TH ? ….
If you have a H/L for Thurs, then you average them (that’s what NOAA or any other provider does to get the “daily mean”) – even though it isn’t a daily mean… Imagine a station at the bottom of a steep canyon, where the solar-heated high might last all of 2 hours, not 12; or imagine a 50 F drop in about an hour as a front moves through, then the clock changes to the next day: the shape of the daily curve is ignored, but it does change the actual mean as compared to the H/L average. But “everybody does it” and I doubt that the difference between an actual area-under-the-curve mean vs a H/L average will matter too much most of the time… (bald assumption…) So, for all practical purposes and to be in conformance with the OTHER NOAA products, I’d do the H/L average to calculate the daily “mean”… if I have the daily H/L data for a date that is missing a “mean”.
But what if you meant “H/L for Wed and Fri” …
Now you get some interesting choices:
You can average the H/L for each day and put the slope between those two averages.
You can put a slope from H to H and from L to L and get a synthetic H and a synthetic L for Thu. Then average the sH and sL to get a sMean…
(you could do one thing at one end of the line and another thing at the other end. Probably only useful if you have different data missing from Wed than from Fri… W high to F mean. W low to F mean. s1/2H s1/2L averaged… )
And you could also get really fancy and do longer-term things, like looking at delta slope H vs delta slope L over longer periods and projecting the likely delta slope during the individual day. So, for example, you might have a dead flat low with nighttime fog, but a H that was decelerating toward that low as the fog filled in the daytime. That would imply (if Fri H was almost the same as Fri L) that the Thu H ought to be a bit closer to the Thu L than a straight-line fit from Wed H to Fri H would have given (i.e. more of the drop would have happened in the last 1/2 of the prior day…).
Do all of these minutiae of tea-leaf reading really matter? I doubt it. For most money issues a straight line interpolation is fine. I have trouble thinking of what would be so dramatic in temperatures (but a “weather guy” would be consulted before I’d just pick one and not tell the customer…)
For #3, what would I do? I think I’d choose a straight line from Wed H to Fri H for a Thu sH and a straight line from Wed L to Fri L for a Thu sL, then average those two. I REALLY don’t like this slamming together highs and lows into a single ball of goo. It hides too much information in that single daily average. In fact, were I doing a “GIStemp like” temperature series, I’d do it with Highs and Lows kept through the whole thing. Why?
IF, for example, we had “global warming” that had summers holding steady at just about the same temperature highs as they always had; but the winter lows were being clipped so that -30 F days were fewer and we only had -10 F days instead, well, frankly “Bring it on!”… And if the “warming” were such that daytime highs stayed at about, oh, 70 F where I am, but nighttime lows were being raised from 25 F to 35 F I’d again say “Bring it on!” (you can grow more stuff if you dodge the nightly frost…)
And that is one of the things that I find distressing about this whole Global Average Temperature number. It is just so … so… “useless”. It doesn’t tell me if I’m going to have a 120 F August afternoon (instead of 100F) or if I’m going to have a 15 F January night (instead of -5 F …) and one of those I’d be more than happy to have while the other, not so much!
BTW, the base data shows an interesting pattern. I’ve not done a look at the H and L yet, but the pattern of the averages over time shows a warming of the coldest parts of winters, but summers Do Not Warm. Personally, if it were shown to not be thermometer location driven, I think that it would have to be that 4th power radiation thing… No, I have no evidence for it. It’s just a self delusion at this point (but a pleasant one 😉 So while I think the whole CO2 thing is bogus, IFF there is ever an effect that actually happens, I think it would show up as a 4th power driven lid at about 20 C to 25 C ‘global average’ (GAK!); and with bottoms being raised as the blanket keeps a bit of heat in at night and in cold winters. All in all a very beneficial effect. Basically, a slightly warmer winter low would be FINE with me, and my plants… (but just as real in the data is the fact that the base data for individual places show plenty of stability but not much else… so that ‘CO2 warming winters’ fantasy is just personal speculation and not supported by the data… )
Oh, and the naive case of just averaging all the data for the globe and finding a trend. I did that early on. It’s not very interesting. You end up averaging a N. Hemisphere site that moves nearer to water (moderating temps) with a S. Hemisphere added station (say, at an airport) and masking the changes in both regions.
Where aggregate averages are interesting is in how they inform your ignorance. So you look at, oh, Africa. And you find a dramatic increase in temperatures in the early years. Then you look ‘by latitude’ and find that they move from the two ends (Mediterranean coast and South African coast) toward the Sahara. Then temps stabilize and not much changes. The actual pattern of the change of the average is what matters. And it clearly matches thermometer movements. THAT is the bias signal that a ‘temperature series’ must remove. It’s there. It can not be denied. And it is enormously larger than any supposed CO2 signal.
Lots of folks at that point want to say that I believe that average means something. I don’t. I think it tells you the problem you are trying to solve OR that there is no problem to solve.
So, Africa. It just is not warming. Sorry. Once the thermometer locations stabilize near the Sahara, it just sits there with a bit of ‘ripple’. Same thing for New Zealand. And Argentina. And and an… (Canada is interesting because the basic data do show a cooling trend, yet GIStemp makes this nice rosy red somehow… but I digress…) The basic story told by the thermometers is that they move. And when they stop, the temperatures stabilize. WHEN they stabilize changes from country to country. So for the CO2 thesis to ‘work’ it must somehow have a differential impact on each country working at different years in each. So this looking at subset averages has value. It tells you that you are trying to hear a CO2 whisper in a jet airport hurricane…
But average them all together and you hide too much. That station that moves from the mountains to the lake coast has the L rise, but also has H clipped. What does the “average” do? Does it matter? But look at the temperature profile by months of the year and you see the summers not moving up (moderation) and sometimes moving down, while the winters DO move up (moderation). And that is seen again and again in the base data.
So at the one extreme: an average of everything is useless. Yet smaller groups averaged together can show trends and inform our ignorance about where there is information carried in the base data. (August NOT warming, but January does? Hey, I call that a good thing; but it also is what happens when you move a location from an inland mountain to the beach… And an August that NEVER rises over 100+ years says that the CO2 ‘tipping point’ is just a fantasy. And THAT is what you see in the base data again and again.)
The more you look, the more you find that there is a ‘hard lid’ not a ‘tipping point’. I don’t really care if the lid is from moving thermometers to the beach or from a 4th power function of IR radiation. I still know that it isn’t a CO2 induced runaway feedback tipping point. (You can have amplification and a runaway tipping point, or you can have stability and dampening with added temps; but you can’t have dampening with a tipping point…)
So why all this long exposition? Because it points out that you need to know WHY you are averaging. What is the impact? What does it HIDE? (Every average hides something. That is what they are used for.)
So, you want to fill in a missing data point. What do you want to hide? The “probable High”? The “probable Low”? The likely change of the gap between them? (acceleration of one toward the other) The shape of the daily temperature profile? (Think of a sine wave vs. a nearly flat temp with a daily spike in the bottom of a canyon, like uuuu, vs. a series of nnnn shaped days with a brief night time cold moment in the desert heat. You hide those shapes with a H/L average.) What information are you willing to lose? What information do you WANT to lose? Does daily shape just distract from what you really want to see? Then hide it with a H/L average. Similarly, do you want “monthly shape” data? A closer approximation of the actual area under the monthly curve? Or will a monthly H / L average do? Is 30 days at 100 F and one at 50 F best represented with 75F or 99F for your purposes? What about if 28 days are missing and you have one day at 100F and the other at 50F? Did you use one answer to the first question and a different one for the second? How will you keep those two answers playing well with each other?
So in almost all cases a naive straight line fit of W ave to F ave will be just fine. In some cases you might want a Wed H to Fri H slope. In others you might want to have a Wed L to Fri L slope. In a very few, the sH and sL from them. And it all comes down to what do you want that ‘in fill’ to do and what do you want the averages used to hide?
What happens when a Canada express ran through on Thursday and was gone in 12 hours? Or a hot tornado cooks through, moving a lot of air? Your “in fill” and your average hide that you are ignorant of those events… If you know you want that ignorance and it’s a “feature”, then go ahead and average…
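For the concrete-minded, a minimal sketch in Python of the straight-line options above (illustrative only – this is not GIStemp code or anything from the report, and the function names are made up). With a full H/L at both ends the two answers coincide; the separate H-to-H and L-to-L slopes earn their keep when different data are missing at each end, and because they preserve a synthetic high and low instead of only a mean:

# Minimal sketch of the in-fill options discussed above (illustrative only;
# not GIStemp code). Daily records are (high, low) tuples in degrees F.

def interp(a, b, frac=0.5):
    """Straight-line value between two known points; frac=0.5 is the midpoint."""
    return a + (b - a) * frac

def fill_mean(prev_mean, next_mean):
    """Option 1: slope between the surrounding daily means."""
    return interp(prev_mean, next_mean)

def fill_from_highs_lows(prev, nxt):
    """Option 3: interpolate Wed H -> Fri H and Wed L -> Fri L separately,
    then average the synthetic high and low into a synthetic mean."""
    s_high = interp(prev[0], nxt[0])
    s_low = interp(prev[1], nxt[1])
    return (s_high + s_low) / 2.0

# Wed (30 F / 20 F), Thu missing, Fri (40 F / 20 F):
wed, fri = (30.0, 20.0), (40.0, 20.0)
print(fill_mean(sum(wed) / 2, sum(fri) / 2))  # 27.5, from the means alone
print(fill_from_highs_lows(wed, fri))         # 27.5, via sH = 35 and sL = 20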

Editor
January 28, 2010 3:16 am

globaltemps (16:28:35) :
Your animations are really cool – and show there is a cap on temperature.
Great work!

January 28, 2010 5:52 am

E.M.Smith (22:23:48)
Thank you for the links to my web server and the ‘interactive maps’ that show the different warming/cooling trends (in the raw and adjusted data) for different time periods separately for BOTH the NOAA GHCN and GISS datasets (and at some point the CRU dataset also). In particular, thanks for encouraging visitors here to read the ‘Mapping global warming’ thread on ‘diggingintheclay’.
I’ve recently been trying to answer the question that someone called Andy asked on the ‘diggingintheclay’ DITC blog as to exactly how the adjustments affect the warming/cooling trends. Vjones and I are just in the process of preparing a series of threads which link to the previous analyses done by GG and RomanM (and others) and, as with the other maps, I’m attempting to show the effects the adjustments have on the various temporal warming/cooling trends. If you have read the ‘Mapping global warming’ thread on DITC, you’ll have seen the cooling trend from 1880 to 1909, followed by the clear warming trend from 1910 to 1939, followed by the clear cooling trend from 1940 to 1969, followed finally by the ‘current’ warming period (CWP) from 1970 to 2010.
Just so that everyone is clear as to exactly what I’ve done to produce these maps: the maps show trends (for the different time periods) in individual station raw and adjusted temperature data (i.e. there isn’t a single anomaly chart in sight!), shown as ‘coloured dots’ based on the range their warming or cooling trend (during the given time period) falls into. So for example if a station shows a warming trend of 7 degC/century then it will be shown as a ‘dark red’ dot. Vice versa, if it shows a cooling trend of -7 degC/century then it will be shown as a ‘dark blue’ dot (a sketch of this trend-and-bin calculation follows at the end of this comment).
Now please go and look at the ‘interactive maps’ and/or the snapshots of them in the ‘Mapping global warming’ thread on DITC. In particular please contrast the 1880 to 1909 cooling period with the 1940 to 1969 cooling period and, most importantly, the 1910 to 1939 warming period with the 1970 to 2010 period. Also when contrasting these periods please bear in mind the ‘station drop out’ problem, namely that global station coverage is much sparser prior to 1950 and after about 1992. Please read the ‘Station drop out problem’ thread on DITC for much more detail.
If you contrast the 1910 to 1939 warming period with the 1970 to 2010 CWP and allow for the fact that there is significantly greater global station coverage for the 1970 to 2010 period, you’ll see that the maps aren’t that different. Indeed it’s arguable that the 1910 to 1939 warming period in the US is more severe than the 1970 to 2010 CWP is shown to be. In particular note the Northern Hemisphere versus Southern Hemisphere differences. Global warming during the 1970 to 2010 CWP is clearly not ‘global’ but rather is largely Northern Hemisphere warming. If you look at the 1970 to 2010 DJF and JJA seasonal maps you’ll also see that it is largely Northern Hemisphere winter warming. Most importantly, note that these warming trends are largely evident in the RAW data trend maps as well as the ADJUSTED data trend maps. In other words the adjustments don’t have that much effect on the warming/cooling trends over and above those evident in the raw data.
There are nonetheless some significant differences between the 1910 to 1939 and 1970 to 2010 trend maps. Look at the 1970 to 2010 map, for example, at all the Canadian stations at and above the 49th parallel. They are all ‘dark red’ dots, i.e. they show greater than 5 degC/century warming trends over the 1970 to 2010 CWP. Look also at the Icelandic, Northern Norway and Northern Russia stations for 1910 to 1939. These also show ‘dark red’ dots, i.e. greater than 5 degC/century warming trends – something going on with the AMO here perhaps? Finally look at the central US during the 1910 to 1939 and 1970 to 2010 time periods. The warming trend in many of the central US stations during 1910 to 1939 is greater than it is during the 1970 to 2010 CWP. This is perhaps clear evidence that in the central US (at least) the 1930s was a somewhat warmer decade than the 1990s?
Also please bear in mind (as with EM Smith) that I’m producing these ‘interactive maps’ on low spec hardware. In fact the web server is an ex-Compaq Evo desktop PC that is at least 6 years old and has only 512MB of RAM and an 80GB hard drive. As E M Smith and I have shown, (unless you are from NOAA or GISS) you really don’t need a large amount of computing power to do this type of analysis of the NOAA/GISS/CRU datasets. Because the hardware is not that powerful, and largely because the ‘interactive maps’ use a Flash component (which in turn loads the data from an XML file), the maps can take some time to load. If you are prompted that it is ‘taking a while for Adobe Flash player to load the data’, please click ‘No’ (maybe several times) and eventually the map will be fully displayed – it’s well worth the wait. Just a quick tip: it’s best to open the different maps in separate ‘tabs’ in your browser and switch between them to see the differences. After the maps are fully loaded you’ll then be able to ‘zoom in’ to a particular country and click on a particular ‘dot’ to see a full chart of the raw/adjusted data and warming/cooling trends for that station. Enjoy!!
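For anyone wondering how one of those ‘coloured dots’ is derived, here is a minimal sketch in Python (illustrative only – this is not the actual Flash/XML pipeline, and apart from the +/-5 degC/century ‘dark red’/‘dark blue’ cut-offs stated above, the bin boundaries are assumed for the example):

# Minimal sketch: fit an ordinary least-squares line to one station's annual
# mean temperatures over a chosen window, scale the slope to degC/century,
# and bin it to a dot colour. Illustrative only.

def trend_deg_c_per_century(years, temps):
    """OLS slope of temperature vs year, scaled to degC per 100 years."""
    n = len(years)
    my, mt = sum(years) / n, sum(temps) / n
    num = sum((y - my) * (t - mt) for y, t in zip(years, temps))
    den = sum((y - my) ** 2 for y in years)
    return 100.0 * num / den

def dot_colour(trend):
    """Bin the trend; only the +/-5 cut-offs are the map's stated ones."""
    if trend >= 5.0:
        return "dark red"
    if trend >= 2.0:
        return "red"
    if trend >= 0.0:
        return "pink"
    if trend > -2.0:
        return "light blue"
    if trend > -5.0:
        return "blue"
    return "dark blue"

years = list(range(1970, 2010))
temps = [10.0 + 0.02 * (y - 1970) for y in years]  # synthetic +2 degC/century
print(dot_colour(trend_deg_c_per_century(years, temps)))  # "red"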

January 28, 2010 7:31 am

I don’t find Menne’s paper a surprise. If one is looking for a trend, it does not matter that station A is pure whereas Station B is contaminated by noise, provided that the contamination of Station B remains constant throughout the period over which the trend is being examined. In countries such as the US, most urban development/growth predates the period considered by Menne, and hence when looking for temperature trends (rather than absolute accuracy in the temperature measurement) during the period considered by Menne, one would not expect to see substantial differences between well-sited and badly-sited stations, or between urban and rural stations.

You can’t really categorically say that. You’d have to take each station on a case by case basis.
I grew up in Manassas, VA (yeah, Battle of Bull Run and all that). As of the mid 1970s, it was a small town of about 20k population. In the 80s it saw an explosion of strip malls and housing as DC area commuters started moving further out into the suburbs. The same can be said of Gainesville, VA, Haymarket, Warrenton, and many otherwise rural towns in that area. Places that were forests and open fields where I tromped around as a young teen in the 70s became housing developments, parking lots, etc.

Richard M
January 28, 2010 8:05 am

EM Smith, thanks for your reply. I now feel more comfortable.
As for the remarks by Tom Graney and others that siting changes are a one time deal … wrong! Once again one must think about the situation. If you really have a warming signal in climate (and I believe there is one), then that means air conditioners will be on more often, asphalt will absorb more heat and hold it longer, and maybe even the barbecues will be used more often. Hence, your statements are only true if there is no warming signal at all.
Now, for a little conjecture. If one looks at the temperature anomalies post 1998 there appears to be a real balance between La Ninas and El Ninos around the .2C mark. Maybe the Canadian researcher was right about the pre-1998 timeframe and ozone depletion was a major player in warming the planet. Now it has stabilized.
Of course, my long time position is that I am skeptical of anything and everything related to climate. It is very complex. In fact, I’ve coined a new term to describe those who think they understand climate to the Nth degree. They are complex climate deniers. 😉

January 28, 2010 8:16 am

Smokey (15:24:26) :
…You could ring up James Hansen over at GISS, and ask him how he “adjusts” past temperatures. Here are some Illinois stations. Notice the shenanigans: click [ http://www.rockyhigh66.org/stuff/USHCN_revisions.htm ]

Actually, that’s All the Illinois USHCN stations, so no cherry picking. Just compare the number of stations where the warming increased with the number where the warming decreased. Same with the Wisconsin stations page, all included, great majority adjusted to more warming.
http://www.rockyhigh66.org/stuff/USHCN_revisions_wisconsin.htm
I’ve started on an Iowa page. So far, same story.

Doug S
January 28, 2010 8:33 am

@ E.M.Smith (23:21:21) :
Very fine work E.M. and thanks so much for the pointers to the “fill-in” code. This is an unbelievable process for managing the data. I’m inclined to think the primary problem is the legacy nature of the temperature collecting methods and a failure of the research community to address the fundamental data management issues. I would urge the people spending our tax dollars to halt everything they’re doing and start again with a new data structure. It would be a big job to go back to the paper records and hand-enter the data once again in a modern enterprise database system, but the cost would be minuscule compared to the waste, fraud and abuse we’ve suffered to date. Keep the faith and thanks again for all your efforts on behalf of the taxpayers around the world.

rbateman
January 28, 2010 8:52 am

E.M.Smith (02:08:45) :
Thank you for all that information, it helps to know the thought process and the pitfalls with each decision.
I’m only doing 1 station, my hometown. I have 2 data sets. One is the NCDC, which goes back to 1913 uninterrupted but has holes in it, especially in the later years. The 2nd data set goes from the 50’s to present, has a lot fewer holes, is from a hard-copy printed source (same rural town) and overlaps the first data set an average of 4 out of 7 days perfectly.
I’m really not keen on hiding anything, but I do want to get the best possible representation of how the high, low and high-low (average) change over the time span.
This:
http://www.robertb.darkhorizons.org/TempGr/Wv1913_2009avs.GIF
will change as I continue to transcribe the 2nd set of data from hard copy (about 75% complete), but I can see where the warming is: nighttime lows.
All of the data I have preserved at a glance on the base page:
http://www.robertb.darkhorizons.org/WeavervilleClimate1.htm
There is a 3rd way to do the averages (dri.edu does it), and that is to take a particular calendar day [say Jan 26th] and compute that for 1913 to 2009 the average high is 49F and the average low is 29F. Plug those into Jan 26th holes (a sketch of this follows below).
I do want to limit any bias that comes from ‘making up’ missing data.
At the same time, the NCDC historical data is choppy, with many runs of years shot full of holes (some holes are plugged with its B-91s), so I have to do something with it, or throw it away.
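For what it’s worth, a minimal sketch in Python of that third, dri.edu-style method (illustrative only; the record layout is an assumption, not your actual file format):

# Minimal sketch of calendar-day climatology in-fill (illustrative only).
# records: list of (year, month, day, high, low); high/low may be None.

from collections import defaultdict

def day_of_year_climatology(records):
    """{(month, day): (mean_high, mean_low)} over all non-missing years."""
    sums = defaultdict(lambda: [0.0, 0.0, 0])
    for year, month, day, hi, lo in records:
        if hi is not None and lo is not None:
            s = sums[(month, day)]
            s[0] += hi
            s[1] += lo
            s[2] += 1
    return {k: (s[0] / s[2], s[1] / s[2]) for k, s in sums.items()}

def fill_holes(records):
    """Replace a missing high or low with that calendar day's long-term average."""
    clim = day_of_year_climatology(records)
    filled = []
    for y, m, d, hi, lo in records:
        c = clim.get((m, d), (None, None))
        filled.append((y, m, d,
                       hi if hi is not None else c[0],
                       lo if lo is not None else c[1]))
    return filled

Note the trade-off against straight-line interpolation: this fills a hole with the station’s long-term seasonal normal, so it damps out whatever weather actually happened that day and pulls any infilled stretch toward the historical mean.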

January 28, 2010 9:31 am

Richard M (08:05:05) :
EM Smith, thanks for your reply. I now feel more comfortable.
As for the remarks by Tom Graney and others that siting changes are a one time deal … wrong! Once again one must think about the situation. If you really have a warming signal in climate (and I believe there is one), then that means air conditioners will be on more often, asphalt will absorb more heat and hold it longer, and maybe even the barbecues will be used more often. Hence, your statements are only true if there is no warming signal at all.

I dunno. I use my BBQ even when it’s 40f outside…

rbateman
January 28, 2010 11:15 am

Doug S (08:33:39) :
By hand, yes, that is what is needed.
If we are going to get our money’s worth from the new dollars poured into climate change study, collecting and disseminating a better foundation is the way to go.
So what if it takes a whole army of individuals to do the dirty work?
Fine. Pony up with the grants. The President and Congress are talking job stimulus: Let them put their money where their mouths are.

January 28, 2010 11:24 am

Mike McMillan,
Your pages of blink charts are really excellent. We can see at a glance what is being done to the temperature record. And if they do this to individual stations, there’s not much doubt that they do it throughout GISS [and NOAA, which does the same thing].
Also, I’d suggest putting your name on those chart pages – in case someone stupidly forgets to note the provenance. Something like these guys do at the bottom of their chart: click

Jeff
January 28, 2010 1:19 pm

This report is embarrassing and is going to destroy the position of AGW critics. Are Joe D’Aleo and Watts really AGW critics, or are they really AGW alarmists posing as critics and working to undermine criticism by AGW non-believers?
Temperature trend isn’t calculated by averaging recorded temperatures from the surface stations across the world. Anomalies are created for each individual station, and then the anomalies are averaged to determine the average temperature trend. And this means that what’s relevant isn’t how the recorded temperature from one station differs from another, but rather how the change in temperature at different stations differ. And on the televised news report you see D’Aleo and Smith talking about how “cold thermometers” were removed in a systematic fashion.

Jeff
January 28, 2010 2:04 pm

I just noticed the comments above where Smith responded to someone else who was also pointing out that temperature anomalies are first created for each individual station and then averaged. According to Smith, this isn’t what’s really going on in the code, even though that’s what everyone is being told. And if that’s the case, I apologize for my remarks above.

Nick Stokes
January 28, 2010 4:24 pm

Jeff (13:19:01) :
Jeff, I think your first post had it mostly right. You described the Climate Anomaly Method. Some modification of it is needed when stations do not have enough data in the reference period, and as E.M.Smith describes, GISS calculates anomalies at grid points, which involves a bit of local aggregating. But it is still much closer to the CAM than what seems to be envisaged in this report.
BTW, there’s nothing secret about the GISS method. It was described in a paper in 1987 by Hansen and Lebedeff.
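To make the Climate Anomaly Method concrete, here is a minimal sketch in Python (illustrative only – the real CAM also grids the stations, area-weights them, and handles stations with sparse base-period coverage):

# Minimal sketch of the Climate Anomaly Method (illustrative only).

def station_anomalies(series, base=(1961, 1990)):
    """series: {year: annual mean temp}. Subtract the station's own mean
    over the base period, so each station is compared with itself."""
    base_vals = [t for y, t in series.items() if base[0] <= y <= base[1]]
    baseline = sum(base_vals) / len(base_vals)
    return {y: t - baseline for y, t in series.items()}

def mean_anomaly(stations, year):
    """Average the anomalies (not raw temps) of whichever stations report."""
    vals = [a[year] for a in stations if year in a]
    return sum(vals) / len(vals)

# A cold station (about -10 C) and a warm one (about +20 C) with identical
# trends: dropping the cold one changes the average *temperature* wildly,
# but leaves the average *anomaly* essentially unchanged.
cold = station_anomalies({y: -10 + 0.01 * (y - 1960) for y in range(1950, 2010)})
warm = station_anomalies({y: 20 + 0.01 * (y - 1960) for y in range(1950, 2010)})
print(mean_anomaly([cold, warm], 2005))  # ~0.295
print(mean_anomaly([warm], 2005))        # ~0.295 – no bias from the drop-out

The bias question therefore turns on whether the dropped stations had different trends, not different absolute temperatures.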

Richard M
January 28, 2010 4:44 pm

Jeff Alberts (09:31:51) :
“I dunno. I use my BBQ even when it’s 40f outside…”
I used my barbecue last week when it was around 20F, a nice mid-winter day here in Minnesota. While my remark about barbecues was somewhat tongue in cheek, I imagine the tendency is to use barbecues more often the hotter it gets so one doesn’t heat up the house by turning on the oven.

Richard M
January 28, 2010 4:49 pm

Jeff (14:04:32):
I think that is the crux of a lot of the confusion. The warmists at skeptical science were making the same assumption and throwing out ad homs left and right.

Nick Stokes
January 28, 2010 8:31 pm

E.M.Smith (02:36:52) :
A retraction and apology is due here. I discovered an error in my R program, which I published on my blog, which calculated the station averages in v2.mean. There was a memory overflow which had unexpected effects. The revised graph now looks like the one on p 14 of the report (and like that, different to the one on p 11), which shows that the site averages in the GHCN collection have been warming since 1950, though not uniformly.
However, it is still true that this should not lead to warming of the average anomaly.

Steve Keohane
January 29, 2010 4:46 am

Richard M (16:44:13) : “I imagine the tendency is to use barbecues more often the hotter it gets so one doesn’t heat up the house by turning on the oven.” The only meat that doesn’t go on my grill is the Thanksgiving turkey, and bacon (it flares up badly).

clique2
January 29, 2010 6:37 am

Hi! Tried to buy him a beer but you need to put a website address in the submit form! Can WUWT provide?

Ken MacLauchlan
January 29, 2010 7:36 am

A nice compendium of many concerns about the surface temperature record. A few weaknesses I have noted that are worth sorting:
1) CASE 5: NO WARMING TREND IN THE 351-YEAR CENTRAL ENGLAND TEMPERATURE RECORD
This comparison works only if you limit yourself to the summer season figures. The other three seasons do show warming when charted in the same way. Probably worth binning this case study.
2) Klotzbach et al. (2009) show how RSS and UAH disagree with HadCRUT3v and NCDC for land surface temperature measurements. However the satellite and terrestrial measurements are much closer for ocean surface measurements and for the global total. Has anyone explained why such large disagreements for land based measurement disappear when we look at the global totals? Is it just because most of the world is water or is there another fiddle going on?

Baa Humbug
January 29, 2010 9:01 am

Anthony I thought you might be interested in this paper I found by John Christy of UAH from 2001 titled
WHEN WAS THE HOTTEST SUMMER?
A State Climatologist Struggles for an Answer
BY JOHN R. CHRISTY

Richard M
January 29, 2010 9:41 am

Steve Keohane (04:46:27) :
“The only meat that doesn’t go on my grill is the Thanksgiving turkey, and bacon (it flares up badly).”
I use my grill quite a bit but not nearly as much as that. However, I do plan on grilling tonight while the temperature hovers just above 0F.
Has anyone else noticed this:
http://www.chron.com/commons/readerblogs/atmosphere.html?plckController=Blog&plckBlogPage=BlogViewPost&newspaperUserId=54e0b21f-aaba-475d-87ab-1df5075ce621&plckPostId=Blog%3a54e0b21f-aaba-475d-87ab-1df5075ce621Post%3a1602a720-b2a5-47de-bf2d-3b62afcf88a6&plckScript=blogScript&plckElementId=blogDest
The author makes the same assumption about the calculation of anomalies that the warmers have been repeating. I’d suggest EM Smith or Joseph D’Aleo post a rebuttal.

January 29, 2010 3:38 pm

E. M. Smith –
I had blogged here (http://tinyurl.com/ykfy8aa) that the GISS temperatures were computed from anomalies, but your comment on these pages that such a statement was “bull pucky” required me to dig further.
Thanks to your posting of the code, I now see that the process is just as documented in Hansen and Lebedeff (1987): after individual station adjustments and corrections are applied, an average temperature is constructed by starting with the longest-period record near a grid point and doing a successive weighted average with all the other nearby stations. The biases (or “shifts”) are computed separately for each station and removed before the data are averaged together. Then anomalies are computed relative to the base period on the grid point value.
This is certainly not averaging anomalies together, but it is equally effective at correcting for changes in network configuration. No bias in the trends is introduced simply by losing mostly cool stations after 1990. Of course, you can have problems if the lost stations have different TRENDS, but their absolute temperatures make no difference, contrary to the D’Aleo and Watts report. I discuss this more fully here: http://tinyurl.com/yec3ads .
I would welcome your thoughts. If you wish to post a comment on my blog but do not want to register, just email it to me. Anthony, the same goes for you.
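To illustrate the merge described above, here is a minimal sketch in Python of the reference-station idea as I read this summary of Hansen and Lebedeff (1987) (illustrative only – the real method distance-weights nearby stations rather than simply averaging pairwise, so treat this as a cartoon of the bias-shift step):

# Minimal sketch of a reference-station merge (illustrative only).
# Series are {year: temp} dicts.

def merge(reference, station):
    """Shift 'station' so its mean over the overlap matches 'reference',
    then combine the two (here by a simple average where both exist)."""
    overlap = set(reference) & set(station)
    if not overlap:
        return dict(reference)
    shift = (sum(reference[y] for y in overlap) / len(overlap)
             - sum(station[y] for y in overlap) / len(overlap))
    merged = dict(reference)
    for y, t in station.items():
        adj = t + shift  # the station's bias is removed before combining
        merged[y] = (merged[y] + adj) / 2 if y in merged else adj
    return merged

def grid_point_series(stations):
    """Start from the longest record near the grid point, fold in the rest."""
    ordered = sorted(stations, key=len, reverse=True)
    result = dict(ordered[0])
    for s in ordered[1:]:
        result = merge(result, s)
    return result

Because each station’s offset is subtracted over the overlap before averaging, a station that is absolutely colder or warmer than its neighbours contributes only its year-to-year changes – which is exactly the point at issue.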

January 29, 2010 3:40 pm

Just noticed the 9:41:35 by Richard M just a bit above mine. I didn’t intend my comment as a response, but it seems to serve that purpose.

hunter
January 29, 2010 5:40 pm

As a skeptic, I urge people to take John N-G seriously. He is the Texas state climatologist, and he was one of the first scientists to bust the IPCC glacier scam.
He may be wrong or he may be right, but he is calling it as he sees it.
He is not a hack and he is not pulling punches.

January 29, 2010 6:42 pm

hunter (17:40:22),
Who was the state climatologist who was fired for simply questioning AGW? I could look it up, but I’d bet that John N-G could name him in an instant. And John N-G would no doubt pucker up a little as he said the name.
That’s because state climatologists are political appointees – just like the entire IPCC. Therefore, we must have the presumption that they toe the Party line. And the Party line is AGW. Question it and pay the price. Argue for it and reap the rewards.
Anyone singing the praises of the resident GISS lunatic James [“Death Trains”] Hansen is apparently unaware of the USHCN “adjustment” of the temperature record: click [chart by Mike McMillan]. GISS provides [and alters] these temperature records. How can adjusted temperatures be “raw,” both before and after adjustment? It appears that the USHCN simply make it up as they go along.
I prefer to listen to an unbiased expert such as the internationally esteemed Prof Richard Lindzen, who heads the Atmospheric Sciences department at MIT – and who cannot be fired for giving his honest view:

The process of coopting science on behalf of a political movement has had an extraordinarily corrupting influence on science – especially since the issue has been a major motivation for funding. [source]

The sad fact is that plenty of people have learned to game the system, and plenty of others go along with them for job security. Going ballistic over an on-line article, when there is really major corruption endemic to the whole AGW conjecture, makes me wonder why these people feel so threatened. But I’m sure if I were a state climatologist, aware that a fellow state climatologist had been summarily fired for questioning AGW, I might write a newspaper column singing a lunatic’s praises too – and turn a blind eye to all the missing raw data, and the endless “adjustments” that always mysteriously go in an upward direction, and the thoroughly corrupt, scheming UN/IPCC, and the plain fact that the planet itself is falsifying the CO2-CAGW hypothesis: as CO2 steadily rises, the global temperature has been flat to declining for most of the past decade: click
So who are we gonna believe? The CO2=CAGW True Believers? Or planet Earth and our lyin’ eyes?

January 30, 2010 8:13 am

Smokey (18:42:25) – There were two (descriptions oversimplified for brevity). George Taylor (Oregon) had his State Climatologist responsibilities removed for arguing that global warming was not as significant as natural variability (i.e. from the PDO) for changes being observed in the Pacific Northwest. Patrick Michaels resigned rather than accept restrictions on his speech regarding the magnitude of impacts of global warming and appropriate actions to take. I respect both of them, and am deeply resentful of the way they were forced out of their positions.
On the other hand, I serve Texas, and the idea that the Governor of Texas (Rick Perry) would take action against me for questioning AGW is ludicrous. The idea that I am ignoring all the other problems regarding the surface temperature record is also ludicrous, as I helped write two of the peer-reviewed articles pointing out many of these issues.
Stick to the facts, and skip the ad hominems.

DirkH
January 30, 2010 8:47 am

“John N-G (08:13:03) :
[…]
Stick to the facts, and skip the ad hominems.”
Hansen endorses books that propagate the destruction of civilization. “Lunatic” is an apt yet maybe oversimplified description of his state of mind IMHO.

January 30, 2010 10:04 am

John N-G may be the exception that proves the rule, but state climatologists are political appointees first, foremost and always. My concern with his post is the fact that he wrote an article in a major newspaper nitpicking a non-peer reviewed paper, when there are numerous peer reviewed studies that have been shown to be in error. Where are all the articles on those studies? Where are the articles debunking Michael Mann, or Caspar Ammann, or the 100% political appointees that make up the IPCC, etc? Something stinks about that. There are thousands of AGW-related papers out there; why zero in on this particular one? Answer: politics.
And here’s a brand new post from the current WUWT article, “Debunking National Wildlife Federation Claims – Part 2”:

The former WA State Climatologist, Phil Mote, produced a series of reports to the Governor of the state which claimed a significant drop in snow pack in the latter half of the 20th century.
Mote’s study was used by the Governor to create a panic over water supplies. It came out later that Mote had cut off the data at a time of unusually high snowfall in the early 1950’s, thus creating an artificial downward trend. When data from the 1930’s and 1940’s were added to the analysis the trend disappeared, giving a flat trend.
Rather than own up to the cherry picking, Mote had the person who publicized the “trick” fired (he was the Deputy State Climatologist). Another expert, Cliff Mass of the UW, had his reputation trashed.
The Governor ultimately ignored the truth. Phil Mote is now the State Climatologist of Oregon (after their Governor fired the previous one for not following the party line on AGW).

In my post @18:42:25 above there is a quote by Prof Richard Lindzen. Clicking on the ‘source’ at the end provides an explanation of exactly what is happening, and it debunks the belief that a few people who got onto the editorial boards of journals are revealing the thoughts of the entire membership wrt AGW. They are not. Lindzen’s piece should be required reading for those who still believe a handful of board members speak for thousands.
The professional journal system has been gamed just like the climate peer review system. With the exposé of the East Anglia emails, there can not be a rational, honest person who still thinks the climate peer review system hasn’t been exploited and used by the rent-seeking AGW clique for their own financial benefit and personal aggrandizement.

January 31, 2010 1:00 am

Smokey (10:04:37) :
“John N-G may be the exception that proves the rule, but state climatologists are political appointees first, foremost and always. My concern with his post is the fact that he wrote an article in a major newspaper nitpicking a non-peer reviewed paper, when there are numerous peer reviewed studies that have been shown to be in error.”
Arguing, and providing evidence, that a shift to warmer stations does not necessarily exaggerate a warming *trend* is not “nit-picking.” It goes to the heart of one of Anthony & Joe D’Aleo’s central claims. It will, I’m sure, force the latter to “dig deeper,” just as John N-G did.
There are many reasons to doubt how accurately the current methodology of NOAA, GISS, and CRU determine “average global temperature.” Whether those flaws misrepresent temperature *trends* is another question.
And the open-minded skeptic must be prepared to question not only the claims of alarmists, but those of other skeptics also.

JP
January 31, 2010 2:28 am

http://www.rockyhigh66.org/stuff/USHCN_revisions.htm
Great link!
To me it shows that historical data was altered to generate an overall warming trend, by copying the trend from UHI stations to all stations and then trying to prove that there is no difference in trends between stations.
Thus comparing homogenized data to homogenized data is a null-value study.

curious
January 31, 2010 2:43 am

Anthony,
Assuming that Mr. Nielsen-Gammon is right, will you retract your outrageous accusations against NOAA and apologize to them?

CuriousGeorge
January 31, 2010 4:13 pm

Anthony, E.M. Smith:
Thank you both, volumes. I am intrigued by the issue raised by Mr. Nielsen-Gammon. I read his first blog piece, then his “correction” after Anthony listed the URL. In the correction, he admits to being wrong about how temperature compilations are calculated. So, E.M., Anthony, you have him there. But he stands by his conclusion, that (mostly cold?) station drop-out would not lead to a warming bias. In effect, he’s saying he was mistaken about the precise method by which an anomaly is arrived at, but he still stands by his original claim. Which is a strong claim, isn’t it? One of you is wrong here, right?
Or am I missing something?

Richard Sharpe
January 31, 2010 4:37 pm

curious (02:43:46) says:

Anthony,
Assuming that Mr. Nielsen-Gammon is right, will you retract your outrageous accusations against NOAA and apologize to them?

Did they hurt your feelings, curious?
Spare us the faux outrage.

February 1, 2010 10:00 am

Contrarian (01:00:52)
“And the open-minded skeptic must be prepared to question not only the claims of alarmists, but those of other skeptics also”
This is absolutely true. It’s called practicing the scientific method. In real science, as opposed to the post-modern science extolled by people like Mike Hulme from the Tyndall Centre, the science is NEVER settled. The science must and should always be continually challenged. IMO, for it to even qualify to be called science it must be capable of being ‘falsified’. Sadly much of the ‘science’ practiced by climate scientists isn’t even falsifiable, yet we are expected to believe that it’s ‘settled’, that there is no need for it to be challenged because it is the ‘consensus’. There is nothing to see here, so we must move on! Nonsense!
I’ve read John N-G’s thread and IMO he hasn’t proven anything. Where is the spreadsheet that contains his ‘little bit of Excel programming’? Even if it does show a simplified calculation involving the relative trends of Station A versus Station B and how their combined trend is affected when Station B is ‘dropped out’, it is just that: a theoretical, simplified calculation. It does not involve the full NOAA/GISS datasets. I’m sorry, but he hasn’t proved anything until such time as he’s done the full calculation with the full dataset. Then and only then can he make his claim that
“However, their technique is as effective as the anomaly technique that NCDC and CRU use, and in all three cases, removing stations from cold locations would not, by itself, introduce a bias in the global temperature record”

woodNfish
February 1, 2010 11:26 am

I’m going to second Hunter’s comment concerning Dr. N-G and taking him seriously. Skeptics question the methodologies and data used by the AGW crowd; you should not attack a scientist for questioning skeptic papers. This is how errors are corrected and science moves forward, which should be what we all want.
Dr. N-G, you posted on your website a response to part of the Watts and D’Aleo paper (http://www.chron.com/commons/readerblogs/atmosphere.html?plckController=Blog&plckScript=blogScript&plckElementId=blogDest&plckPostId=Blog%3a54e0b21f-aaba-475d-87ab-1df5075ce621Post%3a316fd156-fbba-46b0-b3ec-a6748f70d579)
but you only listed the program comments in pronouncing that the program did what its authors said it did. Did you actually check the code? Just because the comments say the program is supposed to do something, doesn’t mean it works properly.
There is just too much water over the dam with the activist Hansen to take his word on anything.

February 1, 2010 5:04 pm

KevinUK (10:00:40) :
“I’ve read John N-G thread and IMO he hasn’t proven anything.”
I agree. But I didn’t say he had, only that he had provided evidence that station selection does not necessarily affect the global trend, and that his critique was not “nit-picking.”
Some more work is needed here. Might be productive for Anthony, Chiefio, Joe D’Aleo, and John N-G to work together on a more intensive analysis of GISS’s methods.

Ben W
February 3, 2010 9:31 pm

Excellent work, Mr. Watts and company. Please keep it up. Worthy of a Nobel prize! (I hear you have to actually apply to win one!) If only climate “scientists” exhibited the ethics you and others like Steve McIntyre exude, we could be spending money to explore space or feed the hungry instead of studying a non-problem.
Reality check – as an observation, there is so much effort spent on the finer details of temperature record manipulation in national record databases that the basic argument gets lost: how does CO2, which is 0.035% of the atmosphere, cause such an alarming rate of global warming? …and glacier melt, and Amazon forest reduction, and polar bear extinction.
The AGW IPCC argument is based on man-made CO2 causing doom. Looking at my local rural temperature records for the past 100 years (and not the databases of NOAA, GISS, etc), there is no hockey-stick effect. Where exactly is the global warming? Is global warming only in certain parts of the world? Do I need a PhD or two to find it?

February 4, 2010 12:35 am

This morning (Feb 4th) the most popular and largest Dutch newspaper, De Telegraaf, opens its front page with: Hoezo Opwarming? (“Warming? What warming?”). In the front-page article Anthony Watts and Joe D’Aleo are mentioned by name, and the journalist refers to your compendium paper about temperature records. See also: http://www.ortogonaal.nl/2010/02/global-warming-watt-global-warming/.
Apparently the newspapers are discovering your website! Now most of the Dutch people will discover your opinion, and your report will be discussed in the Dutch Government.

February 4, 2010 1:24 am

For the full text of the Dutch Telegraaf article about your report, see:
http://www.ortogonaal.nl/wp-content/uploads/2010/02/2010feb04-De-Telegraaf-Hoe-Zo-opwarming.pdf

woodbeez
February 4, 2010 3:06 am

As Dick H. Ahles mentioned above, the Dutch MSM are finally starting to pick up on all the problems in paradise regarding AGW.
You can find the article here:
http://www.telegraaf.nl/binnenland/5951864/__Hoezo_opwarming___.html
Keep up the good work!

gallopingcamel
February 5, 2010 9:59 pm

I tested your allegations about the sharply declining numbers of weather stations by downloading the NOAA/GHCN (version 2) records. As each data file consisted of almost 250,000 lines of records (each with 12 months of temperature averages) my copy of Excel choked. I solved this problem by deleting all records except for Canada (country code 403).
I discovered that the number of Canadian weather stations in the GHCN record peaked around 1970 and then declined sharply, just as shown in D’Aleo & Watts (2009).
When I shared this revelation with the Alarmists on the Guardian blog, “dconvoluter” led me to a paper by Peterson & Vose (1997) that had maps and graphs almost identical with yours. How do you answer the criticism that your paper is just an update of this earlier paper rather than something new?
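For anyone who wants to repeat the exercise without Excel choking, a minimal sketch in Python (illustrative only; it assumes the usual v2.mean fixed-width layout – an 11-character station id whose first 3 characters are the country code, a duplicate digit, then a 4-character year – so check the format description that ships with the file):

# Minimal sketch: count distinct GHCN v2 stations per year for one country.

from collections import defaultdict

def stations_per_year(path, country="403"):  # 403 = Canada
    counts = defaultdict(set)
    with open(path) as f:
        for line in f:
            if line.startswith(country):
                station = line[:11]       # id includes the country code
                year = int(line[12:16])   # skipping the duplicate digit
                counts[year].add(station)
    return {y: len(s) for y, s in sorted(counts.items())}

for year, n in stations_per_year("v2.mean").items():
    print(year, n)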

February 8, 2010 5:50 am

gallopingcamel (21:59:28):
“When I shared this revelation with the Alarmists on the Guardian blog, “dconvoluter” led me to a paper by Peterson & Vose (1997) that had maps and graphs almost identical with yours. How do you answer the criticism that your paper is just an update of this earlier paper rather than something new?”
gallopingcamel
I don’t think D’Aleo & Watts (2009) are claiming that they have discovered something new in showing the station ‘drop out’ problem. They, like myself and many other climate skeptics, are well aware of the Peterson and Vose (1997) paper.
What is important is that the vast majority of people are unaware of this issue and of the effect these post-1990 station drop outs have on the validity of deriving a so-called mean global surface temperature (MGST) anomaly post-1990.
NOAA/GISS and CRU would have us believe that it’s OK to base climate change policy, and to advocate spending trillions of dollars on combating climate change, on their claims of ‘unprecedented’ and accelerating global warming in the latter part of the 20th century, when in fact the data that underpins those claims is just such a sparse set.
If you want to see how sparse the data gets post 1990, and not just for Canada but for many other countries as well, then please read the ‘Station drop out problem’ thread on ‘diggingintheclay’ by clicking on the link below
The ‘Station drop out’ problem
Note this is for the NOAA GHCN V2 raw/adjusted dataset. I’m in the process of producing a similar thread for the GISS raw/adjusted dataset (GISS is just as bad!). And after you’ve seen the charts, remember that it’s the adjusted (homogenised) data that is used to produce all those ‘red all over the Arctic and Antarctic’ scary colour contour anomaly maps that are shown on the GISS web site.

gallopingcamel
February 8, 2010 7:12 pm

KevinUK,
Many thanks for a great explanation. The “Deniers” and “Alarmists” can’t both be right so I am still trying to sort out the snake oil salesmen from the real scientists.
My (naive?) idea was that someone would be collecting every available temperature record in the same way that the Mormon church collects genealogical records into the IGI database that is available on line free of charge; comprehensive raw data, unedited so that researchers can start from the bedrock and decide for themselves what is good, bad or ugly.
Now it seems that NOAA GHCN was building up its data base, just like the IGI and then around 1970 started to prune the data drastically. This strikes me as a terrible disservice to the climate research community. Is there a more comprehensive source of unedited raw data? If not, why not?
If NASA, NOAA or the UEA/CRU claims “lack of resources” for their failure to publish every scrap of available data, it is only fair to point out that the IGI database is already several orders of magnitude greater than all the climate records that have ever existed.

February 10, 2010 2:40 am

gallopingcamel (19:12:40) :
Thanks for your reply. Yes, it really is quite shocking to find out that climate change policy is based on such a poorly managed dataset. In case you don’t already know, all three of the key mean global temperature anomaly indices – NOAA, GISS and HadCRUT – rely on this same NOAA GHCN dataset. All three organisations use it as the primary input to their calculation of the mean global surface temperature anomaly, and so they can hardly be claimed to be independent of one another. Each organisation in turn then subjects the ‘raw’ or ‘unadjusted’ data to a series of ‘adjustments’/‘homogenisations’ supposed to allow for station movements, instrument/equipment changes and in some cases Urban Heat Island (UHI) correction.
It is very important to understand that the size of these adjustments/homogenisations is huge (in many cases several degrees centigrade) and often results in what was a cooling trend in the raw data being converted into a warming trend post-adjustment/post-homogenisation (and vice versa, i.e. the adjustments are physically unjustifiable), yet NOAA/GISS/CRU then claim to be able to detect a man-caused increase in the mean global surface temperature anomaly of approx. 0.7 degC/century (over the 20th century) after these adjustments/homogenisations have been made. In other words it’s pretty much impossible to tell whether this claimed ‘man-caused global warming’ is actually real or is perhaps largely an artefact of the adjustments made to the raw data.
In fact, based on my own analysis of the raw data, I believe that the global warming trend over the 20th century is most likely real but is largely a consequence of recovering from the Little Ice Age. Superimposed on this centennial warming trend is a multidecadal (approx. 30-year) cycle of alternating cooling and warming trends, clearly due wholly to natural climate variability.
Have a look at this thread on ‘diggingintheclay’ for further details.
Mapping global warming
In particular contrast the warming trends during the 1910 to 1939 period with the cooling trends during the 1940 to 1969 period, followed by the warming trends during 1970 to 2010 period.

gallopingcamel
February 10, 2010 8:22 am

KevinUK
While I am a scientist, my knowledge of climate science is minimal. Usually when I ask questions on blogs all I get is abuse! Finally, I am getting replies that make sense! It is clear that you have “..kept your head when all around were losing theirs…”.
It is the highly public controversy that attracted me to the AGW issue. Remember Fred Hoyle and how skillfully he used the BBC to promote his theory of “Continuous Creation”?
As a physicist, I was skeptical about “Cold Fusion” yet at the same time I was hoping that Fleishman & Pons were right!
What worries me about the gatekeepers of climate science is that they are acting like a priesthood in a panic over a list of questions nailed to their front door.
The data you showed me has a warming of ~0.7 degrees Celsius since 1880 but is there any justification for the IPCC’s (Copenhagen Diagnosis) prediction of a 2-7 degree Celsius rise by 2100, other than studies that use tree ring proxies?

February 10, 2010 9:50 am

gallopingcamel (08:22:16):
I think you’ll find, as most skeptics (like myself) have, that you’ll get a much better level of balanced discussion on so-called ‘skeptic’ blogs than you will on so-called ‘warmist’ (i.e. pro-AGW) blogs/forums. It’s fair to say that I am very grateful to those (like Willis E, Bender, RomanM and obviously Steve Mc and Anthony Watts etc.) who have taken the time to educate me on the evidence (mostly the lack of it) for man’s effect on our climate. As a fellow physicist I’m happy to do the same for you.
As a fellow physicist, though, you’ll know that the education doesn’t just stop there. Hopefully you’ll agree with me that as scientists we in effect took the equivalent of the Hippocratic oath and declared that we must ALWAYS continue to question all established science through the application of the scientific method. There is not, and never will be, any ‘settled science’, but rather only increments in our knowledge and understanding of phenomena like the earth’s climate. I personally think it is the height of arrogance to declare (as Bob Watson did) that the ‘science is settled’. Only a non-scientist (indeed IMO an anti-scientist) would make such a statement. If you want to see an anti-scientist trying to defend the indefensible, watch this video on YouTube
Prof Watson on Climate Science

“The data you showed me has a warming of ~0.7 degrees Celsius since 1880 but is there any justification for the IPCC’s (Copenhagen Diagnosis) prediction of a 2-7 degree Celsius rise by 2100, other than studies that use tree ring proxies”
Have another look at the legend for those map images. The dark red dots are >+5 deg.C/century and the dark blue dots are <-5 deg.C/century. That’s right: greater than +/- FIVE (not just 0.7) deg.C/century, and this ‘unprecedented warming/cooling’ is occurring in warming/cooling cycles of approx 30 years – warming of >+5 deg.C/century for some stations between 1910 and 1939, followed by cooling of more than 5 deg.C/century between 1940 and 1969, followed by >+5 again for some stations from 1970 to 2010! How can anyone think that these warming and cooling trends are caused by CO2 emissions when CO2 emissions didn’t become significant until the post-WWII industrialisation era? Have another look at the 1910 to 1939 versus the 1970 to 2010 trends. For the US stations it is arguable that the warming trends during the 1910 to 1939 period exceed those for the 1970 to 2010 period. The late 20th century warming period is therefore clearly not ‘unprecedented’ even within the last 100 years, let alone the last 1,000 years as Michael Mann’s (and Keith Briffa’s) cherry-picking of proxies and use of ‘novel statistical methods’ like de-centred PCA would have us all believe.

gallopingcamel
February 10, 2010 7:35 pm

KevinUK: Professor Watson could do with a lesson in humility and clarity from a great one:
“No amount of experimentation can ever prove me right; a single experiment can prove me wrong.” Albert Einstein
That link on “Mapping global warming” led me to the “Weather Underground”, an organisation that may make more comprehensive station data available on the Internet than NASA, NOAA or UEA/CRU.
I had already figured out that to claim “unprecedented warming” the IPCC had to rewrite history by denying the Medieval Warm Period. It was news to me that they are also ignoring the more recent history which they can’t rewrite!

February 11, 2010 1:44 am

gallopingcamel (19:35:54) :
Yes, you are correct: NOAA, GISS and CRU are all attempting to re-write recent climate history. In doing so they are then able to claim that the 1990s and now the ‘Noughties’ are the warmest decades in the last 150 years. This, as you can see by looking at the ‘Mapping global warming’ maps (and station charts), is nonsense. The warming trends (and temperatures) from 1910 to 1940 are often greater for many of the central US stations than they are for the equivalent 1970 to 2010 period and, most importantly, the trends are large – often >+5 deg.C/century – and cannot be due to CO2 emissions, as this period precedes the post-WWII period of rapid industrialisation.
It’s also important to understand the significance of the LIA as well as the MWP. GISS have deliberately chosen 1880 as their starting point because this time falls right at the nadir that effectively marked the end of the LIA, before the planet’s subsequent recovery from it. I’m sure Hubert Lamb (who CRU named their department’s building after) is turning over in his grave at the idea that his successors (Tom Wigley, Phil Jones et al) have attempted to ‘get rid of the MWP’.
If you are keen to research what has happened and how we got to where we are today on the AGW issue, I can highly recommend that you read Christopher Booker’s latest book ‘The Real Global Warming Disaster’ and after that read CB’s and Richard North’s book ‘Scared to Death’. Also be a little wary of (but nonetheless amused by) some of the statements made by Lord Monckton. As Steve McIntyre often says, ‘be careful not to go a bridge too far’.

gallopingcamel
February 11, 2010 6:08 am

KevinUK,
I plan to read those books you recommended. What did you think of Plimer’s “Heaven & Earth”?
Monckton on TV comes across as a right wing “Prince Charles” or “The Rush Limbaugh of Climate Science”. When he gets into Geo-politics he loses me completely.
Does your warning extend to Monckton’s written works such as the latest “CO2 Report” to be found on the SPPI site?
The “bridge too far” says much about the Main Stream Media that pounces on every error by sceptics while downplaying the string of deliberate falsehoods by the IPCC and its supporters.

John Whitman
February 20, 2010 10:58 pm

Anthony & Joe D’Aleo,
Wow, this minute I just finished your ‘New Compendium Paper on Surface Temperature Records’.
My expression of wonder is beyond words.
Nightshift MODERATOR – I have a little problem. As I have mentioned in my comments on past posts, I have been working on a chart that shows all temperature data => processing of the data => data products, going back from the ~1950s to the present. BUT MY PROBLEM is that this paper of Anthony & Joe makes my project unnecessary, in my view, now. The paper is so clear (and in much more detail than a chart could manage) and shows exactly what I intended to show in a single chart. SO, please see if you can get Anthony and Joe’s feedback to me on whether they consider there could still be merit in my concept of a single chart? I appreciate it.
John in Taipei
[I’ll pass it along – The Night Watch]

February 27, 2010 10:18 am

Pure Platinum, the best compendium, and without reproach. My chapeau is off to the gang of realists….