For example, until surfacestations volunteer Juan Slayton photographed it, there was no metadata to record the fact that this official USHCN station of record is sited over a tombstone.
From Dr. Roger Pielke Senior:
At the meeting in Exeter, UK, September 7-9, 2010,
Surface temperature datasets for the 21st Century
there were several candid admissions with respect to the robustness of the global and USA surface temperature records that are being used for multidecadal surface temperature trend assessments (such as for the 2007 IPCC report).
These admissions were made despite the failure of the organizers to actually do what they claimed when they organized the meeting. In their announcement prior to the meeting [this information has since been removed in their post-meeting update], they wrote:
“To be effective the meeting will have to be relatively small but, as stated above, stringent efforts will be made to entrain input from non-attendees in advance.”
In asking colleagues (such as my co-authors on our 2007 JGR paper)
Pielke Sr., R.A., C. Davey, D. Niyogi, S. Fall, J. Steinweg-Woods, K. Hubbard, X. Lin, M. Cai, Y.-K. Lim, H. Li, J. Nielsen-Gammon, K. Gallo, R. Hale, R. Mahmood, S. Foster, R.T. McNider, and P. Blanken, 2007: Unresolved issues with the assessment of multi-decadal global land surface temperature trends. J. Geophys. Res., 112, D24S08, doi:10.1029/2006JD008229
which has raised serious issues with the USHCN and GHCN analyses, none of us were “entrained” to provide input.
Nonetheless, despite the small number of individuals who were invited to be involved, there still are quite important admissions of shortcomings.
These include those from Tom Peterson
who stated in slide 8
“We need to respond to a wide variety of concerns – Though not necessarily all of them”
[from Introductory remarks – Tom Peterson];
Matt Menne, Claude Williams and Jay Lawrimore who reported that
“[GHCN Monthly] Version 2 released in 1997… (but without station histories for stations outside the USA)”
“Undocumented changes [in the USHCN] can be as prevalent as documented changes even when extensive (digitized) metadata are available”
“Collectively station changes [in the USHCN] often have nearly random impacts, but even slight deviations from random matter greatly”
“Outside of the USA ~60% of the GHCN Version 3 average temperature trends are larger following homogenization”
“There is a need to identify gradual as well as abrupt changes in bias (but it is may (sic) be problematic to adjust for abrupt changes only)”
“Automation is the only realistic approach to deal with large datasets”
“More work is required to assess and quantify uncertainties in bias adjustments”
“Critiques of surface temperature data and processing methods are increasingly coming from non traditional scientific sources (non peer reviewed) and the issue raised may be too numerous and too frequent for a small group of traditional scientists to address”
“There is a growing interest in the nature of surface temperature data (reaching up to the highest levels of government)”
and Peter Thorne, who wrote [from Agreed outcomes – Peter Thorne]:
“Usage restrictions
Realistically we are not suddenly going to have open unrestricted access to all withheld data. In some areas this is the majority of the data.”
There are very important admissions in these presentations. First, outside of the USA, there is inadequate (or no) publicly available information on station histories, yet these data are still used to create a “homogenized” global average surface temperature trend which reaches up to the “highest levels of government”. Even in the USA, there are undocumented issues.
While the organizers of the Exeter meeting are seeking to retain their leadership role in national and international assessments of the observed magnitude of global warming, it is clear that serious problems exist in using these data for this purpose. We will post information on several new papers, when they are ready, to introduce readers of this weblog to the quantification of additional systematic biases in the use of these data for long-term land surface temperature trend assessments.
There is a need, however, to accept that the primary metric for assessing global warming and cooling should be upper ocean heat content, since from 2004 onward the spatial coverage is clearly adequate for this purpose (e.g. see). While there is, of course, a need for regional land surface temperature measurements, including anomalies and long-term trends, for obtaining a global average climate system heat content change the oceans are clearly the more appropriate source of this information.
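Menne’s “slight deviations from random” point above is easy to see in a toy simulation; here is a minimal sketch, with every number invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_stations = 5000

# Purely random station-change impacts (mean zero) nearly cancel in a
# large-network average...
random_shifts = rng.normal(0.0, 0.5, n_stations)  # deg C, made-up spread

# ...but even a slight systematic lean survives the averaging unchanged.
biased_shifts = random_shifts + 0.05              # hypothetical +0.05 C bias

print(f"network mean, random impacts: {random_shifts.mean():+.3f} C")
print(f"network mean, slight bias:    {biased_shifts.mean():+.3f} C")
```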
Nick Stokes says:
September 21, 2010 at 3:57 am
“No, the data is what it is. The NOAA weren’t responsible for the measurements; they have to do the best analysis they can with the data we have. Imperfections have been acknowledged all along. What do you think they should do?”
Ahhh, it’s the old “It’s not my dog” explanation.
I would still like to know why my tax dollars are going to this so-called research? Poor data is worse than no data.
And inaccessible data is the same as manipulated data.
A bit of fact-free denigration of NOAA here. Is anyone able to answer the question – what were these “admissions”? Prof P Sr is eventually emboldened to make a list. He gets to the first, which turns out to be nonsense, which he really should have known. And there doesn’t seem to be a second.
Nick Stokes says:
September 21, 2010 at 3:57 am
Brownedoff says: September 21, 2010 at 2:11 am
….
“No, the data is what it is. The NOAA weren’t responsible for the measurements; they have to do the best analysis they can with the data we have. Imperfections have been acknowledged all along. What do you think they should do?”
Nick – the answer to this question was obvious to me from the first day I followed Anthony’s surface stations project at Roger Pielke Sr.’s original blog. NOAA should have been the agency to go out and investigate ALL of the USHCN sites!! The sad fact of the matter is that people like Karl and Peterson were the ones who pooh-poohed the whole surface stations idea (along with their like-minded buddies at NASA) and treated Anthony like crap. When pictures like the one that accompanies this post started showing up, they started panicking and waving their hands to try to make it go away (remember, UHI doesn’t matter anymore, right?).
By the way, has NOAA published their climate data analysis code? Has their code been validated? Do we have access to ALL of their adjustments, detailed numerical algorithms, and code? If not, what are they afraid of???
That tombstone is NOAA’s. It passed away on November 19th, 2009, the unfortunate day when the “Climate-Gate” emails were released.
John Q Public says: “Poor data is worse than no data.”
Ahh… well, it all depends on whether it is poor-poor data or good-poor data.
Radiation decay is “poor” data because it is so random. However, if we can control everything except the one thing that makes it poor, it becomes “good poor” data.
In theory, global temperature data is poor data which, under the right conditions and with the right frame of mind, could have had most of the variables that made it poor tightly controlled so that, like radiation decay rates, given enough time the good data showed through.
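A minimal sketch of that idea (all numbers made up): each decay count is Poisson-random, but with everything else held fixed, averaging lets the signal show through.

```python
import numpy as np

rng = np.random.default_rng(42)
true_rate = 100.0  # hypothetical true decay count per interval

# Each individual count is Poisson-random: "poor" data on its own.
counts = rng.poisson(true_rate, size=10_000)

print(f"one reading:          {counts[0]}")
print(f"mean of 10 readings:  {counts[:10].mean():.1f}")
print(f"mean of 10,000:       {counts.mean():.2f} (true rate = {true_rate})")
# With everything else controlled, the noise shrinks roughly as 1/sqrt(N)
# and the "good" signal shows through the "poor" data.
```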
But when you have a bunch of cretins with an agenda manipulating the data to suit their political bias and wish for more grant money … even good data becomes poor!
I second the call for using upper ocean heat content as the primary metric for measuring overall change, whether it be warming or cooling or no trend at all. This is long overdue.
“It’s a trick. Get an axe!”
— Ash, from housewares
Nick Stokes says:
September 21, 2010 at 3:57 am
Brownedoff says: September 21, 2010 at 2:11 am
….
No, the data is what it is. The NOAA weren’t responsible for the measurements; they have to do the best analysis they can with the data we have. Imperfections have been acknowledged all along. What do you think they should do?
Start at the beginning like real scientists do. Otherwise it’s “gigo city”, baybeeee! And as proven.
What is it with government agencies and graves? First we’ve got the mismanagement of Arlington Cemetery, with old gravestones being used for erosion control (with the names of the dead still visible) and bodies buried who knows where; now we see a weather station built right on top of a grave. That’s got to be a joke!
Nick Stokes says: September 21, 2010 at 1:19 am
This is a bizarre post. I can’t see what these new “admissions” are.
It tells me the best dataset is probably that of the U.S., and it is crap. That admission is news to me; perhaps you already knew that, though.
Ian W:
Why do climatologists in both warming and sceptic camps insist on using the incorrect metrics? Surely, it cannot be out of ignorance.
One quite valid sceptic tactic is to “beat them at their own game”: grant/ignore their [probably incorrect] hypothesis, then show that their method is defective in some way or other regardless. I think that’s also Steve McIntyre’s approach in doing his “audits”.
R.S.Brown says:
September 21, 2010 at 12:36 am
The antlers and the rake on top should compensate for
the flagstones at the base… right ?
Oh wow, I thought this was a joke but then I loaded the high resolution image and it’s true — there really are antlers and a rake on top! Talk about white trashy. Now we just need to find some bullet holes and a few discarded cans of Coors Light to make the picture complete.
As in the unfortunate Thompsons’ case, an “odour assessment” of NOAA surface stations would be advisable… LOL!
It’s hard to believe the comedy levels reached by global warmers/climate changers.
However, they stubbornly insist on it. Just yesterday, he who bears the mark on his forehead, Ban Ki-moon, spoke at the UN of the need to fight climate change.
I think as a minimum Adena and Jesse Baldwin deserve having their dignity restored. It’s too late to save Hanksville.
to Simpleseekeraftertruth:
In Q38:
Q: “Do you think that the CRU scientists are people of integrity but out of their depth when it comes to statistical analysis?”
A: “You are quite right. We were fortunate in having a very eminent statistician on our panel. …….. And he was really quite serious in saying that this was not the best way to do it. Having said that, even though they had adopted their own methods, he said, actually, had they used more sophisticated and state of the art methods, it wouldn’t have made a great deal of difference.”
There is that fantasy word again, “sophisticated”!
OE Pocket Dictionary, 1925: “sophisticated”… spoil the simplicity or purity or naturalness of, corrupt or adulterate or tamper with! Sounds like a pretty good definition to me, & if we get high-brows like Lord Oxburgh using these words without fully appreciating the origins of their definitions then what hope is there?
BTW, I have not been a particular fan of the honours system for many years; it has a tendency to reward those who have done the government’s bidding, at least since June 1997!
Check the paint on that station. Or is that dirt?
If a switch from whitewash to paint can change albedo enough to alter temps, what does the browning of that weather box do to its accuracy?
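A crude back-of-envelope comparison of the extra solar power a darker skin soaks up – the albedo values here are guesses, not measurements, and some of that extra heat leaks through to the thermometer inside:

```python
# Crude sketch: extra solar power absorbed by a darker shelter skin.
S = 800.0  # assumed incident solar flux, W/m^2 (a guess for clear midday)

for label, albedo in [("fresh whitewash", 0.80),
                      ("weathered latex", 0.65),
                      ("browned/dirty", 0.50)]:
    absorbed = S * (1.0 - albedo)
    print(f"{label:16s} albedo {albedo:.2f} -> absorbs ~{absorbed:4.0f} W/m^2")
```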
Mike Haseler says:
September 21, 2010 at 1:41 am
I have this theory that antlers on top of thermometer boxes attract grave stones. I have a data set that shows it is true (forget YAD06). We must prevent this sort of thing from happening so we need to institute a tax program to prevent the calamity. The world is pursuing a CO2 agenda as foolish as my speculation. I can see the motive for profit in AGW but I must admit I cannot see the profit in antlers on thermometer boxes.
Is there a way to group any stations that have a long historic record, say 50-100 years of constant use with old thermometers? This nonsense about calibration is a red herring; what we would look for is change, since change is the subject matter of climate change. Criteria would include: using the same thermometer and the same SOP for time of measurement, no movement of the station over time, and no construction during the period in question.
I am an analytical chemist, and we use rate of change when calibration is not available. Non-parametric statistics gives the result.
If we choose those long-standing stations, say at temperate latitudes, one set from the NH and one from the SH, we would only need a small sampling to get a statistical result, perhaps n=7 or n=11. A first pass might compare decadal averages to see if one decade is different from another; then proper trend analysis could be done (see the sketch below).
Is this quick-and-dirty look possible? Has it been done and I missed it?
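For the “proper trend analysis” step, something like a Mann-Kendall test would fit – it uses only the sign of pairwise changes, so absolute calibration drops out. A rough sketch (the station numbers are invented purely to show the mechanics):

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Non-parametric Mann-Kendall trend test: uses only the sign of
    pairwise changes, so absolute calibration drops out."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S counts concordant minus discordant pairs
    s = sum(np.sign(x[j] - x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0   # variance assuming no ties
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2.0 * (1.0 - norm.cdf(abs(z)))          # two-sided p-value
    return s, z, p

# Hypothetical decadal mean anomalies (deg C) for one long-record station;
# the numbers are made up, n = 7 decades.
decadal_means = [0.02, -0.05, 0.10, 0.08, 0.21, 0.18, 0.30]
s, z, p = mann_kendall(decadal_means)
print(f"S = {s:.0f}, z = {z:.2f}, two-sided p = {p:.3f}")
```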
I wonder if someone with a sense of humor wanted someone to walk on their grave everyday. Maybe they wanted to know if they’d still feel it.
☺
Anthony, who can argue this – except that with great coverage only since 2004, what good does it do, except as “weather”? With such a short period, what do we compare it against? When you come down to it, isn’t climate – and AGW – all about comparing?
And with past records sparser and sparser for ocean heat content, don’t we get into proxies more and more (and more quickly, no?) the further back we go, with their attendant uncertainties? Do we just change one set of badly measured data for another?
I don’t care about HOW many sediment cores and coral measurements they make; extrapolating local or global climate from them is still reading into it more than is really there to be read.
Nick,
The irony is that something like Exeter tries to address complaints by making raw data and adjustment processes more easily available and transparent, and the only message that gets promulgated on the skeptic blogs is that the GHCN metadata is often poor (surprise!).
I’d strongly encourage folks to read through all the Exeter presentations. People like John Christy, Ian Jolliffe, Peter Thorne, Matt Menne, and others are all trying to address many of the prior criticisms of the surface temperature record and data availability. You should at least make a modicum of effort to figure out and honestly appraise what their goals are.
Zeke, the REAL irony is that the people who have been actively studying the metadata issue (Pielke, myself, and others) were not invited to attend or present our findings.
Typical shutout, as we’ve seen before in climate science – and once again Menne plagiarizes my work and that of surfacestations volunteers without so much as giving attribution.
What a travesty
bubbagyro,
Nick Stokes took a stab at the approach you are suggesting awhile back: http://moyhu.blogspot.com/2010/05/just-60-stations.html
This really is a can of worms. Several directions this comment can go.
One is that they have KNOWN about undocumented station changes for a long time, probably forever.
Two is that stations known to have undocumented changes (I suppose they are discovered some time later?) should be excluded from the GHCN database.
Three, given that 80% of stations were dropped, what effort was made to assure that the retained ones were “high quality”? One would certainly think that would have been a top priority for retaining a station in the GHCN database. If not, they have no one to blame but themselves for the fine mess they’ve gotten themselves into. (My main reaction is, “And they call themselves scientists????”)
Four, the statement
“Critiques of surface temperature data and processing methods are increasingly coming from non traditional scientific sources (non peer reviewed) and the issue raised may be too numerous and too frequent for a small group of traditional scientists to address”
is total crap. The “small group of” climatologists are not the ones who need to do any of the work to provide this. That should be done by the attendants at the met stations. That is not a monumental task for any of them. Collectively, yes, it is a lot of effort – individually, no, it is only a few minutes (or hours, at most) for someone to track this stuff down and email it in.
It seems like they are just beginning to see the fine mess they are in, and now it will be decades of get-togethers while they determine what to do about it.
And it never would have been addressed at all, if it weren’t for those “non traditional scientific sources.”
And BTW, THAT statement should be its own posting here!
In saying “non traditional scientific sources,” he was obviously NOT referring to the WWF, since he is talking of critiques. And he can’t be referring to CA, because SteveM is part of the peer-reviewed scientific community. Is he actually referring to WUWT? Is he calling WUWT a “scientific source”? (albeit “non traditional”)
If so, it is crossing a line I never thought I’d see.
You didn’t actually EXPECT them to credit you, did you?
It is completely typical of scientists: ONLY attribute other peer-reviewed folks.
In American archeology that has been rampant for over a century – if an amateur finds anything, it is ignored or dismissed (often as fraudulent), and the site itself is put on a sh__ list. (In Europe and Asia that is not nearly so true.)
If non-peer-reviewed work is to be used, they claim the right to “do it right” – repeating the work themselves, now that someone has brought up the issue. Watch. There will be papers.
“If you point it out, they will come.”
But don’t expect them to thank you. They DO shoot messengers, don’t they?