New independent surface temperature record in the works

Good news travels fast. I’m a bit surprised to see this get some early coverage, as the project isn’t ready yet. However, since it has been announced by the press, I can tell you that this project is partly a reaction to, and a result of, what we’ve learned in the surfacestations project. Mostly, though, it is a reaction to many of the things we have been saying time and again, only to have NOAA and NASA ignore our concerns, or create responses designed to protect their ideas rather than consider whether those ideas were valid in the first place. I have been corresponding with Dr. Muller, have been invited to participate with my data, and when I am able, I will say more about it. In the meantime, you can visit the newly minted web page here. I highly recommend reading the section on methodology here. Longtime students of the surface temperature record will recognize some of the issues being addressed. I urge readers not to bombard these guys with questions. Let’s “git ‘er done” first.

Note: since there’s been some concern in comments, I’m adding this: Here’s the thing, the final output isn’t known yet. There’s been no “peeking” at the answer, mainly due to a desire not to let preliminary results bias the method. It may very well turn out to agree with the NOAA surface temperature record, or it may diverge positive or negative. We just don’t know yet.

From The Daily Californian:

Professor Counters Global Warming Myths With Data

By Claire Perlman

Daily Cal Senior Staff Writer

Global warming is the favored scapegoat for any seemingly strange occurrence in nature, from dying frogs to hurricanes to drowning polar bears. But according to a Berkeley group of scientists, global warming does not deserve all these attributions. Rather, they say global warming is responsible for one thing: the rising temperature.

However, global warming has become a politicized issue, largely becoming disconnected from science in favor of inflammatory headlines and heated debates that are rarely based on any science at all, according to Richard Muller, a UC Berkeley physics professor and member of the team.

“There is so much politics involved, more so than in any other field I’ve been in,” Muller said. “People would write their articles with a spin on them. The people in this field were obviously very genuinely concerned about what was happening … But it made it difficult for a scientist to go in and figure out that what they were saying was solid science.”

Muller came to the conclusion that temperature data – which, in the United States, began in the late 18th century when Thomas Jefferson and Benjamin Franklin made the first thermometer measurements – was the only truly scientifically accurate way of studying global warming.

Without the thermometer and the temperature data that it provides, Muller said it was probable that no one would have noticed global warming yet. In fact, in the period where rising temperatures can be attributed to human activity, the temperature has only risen a little more than half a degree Celsius, and sea levels, which are directly affected by the temperature, have increased by eight inches.

Photo: Richard Muller, a UC Berkeley physics professor, started the Berkeley Earth group, which tries to use scientific data to address the doubts that global warming skeptics have raised. (Javier Panzar/Staff)

To that end, he formed the Berkeley Earth group with 10 other highly acclaimed scientists, including physicists, climatologists and statisticians. Before the group joined in the study of the warming world, there were three major groups that had released analysis of historical temperature data. But each has come under attack from climate skeptics, Muller said.

In the group’s new study, which will be released in about a month, the scientists hope to address the doubts that skeptics have raised. They are using data from all 39,390 available temperature stations around the world – more than five times the number of stations that the next most thorough group, the Global Historical Climatology Network, used in its data set.

Other groups were concerned with the quality of the stations’ data, which becomes less reliable the earlier it was measured. Another decision to be made was whether to include data from cities, which are known to be warmer than suburbs and rural areas, said team member Art Rosenfeld, a professor emeritus of physics at UC Berkeley and former California Energy Commissioner.

“One of the problems in sorting out lots of weather stations is do you drop the data from urban centers, or do you down-weight the data,” he said. “That’s sort of the main physical question.”

Global warming is real, Muller said, but both its deniers and exaggerators ignore the science in order to make their point.

“There are the skeptics – they’re not the consensus,” Muller explained. “There are the exaggerators, like Al Gore and Tom Friedman who tell you things that are not part of the consensus … (which) goes largely off of thermometer records.”

Some scientists who fear that their results will be misinterpreted as proof that global warming is not urgent, such as in the case of Climategate, fall into a similar trap of exaggeration.

The Berkeley Earth Surface Temperature Study was conducted with the intention of becoming the new, irrefutable consensus, simply by providing the most complete set of historical and modern temperature data yet made publicly available, so deniers and exaggerators alike can see the numbers.

“We believed that if we brought in the best of the best in terms of statistics, we could use methods that would be easier to understand and not as open to actual manipulation,” said Elizabeth Muller, Richard Muller’s daughter and project manager of the study. “We just create a methodology that will then have no human interaction to pick or choose data.”

205 Comments

Adam Gallon
February 11, 2011 8:02 am

Oh please!
“We intend to provide an open platform for further analysis by publishing our complete data and software code. We hope to have an initial data release available on this website in early 2011.”
What sort of science is this? (Big winky!)

Area Man
February 11, 2011 8:09 am

This is a wonderful development. Congratulations on your (large) role.
Minor typo – “… protest their ideas” should be “… protect their ideas” I believe.
REPLY: Fixed, thanks. Hansenian-Freudian slip, I suppose – Anthony

RickA
February 11, 2011 8:09 am

This is great news.
I await their report eagerly.
Personally, I think we should also be setting up new automated stations to obtain more uniform coverage, which we can use to build 30-60 years worth of high quality new data.

Jack Maloney
February 11, 2011 8:12 am

“We just create a methodology that will then have no human interaction to pick or choose data.”
In creating a methodology, human interaction to pick or choose data is inevitable. One can only hope that the BEST Study methodology is more transparent than the current ones. And that its authors are more open to constructive criticism.

Doug Proctor
February 11, 2011 8:15 am

Finally! With all the talk of dropped stations, uncorrected UHIE, inappropriate and biased corrections to rural stations, older stations, the changed bias of urban for rural, and loss of high altitude and high latitude data – finally a group with no (apparent) financial connection to the IPCC or Chevron (/sarc!!) will create a database. I think.
If the group started first with New Zealand as a “test”, we would know the way of the future. NIWA and BOM (Australia) have a discarded dataset followed by a result that looks just the same, and not at all like the raw datasets that we were shown by the NZScCoalition. Hmmm.
I’ll sure look to the New Zealand subset with interest. If NIWA is supported, then we’ll have to wonder if the GISTemp is all that bad ….

sharper00
February 11, 2011 8:15 am

“but mostly, this project is a reaction to many of the things we have been saying time and again, only to have NOAA and NASA ignore our concerns”
Aren’t your concerns supposed to have been published as a proper analysis a long time ago? It’s now two years since you published your conclusions in “Is the U.S. Surface Temperature Record Reliable?” and one year since the Menne analysis.
Where is the actual analysis that demonstrates the concerns that NASA and NOAA should be paying attention to?
REPLY: We have a paper in peer review, note the difficulties encountered by O’Donnell et al with a hostile reviewer, Steig, and perhaps then you’ll understand why skeptical papers can take much longer to run the gauntlet. Besides, it took us three years with volunteers to get a large enough sample. Menne used preliminary data (mine against my protests), and a sample that was not spatially representative nor contained enough class1-2 stations. That paper was pure politics.
If you can do a better job with zero budget, herding volunteers, in your spare time, for no pay, against a well funded government sponsored consensus, by all means do it. Otherwise wait for our paper. – Anthony

dp
February 11, 2011 8:18 am

On the face of it this is not what I expect from the denizens of my old home town. Be prepared to learn that any 1000 randomly chosen thermometers selected from the full set and calibrated over time tell the same story as all 39,000 thermometers similarly calibrated. The alarmist science will, at the end of the day, stand. ±0.1 ºC

JimBrock
February 11, 2011 8:18 am

Being as how we are in an interglacial warming period, the earth SHOULD be getting warmer. As it has in earlier interglacial periods.
Sometimes I think a PhD replaces common sense. A number of years ago, a VP of our company brought me a memo prepared by one of his PhD team. It was an analysis of a proposal we had received from an outside expert. The analysis was cogent, complete and … negative. The PhD clearly showed, through analysis, that the proposal could not work. Then he concluded with his recommendation: that we put the guy on a consulting contract to work on it.
The VP and I had a good laugh over it. I told him I thought the PhD was doing good until he got to the end…where he had to exercise good judgment … and common sense.

February 11, 2011 8:20 am

Anthony, I look forward to reading your impending public vindication. Many thanks for this site.
REPLY: Here’s the thing, the final output isn’t known yet. There’s been no “peeking” at the answer, mainly due to a desire not to let preliminary results bias the method. It may very well turn out to agree with the NOAA surface temperature record, or it may diverge positive or negative. We just don’t know yet. – Anthony

Patrick Davis
February 11, 2011 8:24 am

I call this BS.
“The Berkeley Earth Surface Temperature Study was conducted with the intention of becoming the new, irrefutable consensus, simply by providing the most complete set of historical and modern temperature data yet made publicly available, so deniers and exaggerators alike can see the numbers.”
Is this just US based? Or global? How is this possible considering the cat ate the raw, real, unadjusted data? Also, note the use of the word “deniers” alongside “exaggerators”. To me it seems their “program” is set firmly in the AGW, alarmist, camp.

pat
February 11, 2011 8:28 am

“Some scientists who fear that their results will be misinterpreted as proof that global warming is not urgent, such as in the case of Climategate, fall into a similar trap of exaggeration.”
You would think they would be happy if “urgency” was not needed.

Ken G
February 11, 2011 8:32 am

Eh…
What happens if the results they get don’t fall somewhere between the “deniers” and “exaggerators” as expected?
They are already making value judgments on others’ positions, and it sounds like they are starting off with an expected conclusion before they’ve even begun. I’m betting they will find exactly what they are looking for, and that it will fall right in line with this “consensus” view they mention … and we will be nowhere closer to the truth.
The comments in the article give me little hope this will be an unbiased approach.

Oslo
February 11, 2011 8:32 am

Unfortunately, I have my doubts.
Of all places in the world, and of all universities in the world, how likely is it that a policy-free and objective temperature record would come out of Berkeley, San Francisco, the perennial hotspot of left-wing activism?
Of course nothing wrong with left-wing politics. Being Scandinavian, I share many of their ideas, but not their ideas on AGW.
Could this be the outcome of the soul-searching of the warmists in the wake of Climategate: let’s make another temperature record, this time seemingly objective, seemingly open and cooperative towards the deniers, but ultimately confirming the good old story.
I don’t know. I have seen too much to be gullible.
After all – being a skeptic means not believing until there is a very good reason to.
I still prefer it that way.

the_Butcher
February 11, 2011 8:33 am

“They are using data from all 39,390 available temperature stations around the world ”
By that they mean those dodgy weather stations behind Jet engines?

DJ
February 11, 2011 8:34 am

Sorry, but I think we’re off on the wrong foot already, if the article itself isn’t biased towards AGW.
The statements ” the skeptics – they’re not the consensus”, “..intention of becoming the new, irrefutable consensus”, and “not as open to actual manipulation..” Not AS open?? But still open to manipulation, and by whom?
I love the pretext of the project, but it smacks of a built in bias, and has all the colors of a new premise for a grant proposal machine.
Now if there were a consortium of scientists from both sides of the discussion, I’d be less concerned.

Cold Englishman
February 11, 2011 8:35 am

How about starting with a little less inflammatory language viz. deniers deniers deniers.
Not a very good start if I may say so.

Robb876
February 11, 2011 8:36 am

….”Global warming is real, Muller said, but both its deniers and exaggerators ignore the science in order to make their point.”…..
Nice phrase…

commieBob
February 11, 2011 8:37 am

Even just having all the raw data in a well organized, easy to use format would be a great improvement.
Once everyone can use the raw data, there can be an intelligent discussion on how to use it.

SSam
February 11, 2011 8:37 am

I don’t have a lot of hope for this. I see it as yet another justification for a job. Why do I say that? Here, take a look:
“conducted with the intention of becoming the new, irrefutable consensus
Science is not the consensus of a group of people. That is a “collectively perceived reality.”
Science is data. Data either supports an idea, or destroys it. It does this whether you do or do not agree.

sharper00
February 11, 2011 8:39 am

@Anthony
“Otherwise wait for our paper. – Anthony”
Well yes that was my point! I have been waiting for the paper for a long time! The only updates I get on its status are when you occasionally make reference to it in comments here.
If the paper is done and submitted (to where?) then that’s great news and I look forward to reading it.
To go back to your initial complaint about NASA and NOAA – the publication of your analysis showing there’s a problem for them to be concerned about is the starting point to them addressing it. It seems unreasonable to me to criticise them for not fixing a problem you (or anyone else) have yet to demonstrate.

wsbriggs
February 11, 2011 8:40 am

Congratulations Anthony! A good dataset is a requirement to understand our globe.
There are a number of things in the description of their methodology which caused me an involuntary flinch, but I’ll withhold judgment until I see the results. One particular item was treating local datasets with differences as lower weighted outliers after the areas of agreement were removed. After the examples which we’ve seen on this site, I’m not sure that lower weighting is appropriate. Still, this is a major step toward forcing a real focus on the physical basis of climate, and the data which documents it.

R. Gates
February 11, 2011 8:46 am

So long as the data is gathered and applied consistently, this can only be a good thing. More accurate information is always better than less. I look forward to seeing the data. It is interesting to note that Dr. Muller is not denying the existence of global warming, but wants to counter the extremes at both ends of the AGW debate with more accurate data.

Feynman
February 11, 2011 8:46 am

Sorry, it seems that they ‘know’ in advance what the result will be, so the result will be biased. That’s not science.
Quote: “Global warming is real, Muller said”

Espen
February 11, 2011 8:51 am

Are they going to publish yet another “global mean temperature”? I’m surprised that physicists are willing to work with that kind of metric: since the actual heat content (the enthalpy) of the air depends on its water vapor content, which is itself highly temperature dependent, a world that has a large region of +10 anomaly in the Canadian Arctic has been heated much less than a world that has an equally large +10 anomaly in the tropics or subtropics. Yet both come out the same if you calculate the mean temperature.
(See also http://pielkeclimatesci.wordpress.com/2005/07/18/what-does-moist-enthalpy-tell-us/ )
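To put rough numbers on Espen’s point, here is a minimal back-of-the-envelope sketch in Python. It is not part of the Berkeley methodology; the constants and the Magnus saturation formula are standard textbook approximations, and saturated air is assumed at both temperatures purely for illustration.

# Rough moist-enthalpy comparison: h ~= cp_d*T + L_v*q, in J per kg of air.
import math

CP_D = 1005.0      # dry-air specific heat, J/(kg K)
L_V = 2.5e6        # latent heat of vaporization, J/kg
P = 101325.0       # surface pressure, Pa

def sat_vapor_pressure(t_c):
    # Magnus approximation, returns Pa
    return 611.2 * math.exp(17.62 * t_c / (243.12 + t_c))

def specific_humidity(t_c):
    e = sat_vapor_pressure(t_c)          # assume saturation (RH = 100%)
    return 0.622 * e / (P - 0.378 * e)   # kg water vapor per kg moist air

def moist_enthalpy(t_c):
    return CP_D * (t_c + 273.15) + L_V * specific_humidity(t_c)

for t in (-30.0, 25.0):
    dh = moist_enthalpy(t + 1.0) - moist_enthalpy(t)
    print(f"+1 C starting from {t:+6.1f} C adds about {dh:6.0f} J/kg")

The same one-degree rise implies roughly four times more energy in warm, humid air than in cold, dry air, yet the two contribute identically to a simple mean-temperature index.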

steveta_uk
February 11, 2011 8:57 am

This is going to get exciting – there’s no chance that everyone will simply say “well, now we know” and act rationally on the information.
If the data confirms Hansenesque warming, most sceptics simply won’t believe it. And if it shows negligible warming, and in particular shows no accelerated warming in recent decades, then the hockey team et al won’t believe it.
So will it resolve anything?

David Davidovics
February 11, 2011 8:58 am

Anthony,
I have my doubts too. With words like ‘consensus’ and ‘denier’ along with phrases like “global warming is real”, it’s hard for me to be optimistic even if they are offering you a seat at the table, so to speak.
As Steve McIntyre would say; Watch the pea under the thimble (very – VERY closely!!!!)
It could simply be that language like this was used to get the attention from a biased media and scientific community but whatever you do, don’t let your guard down.
REPLY: The word “deniers” was added by the reporter. And global warming “is” real. We expect some warming; my view is that it is exaggerated for political purposes. The key is to find out what the true signal is. – Anthony

Oslo
February 11, 2011 8:58 am

Once the “deniers” are aboard, take them for a ride through the bushes and back to the “consensus”.

Larry Hamlin
February 11, 2011 9:00 am

In reading the methodology material I was unable to determine how the Urban Heat Island (UHI) impacts are going to be addressed in this study. I would hope that this critical issue is to be evaluated in this independent temperature data study.
Is that the case? If so can someone please explain where this issue is addressed in the methodology. Thanks.

Elizabeth
February 11, 2011 9:00 am

Conclusions about bias must not be drawn until the report is published. This article was published in The Daily Californian, not written by the researchers. The fact that these researchers will publish all of their data and code and, as well, attempt to resolve issues uncovered in the surfacestations project is a good start. Rather than criticize this attempt at objective scientific inquiry, the sceptics’ job is to later critique their methodology.

Claude Harvey
February 11, 2011 9:01 am

Re:Feynman says:
February 11, 2011 at 8:46 am
“…it seems that they ‘know’ in advance what the result will be….”
That one jumped out at me as well. I was feeling really good about the prospects for “truth”, whatever it may be, coming out of this exercise until the professor made his “warming is a fact” statement. Hopefully, the mechanisms promised for removing the biases of the observer from the results will serve to make the professor’s stated bias irrelevant.

klem
February 11, 2011 9:06 am

I think this is fantastic. The data will be collected openly, we hope, and anyone who wants access will have it, allowing them to spin it any way they want (as expected). At least it will be available. The problem here, though, is the same problem that has always existed: thermometers will provide evidence that the climate changes; they will not be able to show that CO2 is the cause. It’s good though.

DonS
February 11, 2011 9:08 am

So what’s new here? Apparently the globe is warming and the seas have risen and neither of those is what the fight is about anyway.
“There are the skeptics – they’re not the consensus,” Muller explained. “There are the exaggerators, like Al Gore and Tom Friedman who tell you things that are not part of the consensus … (which) goes largely off of thermometer records.”
What science is going to be done when the top dog is already using “consensus” twice in one paragraph?
BS

Oslo
February 11, 2011 9:10 am

If any trusted “denier” or “skeptic” is receiving any sort of grant or “compensation” for participating in a project such as this, it should be publicly declared beforehand.
Otherwise we will have “Deniergate” next.
Choose your friends well, brothers and sisters of “denialism”.

February 11, 2011 9:10 am

I suggest that there will be at least 4 factors that will determine whether or not this new effort will have any merit:
1/ How will the UHI effect be handled for cities? UHI “corrections” present a prime opportunity to introduce fudge factors to get the results one would like to see.
2/ Will so-called homogenization be used allowing temperature to differ from the actual measurements made at specific sites? The correct approach would be to constrain any fit to reproduce temperatures actually measured at all input sites.
3/ Will the data be interpolated over vast regions for which there is no site data? Integrating over such regions to calculate a “global temperature” can result in large systematic biases.
4/ Will the global heat content [the moist enthalpy as Espen pointed out above quoting Dr. Pielke] be reported along with the global temperature?

Roger Knights
February 11, 2011 9:13 am

Cold Englishman says:
February 11, 2011 at 8:35 am
How about starting with a little less inflammatory language viz. deniers deniers deniers.

I propose Scam Scoffers.

bobbyj0708
February 11, 2011 9:13 am

I’ve read Muller’s book “Physics for Future Presidents” and I enjoyed it quite a bit, that is until I got to the chapter on global warming. Up to that point the book had been a fine presentation on our abilities and limitations in energy, space, terrorism, etc based upon actual physics but when I got to the global warming chapter it was “AGW is real, trust me I know what I’m talking about”.
Color me skeptical on Muller’s ability to remain impartial and not be swayed toward the AGW corner.
But there is a YouTube video of a lecture from Muller where he talks about how the Hockey Team lied; he seemed quite disgusted with them, since he’d relied upon what they said to form his opinions on AGW, so maybe he has turned a corner. But he also starts the lecture with a bunch of AGW stuff, so who knows…

Spen
February 11, 2011 9:16 am

As discussed many times previously surely ocean heat content is the key metric not surface temperatures.

Alan Clark
February 11, 2011 9:17 am

RickA says:
February 11, 2011 at 8:09 am
Personally, I think we should also be setting up new automated stations to obtain more uniform coverage, which we can use to build 30-60 years worth of high quality new data.

I agree completely Rick. Individuals have the ability today to have a home weather station feeding data into their home computers that could be feeding data to a commingled database elsewhere, freely accessible by all. Ten years from now and beyond, we would have some serious raw data that would be very useful in establishing trends and verifying current projections.

kwik
February 11, 2011 9:17 am

If the database is open for anyone to read (maybe for a small fee), then the raw data can be checked.
And if anyone then can write a program to plot the data, then why not?
And if that organisation wants to do their own plots, and their software for doing so is accessible for all to compile and run, then I’d say, this is good news.
What is there not to like? The language above is just to be accepted by today’s AGW camp, methinks.
They are in the same situation as a Darwinian scientist under Lysenko.
How do you get a plan approved with the apparatchiks watching you?

Richard S Courtney
February 11, 2011 9:18 am

sharper00
Your comments to Anthony Watts at February 11, 2011 at 8:15 am and February 11, 2011 at 8:39 am are offensive in the extreme. You owe him an apology.
Your first post questioned why Anthony Watts’ paper on the NASA and NOAA global temperature data sets had yet to be published. And he answered that completely. I copy that answer here in full to save others the task of finding it: his reply said;
“We have a paper in peer review, note the difficulties encountered by O’Donnell et al with a hostile reviewer, Steig, and perhaps then you’ll understand why skeptical papers can take much longer to run the gauntlet. Besides, it took us three years with volunteers to get a large enough sample. Menne used preliminary data (mine against my protests), and a sample that was not spatially representative nor contained enough class1-2 stations. That paper was pure politics.
If you can do a better job with zero budget, herding volunteers, in your spare time, for no pay, against a well funded government sponsored consensus, by all means do it. Otherwise wait for our paper. – Anthony”
Your second post ignored that and complained at Anthony Watts by saying;
“To go back to your initial complaint about NASA and NOAA – the publication of your analysis showing there’s a problem for them to be concerned about is the starting point to them addressing it. It seems unreasonable to me to criticise them for not fixing a problem you (or anyone else) have yet to demonstrate.”
Say what!?
Anthony Watts had told you that “We have a paper in peer review”. In other words, his demonstration of the “problem” is submitted for publication, so he HAS demonstrated a “problem”. Furthermore, he has repeatedly presented aspects of the “problem” on this blog and a few minutes search would have shown them to you. And if you were to accept his words you would wait for the submitted paper to be published and then – in the unlikely event that you were capable – you could dispute it.
Put another way, you are calling Anthony Watts a liar by claiming he has “yet to demonstrate” a “problem” when he told you he has demonstrated it.
To use your words, it seems unreasonable to me for you to criticise him for your unjustified and unjustifiable refusal to believe his veracity, and your public statement that you do not believe him casts aspersions on his veracity. To my mind that means your behaviour is despicable.
Richard

Allen
February 11, 2011 9:24 am

Sheesh people! You’re reacting to a media article written by a non-scientist with some rhetorical flair. What did you expect from her?
Let’s give our colleagues the time to report on the results of their work. Then we can unleash our examination of the science.

Mike Haseler
February 11, 2011 9:24 am

A cautious welcome, but I still have my doubts: garbage in, garbage out. If the global temperature station network is still full of unknown errors, the result will not be reliable, and there’s no sign of validating the effects of, e.g., the manual-to-automated transition, when a lot of stations were moved closer to a power source and therefore closer to heat.
On the good side: An institution like Berkeley can’t afford to get it wrong (unlike the UEA)
On the bad side: so many other institutions who ought to have known better have just fluffed the science, so what’s to stop this being another one?

Dave W
February 11, 2011 9:25 am

I didn’t see how the urban heat island effect was to be addressed. Was it there somewhere? Also I think that avoiding all gridding removes some problems but creates a whole lot of others.
Still at least if they publish their raw data and tell us exactly what they have done with it, they are way ahead of most of the others in this field.

beng
February 11, 2011 9:32 am

*****
“We believed that if we brought in the best of the best in terms of statistics, we could use methods that would be easier to understand and not as open to actual manipulation,” said Elizabeth Muller, Richard Muller’s daughter and project manager of the study. “We just create a methodology that will then have no human interaction to pick or choose data.”
*****
Lofty goals — I hope they try to fulfill them. The devil will be in the details.
One wonders, tho, if the cultural-conformity conditions at Berkeley U (or most any other university) would permit this. Any results other than the party-line would stir up a green-hornet’s nest.

Oslo
February 11, 2011 9:33 am

Kwik:
What is “raw data” anyway?
Who reads thermometers, and in what way? Who gathers the results, who plots them, who sends them over to CRU and NOAA?
In Mozambique: how hard is it to let it shine through that you implicitly would like to see warming, and that a certain small grant depends on fulfilling a few simple expectations?
After all: most of the warming seems to happen where there are a) few people living or b) poor people living.
Is this a coincidence?

don
February 11, 2011 9:33 am

I’m sorry, but why is NASA doing climate change measurements and Muslim outreach? Looks to me like an expensive duplication of efforts that already occur at NOAA and the State Department: let me guess, Bizerkley is now getting in on the climate change gravy train too? No wonder we’re not getting more bang for the buck and going to the moon where the climate never measurably changes.

GregP
February 11, 2011 9:33 am

I already have my doubts about the project based on my reading of their methodology found here:
http://www.berkeleyearth.org/methodology
http://www.berkeleyearth.org/Resources/Berkeley_Earth_Summary.pdf
No discussion on UHIE that I can find.
Also Judith Curry is named as a member of their team.

David
February 11, 2011 9:34 am

Whether they are biased or not, as long as their data, code and methodology are made public, it will be possible to determine the validity of their results, and even (for those that have the time and will) to come up with alternatives that address concerns that might have been missed. That would be a big step forward indeed.

Ian L. McQueen
February 11, 2011 9:36 am

Minor typo:
“I can tell you that this project is partly a reaction and result of what we’ve learned in the surfacesations project” – should be “surfacestations project”.
Moderator: This note may be deleted.
IanM

February 11, 2011 9:36 am

“There are the skeptics – they’re not the consensus,” Muller explained. “There are the exaggerators, like Al Gore and Tom Friedman who tell you things that are not part of the consensus … (which) goes largely off of thermometer records.”
OK, if this is an accurate quote then I’m also very concerned.
“There are the skeptics – they’re not the consensus,” ???
Huh? Is this implying that “the skeptics”, one of which I believe I am, do not accept the “consensus” that the globe has been warming since the end of the LIA?
I believe the majority of skeptics agree with the following (which we’ve all seen from here – http://www.petitionproject.org/ ) :
“There is no convincing scientific evidence that human release of carbon dioxide, methane, or other greenhouse gases is causing or will, in the foreseeable future, cause catastrophic heating of the Earth’s atmosphere and disruption of the Earth’s climate. Moreover, there is substantial scientific evidence that increases in atmospheric carbon dioxide produce many beneficial effects upon the natural plant and animal environments of the Earth.”
We’ve run into one of those define “skeptic” and “warmist” problems again. In my opinion, only a non-skeptic states or implies that skeptics do not believe that the planet has been warming. What we deny is that anthropogenic CO2 emissions are the primary cause; they may not even be a minor contributor.
Good, meaningful global temperature data will show what we skeptics have somewhat consensually agreed – that the planet is warming as it recovers from the LIA and has been doing so in a very natural manner.

February 11, 2011 9:41 am

I have mixed feelings about all this. If only it wasn’t coming out of Berkeley! I’ll be open-minded for now.
Meanwhile, John Holdren, Obama’s Science Adviser, admits that there is a problem with AGW: the problem is that we skeptics need to be “educated.” Sheesh.
http://thetruthpeddler.wordpress.com/2011/02/11/obamas-top-science-advisor-calls-global-warming-skeptics-an-education-problem-unbelievable/

stephen richards
February 11, 2011 9:42 am

Anthony
This would have been a good one to post with comments switched off. There is nothing to see, nothing to say.

Grumpy Old Man
February 11, 2011 9:43 am

Of course, global warming is real. We know this from historical evidence. There are no more frost fairs on the Thames. The question is, ‘how much and what caused it’? An improved set of data may answer the first part, but this absolutely depends on how much we can trust the data. It will do little to bring us to the cause. It may tame the exaggerators, and that will be good. In the end, real science will give us the answer. That the Supreme Court of the US ruled that CO2 is a harmful gas is just unbelievable. How many science degrees do these judges have?
This debate cannot be settled until we have a clear understanding of how climate works and, I suspect, that is many years in the future. Meanwhile an improved dataset can only assist us. (Good luck Anthony.) It might even give a pointer to the next ice age, surely just around the corner. (Keep pumping the CO2, folks – the alarmists might be right, and I’m too darned old to cope with an ice age.)

JEM
February 11, 2011 9:45 am

It’s coming out of Berkeley so one has to be cautious about any expectation that the folks concerned are going into this unbiased, but if they’re willing to fully document and honestly support what they do, then they’ll be making a worthwhile contribution.
I’ve been mulling over for a while just how one would go about creating a database of temperatures, where each entry was not just a value but a complete biography of the data point including time, location, any related imagery, qualitative metrics including confidence intervals, annotations, etc.
Where it gets sticky is being able to assign essentially a GUID to each data point and its associated metadata, so as to be able to track its use through all subsequent aggregations and analyses that rely on that value, and to ensure that any qualitative and annotative metadata automatically propagates through those aggregations and analyses and cannot be short-circuited by, say, someone whose statistical and data-warehousing expertise may be, say, a bit short.
The basic data structures involved are easy, but the analytical and processing side stumbles – if you (for instance) sum a range of values you also have to generate a GUID, data structure, and biographical metadata for the sum as well as the one-to-many relationship to the GUIDs of all the values you just summed.
Obviously, maintenance of all this overhead would be necessary only as a repository for ‘published’ results, but it still strikes me as some distance beyond where we are now.
Now, back to your regularly scheduled reality…
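For what it’s worth, the per-datum “biography” JEM describes is not hard to sketch; the hard part, as he notes, is forcing every downstream aggregation to go through something like the aggregate() helper below so the provenance links and quality flags cannot be short-circuited. This is a minimal illustration in Python; the field names are invented and do not correspond to any existing schema.

import uuid
from dataclasses import dataclass, field

@dataclass
class Datum:
    value: float
    metadata: dict = field(default_factory=dict)   # time, location, imagery, QC flags...
    sources: tuple = ()                            # ids of inputs; empty for raw readings
    id: str = field(default_factory=lambda: str(uuid.uuid4()))

def aggregate(data, func, note):
    # Derive a new Datum, keeping the one-to-many link back to its inputs
    # and propagating any quality flags attached to them.
    flags = sorted({f for d in data for f in d.metadata.get("flags", [])})
    return Datum(value=func([d.value for d in data]),
                 metadata={"operation": note, "flags": flags},
                 sources=tuple(d.id for d in data))

raw = [Datum(21.3, {"station": "A", "flags": ["urban"]}),
       Datum(19.8, {"station": "B", "flags": []})]
monthly = aggregate(raw, lambda v: sum(v) / len(v), "monthly mean")
print(monthly.value, monthly.metadata, len(monthly.sources))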

Baa Humbug
February 11, 2011 9:45 am

I can’t see how using thousands of stations with all the variables involved will give us an accurate indication of where global Ts are going.
Adjustments will need to be made for station moves, UHI effects etc etc.
This will become just another point of back-and-forth arguments.
I would have preferred just using stations with very long unbroken records, such as the Central England record, regardless of how few of these stations there are.
After all, CO2 should be doing its “work” without fear or favour all over the world, in all seasons and during all natural cycles such as ENSO, the PDO and the AMO. If we have just a handful of reliable records of 60 years or longer, that should be enough to give us a good indication of what’s happening to global Ts.
An indication is the best we can hope for with the current measuring systems in place.
p.s. Sharperoo your comments are getting really really tiresome.

sharper00
February 11, 2011 9:48 am

@Richard S Courtney
“Your comments to Anthony Watts at February 11, 2011 at 8:15 am and February 11, 2011 at 8:39 am are offensive in the extreme. You owe him an apology.”
I don’t believe Anthony needs you to be offended on his behalf. No offence was intended and none was apparently taken
“Anthony Watts had told you that “We have a paper in peer review”. In other words, his demonstration of the “problem” is submitted for publication, so he HAS demonstrated a “problem”. “
When the paper is published we’ll see what’s up with that (a little joke there, hope you enjoyed it).
REPLY: Oh, I’m plenty offended, but I tried to be polite. I’ve stuck my neck out, done the work, recruited co-authors, argued the science in reviews, and put my name to my words. You on the other hand, snipe from the shadows, contributing nothing. That’s the real joke, and it’s on you. – Anthony

Elizabeth
February 11, 2011 9:48 am

Having done more reading on the Berkeley site I found: “The Berkeley Earth Surface Temperature study has been organized under the auspices of the non-profit Novim Group.”
Link to the Novim group page, http://www.novim.org/
Scan the page and you see, “Despite efforts to stabilize CO2 concentrations, it is possible that the climate system could respond abruptly with catastrophic consequences” followed by extensive discussion on climate engineering projects.
The integrity of the research team notwithstanding, further inquiry into the nature of Novim Group’s mandate is warranted.

Richard111
February 11, 2011 9:48 am

From this layman’s point of view it only needs one thermometer to prove the global temperature is rising. That thermometer must be placed in a desert far from human habitation and in an open environment. Then simply record the MINIMUM temperature. After a few years of readings it should become apparent if the minimums are increasing, however slowly.

Marc77
February 11, 2011 9:53 am

If I had to do an analysis of surface temperature, I would use the information about wind. If there is a very local UHI, it should be washed away by wind. Also, if the wind is strong enough, the air will pass over a larger region over a certain time, so the temperature should be more representative. I wonder how much effort has been made to understand how to deal with the effect of wind on temperature measurement.
It is great to do the best statistical analyses, but just like computer models, statistics alone don’t go anywhere. I guess for now we have to wait and see what they have done exactly.

Peter Plail
February 11, 2011 9:55 am

Richard S Courtney says:
February 11, 2011 at 9:18 am
sharper00
Well said that man. I agree with every word and am pleased that you (unlike p00) are prepared to stand by your comments with your name, not hide behind an alias.
Sharper00 was accused of being Steig on an earlier thread here. Did I miss the denial?

Jerry from Boston
February 11, 2011 10:06 am

They’re going to use 39,390 temperature stations!! Anthony already had enough trouble compiling and analyzing data on the 1221 U.S. stations out of the 3,000 world-wide stations, detecting where the potential biases are on data collection and analysis, and showing they were low quality stations. Now they’re adding 36,000 new stations?! From flippin’ where? And what’s the quality of the databases they’ll receive? Handwritten records? Electronic (yeah, right!)? Going back how many years? How many station changes, and how well documented are these?
Anthony, don’t get suckered like Delingpole and Monckton did recently by trusting these guys (they are from Berkeley, after all. Can you imagine what would happen to them on campus if they found global warming wasn’t as bad as thought? They’d be drawn-and-quartered and survivors of the purge hounded off campus. Careers over!). My suggestion – grab from their database the data for the stations you’ve inspected during surfacestations and compare them to your database. Maybe it’ll give you info you didn’t have yet. Although I would guess that if they were going to tweak data, they wouldn’t be dumb enough to do it on the most-scrutinized 1,221 U.S. station database subset in the 39,390 station dataset.
Personally, I think this whole Berkeley effort is going to degenerate into farce. If the last 30 years of surface temp records don’t follow the satellite pattern, does that nullify the surface or the satellite data? If the surface temperature record climbs faster than the satellites, are we to “re-calibrate” the satellites to higher levels? If the surface temperatures are lower than the satellites, do we tweak up the surface temperatures to match the “gold standard” satellites? Or the reverse – admit major biases in the surface temperature records and adjust those back 150 years to show lower global warming in 150 years during the surface thermometer records but accept the boost in temperature during the satellite era for the last 30 years?
One thing for sure – if this study shows a big boost in temperature over the last 150 years or simply “confirms” the AGW global warming rise, it’ll be front page in the MSM for weeks.
I think this will be a mess.

February 11, 2011 10:09 am

Looking at the methodology described at the Berkeley site, they do not even mention systematic error. Systematic error inevitably contaminates the surface temperature record. If they neglect to discuss or evaluate that error, they’ll end up producing yet another centennial global temperature anomaly trend with plenty of statistical moxie but with no physical meaning.
There is no way to remove systematic error from an old data set. One can only estimate how large it might have been, and put conservative uncertainty bars on the result.
In either case — ignoring the systematic error in the 20th century record, or adding an estimated uncertainty width — there’s no doubt but that the global (or any local, for that matter) centennial temperature anomaly trend will be no better than almost entirely spurious.
If the Berkeley group ignores that empirical truth, their product will only kick up more controversy and be yet one more entry in the obscure the issue sweepstakes.
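The distinction Pat Frank draws between statistical and systematic error can be shown with a toy calculation; all numbers below are invented for illustration. Per-station random error averages down as stations are added, but a shared systematic bias does not, so it sets a floor under the uncertainty no matter how many stations go into the mean.

import numpy as np

rng = np.random.default_rng(0)
true_anomaly = 0.5     # hypothetical "true" value, deg C
sigma_random = 0.8     # per-station random error, deg C
systematic_bias = 0.2  # shared bias (siting, instrument changes...), deg C

for n in (10, 100, 10000):
    readings = true_anomaly + systematic_bias + rng.normal(0.0, sigma_random, n)
    stat_unc = sigma_random / np.sqrt(n)               # shrinks as stations are added
    total_unc = np.hypot(stat_unc, systematic_bias)    # quadrature; the bias floor remains
    print(f"N={n:6d}  mean={readings.mean():+.3f}  statistical +/-{stat_unc:.3f}  total +/-{total_unc:.3f}")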

Perry
February 11, 2011 10:15 am

stephen richards says:
February 11, 2011 at 9:42 am
Anthony
This would have been a good one to post with comments switched off. There is nothing to see, nothing to say.
…………………………………………….
I second that motion and in addition, call upon sharper00 to moderate his/her language. The report will be released in due time. Further speculation is not required.

Dave Springer
February 11, 2011 10:20 am

UC Berkeley is well known for their politically unbiased faculty rather than the decidedly libtard demolib bias amongst the faculty of most other institutions of higher learning. Finally we can be assured of getting the truth. /sarc

JEM
February 11, 2011 10:22 am

Pat Frank – and that’s where the work is going to be needed in terms of validating whatever result they come up with.
As a start, extracting slices of their data based on probable quality indicators (stability of location, population density/UHI, etc.) and comparing the trends slice by slice.
There will be argument…

richard verney
February 11, 2011 10:22 am

My initial reaction was that this is a good development, but it appears likely that this will be a missed opportunity to examine matters from scratch.
Personally, I consider the idea of a global average temperature set to be absurd. It would be more sensible to have a data set dealing with each country individually, on a country-by-country basis. After all, every country will be affected differently by rising temperatures, and the global distribution of rising temperatures may point to a cause behind the temperature rise which may be lost, or not apparent, when looking at data globally.
Further, it would be sensible to compile such a data set based only upon good quality raw data that requires no obvious adjustment, i.e., only class 1 station data, preferably only class 1 rural data. This might mean fewer stations, but sometimes less is more. A few accurate and uncorrupted stations may better tell what is truly going on.
Of course, however they compile the data set, it should be compiled in such a way that one can do an analysis on sub-data, i.e., select only the rural data set, or only the urban data set, or a combination of both. Similarly, only the class 1 station data set, only the class 2 station data set, only the class 3 station data set etc., and a combination of all of these.
Compiling the data set in this manner will help analyse what is going on.

February 11, 2011 10:24 am

Sharperoo, “It seems unreasonable to me to criticise them for not fixing a problem you (or anyone else) have yet to demonstrate.”
See Anthony’s post on “Surface temperature uncertainty quantified,” here. You can download a free reprint of the paper here, (pdf) courtesy of the publisher.
When you figure out how to get around the systematic uncertainty in the surface air temperature record of the last 150 years, do let us know.

Jerry
February 11, 2011 10:27 am

Would someone please explain to me where he found the 39,000 stations? Sure, having more stations has got to be much better than the shoddy and manipulated GHCN data we have now, but we still need quality control. What percentage of all these stations are properly sited, etc.?

Ged
February 11, 2011 10:33 am

“Note: since there’s been some concern in comments, I’m adding this: Here’s the thing, the final output isn’t known yet. There’s been no “peeking” at the answer, mainly due to a desire not to let preliminary results bias the method. It may very well turn out to agree with the NOAA surface temperature record, or it may diverge positive or negative. We just don’t know yet.”
This is science! This is one of the most BASIC tenets of statistics! I hope people will pay greater attention to this seemingly innocuous statement, for it is part of the very core of all valid data gathering, processing, and interpretation.
Bravo for realizing and upholding this staple of the scientific method.

Jerry from Boston
February 11, 2011 10:40 am

“RickA says:
February 11, 2011 at 8:09 am
Personally, I think we should also be setting up new automated stations to obtain more uniform coverage, which we can use to build 30-60 years worth of high quality new data.”
“Alan Clark says:
February 11, 2011 at 9:17 am
I agree completely Rick. Individuals have the ability today to have a home weather station feeding data into their home computers that could be feeding data to a commingled database elsewhere, freely accessible by all. Ten years from now and beyond, we would have some serious raw data that would be very useful in establishing trends and verifying current projections.”
NASA has already done that. Starting a few years ago, they’ve finished putting about 114 stations around the U.S. (a few doubled up for quality control), spaced pretty equidistantly around the country, fully automated and in areas not subject to UHI, trees, whatever. Class 1 quality at each site (though their fencing looks a little weird, but that’s just me.) Their website says IIRC that they want to collect about 30-50 years of data so they can detect reliable long term trends in the U.S. I suspect that initial contacts by Anthony was a motivating factor.
I’ll look for the link.

Editor
February 11, 2011 10:51 am

sharper00 says:
February 11, 2011 at 8:39 am


To go back to your initial complaint about NASA and NOAA – the publication of your analysis showing there’s a problem for them to be concerned about is the starting point to them addressing it. It seems unreasonable to me to criticise them for not fixing a problem you (or anyone else) have yet to demonstrate.

Anthony has very clearly demonstrated huge problems in the sitings, spacing from buildings, nature of the surroundings, and other important issues affecting many, many, many temperature stations. Nor was he the first, Roger Pielke (IIRC) demonstrated the same thing for Colorado stations a few years before. Note that these are problems according to the official guidelines for surface stations, not just some issues that Anthony or Roger made up.
Now, if NOAA and NASA could pull their heads out of their fundamental orifices and put down their models and look out the window at the real world, surely those well-documented problems with their data collection apparatus would be a, what did you call it …. oh, yes, a “starting point to them addressing it”.
And in fact, in any well-run operation, the starting point would have been the NOAA/NASA internal evaluation of the siting of their ground stations. For Anthony to have to document the ground stations is a ringing indictment of the people that you so vigorously defend. They have obviously not done their jobs. Why do you defend that?
And for you to attack him with the fatuous claim that he has not provided anything for NOAA/NASA to use as a starting point is, to put it mildly, evidence of a serious misunderstanding on your part … Anthony has done their job for them, and deserves your thanks, not your opprobrium.
w.

Jerry from Boston
February 11, 2011 10:51 am

Sorry, folks. The new stations were put in place by NOAA, not NASA, at its Climate Reference Network:
http://www.ncdc.noaa.gov/crn/#
And they’re looking for 50 years of data, not 30. Can’t wait!

GaryP
February 11, 2011 10:56 am

An open source for data and methods is to be applauded.
There still remains the problem that one cannot average intensive variables!
I have 200 ml of water at 80 °C and a liter of water at 10°C. What is the average volume? First I add to get the total: 200 + 1000 = 1200 ml. This is okay, it is the total volume. Then I divide by two to get the average of 600 ml.
What is the average temperature? First I add to get the total. 80 + 10 = 90°C. This is the total temperature????? WattsUpWithThat!?! This is meaningless.
Now if both samples were exactly the same size and composition, then the total heat in each one could be measured by knowing the temperature. One could calculate the average heat in each one and compute the temperature if they were mixed together. Mathematically it would look like averaging the temperatures. This only works for exactly the same size and composition and no phase changes, volume changes, etc.
One cannot claim a 5°C change in bone dry Arctic air with a dew point of -50°C is the same as a 5°C change in tropical air with a dew point of +80°C. The energy change per unit mass is different and averaging the temperature changes is invalid. Don’t even think about the energy change per unit volume. Mixing identical volumes of incompressible fluids is one thing. Mixing volumes of air at different densities is different. Do you mix them reversibly or irreversibly? It makes a difference.
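GaryP’s two-beaker example, worked in a few lines of Python (equal density and specific heat are assumed, i.e. pure liquid water and no phase change):

volumes_ml = [200.0, 1000.0]
temps_c = [80.0, 10.0]

naive_average = sum(temps_c) / len(temps_c)     # 45.0 C -- the meaningless "total / 2"
mix_temp = sum(v * t for v, t in zip(volumes_ml, temps_c)) / sum(volumes_ml)

print(f"naive average of the two readings : {naive_average:.1f} C")
print(f"temperature after actually mixing : {mix_temp:.1f} C")   # about 21.7 C

The naive average of the readings is 45 C, while the mixture would actually settle near 21.7 C, because the larger, colder volume carries most of the heat capacity. For air the problem is worse still, since humidity and density vary from place to place.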

February 11, 2011 11:05 am

dp says:
February 11, 2011 at 8:18 am
On the face of it this is not what I expect from the denizens of my old home town. Be prepared to learn that any 1000 randomly chosen thermometers selected from the full set and calibrated over time tell the same story as all 39,000 thermometers similarly calibrated.
#####
Be prepared to learn that any 100 randomly chosen tell the same story.
heck, pick the 10 longest records and you get the same story.
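The subsampling test dp and Steven Mosher describe is straightforward to sketch. The toy simulation below simply assumes every station records a common signal plus independent noise, which is precisely the premise under dispute, so it illustrates the test rather than settles it; the array anomalies stands in for real station data.

import numpy as np

rng = np.random.default_rng(42)
n_stations, n_years = 39000, 50
years = np.arange(n_years)
true_trend = 0.015                                   # deg C per year, made up
anomalies = (true_trend * years                      # common signal
             + rng.normal(0.0, 0.5, (n_stations, n_years)))  # independent station noise

def trend(series):
    return np.polyfit(years, series, 1)[0]           # slope of a straight-line fit

full_trend = trend(anomalies.mean(axis=0))
for k in (10, 100, 1000):
    subset = rng.choice(n_stations, size=k, replace=False)
    sub_trend = trend(anomalies[subset].mean(axis=0))
    print(f"{k:5d} stations: trend {sub_trend:+.4f} C/yr (full network {full_trend:+.4f})")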

Oslo
February 11, 2011 11:06 am

GregP:
“Also Judith Curry is named as a member of their team.”
I don’t think this is entirely true.
I think she (JC) is genuinely trying to balance things. Of course she doesn’t subscribe to the deception and corruption of Mann and Schmidt, but still, she does not oppose their view of the coming catastrophe.
For whatever reason. Keep in mind that JC’s coming to fame was with Hurricane Katrina.
She has a chance now to be a genuine broker between the two factions, or to disappear as just another policy-driven advocate.
It is up to her.

red432
February 11, 2011 11:09 am

Can’t get past this paragraph:
“””
In fact, in the period where rising temperatures can be attributed to human activity, the temperature has only risen a little more than half a degree Celsius, and sea levels, which are directly affected by the temperature, have increased by eight inches.
“””
What is the basis of that claim? I thought tide markers established in the early 1800s still held true.

February 11, 2011 11:13 am

Jerry says:
February 11, 2011 at 10:27 am
Would someone please explain to me where he found the 39,000 stations? Sure, having more stations has got to be much better than the shoddy and manipulated GHCN data we have now, but we still need quality control. What percentage of all these stations are properly sited, etc.?
##########
Search back through comments I’ve made over the years and you’ll find the links
to many of the sources. The nice thing about most of the data is that it is raw and unhomogenized. No adjustments.
You get the same results using this expanded data set as you do with GHCN.
As for siting bias? well, we have one field study showing the magnitude.

Mark T
February 11, 2011 11:14 am

GaryP: the Wikipedia article on the subject even describes how you average temperatures. Amazing we still have to argue such a basic premise.
Mark

Dr A Burns
February 11, 2011 11:19 am

Expect the same results as CRU … plenty of adjustments but no UHI corrections:
http://www.berkeleyearth.org/methodology

David Davidovics
February 11, 2011 11:20 am

“REPLY: The word “deniers” was added by the reporter. And global warming “is” real. We expect some warming; my view is that it is exaggerated for political purposes. The key is to find out what the true signal is. – Anthony”

Ah, ok then. I understood that the opinion was taken from the scientist. At any rate, I am glad to see you offered the chance to contribute.
I tried mentioning your “surfacestations.org” website in an editorial I submitted a while ago, and it never saw the light of day. My article “a response from a climate skeptic” did, however, make it through a few months earlier (which is what made me think the paper might be open to more alternative views). It’s too bad that people are giving you crap over the surface stations project, because I feel it’s much closer to true science than most of the big budget productions out there today.
As for global warming being real, I agree with that too. However, the meaning behind the words can be very different depending on the context, which is what made me wary.

Mike
February 11, 2011 11:21 am

Muller: ““There are the skeptics – they’re not the consensus,” Muller explained. “There are the exaggerators, like Al Gore and Tom Friedman who tell you things that are not part of the consensus … (which) goes largely off of thermometer records.””
Muller is likely a good physicist, but he may have his own political issues. No one is restricted to only repeat what there is a consensus on. It is reasonable to suggest that the flooding in Australia may have been related to AGW. Obviously there is not a consensus on an event that just happened. It is not reasonable to say the flooding there is definitely linked to AGW. Sometimes people only hear in black and white and just ignore the caveats.
“Muller came to the conclusion that temperature data … was the only truly scientifically accurate way of studying global warming. … Without the thermometer and the temperature data that it provides, Muller said it was probable that no one would have noticed global warming yet.”
If this is what Muller thinks, he is wrong. The sea ice and glacier changes would be noticed even if the thermometer had never been invented. So too with the many ecological changes. It is hard to ignore the crabs in Antarctica. As for extreme weather, if concern for AGW did not exist it is unlikely the recent extreme weather events would cause us to suspect AGW – so I would agree with him there. Perhaps that was all he intended. But, since we do know AGW exists and will likely impact weather at some point, you are not going to stop people from looking for a connection.

Bigdinny
February 11, 2011 11:33 am

I have been lurking here and a few other sites for several weeks now, trying to get my arms around this AGW issue, and like 95% (my supposition) of the people without any science background, I remain profoundly confused. I read here regularly, and elsewhere, that the earth is warming. I also read here, and elsewhere, that for the last 12 years the earth’s temperature has been stable or cooling. It seems to me that these do not have to be essay questions so I will phrase them simply:
Is the earth warming?
Is the earth cooling?
Is the earth currently warming but at a decreasing rate from before 1998?
I am impressed by the breadth and depth of knowledge I find here, and feel somewhat intimidated by it when I am posting such simple questions. Can anyone enlighten me? Is it remotely possible that this Berkeley study could?

George E. Smith
February 11, 2011 11:35 am

“”””” Jack Maloney says:
February 11, 2011 at 8:12 am
“We just create a methodology that will then have no human interaction to pick or choose data.”
In creating a methodology, human interaction to pick or choose data is inevitable. One can only hope that the BEST Study methodology is more transparent than the current ones. And that its authors are more open to constructive criticism “””””
No need; Mother Gaia already took care of that; and both the weather and climate are now exactly the way she said they should be. Problem solved.

February 11, 2011 11:36 am

Berkeley. Nuff said.

February 11, 2011 11:40 am

Mike: “But, since we do know AGW exists…”
Coming after Demetris Koutsoyiannis’ work, among others, showing that GCMs are completely unreliable, that people can still write something like that shows a complete lack of understanding of the source of meaning in science.
Here’s the strictly scientific view on the cause of recent climate warming: no one knows.
Here’s the strictly scientific view of the effect on climate of recent rise in atmospheric CO2: no one knows.
In all the hoopla about AGW, no one knows what they’re talking about. No one.

Mac the Knife
February 11, 2011 11:44 am

Built in bias, at least by the Daily Cal Senior Staff Writer Claire Perlman.
“The Berkeley Earth Surface Temperature Study was conducted with the intention of becoming the new, irrefutable consensus, simply by providing the most complete set of historical and modern temperature data yet made publicly available, so deniers and exaggerators alike can see the numbers.”
Ms. Perlman,
Let’s have a look at the data, see the various analyses that derive from it, and have a good old fashioned debate about all of it, before we speak of “irrefutable consensus” and start calling people “deniers”.
Anthony,
Thank You and Many Thanks to your collaborators! We all look forward to a more reliable database. Is there anything an average Joe or Josephine might do to assist this?
MtK

Ged
February 11, 2011 11:45 am

@Bigdinny
The answer to your questions depends completely on the time scale you use. There is no one answer.
If you talk about the past 3 months, it’s cooling. The past 3 years, it’s warming. The past 12 years it’s cooling. The past 30 years it’s warming. The past 1,000 years, it’s slightly cooling, or rather heading back up to “normal” on that baseline.
That’s the problem with the whole thing, the sense of scale. If you look back 10,000 years, we are in a warm, interglacial period coming out of an ice age, and nothing about this period is warmer or more unusual than any other interglacial period. We’re about average in both the temperature and the length of this period so far. So it could get warmer. It could last longer. Or things could suddenly get a whole lot colder for a few thousand years.
Saying Man has anything to do with the signal is a difficult assertion. But that is what they are trying to do by looking at the sudden upward spike that was 1998. That is, the warming of the past 30 years, which coincides with the fact that we now have satellite data and global temperature coverage reaching only about that far back. We have a very short memory and experience with climate, so sudden changes spook us, and we think maybe we are at fault, maybe we did something wrong.
That’s what the scientific discussion is about: is Mankind having an effect, and to what degree, by adding (so it is said) 3% more CO2 to the air per year than purely natural sources would have?
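To put a number on the scale problem, here is a toy sketch (Python, with purely invented numbers, not any real temperature series) showing how the fitted trend can change size and even sign depending on the window you pick:

import numpy as np

rng = np.random.default_rng(3)

# Synthetic monthly anomalies: a small long-term trend buried in a lot
# of short-term noise. All values are invented for illustration only.
months = 12 * 100
t = np.arange(months) / 12.0                     # time in years
anom = 0.007 * t + rng.normal(0, 0.15, months)   # ~0.07 C/decade plus noise

for window_years in (0.25, 3, 12, 30, 100):
    n = int(window_years * 12)
    slope = np.polyfit(t[-n:], anom[-n:], 1)[0]
    print(f"trend over last {window_years:>6} yr: {slope * 10:+.2f} C/decade")

Short windows are dominated by the noise; only the long windows recover the small underlying trend.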

FrankK
February 11, 2011 11:54 am

I wish those who would like to tread the middle zone would not use statements like:
“There are the skeptics – they’re not the consensus,” Muller explained. “There are the exaggerators, like Al Gore and Tom Friedman who tell you things that are not part of the consensus … (which) goes largely off of thermometer records.”
First, science is not about consensus. Copernicus was a skeptic and not part of the 1,000-year “consensus” of geocentricity. It was those who were part of the “consensus” who turned out to be totally wrong.
Not a good start in my book.

February 11, 2011 11:57 am

Why is having 39,000 stations with bad data an improvement? I accept as a given that physical location, instrumentation and time of observation have introduced errors into the existing data. I do not accept the premise that more observations statistically reduce the error. I would much rather see random validation of existing stations by putting up 3 stations near an existing station but in locations that meet the NOAA siting criteria. Data could be taken for a few months to a year and compared against the existing station. For just over $100 I got a low end weather station that measures inside and outside temperature, wind speed & direction, dew point and rainfall and has a wireless console that stores data for several weeks. I would hope that for a few hundred dollars one could get a highly accurate instrument that would record and store temperature only. In all of the discussions of the problems with siting of the stations, I have not seen any description of collection of empirical data.
Anthony: Looking forward to your paper. Why is it taking so long, when Menne was able to take some of your preliminary data and get his paper out so quickly?

Doug in Seattle
February 11, 2011 12:01 pm

Overall, and ignoring the “denier” labels being added by the reporter, I think the idea is good. I look forward to seeing the product.

jim rosart
February 11, 2011 12:03 pm

I have been familiar with Muller’s websites for a couple of years. He publishes the ice core data that screams “no unusual warming”, and yet he toes the global warming party line. He does not fudge the data and he has my respect.

February 11, 2011 12:08 pm

I’m a bit concerned about the handling of UHI, but I’ll reserve judgement for now, at least (especially since you said you had a good deal of input).
On a similar note, though, I would be quite interested in seeing a series of comparative graphs for each station, with annotations as to their location (urban, rural) and noting when changes were made to said stations (i.e. replaced equipment, moved, etc.).
Is there anything of this nature available, or in the works?
Or, is it something that could possibly be worked on in conjunction with your surfacestations project? (In which case I would be interested in helping)

Roger Knights
February 11, 2011 12:09 pm

Roger Knights says:
February 11, 2011 at 9:13 am

Cold Englishman says:
February 11, 2011 at 8:35 am
How about starting with a little less inflammatory language viz. deniers deniers deniers.

I propose Scam Scoffers.

Better yet, Scorcher-Scam Scoffers.

upcountrywater
February 11, 2011 12:09 pm

Hey Sharperoo,
Do yourself a favor.
Find graphs that plot loss of thermometers to rise in temperatures.
Find satellite “data loss” areas on Earth that correspond to surface temp data loss.
Find record low temps in heat island city(s)..
Most of that has been posted on this site in the last month.
The more data points the better.
If temp history is your thing, then record data from the same locations for a long, long time, don’t move the thermometers, and don’t remove them.

Jeff
February 11, 2011 12:11 pm

No warmists or skeptics should be involved, period … data gathering and data evaluation should be completely independent … until then it is just another exercise in bias …
this should be independently staffed …

Jeff
February 11, 2011 12:12 pm

you don’t need a statistician to gather data … you don’t need a climatologist either …

Robuk
February 11, 2011 12:18 pm

Another decision to be made was whether to include data from cities, which are known to be warmer than suburbs and rural areas, said team member Art Rosenfeld, a professor emeritus of physics at UC Berkeley and former California Energy Commissioner.
“One of the problems in sorting out lots of weather stations is do you drop the data from urban centers, or do you down-weight the data,” he said. “That’s sort of the main physical question.”
Drop the urban stations; that is the obvious choice if you want a clean, unbiased study. How will they down-weight them? Deduct 0.05 C from the result?

An Inquirer
February 11, 2011 12:24 pm

A wise person once said, “Trust, but verify!”
The idea sounds great, but from what Dr. Muller has revealed so far, I am not optimistic. Perhaps the best news is the promise that the code and data sources will be open so that we can see what is done with UHI concerns, siting issues, TOB, missing data, station moves, homogenization, and extrapolation.

February 11, 2011 12:27 pm

I’m so tired of this “AVERAGE TEMPERATURE” Crap. It is MEANINGLESS!
I’ve been thinking about it and realizing the ONLY thing that matters is “length of growing seasons”. That’s determined by first frost and last frost.
I’ve NEVER heard anyone comment on that.
And what else matters? CLOUD COVER! As once it is below 32 F, the only losses from ice/snow are due to SUBLIMATION in the sun. (Sublimation under cloud cover = virtually zero.)
It is the CLASSIC confusion of energy versus temperature. What we are concerned about is AVERAGE ATMOSPHERIC ENERGY.
Another problem with the “historic record” based on “hand recorded” results is you haven’t the FOGGIEST idea of when the results were recorded!!!!
Are they the REAL high and low? Does this guy understand that the DEW line and BMEWS people regularly LIED about the low value to confuse the Soviets? (I.e., their winter values are worthless!)
Chasing a ghost. Of course SOMEONE is paying them.
“A fool and his money…”
Max

Tom B
February 11, 2011 12:37 pm

“so deniers and exaggerators alike can see the numbers.”
This is from Berkeley, so there should be no surprise at the slant. But letting them constantly get away with using the pejorative “denier” label is just like letting them get away with the “tea bagger” label. They’ll keep using it until it sticks. Hey? If we use the “N” word enough, will that make it meaningless?

Mac the Knife
February 11, 2011 12:39 pm

Bigdinny says:
BD,
Most of us started as ‘lurkers’ at WUWT and other sites. The debate is more civil and reasoned here, so we ‘come out of the AGW closet’ here. Welcome, seeker of more knowledge!
1. Is the planet warming? Yes, and it has been (with fits and starts) since the end of the last glacial epoch about 10,000 years ago. The warming did not progress uniformly. There were shorter periods of cooling and warming within that 10,000-year period.
2. Is the planet cooling? Within the general warming trend of the last 10,000 years, there have been notable cooling periods. The most often named is “The Little Ice Age” that started somewhere around 1400 AD and may have ended as late as 1915 AD. Most of the Anthropogenic Global Warming debate centers on the shorter warming trend since then.
3. Is the earth currently warming but at a decreasing rate from before 1998? I’ll have to let others respond. My lunch break is inadequate!
Gotta go,
MtK

Robuk
February 11, 2011 12:39 pm

“One of the problems in sorting out lots of weather stations is do you drop the data from urban centers, or do you down-weight the data,” he said. “That’s sort of the main physical question.”
Do two studies side by side, one rural, one urban; they won’t do this because they know what the results will show.

Editor
February 11, 2011 12:42 pm

As Ken G (and others) have said, this does not feel right. “They’re already making value judgments on others’ positions and it sounds like they are starting off with an expected conclusion before they’ve even begun. I’m betting they will find exactly what they are looking for, and that it will fall right in line with this “consensus” view they mention….and we will be nowhere closer to the truth.”
The comments in the article give me little hope this will be an unbiased approach.

There’s been no “peeking” at the answer, mainly due to a desire not to let preliminary results bias the method.
in the period where rising temperatures can be attributed to human activity, the temperature has only risen a little more than half a degree Celsius
Global warming is real
Hmmmmm…
Anthony – you describe it as “Good News“. I truly hope that you are right, but belief is pending.

Alexander K
February 11, 2011 12:47 pm

This new enterprise looks good and I wish them well, BUT (there is always a but) when is the network of surface stations going to be built, or existing stations upgraded, that conforms to the standards already laid down? To my admittedly simple and unscientific thinking, I have always been a great believer in starting at the beginning; therefore a high-quality surface stations network should be the first priority in such an enterprise. Messing about with statistics again, but with more existing stations included, seems a very complicated way of achieving an unbiased measurement on an on-going basis. To simplify the matter again, a broken ruler is damned hard to get meaningful measurements from.

Jer
February 11, 2011 12:49 pm

I’m sorry, I have not read all the comments so someone may have pointed this out. Has anyone looked into the organization behind this project, http://www.novim.org/ ? I have not had time to study up on them but it seems from a somewhat quick review of their web site and their participants that they are not exactly neutral observers. I do not want to falsely cast aspersions on what could be a very worthwhile project, but on the other hand I am a bit leery here of this based upon what I have found so far…just saying
Jer

paulo
February 11, 2011 1:08 pm

http://www.guardian.co.uk/environment/2011/feb/11/the-heretic-climate-change-review
A review of a play called The Heretic
As Diane’s relationships with her anorexic Greenpeace daughter Phoebe and her tweedily corrupt professor deteriorate, her scepticism turns to hectoring. “Green is proxy for anything. Class war. Hate your dad. Hate America. It’s the perfect religion for the narcissistic age.”
Her public profile grows and she tells Jeremy Paxman on Newsnight that “the real global warming disaster is that a small cohort of hippies who went into climate science because they could get paid for spending all day on the beach smoking joints have suddenly become the most powerful people in the world”.
———————-
I think we’re beginning to win this argument – as ‘art’ often leads the way in detecting new trends.

Doug Proctor
February 11, 2011 1:27 pm

The warmist camp says that the current datasets of GISTemp and HadCruT are good enough. Hansen says that satellite data does not need to be nor should it be added into the land temperature data as it does not measure the same thing (nor, I suppose, show the same trends that exist in the land data). ARGO data is not used for … I’m not sure why, but it is also not good enough to be considered. In other words, all is good and there is no reason to re-do the historical data.
This 39,000 station data review will take a long time and, as far as the warmist camp is concerned, is unneeded and irrelevant. Right now New Zealand has had its NIWA data more-or-less confirmed by the Australian BOM. So, the reasoning would go, why are you doing this?
If you/we want the new analysis to be considered worth paying attention to, something small and clear must be done that can incontrovertibly be held up as a challenge in court, in Congress or on the cover of Time magazine. If, as the New Zealand Science Coalition says, the NIWA data is as bad as it appears, and New Zealand is not warming at 0.9K since 1988 (or whenever) and is not the fastest warming country on the planet, then New Zealand is the place for a first attempt at record repair. A first-area comparison that is devastating to the New Zealand prior history will make further work relevant. (Australia and Canada would be second and third on the list, I’d suggest, followed by the continental USA with its UHIE problem.)
We have been led to believe that skeptics have solid data that conflicts with the Hansen-Gore CAGW meme. Can we not focus somewhere and bring that out now, before the EPA and others get further into our homes? Can we not say, “This is wrong! and tomorrow I’ll be showing you how the rest of it isn’t right, either.”
If the data is bad globally, it must be bad regionally. An initial small region showing how manipulated the public has been would be more than a shot across the bows. For both sides.

Paul Deacon
February 11, 2011 1:30 pm

Anthony – like you, I am happy to judge the project on its own merits when it meets the light of day.
The NOVIM group, private sponsors of this study, seem (to me anyway) to be rather odd. They use Alarmist Warmist language, and seem to be interested mainly in things like rapid geo-engineering solutions to “climate change”. On the face of it, this suggests that they may have a financial interest in the outcome of related research. Their people seem young (e.g. PhD in 2005).
May I suggest it would be wise to find out who are the private funders of NOVIM? Other readers may be able to help.
All the best.

GregP
February 11, 2011 1:35 pm

For those who might be interested in R. Muller, he speaks about AGW and climategate here http://www.youtube.com/watch?v=U5m6KzDnv7k
Climategate @ 6:03 mark.
Seems like a pretty upstanding guy to me, although he also appears to place too much confidence in GCMs for my liking but, hey, nobody’s perfect. 😉
I’m looking forward to their report.
Also, look at the team membership here: http://www.berkeleyearth.org/aboutus
7 physicists, 2 statisticians and 1 climatologist (Judith Curry). Encouraging.
Regarding funding, their donors are listed here: http://www.berkeleyearth.org/donors Draw your own conclusions.

GregO
February 11, 2011 1:40 pm

Jer 12:49
Thanks for the link – great breathless prose to be found there. Pure Sci-fi as far as I can see. CAGW is a given with the need for crazy geo-engineered “solutions”. These guys are scary.

Ricard Wakefield
February 11, 2011 1:41 pm

I sure hope this data will be online, downloadable.
Suggestion: it would be nice to see how many records, and the range of data, for each station before one downloads it; that way we don’t have to hunt for stations with long, good records.

kwik
February 11, 2011 1:41 pm

Oslo says:
February 11, 2011 at 9:33 am
You could be right. It’s a bit depressing though.

LearDog
February 11, 2011 1:44 pm

Well I sure do wish them success – and hope that in their process of checking for empirical homogeneity they consider other factors – siting issues ala surfacestations.org, lat-long precision (Mosher), existing geographic variation, humidity, climate bands, etc.
They also might need (at some point) to conduct analysis in a map-based framework (GIS). What is homogeneous at one large scale might reveal important variation at quite another.
Good luck with this Anthony. What an impact you’ve had.

Ricard Wakefield
February 11, 2011 1:45 pm

Actually, what would be better would be the ability to send SQL statements to the database and get records back.
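For what it’s worth, here is a minimal sketch of the kind of query interface being asked for, assuming a hypothetical SQLite file with stations and readings tables (the file name and schema are invented here, not anything Berkeley Earth has published):

import sqlite3

# Hypothetical schema, for illustration only:
#   stations(id, name, lat, lon)
#   readings(station_id, date, tavg)
conn = sqlite3.connect("best_example.db")
cur = conn.cursor()

# Find stations with long records (50+ years of daily data) and show the
# record count and date range before downloading anything.
cur.execute("""
    SELECT s.id, s.name, COUNT(r.date) AS n_obs,
           MIN(r.date) AS first_obs, MAX(r.date) AS last_obs
    FROM stations s
    JOIN readings r ON r.station_id = s.id
    GROUP BY s.id, s.name
    HAVING COUNT(r.date) > 50 * 365
    ORDER BY n_obs DESC
""")
for row in cur.fetchall():
    print(row)
conn.close()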

Stephen Brown
February 11, 2011 1:54 pm

“Prejudice” a. An adverse judgement or opinion formed beforehand or without knowledge or examination of the facts.
b. A preconceived preference or idea.
People, let’s see what this program produces, let’s see the raw data, the methodologies of data interpretation etc. before we rush into giving judgemental opinions.
Is this not what Anthony, McIntyre et al have been working for all this time? Let the group publish their findings, along with all of the associated data, and then let us form some sort of conclusion.
Remember, what is being published in this post is written by Claire Perlman
Daily Cal Senior Staff Writer, not anyone from the group involved in this project.
The proof of the pudding is in the eating. Shall we wait for the pudding to be served up before giving our opinions as to its quality?

February 11, 2011 1:56 pm

What’s the big deal? Just because the data shows warming over the past 100+ years says nothing about the cause. This is a zero sum game at best.

P. Solar
February 11, 2011 1:57 pm

Patrick Davis says:
February 11, 2011 at 8:24 am
I call this BS.
“The Berkeley Earth Surface Temperature Study was conducted with the intention of becoming the new, irrefutable consensus, simply by providing the most complete set of historical and modern temperature data yet made publicly available, so deniers and exaggerators alike can see the numbers.”
Is this just US based? Or global? How is this possible considering the cat ate the raw, real, unadjusted, data? Also the use of the word “deniers” along side “exaggerators”. To me this seems their “program” is set firmly in the AGW, alarmist, camp.
=============
Hmm, yes, I caught the “deniers” language giveaway too. Neither did I like the “irrefutable consensus” bit.
As has been said many times, science does not work on consensus, that’s a political term. Neither can any such global average from a huge stack of crap quality data, never intended for climate, be considered irrefutable.
When crap from tens of thousands of homes gets churned up and homogenised at the local sewage treatment utility, you still end up with a tank full of shit. The only consensus is that it’s irrefutably smelly.
I do not find anything in their methodology blurb that says how they will deal with UHI, airport temperatures and sub-standard weather stations, but the comment in this article about “down weighting” is both vague and worrying. Down-weighted bad data is still bad data.
So it seems the object is to water down the UHI by some unspecified factor and hope we’ll accept the ensuing warming as “irrefutable”. Does not sound too promising.
If they want to call it BEST, I hope they can live up to it. Though “best” is only relative. Seeing the opposition, they are not setting the bar too high in trying to do better.

P. Solar
February 11, 2011 2:23 pm

GregP says:
February 11, 2011 at 1:35 pm
For those who might be interested in R. Muller, he speaks about AGW and climategate here http://www.youtube.com/watch?v=U5m6KzDnv7k
========
Thanks for the link. This man looks like he knows what science means and has the integrity to be disgusted by “hide the decline” and whitewash (non) enquiries.
If the rest of the team are of the same calibre as him and Dr. Curry we may finally see some climate science here.

Dave Andrews
February 11, 2011 2:23 pm

Steven Mosher Feb11 11.05am,
Are you really saying that 10 stations with long histories can adequately tell you how the temperature of the whole earth has progressed, rather than just point to the fact that there has been some warming, which of course is to be expected as we come out of the LIA?

Ken Lydell
February 11, 2011 2:28 pm

First, it is good that a group of competent scientists — including statisticians –are attempting to be honest brokers in retailing ground station measurements. Second, the process involved is entirely transparent. There is absolutely nothing to whine about here. Please move along and find something else to bitch about.

DocMartyn
February 11, 2011 2:31 pm

Why not do a positive control? Deliberately pick sites that fail the criteria, inside growing cities, airports and the like, and see what contamination looks like, with respect to Tmax/Tmin and changes in the yearly temperature warming/cooling rates.

Helen Armstrong
February 11, 2011 2:35 pm

DJ says:
February 11, 2011 at 8:34 am
Sorry, but I think we’re off on the wrong foot already, if the article itself isn’t biased towards AGW.
The statements ” the skeptics – they’re not the consensus”, “..intention of becoming the new, irrefutable consensus”, and “not as open to actual manipulation..” Not AS open?? But still open to manipulation, and by whom?”
My thoughts – when I read the word “denier” and the ‘new consensus’ I have to ask: just when did science become consensus? And when did skepticism become not science based? It is a bit like the ‘null hypothesis’ advanced by Mr Trenberth; it doesn’t make sense.
Either the good gentleman has a bit of a problem with language and communication, or we are on a hiding to nowhere. No can trust. Be careful, Anthony, that you are not dragged by implication of association into some quagmire of maybe not deceit, but distortion.
Like was said above, test it on knowns such as NZ first.

Ken Lydell
February 11, 2011 2:38 pm

One reporting station at San Francisco airport is used to characterize all weather in the Bay Area for the GISTemp data set. Anyone who has lived for any length of time in that area knows that this is preposterous. That station cannot reliably measure conditions anywhere in San Francisco. It can be raining in the Avenues and remain sunny in the Mission District. In just 25 square miles you can choose your climate by choosing your neighborhood. As GISS uses only four thermometers to deduce weather everywhere in California, this problem scales up. Forget statistical sampling. Some thermometers have good predictive value for large areas while others can’t be relied upon for small areas. The more thermometers the better.

eadler
February 11, 2011 2:41 pm

Elizabeth says:
February 11, 2011 at 9:48 am
Having done more reading on the Berkeley site I found: “The Berkeley Earth Surface Temperature study has been organized under the auspices of the non-profit Novim Group.”
Link to the Novim group page, http://www.novim.org/
Scan the page and you see, “Despite efforts to stabilize CO2 concentrations, it is possible that the climate system could respond abruptly with catastrophic consequences” followed by extensive discussion on climate engineering projects.
The integrity of the research team notwithstanding, further inquiry into the nature of Novim Group’s mandate is warranted.

You are skeptical because of the participation of the Novim Group. You don’t need to be concerned. Their influence will be cancelled by the participation of the Charles H Koch Foundation, a creature of the polluting Koch Industries. They have funded anti AGW conservative think tanks like Heritage, Cato, AEI etc.
http://en.wikipedia.org/wiki/Political_activities_of_the_Koch_family
You can now relax.
It seems that many of the same methods used by GISS to correct bad data will be used by this new effort. They are however hoping to get away without gridding because they are using more stations.
It is doubtful that there will be a large effect on the recent temperature record. The 2 satellite records, and the three leading thermometer records show the same behavior for the recent record.
REPLY: eadler I swear you are some sort of “polluting creature” too. Mind pollution maybe? Koch sponsors the PBS NOVA TV program, it’s right there at the bottom of the web page, soooo…..be sure to close your mind to that too, they can’t possibly have a decent science program if Koch is involved now, can we? – Anthony

eadler
February 11, 2011 2:52 pm

Pat Frank says:
February 11, 2011 at 10:09 am
Looking at the methodology described at the Berkeley site, they do not even mention systematic error. Systematic error inevitably contaminates the surface temperature record. If they neglect to discuss or evaluate that error, they’ll end up producing yet another centennial global temperature anomaly trend with plenty of statistical moxie but with no physical meaning.
There is no way to remove systematic error from an old data set. One can only estimate how large it might have been, and put conservative uncertainty bars on the result.
In either case — ignoring the systematic error in the 20th century record, or adding an estimated uncertainty width — there’s no doubt but that the global (or any local, for that matter) centennial temperature anomaly trend will be no better than almost entirely spurious.
If the Berkeley group ignores that empirical truth, their product will only kick up more controversy and be yet one more entry in the obscure the issue sweepstakes.

Please define what you mean by systematic error. Looking at their web page, they allow for station moves, the UHI, and equipment changes. These are sources of systematic error. They also check adjacent stations to look for discrepancies, in a similar manner to GISS and CRU. This is a means of catching systematic errors.

Ken Lydell
February 11, 2011 3:01 pm

Much more on San Francisco’s notorious microclimates can be found at Wikipedia.
http://en.wikipedia.org/wiki/Microclimate

John Whitman
February 11, 2011 3:30 pm

Anthony,
It is good to see that you are involved with the Berkeley temp record project.
Please encourage them to be open during all the processes of the project.
John

Stephan
February 11, 2011 3:39 pm

Actually I beg to differ with Anthony on this one: “There is warming etc.” UAH satellite data shows cooling for the last two months (a negative anomaly). The ARGO data show no warming for all years, and SSTs are much more meaningful in my view. The SH shows no warming, there is no tropospheric hot spot, definitely no “Global” effect here. Of course I am a blatant denier by now… All that said, if the study uses only raw data and no cities I would believe the outcome, which will show no change (flatliner). Temps are going to go up and down for the rest of everyone’s life here and your children’s children until everybody gets incredibly bored, like measuring respiratory rate in a normal patient LOL.

beng
February 11, 2011 3:45 pm

After thinking about this a little more, I can’t see how this could be an honest effort (I hope I’m wrong), especially if done by US academics.
Big question: are these researchers actually prepared to be ostracized culturally and even socially by their academic peers & friends if they don’t get the desired results? I truly doubt it….
The only honest analysis will come from Anthony & other independent researchers, I’m afraid. This is the unfortunate state of pathological conformity in modern, publicly funded “climate science”.

Billy Liar
February 11, 2011 4:02 pm

Elizabeth says:
February 11, 2011 at 9:48 am
Having done more reading on the Berkeley site I found: “The Berkeley Earth Surface Temperature study has been organized under the auspices of the non-profit Novim Group.”
Link to the Novim group page, http://www.novim.org/

Interesting find. They appear to be most interested in geoengineering – a kind of planetary hit squad for when the tipping point comes along (presumably after a large ransom is paid).
They will undoubtedly be interested in having an accurate global temperature, otherwise, as Dr Kevin Trenberth is reputed to have said, ‘how will we know if it worked’.

eadler
February 11, 2011 4:06 pm

DocMartyn says:
February 11, 2011 at 2:31 pm
Why not do a positive control? Deliberately pick sites that fail the criteria, inside growing cities, airports and the like, and see what contamination looks like, with respect to Tmax/Tmin and changes in the yearly temperature warming/cooling rates.
The same thing could be accomplished by doing the reverse. Keep only the good sites and see what the trends looked like.
In fact this has been done. In the US, only the stations acceptable by Anthony’s criteria were examined, versus the full set of stations. The result for the US temperature trend did not change significantly.
In addition, when urban stations were dropped from the global data set used by GISS, it made no difference in the trend. This result was reported in the peer reviewed literature.
I am glad the this larger data base is being examined. It is encouraging that it is being funded by one of the Koch brothers, who are opposed to the idea that global warming is a problem. It ensures that an objective study will be done, and makes it likely that it will be accepted by “skeptics”.

DaleC
February 11, 2011 4:23 pm

Richard Muller unequivocally supports M&M in the hockey stick fiasco (in 2003/2004).
http://www.technologyreview.com/energy/13423/page1/
http://www.technologyreview.com/energy/13830/page1/

February 11, 2011 4:25 pm

Dave Andrews says:
February 11, 2011 at 2:23 pm
Steven Mosher Feb11 11.05am,
Are you really saying that 10 stations with long histories can adequately tell you how the temperature of the whole earth has progressed, rather than just point to the fact that there has been some warming, which of course is to be expected as we come out of the LIA?
#######
And what would be the difference? ‘Coming out of the LIA’ explains nothing.
Here is the point. Regardless of the cause, the temperature of the earth has gone up over the past 150 years.
If you sample 40,000 sites you will get one estimate of the trend.
If you randomly select 10,000, 5,000, 3,000, 1,000, 500, 100 you will get similar trends. That’s because the distribution of trends is fairly normal (kinda spiky).
In AR4 I believe they looked at the 4 longest records. Same general answer.
That’s because over century scales you don’t have areas of persistent cooling while the rest of the globe warms.
Welcome to the Law of large numbers.
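A toy illustration of the law-of-large-numbers point, using simulated station trends rather than any real data set (every number below is invented):

import numpy as np

rng = np.random.default_rng(42)

# Simulate 40,000 per-station century-scale trends (deg C/decade) scattered
# around a common "global" value, with a heavy-tailed ("kinda spiky") spread.
true_trend = 0.07
station_trends = true_trend + rng.standard_t(df=3, size=40_000) * 0.05

for n in (40_000, 10_000, 5_000, 1_000, 500, 100):
    sample = rng.choice(station_trends, size=n, replace=False)
    print(f"n = {n:>6}: mean trend = {sample.mean():+.3f} C/decade")

The subsample means stay close to the full-network value even for a few hundred stations, which is the point being made above.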

ImranCan
February 11, 2011 4:27 pm

Global warming is real, Muller said, but both its deniers and exaggerators ignore the science in order to make their point. “There are the skeptics – they’re not the consensus,” Muller explained.
Oh please …. these muppets appear to have completely missed the point …… as usual. I don’t know anyone (at all) who says that there hasn’t been some global warming over the last century or so. How can these guys come out with a statement like the one above? Honestly, what’s the point?

February 11, 2011 4:35 pm

DocMartyn says:
February 11, 2011 at 2:31 pm
Why not do a positive control? Deliberately pick sites that fail the criteria, inside growing cities, airports and the like, and see what contamination looks like, with respect to Tmax/Tmin and changes in the yearly temperature warming/cooling rates.
######
That’s been done over and over again.
The biases don’t rise above the noise floor. Doesn’t mean they are not real. They are just small.

sky
February 11, 2011 4:37 pm

The presumption that using ALL 39 thousand station records will somehow improve estimates of temperatures world-wide precipitates a sense of uneasiness that the essential nature of the problem is being missed. It can be summarized in two sentences:
1) Records of adequate length are available ONLY from population centers of one size or another–not from locales untouched by civilization.
2) There are large regions throughout the globe for which there are NO credible records–including most of the oceans.
Without proven techniques for identifying LOCAL man-made effects upon station temperatures–and removing them–merely using more records will NOT solve the intrinsic problem of data CORRUPTION. Nor, barring the discovery of a treasure trove of previously unknown records in the most unlikely places on the globe, will the GEOGRAPHIC COVERAGE be materially improved. Thus, despite a more punctilious minding of largely academic P’s and Q’s, I do NOT expect the findings of this panel to differ substantially from the indiscriminate data products of the “major” index manufacturers.

February 11, 2011 4:38 pm

P. Solar says:
P. Solar and others: Dr. Curry put me in contact with the researchers long ago.
My interaction with them has been favorable. Issues that I raised about metadata
are on the table. Their approach or method is mathematically sound and in line
with the methods of RomanM and jeffId. It’s the best method.

Merovign
February 11, 2011 5:03 pm

Sadly I don’t think Muller’s project will resolve a single solitary thing, because it addresses the least relevant and least interesting part of the controversy – adding needless precision to the argument about exactly how much the difficult-to-define “average” temperature has changed in the extremely recent (last 150 years or so) past.
Actual questions that are controversial:
Whether the change is a threat.
Whether and to what extent human activity affects the climate.
What can be done to ameliorate or adapt to that change.
Whether anything should be done to ameliorate or adapt to that change.
So far #1 and #2 are the big fighting points, and #3 is filled with pie-in-the-sky flying-windmills kinds of things at the moment, some of which is unfortunately costing money and resources.
I conclude that, as well-intentioned as it may be, Muller’s efforts are almost entirely pointless.
I say “almost” because, hey, a major database of raw temperature data combining existing data could be useful. It just won’t resolve the debate at all.

Werner Brozek
February 11, 2011 5:17 pm

The satellite data do show some warming over the last 30 years and both RSS and UAH show that 1998 was the hottest year so far, although I have to admit that it could be argued that 2010 was a statistical tie with 1998. But with the Met Office also having 1998 warmer than 2010, even though the race was close, the first thing that I will be watching for is whether or not 1998 still beats 2010 and retains its rank as the hottest year of the last 50 according to their results. With the great increase in the number of thermometers being used, this would naturally mean more thermometers in the northern Arctic, about which there has been a lot of discussion. I do not dispute that the northern Arctic has warmed, however there is no way I believe it has warmed as much as GISS would have us believe.
The area of the north polar region above 82.5 degrees is 2.2 x 10^6 square km. This is where satellites apparently cannot get readings. The ratio between the area of the whole earth and the north polar region above 82.5 degrees is 5.1 x 10^8 square km / 2.2 x 10^6 square km = 230. So the area above 82.5 degrees is only 1/230 or 0.43% of the Earth. This is not enough to allow GISS to give 1998 as low a ranking as it does.
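For what it’s worth, the quoted figures check out from basic spherical geometry (a quick sketch, using the standard mean Earth radius):

import math

R = 6371.0                  # km, mean Earth radius
lat = math.radians(82.5)

earth_area = 4 * math.pi * R ** 2                       # ~5.1e8 km^2
polar_cap = 2 * math.pi * R ** 2 * (1 - math.sin(lat))  # ~2.2e6 km^2

frac = polar_cap / earth_area
print(f"cap above 82.5 N: {polar_cap:.2e} km^2, {frac * 100:.2f}% of the Earth")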

Nicholas
February 11, 2011 5:21 pm

Anthony,
Since most of the comments seem to be posted by people that have no
knowledge of Richard Muller I would like to see his credentials before
the comments.
Department of Physics at the University of California at Berkeley, and Faculty Senior Scientist at the Lawrence Berkeley Laboratory, where I am also associated with the Institute for Nuclear and Particle Astrophysics.
Named by students as Best Class at Berkeley! (It has won this honor for the last two years in a row.)

Doug Badgero
February 11, 2011 6:22 pm

Like most here, I look forward to a more accurate “global temperature anomaly” trend. However, they cannot settle the issue of CAGW with a temperature anomaly trend. I think most skeptics acknowledge that the earth has warmed since Franklin and Jefferson began measuring temperatures. There are the matters of attribution and sensitivity to solve.

eadler
February 11, 2011 6:27 pm

DaleC says:
February 11, 2011 at 4:23 pm
Richard Muller unequivocally supports M&M in the hockey stick fiasco (in 2003/2004).
http://www.technologyreview.com/energy/13423/page1/

I don’t think that is accurate. He says they have an argument that must be considered, and time will tell. He is open minded on the question.
Last month’s article by McIntyre and McKitrick raised pertinent questions. They had been given access (by Mann) to details of the work that were not publicly available. Independent analysis and (when possible) independent data sets are ultimately the arbiter of truth. This is precisely the way that science should, and usually does, proceed. That’s why Nobel Prizes are often awarded one to three decades after the work was completed-to avoid mistakes. Truth is not easy to find, but a slow process is the only one that works reliably
It is 2004, before the NAS report, which weakened the conclusions of the Hockey Stick. Subsequently it was shown that M&M’s claim that the HS was invalid because it was an artifact of non-centered PCA is incorrect. It was shown that centered PCA also gives a hockey stick if the correct PCA procedure is used.
Since then many papers have been published which confirm the hockey stick shape of the NH temperature anomaly. Muller subsequently changed his mind.
Errors have been discovered in the “Hockey Stick” statistical model that shows the potential for dramatic future increases in global temperatures. This claim arises from a February 2005 article in Geophysical Research Letters by Stephen McIntyre and Ross McKitrick claiming various errors in the methodology of Mann et al. (1998) in the principal component analysis they used to generate the “Hockey Stick.” McIntyre and McKitrick first made this claim in 2003 in the social science journal Energy and Environment. The same paper was later unanimously rejected by the editors and reviewers of the journal Nature before being printed in Geophysical Research Letters at the urging of physicist and MacArthur Fellow Dr. Richard Muller. In response to the 2005 article scores of scientists worldwide verified the statistical methods used by Mann in the generation of the “Hockey Stick” and no peer-reviewed claims doubting its veracity have arisen since.
In fact, Professor Muller has since changed his mind, based on what has been written in the intervening years. In his 2009 PowerPoint, he shows the Hockey Stick graph as something political decision makers need to know. Check out page 67 of the following presentation that he gave:
http://www.strategicstudiesinstitute.army.mil/pdffiles/strat20/muller/muller.ppt
Physics for Presidents and other World Leaders

Duster
February 11, 2011 6:41 pm

Mac the Knife says:
February 11, 2011 at 12:39 pm

1. Is the planet warming? Yes, and it has been (with fits and starts) since the end of the last glacial epoch about 10,000 years ago. The warming did not progress uniformly. There were shorter periods of cooling and warming within that 10,000-year period.

Actually, no. The planet warmed from the approximate end of the last major glacial epoch, which was around 20 to 17 kya (thousand years ago), until about 8 kya. Since then the overall trend has been cooling. Warm episodes have been trending both shorter and cooler over the Holocene, and cool episodes both cooler and longer during the same span. The planet recently – geologically, that is – seems to have been warming over the last 200 years or so, but you should read Pat Frank’s (2010) paper on the real measurement error of the available global surface temperature record, which IIRC is, at two sigma, about +/- 0.92 degrees C (one sigma is +/- 0.46 degrees C). Frank concludes that the trend over that span is statistically indistinguishable from a 0-degree trend. We do know that geographic plant distributions and thermometers all seem to show evidence of warming, but the instrumental record is simply too uncertain to offer any really good estimate of the magnitude of the change.

Cheers,

Layne Blanchard
February 11, 2011 7:08 pm

Sorry, this looks tainted already. The statement: “Global warming is real” is itself suggestive that man is causing it. And if your period looks back 1000 years, it is likely false, not true. Global warmth variability is real. If the 1930s were warmer than today, and so understood, no one would say warming is real. It may not be. Warmer today than 1977? Sure. So what?
And without solid ocean and humidity data over the last few hundred years, we really don’t know anything. So is there really any point to a global mean temperature comparison? Not much.

Patrick Davis
February 11, 2011 7:15 pm

“The word “deniers” was added by the reporter. And global warming “is” real. We expect some warming, my view is that it is exaggerated for political purposes. The key is find out what the true signal is. – Anthony”
Ah, this makes more sense now, a clear indication of bias in the media (What a surprise!). Still, would Muller have not been given the chance by the reporter to “eyeball” the article first before publishing?

February 11, 2011 7:18 pm

eadler, see my paper, here (free pdf download).
I’m talking of systematic error in the temperature record due to problems at the instrumental level. Inaccuracies enter field-measured temperatures because of solar loading on sensor shields and wind speed effects, which cause the sensor to record something other than the true air temperature.
There are empirical ways to account for this and remove the error from each measured temperature, but they require independent precision monitoring of insolation intensity and wind speed (and variations in albedo, too, actually). None of that was ever done at USHCN climate stations during the 20th century, nor likely anywhere else. Some of the new CRN stations may include that capacity, but monitoring now won’t do anything for systematic inaccuracies in the prior 150 years of the instrumental record.
I didn’t see any recognition at the Berkeley site of the systematic effects that cause sensor error in the field. Systematic error won’t show up in any statistical test of the raw data, or in cross-comparisons of regional temperature-time series.
Because systematic effects at the sensor are caused by the same processes that govern air temperatures (insolation, wind, albedo), the resulting systematic errors will be pretty much as regionally correlated as the air temperatures themselves.

David W
February 11, 2011 7:22 pm

I will be interested to hear of how they handle the incredibly vexing question of station classification. I.e. rural or urban
I’ve seen plenty of evidence of stations still classified as “rural” whilst having already in reality transitioned into what should be an “urban” classification. This is where adjustments for UHI can be done incorrectly and skew your whole data set.
Sometimes things are so badly broken that they cannot be reassembled. I suspect this is the case when you’re looking at reconstructing a temperature record from surface station measurements.

AusieDan
February 11, 2011 8:09 pm

Larry Hamlin wrote on February 11, 2011 at 9:00 am:
QUOTE
In reading the methodology material I was unable to determine how the Urban Heat Island (UHI) impacts are going to be addressed in this study. I would hope that this critical issue is to be evaluated in this independent temperature data study.
Is that the case? If so can someone please explain where this issue is addressed in the methodology. Thanks.
UNQUOTE
Thanks Larry.
I agree completely.
From my observations of various Australian individual locations, UHI explains ALL the long term trend.
And there are all the problems of unsuitable sites, faulty instruments and housing, etc that Anthony has written about.
It seems to me that expanding from a few thousand data points to over 39,000 makes the task of getting the answer right, in terms of truly reflecting the average global temperature, much, much, much more difficult.
Open source data and programs would be a real advance.
But I am still very doubtful about programs that attempt to spread the data over the surface of the globe, beyond the actual spots where the temperatures are taken.
Forget the possibility of manipulation or bias.
Just what does the gridded output mean?
We know, thanks to very close attention to data by ancients long ago, that moving the thermometer at Sydney Observatory Hill in 1917 – about 150 metres down hill to the south east, made a significant difference to the measured temperature – max & min, month by month and annually.
Those gridded outputs are just a grizzled mish-mash, in my opinion.
And downgrading some of the data relative to others seems to be unacceptable as well.
With 39,000-plus data points, why not just add them up and divide by the number?
The answer may not be the mean temperature of the earth, but it would sure tell us more as it changes over the years than any complex output from programs that, even if open source, the man in the street cannot understand.
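To show what is at stake between "add them up and divide" and a gridded, area-weighted average, here is a toy sketch with synthetic station locations and a synthetic anomaly field (nothing in it comes from a real data set):

import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: 9,000 stations crowded into 30-60 N, 1,000 spread
# over the rest of the globe; the anomaly field varies with latitude.
lats = np.concatenate([rng.uniform(30, 60, 9000),
                       rng.uniform(-90, 90, 1000)])
anoms = 0.01 * lats + rng.normal(0, 0.2, lats.size)

# (a) "add them up and divide": dominated by the crowded band
naive = anoms.mean()

# (b) crude gridding: average within 10-degree latitude bands, then
#     weight each band by its share of the Earth's surface area
edges = np.arange(-90, 91, 10)
band_means, band_weights = [], []
for lo, hi in zip(edges[:-1], edges[1:]):
    in_band = (lats >= lo) & (lats < hi)
    if in_band.any():
        band_means.append(anoms[in_band].mean())
        band_weights.append(np.sin(np.radians(hi)) - np.sin(np.radians(lo)))
gridded = np.average(band_means, weights=band_weights)

print(f"naive station mean: {naive:+.3f}   banded/area-weighted: {gridded:+.3f}")

The naive mean mostly reflects wherever the thermometers happen to be densest; the banded average is what the gridding step is meant to buy.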

kforestcat
February 11, 2011 9:13 pm

Gentlemen
Going into this, I am going to presume “innocence” with an expectation of honest effort. And further, I am prepared to tentatively applaud what appears to be a well intentioned and refreshing effort to conduct climate science in a transparent and open manner.
That said, I must echo many of the comments above in that I am far more interested in the future collection of long-term surface temperature data using instrumentation that is properly located, is properly and consistently QA/QC’d, is routinely checked using independent instrumentation, and is inherently accurate enough to measure ambient surface air temperatures within the range of interest (say +/- 0.001 degrees F).
Unfortunately for CAGW advocates and skeptics alike, our historical surface temperature data is highly fragmented; of uncertain, inconsistent, and poorly documented accuracy; and lacking common/consistent quality control; and we are attempting to discern differences in an exceptionally narrow temperature range.
This leaves reasonable doubt as to whether any analysis of the historical instrument record truly has scientific value. In the face of this uncertainty, we are relying on the “magic” of statistics. While I value the views of statisticians and firmly believe statistics is a valuable tool, no amount of statistical analysis can make up for inadequate data. Absent reliable and accurate readings, we must recognize we are simply “making do”.
Sadly, in my view, the historical surface temperature record is simply too crude to have conclusive scientific value. I remain inclined to place far greater weight on satellite data – the UAH data in particular – and very little weight on even properly analyzed historical surface temperature data.
On balance I’m willing to give “Berkeley Earth” the benefit of a doubt. But, like most science, the resulting product will only be a part of the picture. We will still be left to judge the “preponderance of the evidence”.
Regards, Kforestcat

JimF
February 11, 2011 9:13 pm

As a Stanford man, I can almost guarantee that these doofi from Berkeley can and will find that, in fact, the globe has warmed 35 degrees C. in the last 10 years, and we have until tomorrow morning to raise the hammer and sickle flag and retake our world from the evil, carbon dioxide-spewing capitalists or suffer catastrophic…whatever.
Mike Haseler says:
February 11, 2011 at 9:24 am “…An institution like Berkeley can’t afford to get it wrong (unlike the UEA)…” Well as a matter of fact Berkeley can hardly ever get it right on anything these last 40 years or so.
And finally, @sharper00. There’s nothing sharp about you. The double zeroes at the end of your monicker say it all. I would favor Anthony simply scrubbing all you have to say all the time, but he won’t do that, and I understand why. He’s just a nice man.

February 11, 2011 10:14 pm

Let me add, by the way, that the assigned uncertainties in the surface station CRN rating key, that Anthony is assessing for his paper, represent guesstimated systematic errors and are statistically analogous to the (+/-)0.2 C “reading error” guesstimate of Folland, et al., 2001.
The CRN keys likewise fall under Case 3b in my paper. That means the CRN key uncertainties propagate into an anomaly average as s^2 = sqrt{[sum over N of (CRN key)^2]/(N-1)}, and will end up producing a large uncertainty in any average air temperature anomaly time series.
When all is said and done, there will almost certainly be no way to avoid the conclusion that the current instrumental surface air temperature record is pretty much climatologically useless; likely any trend less than at least (+/-)1 C will be lost under the uncertainty bars.
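As a purely numerical illustration of that quadrature sum, here is a sketch with assumed station counts and assumed per-class uncertainty values (both are placeholders for illustration, not figures taken from the CRN key or from the paper):

import math

# Assumed per-station rating uncertainties in deg C, with assumed counts
# per siting class. All numbers are invented for illustration only.
station_uncertainty = [0.1] * 80 + [1.0] * 600 + [2.0] * 700 + [5.0] * 120

N = len(station_uncertainty)
# Quadrature propagation into an N-station average, following the form
# quoted in the comment: sqrt( sum(u_i^2) / (N - 1) )
u = math.sqrt(sum(x * x for x in station_uncertainty) / (N - 1))
print(f"N = {N} stations, propagated uncertainty ~ +/-{u:.2f} C")

Because the per-station uncertainties do not shrink as stations are added, the propagated figure stays of order a degree or more, which is the point being made about trends smaller than that.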

AlanG
February 11, 2011 10:19 pm

This is what you get when people try to do ‘science’ while playing with a computer. 5 x garbage is still garbage. The only way they can process this much data is with no quality control or calibration. As most thermometers are in built-up areas (where people can get to them), the larger data set will be WORSE. It’s a con job.
I’m going to wait for Anthony’s paper.

February 11, 2011 10:27 pm

“Sadly, in my view, the historical surface temperature record is simply too crude to have conclusive scientific value. I remain inclined to place far greater weight on satellite data – the UAH data in particular – and very little weight on even properly analyzed historical surface temperature data.”
So when the satellite measure matches the land surface record what do you conclude about the land surface record?
And when 10 years of CRN data ( all pristine sites) match the “old” sites that they are paired with, what do you conclude?
Do you think the LIA existed? why? on what evidence? is that “evidence” as accurate or as highly sampled as the evidence from 1900-2010?

February 11, 2011 10:31 pm

“Some of the new CRN stations may include that capacity, but monitoring now won’t do anything for systematic inaccuracies in the prior 150 years of the instrumental record.”
Well, that’s not actually the case. The CRN are set up in a “paired” configuration for a large part of the network. That means old stations are paired with new stations. That
will allow for the creation of transfer functions from the new network to the old.
Look at it this way. You accept the conclusions of O’Donnell 2010. I do. In that paper the new data from the satellites was used to calibrate and infill the old land data. You’ll
have the same kind of procedure with CRN and the old network. Already we know that CRN does not deviate from the old network.
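A minimal sketch of what such a transfer function could look like, using made-up paired monthly means (the bias, noise levels and station behaviour below are assumptions, not CRN or USHCN data):

import numpy as np

rng = np.random.default_rng(1)

# Synthetic paired monthly means: a "legacy" station with a small warm
# bias and extra noise relative to the co-located CRN station.
months = 120
crn = (10 + 8 * np.sin(np.linspace(0, 2 * np.pi * 10, months))
       + rng.normal(0, 0.3, months))
legacy = 0.98 * crn + 0.6 + rng.normal(0, 0.5, months)

# Fit a simple linear transfer function legacy ~ a*crn + b, then invert
# it to estimate what the CRN-quality record "would have read".
a, b = np.polyfit(crn, legacy, 1)
estimated_crn = (legacy - b) / a

rms = np.sqrt(np.mean((estimated_crn - crn) ** 2))
print(f"transfer function: legacy = {a:.3f} * CRN + {b:.3f}, inversion RMS = {rms:.2f} C")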

John Robertson
February 11, 2011 11:11 pm

If computational horsepower is needed consider using the same shared engine as SETI@Home – distributed computers. Of course there is at least one climate model running (since around 2000) called Climate Prediction

kwik
February 12, 2011 12:32 am

eadler says:
February 11, 2011 at 4:06 pm
“In addition, when urban stations were dropped from the global data set used by GISS, it made no difference in the trend. This result was reported in the peer reviewed literature.”
Oh really?
You can Peer review this one;

rukidding
February 12, 2011 1:30 am

What is the point? Even if you use every temperature measuring device in the world there will still be large areas of the Earth’s surface that are not monitored. So until we have a thermometer on every square metre of Earth there will still be room for a fiddle factor.
And what does it mean anyway? I think the current average is somewhere around 14.5 C. Is that the average where I live? No. Is it the average where you live? Probably not.
If I took all the stations inside the Arctic Circle and averaged them, would that be the world temperature? No. If I took all the stations within 1 deg north or south of the Equator and averaged them, would that be the world temperature? No.
So why would a whole heap of stations placed randomly around the world be any different?

John Marshall
February 12, 2011 1:41 am

Sounds like a victory for common sense. But will we get the real data?
Well done Anthony and keep at ’em!

February 12, 2011 1:53 am

Steven Mosher, you write:
“So when the satellite measure matches the land surface record what do you conclude about the land surface record?”
I agree with your point.
And in this particular context, check out the global comparison of the UAH-GISS divergence coupled to logarithmic population change:
http://joannenova.com.au/2011/02/the-urban-heat-island-effect-could-africa-be-more-affected-than-the-us/
😉

February 12, 2011 1:56 am

Steven Mosher, here is the logarithmic illustration of UAH-GISS divergence versus population growth:
http://hidethedecline.eu/media/UHIINDICATOR/fig14.jpg

BigOil
February 12, 2011 2:01 am

I support the suggestion of a previous post that it would be far more useful to have temperature records by region.
Many posters have pointed out that the world average is meaningless. Quite a clever device really.
The other useful thing would be to show a graph of actual temperature – 16.5 degrees or whatever – rather than the anomaly, as the anomaly distorts the scale of change to look much worse than it is.

HR
February 12, 2011 3:04 am

Given we have independent surface temperature data from satellites and a reliable OHC data collection system, isn’t this all a bit of a backward step?

Simon Barnett
February 12, 2011 3:08 am

@Feynman (et al. disparaging this effort)
But global warming – i.e. the fact that our planet warmed in the 20th century – is not in dispute.
The point is we just don’t know by how much – because of the politicisation of the existing datasets. And we have no idea what caused it – the measured increase may be attributable to natural variation, the UHIE, changes in land usage, solar variation, a combination of the above or something else entirely.
We don’t even really know if warming has ceased in the last 15 years, because of “adjustments” made to the existing datasets to make every successive year “the hottest eva!!!”.
A documented, open-source temperature record – one that is based on the numbers and not the politics, and one where the math can be independently verified by both sides – is a vital starting point in _scientifically_ answering these questions, and I see this as a very positive development.
If we cannot even properly quantify the temperature change we have no business trying to attribute that change to human activity, or to anything else. Attributing changes in temperature to any one factor in a vast and complex climatic system of which we currently have only limited understanding – carbon for example – before we even have agreement on how much the temperature changed is junk science conducted largely by Malthusian activists.
Scientifically you just can’t get there from here.

Keith
February 12, 2011 3:36 am

Anthony:
I looked at the description of methodology. Will that methodology remove records where the sign of the temperature is reversed because the data were input/transcribed without the M for minus signs? GISS & METAR – dial “M” for missing minus signs: it’s worse than we thought. http://wattsupwiththat.com/2010/04/17/giss-metar-dial-m-for-missing-minus-signs-its-worse-than-we-thought/

February 12, 2011 5:35 am

I’m encouraged by this effort and can’t wait until they release the data as I have a methodology using absolute temperatures that is different to the normal way of measuring temps, and more of this sort of data will suit it perfectly.
The way I look at the temperature recordings is that they are all in error, so the more data that can be collected, the more the errors are reduced.
We see from the brouhaha over Steig that two stations 2.5km apart have a 2deg difference in recorded temps. It could be thermometer error, it could be that it really is different by that amount, who knows.
500metres from a temp station, the temperature will be different, half an hour after the recording is made, the temperature will be different. Yet all we have are these snapshots in time and place of the temperature.
The trick is to find the things that follow a normal distribution of error, and those that don’t. For instance, instrument accuracy is said to be +/-2deg. I think it would be right to expect that there are just as many errors upwards as those downwards, so they follow a normal distribution.
In summer, it would be expected that the temperature would be above the average of the max and min for longer than it would be below. However, it is balanced by longer lower temperatures in winter, so it could be said to follow a normal distribution over the course of the seasons.
UHI does not follow a normal distribution, neither does the March of the Thermometers, or the lowering in average elevation of the temperature stations.
These are three of the major ways, and no doubt there are others, that the data has become less useful than it could be, and they need to be quantified and adjusted for, and the more data we have, the better we can test those things and generate accurate results. Also the use of anomalies results in a huge loss of information, in my view.
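As a toy illustration of that distinction (assumed numbers only, not real station data), a symmetric error averages out over many readings while a one-sided effect like UHI does not:

import numpy as np

rng = np.random.default_rng(1)
true_temp = 15.0
n = 10000  # number of readings

# Random instrument error, symmetric about zero: averages toward the truth
random_err = rng.normal(0, 2.0, n)          # +/- 2 deg C sensor noise
print(np.mean(true_temp + random_err))      # ~15.0

# One-sided (systematic) effect, e.g. a warm bias: does NOT average away
uhi_bias = np.abs(rng.normal(0, 0.5, n))    # always warms, never cools
print(np.mean(true_temp + random_err + uhi_bias))  # ~15.4, biased high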
Remember, Anthony has been involved in this, and if for no other reason than as a mark of respect to him, we should await the releasing of the data before saying things we may regret later.

John Brookes
February 12, 2011 6:26 am

Well, this is a rather exciting development! But what a lot of work it will be. Temperature records, like any records, have mistakes in them. There will be so many subtle problems to fix. For example, how do you handle the case where a weather station is moved? A move of a few kilometres from the coast can dramatically affect the temperatures. How do you take this into account?
I’m going to assume that we have skillful people doing this, and that they will manage to make sense out of the vast amounts of temperature data, and get meaningful results out. I have a reasonable faith that clever people can do miracles!
Attitudes to this new study seem to be splitting skeptics into those who think the science is tractable and worth pursuing, and those who think it’s all too hard and we should just make our minds up without any evidence.

R. de Haan
February 12, 2011 8:48 am

No matter the outcome, this is a great initiative. Thanks

kforestcat
February 12, 2011 9:01 am

Dear steven mosher
Where you state:
“And when 10 years of CRN data ( all pristine sites) match the “old” sites that they are paired with, what do you conclude?”
Assuming the Climate Reference Network (CRN) sites in question were co-located with the older surface temperature stations, I would conclude the “pristine” sites were accurate to within their calibration range at the time the data was compared – based on independent verification.
Note, however, that my understanding is that the CRN data is obtained from land-based “automated instrument package[s], transmitted to a GOES satellite which in turn transmits the data to Wallops Island, VA.” (See http://www.data.gov/geodata/E5110AB7-6A2A-7705-9B63-CBDEDA02DFA5) If I am in error, and the CRN data represents temperature data collected directly by satellite and not land-based data collected by satellite, please let me know – your knowledge in this area being better than mine.
Even without the land-based CRN or “pure” satellite data, I would normally be inclined to “believe it likely” that scientific readings at “pristine” sites were “likely” accurate during periods prior to the independent verification – under a blanket assumption that the site’s QA/QC procedures were followed… Unfortunately, I have lost all confidence in NOAA’s QA/QC program for surface temperature stations.
That said, I would not/could not “conclusively” state the station’s pre-comparison data was accurate… unless I had access to reliable QA/QC data showing consistent and independent verification of the source instrument readings.
Further, if the inherent accuracy of the source instrument was beyond the range required to answer the climate change issue at hand, then I would be forced to conclude that I could not discern a usable result for that purpose.
My comment was not intended to suggest that all surface temperature station readings have no scientific value – rather that there is not a sufficient number of reliable “pristine” stations available throughout the world from which one can draw a reliable conclusion about the world “surface” temperature… or even to assume a difference from an arbitrarily set “normal” temperature.
Consequently, in my view, while the historical record may provide an “indicator” to “suggest” past events, the data is simply not reliable enough, nor available in sufficient quantity, to draw a firm conclusion about the “world temperature”. (Assuming, as a side issue, said number has any meaning.)
In conclusion, where surface temperature data can be verified as reliable and is of sufficient quantity to draw a specific conclusion, I have no problem accepting the results. I am simply not convinced this is the case.
Between gentlemen, recognizing you appear to have a divergent view: do you come to a different conclusion(s) with the same facts? Or do you differ with my view of the reliability, quality, and quantity of the data available? What reasoning divides us?
No hostile intent or sarcasm implied; I do value your opinion.
Regards, Kforestcat

eadler
February 12, 2011 9:06 am

Pat Frank says:
February 11, 2011 at 7:18 pm
eadler, see my paper, here (free pdf download).
I’m talking of systematic error in the temperature record due to problems at the instrumental level. Inaccuracies enter field-measured temperatures because of solar loading on sensor shields and wind speed effects, which cause the sensor to record something other than the true air temperature.

I think you are making a logical error in this paper, in case 2, sec. 2.2. You claim that for a given station, the variation in the actual temperature, s, contributes to uncertainty in the monthly average, which is in addition to the measurement noise. This is incorrect. The average of the real temperature has no uncertainty as a result of the real variation. If you were choosing N temperature samples at random from an infinite sample, then the average that you get would have the statistical uncertainty that you state, even when the measurements were perfectly accurate. But this is not what applies to the monthly average at a given station. You are not choosing N values from a random sample of temperatures. The N temperature measurements at a given station are all that there is. The average is the average, with no uncertainty due to sampling.
I don’t have access to the references you cite, and am not acquainted with the terminology you use, so I can’t comment on the details of the other aspects of your analysis of temperature uncertainty. I will have to wait and see what the climate science community makes of it.
The actual global temperature is not what we are calculating, but rather the change in temperature over time. It seems to me that the sensor errors that you mention will cancel when the temperature anomaly is calculated, unless there is a systematic drift over time.

Chris Riley
February 12, 2011 9:49 am

“It seems unreasonable to me to criticise them for not fixing a problem you (or anyone else) have yet to demonstrate.” Sharper00
It seems reasonable to ME to criticize them (NOAA NASA CRU etc. etc.) for pushing a draconian re-ordering of the world’s economy, one that without question would result in gargantuan increases in human misery, as a “solution” to a “problem” that they (or anyone else) have yet to demonstrate.

Allen
February 12, 2011 10:44 am

@Bigdinny, Ged, MtK,
Thank you for asking the question BD and for the responses. I too am a “lurker” and the bee in my bonnet has always been about the integrity of the scientific inquiry that led to this fading AGW alarmism. I really took an interest when Climategate broke, when emails seemed to indicate that the peer-review process was being actively corrupted by a small number of scientists. More recently, as you might have seen here at WUWT, there is more evidence of peer-review corruption with the Steig/O’Donnell affair.
In summary, the atmosphere has been warming since the last ice age. But some would have us believe, based on a corrupted scientific inquiry process, that our consumption of fossil fuels and the consequent emission of CO2 is catastrophically exacerbating the warming trend. It follows that we have it in our power to reverse the trend by reducing our consumption of fossil fuels.
If the scientific inquiry process IS corrupt, then how can we know the true causes behind the warming trend? For me, the critique starts there, with an inquiry about the scientific process itself.

DT UK
February 12, 2011 1:14 pm

The lead scientist of this new group, Robert Rohde, has, I believe, been an administrator of Wikipedia since 2005. Using the name Dragons Flight, he has been pretty ‘active’, mainly in climate-related topics.
Look him up; his style, while not as obvious as that of one William M. Connolley, is nonetheless… well, make your own mind up.
BTW I predict this team will find even more warming than had been previously found

eadler
February 12, 2011 1:25 pm

kwik says:
February 12, 2011 at 12:32 am
eadler says:
February 11, 2011 at 4:06 pm
“In addition, when urban stations were dropped from the global data set used by GISS, it made no difference in the trend. This result was reported in the peer reviewed literature.”
Oh really?
You can Peer review this one;

I don’t know the nature of the data that the family in your video was accessing. If the data was not homogenized, a UHI effect will be detected. It is a real effect. Before climate scientists use the data, it is homogenized to account for station moves, equipment changes and abrupt temperature changes due to environment. The result is that once this is done, no difference between urban and rural databases can be detected.
http://www.ncdc.noaa.gov/oa/wmo/ccl/rural-urban.pdf
All analyses of the impact of urban heat islands (UHIs) on in situ temperature observations suffer from inhomogeneities or biases in the data. These inhomogeneities make urban heat island analyses difficult and can lead to erroneous conclusions. To remove the biases caused by differences in elevation, latitude, time of observation, instrumentation, and nonstandard siting, a variety of adjustments were applied to the data. The resultant data were the most thoroughly homogenized and the homogeneity adjustments were the most rigorously evaluated and thoroughly documented of any large-scale UHI analysis to date. Using satellite night-lights–derived urban/rural metadata, urban and rural temperatures from 289 stations in 40 clusters were compared using data from 1989 to 1991. Contrary to generally accepted wisdom, no statistically significant impact of urbanization could be found in annual temperatures. It is postulated that this is due to micro- and local-scale impacts dominating over the mesoscale urban heat island. Industrial sections of towns may well be significantly warmer than rural sites, but urban meteorological observations are more likely to be made within park cool islands than industrial regions.

Dave Springer
February 12, 2011 1:33 pm

steven mosher says:
February 11, 2011 at 11:05 am
“Be prepared to learn that any 100 randomly chosen tell the same story.
heck, pick the 10 longest records and you get the same story.”
That’s what happens when people won’t let facts get in the way of a story.
What do you think the story would say if we took the 10 longest rural records and used only raw data with no adjustments?
The problem is the instrument record isn’t accurate enough, long enough, or global enough to pull such a small signal out of the noise of the past 130 years. Then, because that can’t tell a credible story, they try to manipulate, extrapolate, interpolate, adjust, and otherwise massage the poor data to make it better. Once you commit to massaging poor data like that with statistical techniques and unverifiable quality assumptions, you can make it say whatever you want it to say, which is why there’s the trite expression “Lies, damned lies, and statistics”, famously popularized by Mark Twain.
Anthony Watts added in this thread “global warming is real” and only the magnitude is in question.
Not quite, Anthony. You rely on the satellite record for that, which is short (32 years) and not without its own problems, questionable assumptions, and assorted other artifacts, to say nothing of the fact that it doesn’t measure the air temperature directly with a thermometer 4 feet off the ground inside a Stevenson screen, but rather makes an indirect measurement of radiation that has travelled through kilometers of atmosphere and has to be adjusted and transformed with mad skillz to get an actual temperature out of it. The number of revisions over the past 32 years to how the satellite data is and was massaged is legion, and the sad fact is the satellite record is still the best temperature record we have despite all the problems with it.
So when you say “global warming is real” you’re manufacturing a factual statement. If you said “global warming appears to be real over the past few decades” I would have no argument with it but you didn’t – you stated it is a fact when it is no such thing.

Bill Illis
February 12, 2011 1:51 pm

I, for one, am very interested in seeing the raw data from all 39,000 sites across the world.
As long as they outline how many are in cities, UHI will not be a problem since we can deduce how much of whatever increase is just UHI.
It doesn’t matter what the results are.
We have the right to have access to all the raw data in this very important issue (and I prefer to see all of it – not just the ones NCDC or GISS or the Hadley Centre have picked out for me and made all kinds of unknown adjustments to).

Dave Andrews
February 12, 2011 2:09 pm

Steven Mosher,
OK, the ‘law of large numbers’ says there is warming. No one disagrees with that; it’s what coming out of the LIA also says.
But the ‘law’ says nothing about whether the warming is unprecedented, the records being too short, or whether there is a significant AGW component. The MWP occurred, with variation, over approx. 300 years. Similarly the LIA, again with variation during the period, lasted over a similar span. There was precious little AGW involved in these events.
Why should we assume human influence is now the dominant factor in climate change?

Bigdinny
February 12, 2011 3:21 pm

So now, having read every post on this thread (and thank you for the attempt at direct answers Allen,Duster,Ged and MtK), I can safely conclude that there are no answers, only more questions. As an elected official in a small town who must make reasoned judgments in the allocation of tax dollars, it leaves me, well, cold. 🙂 As a tidbit for all you analytical mathematicians out there, our community recently discovered (somewhat rudely) that the price of wind is directly proportional to the price of fossil fuels, at least as it relates to the generation of electricity. Someday, on an appropriate thread, I’ll reveal some of the dirty little secrets regarding recycling. As Kermit the Frog once said, “It’s not easy to be green”.

rbateman
February 12, 2011 4:19 pm

Patrick Davis says:
February 11, 2011 at 8:24 am
Perhaps the group found the data that Phil Jones says got lost.
I hope it includes the missing data from my town. I can dream, can’t I?

u.k.(us)
February 12, 2011 5:05 pm

So, now that we have the surface temps figured out, we can factor in the effects of CO2.
I assume it won’t be good news.

February 12, 2011 5:49 pm

Steven Mosher, “So when the satellite measure matches the land surface record what do you conclude about the land surface record?
The systematic error of a PRT temperature sensor, e.g., inside a CRS shelter, has been measured, Steve. It’s about (+/-)0.5 C, and about half that for an MMTS sensor. Why does comparison with satellites tell anyone anything about the reality of empirically determined systematic errors in a surface air temperature sensor?
Apart from that, satellite temperatures from IR sensors are calibrated against buoy SST measurements, and so are no more accurate than the buoys are. Since buoy SSTs are also used to deduce or calibrate marine air temperatures, it’s not so surprising that satellite and surface temperature trends should match.
And when 10 years of CRN data ( all pristine sites) match the “old” sites that they are paired with, what do you conclude?
Was the CRN data corrected for the systematic error impacting its own sensors? If you look at Hubbard and Lin’s 2002 paper (reference 13 in the E&E paper), you’ll see that the precision sensors in all tested shelters were systematically biased in the same direction.
In their 2004 paper, Sensor and Electronic Biases/Errors in Air Temperature Measurements in Common Weather Station Networks, J. Atm. Ocean. Tech. 21, 1025-1032, H&L, among other things, examined errors in USCRN sensors. They found that, “For the USCRN PRT sensor in the USCRN network, the RSS errors can reach 0.2 – 0.34 C due to the inaccuracy of CR23X datalogger…”, where “RSS” is root-sum-of-squares. These are systematic errors, not random, and do not decrement as 1/√N.
When the systematic effects are derived from the same forces as determine air temperature, it’s not surprising that anomaly trends correlate, even when they’re inaccurate. But in any case, there doesn’t seem to be much reassurance available in comparative analysis.
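A minimal numerical sketch of that distinction (toy values, not the H&L figures): the spread of a mean of random errors shrinks roughly as 1/√N, while a fixed bias is untouched by averaging.

import numpy as np

rng = np.random.default_rng(2)
true_temp, sigma, bias = 20.0, 0.5, 0.3   # assumed random noise and systematic bias, deg C

for n in (10, 100, 1000):
    means = [np.mean(true_temp + bias + rng.normal(0, sigma, n)) for _ in range(2000)]
    spread = np.std(means)                # shrinks roughly as sigma / sqrt(n)
    offset = np.mean(means) - true_temp   # stays near the 0.3 C bias regardless of n
    print(f"N={n:5d}  spread={spread:.3f}  offset={offset:.3f}")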

LazyTeenager
February 12, 2011 6:07 pm

Espen says
———
Are they going to publish yet another “global mean temperature”? I’m surprised that physicists are willing to work with that kind of metric
———
Despite the moist enthalpy argument, you can’t get away from the fact that if the global heat content of the oceans, the land and the air is going up, then the global average air temp is also going to go up as well.
You’ve let yourself become distracted by a “can’t see the wood for the trees” argument.

LazyTeenager
February 12, 2011 6:14 pm

Dave Andrews says
———-
Why should we assume human influence is now the dominant factor in climate change?
———–
The reverse also applies.
Why should we assume that human influences cannot affect climate now, when only natural influences affected the climate in the past?
Remember “that was then this is now”.

LazyTeenager
February 12, 2011 6:23 pm

Chris Riley says
———–
It seems reasonable to ME to criticize them (NOAA NASA CRU etc. etc.) for pushing a draconian re-ordering of the world’s economy, one that without question would result in gargantuan increases in human misery, as a “solution” to a “problem” that they (or anyone else) have yet to demonstrate.
———-
Re gargantuan increases in human misery:
Let me guess:
1. This statement is incontrovertible
2. The economics is settled.
3. It must be true because there is a consensus of Internet bloggers.
Or maybe you are just making up a story.

February 12, 2011 6:30 pm

Steven Mosher: “Well, that’s not actually the case. The CRN are set up in a “paired” configuration for a large part of the network. That means old stations are paired with new stations. That will allow for the creation of transfer functions from the new network to the old.”
Steve, how will transfer functions from a new CRN sensor presently parallel to, say, a LiG set up in a CRS screen, remove systematic error from the prior decades of LiG temperatures? Error will have varied systematically with erratic micro-climatic conditions. At best, after a decade or two of parallel measurements, you’ll get an estimate of the average bias and SD for the LiG/CRS system, relative to the CRN system. Subtracting the average bias from prior LiG temperatures will not remove the relative LiG SD. That SD will be uncertainty bars around any LiG measurements spliced onto a measured CRN trend.
If there happened to be a larger systematic variance or a different bias in the past LiG measurements than in the LiG temperatures taken in the later parallel measurements, then subtracting the new average bias may even make the older LiG temperatures less accurate. But we’d never know, and so that would produce another form of uncertainty, namely implicate uncertainty in that we’d not really know whether our correction actually corrected the older record.
Further, the LiG SD will have built into it the unmeasured systematic error from the CRN sensor. Unless, that is, a further parallel calibration temperature sensor system was put in place that is relatively impervious to solar loading and wind speed effects. That sensor would yield the accurate air temperatures that will reveal the systematic bias in the CRN sensor temperatures.
A fully critical experiment would include a pyranometer and an anemometer to independently measure radiation and wind speed, and use those plus the more accurate temperatures to obtain empirical transfer functions to correct the systematic bias out from the CRN temperatures.
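A rough sketch of what such an empirical correction might look like (entirely hypothetical error model and coefficients, not a real calibration):

import numpy as np

# Hypothetical: regress the shelter sensor's error against solar loading and wind
# speed measured alongside it, then use the fit to correct new readings.
rng = np.random.default_rng(3)
n = 1000
solar = rng.uniform(0, 1000, n)     # pyranometer, W/m^2 (made-up values)
wind = rng.uniform(0, 10, n)        # anemometer, m/s (made-up values)
true_t = rng.uniform(-5, 30, n)     # reference ("accurate") air temperature, deg C

# Assumed error model: warms with solar loading, cools with ventilation, plus noise
sensor_t = true_t + 0.0008 * solar - 0.05 * wind + rng.normal(0, 0.1, n)

# Fit error = a*solar + b*wind + c by least squares, then subtract the fitted error
X = np.column_stack([solar, wind, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, sensor_t - true_t, rcond=None)
corrected_t = sensor_t - X @ coef

print(f"bias before: {np.mean(sensor_t - true_t):.3f} C, after: {np.mean(corrected_t - true_t):.3f} C")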
So, I see the CRN-LiG parallel set-ups to be just business as usual for NCDC, where they merely want to adjust and renormalize older LiG or MMTS temperatures to line up with newer CRN temperatures. The parallel measurements won’t help them with systematic error already in the older instrumental surface air temperature record, and won’t help them with the systematic error that will also enter the newer CRN temperatures.
As I recall, both Ryan O’D and Jeff C observed that their study corrected a faulty method, but said nothing about a physically real temperature trend in Antarctica. Given that, it looks like your argument there assumes a conclusion that was not present.

LazyTeenager
February 12, 2011 6:30 pm

My feeling about this new project is that its leadership is extremely naive about the mishmash of attitudes that make up climate skeptic land.
I predict that if it does not give the answer that people want, it will still be attacked irrespective of the quality of the result.
Even amongst the comments here today I see people maneuvering to preemptively set up a dismissal.

February 12, 2011 6:50 pm

eadler, you wrote, “[In a] monthly average at a given station[, y]ou are not choosing N values from a random sample of temperatures. The N temperature measurements at a given station are all that there is. The average is the average with no uncertainty due to sampling.
Daily temperatures at a given station are not day-by-day constant across the month. They may oscillate and there will likely be a trend with a non-zero slope across the month. The average of those temperatures will be one number, it’s true. However, it is false to say that the average temperature is an accurate representation of the temperature for that month. The average has a magnitude uncertainty that communicates the variation in daily temperature across that month. Including the sqrt(variance) of the magnitude is the only physically complete representation of a monthly average temperature.
That leads to the interesting point that the 30 sets of 12 months in a 30-year anomaly normal period will display a magnitude uncertainty in their 30-year monthly averages. That magnitude uncertainty should be propagated into any long-term temperature anomaly trend based on that 30-year normal, as an indication of the natural variation in temperature during the climate regime of the normal epoch. I present that calculation in my next paper, already reviewed and accepted by E&E, and it turns out to further seriously impact the meaning of the 20th century surface air temperature anomaly trend.
You wrote that, “The actual global temperature is not what we are calculating, but rather the change in temperature over time. It seems to me that the sensor errors, that you mention, will cancel when the temperature anomaly is calculated, unless there is a systematic drift over time.
When calculating an anomaly by subtraction, the errors in the normal and the temperature propagate as their rms. They don’t subtract away. The rest of your comment about errors canceling is true only when errors are known to be random. Systematic errors are not random, and the estimated climate station measurement errors are not known to be random. Applying the statistics of random errors to measurement errors that are not known to be random, is a mistake.
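As a minimal worked example of that propagation (illustrative values only):

import math

# Anomaly = T - normal. For independent errors the combined uncertainty is the
# root-sum-of-squares of the two, not their difference. (Illustrative values.)
sigma_T = 0.5        # uncertainty in a monthly mean temperature, deg C
sigma_normal = 0.2   # uncertainty in the 30-year normal, deg C

sigma_anomaly = math.sqrt(sigma_T**2 + sigma_normal**2)
print(f"{sigma_anomaly:.2f} C")   # ~0.54 C: larger than either input, never smaller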

February 12, 2011 7:01 pm

Lazy teenager says:
“Even amongst the comments here today I see people maneuvering to preemptively set up a dismissal.”
Classic projection.

Doug Badgero
February 12, 2011 7:36 pm

Lazy Teenager,
This project is not capable of settling the issue. The debate isn’t about what the earth’s temperature has been doing for the last 150 years. All this project can do is damage the warmist meme by showing that we haven’t warmed. I would be surprised if that were the case.

johanna
February 12, 2011 8:04 pm

“The Berkeley Earth Surface Temperature Study was conducted with the intention of becoming the new, irrefutable consensus, simply by providing the most complete set of historical and modern temperature data yet made publicly available, so deniers and exaggerators alike can see the numbers.”
———————————————————————-
What a load of cobblers. Putting together a dataset has nothing to do with creating ‘a new, irrefutable consensus’. There ain’t no such animal as a ‘new, irrefutable consensus’, anyway.
I don’t see any harm in this project, if they are as transparent as they promise to be. But, given the points made by PPs and Anthony about the quality of even the raw data, it will probably end up as a GIGO exercise. More crappy data does not mean more accurate conclusions.
Where a problem could arise is if the results of putting together a dataset are splashed across the world as somehow providing answers on a global scale. That would be several bridges too far. In fact, it is more likely that the dataset will be useful at local levels to provide pointers to the veracity of numbers being generated in particular areas.
As has been pointed out, none of this touches on causation anyway. But, as Ryan O has discovered, methodology is every bit as fraught as subsequent steps in this field.

John Brookes
February 12, 2011 8:28 pm

[Snip. We frown upon the labeling of different points of view as “deniers.” ~dbs, mod.]

Alex Heyworth
February 12, 2011 9:58 pm

Prof Muller is, IIRC, the author of Physics for Future Presidents. If his work on this project is up to the standard he displayed there, I look forward to the results.

Chris Riley
February 12, 2011 10:18 pm

Lazy teenager says (2/12/6:23 pm)
———
Re gargantuan increases in human misery:
Let me guess:
1. This statement is incontrovertible
2. The economics is settled.
3. It must be true because there is a consensus of Internet bloggers.
Or maybe you are just making up a story.
————
Consider the misery generated to date by a program that is so tiny that even its developers admit that it will have no measurable impact on the climate. I am referring to CAFE standards, not the internet kind, or the kind that should only sell coffee grown in the shade, but the program wherein the U.S. government mandates the fuel mileage of motor vehicles sold in this country.
Four studies have looked at the number of deaths caused by this program, and no, the studies were not done by “Big Oil”. The studies I refer to were done by the following:
1. USA TODAY
2. Brookings
3. NAS (National Academy of Sciences)
4. NTSB (National Transportation Safety Board)
JR Dunn writing in The American Thinker compiled the results of these studies and published ranges in the estimated deaths from the CAFE standards to date.
This ranges between 42,000 and 125,000 Americans killed as of April of last year.
CAFE standards alone have already caused what anyone but a Bolshevik would describe as “gargantuan human misery.” Imagine what a program that would materially impact CO2 concentration in the atmosphere would do.
Or maybe you are just making up a story.

Allen
February 12, 2011 10:18 pm

@Bigdinny: When we stop asking questions we stop doing science.
I don’t envy your lot as a politician. You deal in rhetoric and, as Plato so fervently argued, the truth cannot be found using it. However, as a politician you must make decisions and arguments with the information you have at hand. The problem with climate science is that it does not operate within a paradigm theory as do chemistry (Lavoisier’s combustion theory) or physics (Einstein’s theories of relativity). While there are competing theories to account for the behaviour of the climate, only one has been given the full backing of the rhetoricians, irrespective of the validity of the theory. This theory, as I have said before, has its basis in a corrupt line of scientific inquiry, so we cannot know what is true by using this theory.
I think that what we do know about climate is presently so incomplete that we cannot discern man made effects, much less the magnitude of those effects. So rather than hitching political fortunes to global climate dogma the prudent politician should put government resources to use in ways that produce direct benefit for his constituents. Wind farms, as you have found out, are dubious “investments”.

February 13, 2011 1:48 am

“…Global warming is real, Muller said, but both its deniers and exaggerators ignore the science in order to make their point…”
Yes, global warming IS real, and nobody really denies that.
The question all along has always been: “Just exactly how much warming have we seen, and is this warming outside the bounds of natural warming in the past?”
Several scientists have questioned whether thermometers used for daily readings should be “re-purposed” to provide a climate record.
Over the years, the moves, changes in equipment, UHI, encroachment on the thermometers and other things introduce possible error into the system.
Add to that the adjustments, dropping of stations, 1200km smoothing, extrapolation to account for areas of no data, classification of rural or not based on nightlights, refusal of scientists to simply tell people which sites they used and how they processed that data, use of different averaging periods, etc – and you see how they’ve managed to muddy the original question.
And we haven’t even TOUCHED the whole idea of an “anomaly”, especially when we don’t know what “normal” is.
It seems the only use we see of the processed data is as a basis to declare “warmest month since whenever”.
Unfortunately, the “exaggerators” have already attacked. They’ve looked at the list of donors and determined the effort to be worthless.
That is, unless the data confirms their theory. Then, they’ll fall over themselves to deem the study a success.
Can’t wait for the data to come in…

Bill Illis
February 13, 2011 5:00 am

If you want to watch your historical temperature series get changed every month by GISS and Hadcrut, sign yourself up at:
http://www.changedetection.com/
Over the last two months, for example, over 50% of the annual numbers in this familiar chart were changed.
http://data.giss.nasa.gov/gistemp/graphs/Fig.D.gif
Over the last 11 years, the trend in US temperatures has been adjusted upward by about 0.5C which is close to the total increase in the new adjusted numbers.
http://img844.imageshack.us/img844/3259/gistempuschanges11years.png

February 13, 2011 10:31 am

u.k.(us) says:
So, now that we have the surface temps figured out, we can factor in the effects of CO2.
I assume it won’t be good news.

I was about to ask about this, too. No matter WHAT they find, it doesn’t demonstrate ANY forcing. In fact, this particular project won’t even show any correlations.

Doug Proctor
February 13, 2011 11:06 am

As Kwik notes http://wattsupwiththat.com/2011/02/11/new-independent-surface-temperature-record-in-the-works/#comment-597152:
The rural vs urban temperature difference in the GISS data is so easily identified by even a 10-year-old, it is a wonder that the re-analysis needs doing at all. The correlation of global temperature increase with decreasing station count is as easily seen with other comparisons within the GISS official data. The divergence of global temperatures between land stations and satellite data is similarly easy to see. An objective review along these lines of the current data, presented to Congress as a challenge to requested funding based on (false) claims, seems straightforward and simple.
Why is it that what we see posted in such clear and simple graphs has no apparent credibility and use for the Inhofes who wish to expose the CAGW fantasy?
A series of about 4 graphs seems to show it clearly. And that is the ADJUSTED data. What is wrong, technically, with these comparisons?

eadler
February 13, 2011 7:22 pm

LazyTeenager says:
February 12, 2011 at 6:30 pm

My feeling about this new project is that its leadership is extremely naive about the mishmash of attitudes that make up climate skeptic land.
I predict that if it does not give the answer that people want, it will still be attacked irrespective of the quality of the result.
Even amongst the comments here today I see people maneuvering to preemptively set up a dismissal.

You may call yourself lazy, but you seem to be an astute observer of mankind.

eadler
February 13, 2011 7:30 pm

Doug Proctor says:
February 13, 2011 at 11:06 am
As Kwik notes http://wattsupwiththat.com/2011/02/11/new-independent-surface-temperature-record-in-the-works/#comment-597152:
The rural vs urban temperature difference in the GISS data is so easily identified by even a 10-year-old, it is a wonder that the re-analysis needs doing at all. The correlation of global temperature increase with decreasing station count is as easily seen with other comparisons within the GISS official data. The divergence of global temperatures between land stations and satellite data is similarly easy to see. An objective review along these lines of the current data, presented to Congress as a challenge to requested funding based on (false) claims, seems straightforward and simple.
Why is it that what we see posted in such clear and simple graphs has no apparent credibility and use for the Inhofes who wish to expose the CAGW fantasy?
A series of about 4 graphs seems to show it clearly. And that is the ADJUSTED data. What is wrong, technically, with these comparisons?

There is not much divergence in the global average temperature anomaly between satellite observations and the three major temperature station databases.
http://tamino.wordpress.com/2010/12/16/comparing-temperature-data-sets/
When the data beginning in 1980 is analysed using the same baseline years for temperature, the graphs correspond quite well.
http://tamino.files.wordpress.com/2010/12/5t12.jpg
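The mechanics of that comparison are simple to sketch (hypothetical series, not the actual GISS or UAH data): shift each anomaly series so its mean over a common baseline period is zero before comparing.

import numpy as np

def rebaseline(anomalies, years, base_start=1981, base_end=2010):
    """Shift an anomaly series so its mean over the common baseline period is zero."""
    anomalies = np.asarray(anomalies, dtype=float)
    years = np.asarray(years)
    mask = (years >= base_start) & (years <= base_end)
    return anomalies - anomalies[mask].mean()

# Hypothetical series on different native baselines line up once re-baselined.
years = np.arange(1980, 2011)
giss_like = 0.02 * (years - 1980) + 0.10   # anomaly vs. an earlier baseline (made up)
uah_like = 0.02 * (years - 1980) - 0.25    # anomaly vs. a 1981-2010 baseline (made up)

print(np.allclose(rebaseline(giss_like, years), rebaseline(uah_like, years)))  # True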

Espen
February 14, 2011 7:44 am

LazyTeenager says:

Despite the moist enthalpy argument, you can’t get away from the fact that if the global heat content of the oceans, the land and the air is going up, then the global average air temp is also going to go up as well.

You just don’t get it, do you? The global average air temperature may in theory drop even if the heat content of the atmosphere rises (for instance if the rise occurred just in already warm areas, while colder and drier areas got cooler).
(Besides, ocean heat content has not increased at all since we started to get better measurements)

February 14, 2011 11:51 am

eadler, neither you nor LazyTeenager have ever given evidence that you’ve studied the air temperature record enough to actually understand its problems. And yet, you’re both ready to dismiss its critics. Therefore, from your own perspectives and at the very best, you’re both as prejudicially biased as they are.
You also wrote, “There is not much divergence between the global average temperature anomaly between satellite observations and the 3 major temperature station data bases. … When the data beginning in 1980 is analysed using the same baseline years for temperature, the graphs correspond quite well.
Since satellite temperatures are calibrated to buoy SSTs, they’re not independent data sets. Their correlation, therefore, tells you nothing about the reliability of the record.
AGW climate science is rife with the sort of sloppy tendentious analysis you posted.

onion2
February 16, 2011 1:04 pm

Pat Frank:
“Since satellite temperatures are calibrated to buoy SSTs”
No they are not. AGW skepticism is rife with the sort of sloppy tendentious analysis you posted

Doug Proctor
February 16, 2011 1:32 pm

Satellite data is about 0.22K cooler than the GISTemp data. As I understand/understood, the satellite data is calibrated internally, but there must have been some ground-truthing done, some baseline established. So the temp difference is real.
To be cooler, the satellite data is likely ocean-biased (relative to GISTemp). GISTemp is probably land-biased in the same way. Since there is a known UHIE issue of undercorrection (of an undetermined amount, said by warmists to be negligible, said by skeptics to be up to 0.3K), is it possible that the satellite-GISTemp difference IS the uncorrected UHIE?
A 0.22K of incorrectly corrected UHIE in the GISTemp record would be about right by general guesses. What would that do to the temp rise since 1965? The urban-rural ratio has increased dramatically since then: the UHIE inadequate correction will bring down current temperatures (and anomalies) while leaving older records intact.
So: is the UAH/GISTemp temperature value discrepancy the uncorrected UHIE?

Cole
February 16, 2011 3:08 pm

This is all moot if they are using erroneous surface stations.
http://www.surfacestations.org/
With an error rate of 89 percent, how does this even make sense?

Robert DePree
February 21, 2011 7:54 am

The most important mission of BEST is to establish open science regarding the AGW hypothesis, and to measure its performance using common-sense data and pragmatic homogenization procedures. It is paramount that your analysis be fully disclosed for all to evaluate, critique, comment on, and use to test alternative hypotheses.
Let rational and open science prevail. We who seek the truth, including a plurality of the electorate, require a scientifically balanced basis for responding to policy prescriptions. You have the power to replace the divisive and non-productive “advocacy spin” with which we are deluged daily with measured and reasoned consideration within the tradition of truthful scientific inquiry, including all its uncertainties.
I think it’s been adequately demonstrated that repurposing human energy regimes, with overwhelming cost, behavior mods, and personal inconvenience, will not occur without broad-scale disclosure of these nuances of environmental science. Have faith in the common sense of an informed electorate, as much as you can count on the resistance of an electorate besieged by advocacy spin. The people are receptive to the honest truth.
Best wishes for your endeavor.