New independent surface temperature record in the works

Good news travels fast. I’m a bit surprised to see this get some early coverage, as the project isn’t ready yet. However, since it has been announced in the press, I can tell you that this project is partly a reaction to, and a result of, what we’ve learned in the surfacestations project. Mostly, though, it is a reaction to the many things we have been saying time and again, only to have NOAA and NASA ignore our concerns or craft responses designed to protect their ideas rather than consider whether those ideas were valid in the first place. I have been corresponding with Dr. Muller and have been invited to participate with my data; when I am able, I will say more about it. In the meantime, you can visit the newly minted web page here. I highly recommend reading the section on methodology here. Longtime students of the surface temperature record will recognize some of the issues being addressed. I urge readers not to bombard these guys with questions. Let’s “git ‘er done” first.

Note: since there’s been some concern in comments, I’m adding this: Here’s the thing, the final output isn’t known yet. There’s been no “peeking” at the answer, mainly due to a desire not to let preliminary results bias the method. It may very well turn out to agree with the NOAA surface temperature record, or it may diverge positive or negative. We just don’t know yet.

From The Daily Californian:

Professor Counters Global Warming Myths With Data

By Claire Perlman

Daily Cal Senior Staff Writer

Global warming is the favored scapegoat for any seemingly strange occurrence in nature, from dying frogs to hurricanes to drowning polar bears. But according to a Berkeley group of scientists, global warming does not deserve all these attributions. Rather, they say global warming is responsible for one thing: the rising temperature.

However, global warming has become a politicized issue, largely becoming disconnected from science in favor of inflammatory headlines and heated debates that are rarely based on any science at all, according to Richard Muller, a UC Berkeley physics professor and member of the team.

“There is so much politics involved, more so than in any other field I’ve been in,” Muller said. “People would write their articles with a spin on them. The people in this field were obviously very genuinely concerned about what was happening … But it made it difficult for a scientist to go in and figure out that what they were saying was solid science.”

Muller came to the conclusion that temperature data – which, in the United States, began in the late 18th century when Thomas Jefferson and Benjamin Franklin made the first thermometer measurements – was the only truly scientifically accurate way of studying global warming.

Without the thermometer and the temperature data that it provides, Muller said it was probable that no one would have noticed global warming yet. In fact, in the period where rising temperatures can be attributed to human activity, the temperature has only risen a little more than half a degree Celsius, and sea levels, which are directly affected by the temperature, have increased by eight inches.

Photo: Richard Muller, a UC Berkeley physics professor, started the Berkeley Earth group, which tries to use scientific data to address the doubts that global warming skeptics have raised. Javier Panzar/Staff

To that end, he formed the Berkeley Earth group with 10 other highly acclaimed scientists, including physicists, climatologists and statisticians. Before the group joined in the study of the warming world, there were three major groups that had released analysis of historical temperature data. But each has come under attack from climate skeptics, Muller said.

In the group’s new study, which will be released in about a month, the scientists hope to address the doubts that skeptics have raised. They are using data from all 39,390 available temperature stations around the world – more than five times the number of stations that the next most thorough group, the Global Historical Climatology Network, used in its data set.

Other groups were concerned with the quality of the stations’ data, which becomes less reliable the earlier it was measured. Another decision to be made was whether to include data from cities, which are known to be warmer than suburbs and rural areas, said team member Art Rosenfeld, a professor emeritus of physics at UC Berkeley and former California Energy Commissioner.

“One of the problems in sorting out lots of weather stations is do you drop the data from urban centers, or do you down-weight the data,” he said. “That’s sort of the main physical question.”

Global warming is real, Muller said, but both its deniers and exaggerators ignore the science in order to make their point.

“There are the skeptics – they’re not the consensus,” Muller explained. “There are the exaggerators, like Al Gore and Tom Friedman who tell you things that are not part of the consensus … (which) goes largely off of thermometer records.”

Some scientists who fear that their results will be misinterpreted as proof that global warming is not urgent, such as in the case of Climategate, fall into a similar trap of exaggeration.

The Berkeley Earth Surface Temperature Study was conducted with the intention of becoming the new, irrefutable consensus, simply by providing the most complete set of historical and modern temperature data yet made publicly available, so deniers and exaggerators alike can see the numbers.

“We believed that if we brought in the best of the best in terms of statistics, we could use methods that would be easier to understand and not as open to actual manipulation,” said Elizabeth Muller, Richard Muller’s daughter and project manager of the study. “We just create a methodology that will then have no human interaction to pick or choose data.”

205 Comments
February 11, 2011 9:41 am

I have mixed feelings about all this. If only it wasn’t coming out of Berkeley! I’ll be open-minded for now.
Meanwhile, John Holdren, Obama’s Science Adviser, admits that there is a problem with AGW–the problem is that we skeptics need to be “educated.” Sheesh.
http://thetruthpeddler.wordpress.com/2011/02/11/obamas-top-science-advisor-calls-global-warming-skeptics-an-education-problem-unbelievable/

stephen richards
February 11, 2011 9:42 am

Anthony
This would have been a good one to post with comments switched off. There is nothing to see, nothing to say.

Grumpy Old Man
February 11, 2011 9:43 am

Of course, global warming is real. We know this from historical evidence. There are no more frost fairs on the Thames. The question is, ‘how much, and what caused it?’ An improved set of data may answer the first part, but this absolutely depends on how much we can trust the data. It will do little to bring us to the cause. It may tame the exaggerators, and that will be good. In the end, real science will give us the answer. That the Supreme Court of the US ruled that CO2 is a harmful gas is just unbelievable. How many science degrees do these judges have?
This debate cannot be settled until we have a clear understanding of how climate works and, I suspect, that is many years in the future. Meanwhile an improved dataset can only assist us. (Good luck, Anthony.) It might even give a pointer to the next ice age, surely just around the corner. (Keep pumping the CO2, folks – the alarmists might be right, and I’m too darned old to cope with an ice age.)

JEM
February 11, 2011 9:45 am

It’s coming out of Berkeley, so one has to be cautious about expecting the folks concerned to go into this unbiased, but if they’re willing to fully document and honestly support what they do, then they’ll be making a worthwhile contribution.
I’ve been mulling over for a while just how one would go about creating a database of temperatures, where each entry was not just a value but a complete biography of the data point including time, location, any related imagery, qualitative metrics including confidence intervals, annotations, etc.
Where it gets sticky is being able to assign essentially a GUID to each data point and its associated metadata, so as to be able to track its use through all subsequent aggregations and analyses that rely on that value, and to ensure that any qualitative and annotative metadata automatically propagates through those aggregations and analyses and cannot be short-circuited by someone whose statistical and data-warehousing expertise may be, say, a bit short.
The basic data structures involved are easy, but the analytical and processing side stumbles – if you (for instance) sum a range of values you also have to generate a GUID, data structure, and biographical metadata for the sum as well as the one-to-many relationship to the GUIDs of all the values you just summed.
Obviously, maintenance of all this overhead would be necessary only as a repository for ‘published’ results, but it still strikes me as some distance beyond where we are now.
Now, back to your regularly scheduled reality…
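For readers curious what JEM’s provenance idea might look like in practice, here is a minimal sketch in Python. The class and function names are invented for illustration only; no existing archive works this way. Each value carries a GUID and a “biography,” and any derived value records the GUIDs of its parents so the metadata chain survives aggregation.

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class Reading:
    value: float                 # the measurement or derived quantity
    metadata: dict               # time, location, imagery links, annotations...
    parents: list = field(default_factory=list)   # GUIDs of source readings
    guid: str = field(default_factory=lambda: str(uuid.uuid4()))

def aggregate_mean(readings, note=""):
    """Derive a mean; the result gets its own GUID plus links to every source."""
    mean = sum(r.value for r in readings) / len(readings)
    meta = {"operation": "mean", "n": len(readings), "note": note}
    return Reading(mean, meta, parents=[r.guid for r in readings])

# usage: two station readings rolled up into one documented aggregate
a = Reading(14.2, {"station": "A", "date": "1998-07-01"})
b = Reading(15.1, {"station": "B", "date": "1998-07-01"})
monthly = aggregate_mean([a, b], note="toy two-station mean")
print(monthly.value, monthly.parents)
```

The hard part JEM identifies is not the data structure but enforcing that every published aggregate is built only through functions like `aggregate_mean`, so the parent links can never be skipped.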

Baa Humbug
February 11, 2011 9:45 am

I can’t see how using thousands of stations, with all the variables involved, will give us an accurate indication of where global Ts are going.
Adjustments will need to be made for station moves, UHI effects, etc.
This will become just another point of back-and-forth arguments.
I would have preferred just using stations with very long unbroken records, such as the Central England record, regardless of how few of these stations there are.
After all, CO2 should be doing its “work” without fear or favour all over the world, in all seasons and during all natural cycles such as ENSO, the PDO and the AMO. If we have just a handful of reliable records of 60 years or longer, that should be enough to give us a good indication of what’s happening to global Ts.
An indication is the best we can hope for with the current measuring systems in place.
P.S. Sharperoo, your comments are getting really, really tiresome.

sharper00
February 11, 2011 9:48 am

S Courtney
“Your comments to Anthony Watts at February 11, 2011 at 8:15 am and February 11, 2011 at 8:39 am are offensive in the extreme. You owe him an apology.”
I don’t believe Anthony needs you to be offended on his behalf. No offence was intended and none was apparently taken.
“Anthony Watts had told you that “We have a paper in peer review”. In other words, his demonstration of the “problem” is submitted for publication, so he HAS demonstrated a “problem”. “
When the paper is published we’ll see what’s up with that (a little joke there, hope you enjoyed it).
REPLY: Oh, I’m plenty offended, but I tried to be polite. I’ve stuck my neck out, done the work, recruited co-authors, argued the science in reviews, and put my name to my words. You on the other hand, snipe from the shadows, contributing nothing. That’s the real joke, and it’s on you. – Anthony

Elizabeth
February 11, 2011 9:48 am

Having done more reading on the Berkeley site I found: “The Berkeley Earth Surface Temperature study has been organized under the auspices of the non-profit Novim Group.”
Link to the Novim group page, http://www.novim.org/
Scan the page and you see, “Despite efforts to stabilize CO2 concentrations, it is possible that the climate system could respond abruptly with catastrophic consequences” followed by extensive discussion on climate engineering projects.
The integrity of the research team notwithstanding, further inquiry into the nature of Novim Group’s mandate is warranted.

Richard111
February 11, 2011 9:48 am

From this layman’s point of view, it only needs one thermometer to prove the global temperature is rising. That thermometer must be placed in a desert far from human habitation and in an open environment. Then simply record the MINIMUM temperature. After a few years of readings it should become apparent whether the minimums are increasing, however slowly.
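Richard111’s single-station test is easy to sketch: fit a straight line to the yearly minima and look at the sign of the slope. The figures below are made up purely for illustration; only a real multi-decade record would say anything about climate.

```python
import numpy as np

# Hypothetical yearly minimum temperatures (°C) from one desert station.
years = np.arange(2000, 2011)
yearly_min = np.array([3.1, 2.8, 3.4, 3.0, 3.6, 3.3, 3.9, 3.5, 4.1, 3.8, 4.2])

slope, intercept = np.polyfit(years, yearly_min, 1)  # least-squares linear trend
print(f"trend: {slope:+.3f} °C per year")
```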

Marc77
February 11, 2011 9:53 am

If I had to do an analysis of surface temperature, I would use the information about wind. If there is a very local UHI, it should be washed away by wind. Also, if the wind is strong enough, the air will pass over a larger region in a given time, so the temperature should be more representative. I wonder how much effort has gone into understanding how to deal with the effect of wind on temperature measurement.
It is great to do the best statistical analysis, but just like computer models, statistics alone don’t get you anywhere. I guess for now we have to wait and see what they have done exactly.
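One way to act on Marc77’s suggestion would be to down-weight readings taken in calm conditions, when a local UHI is least likely to be “washed away.” The weighting scheme, the 5 m/s reference and the 0.2 floor below are assumptions for illustration, not anything the Berkeley group has described.

```python
import numpy as np

temps = np.array([12.4, 11.8, 13.1, 12.0])  # °C, co-located readings
wind  = np.array([0.5, 4.0, 1.0, 6.5])      # m/s at observation time

weights = np.clip(wind / 5.0, 0.2, 1.0)     # floor at 0.2, cap at 1.0
weighted_mean = np.average(temps, weights=weights)
plain_mean = temps.mean()
print(f"plain mean {plain_mean:.2f} °C, wind-weighted mean {weighted_mean:.2f} °C")
```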

Peter Plail
February 11, 2011 9:55 am

Richard S Courtney says:
February 11, 2011 at 9:18 am
sharper00
Well said that man. I agree with every word and am pleased that you (unlike p00) are prepared to stand by your comments with your name, not hide behind an alias.
Sharper00 was accused of being Steig on an earlier thread here. Did I miss the denial?

Jerry from Boston
February 11, 2011 10:06 am

They’re going to use 39,390 temperature stations!! Anthony already had enough trouble compiling and analyzing data on the 1,221 U.S. stations out of the 3,000 worldwide stations, detecting where the potential biases are in data collection and analysis, and showing they were low-quality stations. Now they’re adding 36,000 new stations?! From flippin’ where? And what’s the quality of the databases they’ll receive? Handwritten records? Electronic (yeah, right!)? Going back how many years? How many station changes, and how well documented are these?
Anthony, don’t get suckered like Delingpole and Monckton did recently by trusting these guys (they are from Berkeley, after all. Can you imagine what would happen to them on campus if they found global warming wasn’t as bad as thought? They’d be drawn and quartered, and survivors of the purge hounded off campus. Careers over!). My suggestion – grab from their database the data for the stations you’ve inspected during surfacestations and compare them to your database. Maybe it’ll give you info you didn’t have yet. Although I would guess that if they were going to tweak data, they wouldn’t be dumb enough to do it on the most-scrutinized 1,221-station U.S. subset of the 39,390-station dataset.
Personally, I think this whole Berkeley effort is going to degenerate into farce. If the last 30 years of surface temp records don’t follow the satellite pattern, does that nullify the surface or the satellite data? If the surface temperature record climbs faster than the satellites, are we to “re-calibrate” the satellites to higher levels? If the surface temperatures are lower than the satellites, do we tweak up the surface temperatures to match the “gold standard” satellites? Or the reverse – admit major biases in the surface temperature records and adjust them back 150 years to show lower warming over the surface thermometer record, while accepting the boost in temperature during the satellite era of the last 30 years?
One thing for sure – if this study shows a big boost in temperature over the last 150 years or simply “confirms” the AGW global warming rise, it’ll be front page in the MSM for weeks.
I think this will be a mess.

February 11, 2011 10:09 am

Looking at the methodology described at the Berkeley site, they do not even mention systematic error. Systematic error inevitably contaminates the surface temperature record. If they neglect to discuss or evaluate that error, they’ll end up producing yet another centennial global temperature anomaly trend with plenty of statistical moxie but with no physical meaning.
There is no way to remove systematic error from an old data set. One can only estimate how large it might have been, and put conservative uncertainty bars on the result.
In either case — ignoring the systematic error in the 20th century record, or adding an estimated uncertainty width — there’s no doubt that the global (or, for that matter, any local) centennial temperature anomaly trend will be no better than almost entirely spurious.
If the Berkeley group ignores that empirical truth, their product will only kick up more controversy and be yet one more entry in the obscure-the-issue sweepstakes.
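The arithmetic behind the comment above is simple: a systematic error cannot be averaged away, only estimated and carried as an uncertainty bar alongside the statistical one. A minimal sketch follows; the ±0.5 °C systematic figure and the trend value are assumptions for illustration, not published numbers.

```python
import math

trend = 0.7        # °C per century, hypothetical anomaly trend
sigma_stat = 0.05  # statistical (random) uncertainty of the fitted trend
sigma_sys = 0.5    # assumed conservative bound on systematic measurement error

# Combine in quadrature; the systematic term dominates if it is large.
sigma_total = math.sqrt(sigma_stat**2 + sigma_sys**2)
print(f"trend = {trend} ± {sigma_total:.2f} °C/century")
```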

Perry
February 11, 2011 10:15 am

stephen richards says:
February 11, 2011 at 9:42 am
Anthony
This would have been a good one to post with comments switched off. There is nothing to see, nothing to say.
…………………………………………….
I second that motion and in addition, call upon sharper00 to moderate his/her language. The report will be released in due time. Further speculation is not required.

Dave Springer
February 11, 2011 10:20 am

UC Berkeley is well known for their politically unbiased faculty rather than the decidedly libtard demolib bias amongst the faculty of most other institutions of higher learning. Finally we can be assured of getting the truth. /sarc

JEM
February 11, 2011 10:22 am

Pat Frank – and that’s where the work is going to be needed in terms of validating whatever result they come up with.
As a start, extracting slices of their data based on probable quality indicators (stability of location, population density/UHI, etc.) and comparing the trends slice by slice.
There will be argument…

richard verney
February 11, 2011 10:22 am

My initial reaction was that this is a good development, but it appears likely that this will be a missed opportunity to examine matters from scratch.
Personally, I consider the idea of a global average temperature set to be absurd. It would be more sensible to have a data set compiled on a country-by-country basis. After all, every country will be affected differently by rising temperatures, and the global distribution of rising temperatures may point to a cause behind the temperature rise that may be lost, or not be apparent, when looking at the data globally.
Further, it would be sensible to compile such a data set based only upon good-quality raw data that requires no obvious adjustment, i.e., only class 1 station data, preferably only class 1 rural data. This might mean fewer stations, but sometimes less is more. A few accurate and uncorrupted stations may better tell what is truly going on.
Of course, however they compile the data set, it should be compiled in such a way that one can do an analysis on sub-data, i.e., select only the rural data, or only the urban data, or a combination of both. Similarly, only class 1 station data, only class 2 station data, only class 3 station data, etc., and a combination of all of these.
Compiling the data set in this manner will help analyse what is going on.
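The sub-selection richard verney asks for only requires that the siting class and rural/urban flags be stored alongside the readings. A sketch in Python with pandas is below; the column names and values are invented for illustration and do not reflect the Berkeley data layout.

```python
import pandas as pd

stations = pd.DataFrame({
    "station_id": ["A1", "B2", "C3", "D4"],
    "siting_class": [1, 1, 3, 2],
    "setting": ["rural", "urban", "urban", "rural"],
    "trend_c_per_decade": [0.08, 0.21, 0.25, 0.10],
})

# Slice out only class 1 rural stations and analyse them on their own.
class1_rural = stations[(stations.siting_class == 1) & (stations.setting == "rural")]
print(class1_rural)
print("class 1 rural mean trend:", class1_rural.trend_c_per_decade.mean())
```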

February 11, 2011 10:24 am

Sharperoo, “It seems unreasonable to me to criticise them for not fixing a problem you (or anyone else) have yet to demonstrate.”
See Anthony’s post on “Surface temperature uncertainty quantified,” here. You can download a free reprint of the paper here (pdf), courtesy of the publisher.
When you figure out how to get around the systematic uncertainty in the surface air temperature record of the last 150 years, do let us know.

Jerry
February 11, 2011 10:27 am

Would someone please explain to me where he found the 39,000 stations? Sure, having more stations has got to be much better than the shoddy and manipulated GHCN data we have now, but we still need quality control. What percentage of all these stations are properly sited, etc.?

Ged
February 11, 2011 10:33 am

“Note: since there’s been some concern in comments, I’m adding this: Here’s the thing, the final output isn’t known yet. There’s been no “peeking” at the answer, mainly due to a desire not to let preliminary results bias the method. It may very well turn out to agree with the NOAA surface temperature record, or it may diverge positive or negative. We just don’t know yet.”
This is science! This is one of the most BASIC tenets of statistics! I hope people will pay greater attention to this seemingly innocuous statement, for it is part of the very core of all valid data gathering, processing, and interpretation.
Bravo for realizing and upholding this staple of the scientific method.

Jerry from Boston
February 11, 2011 10:40 am

“Alan Clark says:
February 11, 2011 at 9:17 am
Personally, I think we should also be setting up new automated stations to obtain more uniform coverage, which we can use to build 30-60 years worth of high quality new data.”
“RickA says:
February 11, 2011 at 8:09 am”
“I agree completely Rick. Individuals have the ability today to have a home weather station feeding data into their home computers that could be feeding data to a commingled base elsewhere, freely accessible by all. Ten years from now and beyond, we would have some serious raw data that would be very useful in establishing trends and verifying current projections.”
NASA has already done that. Starting a few years ago, they’ve finished putting about 114 stations around the U.S. (a few doubled up for quality control), spaced pretty equidistantly around the country, fully automated and in areas not subject to UHI, trees, whatever. Class 1 quality at each site (though their fencing looks a little weird, but that’s just me). Their website says, IIRC, that they want to collect about 30-50 years of data so they can detect reliable long-term trends in the U.S. I suspect that initial contacts by Anthony were a motivating factor.
I’ll look for the link.

Editor
February 11, 2011 10:51 am

sharper00 says:
February 11, 2011 at 8:39 am


To go back to your initial complaint about NASA and NOAA – the publication of your analysis showing there’s a problem for them to be concerned about is the starting point to them addressing it. It seems unreasonable to me to criticise them for not fixing a problem you (or anyone else) have yet to demonstrate.

Anthony has very clearly demonstrated huge problems in the siting, spacing from buildings, nature of the surroundings, and other important issues affecting many, many, many temperature stations. Nor was he the first; Roger Pielke (IIRC) demonstrated the same thing for Colorado stations a few years before. Note that these are problems according to the official guidelines for surface stations, not just some issues that Anthony or Roger made up.
Now, if NOAA and NASA could pull their heads out of their fundamental orifices and put down their models and look out the window at the real world, surely those well-documented problems with their data collection apparatus would be a, what did you call it …. oh, yes, a “starting point to them addressing it”.
And in fact, in any well-run operation, the starting point would have been the NOAA/NASA internal evaluation of the siting of their ground stations. For Anthony to have to document the ground stations is a ringing indictment of the people that you so vigorously defend. They have obviously not done their jobs. Why do you defend that?
And for you to attack him with the fatuous claim that he has not provided anything for NOAA/NASA to use as a starting point is, to put it mildly, evidence of a serious misunderstanding on your part … Anthony has done their job for them, and deserves your thanks, not your opprobrium.
w.

Jerry from Boston
February 11, 2011 10:51 am

Sorry, folks. The new stations were put in place by NOAA, not NASA, at its Climate Reference Network:
http://www.ncdc.noaa.gov/crn/#
And they’re looking for 50 years of data, not 30. Can’t wait!

GaryP
February 11, 2011 10:56 am

An open source for data and methods is to be applauded.
There still remains the problem that one cannot average intensive variables!
I have 200 ml of water at 80 °C and a liter of water at 10 °C. What is the average volume? First I add to get the total. 200 + 1000 = 1200 ml. This is okay, it is the total volume. Then I divide by two to get the average of 600 ml.
What is the average temperature? First I add to get the total. 80 + 10 = 90°C. This is the total temperature????? WattsUpWithThat!?! This is meaningless.
Now if both samples were exactly the same size and composition, then the total heat in each one could be measured by knowing the temperature. One could calculate the average heat in each one and compute the temperature if they were mixed together. Mathematically it would look like averaging the temperatures. This only works for exactly the same size and composition and no phase changes, volume changes, etc.
One cannot claim a 5°C change in bone dry Arctic air with a dew point of -50°C is the same as a 5°C change in tropical air with a dew point of +80°C. The energy change per unit mass is different and averaging the temperature changes is invalid. Don’t even think about the energy change per unit volume. Mixing identical volumes of incompressible fluids is one thing. Mixing volumes of air at different densities is different. Do you mix them reversibly or irreversibly? It makes a difference.
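For readers who want the numbers behind GaryP’s point, here is a minimal sketch, assuming plain water with equal specific heat and no phase changes. Volumes are extensive and simply add; the temperature of the mixture is a mass-weighted mean, which is quite different from the naive average of the two readings.

```python
# GaryP's two samples: 200 ml at 80 °C and 1000 ml at 10 °C.
m1, t1 = 0.200, 80.0  # kg (≈ ml of water), °C
m2, t2 = 1.000, 10.0

naive_avg = (t1 + t2) / 2                    # 45.0 °C — not a physical temperature
mix_temp  = (m1 * t1 + m2 * t2) / (m1 + m2)  # ≈ 21.7 °C after mixing

print(f"naive average: {naive_avg:.1f} °C, mixed temperature: {mix_temp:.1f} °C")
```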

February 11, 2011 11:05 am

dp says:
February 11, 2011 at 8:18 am
On the face of it this is not what I expect from the denizens of my old home town. Be prepared to learn that any 1000 randomly chosen thermometers selected from the full set and calibrated over time tell the same story as all 39,000 thermometers similarly calibrated.
#####
Be prepared to learn that any 100 randomly chosen stations tell the same story.
Heck, pick the 10 longest records and you get the same story.
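The subsampling test described above is straightforward to run once the data are released: draw random subsets of stations and compare the subset mean trend with the all-station mean. The sketch below simulates station trends purely for illustration; real values would come from the published archive.

```python
import numpy as np

rng = np.random.default_rng(0)

n_stations = 39000
station_trends = rng.normal(loc=0.07, scale=0.15, size=n_stations)  # °C/decade, simulated

all_mean = station_trends.mean()
for k in (10, 100, 1000):
    subset = rng.choice(station_trends, size=k, replace=False)
    print(f"{k:>5} stations: {subset.mean():+.3f} °C/decade (all: {all_mean:+.3f})")
```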

Oslo
February 11, 2011 11:06 am

GregP:
“Also Judith Curry is named as a member of their team.”
I don’t think this is entirely true.
I think she (JC) is genuinely trying to balance things. Of course she doesn’t subscribe to the deception and corruption of Mann and Schmidt, but still, she does not oppose their view of the coming catastrophe.
For whatever reason. Keep in mind that JC’s rise to fame came with Hurricane Katrina.
She has a chance now to be a genuine broker between the two factions, or to disappear as just another policy-driven advocate.
It is up to her.