New independent surface temperature record in the works

Good news travels fast. I'm a bit surprised to see this get some early coverage, as the project isn't ready yet. However, since it has been announced in the press, I can tell you that this project is partly a reaction to, and a result of, what we've learned in the surfacestations project. Mostly, though, it is a reaction to the many things we have been saying time and again, only to have NOAA and NASA ignore our concerns, or craft responses designed to protect their ideas rather than consider whether those ideas were valid in the first place. I have been corresponding with Dr. Muller and have been invited to participate with my data; when I am able, I will say more about it. In the meantime, you can visit the newly minted web page here. I highly recommend reading the section on methodology here. Longtime students of the surface temperature record will recognize some of the issues being addressed. I urge readers not to bombard these guys with questions. Let's "git 'er done" first.

Note: since there's been some concern in the comments, I'm adding this. Here's the thing: the final output isn't known yet. There's been no "peeking" at the answer, mainly due to a desire not to let preliminary results bias the method. It may very well turn out to agree with the NOAA surface temperature record, or it may diverge positive or negative. We just don't know yet.

From The Daily Californian:

Professor Counters Global Warming Myths With Data

By Claire Perlman

Daily Cal Senior Staff Writer

Global warming is the favored scapegoat for any seemingly strange occurrence in nature, from dying frogs to hurricanes to drowning polar bears. But according to a Berkeley group of scientists, global warming does not deserve all these attributions. Rather, they say global warming is responsible for one thing: the rising temperature.

However, global warming has become a politicized issue, largely becoming disconnected from science in favor of inflammatory headlines and heated debates that are rarely based on any science at all, according to Richard Muller, a UC Berkeley physics professor and member of the team.

“There is so much politics involved, more so than in any other field I’ve been in,” Muller said. “People would write their articles with a spin on them. The people in this field were obviously very genuinely concerned about what was happening … But it made it difficult for a scientist to go in and figure out that what they were saying was solid science.”

Muller came to the conclusion that temperature data – which, in the United States, began in the late 18th century when Thomas Jefferson and Benjamin Franklin made the first thermometer measurements – was the only truly scientifically accurate way of studying global warming.

Without the thermometer and the temperature data that it provides, Muller said it was probable that no one would have noticed global warming yet. In fact, in the period where rising temperatures can be attributed to human activity, the temperature has only risen a little more than half a degree Celsius, and sea levels, which are directly affected by the temperature, have increased by eight inches.

Photo: Richard Muller, a UC Berkeley physics professor, started the Berkeley Earth group, which tries to use scientific data to address the doubts that global warming skeptics have raised. Javier Panzar/Staff

To that end, he formed the Berkeley Earth group with 10 other highly acclaimed scientists, including physicists, climatologists and statisticians. Before the group joined in the study of the warming world, there were three major groups that had released analysis of historical temperature data. But each has come under attack from climate skeptics, Muller said.

In the group’s new study, which will be released in about a month, the scientists hope to address the doubts that skeptics have raised. They are using data from all 39,390 available temperature stations around the world – more than five times the number of stations that the next most thorough group, the Global Historical Climatology Network, used in its data set.

Other groups were concerned with the quality of the stations’ data, which becomes less reliable the earlier it was measured. Another decision to be made was whether to include data from cities, which are known to be warmer than suburbs and rural areas, said team member Art Rosenfeld, a professor emeritus of physics at UC Berkeley and former California Energy Commissioner.

“One of the problems in sorting out lots of weather stations is do you drop the data from urban centers, or do you down-weight the data,” he said. “That’s sort of the main physical question.”

Global warming is real, Muller said, but both its deniers and exaggerators ignore the science in order to make their point.

“There are the skeptics – they’re not the consensus,” Muller explained. “There are the exaggerators, like Al Gore and Tom Friedman who tell you things that are not part of the consensus … (which) goes largely off of thermometer records.”

Some scientists who fear that their results will be misinterpreted as proof that global warming is not urgent, such as in the case of Climategate, fall into a similar trap of exaggeration.

The Berkeley Earth Surface Temperature Study was conducted with the intention of becoming the new, irrefutable consensus, simply by providing the most complete set of historical and modern temperature data yet made publicly available, so deniers and exaggerators alike can see the numbers.

“We believed that if we brought in the best of the best in terms of statistics, we could use methods that would be easier to understand and not as open to actual manipulation,” said Elizabeth Muller, Richard Muller’s daughter and project manager of the study. “We just create a methodology that will then have no human interaction to pick or choose data.”

February 11, 2011 10:14 pm

Let me add, by the way, that the assigned uncertainties in the surface station CRN rating key, which Anthony is assessing for his paper, represent guesstimated systematic errors and are statistically analogous to the (+/-)0.2 C "reading error" guesstimate of Folland et al., 2001.
The CRN keys likewise fall under Case 3b in my paper. That means the CRN key uncertainties propagate into an anomaly average as s = sqrt{[sum over N of (CRN key)^2]/(N-1)}, and will end up producing a large uncertainty in any average air temperature anomaly time series.
When all is said and done, there will almost certainly be no way to avoid the conclusion that the current instrumental surface air temperature record is pretty much climatologically useless; likely any trend smaller than about (+/-)1 C will be lost under the uncertainty bars.
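For illustration, here is a minimal Python sketch of that propagation; the per-station ratings are invented placeholders, not real survey data, and the root-sum-square form simply follows the expression quoted above.

# Sketch: propagate per-station CRN-key systematic uncertainties into an
# anomaly average using the root-sum-square form quoted above.
# The ratings below are illustrative placeholders, not real station data.
import math

crn_key_uncertainty = [1.0, 1.0, 2.0, 5.0, 1.0, 2.0]  # deg C, one entry per station

N = len(crn_key_uncertainty)
# s = sqrt( [sum over N of (CRN key)^2] / (N - 1) )
s = math.sqrt(sum(u ** 2 for u in crn_key_uncertainty) / (N - 1))

print(f"Propagated systematic uncertainty in the anomaly average: +/-{s:.2f} C")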

AlanG
February 11, 2011 10:19 pm

This is what you get when people try to do 'science' while playing with a computer. 5 x garbage is still garbage. The only way they can process this much data is with no quality control or calibration. As most thermometers are in built-up areas (where people can get to them), the larger data set will be WORSE. It's a con job.
I’m going to wait for Anthony’s paper.

February 11, 2011 10:27 pm

“Sadly, in my view, the historical surface temperature record is simply too crude to have conclusive scientific value. I remain inclined to place far greater weight on satellite data – the UAH data in particular – and very little weight on even properly analyzed historical surface temperature data.”
So when the satellite measure matches the land surface record, what do you conclude about the land surface record?
And when 10 years of CRN data ( all pristine sites) match the “old” sites that they are paired with, what do you conclude?
Do you think the LIA existed? Why? On what evidence? Is that "evidence" as accurate or as highly sampled as the evidence from 1900-2010?

February 11, 2011 10:31 pm

“Some of the new CRN stations may include that capacity, but monitoring now won’t do anything for systematic inaccuracies in the prior 150 years of the instrumental record.”
Well, that's not actually the case. The CRN is set up in a "paired" configuration for a large part of the network. That means old stations are paired with new stations, which will allow for the creation of transfer functions from the new network to the old.
Look at it this way. You accept the conclusions of O'Donnell 2010. I do. In that paper the new data from the satellites was used to calibrate and infill the old land data. You'll have the same kind of procedure with CRN and the old network. Already we know that CRN does not deviate from the old network.
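As a rough illustration of what such a transfer function might look like, here is a minimal sketch; it assumes a single co-located old/new pair with a short overlap of monthly means, and the numbers are invented rather than taken from any real station.

# Sketch: derive a simple linear transfer function between a paired old
# station and its co-located CRN station from their overlap period.
# The monthly means below are invented purely for illustration.
import numpy as np

old_station = np.array([10.2, 11.5, 14.1, 17.8, 21.0, 23.4, 22.9, 19.6, 15.3, 11.0])
crn_station = np.array([10.0, 11.2, 13.9, 17.5, 20.6, 23.0, 22.5, 19.3, 15.0, 10.8])

# Fit crn = a * old + b over the overlap; a and b define the transfer function.
a, b = np.polyfit(old_station, crn_station, 1)
print(f"transfer function: crn = {a:.3f} * old {b:+.3f}")

# Apply it to an earlier old-network reading to estimate what CRN would have read.
pre_crn_reading = 12.7
print(f"adjusted estimate: {a * pre_crn_reading + b:.2f} C")

A real analysis would of course use much longer overlaps, seasonal stratification, and many station pairs.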

John Robertson
February 11, 2011 11:11 pm

If computational horsepower is needed, consider using the same shared engine as SETI@Home – distributed computing. Of course, there is at least one climate model already running this way (since around 2000), called Climate Prediction.

kwik
February 12, 2011 12:32 am

eadler says:
February 11, 2011 at 4:06 pm
“In addition, when urban stations were dropped from the global data set used by GISS, it made no difference in the trend. This result was reported in the peer reviewed literature.”
Oh really?
You can Peer review this one;

rukidding
February 12, 2011 1:30 am

What is the point? Even if you use every temperature measuring device in the world, there will still be large areas of the Earth's surface that are not monitored. So until we have a thermometer on every square metre of Earth, there will still be room for a fiddle factor.
And what does it mean anyway? I think the current average is somewhere around 14.5 C. Is that the average where I live? No. Is it the average where you live? Probably not.
If I took all the stations inside the Arctic Circle and averaged them, would that be the world temperature? No. If I took all the stations within 1 degree north or south of the Equator and averaged them, would that be the world temperature? No.
So why would a whole heap of stations placed randomly around the world be any different?

John Marshall
February 12, 2011 1:41 am

Sounds like a victory for common sense. But will we get the real data?
Well done Anthony and keep at ’em!

February 12, 2011 1:53 am

Steven mosher, you write:
“So when the satellite measure matches the land surface record what do you conclude about the land surface record?”
I agree with your point.
And in this particular context, check out the global comparison of the UAH-GISS divergence coupled to logarithmic population change:
http://joannenova.com.au/2011/02/the-urban-heat-island-effect-could-africa-be-more-affected-than-the-us/
😉

February 12, 2011 1:56 am

Steven Mosher, here is the logarithmic illustration of the UAH-GISS divergence versus population growth:
http://hidethedecline.eu/media/UHIINDICATOR/fig14.jpg

BigOil
February 12, 2011 2:01 am

I support the suggestion of a previous post that it would be far more useful to have temperature records by region.
Many posters have pointed out that the world average is meaningless. Quite a clever device really.
The other useful thing would be to show a graph of actual temperature – 16.5 degrees or whatever – rather than the anomaly, as the anomaly distorts the scale of change to look much worse than it is.

HR
February 12, 2011 3:04 am

Given that we have independent surface temperature data from satellites and a reliable OHC data collection system, isn't this all a bit of a backward step?

Simon Barnett
February 12, 2011 3:08 am

@Feynman (et al. disparaging this effort)
But global warming – i.e. the fact that our planet warmed in the 20th century – is not in dispute.
The point is we just don't know by how much, because of the politicisation of the existing datasets. And we have no idea what caused it – the measured increase may be attributable to natural variation, the UHIE, changes in land usage, solar variation, a combination of the above or something else entirely.
We don't even really know if warming has ceased in the last 15 years, because of "adjustments" made to the existing datasets to make every successive year "the hottest eva!!!".
A documented, open-source temperature record – one that is based on the numbers and not the politics, and one where the math can be independently verified by both sides – is a vital starting point in _scientifically_ answering these questions, and I see this as a very positive development.
If we cannot even properly quantify the temperature change we have no business trying to attribute that change to human activity, or to anything else. Attributing changes in temperature to any one factor in a vast and complex climatic system of which we currently have only limited understanding – carbon for example – before we even have agreement on how much the temperature changed is junk science conducted largely by Malthusian activists.
Scientifically you just can’t get there from here.

Keith
February 12, 2011 3:36 am

Anthony:
I looked at the description of methodology. Will that methodology remove records where the sign of the temperature is reversed because the data were input or transcribed without the M for minus signs? See: GISS & METAR – dial "M" for missing minus signs: it's worse than we thought. http://wattsupwiththat.com/2010/04/17/giss-metar-dial-m-for-missing-minus-signs-its-worse-than-we-thought/

February 12, 2011 5:35 am

I'm encouraged by this effort and can't wait until they release the data, as I have a methodology using absolute temperatures that is different from the normal way of measuring temps, and more of this sort of data will suit it perfectly.
The way I look at the temperature recordings is that they are all in error, so the more data that can be collected, the more the errors are reduced.
We see from the brouhaha over Steig that two stations 2.5 km apart have a 2 deg difference in recorded temps. It could be thermometer error, it could be that it really is different by that amount; who knows.
500 metres from a temp station the temperature will be different; half an hour after the recording is made the temperature will be different. Yet all we have are these snapshots in time and place of the temperature.
The trick is to find the things that follow a normal distribution of error, and those that don't. For instance, instrument accuracy is said to be +/-2 deg. I think it would be right to expect that there are just as many errors upwards as downwards, so they follow a normal distribution.
In summer, it would be expected that the temperature would be above the average of the max and min for longer than it would be below. However, that is balanced by longer spells of lower temperatures in winter, so it could be said to follow a normal distribution over the course of the seasons.
UHI does not follow a normal distribution; neither does the March of the Thermometers, nor the lowering in average elevation of the temperature stations.
These are three of the major ways (and no doubt there are others) in which the data has become less useful than it could be. They need to be quantified and adjusted for, and the more data we have, the better we can test those things and generate accurate results. Also, the use of anomalies results in a huge loss of information, in my view.
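To make that distinction concrete, here is a minimal Monte Carlo sketch with made-up numbers: symmetric +/-2 deg instrument errors shrink as more readings are averaged, while a one-sided bias such as UHI does not.

# Sketch: averaging beats down symmetric (zero-mean) instrument error,
# but leaves a systematic one-sided bias (e.g. UHI) untouched.
# All numbers are illustrative, not real station statistics.
import random

random.seed(42)
true_temp = 15.0          # deg C, the "real" value we are trying to recover
instrument_sigma = 2.0    # symmetric instrument error, roughly +/-2 deg
uhi_bias = 0.5            # one-sided warm bias at affected stations

for n_readings in (10, 100, 10000):
    readings = [
        true_temp + random.gauss(0.0, instrument_sigma) + uhi_bias
        for _ in range(n_readings)
    ]
    mean = sum(readings) / n_readings
    print(f"N={n_readings:>6}: mean error = {mean - true_temp:+.3f} C")
# The mean error converges toward +0.5 C (the bias), not toward zero.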
Remember, Anthony has been involved in this, and if for no other reason than as a mark of respect to him, we should await the release of the data before saying things we may regret later.

John Brookes
February 12, 2011 6:26 am

Well, this is a rather exciting development! But what a lot of work it will be. Temperature records, like any records, have mistakes in them. There will be so many subtle problems to fix. For example, how do you handle the case where a weather station is moved? A move of a few kilometres from the coast can dramatically affect the temperatures. How do you take this into account?
I’m going to assume that we have skillful people doing this, and that they will manage to make sense out of the vast amounts of temperature data, and get meaningful results out. I have a reasonable faith that clever people can do miracles!
Attitudes to this new study seem to be splitting skeptics into those who think the science is tractable and worth pursuing, and those who think it's all too hard and we should just make our minds up without any evidence.

R. de Haan
February 12, 2011 8:48 am

No matter the outcome, this is a great initiative. Thanks

kforestcat
February 12, 2011 9:01 am

Dear steven mosher
Where you state:
“And when 10 years of CRN data ( all pristine sites) match the “old” sites that they are paired with, what do you conclude?”
Assuming the Climate Reference Network (CRN) sites in question were co-located with the older surface temperature stations, I would conclude the "pristine" sites were accurate to within their calibration range at the time the data were compared – based on independent verification.
Note, however, that my understanding is that the CRN data is obtained from land-based "automated instrument package[s], transmitted to a GOES satellite which in turn transmits the data to Wallops Island, VA." (See http://www.data.gov/geodata/E5110AB7-6A2A-7705-9B63-CBDEDA02DFA5) If I am in error, and the CRN data represents temperature data collected directly by satellite and not land-based data collected by satellite, please let me know – your knowledge in this area being better than mine.
Even without the land-based CRN or "pure" satellite data, I would normally be inclined to "believe it likely" that scientific readings at "pristine" sites were accurate during periods prior to the independent verification – under a blanket assumption that the site's QA/QC procedures were followed… Unfortunately, I have lost all confidence in NOAA's QA/QC program for surface temperature stations.
That said, I would not and could not "conclusively" state the stations' pre-comparison data were accurate… unless I had access to reliable QA/QC data showing consistent and independent verification of the source instrument readings.
Further, if the inherent accuracy of the source instrument was beyond the range required to answer the climate change question at hand, then I would be forced to conclude that I could not discern a usable result for that purpose.
My comment was not intended to suggest that all surface temperature station readings have no scientific value. Rather, there is not a sufficient number of reliable "pristine" stations available throughout the world from which one can draw a reliable conclusion about the world "surface" temperature… or even to assume a difference from an arbitrarily set "normal" temperature.
Consequently, in my view, while the historical record may provide an "indicator" to "suggest" past events, the data is simply not reliable enough, nor available in sufficient quantity, to draw a firm conclusion about the "world temperature". (Assuming, as a side issue, said number has any meaning.)
In conclusion, where surface temperature data can be verified as reliable and is of sufficient quantity to draw a specific conclusion, I have no problem accepting the results. I am simply not convinced this is the case.
Between gentlemen, recognizing you appear to have a divergent view: do you come to different conclusions from the same facts? Or do you differ with my view of the reliability, quality, and quantity of the data available? What reasoning divides us?
No hostile intent or sarcasm implied; I do value your opinion.
Regards, Kforestcat

eadler
February 12, 2011 9:06 am

Pat Frank says:
February 11, 2011 at 7:18 pm
eadler, see my paper, here (free pdf download).
I’m talking of systematic error in the temperature record due to problems at the instrumental level. Inaccuracies enter field-measured temperatures because of solar loading on sensor shields and wind speed effects, which cause the sensor to record something other than the true air temperature.

I think you are making a logical error in this paper, in Case 2, Sec. 2.2. You claim that for a given station, the variation in the actual temperature, s, contributes to uncertainty in the monthly average, in addition to the measurement noise. This is incorrect. The average of the real temperature has no uncertainty as a result of the real variation. If you were choosing N temperature samples at random from an infinite sample, then the average that you get would have the statistical uncertainty that you state, even when the measurements were perfectly accurate. But this is not what applies to the monthly average at a given station. You are not choosing N values from a random sample of temperatures. The N temperature measurements at a given station are all that there is. The average is the average, with no uncertainty due to sampling.
I don’t have access to the references you cite, and am not acquainted with the terminology you use, so I can’t comment on the details of the other aspects of your analysis of temperature uncertainty. I will have to wait and see what the climate science community makes of it.
The actual global temperature is not what we are calculating, but rather the change in temperature over time. It seems to me that the sensor errors you mention will cancel when the temperature anomaly is calculated, unless there is a systematic drift over time.
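A minimal sketch of that last point, with invented numbers: a constant sensor offset drops out of the anomaly, while an offset that drifts over time shows up as a spurious trend.

# Sketch: anomalies (reading minus the station's own baseline) remove a
# constant sensor bias, but not a bias that drifts over time.
# Temperatures and biases below are invented for illustration.
true_temps = [15.0] * 10                       # a flat "true" climate, deg C
constant_bias = [0.8] * 10                     # fixed sensor offset
drifting_bias = [0.1 * i for i in range(10)]   # offset growing each year

def anomalies(biases):
    readings = [t + b for t, b in zip(true_temps, biases)]
    baseline = sum(readings[:5]) / 5           # baseline from the same instrument
    return [r - baseline for r in readings]

print("constant bias anomalies:", [round(a, 2) for a in anomalies(constant_bias)])
print("drifting bias anomalies:", [round(a, 2) for a in anomalies(drifting_bias)])
# The first series stays at zero; the second shows a spurious warming trend.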

Chris Riley
February 12, 2011 9:49 am

“It seems unreasonable to me to criticise them for not fixing a problem you (or anyone else) have yet to demonstrate.” Sharper00
It seems reasonable to ME to criticize them (NOAA NASA CRU etc. etc.) for pushing a draconian re-ordering of the world’s economy, one that without question would result in gargantuan increases in human misery, as a “solution” to a “problem” that they (or anyone else) have yet to demonstrate.

Allen
February 12, 2011 10:44 am

@Bigdinny, Ged, MtK,
Thank you for asking the question BD and for the responses. I too am a “lurker” and the bee in my bonnet has always been about the integrity of the scientific inquiry that led to this fading AGW alarmism. I really took an interest when Climategate broke, when emails seemed to indicate that the peer-review process was being actively corrupted by a small number of scientists. More recently, as you might have seen here at WUWT, there is more evidence of peer-review corruption with the Steig/O’Donnell affair.
In summary, the atmosphere has been warming since the last ice age. But some would have us believe, based on a corrupted scientific inquiry process, that our consumption of fossil fuels and the consequent emission of CO2 is catastrophically exacerbating the warming trend. It follows that we have it in our power to reverse the trend by reducing our consumption of fossil fuels.
If the scientific inquiry process IS corrupt, then how can we know the true causes behind the warming trend? For me, the critique starts there, with an inquiry about the scientific process itself.

DT UK
February 12, 2011 1:14 pm

The lead scientist of this new group, Robert Rohde, has, I believe, been an administrator of Wikipedia since 2005. Using the name Dragons Flight, he has been pretty 'active', mainly in climate-related topics.
Look him up; his style, while not as obvious as one William M. Connolley's, is nonetheless… well, make your own mind up.
BTW I predict this team will find even more warming than had been previously found

eadler
February 12, 2011 1:25 pm

kwik says:
February 12, 2011 at 12:32 am
eadler says:
February 11, 2011 at 4:06 pm
“In addition, when urban stations were dropped from the global data set used by GISS, it made no difference in the trend. This result was reported in the peer reviewed literature.”
Oh really?
You can Peer review this one;

I don't know the nature of the data that the family in your video was accessing. If the data was not homogenized, a UHI effect will be detected. It is a real effect. Before climate scientists use the data, it is homogenized to account for station moves, equipment changes and abrupt temperature changes due to environment. The result is that once this is done, no difference between urban and rural data sets can be detected.
http://www.ncdc.noaa.gov/oa/wmo/ccl/rural-urban.pdf
All analyses of the impact of urban heat islands (UHIs) on in situ temperature observations suffer from inhomogeneities or biases in the data. These inhomogeneities make urban heat island analyses difficult and can lead to erroneous conclusions. To remove the biases caused by differences in elevation, latitude, time of observation, instrumentation, and nonstandard siting, a variety of adjustments were applied to the data. The resultant data were the most thoroughly homogenized and the homogeneity adjustments were the most rigorously evaluated and thoroughly documented of any large-scale UHI analysis to date. Using satellite night-lights–derived urban/rural metadata, urban and rural temperatures from 289 stations in 40 clusters were compared using data from 1989 to 1991. Contrary to generally accepted wisdom, no statistically significant impact of urbanization could be found in annual temperatures. It is postulated that this is due to micro- and local-scale impacts dominating over the mesoscale urban heat island. Industrial sections of towns may well be significantly warmer than rural sites, but urban meteorological observations are more likely to be made within park cool islands than industrial regions.
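Homogenization itself is a large topic, but a heavily simplified sketch of the underlying idea looks something like this; it assumes a single candidate station, a single well-correlated neighbor, and invented data, and real algorithms such as NOAA's pairwise method are far more involved.

# Heavily simplified sketch of pairwise homogenization: look for a step
# change in the candidate-minus-neighbor difference series and remove it.
# All series below are invented for illustration.
import numpy as np

candidate = np.array([14.8, 15.0, 14.9, 15.1, 16.1, 16.0, 16.2, 15.9])  # possible station move after year 4
neighbor = np.array([14.9, 15.1, 14.8, 15.0, 15.0, 14.9, 15.1, 14.8])

diff = candidate - neighbor

# Pick the breakpoint that maximizes the jump in the mean of the difference series.
best_k, best_jump = 1, 0.0
for k in range(1, len(diff)):
    jump = diff[k:].mean() - diff[:k].mean()
    if abs(jump) > abs(best_jump):
        best_k, best_jump = k, jump

# Shift the earlier segment so the two halves line up with the later record.
adjusted = candidate.copy()
adjusted[:best_k] += best_jump
print(f"breakpoint after index {best_k - 1}, step of {best_jump:+.2f} C")
print("adjusted candidate:", np.round(adjusted, 2))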

Dave Springer
February 12, 2011 1:33 pm

steven mosher says:
February 11, 2011 at 11:05 am
“Be prepared to learn that any 100 randomly chosen tell the same story.
heck, pick the 10 longest records and you get the same story.”
That’s what happens when people won’t let facts get in the way of a story.
What do you think the story would say if we took the 10 longest rural records and used only raw data with no adjustments?
The problem is the instrument record isn't accurate enough, long enough, or global enough to pull such a small signal out of the noise of the past 130 years. Then, because that can't tell a credible story, they try to manipulate, extrapolate, interpolate, adjust, and otherwise massage the poor data to make it better. Once you commit to massaging poor data like that, with statistical techniques and unverifiable quality assumptions, you can make it say whatever you want it to say, which is why there's the trite expression "Lies, Damned Lies, and Statistics", which came from a popular book by the same title.
Anthony Watts added in this thread “global warming is real” and only the magnitude is in question.
Not quite, Anthony. You rely for that on the satellite record, which is short (32 years) and not without its own problems, questionable assumptions, and assorted other artifacts, to say nothing of the fact that it does not measure the air temperature directly with a thermometer 4 feet off the ground inside a Stevenson screen, but rather makes an indirect measurement of radiation that has travelled through kilometers of atmosphere and has to be adjusted and transformed with mad skillz to get an actual temperature out of it. The number of revisions over the past 32 years to how the satellite data is and was massaged is legion, and the sad fact is the satellite record is still the best temperature we have despite all the problems with it.
So when you say “global warming is real” you’re manufacturing a factual statement. If you said “global warming appears to be real over the past few decades” I would have no argument with it but you didn’t – you stated it is a fact when it is no such thing.

Bill Illis
February 12, 2011 1:51 pm

I, for one, am very interested in seeing the raw data from all 39,000 sites across the world.
As long as they outline how many are in cities, UHI will not be a problem since we can deduce how much of whatever increase is just UHI.
It doesn’t matter what the results are.
We have the right to have access to all the raw data on this very important issue (and I prefer to see all of it – not just the data NCDC or GISS or the Hadley Centre have picked out for me and made all kinds of unknown adjustments to).