New independent surface temperature record in the works

Good news travels fast. I’m a bit surprised to see this get some early coverage, as the project isn’t ready yet. However, since it has been announced by the press, I can tell you that this project is partly a reaction to, and result of, what we’ve learned in the surfacestations project. Mostly, though, this project is a reaction to many of the things we have been saying time and again, only to have NOAA and NASA ignore our concerns, or create responses designed to protect their ideas rather than consider whether their ideas were valid in the first place. I have been corresponding with Dr. Muller and have been invited to participate with my data; when I am able, I will say more about it. In the meantime, you can visit the newly minted web page here. I highly recommend reading the section on methodology here. Longtime students of the surface temperature record will recognize some of the issues being addressed. I urge readers not to bombard these guys with questions. Let’s “git ‘er done” first.

Note: since there’s been some concern in comments, I’m adding this. Here’s the thing: the final output isn’t known yet. There’s been no “peeking” at the answer, mainly due to a desire not to let preliminary results bias the method. It may very well turn out to agree with the NOAA surface temperature record, or it may diverge positive or negative. We just don’t know yet.

From The Daily Californian:

Professor Counters Global Warming Myths With Data

By Claire Perlman

Daily Cal Senior Staff Writer

Global warming is the favored scapegoat for any seemingly strange occurrence in nature, from dying frogs to hurricanes to drowning polar bears. But according to a Berkeley group of scientists, global warming does not deserve all these attributions. Rather, they say global warming is responsible for one thing: the rising temperature.

However, global warming has become a politicized issue, largely becoming disconnected from science in favor of inflammatory headlines and heated debates that are rarely based on any science at all, according to Richard Muller, a UC Berkeley physics professor and member of the team.

“There is so much politics involved, more so than in any other field I’ve been in,” Muller said. “People would write their articles with a spin on them. The people in this field were obviously very genuinely concerned about what was happening … But it made it difficult for a scientist to go in and figure out that what they were saying was solid science.”

Muller came to the conclusion that temperature data – which, in the United States, began in the late 18th century when Thomas Jefferson and Benjamin Franklin made the first thermometer measurements – was the only truly scientifically accurate way of studying global warming.

Without the thermometer and the temperature data that it provides, Muller said it was probable that no one would have noticed global warming yet. In fact, in the period where rising temperatures can be attributed to human activity, the temperature has only risen a little more than half a degree Celsius, and sea levels, which are directly affected by the temperature, have increased by eight inches.

Photo: Richard Muller, a UC Berkeley physics professor, started the Berkeley Earth group, which tries to use scientific data to address the doubts that global warming skeptics have raised. (Javier Panzar/Staff)

To that end, he formed the Berkeley Earth group with 10 other highly acclaimed scientists, including physicists, climatologists and statisticians. Before the group joined in the study of the warming world, there were three major groups that had released analysis of historical temperature data. But each has come under attack from climate skeptics, Muller said.

In the group’s new study, which will be released in about a month, the scientists hope to address the doubts that skeptics have raised. They are using data from all 39,390 available temperature stations around the world – more than five times the number of stations that the next most thorough group, the Global Historical Climatology Network, used in its data set.

Other groups were concerned with the quality of the stations’ data, which becomes less reliable the earlier it was measured. Another decision to be made was whether to include data from cities, which are known to be warmer than suburbs and rural areas, said team member Art Rosenfeld, a professor emeritus of physics at UC Berkeley and former California Energy Commissioner.

“One of the problems in sorting out lots of weather stations is do you drop the data from urban centers, or do you down-weight the data,” he said. “That’s sort of the main physical question.”
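For readers unfamiliar with what “down-weighting” means in practice, here is a minimal sketch in Python of the choice Rosenfeld describes. The station readings and the 0.25 urban weight are invented for illustration; they are not Berkeley Earth’s numbers or method.

```python
# Hypothetical illustration of the drop-vs-down-weight choice.
# Station readings and the urban weight are invented.
stations = [
    ("rural_1", 14.8, False),
    ("rural_2", 15.1, False),
    ("urban_1", 16.3, True),   # urban heat island pushes this reading up
]

def weighted_mean(data, urban_weight):
    """Average station temperatures, giving urban stations reduced weight.
    urban_weight=0.0 drops them entirely; 1.0 treats them like rural."""
    total = wsum = 0.0
    for _name, temp, is_urban in data:
        w = urban_weight if is_urban else 1.0
        total += w * temp
        wsum += w
    return total / wsum

print(weighted_mean(stations, 0.0))   # drop urban stations:  14.95
print(weighted_mean(stations, 0.25))  # down-weight them:     15.10
print(weighted_mean(stations, 1.0))   # full weight:          15.40
```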

Global warming is real, Muller said, but both its deniers and exaggerators ignore the science in order to make their point.

“There are the skeptics – they’re not the consensus,” Muller explained. “There are the exaggerators, like Al Gore and Tom Friedman who tell you things that are not part of the consensus … (which) goes largely off of thermometer records.”

Some scientists who fear that their results will be misinterpreted as proof that global warming is not urgent, such as in the case of Climategate, fall into a similar trap of exaggeration.

The Berkeley Earth Surface Temperature Study was conducted with the intention of becoming the new, irrefutable consensus, simply by providing the most complete set of historical and modern temperature data yet made publicly available, so deniers and exaggerators alike can see the numbers.

“We believed that if we brought in the best of the best in terms of statistics, we could use methods that would be easier to understand and not as open to actual manipulation,” said Elizabeth Muller, Richard Muller’s daughter and project manager of the study. “We just create a methodology that will then have no human interaction to pick or choose data.”

205 Comments

Dave Andrews
February 12, 2011 2:09 pm

Steven Mosher,
OK, the ‘law of large numbers’ says there is warming. No one disagrees with that; it’s what coming out of the LIA also says.
But the ‘law’ says nothing about whether the warming is unprecedented (the records being too short) or whether there is a significant AGW component. The MWP occurred, with variation, over approximately 300 years. Similarly, the LIA, again with variation during the period, lasted a similar span. There was precious little AGW involved in these events.
Why should we assume human influence is now the dominant factor in climate change?

Bigdinny
February 12, 2011 3:21 pm

So now, having read every post on this thread (and thank you for the attempt at direct answers Allen, Duster, Ged and MtK), I can safely conclude that there are no answers, only more questions. As an elected official in a small town who must make reasoned judgments in the allocation of tax dollars, it leaves me, well, cold. 🙂 As a tidbit for all you analytical mathematicians out there, our community recently discovered (somewhat rudely) that the price of wind is directly proportional to the price of fossil fuels, at least as it relates to the generation of electricity. Someday, on an appropriate thread, I’ll reveal some of the dirty little secrets regarding recycling. As Kermit the Frog once said, “It’s not easy to be green.”

rbateman
February 12, 2011 4:19 pm

Patrick Davis says:
February 11, 2011 at 8:24 am
Perhaps the group found the data that Phil Jones says got lost.
I hope it includes the missing data from my town. I can dream, can’t I?

u.k.(us)
February 12, 2011 5:05 pm

So, now that we have the surface temps figured out, we can factor in the effects of CO2.
I assume it won’t be good news.

February 12, 2011 5:49 pm

Steven Mosher, “So when the satellite measure matches the land surface record, what do you conclude about the land surface record?”
The systematic error of a PRT temperature sensor, e.g., inside a CRS shelter, has been measured, Steve. It’s about (+/-)0.5 C, and about half that for an MMTS sensor. Why does comparison with satellites tell anyone anything about the reality of empirically determined systematic errors in a surface air temperature sensor?
Apart from that, satellite temperatures from IR sensors are calibrated against buoy SST measurements, and so are no more accurate than the buoys are. Since buoy SSTs are also used to deduce or calibrate marine air temperatures, it’s not so surprising that satellite and surface temperature trends should match.
And: “when 10 years of CRN data (all pristine sites) match the ‘old’ sites that they are paired with, what do you conclude?”
Was the CRN data corrected for the systematic error impacting its own sensors? If you look at Hubbard and Lin’s 2002 paper (reference 13 in the E&E paper), you’ll see that the precision sensors in all tested shelters were systematically biased in the same direction.
In their 2004 paper, “Sensor and Electronic Biases/Errors in Air Temperature Measurements in Common Weather Station Networks,” J. Atm. Ocean. Tech. 21, 1025-1032, H&L, among other things, examined errors in USCRN sensors. They found that, “For the USCRN PRT sensor in the USCRN network, the RSS errors can reach 0.2 – 0.34 C due to the inaccuracy of CR23X datalogger…”, where “RSS” is root-sum-of-squares. These are systematic errors, not random, and do not decrement as 1/sqrt(N).
When the systematic effects are derived from the same forces as determine air temperature, it’s not surprising that anomaly trends correlate, even when they’re inaccurate. But in any case, there doesn’t seem to be much reassurance available in comparative analysis.
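A toy simulation makes the random-versus-systematic distinction concrete. This is a sketch with invented numbers (the 0.2 C shared bias is illustrative, not a measured sensor value): averaging beats down random noise roughly as 1/sqrt(N), but a bias common to every reading survives no matter how many readings are averaged.

```python
import numpy as np

rng = np.random.default_rng(42)
true_temp = 15.0      # "true" air temperature, deg C (invented)
random_sd = 0.5       # independent per-reading noise, deg C (invented)
shared_bias = 0.2     # systematic bias common to all readings (invented)

for n in (10, 1_000, 100_000):
    # Every reading carries the SAME bias plus fresh random noise.
    readings = true_temp + shared_bias + rng.normal(0.0, random_sd, n)
    err = readings.mean() - true_temp
    # The random part of the error shrinks roughly as 1/sqrt(n);
    # the 0.2 C systematic part remains however large n gets.
    print(f"n={n:6d}  mean error = {err:+.3f} C  (1/sqrt(n) = {n**-0.5:.4f})")
```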

LazyTeenager
February 12, 2011 6:07 pm

Espen says
———
Are they going to publish yet another “global mean temperature”? I’m surprised that physicists are willing to work with that kind of metric
———
Despite the moist enthalpy argument, you can’t get away from the fact that if the global heat content of the oceans, the land and the air is going up, then the global average air temp is also going to go up as well.
You’ve let yourself become distracted by a “can’t see the wood for the trees” argument.

LazyTeenager
February 12, 2011 6:14 pm

Dave Andrews says
———-
Why should we assume human influence is now the dominant factor in climate change?
———–
The reverse also applies.
Why should we assume that human influences cannot affect climate now, when only natural influences affected the climate in the past?
Remember “that was then this is now”.

LazyTeenager
February 12, 2011 6:23 pm

Chris Riley says
———–
It seems reasonable to ME to criticize them (NOAA NASA CRU etc. etc.) for pushing a draconian re-ordering of the world’s economy, one that without question would result in gargantuan increases in human misery, as a “solution” to a “problem” that they (or anyone else) have yet to demonstrate.
———-
Re gargantuan increases in human misery:
Let me guess:
1. This statement is incontrovertible
2. The economics is settled.
3. It must be true because there is a consensus of Internet bloggers.
Or maybe you are just making up a story.

February 12, 2011 6:30 pm

Steven Mosher: “Well, that’s not actually the case. The CRN are set up in a ‘paired’ configuration for a large part of the network. That means old stations are paired with new stations. That will allow for the creation of transfer functions from the new network to the old.”
Steve, how will transfer functions from a new CRN sensor presently parallel to, say, a LiG set up in a CRS screen, remove systematic error from the prior decades of LiG temperatures? Error will have varied systematically with erratic micro-climatic conditions. At best, after a decade or two of parallel measurements, you’ll get an estimate of the average bias and SD for the LiG/CRS system, relative to the CRN system. Subtracting the average bias from prior LiG temperatures will not remove the relative LiG SD. That SD will be uncertainty bars around any LiG measurements spliced onto a measured CRN trend.
If there happened to be a larger systematic variance or a different bias in the past LiG measurements than in the LiG temperatures taken in the later parallel measurements, then subtracting the new average bias may even make the older LiG temperatures less accurate. But we’d never know, and so that would produce another form of uncertainty, namely implicate uncertainty in that we’d not really know whether our correction actually corrected the older record.
Further, the LiG SD will have built into it the unmeasured systematic error from the CRN sensor. Unless, that is, a further parallel calibration temperature sensor system was put in place that is relatively impervious to solar loading and wind speed effects. That sensor would yield the accurate air temperatures that will reveal the systematic bias in the CRN sensor temperatures.
A fully critical experiment would include a pyranometer and an anemometer to independently measure radiation and wind speed, and use those plus the more accurate temperatures to obtain empirical transfer functions to correct the systematic bias out from the CRN temperatures.
So, I see the CRN-LiG parallel set-ups to be just business as usual for NCDC, where they merely want to adjust and renormalize older LiG or MMTS temperatures to line up with newer CRN temperatures. The parallel measurements won’t help them with systematic error already in the older instrumental surface air temperature record, and won’t help them with the systematic error that will also enter the newer CRN temperatures.
As I recall, both Ryan O’D and Jeff C observed that their study corrected a faulty method, but said nothing about a physically real temperature trend in Antarctica. Given that, it looks like your argument there assumes a conclusion that was not present.
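To make the bias-versus-scatter point above concrete, here is a small sketch of a parallel (paired) sensor comparison. All numbers are invented and no real CRN or LiG data are used: subtracting the estimated mean offset re-centers the older record, but the day-to-day scatter of the older sensor survives as irreducible uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)
n_days = 3650                          # ~10 years of parallel readings

# Invented "true" daily temperatures with a seasonal cycle.
true = 15.0 + 10.0 * np.sin(2 * np.pi * np.arange(n_days) / 365.25)
crn = true + rng.normal(0.0, 0.1, n_days)        # newer, tighter sensor
lig = true + 0.3 + rng.normal(0.0, 0.5, n_days)  # older: biased and noisier

diff = lig - crn
bias_hat = diff.mean()                 # estimated transfer offset
scatter = diff.std(ddof=1)             # scatter the subtraction cannot remove

corrected = lig - bias_hat
print(f"estimated bias          : {bias_hat:+.3f} C")
print(f"residual pairwise SD    : {scatter:.3f} C")
print(f"corrected mean error    : {(corrected - true).mean():+.4f} C")
print(f"corrected per-day error : {(corrected - true).std(ddof=1):.3f} C")
```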

LazyTeenager
February 12, 2011 6:30 pm

My feeling about this new project is that its leadership is extremely naive about the mishmash of attitudes that make up climate skeptic land.
I predict that if it does not give the answer that people want, it will still be attacked irrespective of the quality of the result.
Even amongst the comments here today I see people maneuvering to preemptively set up a dismissal.

February 12, 2011 6:50 pm

eadler, you wrote, “[In a] monthly average at a given station[, y]ou are not choosing N values from a random sample of temperatures. The N temperature measurements at a given station are all that there is. The average is the average with no uncertainty due to sampling.”
Daily temperatures at a given station are not constant day-by-day across the month. They may oscillate, and there will likely be a trend with a non-zero slope across the month. The average of those temperatures will be one number, it’s true. However, it is false to say that the average temperature is an accurate representation of the temperature for that month. The average has a magnitude uncertainty that communicates the variation in daily temperature across that month. Reporting the magnitude together with its sqrt(variance) is the only physically complete representation of a monthly average temperature.
That leads to the interesting point that the 30 sets of 12 months in a 30-year anomaly normal period will display a magnitude uncertainty in their 30-year monthly averages. That magnitude uncertainty should be propagated into any long-term temperature anomaly trend based on that 30-year normal, as an indication of the natural variation in temperature during the climate regime of the normal epoch. I present that calculation in my next paper, already reviewed and accepted by E&E, and it turns out to further seriously impact the meaning of the 20th century surface air temperature anomaly trend.
You wrote that, “The actual global temperature is not what we are calculating, but rather the change in temperature over time. It seems to me that the sensor errors, that you mention, will cancel when the temperature anomaly is calculated, unless there is a systematic drift over time.”
When calculating an anomaly by subtraction, the errors in the normal and the temperature propagate as their rms. They don’t subtract away. The rest of your comment about errors canceling is true only when errors are known to be random. Systematic errors are not random, and the estimated climate station measurement errors are not known to be random. Applying the statistics of random errors to measurement errors that are not known to be random is a mistake.
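A few lines of arithmetic illustrate both points. The daily temperatures and the uncertainties below are invented; the sketch only shows that a monthly mean carries a spread of sqrt(variance), and that independent uncertainties in an anomaly combine in quadrature rather than cancelling.

```python
import numpy as np

rng = np.random.default_rng(1)

# (1) A monthly mean carries a spread. Invented month: a warming trend
#     plus day-to-day weather noise.
daily = 10.0 + 0.1 * np.arange(30) + rng.normal(0.0, 2.0, 30)
print(f"monthly mean = {daily.mean():.2f} +/- {daily.std(ddof=1):.2f} C")

# (2) anomaly = T - normal. Subtraction does NOT subtract the errors away;
#     independent errors combine as sqrt(sigma_T**2 + sigma_norm**2).
sigma_T, sigma_norm = 0.3, 0.2         # invented uncertainties, deg C
sigma_anomaly = np.hypot(sigma_T, sigma_norm)
print(f"anomaly uncertainty = {sigma_anomaly:.2f} C, not {sigma_T - sigma_norm:.2f} C")
```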

February 12, 2011 7:01 pm

Lazy teenager says:
“Even amongst the comments here today I see people maneuvering to preemptively set up a dismissal.”
Classic projection.

Doug Badgero
February 12, 2011 7:36 pm

Lazy Teenager,
This project is not capable of settling the issue. The debate isn’t about what the earth’s temperature has been doing for the last 150 years. All this project can do is damage the warmist meme by showing that we haven’t warmed. I would be surprised if that were the case.

johanna
February 12, 2011 8:04 pm

“The Berkeley Earth Surface Temperature Study was conducted with the intention of becoming the new, irrefutable consensus, simply by providing the most complete set of historical and modern temperature data yet made publicly available, so deniers and exaggerators alike can see the numbers.”
———————————————————————-
What a load of cobblers. Putting together a dataset has nothing to do with creating ‘a new, irrefutable consensus’. There ain’t no such animal as a ‘new, irrefutable consensus’, anyway.
I don’t see any harm in this project, if they are as transparent as they promise to be. But, given the points made by PPs and Anthony about the quality of even the raw data, it will probably end up as a GIGO exercise. More crappy data does not mean more accurate conclusions.
Where a problem could arise is if the results of putting together a dataset are splashed across the world as somehow providing answers on a global scale. That would be several bridges too far. In fact, it is more likely that the dataset will be useful at local levels to provide pointers to the veracity of numbers being generated in particular areas.
As has been pointed out, none of this touches on causation anyway. But, as Ryan O has discovered, methodology is every bit as fraught as subsequent steps in this field.

John Brookes
February 12, 2011 8:28 pm

[Snip. We frown upon the labeling of different points of view as “deniers.” ~dbs, mod.]

Alex Heyworth
February 12, 2011 9:58 pm

Prof Muller is, IIRC, the author of Physics for Future Presidents. If his work on this project is up to the standard he displayed there, I look forward to the results.

Chris Riley
February 12, 2011 10:18 pm

Lazy teenager says (2/12/6:23 pm)
———
Re gargantuan increases in human misery:
Let me guess:
1. This statement is incontrovertible
2. The economics is settled.
3. It must be true because there is a consensus of Internet bloggers.
Or maybe you are just making up a story.
————
Consider the misery generated to date by a program that is so tiny that even its developers admit that it will have no measurable impact on the climate. I am referring to CAFE standards, not the internet kind, or the kind that should only sell coffee grown in the shade, but the program wherein the U.S government mandates the fuel mileage of motor vehicles sold in this country.
Four studies have looked at the number of deaths caused by this program, and no, the studies were not done by “Big Oil”. The studies I refer to were done by the following:
1. USA TODAY
2. Brookings
3. NAS (National Academy of Sciences)
4. NTSB (National Transportation Safety Board)
JR Dunn, writing in The American Thinker, compiled the results of these studies and published ranges in the estimated deaths from the CAFE standards to date: between 42,000 and 125,000 Americans killed as of April of last year.
CAFE standards alone have already caused what anyone but a Bolshevik would describe as “gargantuan human misery.” Imagine what a program that would materially impact CO2 concentration in the atmosphere would do.
Or maybe you are just making up a story.

Allen
February 12, 2011 10:18 pm

@Bigdinny: When we stop asking questions we stop doing science.
I don’t envy your lot as a politician. You deal in rhetoric, and as Plato so fervently argued, the truth cannot be found using it. However, as a politician you must make decisions and arguments with the information you have at hand. The problem with climate science is that it does not operate within a paradigm theory as do chemistry (Lavoisier’s combustion theory) or physics (Einstein’s theories of relativity). While there are competing theories to account for the behaviour of the climate, only one has been given the full backing of the rhetoricians, irrespective of the validity of the theory. This theory, as I have said before, has its basis in a corrupt line of scientific inquiry, so we cannot know what is true by using this theory.
I think that what we do know about climate is presently so incomplete that we cannot discern man made effects, much less the magnitude of those effects. So rather than hitching political fortunes to global climate dogma the prudent politician should put government resources to use in ways that produce direct benefit for his constituents. Wind farms, as you have found out, are dubious “investments”.

February 13, 2011 1:48 am

“…Global warming is real, Muller said, but both its deniers and exaggerators ignore the science in order to make their point…”
Yes, global warming IS real, and nobody really denies that.
The question all along has been: “Just exactly how much warming have we seen, and is this warming outside the bounds of natural warming in the past?”
Several scientists have questioned the “re-purposing” of thermometers intended for daily readings to provide a climate record.
Over the years, the moves, changes in equipment, UHI, encroachment on the thermometers and other things put possible errors into the system.
Add to that the adjustments, dropping of stations, 1200km smoothing, extrapolation to account for areas of no data, classification of rural or not based on nightlights, refusal of scientists to simply tell people which sites they used and how they processed that data, use of different averaging periods, etc – and you see how they’ve managed to muddy the original question.
And we haven’t even TOUCHED the whole idea of an “anomaly”, especially when we don’t know what “normal” is.
It seems the only use we see of the processed data is as a basis to declare “warmest month since whenever”.
Unfortunately, the “exaggerators” have already attacked. They’ve looked at the list of donors and determined the effort to be worthless.
That is, unless the data confirms their theory. Then, they’ll fall over themselves to deem the study a success.
Can’t wait for the data to come in…

Bill Illis
February 13, 2011 5:00 am

If you want to watch your historical temperature series get changed every month by GISS and Hadcrut, sign yourself up at:
http://www.changedetection.com/
Over the last two months, for example, over 50% of the annual numbers in this familiar chart were changed:
http://data.giss.nasa.gov/gistemp/graphs/Fig.D.gif
Over the last 11 years, the trend in US temperatures has been adjusted upward by about 0.5 C, which is close to the total increase in the new adjusted numbers.
http://img844.imageshack.us/img844/3259/gistempuschanges11years.png

February 13, 2011 10:31 am

u.k.(us) says:
So, now that we have the surface temps figured out, we can factor in the effects of CO2.
I assume it won’t be good news.

I was about to ask about this, too. No matter WHAT they find, it doesn’t demonstrate ANY forcing. In fact, this particular project won’t even show any correlations.

February 13, 2011 11:06 am

As Kwik notes http://wattsupwiththat.com/2011/02/11/new-independent-surface-temperature-record-in-the-works/#comment-597152:
The rural vs urban temperature difference in the GISS data is so easily identified by even a 10-year-old, it is a wonder that the re-analysis needs doing at all. The correlation with global temperature increase with decreasing station count is as easily seen with other comparisons within the GISS official data. The divergence of global temperatures between land stations and satellite data is similarly easy to see. An objective review along these lines of the current data, presented to Congress as a challenge to requested funding based on (false) claims seems straightforward and simple.
Why is it that what we see posted in such clear and simple graphs has no apparent credibility and use for the Inhofes who wish to expose the CAGW fantasy?
A series of about 4 graphs seems to show it clearly. And that is the ADJUSTED data. What is wrong, technically, with these comparisons?

eadler
February 13, 2011 7:22 pm

LazyTeenager says:
February 12, 2011 at 6:30 pm

My feeling about this new project is that its leadership is extremely naive about the mishmash of attitudes that make up climate skeptic land.
I predict that if it does not give the answer that people want, it will still be attacked irrespective of the quality of the result.
Even amongst the comments here today I see people maneuvering to preemptively set up a dismissal.

You may call yourself lazy, but you seem to be an astute observer of mankind.

eadler
February 13, 2011 7:30 pm

Doug Proctor says:
February 13, 2011 at 11:06 am
As Kwik notes http://wattsupwiththat.com/2011/02/11/new-independent-surface-temperature-record-in-the-works/#comment-597152:
The rural vs urban temperature difference in the GISS data is so easily identified by even a 10-year-old, it is a wonder that the re-analysis needs doing at all. The correlation with global temperature increase with decreasing station count is as easily seen with other comparisons within the GISS official data. The divergence of global temperatures between land stations and satellite data is similarly easy to see. An objective review along these lines of the current data, presented to Congress as a challenge to requested funding based on (false) claims seems straightforward and simple.
Why is it that what we see posted in such clear and simple graphs has no apparent credibility and use for the Inhofes who wish to expose the CAGW fantasy?
A series of about 4 graphs seems to show it clearly. And that is the ADJUSTED data. What is wrong, technically, with these comparisons?

There is not much divergence in the global average temperature anomaly between satellite observations and the 3 major temperature station databases.
http://tamino.wordpress.com/2010/12/16/comparing-temperature-data-sets/
When the data beginning in 1980 is analysed using the same baseline years for temperature, the graphs correspond quite well.
http://tamino.files.wordpress.com/2010/12/5t12.jpg
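For readers wondering what “using the same baseline years” does mechanically, here is a sketch. The two series are invented stand-ins for a surface and a satellite record; re-expressing each relative to its own mean over a common reference window removes the constant offset between them without touching the trend.

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1980, 2011)
signal = 0.017 * (years - 1980)        # shared underlying trend (invented)

surface_like = signal + 0.10 + rng.normal(0, 0.05, years.size)
satellite_like = signal - 0.25 + rng.normal(0, 0.05, years.size)

def rebaseline(series, yrs, start=1981, end=2000):
    """Subtract the series' own mean over the common reference window."""
    ref = (yrs >= start) & (yrs <= end)
    return series - series[ref].mean()

a = rebaseline(surface_like, years)
b = rebaseline(satellite_like, years)
print(f"mean separation before: {np.abs(surface_like - satellite_like).mean():.3f} C")
print(f"mean separation after : {np.abs(a - b).mean():.3f} C")
```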

Espen
February 14, 2011 7:44 am

LazyTeenager says:

Despite the moist enthalpy argument, you can’t get away from the fact that if the global heat content of the oceans, the land and the air is going up, then the global average air temp is also going to go up as well.

You just don’t get it, do you? The global average air temperature may in theory drop even if the heat content of the atmosphere rises (for instance if the rise occurred just in already warm areas, while colder and drier areas got cooler).
(Besides, ocean heat content has not increased at all since we started to get better measurements)