A Considered Critique of Berkeley Temperature Series

Guest post by Jeff Id

I will leave this alone for another week or two while I wait for a reply to my emails to the BEST group, but there are three primary problems with the Berkeley temperature trends which must be addressed if the result is to be taken seriously. Now by seriously, I don’t mean by the IPCC, which takes all alarmist information seriously, but by the thinking person.

Here are the points:

1 – Chopping of data is excessive. They detect steps in the data, chop the series at the steps, and reassemble the pieces. These steps wouldn’t be so problematic if we weren’t trying to detect hundredths of a degree of temperature change per year. Considering that even a balanced elimination of up and down steps, in any algorithm I know of, will detect more steps running against the trend than with it, it seems impossible that these methods haven’t added extra trend to the result.

Steve McIntyre discusses this here. At the very least, an examination of the bias this process could have on the result is required.
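As a toy illustration (my own sketch, not BEST’s actual algorithm; the trend, step size, and noise level are made up), here is how a single step interacts with a fitted trend, and why the sign of the steps a scalpel removes matters:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
t = np.arange(n, dtype=float)
true_trend = 0.01                       # degrees per time step
series = true_trend * t + rng.normal(0, 0.05, n)
series[60:] -= 0.5                      # a downward step (e.g. a station move)

# Fitting one line through the spliced record drags the trend down:
naive_trend = np.polyfit(t, series, 1)[0]

# A "scalpel" cut at the step: fit each homogeneous segment separately
# and combine, which recovers something close to the underlying trend.
seg1 = np.polyfit(t[:60], series[:60], 1)[0]
seg2 = np.polyfit(t[60:], series[60:], 1)[0]
scalpel_trend = 0.5 * (seg1 + seg2)
```

Flip the sign of the step and the naive trend is biased up instead of down; if the detector preferentially finds steps of one sign in trending data, the reassembled record inherits that asymmetry, which is exactly the bias that needs quantifying.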

2 – UHI effect.  The Berkeley study not only failed to determine the magnitude of UHI, a known effect on city temperatures that even kids can detect, it failed to detect UHI at all.  Instead of treating their own methods with skepticism, they simply claimed that UHI was not detectable using MODIS and therefore not a relevant effect.

This is not statistically consistent with prior estimates, but it does verify that the effect is very small, and almost insignificant on the scale of the observed warming (1.9 ± 0.1 °C/100yr since 1950 in the land average from figure 5A).

This is in direct opposition to Anthony Watts’ Surface Stations project, which, through greater detail, was very much able to detect the ‘insignificant’ effect.

Summary and Discussion

The classification of 82.5% of USHCNv2 stations based on CRN criteria provides a unique opportunity for investigating the impacts of different types of station exposure on temperature trends, allowing us to extend the work initiated in Watts [2009] and Menne et al. [2010].

The comparison of time series of annual temperature records from good and poor exposure sites shows that differences do exist between temperatures and trends calculated from USHCNv2 stations with different exposure characteristics. Unlike Menne et al. [2010], who grouped all USHCNv2 stations into two classes and found that “the unadjusted CONUS minimum temperature trend from good and poor exposure sites … show only slight differences in the unadjusted data”, we found the raw (unadjusted) minimum temperature trend to be significantly larger when estimated from the sites with the poorest exposure relative to the sites with the best exposure. These trend differences were present over both the recent NARR overlap period (1979-2008) and the period of record (1895-2009). We find that the partial cancellation Menne et al. [2010] reported between the effects of time of observation bias adjustment and other adjustments on minimum temperature trends is present in CRN 3 and CRN 4 stations but not CRN 5 stations. Conversely, and in agreement with Menne et al. [2010], maximum temperature trends were lower with poor exposure sites than with good exposure sites, and the differences in trends compared to CRN 1&2 stations were statistically significant for all groups of poorly sited stations except for the CRN 5 stations alone. The magnitudes of the significant trend differences exceeded 0.1°C/decade for the period 1979-2008 and, for minimum temperatures, 0.7°C per century for the period 1895-2009.

The non-detection of UHI by Berkeley is NOT a sign of a good quality result considering the amazing detail that went into Surface Stations by so many people. A skeptical scientist would be naturally concerned by this, and it leaves a bad taste in my mouth, to say the least, that the authors aren’t more concerned with the Berkeley methods. Either Surface Stations’ very detailed, very public results are flat wrong or Berkeley’s black-box, literal “characterization from space” results are.

Someone needs to show me the middle ground here because I can’t find it.

I sent this in an email to Dr. Curry:

Non-detection of UHI is a sign of problems in method. If I had the time, I would compare the urban/rural BEST sorting with the completed surfacestations project. My guess is that the comparison of methods would result in a non-significant relationship.

3 – Confidence intervals.

The confidence intervals were calculated by eliminating a portion of the temperature stations and looking at the noise that the elimination created. Lubos Motl described the method accurately as intentionally ‘damaging’ the dataset.  It is a clever way to identify the sensitivity of the method and result to noise.  The problem is that the amount of damage assumed is equal to the percentage of temperature stations which were eliminated. Unfortunately, the high-variance stations are de-weighted by intent in the process, such that the elimination of 1/8 of the stations is absolutely no guarantee of removing 1/8 of the noise. The ratio of eliminated noise to change in final result is assumed to be 1/8, and despite some vague discussion of Monte Carlo verifications, no discussion of this non-linearity was even attempted in the paper.
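A minimal sketch of the delete-a-fraction resampling idea described above (my own toy, not BEST’s implementation; the station count, record length, and noise levels are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
n_stations, n_years = 80, 50
signal = np.linspace(0.0, 1.0, n_years)            # shared warming signal
noise_sd = rng.uniform(0.1, 2.0, n_stations)       # very unequal station noise
data = signal + rng.normal(0.0, 1.0, (n_stations, n_years)) * noise_sd[:, None]

full_mean = data.mean(axis=0)                      # the "full network" average

# Delete-one-eighth resampling: drop a random 1/8 of stations, recompute the
# average, and take the spread of the recomputed averages as the uncertainty.
k = n_stations // 8
replicates = []
for _ in range(200):
    keep = rng.permutation(n_stations)[k:]
    replicates.append(data[keep].mean(axis=0))
spread = np.std(replicates, axis=0).mean()
```

The rub is the scaling step: turning that spread into a confidence interval assumes dropping 1/8 of the stations removes 1/8 of the information, which need not hold once the noisiest stations have already been deliberately down-weighted.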

Prayer to the AGW gods.

All that said, I don’t believe that warming is undetectable or that temperatures haven’t risen this century. I believe that CO2 helps warming along, as the most basic physics proves. My objection has always been to the magnitude caused by man, the danger, and the literally crazy “solutions”. Despite all of that, this temperature series is, statistically speaking, the least impressive on the market. Hopefully, the group will address my confidence-interval critiques and McIntyre’s very valid breakpoint-detection issues, and will undertake a more in-depth UHI study.

Holding of breath is not advised.



145 Comments
Editor
November 3, 2011 6:37 pm

Excellent Jeff. My own concerns centre around issue #2.

R. Shearer
November 3, 2011 6:39 pm

Despite all this “warming” we’re only one large volcanic eruption away from a “year without a summer.”

November 3, 2011 7:06 pm

Still more of why I don’t think it should be called “BEST” –
I’m still favoring Berkeley EST.

David
November 3, 2011 7:06 pm

When is all the raw data and code going to be released? My understanding is that what has been released thus far is of very limited value.
I know some people have said it’s just preliminary stuff and the real stuff is coming, but if the papers are ready for peer review and Muller is all over the news, it seems odd that what was released is not particularly usable to people who want to get to the heart of the methodology that was used.

Doug in Seattle
November 3, 2011 7:18 pm

While I agree that the BEST scalpel is a good idea, I think that in the end it can only be properly employed based on a direct examination of both the metadata and the temperature data. What the BEST crew did was try to automate this process based on trend.
This is really a problem with the research model rather than the researchers. Crowd sourcing, as was done in the Surface Stations project, might be a better way to accomplish this.

Mike Bromley the Kurd
November 3, 2011 7:20 pm

Concern #1 speaks loudest. Once you start hacking at the data, you basically add a bias. Mother Nature doesn’t do that, nor does Anthropogenophecles, the god of significant figures.

November 3, 2011 7:21 pm

Admin: just some housekeeping. Can you please perform a replace of Berkley with Berkeley? Thanks.

Mike Bromley the Kurd
November 3, 2011 7:23 pm

David says:
November 3, 2011 at 7:06 pm
“…it seems odd that what was released is not particularly usable to people who want to get to the heart of the methodology that was used.”

People who frequent “preprint libraries”, although vast in number, are not qualified to critique methodology.

u.k.(us)
November 3, 2011 7:23 pm

I just don’t want Al Gore running the world.

Randy
November 3, 2011 7:31 pm

Well spoken. The words you use are the same as the thoughts we are thinking as we read along. Succinct. And to the point. Thanks for what you and Anthony do.

Matt
November 3, 2011 7:34 pm

Someone should point out to Jeff that there is a difference between saying “there is no UHI” and saying “there is significant UHI but cities only account for a small fraction of land surface and have only a small impact on global averaged trends”. Look, I’m not interested in debating whether or not the latter statement is true. But the fact that “kids can detect” local urban heat islands doesn’t mean it has a significant impact on globally averaged means.
“The non-detection of UHI by Berkeley is NOT a sign of a good quality result considering the amazing detail that went into Surfacestations by so many people”
Lots of science involves *amazing detail* and yet yields a null result. I’m glad that Anthony et al did such a thorough assessment of station quality. It was a real service to the science. But the fact that they worked hard does not have any bearing on whether or not UHI has an impact on the change in globally averaged temperature anomaly. Lots of measurements and average trends are robust over poorly inter-calibrated instruments. What Jeff is saying is that the Berkeley results *must* be wrong because they go against his a priori belief.
One last point: I may be tired and misreading this, but doesn’t the excerpt from Anthony’s paper say that the reconstructions with the bad sites *understate* the temperature trend (in agreement with Menne et al and also, BTW, Berkeley)? Isn’t that the opposite of what Jeff wants to believe?

Don Monfort
November 3, 2011 7:38 pm

“Admin: just some housekeeping. Can you please perform a replace of Berkley with Berkeley? Thanks.”
It’s actually Berzerkely.

u.k.(us)
November 3, 2011 7:55 pm

Matt says:
November 3, 2011 at 7:34 pm
“Lot’s of measurements and average trends are robust over poorly inter-calibrated instruments.”
===========
I assume you have a peer-reviewed paper to back-up this claim ?
I’m sorry, I mean a paper that has cleared peer-review.
A link to same would be best.

November 3, 2011 7:57 pm

Verity
Let’s look at issue #2.
Look at Steve McIntyre’s latest post.
We start with the trend of satellites. Surely you accept the trends of John Christy and Roy Spencer.
Then Steve applies a similar technique to that used here:
http://hurricane.atmos.colostate.edu/Includes/Documents/Publications/klotzbachetal2009.pdf
Then he compares it to the surface trend.
Let’s walk through it slowly using one example.
1. We accept the UAH trend, say .18C per decade.
2. We look at CRU warming more, at .28C per decade.
Can we conclude (as Christy, Spencer and Steve do) that the difference,
.28 – .18C, or .1C per decade, could be UHI?
That’s about what Ross suggests?
We all realize that UHI is a potential problem. The first question is: can we bound the problem?
It’s not zero (so BEST is wrong) AND it’s not 9C. The whole world is not Tokyo.
Steve’s analysis suggests an upper bound. Are you open to discussion of the upper bound, or do you disagree with McIntyre, Christy, Spencer and Pielke?

November 3, 2011 8:02 pm

Another item I’ve NEVER heard these “wonks” address. Yes, we have thermometer data going back to about 1800.
NO, none of the data until the ’20s or ’30s (and a small amount at first) was taken with “rotating drum” daily recorders.
The problem with this? WHEN IS PEAK, when is trough?
We have NO assurance, without DAY LONG RECORDINGS that peaks and troughs were properly recorded.
I HAVE NEVER HEARD THIS MENTIONED OR ADDRESSED.
Frankly, the lack of discussion of that fact tells me that “all the best laid plans of mice and men” have gone wrong.
There is, in essence, NO VALUE to data from about 1800 to 1920 !!!
Max

Richard M
November 3, 2011 8:11 pm

The UHI effect is obviously a fact. The problem is detecting it and determining how much influence it has on the overall trends. Since it varies by time and location it will require a lot of effort to understand. I haven’t seen this effort from anyone including BEST.

November 3, 2011 8:17 pm

Is the BEST the enemy of good (analysis)?
How do these BEST trends compare with satellite trends (for the period for which both sets exist)? I am not sure one can give too much credence to land based measurements considering the changes that can be introduced into the microenvironment, without even trying (e.g., due to changes in vegetative cover, clearing of land, in the immediate vicinity of the instrument, etc.)

Don Monfort
November 3, 2011 8:21 pm

Matt,
“there is significant UHI but cities only account for a small fraction of land surface and have only a small impact on global averaged trends”. Cities, towns, villages, suburbs, neighborhoods, hamlets, etc. are where the thermometers are. Population centers are over-represented in the temperature records. Read the BEST paper on UHI and tell us how they dealt with that issue.

November 3, 2011 8:22 pm

Max Hugoson says:
November 3, 2011 at 8:02 pm
The problem with this? WHEN IS PEAK, when is trough? We have NO assurance, without DAY LONG RECORDINGS that peaks and troughs were properly recorded.
I HAVE NEVER HEARD THIS MENTIONED OR ADDRESSED.

The max-min-thermometer was invented in 1794 by Six, so now you have heard this mentioned.

Don Monfort
November 3, 2011 8:23 pm

I will play, Steve. If you are asserting that a reasonable upper limit on UHI is .1C per decade, I will take that. I only hope that this time your guessing game has an ending.

JJ
November 3, 2011 8:27 pm

JohnWho says:
“Still more of why I don’t think it should be called ‘BEST’ –
I’m still favoring Berkeley EST.”

I think it should be Berkeleyest. That keeps it useful as an adjective, and (unlike the current acronym) it would be accurate, e.g. –
“Did you catch the nonsense that came out of Durban last week? It is about the Berkeleyest thing I’ve heard in a long time!”
🙂

Theo Goodwin
November 3, 2011 8:33 pm

Matt says:
November 3, 2011 at 7:34 pm
“Someone should point out to Jeff that there is a difference between saying “there is no UHI” and saying “there is significant UHI but cities only account for a small fraction of land surface and have only a small impact on global averaged trends”. Look, I’m not interested in debating whether or not the latter statement is true. But, the fact that “kid’s can measure” local urban heat islands doesn’t mean it has a significant impact on globally averaged means.”
Your comment contains the equivocation on the phrase “no UHI” that is found in BEST’s work. You seem to be aware of the problem but unaware that BEST uses the equivocation in a fallacious argument. They begin talking about UHI as a local phenomenon, which is the topic they took from Anthony, but conclude that UHI has no impact on global average temperatures, a new topic that was not addressed by Anthony. In addition, introducing the topic of “global averaged trends” at all is a beautiful example of a red herring; that is, they introduced a topic that might sound like the actual topic but is actually irrelevant to it.
Anthony’s claim is that there is local UHI and that it has a disproportionate effect on the measurement of global average trends because thermometers are disproportionately found in settings affected by UHI. Anthony does not claim that UHI has a causal effect on global temperature; rather, his claim is about the measurements that feed into measurements of global temperature.
In not addressing Anthony’s concerns about local UHI (and in changing from Anthony’s 30-year period, for which there is metadata, to a 60-year period, for which there is none for the first 30 years), BEST betrayed Anthony’s trust yet chose to give the impression that they did address his concerns. That is plain old deception for the purpose of appearing to attain a success that was not attained, not earned, and not deserved.

November 3, 2011 8:42 pm

Jeff – when you say – “They detect steps in the data, chop the series at the steps and reassemble them. ”
Do these two GISS diagrams express the same thing? – from Hansen, J.E., R. Ruedy, Mki. Sato, M. Imhoff, W. Lawrence, D. Easterling, T. Peterson, and T. Karl 2001. A closer look at United States and global surface temperature change. J. Geophys. Res. 106, 23947-23963, doi:10.1029/2001JD000354. A pdf can be downloaded at http://pubs.giss.nasa.gov/abstracts/2001/
I have a page here –
http://www.warwickhughes.com/papers/gissuhi.htm
and also a blog post.
http://www.warwickhughes.com/blog/?p=753

Theo Goodwin
November 3, 2011 8:51 pm

Steven Mosher says:
November 3, 2011 at 7:57 pm
“1. We accept the UAH trend say .18C per decade
2. we look at CRU warming more at .28C per decade”
Where did you get these numbers? Does CRU claim that temperatures have risen at .28C per decade? For how many years? Let’s take 60 years as it is very important in some of BEST’s work, especially that having to do with Anthony and local UHI.
Let’s see, .28 per decade times 6 decades yields 1.68C for the most recent sixty years. In turn, that is equivalent to about three degrees Fahrenheit. Does CRU claim that global average temperature has risen 3F in the last 60 years?

Gail Combs
November 3, 2011 8:54 pm

Indur M. Goklany says:
November 3, 2011 at 8:17 pm
Is the BEST the enemy of good (analysis)?
How do these BEST trends compare with satellite trends (for the period for which both sets exist)? I am not sure one can give too much credence to land based measurements considering the changes that can be introduced into the microenvironment, without even trying (e.g., due to changes in vegetative cover, clearing of land, in the immediate vicinity of the instrument, etc.)
____________________________
Anthony’s Surface Station project is looking into all those microenvironments.
As far as the early data goes, one has to look at the logs and notes of the collectors of the data.
WUWT posted this earlier
“…from 1892 is a letter from Sergeant James A. Barwick to cooperative weather observers in California….” http://wattsupwiththat.com/2011/10/26/even-as-far-back-as-1892-station-siting-was-a-concern/
Someone else is trying to collect the old British shipping records on water temperature.
