Guest post by Jeff Id
I will leave this alone for another week or two while I wait for a reply to my emails to the BEST group, but there are three primary problems with the Berkeley temperature trends which must be addressed if the result is to be taken seriously. Now by seriously, I don’t mean by the IPCC which takes all alarmist information seriously, but by the thinking person.
Here’s the points:
1 – Chopping of data is excessive. They detect steps in the data, chop the series at the steps, and reassemble the pieces. These steps wouldn’t be so problematic if we weren’t trying to detect hundredths of a degree of temperature change per year. Considering that any balanced elimination of up and down steps in any algorithm I know of would always detect more steps in the direction opposite to the trend, it seems impossible that these methods haven’t added extra trend to the result.
Steve McIntyre discusses this here. At the very least, an examination of the bias this process could introduce into the result is required.
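To make the concern concrete, here is a minimal toy sketch — my own illustration, not BEST’s actual scalpel algorithm. The series, excursion sizes, and threshold are all invented. The point it shows is narrow: a symmetric step detector applied to a trending series does not fire symmetrically, because the trend offsets the detector’s threshold in one direction; which direction the bias runs in a real reconstruction depends on the detector and the reference series.

```python
# Toy illustration (NOT BEST's scalpel): symmetric one-point weather
# excursions riding on a warming trend, with a symmetric threshold
# detector applied to first differences.
trend = 0.1                      # invented warming rate per time step
excursions = [0.0] * 20
excursions[5] = +1.0             # one excursion up...
excursions[15] = -1.0            # ...and one equal excursion down
y = [trend * t + excursions[t] for t in range(20)]

diffs = [y[t + 1] - y[t] for t in range(19)]
T = 1.05                         # invented detection threshold
up = sum(1 for d in diffs if d > T)      # detected up-steps
down = sum(1 for d in diffs if d < -T)   # detected down-steps
print(up, down)                  # prints: 2 0
```

Even though the excursions are perfectly balanced, both detections come out as up-steps: the trend pushed two of the four jumps over the threshold and held the other two under it. A chop-and-splice at detections skewed like this cannot leave the trend untouched.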
2 – UHI effect. The Berkeley study not only failed to determine the magnitude of UHI, a known effect on city temperatures that even kids can detect, it failed to detect UHI at all. Instead of treating their own methods with skepticism, they simply claimed that UHI was not detectable using MODIS and therefore not a relevant effect.
From the Berkeley UHI paper: “This is not statistically consistent with prior estimates, but it does verify that the effect is very small, and almost insignificant on the scale of the observed warming (1.9 ± 0.1 °C/100yr since 1950 in the land average from figure 5A).”
From the Summary and Discussion of the Surfacestations paper (Fall et al.):
The classification of 82.5% of USHCNv2 stations based on CRN criteria provides a unique opportunity for investigating the impacts of different types of station exposure on temperature trends, allowing us to extend the work initiated in Watts and Menne et al.
The comparison of time series of annual temperature records from good and poor exposure sites shows that differences do exist between temperatures and trends calculated from USHCNv2 stations with different exposure characteristics. Unlike Menne et al., who grouped all USHCNv2 stations into two classes and found that “the unadjusted CONUS minimum temperature trend from good and poor exposure sites … show only slight differences in the unadjusted data”, we found the raw (unadjusted) minimum temperature trend to be significantly larger when estimated from the sites with the poorest exposure relative to the sites with the best exposure. These trend differences were present over both the recent NARR overlap period (1979-2008) and the period of record (1895-2009). We find that the partial cancellation Menne et al. reported between the effects of time of observation bias adjustment and other adjustments on minimum temperature trends is present in CRN 3 and CRN 4 stations but not CRN 5 stations. Conversely, and in agreement with Menne et al., maximum temperature trends were lower with poor exposure sites than with good exposure sites, and the differences in trends compared to CRN 1&2 stations were statistically significant for all groups of poorly sited stations except for the CRN 5 stations alone. The magnitudes of the significant trend differences exceeded 0.1°C/decade for the period 1979-2008 and, for minimum temperatures, 0.7°C per century for the period 1895-2009.
The non-detection of UHI by Berkeley is NOT a sign of a good-quality result, considering the amazing detail that went into Surfacestations by so many people. A skeptical scientist would naturally be concerned by this, and it leaves a bad taste in my mouth, to say the least, that the authors aren’t more concerned with the Berkeley methods. Either Surfacestations’ very detailed, very public results are flat wrong, or Berkeley’s black-box, literal “characterization from space” results are.
Someone needs to show me the middle ground here because I can’t find it.
I sent this in an email to Dr. Curry:
Non-detection of UHI is a sign of problems in method. If I had the time, I would compare the urban/rural BEST sorting with the completed surfacestations project. My guess is that the comparison of methods would result in a non-significant relationship.
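The comparison proposed in the email could be sketched in a few lines. Everything below is hypothetical: made-up station flags, made-up CRN ratings, and a crude proxy assumption (mine, not Surfacestations’) that poorly sited CRN 4–5 stations roughly track the “urban” flag for the purpose of the cross-check.

```python
# Hypothetical cross-check: do BEST's urban/rural flags line up with
# Surfacestations CRN siting classes for the same stations?
# All data below is invented for illustration.
best_flag = ["urban", "rural", "rural", "urban", "rural", "urban", "rural", "rural"]
crn_class = [5, 1, 2, 4, 5, 3, 1, 2]   # made-up CRN ratings, 1 (best) to 5 (worst)

# Crude proxy assumption: treat CRN 4-5 siting as "urban-like".
agree = sum(1 for b, c in zip(best_flag, crn_class)
            if (b == "urban") == (c >= 4))
print(agree, "of", len(best_flag), "stations sorted consistently")
```

A real version would merge the two datasets on station IDs and test the agreement rate against chance (e.g. with a chi-squared test on the contingency table); the guess above is that the relationship would not come out significant.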
3 – Confidence intervals.
The confidence intervals were calculated by eliminating a portion of the temperature stations and examining the noise that the elimination created. Lubos Motl accurately described the method as intentionally ‘damaging’ the dataset. It is a clever way to identify the sensitivity of the method and result to noise. The problem is that the amount of damage is assumed to equal the percentage of temperature stations eliminated. Unfortunately, the high-variance stations are deliberately de-weighted in the process, so the elimination of 1/8 of the stations is absolutely no guarantee of removing 1/8 of the noise. The ratio of eliminated noise to change in final result is assumed to be 1/8, and despite some vague discussion of Monte Carlo verifications, no discussion of this non-linearity was even attempted in the paper.
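The weighting problem is easy to see with numbers. The sketch below is my own toy, with invented station variances and simple inverse-variance weights standing in for BEST’s actual weighting scheme: it drops 1 station in 8 — the noisiest one — and checks what fraction of the total weight actually leaves the average.

```python
# Toy jackknife: 8 stations, one very noisy. Inverse-variance weighting
# is an illustrative stand-in for BEST's de-weighting of noisy stations.
variances = [1.0] * 7 + [8.0]            # invented station noise levels
weights = [1.0 / v for v in variances]
total_weight = sum(weights)

removed = weights[-1]                    # eliminate the noisy station: 1/8 of the stations
frac_weight_removed = removed / total_weight
print(round(frac_weight_removed, 4))     # prints: 0.0175 -- nowhere near the nominal 0.125
```

Dropping 12.5% of the stations removed under 2% of the effective weight, because the station jackknifed out was exactly the kind the weighting had already suppressed. Assuming the damage is proportional to the station count is therefore not safe without checking.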
Prayer to the AGW gods.
All that said, I don’t believe that warming is undetectable or that temperatures haven’t risen this century. I believe that CO2 helps warming along, as the most basic physics shows. My objection has always been to the magnitude caused by man, the danger, and the literally crazy “solutions”. Despite all of that, this temperature series is, statistically speaking, the least impressive on the market. Hopefully the group will address my confidence-interval critiques and McIntyre’s very valid breakpoint-detection issues, and will undertake a more in-depth UHI study.
Holding of breath is not advised.