Clarification on BEST submitted to the House

UPDATES: A number of feckless political commentators have simply missed this response I prepared, so I’m posting it to the top for a day or two. I’ll have a follow-up on what I’ve learned since then in the next day or two. Also, NCDC weighs in at the LA Times, calling the BEST effort to publicize results before publishing science papers “seriously compromised”.

Also – in case you have not seen it, this new analysis from an independent private climate data company shows how the siting of weather stations affects the data they produce. – Anthony

——————————————————————————————

As many know, there’s a hearing today before the House of Representatives’ Subcommittee on Energy and Environment, Committee on Science, Space, and Technology, and a number of people are attending, including Dr. John Christy of UAH and Dr. Richard Muller of the newly minted Berkeley Earth Surface Temperature (BEST) project.

There seems to be a bit of a rush here, as BEST hasn’t completed all of the promised data analysis techniques that would remove the different kinds of data biases we’ve noted. That was the promise, and that is why I signed on (to share my data and collaborate with them). Yet somehow much of that has been thrown out the window, and they are presenting some results today without the full set of techniques applied. Based on my current understanding, they don’t even have some of them fully working and debugged yet. Knowing that, today’s hearing presenting preliminary results seems rather topsy-turvy. But post-normal science political theater is like that.

I have submitted this letter to be included in the record today. It is written for the Members of the committee, to give them a general overview of the issue, so it may seem generalized and cover ground familiar to regular readers. It also addresses technical concerns of mine that Dr. Pielke Sr. shares on this issue. I’ll point out that on the front page of the BEST project they tout openness and replicability, but none of that is available in this instance, even to Dr. Pielke and me. They’ve had a couple of weeks with the surfacestations data, and now, without fully completing the main theme of data cleaning, they are releasing early conclusions based on that data without providing the ability to replicate. I’ve seen some graphical output, but that’s it. What I really want to see is a paper and methods. Our upcoming paper was shared with BEST in confidence.

BEST says they will post Dr. Muller’s testimony with a notice on their FAQ page, which also includes a link to video testimony, so you’ll be able to compare. I’ll put up relevant links later. – Anthony

UPDATE: Dr. Richard Muller’s testimony is now available here. What he proposes about Climate-ARPA is intriguing. I also thank Dr. Muller for his gracious description of the work done by myself, my team, and Steve McIntyre.

A PDF version of the letter below is here: Response_to_Muller_testimony

===========================================================

Chairman Ralph Hall

Committee on Science, Space, and Technology

2321 Rayburn House Office Building

Washington, DC 20515

Letter of response from Anthony Watts to Dr. Richard Muller testimony 3/31/2011

It has come to my attention that data and information from my team’s upcoming paper, shared in confidence with Dr. Richard Muller, is being used to suggest some early conclusions about the state of the quality of the surface temperature measurement system of the United States and the temperature data derived from it.

Normally such scientific debate is conducted in peer reviewed literature, rather than rushed to the floor of the House before papers and projects are complete, but since my team and I are not here to represent our work in person, we ask that this letter be submitted into the Congressional record.

I began studying climate stations in March 2007, stemming from a curiosity about the paint used on the Stevenson Screens (thermometer shelters) used since 1892, and still in use today, in the Cooperative Observer climate monitoring network. Originally the specification called for lime-based whitewash, the paint of the era in which the network was created. In 1979 the specification changed to modern latex paint. The question arose as to whether this made a difference. An experiment I performed showed that it did. Before conducting any further tests, I decided to visit nearby climate monitoring stations to verify that they had been repainted. I discovered they had, but I also discovered a larger and more troublesome problem: many NOAA climate stations seemed to be next to heat sources and heat sinks, and to have been surrounded by urbanization during the decades of their operation.

The surfacestations.org project started in June 2007 as a result of a collaboration begun with Dr. Roger Pielke Sr. at the University of Colorado, who had done a small-scale study (Davey and Pielke 2005) and found identical issues.

Since then, with the help of volunteers, the surfacestations.org project has surveyed over 1000 U.S. Historical Climatology Network (USHCN) stations, which are chosen by NOAA’s National Climatic Data Center (NCDC) as the best of NOAA’s volunteer-operated Cooperative Observer network (COOP). The surfacestations.org project was unfunded, relying on the help of volunteers nationwide plus an extensive amount of my own volunteer time and travel. I have personally surveyed over 100 USHCN stations nationwide. Until this project started, even NOAA/NCDC had not undertaken a comprehensive survey to evaluate the quality of the measurement environment; they only looked at station records.

The work and results of the surfacestations.org project are a gift to the citizens of the United States.

There are two methods of evaluating climate station siting quality. The first is the older 100-foot rule implemented by NOAA (http://www.nws.noaa.gov/om/coop/standard.htm), which says:

The [temperature] sensor should be at least 100 feet from any paved or concrete surface.

A second siting quality method is that for NOAA’s Climate Reference Network (CRN), a high-tech, high-quality electronic network designed to eliminate the multitude of data bias problems that Dr. Muller speaks of. In the 2002 document commissioning the project, NOAA’s NCDC implemented a strict code for the placement of stations, to be free of any siting or urban biases.

http://www1.ncdc.noaa.gov/pub/data/uscrn/documentation/program/X030FullDocumentD0.pdf

The analysis of metadata produced by the surfacestations.org project considered both techniques, and in my first publication on the issue, at 70% of the USHCN surveyed (Watts 2009), I found that only 1 in 10 NOAA climate stations met the siting quality criteria of either the NOAA 100-foot rule or the newer NCDC CRN rating system. Now, two years later, with over 1000 stations (82.5%) surveyed, the 1-in-10 figure holds true using NOAA’s own published criteria for rating station siting quality.

Figure 1 – Findings of siting quality from the surfacestations.org project

During the nationwide survey, we found that many NOAA climate monitoring stations were sited in what can only be described as suboptimal locations. For example, one of the worst was identified in the data by Steven McIntyre as having the highest decadal temperature trend in the United States before we actually surveyed it. We found it at the University of Arizona Atmospheric Sciences Department and National Weather Service Forecast Office, where it had been relegated to the center of the parking lot.

Figure 2 – USHCN station in Tucson, AZ

Photograph by surfacestations.org volunteer Warren Meyer

This USHCN station, COOP #028815, was established in May 1867 and has had a continuous record since then. One can safely conclude that it did not start out in a parking lot. One can also safely conclude, from human experience as well as peer-reviewed literature (Yilmaz, 2009), that temperatures over asphalt are warmer than those measured in a field away from such modern influence.

The surfacestations.org survey found hundreds of other examples of poor siting choices like this. We also found equipment problems related to maintenance and design, as well as the fact that the majority of cooperative observers contacted had no knowledge that their stations were part of the USHCN, and had never been instructed to apply an extra measure of due diligence to their record keeping or to keep their siting conditions homogeneous over time.

It is evident that such siting problems do in fact cause changes in absolute temperatures, and may also contribute to new record temperatures. The critically important question is: how do these siting problems affect the trend in temperature?

Other concerns have been ignored in past temperature assessments as well, such as the effect of concurrent trends in local absolute humidity due to irrigation (which creates a warm bias in nighttime temperature trends) and the effect of measurement height above the ground, as reported, for example, in:

Pielke Sr., R.A., C. Davey, D. Niyogi, S. Fall, J. Steinweg-Woods, K. Hubbard, X. Lin, M. Cai, Y.-K. Lim, H. Li, J. Nielsen-Gammon, K. Gallo, R. Hale, R. Mahmood, S. Foster, R.T. McNider, and P. Blanken, 2007: Unresolved issues with the assessment of multi-decadal global land surface temperature trends. J. Geophys. Res., 112, D24S08, doi:10.1029/2006JD008229

Klotzbach, P.J., R.A. Pielke Sr., R.A. Pielke Jr., J.R. Christy, and R.T. McNider, 2009: An alternative explanation for differential temperature trends at the surface and in the lower troposphere. J. Geophys. Res., 114, D21102, doi:10.1029/2009JD011841.

Steeneveld, G.J., A.A.M. Holtslag, R.T. McNider, and R.A. Pielke Sr., 2011: Screen level temperature increase due to higher atmospheric carbon dioxide in calm and windy nights revisited. J. Geophys. Res., 116, D02122, doi:10.1029/2010JD014612.

These issues are not yet dealt with in Dr. Richard Muller’s analysis, and he agrees.

The abstract of the 2007 JGR paper reads:

This paper documents various unresolved issues in using surface temperature trends as a metric for assessing global and regional climate change. A series of examples ranging from errors caused by temperature measurements at a monitoring station to the undocumented biases in the regionally and globally averaged time series are provided. The issues are poorly understood or documented and relate to micrometeorological impacts due to warm bias in nighttime minimum temperatures, poor siting of the instrumentation, effect of winds as well as surface atmospheric water vapor content on temperature trends, the quantification of uncertainties in the homogenization of surface temperature data, and the influence of land use/land cover (LULC) change on surface temperature trends.

Because of the issues presented in this paper related to the analysis of multidecadal surface temperature we recommend that greater, more complete documentation and quantification of these issues be required for all observation stations that are intended to be used in such assessments. This is necessary for confidence in the actual observations of surface temperature variability and long-term trends.

While NOAA and Dr. Muller have produced analyses using our preliminary data that suggest siting has no appreciable effect, our upcoming paper reaches a different conclusion.

Our paper, Fall et al. 2011, titled “Analysis of the impacts of station exposure on the U.S. Historical Climatology Network temperatures and temperature trends,” has this abstract:

The recently concluded Surface Stations Project surveyed 82.5% of the U.S. Historical Climatology Network (USHCN) stations and provided a classification based on exposure conditions of each surveyed station, using a rating system employed by the National Oceanic and Atmospheric Administration (NOAA) to develop the U.S. Climate Reference Network (USCRN). The unique opportunity offered by this completed survey permits an examination of the relationship between USHCN station siting characteristics and temperature trends at national and regional scales and on differences between USHCN temperatures and North American Regional Reanalysis (NARR) temperatures. This initial study examines temperature differences among different levels of siting quality without controlling for other factors such as instrument type.

Temperature trend estimates vary according to site classification, with poor siting leading to an overestimate of minimum temperature trends and an underestimate of maximum temperature trends, resulting in particular in a substantial difference in estimates of the diurnal temperature range trends. The opposite-signed differences of maximum and minimum temperature trends are similar in magnitude, so that the overall mean temperature trends are nearly identical across site classifications. Homogeneity adjustments tend to reduce trend differences, but statistically significant differences remain for all but average temperature trends. Comparison of observed temperatures with NARR shows that the most poorly-sited stations are warmer compared to NARR than are other stations, and a major portion of this bias is associated with the siting classification rather than the geographical distribution of stations. According to the best-sited stations, the diurnal temperature range in the lower 48 states has no century-scale trend.

The finding that the mean temperature trend shows no statistically significant dependence on siting quality, while the maximum and minimum temperature trends clearly do, indicates that the lack of a difference in the mean temperature trends is coincidental for the specific case of the U.S. sites and may not be true globally. At the very least, this raises a red flag on the use of poorly sited locations for climate assessments, as these locations are not spatially representative.

Whether you believe the century of data we have from the NOAA COOP network is adequate, as Dr. Muller suggests, or that the poor siting placements and data biases documented in the nationwide climate monitoring network are irrelevant to long-term trends, there are some very compelling and demonstrative actions by NOAA that speak directly to the issue.

1. NOAA’s NCDC created a new high-tech surface monitoring network in 2002, the Climate Reference Network, with a strict emphasis on ensuring high-quality siting. If siting does not matter to the data, and the data are adequate, why have this new network at all?

2. Recently, while resurveying stations that I had previously surveyed in Oklahoma, I discovered that NOAA has been quietly removing the temperature sensors from some of the USHCN stations we cited as the worst offenders of siting quality (CRN ratings 4 and 5). For example, here are before and after photographs of the USHCN temperature station in Ardmore, OK, within a few feet of the traffic intersection at City Hall:

Figure 3 – Ardmore USHCN station, MMTS temperature sensor, January 2009

Figure 4 – Ardmore USHCN station, MMTS temperature sensor removed, March 2011

NCDC confirms in their metadata database that this USHCN station has been closed, the temperature sensor removed, and the rain gauge moved to another location, the fire station west of town. It is odd that, after the station had been in operation since 1946, NOAA would suddenly cease to provide equipment to record temperature there just months after it was surveyed by the surfacestations.org project and its problems highlighted.

Figure 5 – NOAA metadata for the Ardmore, OK USHCN station, showing the equipment list

3. Expanding the search, my team discovered many more instances nationwide where poorly sited USHCN stations identified by the surfacestations.org survey have had their temperature sensors removed, been closed, or been moved. This includes the Tucson USHCN station in the parking lot, as evidenced by NOAA/NCDC’s own online metadata database, shown below:

Figure 6 – NOAA metadata for the Tucson USHCN station, showing closure in March 2008

It seems inconsistent with NOAA’s claim that siting has no impact on the data that the agency would close a station in operation since 1867 just a few months after our team surveyed it in late 2007 and made its issues known.

It is our contention that many unaccounted-for biases remain in the surface temperature record, and that the resulting uncertainty and systematic biases are large. This uncertainty and these systematic biases need to be addressed not only nationally, but worldwide. Dr. Richard Muller has not yet examined these issues.

Thank you for the opportunity to present this to the Members.

Anthony Watts

Chico, CA

225 Comments
Steve Keohane
April 2, 2011 4:55 am

Glenn Tamblyn says: April 2, 2011 at 1:06 am
Yet Anthony thinks ‘they’ are fudging something! Based on what?

Have you looked at the surface station project on this site?

April 2, 2011 7:50 am

Thank you very much Anthony!
Very good article, in my opinion. Your good work for humanity can never be paid justly enough. It is high time we start using only satellites to learn anything about our planet as a whole.
But let’s not forget that even if the north of the Earth shows more elevated land surface temperatures lately, we still don’t know that this increase is “anthropogenic”, nor that it has anything to do with CO2.
I am more interested in learning whether CO2 produces, or not, a “greenhouse effect”.
On this, I am reading http://www.ilovemycarbondioxide.com/pdf/Understanding_the_Atmosphere_Effect.pdf
(Joseph E. Postma, M.Sc. Astrophysics, Honours B.Sc. Astronomy. March 2011, .pdf)
[a work in progress]

Bowen
April 2, 2011 8:53 am

Andres Valencia? . . . only satellites to learn anything about our planet as a whole!
Right on the face of it . . . this would be a big mistake . . . generally, gauges are like dictionaries or watches: they eliminate petty arguments of opinion . . . and without those gauges, satellites would not have evolved as they have . . . (in my opinion) . . . It’s like the pitcher pump existed before the electric water pump . . . and it will always have its place where there is no electricity . . .
I also do think the cost of acquiring knowledge and understanding in our current political “climate” must be considered . . . Smart people of integrity cannot afford to work for free, and should not be expected to.
NASA Study Goes to Earth’s Core for Climate Insights
http://www.sciencedaily.com/releases/2011/03/110311140706.htm
Satellites Act As Thermometers In Space
http://www.sciencedaily.com/releases/2004/04/040422002701.htm

April 2, 2011 1:08 pm

Yes, satellites can now cover the whole planet fast. To me, clearly the best for learning about the temperatures and vapor content at different heights, cloud cover, and much more.
Alas, history will always remain and grow, and meander around. Never to be forgotten.
Amateurs work for free most of the time, our reward is not monetary, but donations sure help along the way.

lapogus
April 3, 2011 2:19 am

Interesting discussion. After reading Steven Mosher’s posts I was going to suggest that UHI may not be significant in temperate/warm climates, but it is still likely to be in colder zones (and iirc Lucy Skywalker’s analysis of Siberian stations confirmed UHI is especially significant in winter months). Russian scientists were also scathing of CRU’s station selection techniques, iirc. AussieDan’s findings suggest that UHI is apparent in Australian data, and we all know that the New Zealand data is about as much use as a chocolate teapot. Anthony hasn’t had his full say yet, and I have not checked what the Chiefio’s latest thinking on this is. So my gut feeling is still that there has been some warming, but half of the 0.7C warming can be put down to UHI/dodgy data selection/homogenisation and bald adjustments. At the very least I would say that there’s a lot of uncertainty out there. As the creator of this graph (iirc Jim from NYC) has often pointed out, there does not seem to be much signal for AGW in the CET and 10 other long-term temperature records – http://oi49.tinypic.com/rc93fa.jpg To me it just looks like we had a warm decade (or rather a series of mild winters in the northern hemisphere) in the 1990s. Nothing to panic about, and it seems to be getting colder again now anyway.
It is sad to see Muller appear to suggest that CO2 can be blamed for 0.6 of the 0.7C warming, when there is no evidence for this. Natural variation, cloud cover changes, and oceanic and solar magnetic cycles are all much more likely causes than an increase in atmospheric CO2 from 0.0285% to 0.0385%.

zman
April 3, 2011 11:42 pm

Hopefully this will be constructive criticism. After reading your letter for the Congressional record I came away thinking that it was written for scientists, not laypeople (like our Congress critters). I suspect many folks’ eyes would just glaze over before they figured out what points you were making. My suggestion is that future inputs like this should have more introductory material and less technical content before you go into the technical details (which I think still belong in there).
Cheers!

pat
April 3, 2011 11:49 pm

I still smell a rat. And the carcass is very large.

tonyb
Editor
April 4, 2011 12:40 am

I think there are several things that surprise me about this debate.
Firstly, that the effect of UHI on temperature is potentially huge, yet we try to discount it. The Romans knew about the effects of UHI, and numerous historic studies through the ages have commented on the urban effect. Many stations started off in a cool field and have since been engulfed by urban development, so they are likely to be warmer than they otherwise would be. Consequently very many stations within the record are affected by UHI to varying degrees, as the places that contain the thermometers have changed utterly in nature.
Secondly, the effect of station ‘movement’ is a big factor. Those sites not engulfed by development have often been moved, some a number of times, and the microclimate being measured is therefore often very different from the one they started off with.
Thirdly, we greatly overestimate the historic accuracy of these figures and ignore two factors. The first is that thermometers were accurate to perhaps one degree, no more. The second is that the methodology, until very recent times, differed from site to site; there was no ‘handbook.’ For example, measurements might be taken at varying times of day, the thermometers were sited at different heights, not all were shaded correctly, and nighttime temperatures were not always known.
Sorting out trends from all that mass of flawed information and coming up with a ‘global’ figure back to 1880, supposedly accurate to tenths or hundredths of a degree, is stretching a point.
Hubert Lamb (the first Director of CRU) identified 1750 as the point at which glaciers had begun melting. The long-term instrumental temperature records indicate a warming that can be traced back to 1659; our wealth of observations indicates we have been warming (with numerous advances and reverses) since 1607. The GISS records from 1880 plug into the end of this warming trend and do not herald the start of it.
It should come as a surprise to no one that the Earth is warmer today than it was during the sporadic Little Ice Age. What is perhaps a bigger surprise is that the temperature increase is not greater. CET in 1659, the first year of the record, was 8.83C. In 2010, the last year of the record, it was…. 8.83C.
We had a slightly warm decade through the 1990s in some places, not dissimilar to the 1730s and other periods.
We really must get away from the idea that we know a ‘global’ temperature back to 1880 that is accurate to tenths of a degree. We don’t. Incidentally, I would be interested to hear Mosh’s take on SSTs. This must be the most ludicrous ‘scientific’ measurement known to man, yet we solemnly produce data from it supposedly accurate to hundredths of a degree. Sheer hubris.
Tonyb

David Schofield
April 4, 2011 12:58 am

The obvious question to the statement ‘there is no difference between the data from bad stations and good stations’ is:
Why have we spent a lot of time and money setting and implementing standards of placement [distance from buildings, height from ground, cutting back foliage, designing screen sizes, changing paints etc.] if they all give the same quality data?
If I were a manager/politician I’d want to know why we wasted money.
A follow-up question might be:
Why are we removing many ‘poor’ stations when their data are fine?

Allan M
April 4, 2011 1:21 am

As to the motivation of the BEST project, this link may be useful, if true.
http://jer-skepticscorner.blogspot.com/2011/04/best-novim-and-other-solution.html

stephen richards
April 4, 2011 1:43 am

you mean a summary for politicians as per IPCC ARx ? 🙂

April 4, 2011 2:53 am

The Oort Cloud: A hypothetical reservoir of comets in orbit around the sun located beyond the detection capabilities of current technology.
The Nemesis Star: A hypothetical brown dwarf in orbit around the sun located beyond the detection capabilities of current technology.
So we have a specialist who creates the hypothesis of a Nemesis star that interacts with the hypothetical Oort cloud to produce a catastrophe that cannot be verified. The perfect description of a climate scientist.

wayne Job
April 4, 2011 2:57 am

“Good better best, do not let it rest, until your good is better, and your better best”
This was the motto cast into the iron ends of horse drawn water carts in Australia made by a Mr Furphy.
These were used widely in WW1 to supply our troops with water; these roving souls dishing out water also spread rumours and untruths. To this day in our fair land the widespread spreading of untruths is known as Furphies.
Cover your A##se, Anthony. I feel this BEST mob are spreading Furphies for gain and your diminishment.

April 4, 2011 4:33 am

A query to Steven Mosher in relation to his statement Mar 31 @3.18 pm in answering Lubos.
“And that won’t change even after you look at UHI. UHI is not that large. We know this by looking at rural-only sites. I know this from looking at long rural records. We know this by looking at UAH.”
I understand this in relation to UAH; however, in regard to rural records I don’t understand. My understanding is that there is plenty of work which suggests a logarithmic relationship between population density and UHI, i.e. the increase in the UHI effect is greatest at lower population densities and decreases as population densities increase.
Sure, as the UHI effect is cumulative, large cities might be expected to have a larger UHI component compared with a rural site. However, it’s the rate of population increase together with the absolute population (the rate will be greatest at lower population levels) that determines the UHI effect on temperature trends. Comparing rural site rates with urban site rates per se is irrelevant.
Roy Spencer showed some interesting work on this a year ago and was going to publish. Here’s the link: –
http://www.drroyspencer.com/page/9/
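[A minimal sketch of the logarithmic population-to-UHI relationship described in the comment above. The coefficients are purely illustrative placeholders, not values from any of the studies or datasets mentioned here.]

```python
import numpy as np

def uhi_estimate(population, a=2.0, b=-4.0):
    """Illustrative UHI magnitude (deg C) as a logarithmic function of population.

    The log form means each tenfold increase in population adds the same fixed
    increment (a degrees), so the marginal effect of growth is largest for
    small settlements. Coefficients a and b are placeholders, not fitted values.
    """
    return max(0.0, a * np.log10(population) + b)

for pop in (1_000, 10_000, 100_000, 1_000_000, 10_000_000):
    print(f"{pop:>10,} people -> ~{uhi_estimate(pop):.1f} C UHI (illustrative)")
```

With these made-up coefficients, growing a town from 1,000 to 10,000 people adds as much UHI as growing a city from 1,000,000 to 10,000,000, which is the commenter's point about lower population densities mattering most for trends.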

April 4, 2011 5:04 am

The more I read about Muller, the more I believe he deviously portrayed himself as a skeptic who didn’t accept the current CAGW meme. He even convinced the Koch brothers to fund BEST – then he did a complete about-face and announced his presumed conclusions to the world without sufficient facts in hand.
Now the NY Times is weighing in, saying BEST has shown the same results as heavily grant-funded scientists, despite the fact that Muller’s self-serving conclusions are based on little evidence. Further, prominent skeptical scientists seem to be generally missing from the BEST staff. But of course, the discredited Phil Jones is on board.
Muller pulled a fast one. That is becoming increasingly clear.

Man Bear Pig
April 4, 2011 5:16 am

There is an interesting post on a UK newsgroup here
http://www.democracyforum.co.uk/environment-energy/91359-letter-anthony-watts-wuwt-hor-commitee.html#5
Basically it says, if you take a random sample of stations including the biased ones, the results will show the same bias.
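[The statistical point in that post is easy to demonstrate with a toy simulation; this is only a sketch with made-up numbers, not BEST's method. If most stations in the full network carry a spurious warm component in their trend, a random subsample inherits roughly the same average bias, so random sampling alone can neither reveal nor remove it.]

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy network: 1000 stations with a true trend of 0.5 C/century.
# Suppose 70% are poorly sited and pick up an extra +0.3 C/century of
# spurious trend (all numbers invented purely for illustration).
n_stations = 1000
true_trend = 0.5
poorly_sited = rng.random(n_stations) < 0.70
observed = (true_trend
            + np.where(poorly_sited, 0.3, 0.0)
            + rng.normal(0.0, 0.05, n_stations))    # station-level noise

sample = rng.choice(observed, size=100, replace=False)  # random subsample

print(f"full-network mean trend: {observed.mean():.2f} C/century")
print(f"random-sample mean     : {sample.mean():.2f} C/century")
print(f"true trend             : {true_trend:.2f} C/century")
# Both means land near 0.71 C/century: the random sample reproduces the
# network's bias rather than recovering the true 0.5 C/century trend.
```

Separating well-sited from poorly sited stations, as the surfacestations.org classification does, is what would expose the difference.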

Dave in Delaware
April 4, 2011 5:37 am

Anyone remember Peter and his Dad, looking at Urban vs Rural GISS data? They selected 28 pairs of US cities, and then compared averages and trends.
This was covered on WUWT in December 2009:
Picking out the UHI in climatic temperature records – so easy a 6th grader can do it!
http://wattsupwiththat.com/2009/12/09/picking-out-the-uhi-in-global-temperature-records-so-easy-a-6th-grader-can-do-it
Anthony said: They used a simple pairing of rural and urban sites to show the differences. This shows why homogenization, which smears all the data from urban and rural sites together, is a bad idea, and gives trends that don’t exist in reality.
———————————-
I actually extracted those 28 pairs of data, using the same cities and extraction as used by Peter and his dad, and did my own analysis. For years 1900 -2006 from GISS web site for the list of paired locations, my results:
Rural temperature slope, DegC per century = 0.57
Urban temperature slope, DegC per century = 1.07
More results, all in DegC:
                 Rural    Urban
Minimum            5.8      7.2
Maximum           22.4     23.3
Average           13.1     14.3
*There are 6 of 28 (~20%) Rural negative slopes (cooling).
*None of the Urban temperature slopes are negative/cooling.
Note – the only data ‘clean up’ was to remove the “999” error/missing placeholder values from the GISS data before doing the calculations.
Dave
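[For anyone who wants to repeat that kind of check, here is a minimal sketch of the calculation described above: drop the “999” missing-value placeholders, fit an ordinary least-squares line to each station’s annual means, and compare slopes by class. The data below are synthetic and the layout is hypothetical; real GISS records would need their own parsing.]

```python
import numpy as np

MISSING = 999.0  # placeholder used for missing values in this toy example

def century_trend(years, temps):
    """Least-squares slope in deg C per century, ignoring placeholder values."""
    years = np.asarray(years, dtype=float)
    temps = np.asarray(temps, dtype=float)
    ok = temps != MISSING
    slope_per_year = np.polyfit(years[ok], temps[ok], 1)[0]
    return slope_per_year * 100.0

# Synthetic annual means for one hypothetical rural/urban pair, 1900-2006.
years = np.arange(1900, 2007)
rural = 12.0 + 0.005 * (years - 1900) + np.random.default_rng(0).normal(0, 0.3, years.size)
urban = 13.0 + 0.011 * (years - 1900) + np.random.default_rng(1).normal(0, 0.3, years.size)

print(f"rural trend: {century_trend(years, rural):.2f} C/century")
print(f"urban trend: {century_trend(years, urban):.2f} C/century")
# Averaging such slopes over all 28 station pairs gives the kind of
# rural-versus-urban comparison quoted in the comment above.
```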

April 4, 2011 5:49 am

I surveyed some of those stations. In all but one case I was able to talk to the observer. One station was visited and seen by me but I did not photograph it or make a report, because the observer refused permission. The observer had a home renovation in progress. The temperature station had been pulled from the ground and was leaning against the north side of the garage… still reporting temperatures.

pyromancer76
April 4, 2011 6:15 am

bobbyj0708, March 31, 2011 at 8:38 am:
I dislike appearing to look like a conspiratorial nutjob but having read Muller’s book “Physics for Future Presidents” I have always doubted the man is impartial about CAGW. When you read his book the chapters on energy, terrorism, nuclear weapons and such are all logical and predicated upon what’s known from a physical standpoint. And then you get to the global warming chapter and it becomes “well, I know that the evidence is sketchy but trust me, I know what I’m talking about”.
I know Muller excoriated Mann and the team on YouTube, but I really believe Muller is a dyed-in-the-wool warmist. I don’t trust him. Read his book and see for yourself.
Anthony, there are others here who are vigorously advising you to withdraw your (scientific) support. Don’t be a patsy for Obama-Soros-elitists-crony corporatists(the-ones-who-make-the-big-bucks-from-parading-around-AGW). They are beginning the soft-sell for Obama’s second term. UCB is pure left!

Pamela Gray
April 4, 2011 6:18 am

Muller fails to select control groups for his random 100-site data investigation. Sites should be picked based on a variable under investigation and then compared against controls. A smaller random selection compared to a larger random data set is not proper investigative technique. But the worst of it is, I think he knows that and is hoping taxpayers don’t.

Sal Minella
April 4, 2011 6:34 am

With over 85% of the stations sited in such a way as to introduce a greater than ±1 C error, it would seem difficult to measure a 0.7 C increase in temperature.

JR
April 4, 2011 6:41 am

UHI is not that large. We know this by looking at rural only sites.
Look at Valentia Observatory, Ireland. It is categorized as “rural” in GHCN, but if you read the station notes in World Weather Records and note the discontinuity in the temperature series at 1951, you will see that UHI can affect rural stations as well.

DJ
April 4, 2011 7:02 am

I was suspicious of BEST from the very beginning, and expressed that here when it was first posted. I’m sickened that it appears I was right. Sometimes I hate being right. BEST had all the earmarks of a sucker punch in the making.
Now we can wait for the proclamation from BEST that their results come with the approval and support of the denier crowd and skeptical scientists.

Ripper
April 4, 2011 7:21 am

There you go Steve Mosher, a random selection of >100 stations selected many years ago.
http://www.john-daly.com/stations/stations.htm
Run that through your algos and see what curve you come up with.

Ed Barbar
April 4, 2011 8:08 am

In an email exchange, Rich Muller reiterated that he will make all data available, as stated on the BEST site. No time frame was provided.
Regarding other points, the preliminary nature of the data is painfully called out in the testimony. And it would be a real shame if any breach of trust occurred with Anthony Watts. Everyone should be very grateful to Anthony for his important work to purify the data. Without a good measuring stick, how can you verify a theory? If Dr. Muller does indeed publish everything he has promised, the truth will come out about this and probably many other unresolved issues.
Some have called Muller a warmist. I think he is, as he states: “. . . if the global warming models are right, and they’re very likely right, we are going to have global warming.” This is from his talk where he discusses, among other things, the actual issues with global warming, and living with it:

(Extracted from about 4:35, where he notes the biggest agreed-upon uncertainty is cloud cover.)
Frankly, one reason I find the AGW theory so distasteful is the politics and theatrics surrounding it, and I hope to see the egotistical, self-aggrandizing nature of some of the IPCC minions crushed. I’ll settle for good science, and trust we will get it from Muller, at least for the surface temperatures.