UPDATES: A number of feckless political commentators have simply missed this response I prepared, so I’m posting it at the top for a day or two. I’ll have a follow-up on what I’ve learned since then in the next day or two. Also, NCDC weighs in at the LA Times, calling BEST’s publicity effort, undertaken without publishing science papers, “seriously compromised.”
Also – in case you have not seen it, this new analysis from an independent private climate data company shows how the siting of weather stations affects the data they produce. – Anthony
——————————————————————————————
As many know, there’s a hearing today in the House of Representatives with the Subcommittee on Energy and Environment, Committee on Science, Space, and Technology and there are a number of people attending, including Dr. John Christy of UAH and Dr. Richard Muller of the newly minted Berkeley Earth Surface Temperature (BEST) project.
There seems to be a bit of a rush here, as BEST hasn’t completed all of the promised data techniques that would remove the different kinds of data biases we’ve noted. That was the promise, and that is why I signed on (to share my data and collaborate with them). Yet somehow, much of that has been thrown out the window, and they are presenting some results today without the full set of techniques applied. Based on my current understanding, they don’t even have some of them fully working and debugged yet. Knowing that, today’s hearing presenting preliminary results seems rather topsy-turvy. But post-normal science political theater is like that.
I have submitted this letter to be included in the record today. It is written for the Members of the committee to give them a general overview of the issue, so it may seem generalized and cover ground addressed previously in some areas. It also addresses technical concerns I have, shared by Dr. Pielke Sr., on the issue. I’ll point out that on the front page of the BEST project they tout openness and replicability, but none of that is available in this instance, even to Dr. Pielke and me. They’ve had a couple of weeks with the surfacestations data, and now, without fully completing the main task of data cleaning, they are releasing early conclusions based on that data without providing the ability to replicate them. I’ve seen some graphical output, but that’s it. What I really want to see is a paper and methods. Our upcoming paper was shared with BEST in confidence.
BEST says they will post Dr. Muller’s testimony with a notice on their FAQs page, which also includes a link to video testimony. So you’ll be able to compare. I’ll put up relevant links later. – Anthony
UPDATE: Dr. Richard Muller’s testimony is now available here. What he proposes about Climate-ARPA is intriguing. I also thank Dr. Muller for his gracious description of the work done by myself, my team, and Steve McIntyre.
A PDF version of the letter below is here: Response_to_Muller_testimony
===========================================================
Chairman Ralph Hall
Committee on Science, Space, and Technology
2321 Rayburn House Office Building
Washington, DC 20515
Letter of response from Anthony Watts to Dr. Richard Muller testimony 3/31/2011
It has come to my attention that data and information from my team’s upcoming paper, shared in confidence with Dr. Richard Muller, is being used to suggest some early conclusions about the state of the quality of the surface temperature measurement system of the United States and the temperature data derived from it.
Normally such scientific debate is conducted in peer reviewed literature, rather than rushed to the floor of the House before papers and projects are complete, but since my team and I are not here to represent our work in person, we ask that this letter be submitted into the Congressional record.
I began studying climate stations in March 2007, stemming from a curiosity about the paint used on the Stevenson Screens (thermometer shelters) used since 1892, and still in use today, in the Cooperative Observer climate monitoring network. Originally the specification was for lime-based whitewash – the paint of the era in which the network was created. In 1979 the specification changed to modern latex paint. The question arose as to whether this made a difference. An experiment I performed showed that it did. Before conducting any further tests, I decided to visit nearby climate monitoring stations to verify that they had been repainted. I discovered they had, but I also discovered a larger and more troublesome problem: many NOAA climate stations seemed to be next to heat sources and heat sinks, and to have been surrounded by urbanization during the decades of their operation.
The surfacestations.org project started in June 2007 as a result of a collaboration begun with Dr. Roger Pielke Sr. at the University of Colorado, who had done a small-scale study (Pielke and Davey 2005) and found identical issues.
Since then, with the help of volunteers, the surfacestations.org project has surveyed over 1000 United States Historical Climatology Network (USHCN) stations, which are chosen by NOAA’s National Climatic Data Center (NCDC) to be the best of NOAA’s volunteer-operated Cooperative Observer network (COOP). The surfacestations.org project was unfunded, relying on the help of volunteers nationwide, plus an extensive amount of my own volunteer time and travel. I have personally surveyed over 100 USHCN stations nationwide. Until this project started, even NOAA/NCDC had not undertaken a comprehensive survey to evaluate the quality of the measurement environment; they only looked at station records.
The work and results of the surfacestations.org project are a gift to the citizens of the United States.
There are two methods of evaluating climate station siting quality. The first is the older 100 foot rule implemented by NOAA http://www.nws.noaa.gov/om/coop/standard.htm which says:
The [temperature] sensor should be at least 100 feet from any paved or concrete surface.
A second siting quality method is that of NOAA’s Climate Reference Network (CRN), a high-tech, high-quality electronic network designed to eliminate the multitude of data bias problems that Dr. Muller speaks of. In the 2002 document commissioning the project, NOAA’s NCDC implemented a strict code for the placement of stations, to be free of any siting or urban biases.
http://www1.ncdc.noaa.gov/pub/data/uscrn/documentation/program/X030FullDocumentD0.pdf
The analysis of metadata produced by the surfacestations.org project considered both techniques, and in my first publication on the issue, with 70% of the USHCN surveyed (Watts 2009), I found that only 1 in 10 NOAA climate stations met the siting quality criteria for either the NOAA 100-foot rule or the newer NCDC CRN rating system. Now, two years later, with over 1000 stations (82.5%) surveyed, the 1-in-10 number holds true using NOAA’s own published criteria for rating station siting quality.
Figure 1 – Findings of siting quality from the surfacestations.org project
During the nationwide survey, we found that many NOAA climate monitoring stations were sited in what can only be described as suboptimal locations. One of the worst was identified in the data by Steven McIntyre as having the highest decadal temperature trend in the United States before we actually surveyed it. We found it at the University of Arizona Atmospheric Sciences Department and National Weather Service Forecast Office, where it was relegated to the center of their parking lot.
Figure 2 – USHCN Station in Tucson, AZ
Photograph by surfacestations.org volunteer Warren Meyer
This USHCN station, COOP #028815, was established in May 1867 and has had a continuous record since then. One can safely conclude that it did not start out in a parking lot. One can also safely conclude, from human experience as well as peer reviewed literature (Yilmaz, 2009), that temperatures over asphalt are warmer than those measured in a field away from such modern influence.
The surfacestations.org survey found hundreds of other examples of poor siting choices like this. We also found equipment problems related to maintenance and design, as well as the fact that the majority of cooperative observers contacted had no knowledge of their stations being part of the USHCN, and were never instructed to exercise extra diligence in their record keeping or to keep their siting conditions homogeneous over time.
It is evident that such siting problems do in fact cause changes in absolute temperatures, and may also contribute to new record temperatures. The critically important question is: how do these siting problems affect the trend in temperature?
Other concerns, such as the effect of concurrent trends in local absolute humidity due to irrigation (which creates a warm bias in nighttime temperature trends) and the effect of sensor height above the ground on temperature measurements, have been ignored in past temperature assessments, as reported in, for example:
Pielke Sr., R.A., C. Davey, D. Niyogi, S. Fall, J. Steinweg-Woods, K. Hubbard, X. Lin, M. Cai, Y.-K. Lim, H. Li, J. Nielsen-Gammon, K. Gallo, R. Hale, R. Mahmood, S. Foster, R.T. McNider, and P. Blanken, 2007: Unresolved issues with the assessment of multi-decadal global land surface temperature trends. J. Geophys. Res., 112, D24S08, doi:10.1029/2006JD008229
Klotzbach, P.J., R.A. Pielke Sr., R.A. Pielke Jr., J.R. Christy, and R.T. McNider, 2009: An alternative explanation for differential temperature trends at the surface and in the lower troposphere. J. Geophys. Res., 114, D21102, doi:10.1029/2009JD011841.
Steeneveld, G.J., A.A.M. Holtslag, R.T. McNider, and R.A Pielke Sr, 2011: Screen level temperature increase due to higher atmospheric carbon dioxide in calm and windy nights revisited. J. Geophys. Res., 116, D02122, doi:10.1029/2010JD014612.
These issues are not yet dealt with in Dr. Richard Muller’s analysis, and he agrees.
The abstract of the 2007 JGR paper reads:
This paper documents various unresolved issues in using surface temperature trends as a metric for assessing global and regional climate change. A series of examples ranging from errors caused by temperature measurements at a monitoring station to the undocumented biases in the regionally and globally averaged time series are provided. The issues are poorly understood or documented and relate to micrometeorological impacts due to warm bias in nighttime minimum temperatures, poor siting of the instrumentation, effect of winds as well as surface atmospheric water vapor content on temperature trends, the quantification of uncertainties in the homogenization of surface temperature data, and the influence of land use/land cover (LULC) change on surface temperature trends.
Because of the issues presented in this paper related to the analysis of multidecadal surface temperature we recommend that greater, more complete documentation and quantification of these issues be required for all observation stations that are intended to be used in such assessments. This is necessary for confidence in the actual observations of surface temperature variability and long-term trends.
While NOAA and Dr. Muller have produced analyses using our preliminary data that suggest siting has no appreciable effect, our upcoming paper reaches a different conclusion.
Our paper, Fall et al. 2011, titled “Analysis of the impacts of station exposure on the U.S. Historical Climatology Network temperatures and temperature trends,” has this abstract:
The recently concluded Surface Stations Project surveyed 82.5% of the U.S. Historical Climatology Network (USHCN) stations and provided a classification based on exposure conditions of each surveyed station, using a rating system employed by the National Oceanic and Atmospheric Administration (NOAA) to develop the U.S. Climate Reference Network (USCRN). The unique opportunity offered by this completed survey permits an examination of the relationship between USHCN station siting characteristics and temperature trends at national and regional scales and on differences between USHCN temperatures and North American Regional Reanalysis (NARR) temperatures. This initial study examines temperature differences among different levels of siting quality without controlling for other factors such as instrument type.
Temperature trend estimates vary according to site classification, with poor siting leading to an overestimate of minimum temperature trends and an underestimate of maximum temperature trends, resulting in particular in a substantial difference in estimates of the diurnal temperature range trends. The opposite-signed differences of maximum and minimum temperature trends are similar in magnitude, so that the overall mean temperature trends are nearly identical across site classifications. Homogeneity adjustments tend to reduce trend differences, but statistically significant differences remain for all but average temperature trends. Comparison of observed temperatures with NARR shows that the most poorly-sited stations are warmer compared to NARR than are other stations, and a major portion of this bias is associated with the siting classification rather than the geographical distribution of stations. According to the best-sited stations, the diurnal temperature range in the lower 48 states has no century-scale trend.
The finding that the mean temperature trend shows no statistically significant difference dependent on siting quality, while the maximum and minimum temperature trends do, indicates that the lack of a difference in mean temperature trends is coincidental for the specific case of the U.S. sites, and may not hold globally. At the very least, this raises a red flag on the use of poorly sited locations for climate assessments, as these locations are not spatially representative.
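To make the cancellation mechanism concrete: since mean temperature is conventionally computed as (Tmax + Tmin)/2, opposite-signed trend biases of similar magnitude cancel in the mean but add in the diurnal temperature range (Tmax − Tmin). A toy sketch of the arithmetic, using hypothetical bias numbers chosen only for illustration (not values from the paper):

```python
# Hypothetical trend biases (deg C/decade) at a poorly sited station.
# These numbers are invented to illustrate the arithmetic only.
tmax_bias = -0.10   # poor siting underestimates the Tmax trend
tmin_bias = +0.10   # poor siting overestimates the Tmin trend

# Mean temperature is conventionally (Tmax + Tmin) / 2, so the
# opposite-signed biases cancel there ...
mean_bias = (tmax_bias + tmin_bias) / 2   # 0.0

# ... but the diurnal temperature range is Tmax - Tmin, so the
# same biases add, strongly biasing the DTR trend.
dtr_bias = tmax_bias - tmin_bias          # -0.2

print(mean_bias, dtr_bias)
```

This is why a mean temperature trend can look insensitive to siting quality even when the maximum and minimum trends, examined separately, are not.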
Whether you believe the century of data we have from the NOAA COOP network is adequate, as Dr. Muller suggests, or believe the poor siting placements and data biases that have been documented in the nationwide climate monitoring network are irrelevant to long-term trends, there are some very compelling and demonstrative actions by NOAA that speak directly to the issue.
1. NOAA’s NCDC created a new hi-tech surface monitoring network in 2002, the Climate Reference Network, with a strict emphasis on ensuring high quality siting. If siting does not matter to the data, and the data is adequate, why have this new network at all?
2. Recently, while resurveying stations that I had previously surveyed in Oklahoma, I discovered that NOAA has been quietly removing temperature sensors from some of the USHCN stations we cited as the worst offenders of siting quality (CRN 4 and 5). For example, here are before-and-after photographs of the USHCN temperature station in Ardmore, OK, within a few feet of the traffic intersection at City Hall:
Figure 3 – Ardmore USHCN station, MMTS temperature sensor, January 2009
Figure 4 – Ardmore USHCN station, MMTS temperature sensor removed, March 2011
NCDC confirms in their metadata database that this USHCN station has been closed, the temperature sensor removed, and the rain gauge moved to another location – the fire station west of town. It is odd that, after the station had been in operation since 1946, NOAA would suddenly cease to provide equipment to record temperature there just months after it was surveyed by the surfacestations.org project and its problems highlighted.
Figure 5 – NOAA metadata for the Ardmore, OK USHCN station, showing the equipment list
3. Expanding the search, my team discovered many more instances nationwide where USHCN stations with poor siting identified by the surfacestations.org survey have either had their temperature sensors removed or have been closed or moved. This includes the Tucson USHCN station in the parking lot, as evidenced by NOAA/NCDC’s own online metadata database, shown below:
Figure 6 – NOAA metadata for the Tucson USHCN station, showing closure in March 2008
It seems inconsistent with NOAA’s claim that siting has no impact on the data that they would close a station that had been in operation since 1867 just a few months after our team surveyed it in late 2007 and made its issues known.
It is our contention that many unaccounted-for biases remain in the surface temperature record and that the resultant uncertainty is large. This uncertainty and these systematic biases need to be addressed not only nationally but worldwide. Dr. Richard Muller has not yet examined these issues.
Thank you for the opportunity to present this to the Members.
Anthony Watts
Chico, CA
Pardon me, but I meant rural, not urban. I was thinking one thing and writing another.
DDE
So Berkeley’s BEST team have also managed to “lose” the warm period of the 1930s from the raw data.
Isn’t that convenient for the other groups who have analysed the data.
That really smacks of a full-scale stitch-up of Anthony and Dr. Pielke Sr.
Anybody who has seen the analysis by Chiefio knows that the BEST team are not doing what they said they would.
Berkeley. Toldja so.
A disturbing development indeed. Presently, Anthony, I do not know what your paper says, but presuming that it gets published and presuming that it shows that there are differences in trends depending upon the quality of the station siting, this will raise issues that will need to be addressed sooner or later by the mainstream establishment. It may be that BEST will look very silly, if the conclusions that can be drawn from your paper are strong.
At the end of the day, if temperatures are not rising and have now essentially flatlined, and if the satellite data and ARGO data depict such a flatline trend, the establishment can only hide this fact for so long, and eventually the truth will come out.
@Alexander K who said:
Motives are very difficult to determine. Poor science is easily identified given the original data.
Oh great. Now, with many of these sensors being quietly moved to more appropriate locations, there’s going to be a downward step in the temperature records from these stations. According to the descriptions of the BEST methods, any step changes that are not repeated in neighboring stations are factored out by adjustment.
So even though the station was bad, and is now good, the good data will be adjusted so that it looks more like the older, bad data.
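A minimal sketch of this concern (this is not the actual BEST or NOAA homogenization code; the changepoint position and bias size are invented for illustration): a naive step adjustment referenced to the station’s own earlier record lifts the post-move data back to the biased level.

```python
# Illustrative sketch only -- NOT the actual BEST or NOAA pairwise
# homogenization algorithm. It demonstrates the concern above: if a
# station's warm bias is *removed* (sensor moved away from asphalt),
# a naive step-change adjustment anchored to the station's own
# earlier record re-imposes the old bias on the new, better data.

def adjust_step(series, changepoint):
    """Shift the segment after `changepoint` so its mean matches the
    mean of the segment before it (a toy homogenization step)."""
    before = series[:changepoint]
    after = series[changepoint:]
    offset = sum(before) / len(before) - sum(after) / len(after)
    return before + [x + offset for x in after]

# True local temperature: a constant 15.0 C (no trend, for clarity).
true_temp = [15.0] * 10

# First 5 readings carry a +1.0 C asphalt bias; then the sensor is
# moved and the bias disappears -- a downward step in the raw record.
raw = [t + 1.0 for t in true_temp[:5]] + true_temp[5:]

adjusted = adjust_step(raw, changepoint=5)
print(adjusted)  # every value is now 16.0: the good data got lifted
```

In this toy case the adjusted series reads 16.0 °C throughout, i.e. the clean post-move readings have been pulled up to match the biased pre-move level, which is exactly the “good data adjusted to look like the old, bad data” outcome described above.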
The hearing is over, so the link is no longer active. Not that it means anything now, but here is the link (while it was still active) at CA:
http://science.edgeboss.net/wmedia-live/science/60333/300_science-hall_110204.asx
And these guys are supposed to be scientists? They are just at the beginning of their work but they already have presented their conclusions to Congress????
This is not science.
Dave
The first link is to the page where it says the hearing is happening, via C-SPAN:
http://science.house.gov/hearing/full-committee-hearing-climate-change
The second link is to instructions for getting to the above link:
http://radiotv.house.gov/
The third link says it’s supposed to be live:
http://energycommerce.house.gov/news/PRArticle.aspx?NewsID=8407
It all started with this link:
http://www.capitolhearings.org/
which came from C-SPAN.
I can’t make it work/play . . . I am no engineer and work from a public library . . .
Now I know why . . .
“U.S. HOUSE OF REPRESENTATIVES
COMMITTEE ON SCIENCE, SPACE, AND TECHNOLOGY
HEARING CHARTER
Climate Change: Examining the Processes Used to Create Science and Policy
Thursday, March 31, 2011
10:00 a.m. to 12:00 p.m.
2318 Rayburn House Office Building”
http://science.house.gov/sites/republicans.science.house.gov/files/documents/hearings/FINAL%20Climate%20Process%20Hearing%20Charter.pdf
Anthony,
yet again, the simple concept of quality control of instrumentation seems to be beyond these people. Everyone who has worked in the real world of instrumentation, and not on some academic, publicly funded, money-no-limit-because-the-taxpayer-is-paying-and-there’s-no-customer project, knows that poor quality is like an iceberg: what you see will be just a small fraction of the problem, and even if you “account” for the bit you can see and remove that part of the problem, the bulk of it will just pop up to rear its ugly head.
That is all they have done: taken data which anyone with any experience in the field knows is not up to the job – it’s carp quality – and, because they lack the experience, they somehow think that if you take a lot of wet-behind-the-ears academics and apply a lot of statistics to a lot of carp, the answer will improve and must be better than that of someone who actually knows what they are talking about.
What is most disappointing about the Muller temperature record whitewash review presented to Congress is that even at the very first announcement of the BEST project, the criteria presented for reviewing the temperature data included absolutely nothing about UHI impacts on the land-based temperature record. This was pointed out in many, many comments on WUWT at the time. Now we have yet another alarmist scientist presenting an alleged independent land-based temperature record review with absolutely no mention whatsoever of UHI issues. What a farce.
polistra says:
March 31, 2011 at 10:20 am
Berkeley. Toldja so.
==========================
Yep.
Anthony
Thank you from Canada ! ☺
Good luck with the paper.
Clive
AdderW says:
March 31, 2011 at 9:24 am
“So, not a single point there…”
hmmmmmm…….okay! I’ll change it to:
Is Muller an undercover Warmist? Isn’t it strange how many warmists have German names? Hansen, Schmidt, Mann, Schellnhuber… and Muller???
Any points now?
Given that you write “The work and results of the surfacestations.org project is a gift to the citizens of the United States,” and that you mention a paper by Fall (published? peer-reviewed?), why do you not reference Menne 2010 (peer reviewed and published in the Journal of Geophysical Research) in your letter?
Prof. Muller is the real deal, an honest scientist. He acknowledges that the preliminary analysis could change in his written testimony, and says why:
“The Berkeley Earth agreement with the prior analysis surprised us, since our preliminary results don’t yet address many of the known biases. When they do, it is possible that the corrections could bring our current agreement into disagreement.
Why such close agreement between our uncorrected data and their [NOAA and others] adjusted data? One possibility is that the systematic corrections applied by the other groups are small. We don’t yet know.”
Prof. Muller also says:
“In our preliminary analysis of these stations, we found a warming trend that is shown in the figure. It is very similar to that reported by the prior groups: a rise of about 0.7 degrees C since 1957. (Please keep in mind that the Berkeley Earth curve, in black, does not include adjustments designed to eliminate systematic bias.)”
Let Prof. Muller and his group work things out, let Anthony and Prof. Muller exchange their data. If the systematic corrections by other groups [NOAA, etc.] are indeed small, but appropriate corrections, yet to be applied by Dr. Muller, are larger, then the agreement will disappear. If adjustments designed to eliminate systematic bias are applied and things change, we will see.
Don’t trash honest scientists like Dr. Muller just because you don’t like his preliminary, uncorrected 2% results. He acknowledges that the correct corrections have yet to be applied.
I’m not sure I agree with the flavor of the majority of posts I see here.
It seems to me that Muller’s agenda is mainly to advance non-Team research spending through some type of ARPA structure. I don’t think that sounds like a terribly bad idea in theory, so long as the usual suspects don’t end up running it. With Obama in the driver’s seat, it would likely end up being run by someone like his former climate czar, though. There are quite a few things that an ARPA-climate agency could do which lie outside the normal channels. How about an honest and reliable survey of the ‘consensus’, for example?
All I did to see the live hearing was to copy, Subcommittee on Energy and Environment, Committee on Science, Space, and Technology into Bing and it was on the first search result.
It’s the dishonest, the deceit that wears us all down, but in the end ensures that people turn against the AGW proponents.
Anthony, I am a retired environmental manager from a Fortune 100 company. Are there any USHCN sites in the Toledo, OH area that still need to be surveyed?
I like Muller’s preliminary report very much. It was articulate and fully covered important uncertainties.
Those uncertainties are going to be the real story. We see, once again, a hockey stick over the relatively short period of 1980 to the present. That is peculiar and does NOT correlate with either industrialization or carbon dioxide changes. It is therefore unexplained.
Total unknowns do NOT justify profoundly destructive, uneconomic actions by governments.
Back in ’08 AW noted a lack of study in station bias (http://wattsupwiththat.com/2008/04/21/this-is-why-you-dont-put-an-official-noaa-temperature-sensor-over-concrete/). Where I work we have about 30 acres of asphalt and roof top surrounded by pasture, and the hawks frequently take advantage of early afternoon updrafts generated here. I sure would like to see a quantitative study of temperature alteration by altitude and asphalt islands of various sizes.
John in L du B says:
March 31, 2011 at 8:03 am
“Ok. Clarify this for me. It appears from this that Muller and his group are being paid to paper over the USHCN climate temperature record, whitewash biased temperature data, and tell Congress that the climate record is ok. He’s doing this with unpublished data shared in confidence, and prejudging the outcome before all the analysis has been completed. Have I got that correct, or am I hyperbolizing here? Is there someone else’s “papers I won’t be reading”?”
Spot on. Muller gives every appearance of fostering a whitewash. He should not have been there today. There is no way to justify his appearance there today. That leaves one to ask whose influence got him there? The Muller Team just might be turning evil.
Noelle says:
March 31, 2011 at 10:51 am
“why do you not reference Menne 2010 (peer reviewed and published in the Journal of Geophysical Research) in your letter?”
HAHAHA !!! Priceless!!!