UPDATES: A number of feckless political commentators have simply missed this response I prepared, so I’m posting it at the top for a day or two. I’ll have a follow-up on what I’ve learned since then in the next day or two. Also, NCDC weighs in at the LA Times, calling BEST’s publicity effort, made without publishing science papers, “seriously compromised”.
Also – in case you have not seen it, this new analysis from an independent private climate data company shows how the siting of weather stations affects the data they produce. – Anthony
——————————————————————————————
As many know, there’s a hearing today in the House of Representatives with the Subcommittee on Energy and Environment, Committee on Science, Space, and Technology and there are a number of people attending, including Dr. John Christy of UAH and Dr. Richard Muller of the newly minted Berkeley Earth Surface Temperature (BEST) project.
There seems to be a bit of a rush here, as BEST hasn’t completed all of their promised data techniques that would be able to remove the different kinds of data biases we’ve noted. That was the promise; that is why I signed on (to share my data and collaborate with them). Yet somehow, much of that has been thrown out the window, and they are presenting some results today without the full set of techniques applied. Based on my current understanding, they don’t even have some of them fully working and debugged yet. Knowing that, today’s hearing presenting preliminary results seems rather topsy-turvy. But post-normal-science political theater is like that.
I have submitted this letter to be included in the record today. It is written for the Members of the committee, to give them a general overview of the issue, so it may seem generalized and previously covered in some areas. It also addresses technical concerns I have, shared by Dr. Pielke Sr., on the issue. I’ll point out that on the front page of the BEST project, they tout openness and replicability, but none of that is available in this instance, even to Dr. Pielke and me. They’ve had a couple of weeks with the surfacestations data, and now, without fully completing the main theme of data cleaning, are releasing early conclusions based on that data, without providing the ability to replicate. I’ve seen some graphical output, but that’s it. What I really want to see is a paper and methods. Our upcoming paper was shared with BEST in confidence.
BEST says they will post Dr. Muller’s testimony with a notice on their FAQs page, which also includes a link to video testimony. So you’ll be able to compare. I’ll put up relevant links later. – Anthony
UPDATE: Dr. Richard Muller’s testimony is now available here. What he proposes about Climate-ARPA is intriguing. I also thank Dr. Muller for his gracious description of the work done by myself, my team, and Steve McIntyre.
A PDF version of the letter below is here: Response_to_Muller_testimony
===========================================================
Chairman Ralph Hall
Committee on Science, Space, and Technology
2321 Rayburn House Office Building
Washington, DC 20515
Letter of response from Anthony Watts to Dr. Richard Muller testimony 3/31/2011
It has come to my attention that data and information from my team’s upcoming paper, shared in confidence with Dr. Richard Muller, is being used to suggest some early conclusions about the state of the quality of the surface temperature measurement system of the United States and the temperature data derived from it.
Normally such scientific debate is conducted in peer reviewed literature, rather than rushed to the floor of the House before papers and projects are complete, but since my team and I are not here to represent our work in person, we ask that this letter be submitted into the Congressional record.
I began studying climate stations in March 2007, stemming from a curiosity about paint used on the Stevenson Screens (thermometer shelters) used since 1892, and still in use today in the Cooperative Observer climate monitoring network. Originally the specification was for lime based whitewash – the paint of the era in which the network was created. In 1979 the specification changed to modern latex paint. The question arose as to whether this made a difference. An experiment I performed showed that it did. Before conducting any further tests, I decided to visit nearby climate monitoring stations to verify that they had been repainted. I discovered they had, but also discovered a larger and troublesome problem; many NOAA climate stations seemed to be next to heat sources, heat sinks, and have been surrounded by urbanization during the decades of their operation.
The surfacestations.org project started in June 2007 as a result of a collaboration begun with Dr. Roger Pielke Sr. at the University of Colorado, who had done a small-scale study (Pielke and Davies 2005) and found identical issues.
Since then, with the help of volunteers, the surfacestations.org project has surveyed over 1000 United States Historical Climatology Network (USHCN) stations, which are chosen by NOAA’s National Climatic Data Center (NCDC) to be the best of NOAA’s volunteer-operated Cooperative Observer network (COOP). The surfacestations.org project was unfunded, relying on the help of volunteers nationwide, plus an extensive amount of my own volunteer time and travel. I have personally surveyed over 100 USHCN stations nationwide. Until this project started, even NOAA/NCDC had not undertaken a comprehensive survey to evaluate the quality of the measurement environment; they only looked at station records.
The work and results of the surfacestations.org project is a gift to the citizens of the United States.
There are two methods of evaluating climate station siting quality. The first is the older 100-foot rule implemented by NOAA http://www.nws.noaa.gov/om/coop/standard.htm which says:
The [temperature] sensor should be at least 100 feet from any paved or concrete surface.
A second siting quality method is for NOAA’s Climate Reference Network (CRN), a high-tech, high-quality electronic network designed to eliminate the multitude of data bias problems that Dr. Muller speaks of. In the 2002 document commissioning the project, NOAA’s NCDC implemented a strict code for placement of stations, to be free of any siting or urban biases.
http://www1.ncdc.noaa.gov/pub/data/uscrn/documentation/program/X030FullDocumentD0.pdf
The analysis of metadata produced by the surfacestations.org project considered both techniques. In my first publication on the issue, with 70% of the USHCN surveyed (Watts 2009), I found that only 1 in 10 NOAA climate stations met the siting quality criteria of either the NOAA 100-foot rule or the newer NCDC CRN rating system. Now, two years later, with over 1000 stations (82.5%) surveyed, the 1-in-10 figure holds true using NOAA’s own published criteria for rating station siting quality.
Figure 1 Findings of siting quality from the surfacestations project
During the nationwide survey, we found that many NOAA climate monitoring stations were sited in what can only be described as suboptimal locations. For example, one of the worst examples was identified in data by Steven McIntyre as having the highest decadal temperature trend in the United States before we actually surveyed it. We found it at the University of Arizona Atmospheric Sciences Department and National Weather Service Forecast Office, where it was relegated to the center of their parking lot.
Figure2 – USHCN Station in Tucson, AZ
Photograph by surfacestations.org volunteer Warren Meyer
This USHCN station, COOP# 028815 was established in May 1867, and has had a continuous record since then. One can safely conclude that it did not start out in a parking lot. One can also safely conclude from human experience as well as peer reviewed literature (Yilmaz, 2009) that temperatures over asphalt are warmer than those measured in a field away from such modern influence.
The surfacestations.org survey found hundreds of other examples of poor siting choices like this. We also found equipment problems related to maintenance and design, as well as the fact that the majority of cooperative observers contacted had no knowledge of their stations being part of the USHCN, and were never instructed to perform an extra measure of due diligence in their record keeping or to keep their siting conditions homogeneous over time.
It is evident that such siting problems do in fact cause changes in absolute temperatures, and may also contribute to new record temperatures. The critically important question is: how do these siting problems affect the trend in temperature?
Other concerns have been ignored in past temperature assessments, such as the effect of concurrent trends in local absolute humidity due to irrigation, which creates a warm bias in the nighttime temperature trends, and the effect of sensor height above the ground on the temperature measurements. These are reported in, for example:
Pielke Sr., R.A., C. Davey, D. Niyogi, S. Fall, J. Steinweg-Woods, K. Hubbard, X. Lin, M. Cai, Y.-K. Lim, H. Li, J. Nielsen-Gammon, K. Gallo, R. Hale, R. Mahmood, S. Foster, R.T. McNider, and P. Blanken, 2007: Unresolved issues with the assessment of multi-decadal global land surface temperature trends. J. Geophys. Res., 112, D24S08, doi:10.1029/2006JD008229
Klotzbach, P.J., R.A. Pielke Sr., R.A. Pielke Jr., J.R. Christy, and R.T. McNider, 2009: An alternative explanation for differential temperature trends at the surface and in the lower troposphere. J. Geophys. Res., 114, D21102, doi:10.1029/2009JD011841.
Steeneveld, G.J., A.A.M. Holtslag, R.T. McNider, and R.A Pielke Sr, 2011: Screen level temperature increase due to higher atmospheric carbon dioxide in calm and windy nights revisited. J. Geophys. Res., 116, D02122, doi:10.1029/2010JD014612.
These issues are not yet dealt with in Dr. Richard Muller’s analysis, and he agrees.
The abstract of the 2007 JGR paper reads:
This paper documents various unresolved issues in using surface temperature trends as a metric for assessing global and regional climate change. A series of examples ranging from errors caused by temperature measurements at a monitoring station to the undocumented biases in the regionally and globally averaged time series are provided. The issues are poorly understood or documented and relate to micrometeorological impacts due to warm bias in nighttime minimum temperatures, poor siting of the instrumentation, effect of winds as well as surface atmospheric water vapor content on temperature trends, the quantification of uncertainties in the homogenization of surface temperature data, and the influence of land use/land cover (LULC) change on surface temperature trends.
Because of the issues presented in this paper related to the analysis of multidecadal surface temperature we recommend that greater, more complete documentation and quantification of these issues be required for all observation stations that are intended to be used in such assessments. This is necessary for confidence in the actual observations of surface temperature variability and long-term trends.
While NOAA and Dr. Muller have produced analyses using our preliminary data that suggest siting has no appreciable effect, our upcoming paper reaches a different conclusion.
Our paper, Fall et al 2011 titled “Analysis of the impacts of station exposure on the U.S. Historical Climatology Network temperatures and temperature trends” has this abstract:
The recently concluded Surface Stations Project surveyed 82.5% of the U.S. Historical Climatology Network (USHCN) stations and provided a classification based on exposure conditions of each surveyed station, using a rating system employed by the National Oceanic and Atmospheric Administration (NOAA) to develop the U.S. Climate Reference Network (USCRN). The unique opportunity offered by this completed survey permits an examination of the relationship between USHCN station siting characteristics and temperature trends at national and regional scales and on differences between USHCN temperatures and North American Regional Reanalysis (NARR) temperatures. This initial study examines temperature differences among different levels of siting quality without controlling for other factors such as instrument type.
Temperature trend estimates vary according to site classification, with poor siting leading to an overestimate of minimum temperature trends and an underestimate of maximum temperature trends, resulting in particular in a substantial difference in estimates of the diurnal temperature range trends. The opposite-signed differences of maximum and minimum temperature trends are similar in magnitude, so that the overall mean temperature trends are nearly identical across site classifications. Homogeneity adjustments tend to reduce trend differences, but statistically significant differences remain for all but average temperature trends. Comparison of observed temperatures with NARR shows that the most poorly-sited stations are warmer compared to NARR than are other stations, and a major portion of this bias is associated with the siting classification rather than the geographical distribution of stations. According to the best-sited stations, the diurnal temperature range in the lower 48 states has no century-scale trend.
The finding that the mean temperature trend shows no statistically significant difference dependent on siting quality, while the maximum and minimum temperature trends do, indicates that the lack of a difference in the mean temperatures is coincidental for the specific case of the U.S. sites, and may not be true globally. At the very least, this raises a red flag on the use of the poorly sited locations for climate assessments, as these locations are not spatially representative.
Whether you believe the century of data we have from the NOAA COOP network is adequate, as Dr. Muller suggests, or believe the poor siting placements and data biases that have been documented in the nationwide climate monitoring network are irrelevant to long-term trends, there are some very compelling and demonstrative actions by NOAA that speak directly to the issue.
1. NOAA’s NCDC created a new hi-tech surface monitoring network in 2002, the Climate Reference Network, with a strict emphasis on ensuring high quality siting. If siting does not matter to the data, and the data is adequate, why have this new network at all?
2. Recently, while resurveying stations that I previously surveyed in Oklahoma, I discovered that NOAA has been quietly removing the temperature sensors from some of the USHCN stations we cited as the worst (CRN4, 5) offenders of siting quality. For example, here are before and after photographs of the USHCN temperature station in Ardmore, OK, within a few feet of the traffic intersection at City Hall:
Figure 3 Ardmore USHCN station, MMTS temperature sensor, January 2009
Figure 4 Ardmore USHCN station, MMTS temperature sensor removed, March 2011
NCDC confirms in their meta database that this USHCN station has been closed, the temperature sensor removed, and the rain gauge moved to another location, the fire station west of town. It is odd that after being in operation since 1946, NOAA would suddenly cease to provide equipment to record temperature from this station just months after it was surveyed by the surfacestations.org project and its problems highlighted.
Figure 5 NOAA Metadata for Ardmore, OK USHCN station, showing equipment list
3. Expanding the search, my team discovered many more instances nationwide where USHCN stations with poor siting identified by the surfacestations.org survey have had their temperature sensors removed, been closed, or been moved. This includes the Tucson USHCN station in the parking lot, as evidenced by NOAA/NCDC’s own online metadata database, shown below:
Figure 6 NOAA Metadata for Tucson USHCN station, showing closure in March 2008
It seems inconsistent with NOAA’s claims of siting effects having no impact that they would need to close a station that had been in operation since 1867, just a few months after our team surveyed it in late 2007 and made its issues known.
It is our contention that many unaccounted-for biases remain in the surface temperature record, that the resulting uncertainty is large, and that systematic biases persist. This uncertainty and these systematic biases need to be addressed not only nationally, but worldwide. Dr. Richard Muller has not yet examined these issues.
Thank you for the opportunity to present this to the Members.
Anthony Watts
Chico, CA
Is there a chance that Dr. Muller and the “BEST” team are not a bunch of venal pseudo scientists who hide data and methods to promote a political agenda? We’ve studied this issue, and our preliminary answer is no.
/sarc
At http://www.skepticalscience.com/news.php?p=1&t=214&&n=123 “Kforestcat” (comment 21) addresses the problem of a cooling anomaly in Menne’s study, but none of the folk at Skeptical Science seemed to understand his point. As I understand him, he is saying that as the ambient temperature approaches that of the interfering heat source, the bias decreases. The validity of the assertion is beyond dispute. E.g., the condensation coils of an AC unit become less efficient at higher temperatures, and contribute less heat to a nearby thermometer. Or, as air temperature comes closer to that of hot asphalt, basic thermodynamics requires that the asphalt become a less efficient heat source. Of course the asphalt’s temperature is influenced secondarily by air temperature, but primarily by the rays of the sun.
The so-called “cooling bias” is predictable, then, by elementary thermodynamics.
eadler says:
March 31, 2011 at 8:15 am
It seems, given the prior finding by NOAA, that what Muller has reported as his preliminary finding is pretty solid, despite the letter written by Anthony Watts.
Really!?
Without seeing the data or the methodology, you simply accept his conclusions in testimony to Congress!
The data and methodology that were supposed to be revealed to all before any conclusions were drawn.
Nope. I’m not buying it at all.
John A. Fleming . . . . I am with you . . . I look how many times the music industry has done the same thing . . . how many ways are there to sell the same thing . . . endless for the naked greedy. . .
I tell you, many, most business plans include planned obsolescence; it is the nature of “capitalism” . . . .
An anti-capitalist I am not . . but, I never thought Huck Finn was smart like so many I considered him rotten thief who used his “friends” . . . . I remember those famous words in Alex Haley’s “Roots” . . . But WHY? Why don’t you wanna be my slave, no more? . . . . I thought we was FRIENDS?
“Don’t assume uniform warming – expert”: “…at a three-day science conference starting in Wellington on Thursday are looking at implications of new work on climate change.”
http://www.iol.co.za/scitech/science/environment/don-t-assume-uniform-warming-expert-1.1049979
I rest my case . . . . for now!
I found the contrast between Prof. Muller’s and Prof. Christy’s testimonies to be quite interesting.
The climate ‘science’ establishment should not be surprised that people no longer trust a word they say.
The BEST team promised they would produce an ‘open source’ project to provide the best global temperature data yet. However, before they have even properly started, they are already claiming initial results validate the other dubious data sets in front of a House of Representatives hearing.
Are these people really stupid enough to think the public will swallow this?
John A. Fleming says:
March 31, 2011 at 1:58 pm: There’s already an ARPA-like foundation in place, associated with BEST, and ready to take charge of the funds: http://www.kavlifoundation.org/ Check ’em out, these guys have BIG ideas.
eadler says: March 31, 2011 at 10:05 am
Anyone who cannot see that urban encroachment raises temperatures relative to a rural location over the same time period (i.e., the encroached area has a higher rate of warming, with higher highs and higher lows than the rural site, as has been shown here at numerous stations) cannot be shown anything. Don’t forget the migration of monitoring stations to airports, and the elimination of rural sites, enhancing the delusion of warming. As for the siting papers you are so enamoured with: nope. I spent a large part of my engineering career developing temperature measurement and process control to +/- .1 deg. F, +/- 3 sigma, decades ago, and know this is false empirically.
A few notes.
There is nothing wrong whatsoever with Muller and company releasing preliminary results to congress or the public or to private citizens. That is the whole point of TRANSPARENCY and OPENNESS.
They clearly state this is a 2% sample. They clearly state that the final results may change things. I am QUITE SURE that the addition of more stations will NOT change the answer. Welcome to the Law of Large Numbers. We have known for quite some time that ANY collection of 100 sites picked randomly gives you the same answer.
Adding more stations will only do one thing and one thing only. It will narrow the errors due to spatial sampling.
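Mosher’s Law of Large Numbers claim is easy to sanity-check with a toy simulation. Everything here is invented for illustration (a synthetic 1000-station network with a made-up mean trend and scatter), but it shows the mechanism he is invoking: random 100-station subsets all land close to the full-network mean.

```python
import random

random.seed(42)

# Hypothetical population: 1000 station trends in degC/decade,
# centered on 0.2 with generous station-to-station scatter.
stations = [random.gauss(0.2, 0.5) for _ in range(1000)]
full_mean = sum(stations) / len(stations)

# Several independent 100-station random samples: each one lands
# close to the full-network mean (standard error ~0.05 degC/decade).
for trial in range(5):
    sample = random.sample(stations, 100)
    sample_mean = sum(sample) / len(sample)
    print(f"trial {trial}: sample mean {sample_mean:+.3f} "
          f"vs full mean {full_mean:+.3f}")
```

Adding stations beyond the sample shrinks the sampling error roughly as one over the square root of the sample size, which is exactly the “narrowing” Mosher describes.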
Second: the methodology of treating any siting change as a new station will also NOT CHANGE the curve in any significant way. What it will do is increase the uncertainty. So there will be a trade-off between the uncertainty due to spatial sampling (which goes down) and the uncertainty due to changing station features (which will go up). But the shape of the curve will not change in any significant way.
Especially since 1979. We know the record from 1979 on is good; we know this because it tracks well with UAH and RSS. The warming will not disappear, cannot disappear.
With respect to the microsite issue. This is what we know.
Microsite irregularities can COOL a station and they can WARM a station. We have no numerical evidence of
A. the SIZE of the effect
B. the FREQUENCY of the effect
C. the DIRECTION of the effect
D. the overall impact of the effect.
Some thoughts on this last item. These thoughts are based on 1) preliminary analysis conducted by JohnV and myself, 2) preliminary analysis conducted by Menne, and 3) field experiments conducted by the scientist who came up with the rating system.
A) The size of the effect. On any given day, IF the meteorological conditions are right, you can see large effects. Roughly speaking, a CRN 3 could see a cooling of 3C or a warming of 3C. That’s NOT an effect you see every day. Conditions have to be right to see an effect of that size. A CRN 4 could see, ON SOME DAYS, a 4C cooling or a 4C warming. Please note that you don’t see these effects every day. Many things can modulate this effect. I will list a few:
a) clouds
b) rain
c) wind speed
Fundamentally, if the bias happened every day of the year, you would have no difficulty finding the bias signal, even with small samples, even with a simple station comparison. But we don’t find consistent and persistent biases of this magnitude. Why not? See the next point.
B) FREQUENCY. The effect does not happen every day of the year, or even every day of a season. Take, for example, the effect of air conditioners. The air conditioner can only impact the record if it is running, and only if the temperature of the air it exhausts is GREATER THAN the Tmax for the day. If the AC comes on AFTER Tmax has been recorded, then it can’t bias the record. The same goes for SHADING and cooling: shading is seasonally dependent. The same goes for rain, clouds, and wind. All of these mitigate the effect. In the one field test performed, the bias was seen as something on the order of .1C. That means over the course of a long time you see biases that spike high and spike low. They don’t happen every day. When you look at them in TOTAL, the cumulative effect is small. It’s small because there are both positive and negative biases; it’s small because conditions have to be RIGHT for the bias to occur: hot sunny day, no wind, and the AC coming on at just the right time.
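A toy simulation can illustrate why a bias that only fires when conditions line up averages out to something small over a long record. All the probabilities and spike sizes below are invented for illustration, and warm and cool spikes are assumed equally likely, which is an assumption, not a measurement:

```python
import random

random.seed(0)

DAYS = 3650  # ten years of daily Tmax readings
biased_days = 0
total_bias = 0.0

for _ in range(DAYS):
    # The bias fires only when several conditions line up
    # (all probabilities here are made up for the sketch):
    calm = random.random() < 0.3    # little wind to mix the air
    sunny = random.random() < 0.5   # no cloud or rain damping
    timing = random.random() < 0.2  # AC exhaust coincides with Tmax
    if calm and sunny and timing:
        # When it fires, the spike can be warm or cool,
        # assumed equally likely for this sketch.
        spike = random.choice([+1, -1]) * random.uniform(0.5, 3.0)
        total_bias += spike
        biased_days += 1

mean_bias = total_bias / DAYS
print(f"bias fired on {biased_days} of {DAYS} days; "
      f"mean bias over the record = {mean_bias:+.3f} C")
```

Individual spikes of several degrees occur, but because they are rare and two-signed, the decade-long mean bias comes out near zero, which is the shape of Mosher’s argument. If warm spikes were systematically more likely than cool ones, the mean would not cancel, which is exactly the empirical question at issue.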
C) Direction of the effect. The bias can be UP or DOWN; we don’t know how they balance out. How is shading during the day (Tmax goes down) balanced by higher Tmins due to the surface (asphalt) holding more heat? Nobody knows (mathematically) how these balance.
D) The overall impact. When you look at the effect size, the effect frequency, and the direction of the effect, it may turn out that the overall impact is SMALL. In fact, I suspect it will be small BECAUSE preliminary research has ruled out a BIG effect.
We know the effect size is small SIMPLY BY COMPARING WITH UAH.
If the land record is represented as L = T + B, where T = truth and B = microsite bias, then we can estimate the size of the bias by simply comparing UAH to L.
UAH is not affected by any siting bias. Because UAH and RSS track the land record closely, we know the bias must be small. For example, if the bias were 1C, we would expect GISS or CRU to show much higher temps or trends than UAH. They don’t. From that we can conclude that the bias must be small; by small I mean something on the order of .1C to .15C. Finding a bias that small will be very difficult.
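The L = T + B argument can be sketched numerically. This is a minimal sketch with entirely made-up numbers, assuming (as the comment does) that the satellite series is an unbiased proxy for the true anomaly T:

```python
# Sketch of the L = T + B argument, with made-up numbers.
# Assumption: the satellite record measures the true anomaly T with
# no siting bias, while the land record L carries a bias B on top.
true_anomaly = [0.015 * year for year in range(30)]  # hypothetical T, degC
microsite_bias = 0.12                                # hypothetical constant B

land = [t + microsite_bias for t in true_anomaly]    # L = T + B
satellite = list(true_anomaly)                       # unbiased proxy for T

# Averaging the land-minus-satellite differences recovers B.
diffs = [l - s for l, s in zip(land, satellite)]
estimated_bias = sum(diffs) / len(diffs)
print(f"estimated bias B = {estimated_bias:.2f} C")
```

Note the limitation of the sketch: a constant B offsets the level of the land record but not its trend, so in practice the comparison constrains a time-varying bias only to the extent that the two series’ trends and wiggles agree.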
In the end, here is what you will find. You will find that the global LAND temp calculated by BEST will be within .15C of that calculated by other systems. You will find that if you pick the very best stations the answer will not change (±.15C).
You will have a better understanding of the real uncertainty. But the world will still be warming. CO2 will still cause warming. The question will be what it has ALWAYS BEEN: how much warming? Is it dangerous? To whom? And what can we do? What should we do?
Skepticism about AGW will take a step FORWARD when the land record is seen as being fairly accurate. Then the conversation should turn to REAL questions: how much warming? Is it dangerous? Etc. And hopefully people will put their collective energy into that.
guam says: Intriguing post. Bad enough to be using corrupted sites; it’s even worse that we are forced to rely on corrupted scientists.
There seems to be a lot of missing the point around here. Anthony basically admitted above that the paper he is about to publish came to the same conclusion as Dr Muller’s BEST project, i.e. that good rural and bad city stations have the same trend in average daily temps. Anthony says the trend in diurnal range differs between good and bad stations, but that just means the daily high and low temps of city stations are getting closer together, though their average still has the same trend as the good rural stations.
What is really interesting about this is what is happening in the city stations. City stations are getting warmer at night than rural stations, no mystery there. What is strange is that the city stations are getting cooler during the hottest part of the day compared to the rural stations!!!? So apparently as you build up more and more asphalt and such into a big city around a formerly rural or small town thermometer, it gets cooler during the hot part of the day! Something really strange is going on here! Note that I’m not saying city high temps are actually getting cooler than they used to be, I’m just saying that they’re not getting hotter as fast as the highs at rural thermometers. Also note that this is not the conclusion of Muller or BEST, this appears to be Anthony’s conclusion based on Anthony’s own study.
I see little reason to condemn Muller here. There are videos of him giving the hockey stick trick a well deserved slamming like I would expect of a good scientist. It doesn’t look quite right for him to announce so soon before making his data and methods available, but he probably had to make a decision about appearing at the Congressional hearing when the opportunity probably wouldn’t be there later. Since his conclusions were consistent with Anthony’s, he probably decided to go for it. But then I do wonder why he didn’t make more of the odd urban cooling effect.
I should be more circumspect. It doesn’t matter how strong your rectitude starts out as. A big pile of free government money with limited oversight, no goal, and an incestuous grantor/grantee relationship will corrupt anyone and everyone. A Climate-ARPA, just the idea of it, should make you curl up in a fetal ball and whimper “No, no, no, …!”.
I gotta ask, is it naivete, or duplicity, or excessive self-regard, that makes a person suggest such a thing to Congress?
Lubos
“Concerning BEST, I am confused about yet another thing – the promised transparency about everything. As far as I can see, BEST is currently offering an even worse transparency, at least to me, than any other previous team. Is that just me? I can’t even get the final data. And they’re already presenting “results” to the Congress?”
Hold your horses.
They are being transparent about everything. They are releasing preliminary findings.
That is being transparent. They showed those preliminary results to me, to Anthony, to Zeke, and to Congress. As for the final data: they will make it available when they publish, just like Anthony will make his data available when he publishes.
I think you and others assumed that the BEST approach would somehow disappear the warming. It can’t. It won’t. It will give you a better estimate of the uncertainty, but the final answer will be in the ballpark of GISS and CRU, give or take .15C.
And that won’t change even after you look at UHI. UHI is not that large. We know this by looking at rural-only sites. I know this from looking at long rural records. We know this by looking at UAH.
Time to focus on the real issue: sensitivity
REPLY: Sorry Mosh, I completely disagree with you on this related to transparency. More at a future date.- Anthony
Anybody ready for some good news, after reading this distressing post?
Here is the Committee Chairman’s opening statement:
http://science.house.gov/sites/republicans.science.house.gov/files/documents/hearings/033111_hall.pdf
Enjoy. (1 page)
Hang in there Anthony, Et al.
REPLY: Sorry Mosh, I completely disagree with you on this related to transparency. More at a future date.- Anthony
#####Care to be more transparent about the lack of transparency?
There are two modes of working openly that I know of.
1. Where everyone can watch your every step, even the false steps: full open access to the dev team’s commits. (We worked this way at Openmoko.)
2. Where you provide access after you’ve taken your final step.
Ideally we would like to see number 1, but as we know, that approach also causes confusion. We see that in the ice reports, for example. We see that in the UAH records.
What’s absolutely required is #2. There are also approaches where you give limited access until #2.
Not sure at all why my earlier post was snipped. My point was and remains Muller’s report was no surprise to me – it was expected. At least by me. I really do think the “team” won this big and it is just one more aspect of the frau* going on in climate science. I think Muller was a ringer from the outset.
Most of the influence exerted on the USHCN Ver. 2 dataset by methodological biases and siting issues is accounted for in the data adjustment processes, as shown in Menne et al. 2009 and Menne et al. 2010. However, that does not mean it is good science to leave the most critically problematic stations running or to continue including them in datasets. The problem you have with the removal of stations is a very contentious non-issue. Is the goal of surfacestations.org not to identify the major issues surrounding methodology and siting? If so, does the removal of problematic stations not constitute a major achievement in data recording quality control for surfacestations.org? Whether statistical adjustments account for the station-associated biases or not, there is no doubt that a top-down approach of removing these stations also helps mitigate bias. The removal of stations from the USHCN is no secret or dubious act by the NCDC;
“The actual subset of stations constituting the HCN has changed twice since 1987. By the mid-1990s, station closures and relocations had already forced a reevaluation of the composition of the U.S. HCN as well as the creation of additional composite stations. The reevaluation led to 52 station deletions and 54 additions, for a total of 1,221 stations (156 of which were composites). Since the 1996 release (Easterling et al. 1996), numerous station closures and relocations have again necessitated a revision of the network. As a result, HCN version 2 contains 1,218 stations, 208 of which are composites; relative to the 1996 release, there have been 62 station deletions and 59 additions.” – Menne et al. 2009
Even though the reanalysis of the USHCN data prompted by your work was performed on an incomplete document not intended for that purpose, it still accounts for the issues addressed in that report. It is my hope that your complete and thorough exploration of the USHCN network's siting and methodological problems (slated to be published this year?) will prompt another, more thorough reanalysis of the USHCN data, and a Version 3 dataset. I certainly think that your work represents an important critique of the quality control of the data-recording stations, methodology, and practices of government agencies.
I am curious as to why a 2010 post on this blog says the Menne et al. 2010 paper was based on a survey of 43% of the network and that (at the time, in 2010) your dataset covered 87% of the network, yet this post/letter says Menne et al. 2009 was based on the same report, which surveyed 70% of the network, and that you are currently at 82.5% of the network surveyed. These are two major inconsistencies in what you have said, and I would appreciate your addressing them for me, especially as this letter was intended to be included in the record of today's Climate Change hearing of the Subcommittee on Energy and Environment, Committee on Science, Space, and Technology.
“Without the efforts of Anthony Watts and his team, we would have only a series of anecdotal images of poor temperature stations, and we would not be able to evaluate the integrity of the data. This is a case in which scientists receiving no government funding did work crucial to understanding climate change.”
Yes, indeed. And while I’m proud of Anthony and the team’s efforts (and my own small participation) it is a disgrace that what Muller rightly describes as “crucial” had to be done by volunteers with no official sanction or assistance.
Billions and trillions they are willing to spend. A few million for “crucial”? Not so much, if it will lead to embarrassment.
dp says:
March 31, 2011 at 4:37 pm
“My point was and remains Muller’s report was no surprise to me – it was expected. At least by me. I really do think the “team” won this big and it is just one more aspect of the frau* going on in climate science. I think Muller was a ringer from the outset.”
Muller is not a member of the Hockey Team. He is an astrophysicist better known for his Nemesis theory: a hypothetical companion red dwarf or brown dwarf orbiting our Sun with a roughly 26-million-year period, which has remained undetected because most known red dwarfs have never had their distances from the Sun measured, and because we cannot easily detect brown dwarfs in interstellar space, at least until the data from WISE is fully analysed. Nemesis is hypothesized to disturb the orbits of Oort Cloud comets, which in turn are thought to cause the impacts that trigger the periodic mass extinctions of life on Earth.
It is an outrage that, of the 0.7 C increase since 1957, 0.6 C is blamed on AGW. Behind that statement is the assumption/belief that natural warming from 1850 suddenly ended in 1957. From 1910 to 1942 the temperature rose 0.44 C; that is said to be “natural”. From 1942 to 1965 the temperature dropped 0.2 C; that is supposed to be warming suppressed by aerosol pollution or (depending on the writer) a “natural” cooling. So AGW is responsible, except when it is not.
Such short-term, non-critical thinking.
Class act, Anthony!
And the quote of the day?
“They cant even clean the mess up properly :)”
(ouch)
😀
Steve Mosher: “We have known for quite some time that ANY collection of 100 sites picked randomly gives you the same answer.”
Is that really true?
If so, then are you saying that any two cherry-picked selections of 100 sites, which would by definition both be subsets of all possible random selections, would show the same answer?
Suggests to me that if that is true then the data is little more than bollocks.
Excuse the vernacular.
Is BEST a sneaky way to soften the climbdown? Buy some time, protect some butts, and later show more honest results to save their own? What, me cynical toward climatology?
MackemX says:
March 31, 2011 at 5:14 pm
Steve Mosher: “We have known for quite some time that ANY collection of 100 sites picked randomly gives you the same answer.”
“Is that really true?
If so, then are you saying that any two cherry picked selections, which would, by definition, both be subsets of all random selections, of 100 sites would show the same answer?”
Because any random sample contains, on average, the same proportion of sites that are absolute crap.
i.e. Garbage A In = Garbage A Out = Garbage B In = Garbage B Out
It’s axiomatic.
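Mosher’s sampling claim can be sanity-checked with a toy simulation (a sketch with made-up numbers, not BEST’s data or method; the station count, trend, bias, and noise levels here are all assumptions): if every station in a synthetic network shares one underlying trend but carries its own fixed siting bias and year-to-year noise, then any random 100-station subset recovers nearly the same trend, because the biases average out.

```python
import random

random.seed(42)

# Hypothetical network: 1000 stations, each reporting a 30-year series.
# Every station sees the same underlying trend (0.02 C/yr) plus its own
# fixed siting bias and independent year-to-year noise.
N_STATIONS, N_YEARS, TREND = 1000, 30, 0.02
stations = []
for _ in range(N_STATIONS):
    bias = random.gauss(0, 1.0)  # siting bias, constant in time
    series = [TREND * yr + bias + random.gauss(0, 0.5) for yr in range(N_YEARS)]
    stations.append(series)

def subset_trend(sample):
    """Ordinary least-squares slope of the mean series of a station sample."""
    mean_series = [sum(s[yr] for s in sample) / len(sample)
                   for yr in range(N_YEARS)]
    xbar = (N_YEARS - 1) / 2
    ybar = sum(mean_series) / N_YEARS
    num = sum((yr - xbar) * (y - ybar) for yr, y in enumerate(mean_series))
    den = sum((yr - xbar) ** 2 for yr in range(N_YEARS))
    return num / den

# Draw several random 100-station subsets and compare the recovered trends.
trends = [subset_trend(random.sample(stations, 100)) for _ in range(20)]
print(min(trends), max(trends))  # all cluster tightly around the true 0.02 C/yr
```

Note what this does and does not show: constant biases cancel in the slope no matter how bad they are, which is why “Garbage A In = Garbage B In” gives the same answer; it says nothing about biases that themselves trend over time, which is the siting concern surfacestations.org raises.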