Thirty-year temperature trends are shown to be lower when computed from well-sited, high-quality NOAA weather stations that require no adjustments to the data.
This was in AGU’s press release news feed today. At about the time this story publishes, I am presenting it at the AGU 2015 Fall meeting in San Francisco. Here are the details.
NEW STUDY OF NOAA’S U.S. CLIMATE NETWORK SHOWS A LOWER 30-YEAR TEMPERATURE TREND WHEN HIGH QUALITY TEMPERATURE STATIONS UNPERTURBED BY URBANIZATION ARE CONSIDERED
Figure 4 – Comparison of 30-year trends for compliant (Class 1,2) USHCN stations, non-compliant (Class 3,4,5) USHCN stations, and NOAA final adjusted V2.5 USHCN data in the Continental United States
EMBARGOED UNTIL 13:30 PST (16:30 EST) December 17th, 2015
SAN FRANCISCO, CA – A new study of the surface temperature record, presented at the 2015 Fall Meeting of the American Geophysical Union, suggests that the 30-year trend of temperatures for the Continental United States (CONUS) since 1979 is about two-thirds as strong as official NOAA temperature trends.
Using NOAA’s U.S. Historical Climatology Network (USHCN), which comprises 1218 weather stations in the CONUS, the researchers identified a 410-station subset of “unperturbed” stations that have not been moved and have had no equipment changes or changes in time of observation, and thus require no “adjustments” to their temperature record to account for these problems. The study focuses on finding trend differences between well-sited and poorly sited weather stations, using a WMO-approved metric, Leroy (2010)1, to classify and assess measurement quality based on proximity to artificial heat sources and heat sinks that affect temperature measurement. An example is shown in Figure 2 below, showing the NOAA USHCN temperature sensor for Ardmore, OK.
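For readers unfamiliar with the Leroy (2010) scheme, a toy sketch of a distance-based classifier may help fix the idea. The thresholds below are a simplified illustration only; the actual scheme also weighs shading, slope, vegetation, and the surface area of nearby heat sinks, so this is not the study's classification code.

```python
def leroy_class(distance_m: float) -> int:
    """Toy classifier inspired by the Leroy (2010) distance criteria for
    temperature sensors. Simplified illustration: the real scheme also
    considers shading, slope, vegetation, and heat-sink area."""
    if distance_m >= 100:
        return 1  # no artificial heat source or sink within 100 m
    if distance_m >= 30:
        return 2
    if distance_m >= 10:
        return 3
    if distance_m >= 1:
        return 4  # artificial heat sources within 10 m
    return 5      # sensor immediately adjacent to an artificial source

print(leroy_class(150), leroy_class(25), leroy_class(0.5))
```

Under this sketch, Class 1/2 stations ("compliant" in the figures) are those far from any artificial surface, while Class 3/4/5 stations sit progressively closer to heat sources like the one at Ardmore, OK.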
Following up on a paper published by the authors in 2010, Analysis of the impacts of station exposure on the U.S. Historical Climatology Network temperatures and temperature trends2, which concluded:
Temperature trend estimates vary according to site classification, with poor siting leading to an overestimate of minimum temperature trends and an underestimate of maximum temperature trends, resulting in particular in a substantial difference in estimates of the diurnal temperature range trends
…this new study is presented at AGU session A43G-0396 on Thursday, Dec. 17th at 13:40 PST and is titled Comparison of Temperature Trends Using an Unperturbed Subset of The U.S. Historical Climatology Network.
A 410-station subset of U.S. Historical Climatology Network (version 2.5) stations is identified that experienced no changes in time of observation or station moves during the 1979-2008 period. These stations are classified based on proximity to artificial surfaces, buildings, and other such objects with unnatural thermal mass using guidelines established by Leroy (2010)1. The United States temperature trends estimated from the relatively few stations in the classes with minimal artificial impact are found to be collectively about 2/3 as large as US trends estimated in the classes with greater expected artificial impact. The trend differences are largest for minimum temperatures and are statistically significant even at the regional scale and across different types of instrumentation and degrees of urbanization. The homogeneity adjustments applied by the National Centers for Environmental Information (formerly the National Climatic Data Center) greatly reduce those differences but produce trends that are more consistent with the stations with greater expected artificial impact. Trend differences are not found during the 1999-2008 sub-period of relatively stable temperatures, suggesting that the observed differences are caused by a physical mechanism that is directly or indirectly caused by changing temperatures.
1. Comprehensive and detailed evaluation of station metadata, on-site station photography, satellite and aerial imaging, street-level Google Earth imagery, and curator interviews has yielded a well-distributed 410-station subset of the 1218-station USHCN network that is unperturbed by time-of-observation changes, station moves, or rating changes, and that has a complete or mostly complete 30-year dataset. It must be emphasized that the perturbed stations dropped from the USHCN set show significantly lower trends than those retained in the sample, both for well and poorly sited station sets.
2. Bias at the microsite level (the immediate environment of the sensor) in the unperturbed subset of USHCN stations has a significant effect on the mean temperature (Tmean) trend. Well sited stations show significantly less warming from 1979 – 2008. These differences are significant in Tmean, and most pronounced in the minimum temperature data (Tmin). (Figure 3 and Table 1)
3. Equipment bias (CRS v. MMTS stations) in the unperturbed subset of USHCN stations has a significant effect on the mean temperature (Tmean) trend when CRS stations are compared with MMTS stations. MMTS stations show significantly less warming than CRS stations from 1979 – 2008. (Table 1) These differences are significant in Tmean (even after upward adjustment for MMTS conversion) and most pronounced in the maximum temperature data (Tmax).
4. The 30-year Tmean temperature trend of unperturbed, well sited stations is significantly lower than the Tmean temperature trend of NOAA/NCDC official adjusted homogenized surface temperature record for all 1218 USHCN stations.
5. We believe the NOAA/NCDC homogenization adjustment causes well sited stations to be adjusted upwards to match the trends of poorly sited stations.
6. The data suggest that the divergence between well and poorly sited stations is gradual, not a result of a spurious step change due to poor metadata.
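To make the trend comparison underlying these findings concrete, here is a minimal sketch (using made-up station anomalies, not the study's data) of how a 30-year least-squares trend can be computed and compared between a well-sited and a poorly sited station:

```python
import numpy as np

def trend_c_per_decade(years, temps):
    """Least-squares linear trend in degrees C per decade."""
    slope_per_year = np.polyfit(np.asarray(years, float),
                                np.asarray(temps, float), 1)[0]
    return slope_per_year * 10.0

# Hypothetical annual anomalies, 1979-2008 (invented for illustration):
# the well-sited station warms at 0.2 C/decade, the poorly sited one
# at 0.3 C/decade, each with a little measurement noise added.
years = np.arange(1979, 2009)
rng = np.random.default_rng(0)
well_sited = 0.02 * (years - 1979) + rng.normal(0, 0.1, years.size)
poorly_sited = 0.03 * (years - 1979) + rng.normal(0, 0.1, years.size)

print(f"well sited:   {trend_c_per_decade(years, well_sited):.2f} C/decade")
print(f"poorly sited: {trend_c_per_decade(years, poorly_sited):.2f} C/decade")
```

In the study itself, trends like these are computed per station and then aggregated by siting class; the "about 2/3" figure is the ratio of the Class 1/2 aggregate trend to the Class 3/4/5 aggregate trend.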
The study is authored by Anthony Watts and Evan Jones of surfacestations.org , John Nielsen-Gammon of Texas A&M , John R. Christy of the University of Alabama, Huntsville and represents years of work in studying the quality of the temperature measurement system of the United States.
Lead author Anthony Watts said of the study: “The majority of weather stations used by NOAA to detect climate change temperature signal have been compromised by encroachment of artificial surfaces like concrete, asphalt, and heat sources like air conditioner exhausts. This study demonstrates conclusively that this issue affects temperature trend and that NOAA’s methods are not correcting for this problem, resulting in an inflated temperature trend. It suggests that the trend for U.S. temperature will need to be corrected.” He added: “We also see evidence of this same sort of siting problem around the world at many other official weather stations, suggesting that the same upward bias on trend also manifests itself in the global temperature record”.
The full AGU presentation can be downloaded here: https://goo.gl/7NcvT2
1. Leroy, M. (2010): Siting Classification for Surface Observing Stations on Land, JMA/WMO Workshop on Quality Management in Surface, Climate and Upper-air Observations, Tokyo, Japan, 27-30 July 2010
2. Fall et al. (2010): Analysis of the impacts of station exposure on the U.S. Historical Climatology Network temperatures and temperature trends, https://pielkeclimatesci.files.wordpress.com/2011/07/r-367.pdf
Abstract ID and Title: 76932: Comparison of Temperature Trends Using an Unperturbed Subset of The U.S. Historical Climatology Network
Final Paper Number: A43G-0396
Presentation Type: Poster
Session Date and Time: Thursday, 17 December 2015; 13:40 – 18:00 PST
Session Number and Title: A43G: Tropospheric Chemistry-Climate-Biosphere Interactions III Posters
Location: Moscone South; Poster Hall
Some side notes.
This work is a continuation of the surface stations project started in 2007, our first publication, Fall et al. in 2010, and our early draft paper in 2012. Putting out that draft paper in 2012 provided us with valuable feedback from critics, and we’ve incorporated that into this effort. Even input from openly hostile professionals, such as Victor Venema, has been highly useful, and I thank him for it.
Many of the valid criticisms of our 2012 draft paper centered on the Time of Observation (TOBs) adjustments that have to be applied to the hodge-podge of stations with issues in the USHCN. Our view is that trying to retain stations with dodgy records and adjusting the data is a pointless exercise. We chose simply to locate all the stations that DON’T need any adjustments and use those, thereby sidestepping that highly contentious problem completely. Fortunately, there were enough in the USHCN: 410 out of 1218.
It should be noted that the Class 1/2 station subset (the best stations we have located in the CONUS) can be considered an analog to the Climate Reference Network (CRN), in that these stations are reasonably well distributed across the CONUS and, like the CRN, require no adjustments to their records. The CRN consists of 114 commissioned stations in the contiguous United States; our subset is comparable in distribution. This should be noted about the CRN:
One of the principal conclusions of the 1997 Conference on the World Climate Research Programme was that the global capacity to observe the Earth’s climate system is inadequate and deteriorating worldwide and “without action to reverse this decline and develop the GCOS [Global Climate Observing System], the ability to characterize climate change and variations over the next 25 years will be even less than during the past quarter century” (National Research Council [NRC] 1999). In spite of the United States being a leader in climate research, long term U.S. climate stations have faced challenges with instrument and site changes that impact the continuity of observations over time. Even small biases can alter the interpretation of decadal climate variability and change, so a substantial effort is required to identify non-climate discontinuities and correct the station records (a process called homogenization). Source: https://www.ncdc.noaa.gov/crn/why.html
The CRN has a decade of data, and it shows a pause in the CONUS. Our subset of adjustment-free, unperturbed stations spans over 30 years. We think it is well worth looking at that data and ignoring the data that requires loads of statistical spackle to patch it up before it is deemed usable. After all, that’s the stated reason the CRN was created.
We allow for one and only one adjustment in the data, and only because it is based on physical observations and is truly needed. We use the MMTS adjustment noted in Menne et al. 2009 and 2010 for the MMTS exposure housing versus the old wooden-box Cotton Region Shelter (CRS), which has a warm bias due mainly to paint and maintenance issues. The MMTS gill shield is a superior exposure system that prevents bias from daytime short-wave and nighttime long-wave thermal radiation. The CRS requires yearly painting, and that often gets neglected, resulting in exposure systems that look like this:
See below for a comparison of the two:
Some might wonder why we use a 1979-2008 comparison when this is 2015. The reason is so that this speaks directly to Menne et al. 2009 and 2010, papers produced by NOAA/NCDC to defend their adjustment methods for the USHCN from criticisms I had raised about the quality of the surface temperature record, such as this book in 2009: Is the U.S. Surface Temperature Record Reliable? That sent NOAA/NCDC into a tizzy, and they responded with a hasty, ghost-written flyer they circulated. In our paper, we extend the comparisons to the current USHCN dataset as well as the 1979-2008 period.
We are submitting this for publication in a well-respected journal. No, I won’t say which one, because we don’t need any attempts at journal gate-keeping like we saw in the Climategate emails, e.g., “I can’t see either of these papers being in the next IPCC report. Kevin and I will keep them out somehow — even if we have to redefine what the peer-review literature is!” and “I will be emailing the journal to tell them I’m having nothing more to do with it until they rid themselves of this troublesome editor.”
When the journal article publishes, we’ll make all of the data, code, and methods available so that the study is entirely replicable. We feel this is very important, even if it allows unscrupulous types to launch “creative” attacks via journal publications, blog posts, and comments. When the data and paper are available, we’ll welcome real and well-founded criticism.
It should be noted that many of the USHCN stations we excluded as unsuitable (because of station moves, equipment changes, TOBs changes, etc.) had lower trends that would have bolstered our conclusions.
The “gallery” server from that 2007 surfacestations project that shows individual weather stations and siting notes is currently offline, mainly due to it being attacked regularly and that affects my office network. I’m looking to move it to cloud hosting to solve that problem. I may ask for some help from readers with that.
We think this study will hold up well. We have been very careful, slow, and meticulous. I admit that the draft paper published in July 2012 was rushed, mainly because I believed that Dr. Richard Muller of BEST was going before Congress again the next week, using data I provided (which he had agreed to use only for publications) as a political tool. Fortunately, he didn’t appear on that panel. But the feedback we got from that effort was invaluable. We hope this pre-release today will also provide valuable criticism.
People might wonder if this project was funded by any government, entity, organization, or individual; it was not. It was all done on free time, without pay, by all involved. That is another reason we took our time: there was no “must produce by” funding requirement.
Dr. John Nielsen-Gammon, the state climatologist of Texas, performed all of the statistical significance analysis, and his opinion is reflected in this statement from the introduction.
Dr. Nielsen-Gammon has been our harshest critic from the get-go; he has independently reproduced the station ratings with the help of his students and created his own series of tests on the data and methods. It is worth noting that this is his statement:
The trend differences are largest for minimum temperatures and are statistically significant even at the regional scale and across different types of instrumentation and degrees of urbanization.
The p-values from Dr. Nielsen-Gammon’s statistical significance analysis are well below 0.05 (the 95% confidence level), and many comparisons are below 0.01 (the 99% confidence level). He is on board with the findings after satisfying himself that we have indeed found ground truth. If anyone doubts his input to this study, you should view his publication record.
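For readers who want a feel for what such a significance comparison involves, here is a minimal sketch of a two-sample Welch's t-test on hypothetical per-station trend estimates. The group sizes, means, and spreads below are invented for illustration; this is not Dr. Nielsen-Gammon's analysis, which involved additional tests at the regional scale.

```python
import numpy as np
from scipy import stats

# Hypothetical per-station trend estimates in C/decade (invented numbers):
# a smaller well-sited group with a lower mean trend, and a larger
# poorly sited group with a higher mean trend.
rng = np.random.default_rng(1)
class12 = rng.normal(0.20, 0.08, 80)    # well-sited (Class 1,2) stations
class345 = rng.normal(0.32, 0.08, 300)  # poorly sited (Class 3,4,5) stations

# Welch's t-test does not assume equal variances between the groups.
t, p = stats.ttest_ind(class12, class345, equal_var=False)
print(f"t = {t:.2f}, p = {p:.2g}")
```

A p-value below 0.05 (or 0.01) would mean the observed trend difference between the two groups is unlikely to arise from sampling noise alone at the 95% (or 99%) confidence level.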
At the time this post goes live, I’ll be presenting at AGU until 18:00 PST, so I won’t be able to respond to queries until after then. Evan Jones may be able to after about 3:30 PM PST.
This is a technical thread, so those who simply want to scream vitriol about deniers, the Koch Brothers, and Exxon aren’t welcome here. The same goes for people who just want to hurl accusations without backing them up (especially those using fake names/emails; we have a few). Moderators should use proactive discretion to weed out such detritus. Genuine comments and/or questions are welcome.
Thanks to everyone who helped make this study and presentation possible.