New study shows half of the global warming in the USA is artificial

PRESS RELEASE – U.S. temperature trends show a spurious doubling due to NOAA station siting problems and post-measurement adjustments.

Chico, CA July 29th, 2012 – 12 PM PDT – FOR IMMEDIATE RELEASE

A comparison and summary of trends from the paper shows that acceptably placed thermometers, away from common urban influences, read much cooler nationwide.

A reanalysis of U.S. surface station temperatures has been performed using the recently WMO-approved siting classification system devised by Michel Leroy of METEO-France. The new siting classification more accurately characterizes the quality of a location in terms of monitoring long-term, spatially representative surface temperature trends. The new analysis demonstrates that reported 1979-2008 U.S. temperature trends are spuriously doubled, with 92% of that over-estimation resulting from erroneous upward NOAA adjustments of well-sited stations. The paper, which addresses USHCN siting issues and data adjustments, is the first to use the updated siting system.

The new, improved assessment, for the years 1979 to 2008, yields a trend of +0.155 °C per decade from the high-quality sites, +0.248 °C per decade for poorly sited locations, and +0.309 °C per decade after NOAA adjusts the data. Station siting quality is expected to be an issue for monitoring land surface temperature throughout the Global Historical Climatology Network and in the BEST network.
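The "spurious doubling" claim can be checked arithmetically from the trends quoted above; a minimal sketch, using only the figures stated in this release:

```python
# Decadal trends (°C per decade) as quoted in the release.
well_sited = 0.155     # compliant stations, raw data
poorly_sited = 0.248   # non-compliant stations, raw data
noaa_adjusted = 0.309  # trend after NOAA adjustments

# Ratio of the final adjusted trend to the well-sited raw trend:
ratio = noaa_adjusted / well_sited
print(f"adjusted / well-sited = {ratio:.2f}")  # roughly 2, i.e. a doubling
```

The ratio works out to just under 2, which is the basis for the "doubled" language used throughout the release.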

Today, a new paper has been released that is the culmination of knowledge gleaned from five years of work by Anthony Watts and the many volunteers and contributors to the SurfaceStations project started in 2007.

This pre-publication draft paper, titled An area and distance weighted analysis of the impacts of station exposure on the U.S. Historical Climatology Network temperatures and temperature trends, is co-authored by Anthony Watts of California, Evan Jones of New York, Stephen McIntyre of Toronto, Canada, and Dr. John R. Christy of the Department of Atmospheric Science, University of Alabama in Huntsville, and is to be submitted for publication.

The pre-release of this paper follows the practice embraced by Dr. Richard Muller of the Berkeley Earth Surface Temperature project, who, in a June 2011 interview with Scientific American’s Michael Lemonick in “Science Talk”, said:

I know that is prior to acceptance, but in the tradition that I grew up in (under Nobel Laureate Luis Alvarez) we always widely distributed “preprints” of papers prior to their publication or even submission. That guaranteed a much wider peer review than we obtained from mere referees.

The USHCN is one of the main metrics used to gauge temperature changes in the United States. The first wide-scale effort to address siting issues, Watts (2009), a collated photographic survey, showed that approximately 90% of USHCN stations were compromised by encroaching urbanity in the form of heat sinks and sources, such as concrete, asphalt, air-conditioning heat exchangers, roadways, airport tarmac, and other issues. This finding was backed up by an August 2011 U.S. Government Accountability Office investigation and report titled Climate Monitoring: NOAA Can Improve Management of the U.S. Historical Climatology Network.

All three prior papers examining the station siting issue using early data gathered by the SurfaceStations project were inconclusive in finding effects on the temperature trends used to gauge temperature change in the United States over the last century: Menne et al. (2010), authored by Dr. Matt Menne of NCDC; Fall et al. (2011), authored by Dr. Souleymane Fall of Tuskegee University and co-authored by Anthony Watts; and Muller et al. (2012), authored by Dr. Richard Muller of the University of California, Berkeley, founder of the Berkeley Earth Surface Temperature Project (BEST).

Lead author of the paper, Anthony Watts, commented:

“I fully accept the previous findings of these papers, including that of the Muller et al 2012 paper. These investigators found exactly what would be expected given the siting metadata they had. However, the Leroy 1999 site-rating method employed to create the early metadata, and employed in the Fall et al 2011 paper I co-authored, was incomplete and didn’t properly quantify the effects.

The new rating method employed finds that station siting does indeed have a significant effect on temperature trends.”

Watts et al. 2012 employs a new station-siting methodology pioneered by Michel Leroy of Météo-France (Leroy 2010) and endorsed by the World Meteorological Organization (WMO) Commission for Instruments and Methods of Observation at its fifteenth session (CIMO-XV, September 2010) as a WMO-ISO standard, making it suitable for reevaluating previous studies on the issue of station siting.

Previous papers all used a distance-only rating system from Leroy 1999 to gauge the impact of heat sinks and sources near thermometers. Leroy 2010 shows that method to be effective for siting new stations, as NCDC did when it adopted Leroy 1999 methods for its Climate Reference Network (CRN) in 2002, but ineffective for retroactive siting evaluation.

Leroy 2010 adds one simple but effective physical metric: the surface area of heat sinks/sources within the thermometer’s viewshed, to quantify the total heat-dissipation effect.
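As an illustration of how such an area-plus-distance metric might feed a classification, here is a toy sketch. The thresholds and class cutoffs below are invented placeholders for illustration only; they are not the actual Leroy (2010) class boundaries:

```python
def site_class(heat_sink_area_pct, nearest_source_m):
    """Toy siting classifier combining a Leroy (1999)-style distance
    criterion with a Leroy (2010)-style heat-sink surface-area criterion.
    All thresholds are illustrative placeholders, not the real standard."""
    if nearest_source_m >= 100 and heat_sink_area_pct < 10:
        return 1  # well sited
    if nearest_source_m >= 30 and heat_sink_area_pct < 20:
        return 2
    if heat_sink_area_pct < 50:
        return 3
    return 4  # poorly sited

# A station 120 m from the nearest heat source, with 5% of its viewshed
# covered by heat sinks, rates Class 1 under these toy thresholds:
print(site_class(heat_sink_area_pct=5, nearest_source_m=120))  # 1
```

The point of the sketch is only that Leroy 2010 makes the rating a function of both distance and sink area, whereas Leroy 1999 used distance alone.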

Using the new Leroy 2010 classification system on the older siting metadata used by Fall et al. (2011), Menne et al. (2010), and Muller et al. (2012) yields dramatically different results.

Using Leroy 2010 methods, the Watts et al 2012 paper, which studies several aspects of USHCN siting issues and data adjustments, concludes that:

These factors, combined with station siting issues, have led to a spurious doubling of U.S. mean temperature trends in the 30 year data period covered by the study from 1979 – 2008.

Other findings include, but are not limited to:

· Statistically significant trend differences exist between compliant and non-compliant stations, as well as between urban and rural stations.

· Poorly sited station trends are adjusted sharply upward, and well sited stations are adjusted upward to match the already-adjusted poor stations.

· Well sited rural stations show a warming nearly three times greater after NOAA adjustment is applied.

· Urban sites warm more rapidly than semi-urban sites, which in turn warm more rapidly than rural sites.

· The raw-data Tmean trend for well-sited stations is 0.15°C per decade lower than the adjusted Tmean trend for poorly sited stations.

· Airport USHCN stations show significantly different trends from other USHCN stations and, due to equipment issues and other problems, may not be representative stations for monitoring climate.

###

We will continue to investigate other issues related to bias and adjustments such as TOBs in future studies.

FILES:

This press release in PDF form: Watts_et_al 2012_PRESS RELEASE (PDF)

The paper in draft form: Watts-et-al_2012_discussion_paper_webrelease (PDF)

The Figures for the paper: Watts et al 2012 Figures and Tables (PDF)

A PowerPoint presentation of findings with many additional figures is available online:

Overview -Watts et al Station Siting 8-3-12 (PPT) UPDATED

Methodology – Graphs Presentation (.PPT)

Some additional files may be added as needed.

Contact:

Anthony Watts at: http://wattsupwiththat.com/about-wuwt/contact-2/

References:

GAO-11-800, August 31, 2011, Climate Monitoring: NOAA Can Improve Management of the U.S. Historical Climatology Network. Highlights Page (PDF) · Full Report (PDF, 47 pages) · Accessible Text · Recommendations (HTML)

Fall, S., Watts, A., Nielsen-Gammon, J., Jones, E., Niyogi, D., Christy, J., and Pielke, R.A. Sr., 2011: Analysis of the impacts of station exposure on the U.S. Historical Climatology Network temperatures and temperature trends. Journal of Geophysical Research, 116, D14120, doi:10.1029/2010JD015146

Leroy, M., 1999: Classification d’un site. Note Technique no. 35. Direction des Systèmes d’Observation, Météo-France, 12 pp.

Leroy, M., 2010: Siting Classification for Surface Observing Stations on Land. JMA/WMO Workshop on Quality Management in Surface, Climate and Upper-air Observations, Tokyo, Japan, 27-30 July 2010. http://www.jma.go.jp/jma/en/Activities/qmws_2010/CountryReport/CS202_Leroy.pdf

Menne, M. J., C. N. Williams Jr., and M. A. Palecki, 2010: On the reliability of the U.S. surface temperature record, J. Geophys. Res., 115, D11108, doi:10.1029/2009JD013094

Muller, R.A., Curry, J., Groom, D., Jacobsen, R., Perlmutter, S., Rohde, R., Rosenfeld, A., Wickham, C., and Wurtele, J., 2012: Earth Atmospheric Land Surface Temperature and Station Quality in the United States. http://berkeleyearth.org/pdf/berkeley-earth-station-quality.pdf

Watts, A., 2009: Is the U.S. surface temperature record reliable? Published online at: http://wattsupwiththat.files.wordpress.com/2009/05/surfacestationsreport_spring09.pdf

World Meteorological Organization Commission for Instruments and Methods of Observation, Fifteenth session, (CIMO-XV, 2010) WMO publication Number 1064, available online at: http://www.wmo.int/pages/prog/www/CIMO/CIMO15-WMO1064/1064_en.pdf

Notes:

1. The SurfaceStations project was a crowdsourcing project started in June 2007, conducted entirely with citizen volunteers (over 650). It was created in response to the realization that very little physical site-survey metadata exists for the United States Historical Climatology Network (USHCN) and Global Historical Climatology Network (GHCN) surface-station records worldwide. This realization came about from a discussion of a paper and some new information on Dr. Roger Pielke Sr.’s Research Group weblog, in particular a thread regarding the paper: Pielke Sr., R.A., C. Davey, D. Niyogi, S. Fall, J. Steinweg-Woods, K. Hubbard, X. Lin, M. Cai, Y.-K. Lim, H. Li, J. Nielsen-Gammon, K. Gallo, R. Hale, R. Mahmood, R.T. McNider, and P. Blanken, 2007: Unresolved issues with the assessment of multi-decadal global land surface temperature trends. J. Geophys. Res.

2. Some files in the initial press release had small typographical errors. These have been corrected. Please click the links above for the new press release and figures files.

3. A work page has been established for Watts et al 2012 for the purpose of managing updates. You can view it here.

==========================================================

Note: This will be top post for a couple of days, new posts will appear below this one. Kinda burned out and have submission to make so don’t expect much new for a day or two. See post below this for a few notes on backstory. Thanks everybody!  – Anthony

NOTE: 7/31/12 this thread has gotten large and unable to load for some commenters, it continues here.

Entropic man
July 30, 2012 2:57 am

Best not to jump too quickly into triumphalism or despair until this passes peer review.
And before you get all excited, a reminder to the non-scientists here. Peer review is not a filter removing politically incorrect ideas, or a way of preventing someone presenting ideas outside the consensus. It is quality control, a system for checking that the design, execution and data analysis meet the minimum standard expected of a scientific paper.

Sensorman
July 30, 2012 2:58 am

Hey Anthony – it’s a bit of an understatement to say “good work”! Anywhere specific you want possible edits to be sent? Minor stuff, but e.g. line 560 suggest “majority” rather than “plurality”

Stephen Wilde
July 30, 2012 3:03 am

To all those who pick at details and suggest that the paper might not be taken seriously I would just say that the basic approach and the results are what matter.
It is now in the public domain that there is a much better site assessment procedure which has not previously been methodically applied.
Also, that when it is applied, the difference in trend between sites of differing qualities becomes apparent. The most important point of Muller’s work was that there were no significant trend differences between sites of differing qualities.
The science has moved on such that the earlier assertions of Muller and the entire climate establishment are now out of date. They should graciously acknowledge that fact.
Leroy 2010 has been a time bomb waiting to go off and this paper has lit the fuse.
All else is chaff.
That is not to deny that warming has occurred but it does reduce it substantially from what we have been led to believe.
In the meantime natural variability is being shown to have a greater influence than previously recognised.
Those two factors combine to squeeze AGW into insignificance for policy purposes.

kadaka (KD Knoebel)
July 30, 2012 3:11 am

From Nick Stokes on July 29, 2012 at 4:47 pm:

I calculated the 1979-2008 trend from NOAA’s ConUS figures. It came to 0.24 °C/decade. That compares well to the UAH trend of 0.23 °C/decade (for 1979-present).

The full UAH record starts at 12/1978 and currently runs to 6/2012.
The “USA48” figure is the slope for the entire record. In a spreadsheet just do a quick conversion to decimal years, (month-1)/12 + year, then use the SLOPE function. OpenOffice gave me 0.23°C/decade, same as reported by UAH.
At your NOAA data link, I retrieved all available data for individual months and assembled it all in a spreadsheet. Over the same period as the UAH record, after converting °F to °C, slope was 0.34°C/decade, clearly more than UAH, by about 48%.
From 1979 to 2008 inclusive, UAH yields 0.25°C/decade. By that NOAA data I got 0.39°C/decade, clearly more than UAH, by about 56%.
Your NOAA link as specified is the 12 month average for December, which was found to be the average temperature from January to December after comparison to 12-mo averages I figured from the monthly data. With those numbers, 1979 to 2008 inclusive had a slope of 0.32°C/decade, not the 0.24°C/decade you got.
So when the periods are properly matched up, it is found the trends from NOAA data are considerably larger than those from UAH, more than half again as large by the average of these two periods.
And when you properly calculate the trends, that from 1979-2008 NOAA ConUS 12-mo averages (annual figures) does not compare well to the UAH trend (actually from 12/1978 to present), being 39% higher.
You’re wrong. Again. Twice just in that one part of one of your comments. At least twice. Try harder.
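The trend calculation described in the comment above, converting each month to a decimal year via (month − 1)/12 + year and fitting a least-squares slope (the spreadsheet SLOPE function), can be sketched as follows. The data here are synthetic, for checking the method only; they are not the actual NOAA or UAH series:

```python
def decadal_trend(records):
    """Least-squares slope in units per decade.

    records: list of (year, month, value); each month is converted to a
    decimal year via (month - 1)/12 + year, as described above."""
    t = [(m - 1) / 12 + y for y, m, _ in records]
    v = [val for _, _, val in records]
    n = len(records)
    tbar = sum(t) / n
    vbar = sum(v) / n
    slope = sum((ti - tbar) * (vi - vbar) for ti, vi in zip(t, v)) \
        / sum((ti - tbar) ** 2 for ti in t)
    return slope * 10  # per year -> per decade

# Synthetic 30-year monthly series warming exactly 0.02 units per year:
data = [(1979 + i // 12, i % 12 + 1, 0.02 * (i / 12)) for i in range(360)]
print(round(decadal_trend(data), 3))  # 0.2
```

On a perfectly linear series the fit recovers the slope exactly; on real monthly anomalies it reproduces what SLOPE gives in a spreadsheet, which is why the period-matching (which start and end months are included) matters so much to the comparisons above.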

Jessie
July 30, 2012 3:17 am

Congratulations Anthony, Evan, Stephen and John on your work. Also congratulations to your merry band of field volunteers and the mods, all those comments!
And now to read the papers………………

Evan Jones
Editor
July 30, 2012 3:28 am

Music to my ears, Evan. Only yesterday I was remembering those words of yours, that I’ve not heard for a long time.
Thanks, Lucy!
Video: http://www.youtube.com/watch?v=hUsXliGhhHU

Steve Richards
July 30, 2012 3:29 am

Excellent job, well done to all.
Does the word ‘May’ and this sentence need reviewing? Or have I got the wrong end of the stick?
387 May airports, due to the weather stations being placed on grassy areas in between
388 runways, are rated as “compliant” by both Leroy (1999) and Leroy (2010) rating
389 systems.

michaelozanne
July 30, 2012 3:32 am

Still Nothing on the main page at RC. I think we may be taking the fingers in ears “Nyah Nyah, not peer reviewed” approach.

David A. Evans
July 30, 2012 3:33 am

Ian H says:
July 29, 2012 at 5:23 pm
I was asking when we stopped using max/min thermometers because the era of electronic measurement should have eliminated the necessity of TOBs adjustment. I did appreciate the problems associated with Obs being close to either the Max or Min.
DaveE.

Rafa
July 30, 2012 3:34 am

Evan says above he did not work that hard only to have the data condemned to some inaccessible file. He has my sympathy for that. Let me remind you that in some cases mainstream researchers claimed the data file was lost, the dog ate the data, etc., or even more exotic excuses, for something paid for with taxpayers’ money.

Nick Stokes
July 30, 2012 3:44 am

HaroldW says: July 29, 2012 at 10:23 pm
Nick-From the data at the link, I get a trend of 0.32 C/decade.”

Yes, you’re right. I slipped a line in reading in the data, and calculated 1980-2009 instead of 1979-2008. I’m surprised it made so much difference.
But since it did, I thought I should calculate the exact years for UAH. Unless I got the years wrong there, it came to 0.25 °C/decade. The standard error was 0.05, and that of the NOAA trend 0.09, so the differences aren’t significant.

Entropic man
July 30, 2012 3:48 am

I refer you to lines 306 through 316, relating to shade and other factors causing a station to underread. Since stations in shade or in frost hollows are going to have a reducing effect on any data of which they are a part, a peer reviewer may seek further confirmation that these effects have been properly taken into account.

HK
July 30, 2012 3:53 am

This is fascinating.
As far as I can tell, Class 1 and 2 are always combined into one “bin”.
I can see why you would not have separated them, because there are so few Class 1 stations, but is there any trend difference at all between Class 1 and Class 2?

Peter S
July 30, 2012 4:02 am

So – you DO need a weatherman to know which way the wind blows!
Well done Anthony an’ all.

Andyj
July 30, 2012 4:05 am

EVAN!
leading zero before a decimal point:-
“REPLY – Thought about it, then decided that since every starting number there would be a zero, anyway, why bother? ~ Evan”
Please Wiki “decimal” & IEEE 754-2008
It’s the standard.
Sorry but we want this to work and be totally bullet proof.

michaelozanne
July 30, 2012 4:10 am


“And before you get all excited, a reminder to the non-scientists here. Peer review is not a filter removing politically incorrect ideas, or a way of preventing someone presenting ideas outside the consensus. It is quality control, a system for checking that the design, execution and data analysis meet the minimum standard expected of a scientific paper.”

Yet in the Gergis case it failed to notice that the stated method had not been followed and that the maths had been done by a blind chimp wasted on crack…. A reminder to the scientists here: quality control is not an optional extra applied post-process, but an integral part of the work that should be initiated at the conceptual design stage of the paper and constantly referred to and audited against at every stage until the published output is produced. Perhaps if academic institutions were to insist on and enforce some basic industrial standards we would have less drivel being loaded into the policy-making process. We would also have less risible rubbish to have a good snigger at, which would be a regrettable side effect.

pwl
July 30, 2012 4:26 am

“757 … This is true for,
758 in all nine geographical areas of all five data samples.”
“This is true for, in all nine geographical areas of all five data samples.”
The grammar structure of the wording in the above sentence needs clarification. Did you mean to say “This is true for all nine geographical areas of and all five data samples.” or some variation thereof?

Chris
July 30, 2012 4:34 am

Anticlimax. Rather than working on a real solution you promote yet another pile of goop. The swallowers swallow.

tckev
July 30, 2012 4:38 am

Excellent work!
Will any Governments’ movers and shakers get behind this and force an abandonment of the useless carbon taxes? I doubt it!

Peter Ellis
July 30, 2012 4:38 am

Also, that when it is applied, the difference in trend between sites of differing qualities becomes apparent. The most important point of Muller’s work was that there were no significant trend differences between sites of differing qualities.
Unfortunately, Anthony’s paper doesn’t show that. He shows that sites of differing qualities have a significantly different trend in the raw data, but not in the homogenised data. Thus, the homogenisation procedures remove the UHI effect, precisely as they are designed to do.
The other observation is that the trend observed from homogenised data is higher than the trend you get from high quality raw data. This is already known, and is due to other necessary adjustments such as time-of-observation bias and the change from liquid-in-glass thermometers to MMTS. Since Anthony did not carry out these adjustments, or say anything whatsoever about the methodology for doing so, he has no grounds to claim that the homogenised temperatures are wrong. This I think will preclude publication: failure to correct for known biases is simply wrong, and comparing uncorrected data (Anthony’s) to corrected data (USHCN) is inappropriate.
What Anthony can say is that +0.155 degrees C – i.e. the high station quality raw data without adjustment for time of observation or thermometer type – represents a lower bound for the temperature trend. This puts him fully in agreement with the published record. This again may be a barrier to publication, since it’s insufficiently novel. I may be wrong though.
What would be really helpful is if Anthony simply releases the list of which stations fall into which categories, so that the USHCN and BEST teams can re-do their urban/rural comparisons using a better metric of station quality.

Michael Schaefer
July 30, 2012 4:47 am

That’s what I call FIFO-science: Facts In – Facts Out.
Well done.

July 30, 2012 4:55 am

Shouldn’t the headline be “New study shows that half of the warming in the USA is artifactual”? If it is in the USA then it is not global, and if it is artificial, that can be read as real but anthropogenically driven, as opposed to natural warming.

Lowell Bergey
July 30, 2012 4:55 am

NBCNews.com skeptic….
http://usnews.nbcnews.com/_news/2012/07/29/13020337-ex-climate-change-skeptic-humans-cause-global-warming?lite
Claims he was a AGW skeptic
His interview with Grist in 2008 shows differently
http://grist.org/article/lets-get-physical/

james@hotmail.com
July 30, 2012 5:00 am

A. Scott
I’m baffled. Nothing you wrote addresses or refutes my point about endorsement.

SanityP
July 30, 2012 5:01 am

Now we only need the FOIA release (fingers crossed for it to contain something juicy) and the coffin will be complete. Perhaps.
