Press Release – Watts at #AGU15: The quality of temperature station siting matters for temperature trends

30-year temperature trends are shown to be lower when computed from well-sited, high-quality NOAA weather stations that do not require adjustments to the data.

This was in AGU’s press release news feed today. At about the time this story publishes, I will be presenting it at the AGU 2015 Fall Meeting in San Francisco. Here are the details.


 

NEW STUDY OF NOAA’S U.S. CLIMATE NETWORK SHOWS A LOWER 30-YEAR TEMPERATURE TREND WHEN HIGH QUALITY TEMPERATURE STATIONS UNPERTURBED BY URBANIZATION ARE CONSIDERED


Figure 4 – Comparison of 30-year trends for compliant (Class 1,2) USHCN stations, non-compliant (Class 3,4,5) USHCN stations, and NOAA final adjusted (V2.5) USHCN data in the Continental United States

EMBARGOED UNTIL 13:30 PST (16:30 EST) December 17th, 2015

SAN FRANCISCO, CA – A new study of the surface temperature record, presented at the 2015 Fall Meeting of the American Geophysical Union, suggests that the 30-year trend of temperatures for the Continental United States (CONUS) since 1979 is about two thirds as strong as the official NOAA temperature trend.

Using NOAA’s U.S. Historical Climatology Network, which comprises 1218 weather stations in the CONUS, the researchers were able to identify a 410-station subset of “unperturbed” stations that have not been moved, had no equipment changes or changes in time of observation, and thus require no “adjustments” to their temperature record to account for these problems. The study focuses on finding trend differences between well-sited and poorly-sited weather stations, using a WMO-approved metric, Leroy (2010)1, which classifies and assesses measurement quality according to proximity to artificial heat sources and heat sinks that affect temperature measurement. An example is shown in Figure 1 below, the NOAA USHCN temperature sensor for Ardmore, OK.
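To make the classification idea concrete, here is a minimal sketch of rating a station by the fraction of artificial surface within 10 m and 30 m of the sensor, the kind of measurement illustrated in Figure 2. The thresholds and the function name are illustrative placeholders only, not the actual Leroy (2010) class boundaries.

```python
# Illustrative sketch only: rate a station by artificial-surface fraction
# within 10 m and 30 m of the sensor, in the spirit of Leroy (2010).
# The class thresholds below are placeholders, NOT the official Leroy values.

def rate_station(artificial_frac_10m: float, artificial_frac_30m: float) -> int:
    """Return a siting class from 1 (best) to 5 (worst).

    artificial_frac_10m / artificial_frac_30m: fraction (0-1) of ground
    covered by artificial surfaces (asphalt, concrete, buildings) within
    a 10 m and 30 m radius of the temperature sensor, e.g. measured from
    aerial imagery with Google Earth area tools.
    """
    if artificial_frac_10m == 0.0 and artificial_frac_30m < 0.10:
        return 1   # essentially no artificial influence nearby
    if artificial_frac_10m < 0.10 and artificial_frac_30m < 0.20:
        return 2
    if artificial_frac_10m < 0.30:
        return 3
    if artificial_frac_10m < 0.50:
        return 4
    return 5       # sensor effectively embedded in artificial surfaces


# Hypothetical example: fractions of the kind measured from imagery in Figure 2.
print(rate_station(artificial_frac_10m=0.05, artificial_frac_30m=0.15))  # -> 2
```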

Following up on a paper published by the authors in 2010, Analysis of the impacts of station exposure on the U.S. Historical Climatology Network temperatures and temperature trends2, which concluded:

Temperature trend estimates vary according to site classification, with poor siting leading to an overestimate of minimum temperature trends and an underestimate of maximum temperature trends, resulting in particular in a substantial difference in estimates of the diurnal temperature range trends

…this new study is presented at AGU session A43G-0396 on Thursday, Dec. 17th at 13:40 PST and is titled Comparison of Temperature Trends Using an Unperturbed Subset of The U.S. Historical Climatology Network.

A 410-station subset of U.S. Historical Climatology Network (version 2.5) stations is identified that experienced no changes in time of observation or station moves during the 1979-2008 period. These stations are classified based on proximity to artificial surfaces, buildings, and other such objects with unnatural thermal mass using guidelines established by Leroy (2010)1. The United States temperature trends estimated from the relatively few stations in the classes with minimal artificial impact are found to be collectively about 2/3 as large as US trends estimated in the classes with greater expected artificial impact. The trend differences are largest for minimum temperatures and are statistically significant even at the regional scale and across different types of instrumentation and degrees of urbanization. The homogeneity adjustments applied by the National Centers for Environmental Information (formerly the National Climatic Data Center) greatly reduce those differences but produce trends that are more consistent with the stations with greater expected artificial impact. Trend differences are not found during the 1999-2008 sub-period of relatively stable temperatures, suggesting that the observed differences are caused by a physical mechanism that is directly or indirectly caused by changing temperatures.
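As an editorial illustration of what the trend comparison involves, the sketch below fits an ordinary least-squares slope to a station’s annual mean temperatures over 1979-2008 and expresses it in degrees C per decade. The station series and trend values are hypothetical; this is not the authors’ analysis code.

```python
# Minimal sketch of the 30-year trend computation, not the authors' actual code.
# Assumes each station's record has already been reduced to annual mean temperatures.
import numpy as np

def trend_per_decade(years: np.ndarray, annual_mean_temp: np.ndarray) -> float:
    """Ordinary least-squares slope, converted from degC/yr to degC/decade."""
    slope, _intercept = np.polyfit(years, annual_mean_temp, deg=1)
    return slope * 10.0

# Hypothetical example: two stations, 1979-2008.
years = np.arange(1979, 2009)
rng = np.random.default_rng(0)
well_sited   = 10.0 + 0.020 * (years - 1979) + rng.normal(0, 0.3, years.size)
poorly_sited = 10.0 + 0.032 * (years - 1979) + rng.normal(0, 0.3, years.size)

print(f"well sited:   {trend_per_decade(years, well_sited):+.3f} degC/decade")
print(f"poorly sited: {trend_per_decade(years, poorly_sited):+.3f} degC/decade")
```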

Figure 1 – USHCN temperature sensor located on a street corner in Ardmore, OK, in full viewshed of multiple heat sinks.
Figure 2 – Analysis of artificial surface areas within 10 and 30 meter radii at Ashland, NE USHCN station (COOP# 250375) using Google Earth tools. The NOAA temperature sensor is labeled as MMTS.
Table 1 – Tabulation of station types showing 30-year trends for compliant (Class 1&2) USHCN stations and poorly sited, non-compliant (Class 3, 4, & 5) USHCN stations in the CONUS, compared to official NOAA adjusted and homogenized USHCN data.
Figure 3 – Tmean comparisons of well sited (compliant Class 1&2) USHCN stations and poorly sited (non-compliant Class 3, 4, & 5) USHCN stations, by CONUS and region, to official NOAA adjusted USHCN data (V2.5) for the entire (compliant and non-compliant) USHCN dataset.

Key findings:

1. Comprehensive and detailed evaluation of station metadata, on-site station photography, satellite and aerial imaging, street-level Google Earth imagery, and curator interviews has yielded a well-distributed 410-station subset of the 1218-station USHCN network that is unperturbed by Time of Observation changes, station moves, or rating changes, and that has a complete or mostly complete 30-year dataset. It must be emphasized that the perturbed stations dropped from the USHCN set show significantly lower trends than those retained in the sample, both for well and poorly sited station sets.

2. Bias at the microsite level (the immediate environment of the sensor) in the unperturbed subset of USHCN stations has a significant effect on the mean temperature (Tmean) trend. Well sited stations show significantly less warming from 1979 to 2008. These differences are significant in Tmean, and most pronounced in the minimum temperature data (Tmin). (Figure 3 and Table 1)

3. Equipment bias (CRS vs. MMTS) in the unperturbed subset of USHCN stations has a significant effect on the mean temperature (Tmean) trend. MMTS stations show significantly less warming than CRS stations from 1979 to 2008. (Table 1) These differences are significant in Tmean (even after upward adjustment for MMTS conversion) and most pronounced in the maximum temperature data (Tmax).

4. The 30-year Tmean trend of unperturbed, well sited stations is significantly lower than the Tmean trend of the NOAA/NCDC official adjusted and homogenized surface temperature record for all 1218 USHCN stations.

5. We believe the NOAA/NCDC homogenization adjustment causes well sited stations to be adjusted upwards to match the trends of poorly sited stations.

6. The data suggests that the divergence between well and poorly sited stations is gradual, not a result of spurious step change due to poor metadata.

The study is authored by Anthony Watts and Evan Jones of surfacestations.org, John Nielsen-Gammon of Texas A&M University, and John R. Christy of the University of Alabama in Huntsville, and represents years of work in studying the quality of the temperature measurement system of the United States.

Lead author Anthony Watts said of the study: “The majority of weather stations used by NOAA to detect the climate change temperature signal have been compromised by encroachment of artificial surfaces like concrete and asphalt, and heat sources like air conditioner exhausts. This study demonstrates conclusively that this issue affects temperature trends and that NOAA’s methods are not correcting for this problem, resulting in an inflated temperature trend. It suggests that the trend for U.S. temperature will need to be corrected.” He added: “We also see evidence of this same sort of siting problem around the world at many other official weather stations, suggesting that the same upward bias on trend also manifests itself in the global temperature record.”

The full AGU presentation can be downloaded here: https://goo.gl/7NcvT2

[1] Leroy, M. (2010): Siting Classification for Surface Observing Stations on Land. JMA/WMO Workshop on Quality Management in Surface, Climate, and Upper-air Observations, Tokyo, Japan, 27-30 July 2010

[2] Fall et al. (2010) Analysis of the impacts of station exposure on the U.S. Historical Climatology Network temperatures and temperature trends https://pielkeclimatesci.files.wordpress.com/2011/07/r-367.pdf


 


Abstract ID and Title: 76932: Comparison of Temperature Trends Using an Unperturbed Subset of The U.S. Historical Climatology Network

Final Paper Number: A43G-0396

Presentation Type: Poster

Session Date and Time: Thursday, 17 December 2015; 13:40 – 18:00 PST

Session Number and Title: A43G: Tropospheric Chemistry-Climate-Biosphere Interactions III Posters

Location: Moscone South; Poster Hall

Full presentation here: https://goo.gl/7NcvT2


Some side notes.

This work is a continuation of the surface stations project started in 2007, our first publication, Fall et al. in 2010, and our early draft paper in 2012. Putting out that draft paper in 2012 provided us with valuable feedback from critics, and we’ve incorporated it into this effort. Even input from openly hostile professionals, such as Victor Venema, has been highly useful, and I thank him for it.

Many of the valid criticisms of our 2012 draft paper centered around the Time of Observation (TOBs) adjustments that have to be applied to the hodge-podge of stations with issues in the USHCN. Our view is that trying to retain stations with dodgy records and adjusting the data is a pointless exercise. We chose simply to locate all the stations that DON’T need any adjustments and use those, thereby sidestepping that highly contentious problem completely. Fortunately, there were enough such stations in the USHCN: 410 out of 1218.
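A minimal sketch of that kind of selection is shown below, assuming a hypothetical station metadata table with flags for rating changes, unlocatable moves, and significant TOBs changes. The field names and example rows are invented for illustration; this is not the study’s actual selection code.

```python
# Minimal sketch of the kind of metadata filter described above; the field
# names and DataFrame are hypothetical, not the study's actual selection code.
import pandas as pd

def select_unperturbed(meta: pd.DataFrame) -> pd.DataFrame:
    """Keep stations with no rating change, no unlocatable moves, and no
    significant time-of-observation (TOBs) change over the study period."""
    keep = (
        (~meta["rating_changed"])            # same siting class throughout
        & (~meta["move_to_unknown_site"])    # moves with unknown prior site dropped
        & (~meta["significant_tobs_change"]) # observation time stable
    )
    return meta[keep]

# Hypothetical example rows.
meta = pd.DataFrame({
    "station_id": ["USH00011084", "USH00012813", "USH00013160"],
    "rating_changed": [False, True, False],
    "move_to_unknown_site": [False, False, False],
    "significant_tobs_change": [False, False, True],
})
print(select_unperturbed(meta)["station_id"].tolist())  # -> ['USH00011084']
```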

It should be noted that the Class 1/2 station subset (the best stations we have located in the CONUS) can be considered an analog to the Climate Reference Network, in that these stations are reasonably well distributed across the CONUS and, like the CRN, require no adjustments to their records. The CRN consists of 114 commissioned stations in the contiguous United States; our subset of stations is similar in size and distribution. This should be noted about the CRN:

One of the principal conclusions of the 1997 Conference on the World Climate Research Programme was that the global capacity to observe the Earth’s climate system is inadequate and deteriorating worldwide and “without action to reverse this decline and develop the GCOS [Global Climate Observing System], the ability to characterize climate change and variations over the next 25 years will be even less than during the past quarter century” (National Research Council [NRC] 1999). In spite of the United States being a leader in climate research, long term U.S. climate stations have faced challenges with instrument and site changes that impact the continuity of observations over time. Even small biases can alter the interpretation of decadal climate variability and change, so a substantial effort is required to identify non-climate discontinuities and correct the station records (a process called homogenization). Source: https://www.ncdc.noaa.gov/crn/why.html

The CRN has a decade of data, and it shows a pause in the CONUS. Our subset of adjustment-free, unperturbed stations spans over 30 years. We think it is well worth looking at that data and ignoring the data that requires loads of statistical spackle to patch it up before it is deemed usable. After all, that’s what they say is the reason the CRN was created.

We allow for one and only one adjustment in the data, and only because it is based on physical observations and is truly needed. We use the MMTS adjustment noted in Menne et al. 2009 and 2010 for the MMTS exposure housing versus the old wooden-box Cotton Region Shelter (CRS), which has a warm bias mainly due to paint and maintenance issues. The MMTS gill shield is a superior exposure system that prevents bias from daytime short-wave and nighttime long-wave thermal radiation. The CRS requires yearly painting, and that often gets neglected, resulting in exposure systems that look like this:

[Photo: Detroit Lakes, MN USHCN station]

See below for a comparison of the two:

[Photo: side-by-side comparison of a CRS shelter and an MMTS shield]
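To make the nature of that single adjustment concrete, the sketch below applies a constant step offset to a station’s Tmax/Tmin series at its CRS-to-MMTS conversion date. The offsets shown are placeholders, not the Menne et al. (2009) values, and the function is illustrative rather than our actual processing code.

```python
# Minimal sketch of an equipment-change step adjustment (illustrative only;
# the offset values below are placeholders, not the Menne et al. 2009 figures).
import numpy as np

def apply_mmts_offset(years, tmax, tmin, conversion_year,
                      tmax_offset=0.0, tmin_offset=0.0):
    """Add a constant offset to Tmax/Tmin for all years at/after the
    CRS-to-MMTS conversion, so the pre- and post-conversion segments
    are expressed on a common basis."""
    years = np.asarray(years)
    tmax = np.asarray(tmax, dtype=float).copy()
    tmin = np.asarray(tmin, dtype=float).copy()
    after = years >= conversion_year
    tmax[after] += tmax_offset
    tmin[after] += tmin_offset
    return tmax, tmin

# Hypothetical usage: a station converted to MMTS in 1986, with placeholder offsets.
years = np.arange(1979, 2009)
tmax = np.full(years.size, 20.0)
tmin = np.full(years.size, 5.0)
tmax_adj, tmin_adj = apply_mmts_offset(years, tmax, tmin, conversion_year=1986,
                                       tmax_offset=+0.3, tmin_offset=-0.1)
```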

Some might wonder why we use a 1979-2008 comparison when this is 2015. The reason is so that this speaks to Menne et al. 2009 and 2010, papers launched by NOAA/NCDC to defend their adjustment methods for the USHCN from criticisms I had made about the quality of the surface temperature record, such as this book in 2009: Is the U.S. Surface Temperature Record Reliable? This sent NOAA/NCDC into a tizzy, and they responded with a hasty, ghostwritten flyer they circulated. In our paper, we extend the comparisons to the current USHCN dataset as well as the 1979-2008 period.

We are submitting this for publication in a well-respected journal. No, I won’t say which one, because we don’t need any attempts at journal gate-keeping like we saw in the Climategate emails, e.g., “I can’t see either of these papers being in the next IPCC report. Kevin and I will keep them out somehow — even if we have to redefine what the peer-review literature is!” and “I will be emailing the journal to tell them I’m having nothing more to do with it until they rid themselves of this troublesome editor.”

When the journal article publishes, we’ll make all of the data, code, and methods available so that the study is entirely replicable. We feel this is very important, even if it allows unscrupulous types to launch “creative” attacks via journal publications, blog posts, and comments. When the data and paper are available, we’ll welcome real and well-founded criticism.

It should be noted that many of the USHCN stations we excluded for station moves, equipment changes, TOBs changes, etc., had lower trends that would have bolstered our conclusions.

The “gallery” server from the 2007 surfacestations project that shows individual weather stations and siting notes is currently offline, mainly because it is attacked regularly, which affects my office network. I’m looking to move it to cloud hosting to solve that problem. I may ask for some help from readers with that.

We think this study will hold up well. We have been very careful, very slow, and meticulous. I admit that the draft paper published in July 2012 was rushed, mainly because I believed that Dr. Richard Muller of BEST was going before Congress again the next week, using data I had provided (which he agreed to use only for publications) as a political tool. Fortunately, he didn’t appear on that panel. But the feedback we got from that effort was invaluable. We hope this pre-release today will also provide valuable criticism.

People might wonder whether this project was funded by any government, entity, organization, or individual; it was not. This was all done in free time, without any pay, by all involved. That is another reason we took our time: there was no “must produce by” funding requirement.

Dr. John Nielsen-Gammon, the state climatologist of Texas, performed all of the statistical significance analysis, and his conclusion is reflected in the statement quoted below from the abstract.

Dr. Nielsen-Gammon has been our toughest critic from the get-go; he independently reproduced the station ratings with the help of his students and created his own series of tests of the data and methods. It is worth noting that this is his statement:

The trend differences are largest for minimum temperatures and are statistically significant even at the regional scale and across different types of instrumentation and degrees of urbanization.

The p-values from Dr. Nielsen-Gammon’s statistical significance analysis are well below 0.05 (the 95% confidence level), and many comparisons are below 0.01 (the 99% confidence level). He is on board with the findings, having satisfied himself that we have indeed found a ground truth. If anyone doubts his input to this study, they should view his publication record.
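As a rough illustration of how such a p-value can be obtained, the sketch below applies a two-sample (Welch’s) t-test to per-station trends from the two siting groups. The trend values are randomly generated stand-ins and the test shown is a simplified example, not Dr. Nielsen-Gammon’s actual methodology.

```python
# Illustrative only: a simple two-sample test on per-station trends.
# This is NOT the study's actual analysis, just an example of how a
# trend-difference p-value can be obtained.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical per-station 30-year trends (degC/decade) for the two groups.
well_sited_trends   = rng.normal(0.20, 0.08, size=92)    # Class 1/2 stations
poorly_sited_trends = rng.normal(0.32, 0.08, size=318)   # Class 3/4/5 stations

t_stat, p_value = stats.ttest_ind(well_sited_trends, poorly_sited_trends,
                                  equal_var=False)  # Welch's t-test
print(f"difference in mean trend p-value: {p_value:.4g}")
```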

COMMENT POLICY:

At the time this post goes live, I’ll be presenting at AGU until 18:00 PST, so I won’t be able to respond to queries until after then. Evan Jones may be able to after about 3:30 PM PST.

This is a technical thread, so those who simply want to scream vitriol about deniers, the Koch Brothers, and Exxon aren’t welcome here. The same goes for people who just want to hurl accusations without backing them up (especially those using fake names/emails; we have a few). Moderators should use proactive discretion to weed out such detritus. Genuine comments and/or questions are welcome.

Thanks to everyone who helped make this study and presentation possible.

651 Comments
RD
December 17, 2015 5:10 pm

“5. We believe the NOAA/NCDC homogenization adjustment causes well sited stations to be adjusted upwards to match the trends of poorly sited stations.”
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>..
Important. Well done.

Evan Jones
Editor
Reply to  RD
December 17, 2015 8:43 pm

Perhaps not the most important scientific observation — but by far the reddest of the meat.

bones
December 17, 2015 5:13 pm

Anthony, thanks for putting in the effort that this study required. This is science done the right way; something that has been too rare for much too long.

December 17, 2015 5:14 pm

I visit here daily, because I believe that we are being conned. But as far as this UHI effect is concerned, I fail to see the point. These temperature monitoring points record higher temperatures, because of where they are located. I get that. But can their location also dictate that readings get higher, year on year? Also, why would you want to record temperatures only in places where just the weather is involved? All this heat is being transmitted into the same system, so why is it important where you get the readings from? Surely the only thing you’re looking for is a trend? So eventually, wherever the readings come from, a trend will become apparent. I’m not a scientist, or even a qualified person in any respect, but I ain’t stupid. The reason I became anti-AGW, when it first reared its head, was because I appreciate the age of the planet, and I appreciate that our (I mean humankind here) presence here covers less than a fingersnap of the total time that the planet has existed. Then somebody turns up with numbers relating to the temperature of our planet going back for a tiny fraction of that aforementioned fingersnap, and says that they can see a trend??? I smell a rat, instantly. Then I come here, to this site, and find out about the MWP, and the warmist attempts to get rid of it, because it doesn’t fit the cause.
People on this site annoy me on a regular basis, because they tend to disappear up their own fundaments in their efforts to refute the warmist propaganda. What I cling to is the incredibly stupid proposition that CO2 is the reason for the perceived warming! That is the main plank of their argument, and the weakest point too! 0.4%!!! If AGW ever had a chance of being convincing, the alarmists went for it, big style! They made CO2 the bad guy before anyone knew how little of it exists, and how vital it is to our very survival in our world. There are, I learn, other elements (greenhouse gases) which have the same warming effect on climate, and that one of them, water vapour, exists in quantities compared to which CO2 becomes almost less than minimal. So surely, logically, the AGW brigade should pick on water vapour?
I realise that everyone here has an axe to grind, and a point to make; and that many contributors are actual (principled) scientists. But it seems to me that the big hole in the AGW argument is the one I just mentioned. It seems to me that this is the drum everyone should be banging, in rhythm.

Reply to  Derek Wood
December 17, 2015 5:43 pm

You ask a good question: “But can their location also dictate that readings get higher, year on year?” Anthony’s results show that yes, it can. The thing is that the change, say from rural to urban, is not a one-time event. For example, our local weather station is at an airport. When I was there recently, a departing jet literally shook the terminal building like a (small) earthquake. There’s a *LOT* of energy in jet exhaust. What do you think happens when the number of flights increases? Another station I know is in a park, near a road. What happens as car traffic increases over time? What happens when a large shiny building is built on the other side of the road? And so it goes. But what I’ve just written is a just-so story saying it *could* happen that way. The empirical evidence is Anthony’s numbers showing that it *does* happen that way, on the whole, and of course in the “surface stations” web site.

Evan Jones
Editor
Reply to  Richard A. O'Keefe
December 17, 2015 10:43 pm

“But can their location also dictate that readings get higher, year on year?”
It can. It does. Just is.

Reply to  Derek Wood
December 17, 2015 5:43 pm

Derek See 5:05 pm post above to see what is happening.

Dennis Kuzara
Reply to  Derek Wood
December 17, 2015 6:00 pm

But can their location also dictate that readings get higher, year on year?
Yes it can and yes it will, because the buildup around the station is gradual, certainly not a step function, and it will trend in the hotter direction as additional heat sources are added.

Evan Jones
Editor
Reply to  Dennis Kuzara
December 17, 2015 10:49 pm

Additional heat sink/sources are not necessary. We find that poorly sited stations warm (and cool) faster than well sited stations when Microsite ratings are constant throughout. This is an essential point.

Dr. S. Jeevananda Reddy
Reply to  Derek Wood
December 17, 2015 7:38 pm

To establish a meteorological station, there are standards defined. When you establish a station against these standards, you cannot compare such met data with data collected at a standard station. This is basically because non-standard data are influenced by the local/surrounding variations. Because of this only we are questioning the data averaging at global level as with it several local variations are involved.
When you plot a data of a station from standard station, you can explain the perturbations in temperature. We need such analysis to clear the global warming [a global issue] from ecological — land & water use/cover — changes with the time, which are local.
IPCC was not sure of the sensitivity factor and thus it goes on reducing the sensitivity factor from report to report with wide range [ maximum to mean to minimum] with scientific validity.
Dr. S. Jeevananda Reddy

Evan Jones
Editor
Reply to  Derek Wood
December 17, 2015 8:45 pm

Thanks. But we are not being conned; we are being subjected to a serious error. Therefore we endeavor to define, account for, and otherwise explore this error. Making a mistake is not fraud, it is just a mistake.

David Ball
Reply to  Evan Jones
December 18, 2015 9:49 am

It seems to me that data collection problems should be the first thing to be looked at. These are basic scientific tenets that the other side has consistently refused to even consider. Clearly, and to anyone with a whiff of common sense, there is an issue, and one would think they would immediately investigate. They refused. Period.
This “Microsite” issue is another “shark that has been jumped”. The blowback will be harsh, as you are not just upsetting one apple cart, you are upsetting ALL the apple carts.
The numbers undermine every aspect of AGW, not just CAGW. This, coupled with the failed models, blows the CO2 conjecture right out of the boiling oceans.

David Larsen
December 17, 2015 5:18 pm

Today in rural Montana, Big Horn county, Hardin, I saw a 7 degree differential between the two digital thermometers I have mentioned before. One hangs over cement about 10 feet and the other in a tower, not near cement. Cold island effect.

Eamon Butler
December 17, 2015 5:18 pm

Oh I like this. Well done Mr. W and all concerned.
But, …what about Paris?

Reply to  Eamon Butler
December 17, 2015 5:27 pm

It’s just evidence, Eamon. Never mind that AGW turns out to be much ado about almost nothing. We’ll always have Paris. 😉

Reply to  daveburton
December 17, 2015 10:03 pm

I laughed until I cried at that…”We’ll always have Paris”. Perfect!

clipe
Reply to  Eamon Butler
December 17, 2015 5:28 pm

But, …what about Paris?

Paris

clipe
Reply to  clipe
December 17, 2015 5:32 pm

But, …what about Paris?
Paris

Evan Jones
Editor
Reply to  Eamon Butler
December 17, 2015 10:50 pm
Dennis Kuzara
December 17, 2015 5:31 pm

I know this is a pain, but….
We use the MMTS adjustment noted in Menne et al. 2009 and 2010 for the MMTS exposure housing versus the old wooden box Cotton Region Shelter (CRS) which has a warm bias mainly due to pain and maintenance issues.

clipe
Reply to  Dennis Kuzara
December 17, 2015 5:33 pm

noted multiple times upthread

Evan Jones
Editor
Reply to  clipe
December 17, 2015 8:48 pm

due to pain
You don’t know the half of it.

Third Party
December 17, 2015 5:32 pm

Where can I go to see the Station by Station analysis?

Evan Jones
Editor
Reply to  Third Party
December 17, 2015 8:50 pm

You’ll have to wait until publication. Twice already we’ve had reason to regret releasing preliminary data. So we must tread with caution. But you shall have it. All of it. That’s a cross-my-heart promise.

December 17, 2015 5:34 pm

I would love to see the average temperature trend (as measured by balloons/satellites) for the Continental US added to lead graphic. Let’s compare the satellite data with the pristine ground stations.

Evan Jones
Editor
Reply to  Mike Smith
December 17, 2015 8:54 pm

The Class 1\2 station trends run ~10% cooler than the sats. This is a near-perfect result for our purposes, as basic physics indicates that surface trends should be lower than sats over a warming period.

December 17, 2015 5:35 pm

This is a really *beautiful* piece of work. Conceptually simple, technically a lot of hard grind, clearly explained, *solid* results. Since you don’t claim a zero or negative trend, there is even some possibility that you might be believed. If you seriously believe in global warming, dear reader, you have to want the best measurements you can get to tell you where the money needs to be spent. So any honest believer in CAGW (and I know a few) should want the best temperature measurement network that’s practically affordable and needs to know that we currently don’t really have one. (Amongst other things, how will we know we’ve *succeeded* in curbing AGW if our thermometers are wrong?) This isn’t just carping, this is real data with real numbers demanding real action. If I didn’t have a broken toe right now, I’d be jumping up and down with exclamations of admiration. WELL DONE!

Evan Jones
Editor
Reply to  Richard A. O'Keefe
December 17, 2015 8:56 pm

We like to think so, yes, yes, yes, yes, yes, yes, and thanks.
Sorry ’bout the toe. (Happens when it strays off the line. Or so they say.)

Knute
December 17, 2015 5:38 pm

How They Do It
The US federal government typically requires a Quality Assurance Plan (QAP) for any data that is used for decision making. This is a standard that appears to have evolved over the years in response to outcries of monkey business and the arbitrary use of collected info (data).
I did a little research to try and locate the current QAP for how NOAA collects climate based data on land (CONUS). The latest I found was in 2010. There may be a more recent one, but that’s what I found doing a quick search.
https://www.oig.doc.gov/OIGPublications/STL-19846.pdf
The above-linked QAP review was done via congressional request, under Inspector General review, so it has some weight and was of course peer reviewed. The work that Anthony did will be compared to the requirements of the existing QAP.
It’s the rules for the game so to speak. If the work meets the requirements of the rules OR presents a best available science reason for a departure (improvement) from the QAP, he’ll have to be prepared to defend it.
Since he is doing this work “independently”, one of the potentially successful strategies might be validating the worth of the work (comparison to the QAP) and creating a co-monitoring solution where his work and methods are reaffirmed by NOAA replication. That way you are in their wheelhouse. The strategy has the added simultaneous effect of forcing transparency.
Just a couple of thoughts from the peanut gallery and a common practice of NGOs.

Mike the Morlock
Reply to  Knute
December 17, 2015 6:04 pm

All true, Knute. Add to the mix that NOAA is in “hot water” with L. Smith’s subcommittee, and one of the loyal opposition who is running for President is also conducting his own hearings in the Senate.
This could get interesting. The paper seems to be showing up in various web pages.
michael

Reply to  Mike the Morlock
December 17, 2015 8:26 pm

Yes sir Mike
Would hate to see it blown off with a nonsensical but successful cognitive-dissonance “it was cherry picked” attack. The best way is to head that off by showing how it is as good as, if not better than, their own standards.
It will help the congressional committee if you do the work for them.

Evan Jones
Editor
Reply to  Knute
December 17, 2015 9:09 pm

How we do it:
1.) Isolate the stations with unperturbed metadata (both good and bad).
2.) We apply the minimum corrections/adjustments necessary.
3.) We let ‘er roll.
P.S., we do subsets. By equipment, by mesosite, by region. We also provide full data for the stations we dropped.

Reply to  Evan Jones
December 17, 2015 9:33 pm

Do you mind defining …
1. unperturbed metadata
2. types of corrections and criteria for application
I realize you may be swamped, but it will come up sooner rather than later.
For additional thought, your definitions for 1 and esp 2 will be compared to current algorithm V2 from the BAMS 2009 pub, but you probably already know that ….

Evan Jones
Editor
Reply to  Evan Jones
December 17, 2015 11:00 pm

1. unperturbed metadata
— No change in site rating throughout. No moves where the previous location is unknown (we lost most of them that way). Localized moves where both ratings are the same are included. An unmoved station whose rating was changed by an encroaching heat sink is treated as a move and is dropped.
— No significant change of TOBS. If a station has a flip and a flip-back and roughly equal TOBS at each end, that will not affect trend, and that data is included. JN-G ran TOBS-adjusted data for the set in order to check, and the results were much the same. We list TOBS with the data, so one can make any changes (dropping or adding) as one sees fit.
We include stations with equipment changes, but make the necessary MMTS adjustments to correct.
2. types of corrections and criteria for application
MMTS only. Offset adjustment applied consistent with Menne (2009).

bit chilly
Reply to  Knute
December 18, 2015 9:08 pm

i like those thoughts knute.

Reply to  bit chilly
December 18, 2015 10:17 pm

Chilly
Glad you see some worth there.
I’m beginning to think that web based groups such as WUWT, CE and Nova are the “new” NGO. If Anthony et al can successfully be viewed by NOAA and friends as a valid NGO, the authorities will have to embrace working with their findings much in the same way they are obligated to work with any other “environmental” NGO. Methods review, independent audit, priority recommendations for work to be performed.
Excellent opportunity.

Tom Graney
December 17, 2015 5:47 pm

I hate temperature data expressed as color gradient maps, or any form of colored map. I can’t think of anything more useless, and I’m sure that’s why IPCC reports are rife with them. All we need is the damn trend of the actual damn temperature.

Evan Jones
Editor
Reply to  Tom Graney
December 17, 2015 8:59 pm

Well, I don’t mind it so much. And, seeing as how the numbers are clearly delineated on our map, I leaned on contrast, to boot. Let it sing out.

Reply to  Evan Jones
December 17, 2015 10:08 pm

I like them too. Easy to grasp and share with others. Don’t let the grumpers get to you.

Evan Jones
Editor
Reply to  Evan Jones
December 17, 2015 11:03 pm

They haven’t succeeded yet. (For my heart is pure. Sort of.)

Tom Judd
December 17, 2015 5:47 pm

Thank you very much.

Don B
December 17, 2015 5:50 pm

Dr. N-G’s involvement ought to pay dividends. He had struck me as a supporter of the alarming-warming meme, but it turns out he is an honorable scientist, and I was wrong. His contributions ought to open many eyes.

rogerknights
Reply to  Don B
December 17, 2015 7:28 pm

A question: Has this wobbled N-G’s warmism any?

Evan Jones
Editor
Reply to  rogerknights
December 18, 2015 4:33 am

I couldn’t tell you. But I do not doubt that he will be considering microsite and heat sink effect going forward.

Evan Jones
Editor
Reply to  Don B
December 17, 2015 9:01 pm

Dr. Nielson-Gammon’s contributions are invaluable and I will always be grateful for his honest efforts and the indispensable work he has produced.

Steve
December 17, 2015 5:52 pm

Oooh….moshepit not gonna like this. I await the driveby.

Evan Jones
Editor
Reply to  Steve
December 17, 2015 9:03 pm

I think you may be surprised. Mosh himself has said siting is a “good” issue, as it is a potentially systematic effect rather than eating around the edges.
All he needs to do to create an interesting and different approach is to apply his methods, but only pairwise good stations with the good and bad with the bad. I will be most interested in those results.

Dr. S. Jeevananda Reddy
December 17, 2015 6:03 pm

Anthony Watts — I would appreciate some clarification on two issues — in Fig. 4 the pattern showed ‘W’ followed by ‘M’. What is the width of this? Secondly, are all those 410 stations standard met stations — inside the Stevenson Screen — or are they different types? Is it possible to present a trend of a met station within the agriculture zone and one from the urban zone? The reason I am asking is to get clarity on the urban-heat-island effect and rural-cold-island effect.
Dr. S. Jeevananda Reddy

Evan Jones
Editor
Reply to  Dr. S. Jeevananda Reddy
December 17, 2015 9:17 pm

Scale is in hundredths of a degree C. Interval is 30 years. The station set is all of the unperturbed stations of the USHCN2 (except a few we never found — yet). All equipment is included: CRS, MMTS, and ASOS. The MMTS-only trend is lower (0.163 C/decade).
We include Crops as one of our subsets. And CRS-only, urban/non-urban, Rural-only MMTS, etc. You can subdivide as you wish and create further categories.

December 17, 2015 6:07 pm

Here is something like an Australian land mass temperature comparison. It covers the period 1972 to 2006 inclusive (I did the work in 2008 or so. The start date was chosen to be after the change from degrees F to degrees C reporting here).
The rationale is on the spread sheet here –
http://www.geoffstuff.com/pristine_feb_2015.xls
Australia has over 1,200 land weather station sites on record. I chose the 44 sites whose history suggested maximum isolation and minimum effects of the hand of Man. Both maximum and minimum daily temperatures were used as the basic data.
I calculated linear trend lines and compared the trends expressed as degC change per century.
The trends were so noisy and hard to interpret that I gave up, noting that the most pristine sites may well have the poorest quality, an enigma with no solution.
NOTE: I did not take averages of trend numbers because that common approach (as in CMIP5, for example) is not statistically pure.
Naturally, I am happy to field questions.
[Thank you. .mod]

Evan Jones
Editor
Reply to  Geoff Sherrington
December 17, 2015 11:32 pm

Here is something
Yes, interesting. Got metadata?

Reply to  Evan Jones
December 18, 2015 3:12 am

Some metadata is available online from BOM. Usually sketchy.
Some other sites like Rutherglen and Amberley remain in a state of “we agree to disagree” about the importance and accuracy of recently released additional metadata.
My career in mineral exploration took me to most corners of Australia, so local knowledge was used in selecting sites for this work. Also extensive use of Google Earth for aerial views, geography, distances, etc., and government stats for populations.
I put a sting in the tail by correlating the 44 station trends with their digital World Meteorological Number. It supports my claim of noisy, useless data.
To do a study similar to yours in the USA, I would need to use BOM homogenised data, but I have not found any remote sites like these that have gone through the mincer.
Note that derived trends exceeding, say, +/- 3 degC per century are intuitively non-physical.

Mike the Morlock
December 17, 2015 6:09 pm

Anthony Watts
Thank you for all the hard work. As this ends, go and have a nice dinner and a good night. You did good.
A long road from starting out as a shy weatherman, yes?
best
michael

Evan Jones
Editor
Reply to  Mike the Morlock
December 17, 2015 11:36 pm

Oh, right. Food. Now enjoying lentils exposed to heat source radiation. Night is already good.

December 17, 2015 6:23 pm

Great work Anthony. I’m glad to see John N-G was on board and in agreement. The implication is that the entire “homogenization” process is making results less accurate than might be accomplished by wisely choosing meso or synoptic scale sites over microscale sites within the GHCN as opposed to homogenizing. I also prefer raw milk from happy pastured cows to homogenized milk from confinement feedlot farms. 🙂

Evan Jones
Editor
Reply to  oz4caster
December 17, 2015 11:07 pm

JN-G was not in agreement at first. That is why we invited him in. It is good to have a hypothesis-skeptic on board for any study.

December 17, 2015 6:24 pm

Well done Anthony. One cold beer for you!

old construction worker
December 17, 2015 6:46 pm

nice job
The “Team” support members will start waving their hands saying “Big Oil funded” Anthony.

Evan Jones
Editor
Reply to  old construction worker
December 17, 2015 11:07 pm

This paper is entirely unfunded.

December 17, 2015 6:55 pm

Anthony, your release says you identified 410 “unperturbed” weather stations.
How many of the 410 were compliant Class 1,2 USHCN stations and how many were non-compliant, Class 3,4,5 USHCN stations?

Evan Jones
Editor
Reply to  David Sanger (@davidsanger)
December 17, 2015 9:20 pm

92 Class 1\2. 318 Class 3\4\5.

Reply to  Evan Jones
December 17, 2015 9:24 pm

thanks Evan – just saw that in your comment up above.

December 17, 2015 6:57 pm

Reblogged this on Sierra Foothill Commentary and commented:
Ellen and I supported Anthony’s project by collecting weather station data across the country. We are proud of our contribution to citizen science.
We surveyed stations in Delaware, Maryland, South Dakota, Wyoming, Nevada, Idaho and California.

Evan Jones
Editor
Reply to  Russ Steele
December 17, 2015 9:21 pm

Greetings, my brother.

Michael Hebert
December 17, 2015 7:01 pm

(Comment deleted. Please use only one screen name. -mod)

techgm
December 17, 2015 7:18 pm

These 410 stations, how are they distributed by elevation and latitude, and how do those distributions compare to those of all stations? If there are differences in the distributions, how do those differences affect reported average temperatures?

Evan Jones
Editor
Reply to  techgm
December 17, 2015 9:23 pm

Pretty well distributed. Even the set of 92 Class 1\2s is pretty good, all things considered. Call it lucky (because that’s what it is).

AndyK
December 17, 2015 7:23 pm

Knute
Quality Assurance Plans (QAP)
If the current practices have been allowed under the current QAP, I don’t think this paper will have any QAP problems.

Reply to  AndyK
December 17, 2015 7:40 pm

Andy
If this dataset is used by Cruz’s committee, its QAP will be compared to the standard methods employed by the federal government (NOAA). From what I’ve read, those methods were reviewed and approved by an independent panel.
In order for Anthony to counter the claim that he cherry-picked, he’ll have to either have a better method (approved by a supporting independent peer review, or otherwise approved) or have found a flaw in how NOAA executed their own QAP and methods.
The committee will request a Data Quality Audit.
Essentially, the committee gets to use the “independent” data once it’s been audited.
It’s a big step and very nitpicky.

Evan Jones
Editor
Reply to  knutesea
December 17, 2015 9:24 pm

I think we will have no problem addressing any allegations of cherrypicking.