From the International Journal of Climatology and the “if you can’t beat ’em, join ’em” department.
To me, this feels like vindication. For years, I’ve been pointing out just how bad the U.S. and global surface monitoring networks have been. We’ve seen stations sited on pavement, stations at airports collecting jet exhaust, failing instruments reading high, and sensors placed right next to the heat output of air conditioning systems.



We’ve been told it “doesn’t matter” and that “the surface monitoring network is producing good data”. Behind the scenes though, we learned that NOAA/NCDC scrambled when we reported this, quietly closing some of the worst stations, while making feverish and desperate PR pitches to prop up the narrative of “good data”.
Read my report from 2009 on the state of the U.S. Historical Climatology Network (USHCN):
That 2009 report (published with the help of the Heartland Institute) spurred a firestorm of criticism, as well as an investigation and report by the U.S. Office of the Inspector General, which wrote:
Lack of oversight, non-compliance and a lax review process for the State Department’s global climate change programs have led the Office of the Inspector General (OIG) to conclude that program data “cannot be consistently relied upon by decision-makers” and it cannot be ensured “that Federal funds were being spent in an appropriate manner.”
Read it all here: https://wattsupwiththat.com/2014/02/07/report-from-the-office-of-the-inspector-general-global-climate-change-program-data-may-be-unreliable/
More recently, I presented at AGU15: Watts at #AGU15, “The quality of temperature station siting matters for temperature trends”.
And showed just how bad the old surface network is in two graphs:


Now, some of the very same people who have scathingly criticized my efforts, and the efforts of others, to bring these weaknesses to the attention of the scientific community have essentially done an about-face: they have authored a paper calling for a new global climate monitoring network like the United States Climate Reference Network (USCRN), which I have endorsed as the only suitable way to measure surface temperature and extract long-term temperature trends.
During my recent trip to Kennedy Space Center (thanks to generous donations from WUWT readers), I spotted an old-style airport ASOS weather station right next to one of the new USCRN stations at the Shuttle Landing Facility runway, presumably placed there to study the difference between the two. Or, possibly, they just couldn’t trust the ASOS station when they most needed it: during a Shuttle landing, where accurate temperature is of critical importance in calculating density altitude, and therefore the glide ratio. Comparing the data between the two is something I hope to do in a future post.
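As a rough illustration of why an accurate temperature reading matters for that calculation: density altitude can be estimated from pressure altitude and outside air temperature. The sketch below uses the common flight-planning rule of thumb of roughly 120 ft per °C of deviation from the ISA standard temperature; this is my own assumption for illustration, not anything taken from NASA or NOAA procedures.

```python
def density_altitude_ft(pressure_altitude_ft: float, oat_c: float) -> float:
    """Rule-of-thumb density altitude estimate in feet.

    Assumes DA is roughly pressure altitude plus 120 ft for every degree C
    that the outside air temperature exceeds the ISA standard temperature
    (15 C at sea level, lapsing about 2 C per 1000 ft).
    """
    isa_temp_c = 15.0 - 2.0 * (pressure_altitude_ft / 1000.0)
    return pressure_altitude_ft + 120.0 * (oat_c - isa_temp_c)

# The Shuttle Landing Facility sits near sea level; a 3 C error in the
# reported temperature shifts the estimated density altitude by ~360 ft.
print(density_altitude_ft(pressure_altitude_ft=100.0, oat_c=30.0))
print(density_altitude_ft(pressure_altitude_ft=100.0, oat_c=33.0))
```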

Here is the aerial view showing placement:

Clearly, with its careful selection of locations and its triple-redundant, state-of-the-art aspirated air temperature sensors, the USCRN station platform is the best possible way to measure long-term trends in 2 meter surface air temperature. Unfortunately, the public never sees the temperature reports from it in NOAA’s “State of the Climate” missives; those instead rely on the antiquated and buggy COOP and GHCN surface networks and their highly biased, and then adjusted, data.
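For a sense of what “triple redundant” buys in practice, here is a minimal sketch of combining three aspirated-sensor readings by taking the median and flagging disagreement. This is my own illustration, assuming a simple 0.3 °C tolerance; it is not NOAA’s actual USCRN aggregation procedure.

```python
def combine_triplet(t1_c: float, t2_c: float, t3_c: float, tol_c: float = 0.3):
    """Median of three redundant temperature readings, plus a suspect flag.

    Flags the observation if any sensor disagrees with the median by more
    than tol_c. The 0.3 C tolerance is an assumed figure, not a USCRN spec.
    """
    readings = sorted([t1_c, t2_c, t3_c])
    median = readings[1]
    suspect = any(abs(r - median) > tol_c for r in readings)
    return median, suspect

print(combine_triplet(21.4, 21.5, 21.5))  # (21.5, False)
print(combine_triplet(21.4, 21.5, 23.0))  # (21.5, True) -- one sensor drifting
```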
So, for this group of people to call for a worldwide USCRN-style temperature monitoring network is not only a step in the right direction, but a clear indication that, even though they won’t publicly admit that the unreliable and uncertain existing COOP/USHCN networks worldwide are “unfit for purpose”, they are in fact endorsing the creation of a truly “fit for purpose” global system to monitor surface air temperature: one that won’t be highly biased by location or sensor/equipment issues, and that will have no need for adjustments at all.
I applaud the effort, and I’ll get behind it, because it puts an end to the relevance of NASA GISS and HadCRUT, whose operators (Gavin Schmidt and Phil Jones) are some of the most biased, condescending, and outright snotty scientists the world has ever seen. They should not be gatekeepers for the data, and this will end their lock on that distinction. To Phil Jones’s credit, he was a co-author of this new paper. Gavin Schmidt, predictably, was not.
This is something both climate skeptics and climate alarmists should be able to get behind and promote. More on that later.
Here’s the paper (note that they reference my work via the 2011 Fall et al. paper):
Towards a global land surface climate fiducial reference measurements network
P. W. Thorne, H. J. Diamond, B. Goodison, S. Harrigan, Z. Hausfather, N. B. Ingleby, P. D. Jones, J. H. Lawrimore, D. H. Lister, A. Merlone, T. Oakley, M. Palecki, T. C. Peterson, M. de Podesta, C. Tassone, V. Venema, K. M. Willett
Abstract
There is overwhelming evidence that the climate system has warmed since the instigation of instrumental meteorological observations. The Fifth Assessment Report of the Intergovernmental Panel on Climate Change concluded that the evidence for warming was unequivocal. However, owing to imperfect measurements and ubiquitous changes in measurement networks and techniques, there remain uncertainties in many of the details of these historical changes. These uncertainties do not call into question the trend or overall magnitude of the changes in the global climate system. Rather, they act to make the picture less clear than it could be, particularly at the local scale where many decisions regarding adaptation choices will be required, both now and in the future. A set of high-quality long-term fiducial reference measurements of essential climate variables will enable future generations to make rigorous assessments of future climate change and variability, providing society with the best possible information to support future decisions. Here we propose that by implementing and maintaining a suitably stable and metrologically well-characterized global land surface climate fiducial reference measurements network, the present-day scientific community can bequeath to future generations a better set of observations. This will aid future adaptation decisions and help us to monitor and quantify the effectiveness of internationally agreed mitigation steps. This article provides the background, rationale, metrological principles, and practical considerations regarding what would be involved in such a network, and outlines the benefits which may accrue. The challenge, of course, is how to convert such a vision to a long-term sustainable capability providing the necessary well-characterized measurement series to the benefit of global science and future generations.
INTRODUCTION: HISTORICAL OBSERVATIONS, DATA CHALLENGES, AND HOMOGENIZATION
A suite of meteorological parameters has been measured using meteorological instrumentation for more than a century (e.g., Becker et al., 2013; Jones, 2016; Menne, Durre, Vose, Gleason, & Houston, 2012; Rennie et al., 2014; Willett et al., 2013, henceforth termed “historical observations”). Numerous analyses of these historical observations underpin much of our understanding of recent climatic changes and their causes (Hartmann et al., 2013). Taken together with measurements from satellites, weather balloons, and observations of changes in other relevant phenomena, these observational analyses underpin the Intergovernmental Panel on Climate Change conclusion that evidence of historical warming is “unequivocal” (Intergovernmental Panel on Climate Change, 2007, 2013).
Typically, individual station series have experienced changes in observing equipment and practices (Aguilar, Auer, Brunet, Peterson, & Wieringa, 2003; Brandsma & van der Meulen, 2008; Fall et al., 2011; Mekis & Vincent, 2011; Menne, Williams Jr., & Palecki, 2010; Parker, 1994; Sevruk, Ondrás, & Chvíla, 2009). In addition, station locations, observation times, instrumentation, and land use characteristics (including in some cases urbanization) have changed at many stations. Collectively, these changes affect the representativeness of individual station series, and particularly their long-term stability (Changnon & Kunkel, 2006; Hausfather et al., 2013; Karl, Williams Jr., Young, & Wendland, 1986; Quayle, Easterling, Karl, & Hughes, 1991). Metadata about changes are limited for many of the stations. These factors impact our ability to extract the full information content from historical observations of a broad range of essential climate variables (ECVs) (Bojinski et al., 2014). Many ECVs, such as precipitation, are extremely challenging to effectively monitor and analyse due to their restricted spatial and temporal scales and globally heterogeneous measurement approaches (Goodison, Louie, & Yang, 1998; Sevruk et al., 2009).
Changes in instrumentation were never intended to deliberately bias the climate record. Rather, the motivation was to either reduce costs and/or improve observations for the primary goal(s) of the networks, which was most often meteorological forecasting. The majority of changes have been localized and quasi-random in nature and so are amenable to statistical averaging of their effects. However, there have been regionally or globally systemic transitions specific to certain periods of time whose effect cannot be entirely ameliorated by averaging. Examples include:
- Early thermometers tended to be housed in polewards facing wall screens, or for tropical locales under thatched shelter roofs (Parker, 1994). By the early 20th century better radiation shielding and ventilation control using Stevenson screens became ubiquitous. In Europe, Böhm et al. (2010) have shown that pre-screen summer temperatures were about 0.5 °C too warm.
- In the most recent 30 or so years a transition to automated or semi-automated measurements has occurred, although this has been geographically heterogeneous.
- As highlighted in the recent World Meteorological Organization (WMO) SPICE intercomparison (http://www.wmo.int/pages/prog/www/IMOP/intercomparisons/SPICE/SPICE.html) and the previous intercomparison (Goodison et al., 1998), measuring solid precipitation remains a challenge. Instrument design, shielding, siting, and transition from manual to automatic all contribute to measurement error and bias and affect the achievable uncertainties in measurements of solid precipitation and snow on the ground.
- For humidity measurements, recent decades have seen a switch to capacitive relative humidity sensors from traditional wet- and dry-bulb psychrometers. This has resulted in a shift in error characteristics that is particularly significant in wetter conditions (Bell, Carroll, Beardmore, England, & Mander, 2017; Ingleby, Moore, Sloan, & Dunn, 2013).
As technology and observing practices evolve, future changes are inevitable. Imminent issues include the replacement of mercury-in-glass thermometers and the use of third party measurements arising from private entities, the general public, and non-National Met Service public sector activities.
From the perspective of climate science, the consequence of both random and more systematic effects is that almost invariably a post hoc statistical assessment of the homogeneity of historical records, informed by any available metadata, is required. Based on this analysis, adjustments must be applied to the data prior to use. Substantive efforts have been made to post-process the data to create homogeneous long-term records for multiple ECVs (Mekis & Vincent, 2011; Menne & Williams, 2009; Rohde et al., 2013; Willett et al., 2013, 2014; Yang, Kane, Zhang, Legates, & Goodison, 2005) at both regional and global scales (Hartmann et al., 2013). Such studies build upon decades of development of techniques to identify and adjust for breakpoints, for example, the work of Guy Callendar in the early 20th century (Hawkins & Jones, 2013). The uncertainty arising from homogenization using multiple methods for land surface air temperatures (LSAT) (Jones et al., 2012; Venema et al., 2012; Williams, Menne, & Thorne, 2012) is much too small to call into question the conclusion of decadal to centennial global-mean warming, and commensurate changes in a suite of related ECVs and indicators (Hartmann et al., 2013, their FAQ2.1). Evidence of this warming is supported by many lines of evidence, as well as modern reanalyses (Simmons et al., 2017).
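To make the homogenization step concrete, here is a minimal sketch of detecting and adjusting a single mean-shift breakpoint in a candidate-minus-neighbour difference series. It is my own illustration of the general idea only, not an implementation of any of the methods cited above (e.g., the pairwise algorithm of Menne & Williams, 2009).

```python
import numpy as np

def adjust_single_breakpoint(candidate, neighbour):
    """Illustrative single-breakpoint homogenization.

    Builds a candidate-minus-neighbour difference series, finds the split
    point that maximizes the gap between segment means (a crude changepoint
    statistic), and shifts the candidate values before the break so both
    segments share the same offset relative to the neighbour.
    """
    candidate = np.asarray(candidate, dtype=float)
    diff = candidate - np.asarray(neighbour, dtype=float)
    n = len(diff)
    best_k, best_gap = None, 0.0
    for k in range(2, n - 2):  # candidate break positions
        gap = abs(diff[:k].mean() - diff[k:].mean())
        if gap > best_gap:
            best_k, best_gap = k, gap
    adjusted = candidate.copy()
    if best_k is not None:
        # Remove the apparent step change from the pre-break segment.
        adjusted[:best_k] += diff[best_k:].mean() - diff[:best_k].mean()
    return adjusted, best_k, best_gap
```

Operational algorithms handle multiple breaks, seasonality, and statistical significance; the point of the sketch is only that adjustments are inferred statistically, informed by metadata where it exists, rather than measured directly.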
The effects of inhomogeneities are stronger at the local and regional level, may be impacted by national practices complicating homogenization efforts, and are more challenging to remove for sparse networks (Aguilar et al., 2003; Lindau & Venema, 2016). The effects of inhomogeneities are also manifested more strongly in extremes than in the mean (e.g., Trewin, 2013) and are thus important for studies of changes in climatic extremes. State-of-the-art homogenization methods can only make modest improvements in the variability around the mean of daily temperature (Killick, 2016) and humidity data (Chimani et al., 2017).
In the future, it is reasonable to expect that observing networks will continue to evolve in response to the same stakeholder pressures that have led to historical changes. We can thus be reasonably confident that there will be changes in measurement technology and measuring practice. It is possible that such changes will prove difficult to homogenize and would thus threaten the continuity of existing data series. It is therefore appropriate to ask whether a different route is possible to follow for future observational strategies that may better meet climate needs, and serve to increase our confidence in records going forwards. Having set out the current status of data sets derived from ad hoc historical networks, in the remainder of this article, we propose the construction of a different kind of measurement network: a reference network whose primary mission is the establishment of a suite of long-term, stable, metrologically traceable, measurements for climate science.
…
Siting considerations
Each site will need to be large enough to house all instrumentation without adjacent instrumentation interfering with one another, with no shading or wind-blocking vegetation or localized topography, and at least 100 m from any artificial heat sources. Figure 2 provides a site schematic for USCRN stations that meets this goal. The siting should strive to adhere to Class 1 criteria detailed in guidance from the WMO Commission for Instruments and Methods of Observations (World Meteorological Organization, 2014, part I, chap. I). This serves to minimize representativity errors and associated uncertainties. Sites should be chosen in areas where changes in siting quality and land use, which may impact representativity, are least likely for the next century. The site and surrounding area should further be selected on the basis that its ownership is secure. Thus, site selection requires an excellent working and local knowledge of items such as land/site ownership proposed, geology, regional vegetation, and climate. As it cannot be guaranteed that siting shall remain secure over decades or centuries, sites need to be chosen so that a loss will not critically affect the data products derived from the network. A partial solution would be to replace lost stations with new stations with a period of overlap of several years (Diamond et al., 2013). It should be stressed that sites in the fiducial reference network do not have to be new sites and, indeed, there are significant benefits from enhancing the current measurement program at existing sites. Firstly, co-location with sites already undertaking fiducial reference measurements either for target ECVs or other ECVs, such as GRUAN or GCW would be desirable. Secondly, co-location with existing baseline sites that already have long records of several target ECVs has obvious climate monitoring, cost and operational benefits.
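As a toy illustration of how such siting criteria might be screened against station metadata, the sketch below checks a candidate site against a few of the requirements described above (100 m clearance from artificial heat sources, no shading vegetation, secure ownership). The dictionary schema and field names are hypothetical, not part of the WMO Class 1 guidance or the USCRN specification.

```python
def meets_basic_siting_criteria(site: dict) -> bool:
    """Screen a candidate site against a few illustrative siting rules."""
    return (
        site.get("distance_to_heat_source_m", 0.0) >= 100.0  # clearance rule
        and not site.get("shading_vegetation", True)          # no shading
        and site.get("ownership_secure", False)               # tenure secure
    )

candidate = {
    "distance_to_heat_source_m": 150.0,
    "shading_vegetation": False,
    "ownership_secure": True,
}
print(meets_basic_siting_criteria(candidate))  # True
```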

Siting considerations should be made with accessibility in mind both to better ensure uninterrupted operations and communications, and to enable both regular and unscheduled maintenance/calibration operations. If a power supply and/or wired telecommunication system is required then the site will need to provide an uninterrupted supply, and have additional redundancy in the form of a back-up generator or batteries. For many USCRN sites the power is locally generated via the use of a combination of solar, wind, and/or methane generator sources, and the GOES satellite data collection system provides one-way communication from all sites.
For a reference grade installation, an evaluated uncertainty value should be ascertained for representativeness effects which may differ synoptically and seasonally. Techniques and large-scale experiments for this kind of evaluation and characterization of the influences of the siting on the measured atmospheric parameters are currently in progress (Merlone et al., 2015).
Finally, if the global surface fiducial reference network ends up consisting of two or more distinct set-ups of instrumentation (section 4.1), there would be value in side-by-side operations of the different configurations in a subset of climatically distinct regions to ensure long-term comparability is assured (section 3). This could be a task for the identified super-sites in the network.
…
There are many possible metrics for determining the success of a global land surface fiducial reference climate network as it evolves, such as the number and distribution of fiducial reference climate stations or the percent of stations adhering to the strict reference climate criteria described in this article. However, in order to fully appreciate the significance of the proposed global climate surface fiducial reference network, we need to imagine ourselves in the position of scientists working in the latter part of the 21st century and beyond. However, not just scientists, but also politicians, civil servants, and citizens faced with potentially difficult choices in the face of a variable and changing climate. In this context, we need to act now with a view to fulfilling their requirements for having a solid historical context they can utilize to assist them making scientifically vetted decisions related to actions on climate adaptation. Therefore, we should care about this now because those future scientists, politicians, civil servants, and citizens will be—collectively—our children and grandchildren, and it is—to the best of our ability—our obligation to pass on to them the possibility to make decisions with the best possible data. Having left a legacy of a changing climate, this is the very least successive generations can expect from us in order to enable them to more precisely determine how the climate has changed.
Read the full open access paper here; it is well worth your time: http://onlinelibrary.wiley.com/doi/10.1002/joc.5458/full
h/t to Zeke Hausfather for notice of the paper. Zeke, unlike some of his co-authors, actually engages me with respect. Perhaps his influence will help them become not just civil servants, but civil people.

Well, this is kind of puzzling. I was looking at the locations of the stations used in GHCN, and wondered how many there were way up north where it is insisted that massive warming is going on. I wanted to know just how many reporting stations there were.
There aren’t many. I have them installed in a .kml file on Google Earth, and between some of them up there the ruler shows a distance of up to 2,200 km. That’s a long way to interpolate data. So I was looking for stations way, way up there, and found a station at Ostrov Vize, Russia. The station ID is RSM00020069 in the GHCN Daily files, and 22220069000 in the GHCN Monthly files. I have no idea why the station has two different IDs (actually, three, because its WMO ID is 20069).
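For readers who want to reproduce that kind of distance check without Google Earth’s ruler, the great-circle separation between two stations can be computed from their latitudes and longitudes with the haversine formula. The coordinates below are rough illustrative values only, not figures taken from the GHCN station inventory.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * asin(sqrt(a))

# Approximate positions for Ostrov Vize and another Arctic station
# (illustrative only; look up the real coordinates in ghcnd-stations.txt).
print(haversine_km(79.5, 77.0, 71.6, 128.9))
```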
I went to the NOAA site where one can get data for a station for any month it was recording, so I looked up RSM00020069, and entered March 2017. This is part of the data I was shown:
Year Month Day Max Min
2017 3 1 2
2017 3 2 -2
2017 3 3 -11 -18
2017 3 4 -11 -18
2017 3 5
2017 3 6 -18
2017 3 7 -7
2017 3 8 -15
2017 3 9 -24
2017 3 10 -13
2017 3 11 -25
2017 3 12 -5
2017 3 13 -19
2017 3 14 -17
2017 3 15 -4 -13
2017 3 16 -9
2017 3 17 3
2017 3 18 26
2017 3 19 29
2017 3 20 27
2017 3 21 22
2017 3 22 -10
2017 3 23 3
2017 3 24 22 -9
2017 3 25 9
2017 3 26 1 -10
2017 3 27 -8
2017 3 28 7 0
2017 3 29 0
2017 3 30 -7
2017 3 31 -17
-------------------------------------------------------
AVG 4 -13
I hope that’s readable. Then I went to look at my GHCN Monthly and Daily files, and got the March 2017 numbers for those. The daily looked like this:
RSM00020069 2017 03
DAY TAVG
1 -18.4
2 -22.4
3 -25.8
4 -25.4
5 -26.6
6 -22.0
7 -23.9
8 -27.8
9 -27.4
10 -28.2
11 -23.0
12 -22.8
13 -25.9
14 -19.6
15 -22.2
16 -17.4
17 -6.8
18 -5.2
19 -4.7
20 -6.8
21 -14.1
22 -19.2
23 -19.3
24 -8.7
25 -17.3
26 -20.1
27 -18.1
28 -16.1
29 -20.1
30 -23.6
31 -23.6
--------------
AVG -19.4
And finally, the Monthly:
22220069000 2017 TAVG
MAR -19.30
This is why I’m confused. The NOAA QuickData page for that station shows pretty sparse data for March 2017, while the NOAA Daily shows complete data that isn’t even close to what the first report showed. To top it all off, the Monthly average is 0.1°C warmer than what you get averaging the Daily data.
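For anyone who wants to verify the arithmetic, a few lines are enough to average the 31 GHCN Daily TAVG values listed above and compare the result with the GHCN Monthly figure:

```python
# TAVG values for RSM00020069, March 2017, copied from the GHCN Daily
# listing above (degrees C).
tavg = [-18.4, -22.4, -25.8, -25.4, -26.6, -22.0, -23.9, -27.8, -27.4,
        -28.2, -23.0, -22.8, -25.9, -19.6, -22.2, -17.4, -6.8, -5.2,
        -4.7, -6.8, -14.1, -19.2, -19.3, -8.7, -17.3, -20.1, -18.1,
        -16.1, -20.1, -23.6, -23.6]

daily_mean = sum(tavg) / len(tavg)
ghcn_monthly = -19.30  # from the GHCN Monthly file quoted above

print(f"mean of daily TAVG:  {daily_mean:.2f}")   # about -19.44
print(f"GHCN Monthly value:  {ghcn_monthly:.2f}")
print(f"monthly minus daily: {ghcn_monthly - daily_mean:+.2f}")  # about +0.14
```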
Is it any wonder why it’s frustrating to try to figure out what’s being done with the data?
James,
The columns didn’t quite line up.
Try using the “pre” format options.
( Put <pre> at the start of your table. End your table with <pre/pre> )
More (and better) info here. https://wermenh.com/wuwt/index.html
OOPS!
Make that … </pre> ..!
Something is clearly wrong with the first set. I don’t believe that it was 29°C on the 19th of March on this Arctic island.
Great work. Your efforts are greatly appreciated.
I believe it was Anthony who, shortly before he got started on the surface stations project, conducted an experiment where he took two boards and painted one with whitewash and the other with modern white latex paint.
He found that the board painted with latex got a little bit warmer than the board painted with whitewash.
Apparently, while both reflect visible light well, whitewash does a better job of reflecting in the IR spectrum.
This got started because somebody had noted that in the past all temperature enclosures were painted with whitewash, but in recent years they had been shifting over to latex because it lasted longer.
The other thing was that very few stations recorded when the changeover occurred.
Meltdown warning: this post/thread has been picked up by the Daily Caller News Foundation, and if Drudge picks it up too, then pack your servers in ice.
And it all began with “the name/website that was not to be spoken” by the Climategate team. LOL
I can’t even begin to offer the appreciation and respect that Anthony Watts and WUWT deserve!
Dear Mr. Watts,
You are absolutely correct. This does represent a vindication of all your extraordinary efforts on behalf of genuine science (and it is long overdue).
You deserve congratulations.
How can I trust someone who plainly states that their goal is to provide actionable data to our children in order to absolve us of our legacy of global warming? Watch for legitimate declines in temperature being attributed to equipment changes. Watch for more downward adjustments to the historical record. Watch for cherry-picked “super-sites,” whatever that means.
Anthony, this is fabulous! Kudos to you for your dogged determination and to all the volunteers and their thousands of hours of support to illustrate the system was FUBAR.
Unfortunately, my read is still that we have tens of thousands of data points in the record, spanning decades, in which we have little confidence, but which are the only data we have. The old data have questionable validity, utility, and value, yet they will be used for decades.
Meanwhile, it will likely take decades to establish a new network, plus years of data collection and analysis (and we should assume there will be associated “spin” regarding the results), before we have a real understanding of just how poor the existing data are.
When the Great Lakes had record ice 5 years ago, the raw temperatures in the states around the Great Lakes showed much colder than usual readings. But after the adjustments, the Great Lakes states were reported as having normal temperatures that winter. That example is one of many reasons why I am skeptical of the adjustment process.
Nice admission.
Funny how when Anthony first brought this small problem up, the “authorities” responded most unkindly.
Now they essentially pretend it is their own idea.
We all know certain people who behave this way.
The claims of significant information remain deliberately off the radar.
The stated claim of warming of between 0.7 and 0.9 °C in the “Average Global Temperature” is never stated with error ranges.
Naturally, if the estimated error range accompanied the A.G.T., the result would be laughter.
Much ado about nothing; this Emperor has never been clothed.
Thank you very much for all the work on the temperature quality issue generally, but specifically for the link to this paper, which I am now using as a source in a report I am writing.
Quote from the abstract: “These uncertainties do not call into question the … overall magnitude of the changes in the global climate system.”
I think it strange and notable that Anthony Watts does not comment on this assertion about the overall magnitude. Watts once published analysis indicating the opposite. Or am I misunderstanding something?
Watts and the other volunteers who contributed to the survey work at http://surfacestations.org/ performed vital and fundamental research into climate science. The home page of surfacestations.org says it was last updated in 2012. That’s too bad.