From the International Journal of Climatology and the "if you can't beat 'em, join 'em" department.
To me, this feels like vindication. For years, I've been pointing out just how bad the U.S. and global surface monitoring networks have been. We've seen stations sited on pavement, at airports collecting jet exhaust, with failing instruments reading high, and right next to the heat output of air conditioning systems.



We've been told it "doesn't matter" and that "the surface monitoring network is producing good data." Behind the scenes, though, we learned that NOAA/NCDC scrambled when we reported this, quietly closing some of the worst stations while making feverish and desperate PR pitches to prop up the narrative of "good data."
Read my report from 2009 on the state of the U.S. Historical Climatology Network:
That 2009 report (published with the help of the Heartland Institute) spurred a firestorm of criticism, as well as an investigation and report by the U.S. Office of the Inspector General, which wrote:
Lack of oversight, non-compliance and a lax review process for the State Department’s global climate change programs have led the Office of the Inspector General (OIG) to conclude that program data “cannot be consistently relied upon by decision-makers” and it cannot be ensured “that Federal funds were being spent in an appropriate manner.”
Read it all here: https://wattsupwiththat.com/2014/02/07/report-from-the-office-of-the-inspector-general-global-climate-change-program-data-may-be-unreliable/
More recently, I presented at AGU15: "Watts at #AGU15: The quality of temperature station siting matters for temperature trends."
And showed just how bad the old surface network is in two graphs:


Now, some of the very same people who have scathingly criticized my efforts, and the efforts of others, to bring these weaknesses to the attention of the scientific community have essentially done an about-face and authored a paper calling for a new global climate monitoring network like the United States Climate Reference Network (USCRN), which I have endorsed as the only suitable way to measure surface temperature and extract long-term temperature trends.
During my recent trip to Kennedy Space Center (thanks to generous donations from WUWT readers), I spotted an old-style airport ASOS weather station right next to one of the new USCRN stations at the Shuttle Landing Facility runway, presumably placed there to study the difference between the two. Or, possibly, they just couldn't trust the ASOS station when they most needed it: during a Shuttle landing, where accurate temperature is of critical importance in calculating density altitude, and therefore the glide ratio. Comparing the data between the two is something I hope to do in a future post.
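To give a feel for why a degree or two matters for a landing: density altitude is commonly estimated with the rule of thumb of roughly 120 feet per degree Celsius above the standard-atmosphere temperature. Here is a minimal Python sketch using that rule of thumb; the temperatures are illustrative values only, not actual Shuttle landing data, and NASA's real procedure is of course far more rigorous.

```python
# Rough rule-of-thumb density altitude calculation (~120 ft of density
# altitude per degree C above ISA). Illustrative only.

def isa_temp_c(pressure_altitude_ft: float) -> float:
    """Standard-atmosphere temperature (deg C) at a given pressure altitude (ft)."""
    return 15.0 - 2.0 * (pressure_altitude_ft / 1000.0)

def density_altitude_ft(pressure_altitude_ft: float, oat_c: float) -> float:
    """Approximate density altitude from pressure altitude and outside air temp."""
    return pressure_altitude_ft + 120.0 * (oat_c - isa_temp_c(pressure_altitude_ft))

# The Shuttle Landing Facility sits near sea level; compare a "true" reading
# with one that is biased 2 C warm on a hot Florida afternoon (hypothetical values).
for oat in (33.0, 35.0):
    print(f"OAT {oat:.0f} C -> density altitude ~{density_altitude_ft(0.0, oat):.0f} ft")
```

Near sea level, a 2 °C warm bias in the reported temperature shifts the computed density altitude by roughly 240 feet.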

Here is the aerial view showing placement:

Clearly, with its careful selection of locations and its triple-redundant, state-of-the-art aspirated air temperature sensors, the USCRN station platform is the best possible way to measure long-term trends in 2-meter surface air temperature. Unfortunately, the public never sees the temperature reports from it in NOAA's "State of the Climate" missives; instead, those rely on the antiquated and buggy COOP and GHCN surface networks and their highly biased, and then adjusted, data.
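As an aside on those triple-redundant sensors: with three aspirated thermometers, the readings can be cross-checked automatically. The sketch below shows the general idea (take the median, flag large disagreement); it is a generic illustration, not NOAA's actual published USCRN processing algorithm, and the 0.3 C tolerance is an assumed number.

```python
# Generic triple-redundancy check for three co-located temperature sensors.
# Illustrative only -- not NOAA's actual USCRN processing code.
from statistics import median

def combine_triplet(t1: float, t2: float, t3: float, tol_c: float = 0.3):
    """Return (value, flagged). Use the median as the robust estimate and flag
    the observation if the spread across sensors exceeds the tolerance."""
    readings = [t1, t2, t3]
    spread = max(readings) - min(readings)
    return median(readings), spread > tol_c

print(combine_triplet(21.02, 21.05, 21.03))  # consistent -> (21.03, False)
print(combine_triplet(21.02, 21.05, 22.40))  # one outlier -> (21.05, True)
```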
So, for this group of people to call for a worldwide USCRN-style temperature monitoring network is not only a step in the right direction, but a clear indication that, even though they won't publicly admit the existing COOP/USHCN-style networks worldwide are unreliable, uncertain, and "unfit for purpose," they are in fact endorsing the creation of a truly "fit for purpose" global system to monitor surface air temperature: one that won't be highly biased by location or sensor/equipment issues, and will have no need at all for adjustments.
I applaud the effort, and I'll get behind it, because doing so puts an end to the relevance of NASA GISS and HadCRUT, whose operators (Gavin Schmidt and Phil Jones) are some of the most biased, condescending, and outright snotty scientists the world has ever seen. They should not be gatekeepers for the data, and this will end their lock on that distinction. To Phil Jones's credit, he was a co-author of this new paper. Gavin Schmidt, predictably, was not.
This is something both climate skeptics and climate alarmists should be able to get behind and promote. More on that later.
Here's the paper (note that they reference my work via the 2011 Fall et al. paper):
Towards a global land surface climate fiducial reference measurements network
P. W. Thorne, H. J. Diamond, B. Goodison, S. Harrigan, Z. Hausfather, N. B. Ingleby, P. D. Jones, J. H. Lawrimore, D. H. Lister, A. Merlone, T. Oakley, M. Palecki, T. C. Peterson, M. de Podesta, C. Tassone, V. Venema, K. M. Willett
Abstract
There is overwhelming evidence that the climate system has warmed since the instigation of instrumental meteorological observations. The Fifth Assessment Report of the Intergovernmental Panel on Climate Change concluded that the evidence for warming was unequivocal. However, owing to imperfect measurements and ubiquitous changes in measurement networks and techniques, there remain uncertainties in many of the details of these historical changes. These uncertainties do not call into question the trend or overall magnitude of the changes in the global climate system. Rather, they act to make the picture less clear than it could be, particularly at the local scale where many decisions regarding adaptation choices will be required, both now and in the future. A set of high-quality long-term fiducial reference measurements of essential climate variables will enable future generations to make rigorous assessments of future climate change and variability, providing society with the best possible information to support future decisions. Here we propose that by implementing and maintaining a suitably stable and metrologically well-characterized global land surface climate fiducial reference measurements network, the present-day scientific community can bequeath to future generations a better set of observations. This will aid future adaptation decisions and help us to monitor and quantify the effectiveness of internationally agreed mitigation steps. This article provides the background, rationale, metrological principles, and practical considerations regarding what would be involved in such a network, and outlines the benefits which may accrue. The challenge, of course, is how to convert such a vision to a long-term sustainable capability providing the necessary well-characterized measurement series to the benefit of global science and future generations.
INTRODUCTION: HISTORICAL OBSERVATIONS, DATA CHALLENGES, AND HOMOGENIZATION
A suite of meteorological parameters has been measured using meteorological instrumentation for more than a century (e.g., Becker et al., 2013; Jones, 2016; Menne, Durre, Vose, Gleason, & Houston, 2012; Rennie et al., 2014; Willett et al., 2013, henceforth termed "historical observations"). Numerous analyses of these historical observations underpin much of our understanding of recent climatic changes and their causes (Hartmann et al., 2013). Taken together with measurements from satellites, weather balloons, and observations of changes in other relevant phenomena, these observational analyses underpin the Intergovernmental Panel on Climate Change conclusion that evidence of historical warming is "unequivocal" (Intergovernmental Panel on Climate Change, 2007, 2013).
Typically, individual station series have experienced changes in observing equipment and practices (Aguilar, Auer, Brunet, Peterson, & Wieringa, 2003; Brandsma & van der Meulen, 2008; Fall et al., 2011; Mekis & Vincent, 2011; Menne, Williams Jr., & Palecki, 2010; Parker, 1994; Sevruk, Ondrás, & Chvíla, 2009). In addition, station locations, observation times, instrumentation, and land use characteristics (including in some cases urbanization) have changed at many stations. Collectively, these changes affect the representativeness of individual station series, and particularly their long-term stability (Changnon & Kunkel, 2006; Hausfather et al., 2013; Karl, Williams Jr., Young, & Wendland, 1986; Quayle, Easterling, Karl, & Hughes, 1991). Metadata about changes are limited for many of the stations. These factors impact our ability to extract the full information content from historical observations of a broad range of essential climate variables (ECVs) (Bojinski et al., 2014). Many ECVs, such as precipitation, are extremely challenging to effectively monitor and analyse due to their restricted spatial and temporal scales and globally heterogeneous measurement approaches (Goodison, Louie, & Yang, 1998; Sevruk et al., 2009).
Changes in instrumentation were never intended to deliberately bias the climate record. Rather, the motivation was to either reduce costs and/or improve observations for the primary goal(s) of the networks, which was most often meteorological forecasting. The majority of changes have been localized and quasi-random in nature and so are amenable to statistical averaging of their effects. However, there have been regionally or globally systemic transitions specific to certain periods of time whose effect cannot be entirely ameliorated by averaging. Examples include:
- Early thermometers tended to be housed in polewards facing wall screens, or for tropical locales under thatched shelter roofs (Parker, 1994). By the early 20th century better radiation shielding and ventilation control using Stevenson screens became ubiquitous. In Europe, Böhm et al. (2010) have shown that pre-screen summer temperatures were about 0.5 °C too warm.
- In the most recent 30 or so years a transition to automated or semi-automated measurements has occurred, although this has been geographically heterogeneous.
- As highlighted in the recent World Meteorological Organization (WMO) SPICE intercomparison (http://www.wmo.int/pages/prog/www/IMOP/intercomparisons/SPICE/SPICE.html) and the previous intercomparison (Goodison et al., 1998), measuring solid precipitation remains a challenge. Instrument design, shielding, siting, and transition from manual to automatic all contribute to measurement error and bias and affect the achievable uncertainties in measurements of solid precipitation and snow on the ground.
- For humidity measurements, recent decades have seen a switch to capacitive relative humidity sensors from traditional wet- and dry-bulb psychrometers. This has resulted in a shift in error characteristics that is particularly significant in wetter conditions (Bell, Carroll, Beardmore, England, & Mander, 2017; Ingleby, Moore, Sloan, & Dunn, 2013).
As technology and observing practices evolve, future changes are inevitable. Imminent issues include the replacement of mercury-in-glass thermometers and the use of third party measurements arising from private entities, the general public, and non-National Met Service public sector activities.
From the perspective of climate science, the consequence of both random and more systematic effects is that almost invariably a post hoc statistical assessment of the homogeneity of historical records, informed by any available metadata, is required. Based on this analysis, adjustments must be applied to the data prior to use. Substantive efforts have been made to post-process the data to create homogeneous long-term records for multiple ECVs (Mekis & Vincent, 2011; Menne & Williams, 2009; Rohde et al., 2013; Willett et al., 2013, 2014; Yang, Kane, Zhang, Legates, & Goodison, 2005) at both regional and global scales (Hartmann et al., 2013). Such studies build upon decades of development of techniques to identify and adjust for breakpoints, for example, the work of Guy Callendar in the early 20th century (Hawkins & Jones, 2013). The uncertainty arising from homogenization using multiple methods for land surface air temperatures (LSAT) (Jones et al., 2012; Venema et al., 2012; Williams, Menne, & Thorne, 2012) is much too small to call into question the conclusion of decadal to centennial global-mean warming, and commensurate changes in a suite of related ECVs and indicators (Hartmann et al., 2013, their FAQ2.1). Evidence of this warming is supported by many lines of evidence, as well as modern reanalyses (Simmons et al., 2017).
The effects of inhomogeneities are stronger at the local and regional level, may be impacted by national practices complicating homogenization efforts, and are more challenging to remove for sparse networks (Aguilar et al., 2003; Lindau & Venema, 2016). The effects of inhomogeneities are also manifested more strongly in extremes than in the mean (e.g., Trewin, 2013) and are thus important for studies of changes in climatic extremes. State-of-the-art homogenization methods can only make modest improvements in the variability around the mean of daily temperature (Killick, 2016) and humidity data (Chimani et al., 2017).
In the future, it is reasonable to expect that observing networks will continue to evolve in response to the same stakeholder pressures that have led to historical changes. We can thus be reasonably confident that there will be changes in measurement technology and measuring practice. It is possible that such changes will prove difficult to homogenize and would thus threaten the continuity of existing data series. It is therefore appropriate to ask whether a different route is possible to follow for future observational strategies that may better meet climate needs, and serve to increase our confidence in records going forwards. Having set out the current status of data sets derived from ad hoc historical networks, in the remainder of this article, we propose the construction of a different kind of measurement network: a reference network whose primary mission is the establishment of a suite of long-term, stable, metrologically traceable, measurements for climate science.
…
Siting considerations
Each site will need to be large enough to house all instrumentation without adjacent instrumentation interfering with one another, with no shading or wind-blocking vegetation or localized topography, and at least 100 m from any artificial heat sources. Figure 2 provides a site schematic for USCRN stations that meets this goal. The siting should strive to adhere to Class 1 criteria detailed in guidance from the WMO Commission for Instruments and Methods of Observations (World Meteorological Organization, 2014, part I, chap. I). This serves to minimize representativity errors and associated uncertainties. Sites should be chosen in areas where changes in siting quality and land use, which may impact representativity, are least likely for the next century. The site and surrounding area should further be selected on the basis that its ownership is secure. Thus, site selection requires an excellent working and local knowledge of items such as land/site ownership proposed, geology, regional vegetation, and climate. As it cannot be guaranteed that siting shall remain secure over decades or centuries, sites need to be chosen so that a loss will not critically affect the data products derived from the network. A partial solution would be to replace lost stations with new stations with a period of overlap of several years (Diamond et al., 2013). It should be stressed that sites in the fiducial reference network do not have to be new sites and, indeed, there are significant benefits from enhancing the current measurement program at existing sites. Firstly, co-location with sites already undertaking fiducial reference measurements either for target ECVs or other ECVs, such as GRUAN or GCW would be desirable. Secondly, co-location with existing baseline sites that already have long records of several target ECVs has obvious climate monitoring, cost and operational benefits.

Siting considerations should be made with accessibility in mind both to better ensure uninterrupted operations and communications, and to enable both regular and unscheduled maintenance/calibration operations. If a power supply and/or wired telecommunication system is required then the site will need to provide an uninterrupted supply, and have additional redundancy in the form of a back-up generator or batteries. For many USCRN sites the power is locally generated via the use of a combination of solar, wind, and/or methane generator sources, and the GOES satellite data collection system provides one-way communication from all sites.
For a reference grade installation, an evaluated uncertainty value should be ascertained for representativeness effects which may differ synoptically and seasonally. Techniques and large-scale experiments for this kind of evaluation and characterization of the influences of the siting on the measured atmospheric parameters are currently in progress (Merlone et al., 2015).
Finally, if the global surface fiducial reference network ends up consisting of two or more distinct set-ups of instrumentation (section 4.1), there would be value in side-by-side operations of the different configurations in a subset of climatically distinct regions to ensure long-term comparability is assured (section 3). This could be a task for the identified super-sites in the network.
…
There are many possible metrics for determining the success of a global land surface fiducial reference climate network as it evolves, such as the number and distribution of fiducial reference climate stations or the percent of stations adhering to the strict reference climate criteria described in this article. However, in order to fully appreciate the significance of the proposed global climate surface fiducial reference network, we need to imagine ourselves in the position of scientists working in the latter part of the 21st century and beyond. However, not just scientists, but also politicians, civil servants, and citizens faced with potentially difficult choices in the face of a variable and changing climate. In this context, we need to act now with a view to fulfilling their requirements for having a solid historical context they can utilize to assist them making scientifically vetted decisions related to actions on climate adaptation. Therefore, we should care about this now because those future scientists, politicians, civil servants, and citizens will be—collectively—our children and grandchildren, and it is—to the best of our ability—our obligation to pass on to them the possibility to make decisions with the best possible data. Having left a legacy of a changing climate, this is the very least successive generations can expect from us in order to enable them to more precisely determine how the climate has changed.
Read the full open access paper here, well worth your time: http://onlinelibrary.wiley.com/doi/10.1002/joc.5458/full
h/t to Zeke Hausfather for notice of the paper. Zeke, unlike some of his co-authors, actually engages me with respect. Perhaps his influence will help them become not just civil servants, but civil people.
To add – you only have to look at the trend in executive pay or the wealth enjoyed by politicians to know that there is no longer any moral compass in both the private and public world….
Jeremy
As an old, retired CFO, I would strongly encourage you to go try to qualify for one of those executive jobs running a multi-billion dollar global corporation. Everybody KNOWS it’s easy…heck, some people even believe financial statements should be accurate to 2 decimal places.
However, I do agree anybody can be a politician.
Sorry, but as a top-tier-business-schooled, CFAed, buy-side analyst/investor weaned and educated by Buffett and other true long-term investors, I consider management stock options the very definition of dishonesty and thimblerigging. With the amount of money these folks are being paid, there is absolutely no reason these people can't acquire their stock the same way I do: by writing a check (and thereby sharing in BOTH the upside AND the downside).
Otherwise, management stock options are nothing more than a “heads I win, tails I don’t lose” asymmetrical shell game.
When the company does well, stock options get more valuable.
When the company does poorly stock options get less valuable, some become worthless.
Stock options are a viable means for tying the interests of an executive to the well being of the company.
Stock options that can’t be exercised until some time in the future help to make sure the executive is taking the long term interests of the company into account.
Just because you don’t understand or benefit from something is not evidence that something fraudulent is going on.
>>>>>> Mark W <<<<<<<<
I understand management stock options all too well (and so does Warren Buffett— which is why there are no management stock options at Berkshire Hathaway).
Managers who have real "skin in the game" behave very differently than those who don't.
I repeat: there is absolutely no reason managers can't acquire their stock the same way I do: by writing a check (and thereby sharing in BOTH the upside AND the downside). Without that "skin in the game," management stock options are an asymmetrical "heads I win, tails I don't lose" shell game.
Those with stock options share in both the upside and the downside, as the value of those options goes up and down along with the value of the stock.
If you can’t understand that, then your claim to understanding falls flat.
>>>>>>>>>>>Mark W<<<<<<<<<<<<
What's with all the sophistry and casuistry ?
How about the application of a little Occam's Razor and Keep It Simple Stupid (the well-known "KISS" rule) ?
Managers want upside? There's a very easy way to create a precise "identity of interest" between shareholders and managers. Write a check.
“Writing a check is what separates a conversation from a commitment.”
-Warren Edward Buffett
Otherwise, management stock options are an asymmetrical shell game of “heads, I win; tails, I don’t lose.” Tell us all about “reloading.” Tell us all about annual grants. Tell us all about the farce of valuing them. Anybody who closely examines the mathematics of Black-Scholes quickly realizes that it’s a joke.
There are very good reasons Berkshire Hathaway doesn’t employ them.
Perhaps they might start by including Ithaca (Cornell University) in NY State. One of the best, high quality, well maintained, and long running sites in NY.
It has had morning readings since at least 1947, and NOAA admit they have no info prior to that.
Yet GHCN have managed to totally corrupt the temperature record:
https://notalotofpeopleknowthat.wordpress.com/2018/01/26/tobs-at-ithaca/
The NOAA data sheet for Cornell is here. From it, the history of adjustments is thus:
The metadata for the site is here. And from it we find that there was indeed a station move on 1969-07-09, which moved it 0.7 miles East. And another move, it seems on 1948-05-01. It doesn’t say how far, but previously it was 1 mile from the PO, and it ended up 2.6 miles away, so looks like about a mile. That seems to account for the two main changes.
Are these adjustments made to the raw measurements and older data?
If so, this would indicate that ALL data from 1930 to 1942 has been adjusted downward (cooled) by 1C from the original measurements, and that all measurements since 1970 have had 0.6C added to them (warmed).
This would certainly look like an artificial warming trend of 1.5C was added into the anomaly data since 1930.
Or am I just interpreting this information incorrectly??
“This would certainly look like an artificial warming trend of 1.5C was added into the anomaly data since 1930”
It would reflect adjustment to the absolute values to that effect, yes. It sounds like the site moved twice, each a bit further out of town.
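To make the arithmetic behind this exchange concrete, here is a minimal sketch showing how step adjustments at breakpoints translate into a change in the fitted trend. The -1.0C and +0.6C steps and the breakpoint years are taken from the comment above as illustrative round numbers, not NOAA's official adjustment values for Ithaca.

```python
# How step adjustments at documented breakpoints change a fitted linear trend.
# Values are illustrative, taken from the comment above.
import numpy as np

years = np.arange(1930, 2018)
raw = np.zeros_like(years, dtype=float)      # flat "raw" series, for clarity

adjusted = raw.copy()
adjusted[years < 1948] -= 1.0                # cool the early record
adjusted[years >= 1970] += 0.6               # warm the recent record

def trend_per_century(y):
    slope = np.polyfit(years, y, 1)[0]       # deg C per year
    return slope * 100.0

print(f"raw trend:      {trend_per_century(raw):+.2f} C/century")
print(f"adjusted trend: {trend_per_century(adjusted):+.2f} C/century")
```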
Cool the past and warm the present to account for moves of about a mile each from the center of that powerful urban heat island effect of Ithaca, NY, lol.
Frankly, assuming Bryan A’s interpretation is correct, I am stunned at the data Nick appears to have provided.
Nick, the thing is that when a station moves, it is not just the move. It probably also gets a clean, whitewashed shelter after the move. Thus, the adjustment should not necessarily be only a step change, but a glide.
So, how to find a glide bias?
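One possible way: compare the station against a nearby reference and regress the difference series on time within each segment between documented changes; a statistically significant slope suggests a gradual drift (a "glide") rather than a pure step. A sketch with synthetic data, purely illustrative:

```python
# Look for a gradual ("glide") bias rather than a pure step by regressing the
# candidate-minus-reference difference series on time within one segment.
# Synthetic data; purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(240)                           # 20 years of monthly data
reference = rng.normal(0.0, 0.5, months.size)     # well-behaved neighbor
drift = 0.002 * months                            # slow drift in the candidate
candidate = reference + drift + rng.normal(0.0, 0.1, months.size)

diff = candidate - reference
slope_per_decade = np.polyfit(months, diff, 1)[0] * 120
print(f"estimated drift: {slope_per_decade:+.2f} C per decade")
```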
The remarkable thing about data “homogenization” as practiced by “climate science” is that its methods demand not only geophysical ignorance but also circular reasoning. The Ithaca example is emblematic of both vices: very modest changes of siting in a college town of 30K residents are used to justify major data adjustments throughout the entire record–bringing its trend into conformance with that at Binghamton, a city of 301K residents, 47km away.
Not surprisingly, the negative secular trends evident at Ithaca and other small towns, such as Elmira, throughout the region thus wind up being converted into positive trends. Such balderdash is commonplace in the absence of any realistic recognition of the corruptive effects of UHI and of station-maintenance issues, aided by the blind hubris that unknown non-climatic effects can be reliably removed via statistical massage.
Great idea, only thirty years overdue but better late than never.
“Great idea, only thirty years overdue but better late than never.”
Wasn’t Lamb calling for this even before that? But his successors ignored him. Why?
PS: Sherlock Holmes said, “Data, data, data! I can make no bricks without straw.”
The Oklahoma Mesonet has been around for about 20 years. Might could use its reports to check accuracy of the other stations in the area: http://www.mesonet.org/
The West Texas Mesonet has been around almost as long. The Kansas Mesonet is newer, but still has some good stuff. I check both from time to time depending on which way the wind is blowing.
Looking forward to your reports from the conference.
Looks like a response to political climate change by the laws of the conservation of career. I know right where the money for this project should come from – out of the budgets of the UN and the agencies that fund all of the inane climate change (cha-ching!) and gullible warming studies.
Latitude is right: what's the point if the data handlers of the new network just start post facto changing the info again? We would need honest people collecting and protecting this data. The current criminals involved in it need to go away. The obfuscators (we know who they are) running interference on blogs like this need to retire, too.
Andrew
Bottom line.
We don’t really know the past temperatures. Present temperatures are uncertain.
The foundation of CAGW and all its supporting models along with policies based on it are built on sand.
Time to move on.
CO2 is not going to “kill us all”.
PS Would it be asking too much for Al Gore to refund what he profited from the foundation-less scare?
Maybe give toward paying down the principal on the national debt?
Speaking for Al, yes, that would be asking too much…
Finally!! I am excited. Good data is a must. Let’s not let our disdain for global warming theory ruin this great news—good data is what has been asked for all along. Not adjusted, not interpolated.
It's interesting that one needs extreme accuracy for shuttle landings, but "adjusted" and "interpolated" are good enough to remake the entire energy and commerce sectors of the world.
You make a good point. I can't release an aircraft after a major repair without approved data that has gone through stress analysis, etc., but we are expected to accept that the world as we know it is going to end if we don't curtail CO2, based on data that has to be "made accurate" due to bad collection. Zeke and his colleagues need to keep doing what they are doing to make the best (no pun intended) of what data we have, but I prefer that we continue to use the energy sources that have served us well until technology makes them obsolete, instead of a government-mandated reduction in the quality of my life.
Good catch by DHR above!, and you beat me to it. The photos of USCRN sites they use to sell the USCRN concept show big valleys with no people and one lonely station in the middle. This station? Next to a runway? That’s what we have now, sites next to runways, heating plants, etc., as Anthony has documented. I’d like to hear Anthony’s take on what he actually saw around this station, maybe I’m missing something. But if this is a typical USCRN station, I’m very disappointed.
That said, I agree wholeheartedly with Anthony and this approach. We need a network of CRN quality stations, in CRN Level 1 quality sites, worldwide. USA should build and donate the stations, subject to siting approval. The budget for that is a drop in the bucket of the $20B+ US is already spending each year on climate, per the required annual Congressional report. I agree with the criticism that it will take awhile, but it will start giving accurate trend data right away.
An important early target would be to fill in the map of the large areas with no data, places like the Arctic and Antarctic, large swaths of Africa and South America, etc. Those gray areas in the maps on Goddard’s site. The statistical analyses seem to say that a small number of good sites can capture the trend over a large area, which is what we care about most. Not a big cost for a big benefit. How fun it would be to leave behind forever stale debates about kriging vs linear interpolation, etc.
But I sure don’t want these Global CRN (“GCRN”) stations sending their data to GISS or Hadley. They have lost everybody’s trust, deservedly so. I like Steve McI’s suggestion from a few years back that we already have government organizations that track large databases with apparent integrity and strong statistical tools and understanding, things like GDP, unemployment, census. I’d like to see the data, from USCRN too, go to some group like that whose budget doesn’t depend on US continuing to spend that $20B+ each year on climate.
My humble (and it is humble) opinion is that there is not enough money & other necessary resources to populate the earth & oceans with consistently accurate temperature sensors.
What measurement that will get done needs to be a different technology (satellites are a good start).
German meteorologist Klaus Hager has, for 3,144 days (8+ years), compared an old Stevenson-screened liquid-in-glass thermometer and a new Pt100 resistance thermometer in the same spot.
Result: the Pt100 runs 0.93 C warmer than the mercury thermometer.
He says that this is only applicable to this very spot; no general conclusions can be drawn.
http://www.hager-meteo.de/aktuelle%20berichte.htm
But: How many sites have 8 years overlap at an instrumentation change?
Very few, I dare say.
His point is that every sensor is different. And that includes both mercury and Pt100 thermometers.
That these two units are 0.93C apart is not evidence that any random two mercury and Pt100 thermometers are also going to be 0.93C apart.
The proper procedure when replacing one sensor with another is to place them side by side for a year or two, in order to plot how they differ under various weather conditions. Make a note of those differences, and only then decommission the old unit.
You might want to let the Australian BoM know of that procedure; seems they might have forgotten about it on occasion 😉
Forgotten, or deliberately ignored?
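For what it's worth, a minimal sketch of what that side-by-side overlap analysis could look like: compute the overall old-minus-new offset plus a month-by-month breakdown to see whether the difference is constant or seasonal. The file name and column names are hypothetical.

```python
# Minimal sketch of an overlap comparison between an old and a new sensor at
# the same site. The CSV layout and column names are hypothetical.
import pandas as pd

df = pd.read_csv("overlap_period.csv", parse_dates=["date"])  # hypothetical file
df["diff"] = df["t_old_c"] - df["t_new_c"]

print("overall mean offset (C):", round(df["diff"].mean(), 3))
# Seasonal dependence: mean, spread, and sample count by calendar month.
print(df.groupby(df["date"].dt.month)["diff"].agg(["mean", "std", "count"]))
```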
Are mercury thermometers accurate to 2 decimal places C? How do we square this with being different by 0.93C?
Maybe we need a 3rd thermometer…
No they aren’t. I should have picked up on that rather than just repeating the information given.
0.93°C? That is a huge discrepancy!
Ummm… A PT 100 PRT is reasonably accurate, but not accurate enough to claim a 0.93C difference without specifying the uncertainty.
Just for the probe, best accuracy is more than 0.05C over typical temperature ranges — if a 1/10 DIN probe is used. More than 0.5C if a Class B PT 100 is used. (Did he specify?)
But then you have uncertainties due to wiring, the meter, and more.
AND… given that he’s not measuring known temperatures (ie a calibration system), averaging doesn’t improve things at all.
So, overall his calibrated PT100 meter most likely provides measurements of about 0.93C +/- 0.1C or so… assuming a 1/10 DIN probe. If only Class B, 0.93C +/- 0.6C or so.
Without details like this, it’s pretty much mush.
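For reference, the standard IEC 60751 tolerance classes being discussed here can be evaluated directly; "1/10 DIN" is commonly specified as one tenth of the Class B tolerance. This covers the probe element only, before the wiring, meter, and siting effects mentioned above. A quick sketch:

```python
# IEC 60751 platinum RTD tolerance bands as a function of temperature (deg C).
# "1/10 DIN" is treated here as one tenth of the Class B tolerance.
def pt100_tolerance_c(t_c: float, grade: str) -> float:
    tolerances = {
        "class_a":   0.15 + 0.002 * abs(t_c),
        "class_b":   0.30 + 0.005 * abs(t_c),
        "tenth_din": (0.30 + 0.005 * abs(t_c)) / 10.0,
    }
    return tolerances[grade]

for t in (-30.0, 0.0, 20.0, 40.0):
    print(f"{t:+5.0f} C: Class B ±{pt100_tolerance_c(t, 'class_b'):.2f}  "
          f"1/10 DIN ±{pt100_tolerance_c(t, 'tenth_din'):.3f}")
```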
Unfortunately, they will continue to use the existing data from the just admitted poor quality data network, since “it’s all we’ve got”.
MarkW
Sounds kinda like you have problems with tree rings.
/sarc
Will the new climate reference network use satellites?
And actually measure temperature in places other than airports?
With no need to interpolate between sites?
Free of UHIE bias?
Giggle. Sorry that was just me musing about something which is so insanely unlikely as to be an impossible thought. Wash my brain out with Clorox.
For any climate data collection site it needs to be made clear: airports cannot be used. Not should not, not with caution, not with adjustments. Just plain cannot. Anyone who cannot see the reasons for this is unqualified to site climate data instrumentation.
For past climate data it needs to be made clear: airports cannot be used. Not should not, not with caution, not with adjustments. Just plain cannot. Anyone who cannot see the reasons for this is unqualified to use climate data.
I think airports were used because the system was already in place, not for any "Global" data, but to provide data for the planes landing or taking off from that little spot on the globe.
The numbers were useful locally. They’ve been used globally.
That’s a point that needs to be driven home. The current sensor network was never designed to measure climate. The equipment and quality controls for such a system were never designed in.
One thing I've always been taught in science is that you must use your instruments as they were designed to be used.
Using them outside their design constraints will always contaminate your data, usually in ways that can't be predicted or compensated for.
Isn’t the whole purpose of using anomalies, instead of actual temps, to keep site moves, equipment changes, etc., from affecting long term readings? Homogenization shouldn’t be coming into play.
IIRC, the computerized homogenization adjustments made by GISS are done daily, and not just when major changes to monitoring sites occur.
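On the anomaly question above: subtracting a station's own climatology removes a constant offset, but a step change partway through the record carries straight through into the anomaly series, which is what homogenization is meant to address. A small sketch with synthetic data, purely illustrative:

```python
# Anomalies subtract a site's own climatology, so a constant offset drops out,
# but a step change partway through the record survives into the anomaly series.
# Synthetic annual data, illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n_years = 60
temps = 10.0 + rng.normal(0.0, 0.3, n_years)
temps[30:] += 0.8                      # a station move introduces a warm step

baseline = temps[:30].mean()           # e.g. a 30-year reference period
anomalies = temps - baseline

print("mean anomaly, first 30 yr:", round(anomalies[:30].mean(), 2))
print("mean anomaly, last 30 yr: ", round(anomalies[30:].mean(), 2))
# The ~0.8 C step is still present: anomalies alone do not fix inhomogeneities.
```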
We have some well sited historical sites in rural areas that have a long history. They generally show little warming over the last century. link
I worry that this new standard will be an excuse to decommission those sites. That would deprive us of a long unbroken record, resistant to adjustments.
Your link is to US stations. USCRN has been around for fifteen years, and hasn’t been used as an excuse for decommissioning sites. In fact, most non-CRN sites are there for reasons unrelated to climate research, and those reasons won’t go away.
There is indeed a worrying trend in GHCN to maintain far fewer CURRENT stations in it than previously, only 62 in Australia, only around 10 in the UK. That is enough if those stations don’t break or change, but nowhere near enough to do a proper job of … homogenisation, for recent times.
Also, much so-called unadjusted data in GHCN is nothing of the sort, it contains obvious errors (Irish stations in 2015 for example), and differs from the source data in some cases, often with inconsistency between monthly versions 3 and 4.
Somebody needs to do a top-down design of the system for storing RAW data.
” GHCN to maintain far fewer CURRENT stations in it than previously”
GHCN didn’t maintain more stations previously. Data before 1997 is from archives. A whole dataset can be read in at once. Data since is maintained – ie updated monthly, and hopefully within days. That is a much more demanding requirement.
“Somebody needs to do a top-down design of the system for storing RAW data.”
Met offices do that. The international system is that of CLIMAT forms, submitted monthly by those met offices.
OK, GHCN is full of archived data, though some of it can only be described as padding (very short records around 1960/70), excellent for homogenisation in the 20th century, but for reasons unknown the number of stations currently reporting is way too small from some countries, IMHO.
“Changes in instrumentation were never intended to deliberately bias the climate record. ”
Maybe not, but the adjustments were.
I remember reading your 2009 article when you first published it, and another talking about how you put a thermometer on your car and drove it from the center of town to the edge of town – and back again – to verify the UHI effect.
Of course it makes sense for either side of the Global Warming argument to want accurate measurements – measurements that don’t have to be adjusted year after year for some unexplained reason. This looks like a step in that direction.
Way to go, Anthony!
“These uncertainties do not call into question the trend or overall magnitude of the changes in the global climate system.”
To me, that's like saying, "My data contains uncertainties and errors, but the conclusions I draw from that data are correct and should not be questioned." Doesn't the magnitude of those uncertainties play a role? Do we even know what that magnitude is? If the uncertainties are few, the general trend may be determined, but the magnitude of the changes in the global climate system has to be questioned. The greatest uncertainty concerning a global average temperature has to do with the lack of data from large areas of the world. Local climate, just miles apart, can vary a great deal. So it's not just the uncertainties in the temperature data we have that are the problem. It's the lack of temperature data from vast areas of the world that brings the greatest uncertainty to the magnitude of global climate changes over time. Estimating temperatures over those vast areas just doesn't cut it.
Those de rigueur genuflections to the climatocracy sound exactly like the sort of qualifications used by Enlightenment scholars to protect themselves when science started blowing holes in the orthodoxy of the day.
Congratulations, Anthony! Thanks for all your hard work. When doing nothing favours warming temperature records from bitumen, concrete, jet exhaust, cars, etc., they are happy to do nothing, as it suits the Warmista Narrative. When, on the rare occasion, they need accurate results for rocketry, they leap into action.
When I first saw your work detailing the “How Not to Measure Temperature” failures of the system, and the resulting lousy quality of the data, it turned me from a believer into a skeptic. I’ve been reading your blog ever since.
“Now, some of the very same people who have scathingly criticized my efforts and the efforts of others to bring these weaknesses to the attention of the scientific community have essentially done an about-face…”
I have to wonder about the motivation. Could it be that they are now pointing out the weaknesses and uncertainties in the data as a hedge in case global temperatures do not rise as fast as they projected? If we enter another “hiatus” period, or if temperatures actually drop, they will need an excuse to explain it. I can see them saying something like, “The lack of warming on a local basis does not mean that global climate change is not happening. The planet is still warming. It’s just that the data we have is not sufficient to accurately show what is happening on a global basis.”
Unlike other scares in the past, they will never give up on climate change. The hysteria about climate change may enter hiatus from time to time, but as soon as there is a heat wave, a major flood or drought, a deadly hurricane, or other extreme weather, it will make a comeback. All we can hope for is that the majority of people will eventually get sick of all the hype and will choose to ignore it.
The “global warming” media freak show was never about climate change. The UN/IPCC was always about the globalization agenda under the environmental climate change smoke screen. This fact has been admitted openly by various apparatchiks over the years. Then the money making aspects became exploited by ubiquitous money grubbers. I’m sure there are a few street vendors trying to sell gas masks with CO2 scrubbers at crazy prices to the gullible.
“I’m sure there are a few street vendors trying to sell gas masks with CO2 scrubbers at crazy prices to the gullible.”
An interesting concept…would the gas masks scrub the inhaled CO2 or the exhaled CO2?
Scrubbing? I thought they were sealing the mask off so the CO2 couldn’t escape.
Interesting that the almost 8 billion people on the planet are collectively responsible, in a year, for 1 ppm of CO2 being put into the atmosphere. Of course, that CO2 was withdrawn out of the atmosphere by the plants, so we humans get a pass on that one from the AGW people. Good thing too, or else there would be a call for a culling of the human race by the alarmists. I am sorry to say that after reviewing more and more of these climate study "scientific" reports, I am increasingly getting pissed off at this whole AGW fiasco. The reports are not worthy of high school science projects, never mind being written by PhDs. I keep referring to one report that divided up the world's land areas into two categories, "dry lands" and "wet lands", and tried to convince the reader that different physics mechanisms were acting on the two types of land. Simple garbage. Yes, folks, a PhD was responsible for that.
On a different note, we need more CO2 in the atmosphere, not less. If it turns out that we are indeed warming the planet by slightly less than 1 C every century, I would gladly take the increase in temperature, being as I live in a cold climate. The plants will love it too. Paying the price of carbon taxes and cap-and-trade limits and the doubling and tripling of my electricity prices for a failed vision of unreliable green energy is making my blood boil. THIS IS WAR.
The expensive climate reference network sensors provide actual data that represent local measurements over long periods of time because the location of sensors follows a scientifically justified methodology.
Why build such a network unless the previous system was inadequate, or known to be faulty?
There are a very few older surface weather stations that do have a history of good siting and maintenance. All of those stations must be located in rural areas well away from any contaminating influences.
Those stations that follow the rules show no significant warming. There is no point in mixing bad stations in with the good. Bad data must always be rejected, because it isn’t data, it’s garbage.
Here is a starting point for those interested in siting criteria for the USCRN stations.
ftp://ftp.ncdc.noaa.gov/pub/data/uscrn/site_info/CRNFY02SiteSelectionTask.pdf
Using Google Maps, it appears that the temperature sensors for the Titusville 7E station are within 60 meters of the concrete parking pad with the shuttle mockup, and 90 meters from the edge of the runway. That may be a violation of the Class 1 siting criteria for USCRN stations.
Class 1: Flat on horizontal ground surrounded by a clear surface with a slope below 1/3 (<19 degrees). Grass/low vegetation ground cover <10 cm high. Sensors located at least 100 meters (m) from artificial heating or reflecting surfaces, such as buildings, concrete surfaces, and parking lots.
Class 2: Same as Class 1 with the following differences. Surrounding Vegetation 5 degrees
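For anyone wanting to repeat the distance check above from coordinates rather than by eyeballing the map, here is a haversine sketch; the coordinates below are placeholders, not the actual Titusville 7E sensor or runway locations.

```python
# Great-circle (haversine) distance between two lat/lon points, handy for
# checking the 100 m Class 1 siting criterion from map coordinates.
# The coordinates below are placeholders only.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Distance in meters between two points (spherical-earth approximation)."""
    r = 6_371_000.0
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * r * asin(sqrt(a))

sensor = (28.6150, -80.6940)       # placeholder coordinates
runway_edge = (28.6155, -80.6932)  # placeholder coordinates
print(f"{haversine_m(*sensor, *runway_edge):.0f} m")
```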
A bunch of stations replaced/moved/removed/changed/added?
A post-facto adjuster’s paradise.
My trend line is squiggling with excitement.
Andrew
Good article. The obvious truth is that 'scientists' have a plethora of temperature measurements to play with, and play they do. Functionally, none of these measurements are scientifically valid, since heat sinks like airport runways, city centres, farms, factories, forests, freeways, insolation, etc. confound the raw 'data' beyond repair. No matter. There's money to be made by scaring the hoi polloi with doomsday predictions. And that's what's happening today.
I have read that fan-aspirated shields draw air across a temperature sensor to improve the accuracy of air temperature measurements. Many years ago (~50), when doing heat balances on power plants, the heat of pumping had to be factored into the analysis. I realize this is a small amount of added heat; however, it seems to me that it is relatively the same percentage as what I was dealing with (I think). Is this of any concern?
I’m pretty sure the fan is down wind from the sensor.
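The worst-case warming from the fan can be bounded the same way as the pumping-heat term mentioned above: ΔT = P / (ρ · Q · cp). A rough sketch with assumed, order-of-magnitude numbers for a small aspirated shield, not the specs of any particular instrument:

```python
# Upper bound on the warming an aspiration fan could add to the sampled air:
# delta_T = fan_power / (air_density * volume_flow * specific_heat).
# Power and flow figures below are assumed, order-of-magnitude values.
fan_power_w = 1.0          # assumed motor power dissipated into the airstream
volume_flow_m3_s = 0.010   # assumed ~10 L/s through the shield
rho_air = 1.2              # kg/m^3 near sea level
cp_air = 1005.0            # J/(kg K)

delta_t = fan_power_w / (rho_air * volume_flow_m3_s * cp_air)
print(f"worst-case fan self-heating: {delta_t:.3f} K")
# Roughly 0.08 K even if all the fan's heat reached the sensor; with the fan
# downstream of the sensor, the sampled air has not passed the motor at all.
```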