From the International Journal of Climatology and the “if you can’t beat ’em, join ’em” department.
To me, this feels like vindication. For years, I’ve been pointing out just how bad the U.S. and global surface monitoring networks have been. We’ve seen stations sited on pavement, stations at airports collecting jet exhaust, failing instruments reading high, and sensors placed right next to the heat output of air conditioning systems.



We’ve been told it “doesn’t matter” and that “the surface monitoring network is producing good data”. Behind the scenes, though, we learned that NOAA/NCDC scrambled when we reported this, quietly closing some of the worst stations while making feverish and desperate PR pitches to prop up the narrative of “good data”.
Read my report from 2009 on the state of the U.S. Historical Climatology Network:
That 2009 report (published with the help of the Heartland Institute) spurred a firestorm of criticism, and an investigation and report by the U.S. Office of the Inspector General, which wrote:
Lack of oversight, non-compliance and a lax review process for the State Department’s global climate change programs have led the Office of the Inspector General (OIG) to conclude that program data “cannot be consistently relied upon by decision-makers” and it cannot be ensured “that Federal funds were being spent in an appropriate manner.”
Read it all here: https://wattsupwiththat.com/2014/02/07/report-from-the-office-of-the-inspector-general-global-climate-change-program-data-may-be-unreliable/
More recently, I presented at AGU15: Watts at #AGU15 – The quality of temperature station siting matters for temperature trends
And showed just how bad the old surface network is in two graphs:


Now, some of the very same people who have scathingly criticized my efforts, and the efforts of others, to bring these weaknesses to the attention of the scientific community have essentially done an about-face and authored a paper calling for a new global climate monitoring network modeled on the United States Climate Reference Network (USCRN), which I have endorsed as the only suitable way to measure surface temperature and extract long-term temperature trends.
During my recent trip to Kennedy Space Center (thanks to generous donations from WUWT readers), I spotted an old-style airport ASOS weather station right next to one of the new USCRN stations at the Shuttle Landing Facility runway, presumably placed there to study the difference between the two. Or, possibly, they just couldn’t trust the ASOS station when they most needed it: during a Shuttle landing, where accurate temperature is of critical importance in calculating density altitude, and therefore the glide ratio. Comparing the data between the two is something I hope to do in a future post.

Here is the aerial view showing placement:

Clearly, with its careful selection of locations and its triple-redundant, state-of-the-art aspirated air temperature sensors, the USCRN station platform is the best possible way to measure long-term trends in 2-meter surface air temperature. Unfortunately, the public never sees the temperature reports from it in NOAA’s “State of the Climate” missives; those rely instead on the antiquated and buggy COOP and GHCN surface networks and their highly biased, and then adjusted, data.
So, for this group of people to call for a worldwide USCRN-style temperature monitoring network is not only a step in the right direction, but a clear indication that, even though they won’t publicly admit that the unreliable and uncertain existing COOP/USHCN-style networks worldwide are “unfit for purpose”, they are in fact endorsing the creation of a truly “fit for purpose” global system to monitor surface air temperature: one that won’t be highly biased by location or sensor/equipment issues, and will have no need at all for adjustments.
I applaud the effort, and I’ll get behind it, because by doing so, it puts an end to the relevance of NASA GISS and HadCRUT, whose operators (Gavin Schmidt and Phil Jones) are some of the most biased, condescending, and outright snotty scientists the world has ever seen. They should not be gatekeepers for the data, and this will end their lock on that distinction. To Phil Jones’s credit, he was a co-author of this new paper. Gavin Schmidt, predictably, was not.
This is something both climate skeptics and climate alarmists should be able to get behind and promote. More on that later.
Here’s the paper (note they reference my work in the 2011 Fall et al. paper):
Towards a global land surface climate fiducial reference measurements network
P. W. Thorne, H. J. Diamond, B. Goodison, S. Harrigan, Z. Hausfather, N. B. Ingleby, P. D. Jones, J. H. Lawrimore, D. H. Lister, A. Merlone, T. Oakley, M. Palecki, T. C. Peterson, M. de Podesta, C. Tassone, V. Venema, K. M. Willett
Abstract
There is overwhelming evidence that the climate system has warmed since the instigation of instrumental meteorological observations. The Fifth Assessment Report of the Intergovernmental Panel on Climate Change concluded that the evidence for warming was unequivocal. However, owing to imperfect measurements and ubiquitous changes in measurement networks and techniques, there remain uncertainties in many of the details of these historical changes. These uncertainties do not call into question the trend or overall magnitude of the changes in the global climate system. Rather, they act to make the picture less clear than it could be, particularly at the local scale where many decisions regarding adaptation choices will be required, both now and in the future. A set of high-quality long-term fiducial reference measurements of essential climate variables will enable future generations to make rigorous assessments of future climate change and variability, providing society with the best possible information to support future decisions. Here we propose that by implementing and maintaining a suitably stable and metrologically well-characterized global land surface climate fiducial reference measurements network, the present-day scientific community can bequeath to future generations a better set of observations. This will aid future adaptation decisions and help us to monitor and quantify the effectiveness of internationally agreed mitigation steps. This article provides the background, rationale, metrological principles, and practical considerations regarding what would be involved in such a network, and outlines the benefits which may accrue. The challenge, of course, is how to convert such a vision to a long-term sustainable capability providing the necessary well-characterized measurement series to the benefit of global science and future generations.
INTRODUCTION: HISTORICAL OBSERVATIONS, DATA CHALLENGES, AND HOMOGENIZATION
A suite of meteorological parameters has been measured using meteorological instrumentation for more than a century (e.g., Becker et al., 2013; Jones, 2016; Menne, Durre, Vose, Gleason, & Houston, 2012; Rennie et al., 2014; Willett et al., 2013, henceforth termed “historical observations”). Numerous analyses of these historical observations underpin much of our understanding of recent climatic changes and their causes (Hartmann et al., 2013). Taken together with measurements from satellites, weather balloons, and observations of changes in other relevant phenomena, these observational analyses underpin the Intergovernmental Panel on Climate Change conclusion that evidence of historical warming is “unequivocal” (Intergovernmental Panel on Climate Change, 2007, 2013).
Typically, individual station series have experienced changes in observing equipment and practices (Aguilar, Auer, Brunet, Peterson, & Wieringa, 2003; Brandsma & van der Meulen, 2008; Fall et al., 2011; Mekis & Vincent, 2011; Menne, Williams Jr., & Palecki, 2010; Parker, 1994; Sevruk, Ondrás, & Chvíla, 2009). In addition, station locations, observation times, instrumentation, and land use characteristics (including in some cases urbanization) have changed at many stations. Collectively, these changes affect the representativeness of individual station series, and particularly their long-term stability (Changnon & Kunkel, 2006; Hausfather et al., 2013; Karl, Williams Jr., Young, & Wendland, 1986; Quayle, Easterling, Karl, & Hughes, 1991). Metadata about changes are limited for many of the stations. These factors impact our ability to extract the full information content from historical observations of a broad range of essential climate variables (ECVs) (Bojinski et al., 2014). Many ECVs, such as precipitation, are extremely challenging to effectively monitor and analyse due to their restricted spatial and temporal scales and globally heterogeneous measurement approaches (Goodison, Louie, & Yang, 1998; Sevruk et al., 2009).
Changes in instrumentation were never intended to deliberately bias the climate record. Rather, the motivation was to either reduce costs and/or improve observations for the primary goal(s) of the networks, which was most often meteorological forecasting. The majority of changes have been localized and quasi-random in nature and so are amenable to statistical averaging of their effects. However, there have been regionally or globally systemic transitions specific to certain periods of time whose effect cannot be entirely ameliorated by averaging. Examples include:
- Early thermometers tended to be housed in polewards facing wall screens, or for tropical locales under thatched shelter roofs (Parker, 1994). By the early 20th century better radiation shielding and ventilation control using Stevenson screens became ubiquitous. In Europe, Böhm et al. (2010) have shown that pre-screen summer temperatures were about 0.5 °C too warm.
- In the most recent 30 or so years a transition to automated or semi-automated measurements has occurred, although this has been geographically heterogeneous.
- As highlighted in the recent World Meteorological Organization (WMO) SPICE intercomparison (http://www.wmo.int/pages/prog/www/IMOP/intercomparisons/SPICE/SPICE.html) and the previous intercomparison (Goodison et al., 1998), measuring solid precipitation remains a challenge. Instrument design, shielding, siting, and transition from manual to automatic all contribute to measurement error and bias and affect the achievable uncertainties in measurements of solid precipitation and snow on the ground.
- For humidity measurements, recent decades have seen a switch to capacitive relative humidity sensors from traditional wet- and dry-bulb psychrometers. This has resulted in a shift in error characteristics that is particularly significant in wetter conditions (Bell, Carroll, Beardmore, England, & Mander, 2017; Ingleby, Moore, Sloan, & Dunn, 2013).
As technology and observing practices evolve, future changes are inevitable. Imminent issues include the replacement of mercury-in-glass thermometers and the use of third party measurements arising from private entities, the general public, and non-National Met Service public sector activities.
From the perspective of climate science, the consequence of both random and more systematic effects is that almost invariably a post hoc statistical assessment of the homogeneity of historical records, informed by any available metadata, is required. Based on this analysis, adjustments must be applied to the data prior to use. Substantive efforts have been made to post-process the data to create homogeneous long-term records for multiple ECVs (Mekis & Vincent, 2011; Menne & Williams, 2009; Rohde et al., 2013; Willett et al., 2013, 2014; Yang, Kane, Zhang, Legates, & Goodison, 2005) at both regional and global scales (Hartmann et al., 2013). Such studies build upon decades of development of techniques to identify and adjust for breakpoints, for example, the work of Guy Callendar in the early 20th century (Hawkins & Jones, 2013). The uncertainty arising from homogenization using multiple methods for land surface air temperatures (LSAT) (Jones et al., 2012; Venema et al., 2012; Williams, Menne, & Thorne, 2012) is much too small to call into question the conclusion of decadal to centennial global-mean warming, and commensurate changes in a suite of related ECVs and indicators (Hartmann et al., 2013, their FAQ2.1). Evidence of this warming is supported by many lines of evidence, as well as modern reanalyses (Simmons et al., 2017).
The effects of inhomogeneities are stronger at the local and regional level, may be impacted by national practices complicating homogenization efforts, and are more challenging to remove for sparse networks (Aguilar et al., 2003; Lindau & Venema, 2016). The effects of inhomogeneities are also manifested more strongly in extremes than in the mean (e.g., Trewin, 2013) and are thus important for studies of changes in climatic extremes. State-of-the art homogenization methods can only make modest improvements in the variability around the mean of daily temperature (Killick, 2016) and humidity data (Chimani et al., 2017).
In the future, it is reasonable to expect that observing networks will continue to evolve in response to the same stakeholder pressures that have led to historical changes. We can thus be reasonably confident that there will be changes in measurement technology and measuring practice. It is possible that such changes will prove difficult to homogenize and would thus threaten the continuity of existing data series. It is therefore appropriate to ask whether a different route is possible to follow for future observational strategies that may better meet climate needs, and serve to increase our confidence in records going forwards. Having set out the current status of data sets derived from ad hoc historical networks, in the remainder of this article, we propose the construction of a different kind of measurement network: a reference network whose primary mission is the establishment of a suite of long-term, stable, metrologically traceable, measurements for climate science.
…
Siting considerations
Each site will need to be large enough to house all instrumentation without adjacent instrumentation interfering with one another, with no shading or wind-blocking vegetation or localized topography, and at least 100 m from any artificial heat sources. Figure 2 provides a site schematic for USCRN stations that meets this goal. The siting should strive to adhere to Class 1 criteria detailed in guidance from the WMO Commission for Instruments and Methods of Observations (World Meteorological Organization, 2014, part I, chap. I). This serves to minimize representativity errors and associated uncertainties. Sites should be chosen in areas where changes in siting quality and land use, which may impact representativity, are least likely for the next century. The site and surrounding area should further be selected on the basis that its ownership is secure. Thus, site selection requires an excellent working and local knowledge of items such as land/site ownership proposed, geology, regional vegetation, and climate. As it cannot be guaranteed that siting shall remain secure over decades or centuries, sites need to be chosen so that a loss will not critically affect the data products derived from the network. A partial solution would be to replace lost stations with new stations with a period of overlap of several years (Diamond et al., 2013). It should be stressed that sites in the fiducial reference network do not have to be new sites and, indeed, there are significant benefits from enhancing the current measurement program at existing sites. Firstly, co-location with sites already undertaking fiducial reference measurements either for target ECVs or other ECVs, such as GRUAN or GCW would be desirable. Secondly, co-location with existing baseline sites that already have long records of several target ECVs has obvious climate monitoring, cost and operational benefits.

Siting considerations should be made with accessibility in mind both to better ensure uninterrupted operations and communications, and to enable both regular and unscheduled maintenance/calibration operations. If a power supply and/or wired telecommunication system is required then the site will need to provide an uninterrupted supply, and have additional redundancy in the form of a back-up generator or batteries. For many USCRN sites the power is locally generated via the use of a combination of solar, wind, and/or methane generator sources, and the GOES satellite data collection system provides one-way communication from all sites.
For a reference grade installation, an evaluated uncertainty value should be ascertained for representativeness effects which may differ synoptically and seasonally. Techniques and large-scale experiments for this kind of evaluation and characterization of the influences of the siting on the measured atmospheric parameters are currently in progress (Merlone et al., 2015).
Finally, if the global surface fiducial reference network ends up consisting of two or more distinct set-ups of instrumentation (section 4.1), there would be value in side-by-side operations of the different configurations in a subset of climatically distinct regions to ensure long-term comparability is assured (section 3). This could be a task for the identified super-sites in the network.
…
There are many possible metrics for determining the success of a global land surface fiducial reference climate network as it evolves, such as the number and distribution of fiducial reference climate stations or the percent of stations adhering to the strict reference climate criteria described in this article. However, in order to fully appreciate the significance of the proposed global climate surface fiducial reference network, we need to imagine ourselves in the position of scientists working in the latter part of the 21st century and beyond. However, not just scientists, but also politicians, civil servants, and citizens faced with potentially difficult choices in the face of a variable and changing climate. In this context, we need to act now with a view to fulfilling their requirements for having a solid historical context they can utilize to assist them making scientifically vetted decisions related to actions on climate adaptation. Therefore, we should care about this now because those future scientists, politicians, civil servants, and citizens will be—collectively—our children and grandchildren, and it is—to the best of our ability—our obligation to pass on to them the possibility to make decisions with the best possible data. Having left a legacy of a changing climate, this is the very least successive generations can expect from us in order to enable them to more precisely determine how the climate has changed.
Read the full open access paper here, well worth your time: http://onlinelibrary.wiley.com/doi/10.1002/joc.5458/full
h/t to Zeke Hausfather for notice of the paper. Zeke, unlike some of his co-authors, actually engages me with respect. Perhaps his influence will help them become not just civil servants, but civil people.
Great work, I have to think that your work influences anyone who is seeking the truth. Likely someone like Scott Pruitt is asking the right questions based on your work.
Amen!
Climate scientists have already developed an adjustment methodology for existing data that produces a robust correlation to increases in atmospheric CO2.
We know what’s wrong with the existing measurements and how to fix them in conformance with the CO2 hypothesis. What else do we need? Why should we destroy the existing adjustment jobs and businesses?
How can we test such a methodology, given the thousands of data points and therefore the numerous degrees of freedom? The various influences of all the related systematic errors would be non-linear and multivariate. Saying “oh look, we finally massaged the statistical analysis to match this pattern of CO2!” is not science, nor scientific in method. Nice try; might want to consult a high-school-level textbook before replying.
Zander, did I forget to use /sarc tags again?
And did you click on the hyperlink above to Tony Heller’s graph?
The sheer absurdity that the land station set was ever ‘fit for purpose’ for determining a global temperature trend, given the massive urbanisation over the past century or so, let alone the local UHI effects illustrated by the pics in the article, is utterly bizarre to my mind. Without accounting for the inherent trend of the heat island effects, it is completely speculative whether there is any meaningful ‘trend’. Mannian reconstruction does not even rate as speculative, IMO.
So now that ground stations are admittedly (by everyone) inaccurate will the satellite measurements be taken more seriously?
Satellites are THE global climate reference network
Satellites don’t take 2m level measurements, and that’s where human beings, their crops, and their meat animals live. IF adaptation is required, you need the information where you live, not the surface to thousands of meters.
D. J. Hawkins March 2, 2018 at 2:30 pm
Satellites don’t take 2m level measurements, and that’s where human beings, their crops, and their meat animals live. IF adaptation is required, you need the information where you live, not the surface to thousands of meters.
—–
I happen to live in the US, and the USCRN show no material warming since its inception. So by your reasoning, Americans have nothing to worry about. I couldn’t agree more.
@Reg;
Precisely.
“Satellites don’t take 2m level measurements, and that’s where human beings, their crops, and their meat animals live.”
But that’s not where AGW theory can be best falsified. The higher levels of the troposphere are where there is less noise, and where AGW theory makes its strongest predictions.
Also, the temperatures 2 meters above the oceans can’t be taken. So surface-level based measurements aren’t widespread enough.
I’m considering adaptation. It was -16°C out last time I checked, and woollen socks would be nice. My toes are cold-numb, and that’s not good for blood pressure.
I’m pretty certain I don’t need a certified network for adaptation in this case. The question is not in which decade I will be able to take the socks off.
The same applies to sea level. If my socks get wet, I certainly don’t need a satellite telling me how much the distance of the mean sea level from the center of the Earth has increased. It is totally irrelevant.
There are cases, where precise information on trend is useful, but it is hardly the reason why we adapt. After all, we tend to not plan even for the past, as Mosher once mentioned.
Congrats Anthony! And thank you for all your hard work.
Who new? For software, the saying goes garbage in…garbage out and it applies to climate models too
Revised “old saying”:
Garbage in…garbage model…garbage out
Who Knew.
Everybody who reads WUWT? new long ago. This article should be listed under the “what took them so long?” file.
“Knew”. Long shift, no coffee.
Zeke is to be commended for being respectful. On the other hand, what a sad commentary that someone needs to be singled out for mention just because they aren’t being a complete d*ck.
Being respectful tells so much about a person, when observed to be systematic.
Being not respectful like some do in social media also tells a lot.
I think I could do better myself.
Could this be the rebirth of climatology as a science?
Going back to observations and seeking to improve them – rather than refining unvalidated models – sounds like real science.
Will the raw data be available to the general public via the internet or will only manipulated data be occasionally published?
Hi Anthony,
While we may disagree to some extent on the magnitude of inhomogeneities in existing monitoring stations (and the ability of statistical homogenization approaches to address them), I think we can all agree on the need for better monitoring going forward, particularly in areas of the world with much sparser coverage than the US and Europe.
One important benefit of a global land reference network would be to allow us to test how well our homogenization approaches are doing in correcting for biases in the larger weather station networks (e.g. how the raw and adjusted data compare to data from nearby reference network stations). There will always be projects looking at local or regional temperatures that will benefit from denser (if more inhomogeneous) weather networks, even in a world that has a global reference network. The global reference network will play an important role in helping ensure that systematic biases associated with changes in local station measurement techniques or conditions aren’t impacting our results.
Why is it so hard to just admit, in plain language, that you change temperature data?
Zeke does not ‘change data’. How much bias there is in the homogenised data, I don’t know. The famous correlation à la Heller is of course an interesting phenomenon.
Tony Heller has done extensive research into the “homogenization approaches” (data changing) and his conclusions are much more persuasive than what BEST or gov funded agencies are concluding
Well now, the literal fact is, ….. iffen they changed all of the Surface Measuring Stations to incorporate “liquid immersed” temperature sensors …. then there would be no further need or use for “homogenization approaches” (data changing) because of random “noise” of short term temperature “spikes” or “drops” (decreases).
The “liquid” would automatically function as a “temperature averaging” function and thus the temperature sensors would be providing actual factual near-surface air temperatures.
Any extra mass around the sensor would serve that function. It doesn’t have to be liquid, which could leak.
MarkW, ….. I was thinking said “mass” surrounding the temperature sensor should be a liquid, ….. such as “-40F anti-freeze”, ……. because it is, per se, a natural “conductor” of heat energy and not an “insulator”, …. with said “heat” transferring rather quickly and evenly throughout the liquid.
And there should be two (2) temperature sensors in each unit so that their outputs can be compared to each other to ensure the unit is functioning correctly.
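For what it’s worth, here is a minimal sketch (Python, with invented numbers) of the idea being kicked around in this sub-thread: a sensor surrounded by thermal mass behaves like a first-order low-pass filter that smooths short-lived spikes, and a second sensor lets you flag a failing unit by simple comparison. The time constant and tolerance below are illustrative assumptions, not USCRN specifications.

```python
import math
import random

def damped_reading(prev_reading, true_air_temp, dt, tau=300.0):
    """One step of a first-order lag: the sensed value relaxes toward the air
    temperature with time constant tau, so brief spikes are smoothed out."""
    alpha = 1.0 - math.exp(-dt / tau)
    return prev_reading + alpha * (true_air_temp - prev_reading)

def sensors_agree(sensor_a, sensor_b, tolerance=0.3):
    """Redundant-sensor sanity check: flag the pair if readings differ by more
    than an assumed tolerance (deg C)."""
    return abs(sensor_a - sensor_b) <= tolerance

# Demo: a 2-minute warm spike (say, a passing exhaust plume) barely moves the
# damped reading, whereas an undamped sensor would follow it fully.
air = [20.0] * 60 + [25.0] * 2 + [20.0] * 60   # one "true" air temp per minute
a = b = 20.0
peak = a
for minute, temp in enumerate(air):
    a = damped_reading(a, temp, dt=60.0)
    b = damped_reading(b, temp + random.gauss(0.0, 0.05), dt=60.0)  # second unit
    peak = max(peak, a)
    if not sensors_agree(a, b):
        print(f"minute {minute}: sensors disagree, flag the unit for maintenance")
print(f"peak damped reading: {peak:.2f} C (the spike itself peaked at 25.0 C)")
```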
Honestly, shouldn’t the very scientists who “homogenize” data be the ones who thought of and championed a way to verify that the processes were working correctly? Like 20 years ago? I never trusted data I had to manipulate into telling its secrets unless I had a way to verify that the methods used were correct, within an acceptable margin of error. If I had no way of direct verification, then the new adjusted data set remained suspect.
Not speaking directly about you because I don’t know you, but seriously minded scientific people would ALWAYS look for ways to verify – and they would not trust proxy data that is also being manipulated unless they could not gather actual direct data to work with.
Serious minded scientific people are aware of and very careful of Confirmation Bias. It seems to this outsider that all of the Climate Science community is stuck in an echo chamber of bias. They need to step back and look once again at the entire picture – personal beliefs put aside – and ask themselves “Does all of the data support my claim, or do I need to adjust to the new reality?”
Well, that’s largely why NOAA started setting up the US Climate Reference Network around 20 years ago, before taking pictures of poorly sited stations was de rigueur. And we recently used it to verify that our homogenization processes were working correctly: http://onlinelibrary.wiley.com/doi/10.1002/2015GL067640/full
“If I had no way of direct verification”
Of course you can verify. The unadjusted data is available. You can calculate the index with no adjustment at all, if you prefer. It makes very little difference.
“It makes very little difference.”
Then why do it?
Andrew
“Then why do it?”
Individual stations changed. Moved, changed times etc. Some went up, some went down. But when you add it all up, there is little net difference. But you don’t know that until you do it.
” Some went up, some went down.”
..and this is done state of the art…..with ouija boards
“But you don’t know that until you do it.”
Then your argument would be, we don’t know what it’s going to do till we do it. Not “it makes very little difference” like you already know.
You are just a spin artist. Please retire.
Andrew
The verification in question is verification of the methods.
“There is overwhelming evidence that the climate system has warmed since the instigation of instrumental meteorological observations.”
I hope you meant initiation of instrumental meteorological observations, Zeke . . or is that a confession of sorts? ; )
No Zeke, NOAA just adjusted away the recent cold of this winter.
The system is not working anywhere that climate extremists can diddle with the data.
And the climate extremists control all of the data
“Not “it makes very little difference” like you already know.”
I do already know. I calculated the average with and without adjustments. As anyone can. It makes very little difference.
Nick, how far back is the raw data available? Do you have raw, as-recorded data from 1880 on for each US station? I was under the impression that that is only available as microfiche, or paper copies, but that most of those were destroyed. If you only go back 20 years from today, well, that is useless.
With alleged warming of 0.7C over 150 years, it doesn’t take much “little net difference” to account for most of it.
“but that most of those were destroyed”
I don’t know where you got that impression. Facsimiles are online here.
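For anyone who wants to try the comparison Nick describes rather than argue about it, here is a minimal sketch of the mechanics: build an average anomaly series twice, once from raw station values and once from adjusted ones, and compare the trends. The two toy stations below are synthetic placeholders, not real GHCN data; the real raw and adjusted series can be downloaded and dropped in.

```python
from statistics import mean

def anomalies(series, base_start=0, base_end=10):
    """Convert a station's annual means into anomalies versus a base period."""
    base = mean(series[base_start:base_end])
    return [v - base for v in series]

def network_average(stations):
    """Average the anomaly series of all stations, year by year."""
    per_station = [anomalies(s) for s in stations]
    return [mean(year_vals) for year_vals in zip(*per_station)]

def trend_per_year(series):
    """Ordinary least-squares slope of a series, per year."""
    n = len(series)
    xbar = (n - 1) / 2.0
    ybar = mean(series)
    num = sum((x - xbar) * (y - ybar) for x, y in enumerate(series))
    den = sum((x - xbar) ** 2 for x in range(n))
    return num / den

# Two toy stations spanning 20 "years"; the adjusted copy of station 2 removes
# a hypothetical 0.3 C step introduced by an assumed station move in year 10.
raw = [
    [14.0 + 0.02 * yr for yr in range(20)],
    [12.0 + 0.02 * yr + (0.3 if yr >= 10 else 0.0) for yr in range(20)],
]
adjusted = [raw[0][:],
            [v - (0.3 if yr >= 10 else 0.0) for yr, v in enumerate(raw[1])]]

print("raw trend      :", round(trend_per_year(network_average(raw)), 4), "C/yr")
print("adjusted trend :", round(trend_per_year(network_average(adjusted)), 4), "C/yr")
```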
The real problem is that any data is adjusted at all.
It makes no difference either way is what they say. Except the original raw data shows a huge difference to what is used today as the raw data.
I don’t think the climate has changed at all. There is no physical evidence for it as in where the tree lines are, what will grow in your area, when the snow comes or when it melts out. When you turn your air conditioning or furnace on. How many times per year record cold days happen or record warm days.
The only change is how the global temperature line keeps getting adjusted up with a higher trend, despite the fact that it makes no difference.
I mean, how many times are we supposed to believe the basic global temperature trend has changed from where it was? Every month, for 25 years now, it has been adjusted higher. Sooner or later, some actual physical impact should show up. If it doesn’t, and your climate hasn’t changed at all, then why are we so concerned about this gas?
Zeke, Mosher, et al & etc. will have us ignore Australia, New Zealand, South America, and all of the other places, including the US, where temps are shown to have been adjusted to enhance the climate consensus.
…who ya going to believe, your lying eyes or our homogenization?
Zeke, as to your claim that a recent study shows your whirliblend of data is valid, that is no more than one theologian endorsing another.
Thanks for playing, but you are not doing science.
“But when you add it all up”
THAT is the problem. You’ve averaged all these DIFFERENT sites together, and came up with a totally meaningless line on a graph.
“Except the original raw data shows a huge difference to what is used today as the raw data.”
Do you have any evidence of that? I don’t believe it is true.
They do use adjusted data (and say so) when calculating global averages.
Bill Illis hits nail on head:
None of the actual metrics of climate are changing in any meaningful way.
Instead we have contrived derivative numbers. And when those derivatives are tested they are proven time and again to be dubious at best.
ScottR – March 2, 2018 at 3:19 pm
Scott R, any surface temperature data/records prior to, say 1940, …. should be taken with “a grain of salt”, ……. and the reason(s) can be found herein the following, to wit:
History of the National Weather Service
Excerpted from above noted NWS History, to wit:
And then there was this:
The Volunteer Cooperative Observer Program
Excerpted from above noted Coop Program, to wit:
It has always tickled me that the climate fraternity has been so eager to “homogenise” (i.e. adjust by some never-explained yet nefarious means) the data to fit their narrative, but seemed unable (or unwilling) to homogenise (i.e. make them uniform in instrumentation, structure, and layout) their data collection stations.
The more reasons they have for “adjusting” the data, the easier it is to hide what they have been doing.
Zeke;
Assuming you could place as many stations as you wanted to, what spacing for land stations would you consider adequate to nail down your answers? USCRN has about 114 so far, or very roughly one per 26,000 square miles. I’m guessing that 10x to 20x would be about right but I think your opinion would be a little more informed.
This is the big question, d.j.hawkins. I have always been troubled by the impression temperature trends anywhere give (whether really measured or through data “adjustment”). Take the UK. The influence of the Atlantic is huge. The weather patterns vary according to what phase the AMO is in, so not only can you have a warm or cold body of water directly affecting the temperature over a significant period, but weather coming more often from a nearby land mass during the cool phase and straight off the ocean during the warm.
I would like to see sites on both sides of major mountain ranges at the same elevation, opposing coasts on all land masses greater in size than the Isle of Man, and a project to measure UHI and how it grows in real time in an as-yet undeveloped area. If AGW is as dangerous as its proponents suggest, they should have no trouble getting the funding to do all the above.
Who knows, it might be another instance of climate scientists being “amazed” at the unexpected results.
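As a back-of-envelope check on the numbers in the spacing question above: 114 stations spread over roughly 3.1 million square miles of contiguous US works out to about 27,000 square miles per station, or a characteristic spacing of roughly 165 miles; the 10x to 20x guess would bring that down to roughly 35-55 miles. A quick sketch (the area figure is an approximation):

```python
# Back-of-envelope station density: area per station and the equivalent grid
# spacing (the square root of that area). The ~3.1 million sq mi figure for the
# contiguous US is an approximation.

def area_and_spacing(area_sq_mi, n_stations):
    per_station = area_sq_mi / n_stations
    return per_station, per_station ** 0.5

CONUS_AREA_SQ_MI = 3.1e6  # approximate

for n in (114, 114 * 10, 114 * 20):
    per_station, spacing = area_and_spacing(CONUS_AREA_SQ_MI, n)
    print(f"{n:5d} stations -> ~{per_station:6.0f} sq mi each, ~{spacing:3.0f} mi spacing")
```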
No Zeke, your side can’t agree.
“The science is settled”.
Either admit you were wrong or be seen as a hypocrite who now wants it both ways.
The evidence your side used to hijack the world with climate apocalypse claptrap is now admitted by your side to be no good.
How dare you try to pretend it is no big deal?
A lot of people can’t afford heat in Europe right now. People are dying. Some of the price increases are a direct result of changes made in the name of global warming. Those changes were made, in part, because of the increased temperatures the public has been shown due to those homogenization algorithms. Those homogenization algorithms contribute to the global warming message. If you are right, then you’ve killed people today in the name of the future. If you are wrong, you’ve killed people today for no reason. Either way, you should man up and admit that your message is killing people today.
I see one problem if temperature readings come without contamination from localised heat sources……the averages will probably dip.
Can the deniers amongst us promise to take that into account and not upset the acceptors ……are there some deniers hereabouts? Yes? No? Just checking I’m in the right room 🕶 …_
Please explain what these ‘deniers’ you speak of are denying.
The alarmists use denier as a put down……steal the word off them, take the power.
Sticks and stones.
As the advert on the bus said….some people are gay, get over it…_
DiggerUK,
I like your idea!+10
DiggerUK, are you really asking if the people that have been saying its wrong are going to say, “See I told you so!” and upset the people that have been screaming at everyone that dares to disagree with them? Is that really what you’re asking?
There is every chance that properly sourced and uncontaminated temperature readings could lower the averages.
If we are to be taken seriously, then we will need to take that possibility on board…_
“There is every chance that properly sourced and uncontaminated temperature readings could lower the averages.”
Averages are the problem. You can’t average readings from different locations and come up with anything meaningful. THERE IS NO GLOBAL TEMPERATURE!
What is striking is the map showing the different trends in temperature for well v. badly sited stations. It looks like much of the trend is an artifact.
What? How? Did you somehow convince them that this was their idea?
No mention here of just how many ‘anomaly analyzers’ would be put out to pasture when, and if, the data sources are standardized. Seems like many pseudo-careers depend on the existing mess.
I’ll get excited…or not….when we see it all up and running….and how they adjust it to fit the old record
..if they adjust it like they did Envisat and Jason, etc
“Because by doing so, it puts an end to the relevance of NASA GISS and HadCRUT, whose operators (Gavin Schmidt and Phil Jones) are some of the most biased, condescending, and outright snotty scientists the world has ever seen. They should not be gatekeepers for the data, and this will end their lock on that distinction. ”
By being this way they have protected their “turf” and kept their income flowing for the last 9 years.
Yes and when it proves they have been lying, it will guarantee that they get charged with defrauding the government
The surface temperature records are apparently whatever Zeke Hausfather thinks they ought to be.
Great! We can deploy this over the next 3-5 years to accurately measure the coming drop in worldwide temps. And as such, there can be no “homogenization”, because the data will be coming from a certified, bona fide and unchallengeable measuring station. A win-win.
Of course, the historical data will all have to be ratcheted down even more to keep the drop-off in temps from getting too far out of hand. We will probably be at HadCRUTv15 before it is all over.
There are still nowhere near enough data sensors, especially outside North America and Europe.
That leaves plenty of “infilling” for them to adjust the numbers to better match the models.
3 – 5 years? Surely you jest. First, there will have to be studies, and then R&D on the sensor suite to deploy. Then there will have to be the actual rollout and testing. Then the new sites will have to be selected all over the world (even in war-torn places) and the land acquired and communication capability emplaced. Then, and only then, will we have a network of sensors we might be able to trust. Who will be collecting the data? Who will be analyzing it? Who will decide that a given station is malfunctioning and either delete its data or correct it? Note that by the time the full network is up and operational, the first stations installed will be beyond their use-by dates and need replacing.
3-5 years for a Government program???
OK, you do have a point here. I will say, though, Wunderground has done a great job tying in personal weather stations globally, so at least a communications network is up. With the exceptions of little coverage over 50% of the continents and probably 99% of the oceans, it is ready.
Just need to replace my Bass Pro Shops/AcuRite weather station with the new ultra unit when it is developed and we are in business. Since every official reading is currently being adjusted, I would suggest these personal weather stations are just as accurate anyway, so start using these and replace them with the new units when ready.
“3 – 5 years? Surely you jest. First, there will have to be studies, and then R&D on the sensor suite to deploy. Then there will have to be the actual rollout and testing. Then the new sites will have to be selected all over the world (even in war-torn places) and the land acquired and communication capability emplaced. Then, and only then, will we have a network of sensors we might be able to trust. Who will be collecting the data? Who will be analyzing it? Who will decide that a given station is malfunctioning and either delete its data or correct it? Note that by the time the full network is up and operational, the first stations installed will be beyond their use-by dates and need replacing.”
And even after all that, you STILL won’t have a global temperature.
But developing nations have a vested interest in rising temperatures (to get cash from the UN to fund their mitigation efforts (supposedly)). They’d be tempted to massage the data before passing it along. Only if their weather stations uploaded their data automatically to satellites, without human intervention, would a worldwide system be trustworthy.
Wonderful news.
I have a difficulty with the climate stuff because the researchers are doing experiments on the data in an unblinded manner.
Better data will help, but to make it a science, we need to figure out how to blind the experimenters IMHO.
“throw in the towel”?
Who? How? Of course temperature measurement can be improved, and scientists are in favour of doing that. USCRN was introduced about fifteen years ago. Did anyone try to spin that as “throwing in the towel”? It’s just trying to measure temperature better in the future.
It doesn’t change the issue of trying to determine temperature history. There we still have to deal with what we have always had. An extensive record of measures taken, not by climate scientists, for reasons other than study of climate. A huge amount of information, with imperfections. It’s what we have and we can’t re-do it. We just have to figure it out, as scientists have been doing.
Nick
stop spinning, it’s getting boring.
“There we still have to deal with what we have always had. An extensive record of measures taken, not by climate scientists, for reasons other than study of climate.”
Right, of course. Only “climate scientists” can take (and later adjust as required) an accurate measurement.
The rest of us out here, why, we’re just knuckle-dragging nobodies with no skin in the game whatsoever.
E.g., at King Ranch in FL there’s a 5000 acre orange grove subject to a freeze during certain parts of the year…why should they care about accurate temps???
“Only “climate scientists” can take (and later adjust as required) an accurate measurement.”
Who on Earth said that? This article makes a big deal about inaccuracies in the past record. Not me, I think it is mostly pretty good, especially recently. I doubt if “climate scientists” will be reading the thermometers in the new systems either.
Don’t continue to be an idiot, Nick.
Just admit that the existing temperature record is worthless, and that we need to start again with proper data.
We have lots of good data from the past. We can’t start again with that. The situation with global CRN will be like that of USCRN. It takes a long time to build up a new history. And as you build up, you find that the new stations aren’t really telling you anything different to the old.
Well, here’s the thing, Nick: I don’t really give a hoot what you think the future stations will report. The simple fact is, and you can’t get around it, that the old station data has been corrupted, and that’s why they’re calling for a new network going forward: so they can ensure continuity of quality data that is not biased and has no need for adjustments. You can whine about it all you want; I simply don’t care.
It will be interesting to see who comes out against this new idea, of course specifying it and getting it actually done are two entirely different things.
“you find that the new stations aren’t really telling you anything different to the old”
Then why would you need a new network?
Andrew
Fascinating how Nick knows that the new stations are going to report exactly what the already admitted bad stations were reporting.
Sounds like the fix is in.
BTW, we have lots of data, but the evidence that it is good can’t be found.
The trouble is not the existing record so much as the methodological rationale behind adjustments. For instance, cooling historical data requires imputing behaviour to the data collectors of the time. That fails Occam’s Razor for simplifying assumptions. Worse, if you are confident enough to adjust historical records, then you are also assuming that the imputed behaviours were “consistent.” If they were not, then no amount of adjustment makes sense. And if they were, then the best-practice adjustment would be to adjust modern data to match historic data: adjust modern data to meet the assumptions made about the older data. I would like to see the results of that.
“cooling historical data requires imputing behaviour to the data collectors of the time”
No, GHCN adjustment procedures are based solely on the observed series. A sudden change which is not seen in nearby stations. The method does not impute behaviour. But when you look into the underlying reasons, it rarely relates to behaviour either. Usually a station move, change in observation time or instrumentation. There is no suggestion anyone was measuring wrongly; just that what they measured then isn’t quite the same thing.
Nick… maybe it’s true looking at the past. Old (raw) v New (adj) data made no difference in your analysis. But that is no guarantee of the future. Hence, stop taking crap data and messing with it to fit your perceived idea of truth, and allow good data to be measured in the first place… it cuts out the need for all the smartypants adjustments. Oh, that might require a few less grants. Oops.
Nick, you are such an idiot I just wanted to read and be amused, but then you came out with another gem
‘just that what they measured then isn’t quite the same thing.’. Are you so thick that you don’t realise how stupid you sound?
jim,
“Are you so thick that you don’t realise how stupid you sound?”
No. It seems you are too thick to figure it out. If a station is in one place, and moves to another, the first set of measurements, however accurate, is not of the same thing. They are temperatures at a different location, and an adjustment is needed to bring the two series into line.
You offer no reasoning, only abuse.
“Climate science’s” original sin is the claim that derivation of ‘global temperature” is settled science.
Problem #1: Physics accurately calculates speed of light to 6 significant digits, magnetic moment of an electron to 12 significant digits; however, current climate science barely measures environmental temperature to 3 significant digits (degrees Kelvin).
Problem #2: Calculating “global temperature” (or other variants) is, at best, a poorly documented process of “in-fillng” and “data homogenization”.
Problem #3: Even stipulating temperature data collection is 99% accurate, there is no physics/mathematical proof that the atmosphere is a system that can actually be modeled. Needless to say, there is no fundamental formula of climate science that can provide meaningful temperature predictions even if given accurate data.
From a scientific standpoint, not being able to accurately measure or predict fundamental physical phenomena is a silly place to be, especially after claiming you have already done it for 100 years into the future. From a political standpoint, it’s perfectly understandable.
” an adjustment is needed to bring the two series into line.”
Or, accept the fact that you have two distinct data series and stop pretending it is OK to try to knit them together.
“A huge amount of information, with imperfections. It’s what we have and we can’t re-do it. We just have to figure it out, as scientists have been doing.”
No Nick, they are not “figuring it out”. What they are doing is attempting to bootstrap information out of thin air. If there were some underlying solid theory for what the data ought to look like then the information input from that theory can allow these sort of statistical approaches to get somewhere. For example no one objected to software fixes for Hubble space telescope imaging before the hardware fix was made and the reason for that is you can calibrate the image across the field of view using a known spherical target. What makes this possible is that the theory of gravity determines that celestial objects over a certain mass limit are approximately spherical. That theory drives the data modifications. There is no theory as to what the surface temperature data ought to look like so any attempt to modify it is of necessity driven by the modifier’s opinions.
Hi Nick,
I’ve been bugging you lately in every 2nd thread with a couple of Qs. Any chance you respond to a contact request here: matz.hedman@telia.com ?
Maybe boring with constant newcomers and back to square zero but anyway…. I would certainly appreciate it.
Cheers
Nick is far too busy bulkschi!!ing everyone – and especially himself-
And that’s why the uncertainty bars should be in the product, but they aren’t.
Nick Stokes
“A huge amount of information, with imperfections. It’s what we have and we can’t re-do it. We just have to figure it out, as scientists have been doing.”
First sentence – indisputable.
Second sentence – indisputable.
Third sentence, first phrase – indisputable
Third sentence, second phrase – not only disputable but, according to many more trustworthy sources, untrue. ‘Climate scientists’ appear to spend more time and effort trying to show that any information that does not tally with their supposition that climate is warming in line with increasing anthropogenic CO2 must be imperfect, and should therefore be adjusted or eliminated altogether.
Solomon: “Third sentence, first phrase – indisputable.” I dispute it: why “must” we figure out average global temp? Can’t we just live with the data we have, put apt error bars on it, and stop this adjustocene? Of course, that would put US (and others’) average temps from the thirties back up to higher than now, and Tony Heller would need to supersize Hansen’s charts to show an error bar above Hansen’s ’30s highs. Seriously, though, does it seem that we have a “McGuffin” here? These folks want to gin up hair-on-fire panic over 0.01 °C jumps on charts with no error bars. I’d say these stations that were installed and are fit for purpose (shuttle or airport landings) should be used for that purpose; and when you use them outside that purpose, fine for academic curiosity, but no policy should be set based on this data that even N. Stokes admits is so poor it’s “improved” by post-hoc adjustments. So: no, we don’t “have” to figure it out, not at all. The whole exercise has actually damaged science, IMO.
Why is there a Climate Reference Network station adjacent to the NASA runway? Doesn’t look like good siting to me.
Do shuttles land often enough to change the local climate?
No, but the runway heats enough to.
A runway in Florida will probably be heated by the Sun even in absence of Space Shuttles.
Nick, do you keep forgetting that a siting standard for temperature measuring devices exists?
It never says to place them at the end of runways, next to buildings, and so on.
[snip- we understand the sentiment towards Mr. Stokes, but your comment violates the website rules, perhaps you’d like to rephrase and resubmit? -mod]
Sunsettommy March 2, 2018 at 1:43 pm
Nick, do you keep forgetting that a siting standard for temperature measuring devices exists?
It never says to place them at the end of runways, next to buildings, and so on.
Remember that the primary purpose of such temperature measuring devices is to provide data for pilots, and they are sited according to FAA regulations; any use as sources of climate data is secondary.
https://www.faa.gov/documentLibrary/media/Order/JO_6560_20C.pdf
“any use as sources of climate data”
is contra-indicated
Probably because that exact station is specific, in real time, to that specific runway: any incoming Space Shuttle would have been landing in that exact weather, in real time. Perhaps to be able to calculate the glide ratio as the article stated, and to gather local, specific head-, tail-, or cross-wind info and all other pertinent weather data. It would make sense to have a dedicated weather station located next to a runway used for mission-critical work, since this is the local weather any incoming Space Shuttle would face on landing, or any large aircraft would be dealing with, such as the specialized 747 carrying the Space Shuttle piggyback for re-launch.
Glide ratio doesn’t change with temperature or density. The airspeed for best glide ratio does change. You need the density altitude to get the airspeed to aim for. Weight comes into the calculation too, but only aerodynamics affect the glide ratio.
Yes, it’s pedantic, but most folks think it glides more steeply if heavy or in thin air. No, you just need the right speed.
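To put rough numbers on the correction being discussed: density altitude rises with temperature, and the true airspeed corresponding to a fixed indicated best-glide speed rises with density altitude, which is why an accurate runway temperature matters to the pilot. A sketch using the common rule-of-thumb approximations (the constants are the usual pilot approximations; the example runway elevation and glide speed are invented):

```python
# Rule-of-thumb aviation arithmetic (standard approximations, not exact formulas):
#   ISA temperature  ~ 15 C minus 2 C per 1,000 ft of altitude
#   density altitude ~ pressure altitude + 120 ft per deg C above ISA
#   true airspeed    ~ indicated airspeed + ~2% per 1,000 ft of density altitude
# The runway elevation and airspeed below are made-up example numbers.

def isa_temp_c(pressure_alt_ft):
    return 15.0 - 2.0 * pressure_alt_ft / 1000.0

def density_altitude_ft(pressure_alt_ft, oat_c):
    return pressure_alt_ft + 120.0 * (oat_c - isa_temp_c(pressure_alt_ft))

def true_airspeed_kt(indicated_kt, density_alt_ft):
    return indicated_kt * (1.0 + 0.02 * density_alt_ft / 1000.0)

# A near-sea-level runway on a cool morning vs. a hot Florida afternoon:
for oat in (15.0, 35.0):
    da = density_altitude_ft(pressure_alt_ft=50.0, oat_c=oat)
    tas = true_airspeed_kt(indicated_kt=195.0, density_alt_ft=da)
    print(f"OAT {oat:4.1f} C -> density altitude ~{da:5.0f} ft, TAS ~{tas:5.1f} kt")
```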
DHR
I suspect NASA (not to mention the shuttle pilot) need very specific data about conditions at the shuttle landing strip (not 300 yards away…).
If this site also feeds “global temperature” data gathering, that’s probably a problem.
What a great way to justify a need to adjust the existing data: we have this new data here, but we need to normalize this here old data a little bit to help us use the new data.
“but we need to normalize this here old data a little bit ”
normalize = cool
This has been common knowledge for anyone with half a brain. Temperature data since the advent of computers and satellites, just in the last 20 years, is entirely different from that of the previous 100 years. To compare data from 40 years ago with today’s is insulting and a bald-faced lie. Laughable and unscientific on its surface. Disgusting.
“Surface air temperature” is meaningless. It’s either the surface temperature or it’s the air temperature well above the surface. Either way, it’s a meaningless measurement.
Welcome paper, but probably without any practical impact. The money isn’t there to implement it globally, and in many places I doubt the infrastructure exists to calibrate and maintain CRN-level quality. Finally, even if those hurdles were somehow overcome, we would need over three decades of results to have any impact on the CAGW debate. It will likely be over long before then, given the many failed climate predictions and failing ‘green’ solutions like diesel autos, solar, and wind.
I doubt the hardened climate catastrophe crusaders will ever give in, they’ll always manage to find another way to predict that the end is nigh.
True. We will likely always have South Australia as a CAGW crash test dummy, and Michael Mann, Gavin Schmidt and Naomi Oreskes as warmunists. But the zealots will be fewer, and their fatal flaws more clearly exposed, where it matters: to voters, legislators, and regulators. The Maldives hypocrisy is already exposed, as just one example.
The best approach for all the reasons you mention is to continue to enhance satellite measurements. We lost the satellite surface measurements a while back. That needs to be fixed. We will never be able to get the coverage or accuracy that can be derived from satellites with a bunch of surface stations.
And bring back the water vapour measurement program that Hansen cancelled in 2009. The official reason was that they couldn’t determine whether there was more or less over a 20-year period. I believe it was because they couldn’t demonstrate an increase, and therefore it was useless to the AGW theory.

On that point, I have been thinking about latent heat. When water goes from solid to liquid and then to gaseous molecules, latent heat is taken up at each of the 2 steps. However, when the direction is reversed, the latent heat is released in both of the 2 steps. If that is so, and some of this IR goes downward, then wouldn’t there be runaway global warming just from the hydrologic cycle alone? The only thing I can think of that would negate this is: during the evaporation process there are 2 types of cooling going on, a) cooling of the ocean surface and b) cooling of the troposphere. Actually the plants also cool in the same process, under a different name, transpiration. The balance of the energy cycle (because evaporation and condensation are equal, since water is neither created nor destroyed on earth) causes the amount of heat that was taken out of the atmosphere during the evaporation process (I am not counting the heat lost by the plants and oceans, because that is the latent heat transferred to the water molecules) to equal the heat transferred to the atmosphere during condensation. So this whole process only shows the balance, without considering the role of CO2 or the role of the latent heat. So, if we consider the role of latent heat that was added from the oceans and plants during evaporation, that heat has to exit to space during condensation for the heat equation to balance.

The energy balance graph of the National Weather Service of NOAA does not mention the role of evaporation causing latent heat to be added to H2O water vapour. It has a small yellow arrow called sensible heat and calls it rising air currents. Why that is different from the large yellow arrow rising from the ground I can’t fathom. The graph gives the heat flux as a 100% total for each pathway of heat flux. The graph balances, but surely water vapour from evaporation has to play a role. In fact, if you look at the accounting totals, they do mention evaporation. So the latent heat graph label is wrong: it is not water vapour changing to liquid and ice; it should be labeled ice melting and water changing to water vapour. As explained above, there are 2 sides to the latent heat energy balance, so there should be another red arrow representing water vapour changing to liquid or ice. Also, NOAA has the surface emitting 116% of the heat flux. This is impossible if you took a one-moment-in-time heat graph picture; the whole earth doesn’t cycle from a glacier to a hotbed except in million-year cycles. These omissions make me question NOAA’s understanding of the energy balance cycle.
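To put a rough number on the latent heat pathway discussed above, the energy carried from the surface to the atmosphere by evaporation can be estimated from the global mean evaporation rate and the latent heat of vaporization. A back-of-envelope sketch, assuming roughly 1,000 mm/yr of global mean evaporation as a ballpark:

```python
# Back-of-envelope global latent heat flux from evaporation.
# Assumed inputs: roughly 1,000 mm/yr global mean evaporation (a commonly cited
# ballpark) and a latent heat of vaporization of ~2.45e6 J/kg.

LATENT_HEAT_J_PER_KG = 2.45e6
WATER_DENSITY_KG_M3 = 1000.0
SECONDS_PER_YEAR = 3.15e7

def latent_heat_flux_w_m2(evap_mm_per_year):
    evap_m_per_s = (evap_mm_per_year / 1000.0) / SECONDS_PER_YEAR
    mass_flux_kg_m2_s = evap_m_per_s * WATER_DENSITY_KG_M3
    return mass_flux_kg_m2_s * LATENT_HEAT_J_PER_KG

print(f"~{latent_heat_flux_w_m2(1000.0):.0f} W/m^2 carried upward as latent heat")
# Roughly 78 W/m^2, in the same ballpark as the latent heat arrow in standard
# surface energy budget diagrams; the same energy is released to the atmosphere
# when the vapour condenses, which is the two-way balance described above.
```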
Well, well…A bright green blade of grass pokes its way up through the supposedly impenetrable concrete. Well done, Mr. Watts. Your hard work is paying off.
So predictable. I have encountered this in both public and private life. All you ever get are denials and aggression from those who have not been doing a proper job => kill the messenger. Quietly, however, they go back to work and start checking for the very problems that the messenger warns about. Finally, after a comprehensive study and big fanfare, those in charge announce that due to their genius they have discovered exactly what the messenger warned about, and have brilliantly designed remedial steps which also happen to be exactly what the messenger suggested. Of course, absolutely NO mention is ever made of the messenger in any reports, press, or anything at all. In fact, almost universally, the messenger or whistleblower is discredited by those in charge. In the corporate or public world, the messenger is lucky to keep their job and will never be given a high-level managerial position, because they are identified as not being a “team player” (lying to protect the “team” is more highly valued than integrity in 99.9% of human organizations).
Armed with this knowledge and after some hard lessons, I have learned to keep my mouth shut. I even saw ongoing blatant corruption by senior managers but knew that reporting it was an immediate CLM. I think this behaviour has become completely entrenched in all Western corporations – hence Dieselgate and iphonebatterygate….
1st rule of Climate Science: Send (your) money
2nd rule of Climate Science: Never pull back the curtain and expose the Wizard of Oz
3rd rule of Climate Science: Send more (of your) money
Two things. First, any piece of equipment can fail. If it’s precision monitoring equipment, you’d like it to fail completely at one time, so there is no doubt about the quality of the data just prior to failure (this is of course a very undesirable failure mode for safety equipment such as life preservers). If you’re not lucky and your monitoring equipment drifts for some unknown time at some unknown rate, you don’t know how much data is suspect and by what amount.
Second, it’s still not clear to me how sparsely located monitoring equipment can validly represent large surrounding areas, especially encompassing different microclimates. A good example is rainfall gauges in the Hawaiian islands, where 20 miles on the road can move you between coastal desert and tropical rain forest. Averaging the two is meaningless.
Actually three things. Even if you get a pristine new monitoring network sited and maintained to ideal standards, you either have to wait 50+ years to have an idea of what’s going on with the climate or you have to devise a way to bridge old and new data to get a longer timeline. I don’t think there is any way to use the new data to make the old data any more reliable. What is more likely to happen is the problems with the old data will infect the new data in the interests of homogenization.
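On the drift point above: one practical use of a co-located reference instrument is that slow drift in an operational sensor, nearly invisible within the station’s own noisy record, shows up immediately in the difference series between the two. A minimal illustration with synthetic data (the drift rate and noise levels are invented):

```python
# Synthetic illustration: slow sensor drift hides inside natural variability in
# a single record, but the difference against a co-located reference isolates it.
# Drift rate, noise levels, and record length are all invented numbers.

import random

random.seed(1)
YEARS = 30
true_climate = [10.0 + random.gauss(0.0, 0.5) for _ in range(YEARS)]   # no real trend

reference = [t + random.gauss(0.0, 0.05) for t in true_climate]         # stable unit
drifting = [t + 0.03 * yr + random.gauss(0.0, 0.05)                     # +0.03 C/yr drift
            for yr, t in enumerate(true_climate)]

def trend_per_year(series):
    """Ordinary least-squares slope of a series, per year."""
    n = len(series)
    xbar = (n - 1) / 2.0
    ybar = sum(series) / n
    num = sum((x - xbar) * (y - ybar) for x, y in enumerate(series))
    den = sum((x - xbar) ** 2 for x in range(n))
    return num / den

diff = [d - r for d, r in zip(drifting, reference)]
print(f"apparent trend of the drifting sensor alone: {trend_per_year(drifting):+.3f} C/yr")
print(f"trend of the (drifting - reference) series : {trend_per_year(diff):+.3f} C/yr")
# The single-station trend mixes instrument drift with climate noise and cannot
# separate them; the difference series isolates the ~0.03 C/yr drift.
```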