NOAA shows 'the pause' in the U.S. surface temperature record over nearly a decade

USCRN_Average_CONUS_Jan2004-April2014

NOTE: significant updates have been made; see below.

After years of waiting, NOAA has finally made a monthly dataset from the U.S. Climate Reference Network available in a user-friendly way via their recent web page upgrades. The data come from state-of-the-art, ultra-reliable, triple-redundant weather stations placed in pristine environments. As a result, these temperature data need none of the adjustments that plague the older surface temperature networks, such as USHCN and GHCN, which have been heavily adjusted in attempts to correct for a wide variety of biases. Using NOAA’s own USCRN data, which eliminates all of the squabbles over the accuracy and adjustment of temperature data, we can get a clear plot of pristine surface data. It could be argued that a decade is too short and that the data are far too volatile for a reasonable trend analysis, but let’s see whether the new state-of-the-art USCRN data show warming.

A series of graphs from NOAA follows, plotting average, maximum, and minimum surface temperature, along with trend analyses and the original source data so that interested parties can replicate the results.

First, some background on this new temperature monitoring network, from the network home page:

USCRN Station

 

The U.S. Climate Reference Network (USCRN) consists of 114 stations developed, deployed, managed, and maintained by the National Oceanic and Atmospheric Administration (NOAA) in the continental United States for the express purpose of detecting the national signal of climate change. The vision of the USCRN program is to maintain a sustainable high-quality climate observation network that 50 years from now can with the highest degree of confidence answer the question: How has the climate of the nation changed over the past 50 years? These stations were designed with climate science in mind. Three independent measurements of temperature and precipitation are made at each station, insuring continuity of record and maintenance of well-calibrated and highly accurate observations. The stations are placed in pristine environments expected to be free of development for many decades. Stations are monitored and maintained to high standards, and are calibrated on an annual basis.

Source: http://www.ncdc.noaa.gov/crn/

As you can see from the map below, the USCRN is well distributed, with good spatial resolution, providing excellent representativeness of the CONUS, Alaska, and Hawaii.

crn_map

From the Site Description page of the USCRN:

==========================================================

Every USCRN observing site is equipped with a standard set of sensors, a data logger and a satellite communications transmitter, and at least one weighing rain gauge encircled by a wind shield. Off-the-shelf commercial equipment and sensors are selected based on performance, durability, and cost.

Highly accurate measurements and reliable reporting are critical. Deployment includes calibrating the installed sensors and maintenance will include routine replacement of aging sensors. The performance of the network is monitored on a daily basis and problems are addressed as quickly as possible, usually within days.

Many criteria are considered when selecting a location and establishing a USCRN site:

  • Regional and spatial representation: Major nodes of regional climate variability are captured while taking into account large-scale regional topographic factors.
  • Sensitivity to the measurement of climate variability and trends: Locations should be representative of the climate of the region, and not heavily influenced by unique local topographic features and mesoscale or microscale factors.
  • Long term site stability: Consideration is given to whether the area surrounding the site is likely to experience major change within 50 to 100 years. The risk of man made encroachments over time and the chance the site will close due to the sale of the land or other factors are evaluated. Federal, state, and local government land and granted or deeded land with use restrictions (such as that found at colleges) often provide a high stability factor. Population growth patterns are also considered.
  • Naturally occurring risks and variability:
    • Flood plains and locations in the vicinity of orographically induced winds like the Santa Ana and the Chinook are avoided.
    • Locations with above average tornado frequency or having persistent periods of extreme snow depths are avoided.
    • Enclosed locations that may trap air and create unusually high incidents of fog or cold air drainage are avoided.
    • Complex meteorological zones, such as those adjacent to an ocean or to other large bodies of water are avoided.
  • Proximity:
    • Locations near existing or former observing sites with long records of daily precipitation and maximum and minimum temperature are desirable.
    • Locations near similar observing systems operated and maintained by personnel with an understanding of the purpose of climate observing systems are desirable.
    • Endangered species habitats and sensitive historical locations are avoided.
    • A nearby source of power is required. AC power is desirable, but, in some cases, solar panels may be an alternative.
  •  Access: Relatively easy year round access by vehicle for installation and periodic maintenance is desirable.

Source: http://www.ncdc.noaa.gov/crn/sitedescription.html

==========================================================

As you can see, every issue and contingency has been thought out and dealt with. The U.S. Climate Reference Network is essentially the best climate monitoring network in the world, without peer. Besides being in pristine environments away from man-made influences such as urbanization and the resultant UHI issues, it is also routinely calibrated and maintained, something that cannot be said for the U.S. Historical Climatology Network (USHCN), which is a mishmash of varying equipment (alcohol thermometers in wooden boxes, electronic thermometers on posts, airport ASOS stations placed for aviation), compromised locations, and a near-complete lack of regular thermometer testing and calibration.

Having established its equipment homogeneity, state-of-the-art triple-redundant instrumentation, lack of environmental bias, long-term accuracy, calibration, and lack of need for any adjustments, let us examine the data produced over the last decade by the U.S. Climate Reference Network.

First, from NOAA’s own plotter at the National Climatic Data Center in Asheville, NC, here is the plot they make available to the public showing average temperature for the Contiguous United States by month:

USCRN_avg_temp_Jan2004-April2014

Source: NCDC National Temperature Index time series plotter

To eliminate any claims of “cherry picking” the time period, I selected the range to be from 2004 through 2014, and as you can see, no data exists prior to January 2005. NOAA/NCDC does not make any data from the USCRN available prior to 2005, because there were not enough stations in place yet to be representative of the Contiguous United States. What you see is the USCRN data record in its entirety, with no adjustments, no start and end date selections, and no truncation. The only thing that has been done to the monthly average data is gridding the USCRN stations, so that the plot is representative of the Contiguous United States.
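NCDC does not document its gridding procedure on the plotter page, but to illustrate what gridding does in principle, here is a minimal Python sketch of one common textbook approach: bin stations into fixed-size latitude/longitude boxes, average within each box, and weight each box by the cosine of its latitude to approximate area. The function name and the 5° cell size are my own assumptions for illustration, not NCDC's actual method.

```python
import numpy as np

def gridded_mean(lats, lons, temps, cell=5.0):
    """Grid-box average of station temperatures.

    Stations are binned into cell-degree boxes, averaged within each
    box, and boxes are weighted by cos(latitude) to approximate area.
    This is a generic textbook method, not NCDC's documented procedure.
    """
    boxes = {}
    for lat, lon, t in zip(lats, lons, temps):
        key = (int(lat // cell), int(lon // cell))
        boxes.setdefault(key, []).append(t)
    means, weights = [], []
    for (i, _j), vals in boxes.items():
        center_lat = (i + 0.5) * cell  # latitude of the box center
        means.append(np.mean(vals))
        weights.append(np.cos(np.radians(center_lat)))
    return float(np.average(means, weights=weights))

# Two nearby stations fall into one box and are simply averaged:
print(gridded_mean([31.0, 32.0], [-102.0, -101.0], [10.0, 20.0]))  # → 15.0
```

The point of the weighting is that a box near the Canadian border covers less area than one in Texas, so an unweighted station average would over-count the north.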

Helpfully, the data for that plot are also made available on the same web page. Here is a comma-separated value (CSV) Excel workbook file for the plot above, from NOAA:

USCRN_Avg_Temp_time-series (Excel Data File)

Because NOAA/NCDC offers no trend line generation in their user interface, I have plotted the data from that NOAA-provided file and added a linear trend line using the least-squares curve fitting function in the DPlot program that I use.
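For readers who want to replicate the fit without DPlot, the same least-squares trend is a one-line polynomial fit in most tools. The series below is synthetic, invented purely to show the mechanics; the real values come from the NOAA data file linked above.

```python
import numpy as np

def monthly_trend(values):
    """Least-squares linear trend of a monthly series.

    Returns the slope in units per month and per decade
    (120 months per decade).
    """
    t = np.arange(len(values), dtype=float)  # month index 0, 1, 2, ...
    slope, _intercept = np.polyfit(t, values, 1)
    return slope, slope * 120.0

# Synthetic example: 112 months (Jan 2005-Apr 2014) cooling at -0.005 °F/month
temps = 52.0 - 0.005 * np.arange(112)
per_month, per_decade = monthly_trend(temps)
print(round(per_month, 5), round(per_decade, 3))  # -0.005 -0.6
```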

Not only is there a pause in the posited temperature rise from man-made global warming, but a clearly evident slight cooling trend in the U.S. Average Temperature over nearly the last decade:

USCRN_Average_CONUS_Jan2004-April2014

We’ve had a couple of heat waves and we’ve had some cool spells too. In other words, weather.

The NCDC National Temperature Index time series plotter also makes maximum and minimum temperature data plots available. I have downloaded their plots and data, supplemented with my own plots to show the trend line. Read on.

 

NOAA/NCDC plot of maximum temperature:

USCRN_max_temp_Jan2004-April2014

Source of the plot here.

Data from the plot: USCRN_Max_Temp_time-series (Excel Data File)*

My plot with trend line:

USCRN_Max_Temp_time-series

As the trend line shows, there is a slight cooling in maximum temperatures in the Contiguous United States, suggesting that the heat wave events (seen in 2006 and 2012) were isolated weather incidents, not part of a near-decadal trend.

 

NOAA/NCDC plot of minimum temperature:

USCRN_min_temp_Jan2004-April2014

Source of the plot here.

USCRN_Min_Temp_time-series (Excel Data File)*

The cold winters of 2013 and 2014 are clearly evident in the plot above, with February 2013 reaching -3.04°F nationally.

My plot with trend line:

USCRN_Min_Temp_time-series

*I should note that NOAA/NCDC’s links to XML, CSV, and JSON files on their plotter page provide only the average temperature dataset, not the maximum and minimum temperature datasets, which may be a web page bug. However, the correct data appear in the HTML table displayed below the plot, and I imported that into Excel and saved it as a data file in workbook format.
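Extracting the table from the page HTML can also be scripted rather than copied by hand. This sketch uses only the Python standard library; the two data rows are invented stand-ins, and the real plotter page's markup may differ from this simplified form.

```python
import csv
import io
from html.parser import HTMLParser

class TableExtractor(HTMLParser):
    """Collects the text of each <td>/<th> cell, one list per <tr>."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._in_cell = [], [], False
    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._in_cell = True
            self._row.append("")
    def handle_endtag(self, tag):
        if tag == "tr" and self._row:
            self.rows.append(self._row)
        elif tag in ("td", "th"):
            self._in_cell = False
    def handle_data(self, data):
        if self._in_cell:
            self._row[-1] += data.strip()

# Stand-in for the plotter page's table; the real page has one row per month.
html = """<table>
<tr><th>Date</th><th>Value</th></tr>
<tr><td>200501</td><td>33.97</td></tr>
<tr><td>200502</td><td>37.43</td></tr>
</table>"""

parser = TableExtractor()
parser.feed(html)
buf = io.StringIO()
csv.writer(buf).writerows(parser.rows)  # CSV opens directly in Excel
print(buf.getvalue())
```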

The trend line illustrates a cooling trend in minimum temperatures across the Contiguous United States for nearly a decade. There is some endpoint sensitivity in these plots, which is to be expected and can’t be helped, but the fact that all three temperature series (average, maximum, and minimum) show a cooling trend is notable.

It is clear there has been no rise in U.S. surface air temperature in the past decade. In fact, a slight cooling is demonstrated, though given the short time frame for the dataset, about all we can do is note it, and watch it to see if it persists.

Likewise, there does not seem to have been any statistically significant warming in the contiguous U.S. since the start of the new USCRN data, using the average, maximum, or minimum temperature data.

I asked three people who are well versed in data plotting and analysis to review this post before I published it. One of them, Willis Eschenbach, added his own graph as part of the review feedback: a trend analysis with error bars, shown below.

CRN Mean US temperature anomaly

While we can’t say there has been a statistically significant cooling trend, even though the slope of the trend is downward, we also can’t say there’s been a statistically significant warming trend either.

What we can say is that this is just one more dataset indicating a pause in the posited rise of temperature in the Contiguous United States for nearly a decade, as measured by the best surface temperature monitoring network in the world. It is unfortunate that we don’t have similar systems distributed worldwide.

UPDATE:

Something has been puzzling me and I don’t have a good answer for the reason behind it, yet.

As Zeke pointed out in comments, and also over at Lucia’s, USCRN and USHCN data align nearly perfectly, as seen in this graph. That seems almost too perfect to me. Networks with such huge differences in homogeneity, equipment, siting, station continuity, etc. rarely match that well.

Screen-Shot-2014-06-05-at-1.25.23-PM

Note that there is an important disclosure missing from that NOAA graph, read on.

Dr Roy Spencer shows in this post the difference from USHCN to USCRN:

Spurious Warmth in NOAA’s USHCN from Comparison to USCRN

The results for all seasons combined shows that the USHCN stations are definitely warmer than their “platinum standard” counterparts:

Spencer doesn’t get a match between USHCN and USCRN, so why does the NOAA/NCDC plotter page?

And our research indicates that USHCN as a whole runs warmer than the most pristine stations within it.

In research with our surfacestations metadata, we find quite a separation between the most pristine stations (Class 1/2) and the NOAA final adjusted data for USHCN. This examines 30-year data from 1979 to 2008, and also 1979 to present. We can’t really go back further because metadata on siting is almost non-existent. Of course, it all exists in the B44 forms and site drawings held in the vaults of NCDC, but it is not in electronic form, and getting access is about as easy as getting access to the sealed Vatican archives.

By all indications of what we know about siting, the Class 1/2 USHCN stations should be very close, trend-wise, to USCRN stations. Yet the ENTIRE USHCN dataset, including hundreds of really bad stations with poor siting and trends that don’t come close to those of the most pristine Class 1/2 stations, is said to match USCRN. From our own examination of all USHCN data, and of nearly all stations for siting, we know that is not true.

So, I suppose I should put out a caveat here. I wrote this above:

“What you see is the USCRN data record in its entirety, with no adjustments, no start and end date selections, and no truncation. The only thing that has been done to the monthly average data is gridding the USCRN stations, so that the plot is representative of the Contiguous United States.”

I don’t know that for a fact to be totally true, as I’m going on what has been said about the intents of NCDC in the way they treat and display the USCRN data. They have no code or methodology reference on their plotter web page, so I can’t say with 100% certainty that the output of that web page plotter is 100% adjustment free.  The code is hidden in a web engine black box, and all we know are the requesting parameters. We also don’t know what their gridding process is. All I know is the stated intent that there will be no adjustments like we see in USHCN.

And some important information that should be plainly listed is missing. NCDC is doing an anomaly calculation on USCRN data, but as we know, there are only 9 years and 4 months of data. So what period are they using as the baseline for the anomaly calculation? Unlike other NOAA graphs, such as the one below, the graph Zeke plotted above shows neither the baseline period nor the baseline temperature.

This one, of the entire COOP network with all its warts, has the baseline info, and it shows a cooling trend as well, albeit a greater one than USCRN’s:

NOAA_COOP_data_CONUS_2004-2014

Source: http://www.ncdc.noaa.gov/cag/time-series/us

Every climate dataset out there that does anomaly calculations shows the baseline information, because without it, you really don’t know what you are looking at. I find it odd that in the graph Zeke got from NOAA, they don’t list this basic information, yet in another part of their website, shown above, they do.

Are they using the baseline from another dataset, such as USHCN, or the entire COOP network to calculate an anomaly for USCRN? It seems to me that would be a no-no if in fact they are doing that. For example, I’m pretty sure I’d get flamed here if I used the GISS baseline to show anomalies for USCRN.
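One thing worth noting: the choice of baseline shifts an anomaly series up or down by a constant, but it does not change the trend, so the missing disclosure matters most when comparing levels between datasets rather than slopes. Here is a minimal sketch of a monthly anomaly calculation (my own illustration, assuming a series that starts in January and a baseline covering whole years; the numbers are synthetic):

```python
import numpy as np

def anomalies(monthly, baseline):
    """Anomaly series: each value minus the mean of its calendar month
    over the baseline period.

    Assumes `monthly` starts in January and `baseline` covers whole
    years (a multiple of 12 months).
    """
    clim = np.asarray(baseline).reshape(-1, 12).mean(axis=0)
    monthly = np.asarray(monthly)
    return monthly - clim[np.arange(len(monthly)) % 12]

months = np.arange(24)
series = 50.0 + 10.0 * np.sin(2 * np.pi * months / 12)  # pure seasonal cycle
a_own = anomalies(series, series[:12])         # baseline from the data itself
a_warm = anomalies(series, series[:12] + 1.0)  # a 1-degree-warmer baseline
print(np.allclose(a_warm, a_own - 1.0))  # baseline shifts level, not shape
```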

So until we get a full disclosure as to what NCDC is actually doing, and we can see the process from start to finish, I can’t say with 100% certainty that their anomaly output is without any adjustments, all I can say with certainty is that I know that is their intent.

There are some sloppy things on this new NCDC plotter page, like the misspelling of the word Contiguous: they spell it Continguous, both in the plotted output graph title and in the actual data file they produce: USCRN_Avg_Temp_time-series (Excel Data file). Then there’s the missing baseline information on the anomaly calculation, and the missing data-file outputs for the maximum and minimum temperature datasets (I had to extract those manually from the HTML, as noted by the asterisk above).

All of this makes me wonder if the NCDC plotter output is really true, and whether, in the process of gridding and anomaly calcs, the USCRN data remain truly adjustment-free. I read in the USCRN documentation that one of the goals was to use that data to “dial in” the adjustments for USHCN; at least, that is how I interpret this:

The USCRN’s primary goal is to provide future long-term homogeneous temperature and precipitation observations that can be coupled to long-term historical observations for the detection and attribution of present and future climate change. Data from the USCRN is used in operational climate monitoring activities and for placing current climate anomalies into an historical perspective. http://www.ncdc.noaa.gov/crn/programoverview.html

So if that is true, and USCRN is being used to “dial in” the messy USHCN adjustments for the final dataset, it would explain why USHCN and USCRN match so nearly perfectly over those 9+ years. I don’t believe it is simple coincidence that two entirely dissimilar networks, one pristine and the other a heterogeneous train wreck requiring multiple adjustments, would match perfectly, unless there was an effort to use the pristine USCRN to “calibrate” the messy USHCN.

Given what we’ve learned from Climategate, I’ll borrow words from Reagan and say: trust, but verify.

That’s not conspiracy-theory thinking like we see from “Steve Goddard”, but a simple need for the right to know, replicate, and verify, otherwise known as science. Given his stated viewpoint on such things, I’m sure Mosher will back me up on getting full disclosure of the method, code, and output engine for the USCRN anomaly data for the CONUS so that we can do that, and also determine whether USHCN adjustments are being “dialed in” to fit USCRN data.

# # #

UPDATE 2 (second-party update okayed by Anthony): I believe the magnitude of the variations and their correlation (0.995) are hiding the differences. They can be seen by subtracting the USHCN data from the USCRN data:
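The arithmetic behind that is easy to demonstrate: two series can correlate at better than 0.99 and still hide a steady drift that only their difference reveals, because the large shared seasonal cycle dominates the correlation. A synthetic sketch (the 0.002 °F/month drift is an invented number, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
months = np.arange(112, dtype=float)
seasonal = 20.0 * np.sin(2 * np.pi * months / 12)  # dominant annual cycle

series_a = 52.0 + seasonal + rng.normal(0.0, 1.0, months.size)
series_b = series_a + 0.002 * months  # hypothetical small warm drift

r = np.corrcoef(series_a, series_b)[0, 1]
drift = np.polyfit(months, series_b - series_a, 1)[0]
print(f"correlation: {r:.4f}, drift: {drift:.4f} per month")
```

The correlation comes out near 1 even though the difference series trends steadily upward, which is why the difference plot, not the overlay, is the diagnostic to look at.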

Cheers

Bob Tisdale


Excellent work, gentlemen. The minimums are of particular interest. It underlines the fact we are forced to heat eight months of the year in Canada.

It’s worth pointing out that the same site that plots USCRN data also contradicts Monckton’s frequent claims in posts here that USHCN and USCRN show different warming rates: http://rankexploits.com/musings/wp-content/uploads/2014/06/Screen-Shot-2014-06-05-at-1.25.23-PM.png

I had been wondering about these results, thank you for updating us. Is there a link to the digitalization of the data for your own project? Will we be getting an update on that topic any time soon?

K-Bob

Anthony – this is totally inconvenient, didn’t you get the memo?

rokshox

Any comment on Goddard’s recent observations that NOAA is deleting cooler stations and infilling with warmer data?
http://stevengoddard.wordpress.com/2014/06/08/more-data-tampering-forensics/

Lance Wallace

What about Alaska? 8 excellent stations should give us a good picture.

Leigh

“measured by the best surface temperature monitoring network in the world”
And why isn’t this on the front page of every news paper in America?
Another inconvenient truth that the likes of Gore and the white house will ignore.

Mike McMillan

So that’s what it looks like without TOBs and GIA. Too bad it doesn’t go back to 1998.

Goddard is wrong.
Let me put it this way.
Ask Anthony what he thinks of goddards work

Jimmy Haigh.

So where’s the missing heat? Is it at the bottom of the ocean? Or is it stuck in the pipeline?

Mike T

Indeed, an observation network to be proud of (if you’re American) or deeply envious of (if not). I especially like the triple set of thermometers, since stations in this country (Oz) have just the one (well, two, wet & dry), and if they fall over or comms go offline, bye-bye temp readings.

Daniel Vogler

Wait for it, “but the US isn’t the world!”

bh2

While ten years is a very short time, it would be interesting to see CO2 readings graphed for the same period to see if they also backed off (which would be the direct implication if warmist theory is correct). Or did CO2 continue to rise even though this US temperature trend did not?

Do they still work in deg F at NOAA?

Keith Minto

A nearby source of power is required. AC power is desirable, but, in some cases, solar panels may be an alternative.

A nearby source of power may mean some type of development is nearby. In site-selection power discussions, I wonder what has the greatest pull: AC power, or solar collection and storage? Is cost a consideration?
Overall, the USCRN network seems a solid system for data collection.

Charles Nelson

When the only comment a Warmist like Steven Mosher puts up (on a piece wherein NOAA clearly admits that there is no surface warming in the USA!!!) is ….’Goddard is wrong’, it surely must be worth taking a closer look at Real Science.
Are any/some/all of those examples Goddard posts of altered graphs/data ‘fakes’?
What about the old newspaper reports and photos of weather extremes and catastrophes from previous ‘cooler’ eras…are any/some or all of those fakes?
Perhaps WUWT should critique Real Science.
Willis Eschenbach (who I have seen with my own eyes calculate the number of angels who can dance on the head of a pin) might even be able to verify whether or not Goddard’s claim that the past is being ‘cooled’ is valid.
Oh and before I go…am I the only one who would appreciate it if maybe Mosh could quickly explain to us why all that extra, ‘heat trapping’ CO2 we’ve been putting into the atmosphere doesn’t appear to be well…’trapping heat’ in the atmosphere?

“Any comment on Goddard’s recent observations that NOAA is deleting cooler stations and infilling with warmer data?”
At this point it doesn’t matter. We now have CRN data that require none of these adjustments and infills.

Jimmy Haigh. says:
June 7, 2014 at 10:45 pm
So where’s the missing heat? Is it at the bottom of the ocean? Or is it stuck in the pipeline?

H4 (The Hadley Heat Hidey Hole)

Patrick

“RokShox says:
June 7, 2014 at 10:23 pm”
I have a vague recollection of the phrase “March of the thermometers” where in the mid-1990’s many rural devices, data and records were dumped out of a dataset (I don’t recall which) which ultimately showed a warming trend.

george e. smith

Well, I’m not particularly into the government spending my tax dollars on trivial things.
But in this instance, it would seem that we are buying ourselves a very nice piece of apparatus, one that will become ever more valuable as its data record spreads into multiple decades.
And thanks for the blow-by-blow on this, Anthony. Can we just hope that your network farce exposé project may actually have helped bring this net online.

john

Charles Nelson says:
“am I the only one who would appreciate it if maybe Mosh could quickly explain to us why all that extra, ‘heat trapping’ CO2 we’ve been putting into the atmosphere doesn’t appear to be well…’trapping heat’ in the atmosphere?”
John says…..
97% of 3 people agree with Steven Mosher therefore there’s a con sensus.
The science is settled – so DON’T ask embarrassing questions.

Grey Lensman

We all know the current temperature trend is flat for almost 18 years. This includes all the adjustments, corrections, removal of cold stations, reducing historical warms and a multitude of “hide the decline” tricks. If we reverse engineer those, we get, lo and behold, actual physical cooling.
And just what does this new “pristine” data tell us? COOLING.
Q.E.D.
The other measure of Globull Warming also tells a similar story viz:
Free renewable energy leads to the most expensive and unreliable electricity. Do the sums, cost of generation declines, price sky rockets.

jones

“Not only is there a pause in the posited temperature rise from man-made global warming, but a clearly evident slight cooling trend in the U.S. Average Temperature over nearly the last decade:”
.
Meh….it’s just a regional weather pattern……

“As a result, these temperature data need none of the adjustments that plague the older surface temperature networks, such as USHCN and GHCN, which have been heavily adjusted to attempt corrections for a wide variety of biases.”
As Zeke showed above, USHCN, with all these plagues, has essentially identical results. In this post USCRN is shown with an average T trend of -0.7113 °F/century, 2005-Apr 2014. For USHCN I get -0.6986 °F/century for the same period. I used the same plotter, just asking for USHCN instead of USCRN.

jim

Some of the NOAA graphs misspell “contiguous”

MikeB

There seems to be an inconsistency in the trend shown on the second graph (-0,00503 deg.F per month) and Willis Eschenbach’s graph which shows a trend of 0.6 deg.C per decade.
0.00503 deg.F per month is 0.6 deg.F per decade. Should the units on the latter graph be degrees Fahrenheit ?
[Agreed. I work so infrequently in °F I mistyped the units. I’ll fix it, thanks. -w.]

Nick Stokes says: June 8, 2014 at 12:13 am
“For USHCN I get for the same period -0.6986 °F/century.”

Apologies, correction here. I was comparing intercepts (and giving wrong units). For the trends, I do get a greater difference, with USHCN showing a larger downtrend, -0.00727°F/month, vs USCRN’s -0.00503°F/month here. -0.00727°F/month is -0.87°F/decade.

richard verney

It is long overdue that data only from pristine sites is used without the raw data being adjusted.
Why can’t pristine sites with data going back before Jan 2005, be identified? Surely, there must be some such sites, and what does the data from these sites show?
I do not like straight-line linear fits. My eyeballing of the data (e.g., the first plot set out) suggests that temperatures fell during the first 5 years (2005 to 2010), thereafter rose for the next 2 years through to 2012, whereafter temperatures are once again falling.
Of course 10 years of data is far too short. If temperatures are driven by natural factors, at the very least a period encompassing a number of ocean cycles is required, before one can even venture to stick a toe in the ‘game’ by suggesting that the data tells us something important about temperature trends. Hence the reason why an attempt to identify sites with data prior to 2005 is required.
PS. Thanks Anthony, because you have played a substantial role in identifying the shortcomings of the weather station data, and the need to put this on a more scientific footing by upgrading the network. Unfortunately, this has come late in the game, and it is a great shame that it was not addressed in the 1970s when some scientists first considered that there may be concerns about GW. Those scientists have badly let down science by not taking steps to get their most important data source in order.

Kevin Hearle

Given that there are proximity stations in the old network, and that they can be identified, it must be possible mathematically and statistically to back-engineer the temperature difference between this kosher network and the older, longer data sets. At minimum it would be an interesting exercise, even if the error bars grew larger the further back you went.

Komrade Kuma

EPA Central Committee will have a full report on this outrage against the scientific consensus as soon as this faithful comrade has gathered his faculties and can steady his hand enough to hit the keys and generate said report. Meanwhile I shall continue dictating my thoughts to an associate.
Clearly NOAA will shortly be disbanded and its staff sent to a bear infested camp “somewhere in Alaska”.

Another Ian

Re Patrick says:
June 7, 2014 at 11:48 pm
“March of the thermometers”
I think that was Chiefio

thegriss

Mosher is NOT a scientist ….
he is a propagandist…..
Take whatever he says with a pinch of salt….

What is the base period for developing the anomaly?
Is an actual temperature published?

darwin wyatt

Let’s not miss the forest for the trees. It will have to get warm and stay warm for centuries to top the MWP or the RWP. The real danger is from cooling, not warming. The fools had it right the first time.

Sensorman

If Willis is right, then -6 deg C per century – it’s worse than we thought!

Patrick

“thegriss says:
June 8, 2014 at 1:26 am
Mosher is NOT a scientist ….”
Dr. Tol is not a “scientist” either, so what is your point?

David

“The fools had it right the first time”: there were only two choices!

Richards in Vancouver

thegriss says:
June 8, 2014 at 1:26 am
Mosher is NOT a scientist ….
he is a propagandist…..
Take whatever he says with a pinch of salt….
Richards in Vancouver says:
DON’T !!! Those white crystals might not be salt!

Stubben

In Willis graph – switch C for F

Anthony
Interesting study. Thank you
As you know I am especially interested in the historic temperature record back to the 17th century and have written a number of articles. This one dated 2011 dealt with the considerable uncertainties involved
http://wattsupwiththat.com/2011/05/23/little-ice-age-thermometers-%E2%80%93-history-and-reliability-2/
I wonder if you have an opinion as to how many of the historic temperatures are reliable to a few tenths of a degree and WHEN we can say with some certainty that they are truly accurate and representative of the relevant location?
I ask this because I carried out a small experiment (which I don’t claim to be rigorous or scientific) whereby I used two reliable max/min thermometers set some 30 feet apart, one at the conventional height and the other 3 feet higher.
I also took the readings at specified times, for example noon, 3pm and 8am. It was interesting that in only 20% of the readings did the max or min temperature match the reading taken at the specified times.
So how reliable is the global historic temperature and when does it become wholly reliable when all the parameters concerned are consistent ?
Elements that can affect readings and which should be consistent include;
* the use of good quality reliable calibrated thermometers
* all instruments placed at the same height
* All suitably shielded and at the correct orientation
* measured at genuinely the warmest or coolest part of the day
* Used in a pristine non contaminated environment
* consistent like for like basis e.g the reading point did not physically move or become contaminated over time
* the use of the same reliable observer and methodology over protracted time scales, who wrote down the record immediately after the observation
* no political or other overtones-for example temperatures are not adjusted down in order to receive a greater cold weather allowance.
I don’t know if you have ever read Camuffo’s very long and detailed book whereby he examined 7 historic European temperatures and took into account even such things as whether a certain door was in use, or measured the likely effects of growing trees on the outcome?
I suspect that at the least, some inherent variability has been lost and that temperatures-other than in a few well researched cases- should be taken as, at best, merely indicative.
If Mosh wanders along in one of his expansive moods, I would be very pleased to have his take on this..
Tonyb

Mike T

One aspect of older readings which might not be well known is that they may not have been taken at exactly the specified time. Sometimes the screen was some distance from the observing office, especially at airports, and time had to be allowed to get out off the office and back again in time to compose and send messages around the nominal time of observation. This is not so much of an issue with electronic sensors and automatic message coding, as these perform observations right on observation time, and elements which need to be input manually are done beforehand. The time delay issue was not so important for maxima and minima as these are read from respective thermometers which “freeze” the reading until they’re reset.
On a related issue, I question temperature plots which have little error attached to them, given that even the best quality thermometers have an allowable error, often 0.3 degrees Celsius, which in the “old days” might be compounded by operator error (reading to 0.5 or to the whole degree, for instance). I’m also incredulous that the US still uses Fahrenheit.
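The point about compounding errors can be made concrete. Below is a rough sketch, not an official uncertainty budget: it assumes the two error sources are independent and combine in quadrature, and treats rounding to the nearest graduation as a uniform error with standard deviation resolution/√12. The 0.3 °C tolerance and 0.5 °C reading resolution are the figures mentioned in the comment above.

```python
import math

def combined_uncertainty(instrument_tol, reading_resolution):
    """Combine an instrument tolerance with a rounding (resolution) error.

    Rounding to the nearest `reading_resolution` contributes a uniform
    error with standard deviation resolution / sqrt(12); the two
    independent components add in quadrature.
    """
    rounding_sd = reading_resolution / math.sqrt(12)
    return math.sqrt(instrument_tol**2 + rounding_sd**2)

# Thermometer with a 0.3 degC allowed error, read to the nearest 0.5 degC:
u = combined_uncertainty(0.3, 0.5)
print(round(u, 2))  # ~0.33 degC per individual reading
```

Even under these generous independence assumptions, a single historical reading carries roughly a third of a degree of uncertainty, which supports the commenter's doubt about plots drawn with little or no error attached.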

johnbuk

Thank you Anthony, this is to be welcomed, not necessarily for the results but for the way the process has tried to present accuracy above anything else.
I am not a scientist, just a humble, retired, tax-payer, but having watched the shenanigans going on for the past few years through the good offices of yours and other blogs and seeing the unseemly behaviour of the so-called scientific community I am happy to be described as a “skeptic”. (The very fact some try to paint people like me as “deniers” with all the baggage that epithet brings says more about the accusers than the accused).
I appreciate this is a 10 year snapshot and given the context of the earth’s history is not something to base a whole mitigation strategy and tax regime on. Nevertheless I’d like to hear from the climate scientists how they view this apparent anomaly in their “predictions” (I know, I know).
I’d actually welcome someone from that community to actually say, “Well really, we have our views but we really don’t know why this has occurred, we need to go back to our models and review our assumptions”. “Perhaps we need to be more circumspect in our future pronouncements”.
If that were to happen then maybe they might start to rebuild some of the respect they have lost from the general community (ie their ultimate paymasters).
If that doesn’t happen, as I sadly expect, then I’m afraid that will only serve to confirm the public’s sceptic stance that all this is driven by politics as we have come to believe. Even the politicians eventually have to take account of the plebs – there’s more of us.
By the way, on a personal note, aren’t you due a holiday?

BioBob

Keith Minto says: June 7, 2014 at 11:29 pm
Over all, the USCRN network seems a solid system for data collection.

———————-
Not really…
Do the math: AGW proponents typically want accuracy to the nearest 0.01 degree, yet a typical day’s temperature can range from something like 78 °F down to 48 °F. Pinning down a mean to that precision would require an absurdly large number of random samples, NOT 3.
http://www.surveysystem.com/sscalc.htm
http://www.research-advisors.com/tools/SampleSize.htm
So, either you realize you cannot get your certainty down to 0.01, or you just make stuff up and pretend you have the certainty you desire, which is what they do now.
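The sample-size argument can be sketched numerically. As a simplifying assumption (invented for illustration, not taken from the links above), treat the daily temperature as a random variable whose 30 °F range implies a standard deviation of roughly 30/√12 ≈ 8.7 °F; the standard error of a mean then shrinks only as 1/√n:

```python
import math

def standard_error(sigma, n):
    """Standard error of the mean of n independent samples."""
    return sigma / math.sqrt(n)

sigma = 8.7  # rough SD implied by a 30 degF daily range (range / sqrt(12))
for n in (3, 100, 10_000, 1_000_000):
    print(n, round(standard_error(sigma, n), 4))

# With n = 3 the uncertainty is about 5 degF; reaching 0.01 degF would
# take on the order of (sigma / 0.01)**2, i.e. roughly 760,000 samples.
```

Under these toy assumptions, three readings a day leave an uncertainty of several degrees in the daily mean, which is the commenter's point, although real networks reduce this by averaging over many stations and many days rather than within a single day.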

Eliza

Steven Goddard =1, Zeke + Mosher = 0 science value
http://stevengoddard.wordpress.com/2014/06/08/random-numbers-and-simple-minds/
As of today Zeke and Mosher should be considered warmist trolls with entertainment value only. Somewhat similar to William Conolley at most. LOL.
After this is all over in five years they will fade away into nothingness, just like the famous “Phil” at Lucia’s, who was babbling away about disappearing Arctic ice about five years ago and was 100% wrong. We haven’t heard from him in years. LOL

The gist of Steve Goddard’s work is that the past has been cooled considerably.
As this new USCRN only goes back 10 years, it cannot answer the question as to whether such adjustments are justified.

Willis Eschenbach

thegriss says:
June 8, 2014 at 1:26 am

Mosher is NOT a scientist ….
he is a propagandist…..
Take whatever he says with a pinch of salt….

Mosh is indeed a scientist … however, you should still take whatever he says with a grain of salt. Of course, this is true of me as well …
w.

Proud Skeptic

I think the elephant in the room is that none of the datasets seem to agree 100%. How many people do you think believe that there is one, highly accurate, well dispersed source of temperature data going back a century or two upon which all of climate science is based?
Anyone wonder how many people would be shocked to find out how spotty and inconsistent the temperature record is?

Willis Eschenbach

Eliza says:
June 8, 2014 at 2:54 am

Steven Goddard =1, Zeke + Mosher = 0 science value
http://stevengoddard.wordpress.com/2014/06/08/random-numbers-and-simple-minds/
As of today Zeke and Mosher should be considered warmist trolls with entertainment value only.

Say what? At your link, Goddard says:

My approach to calculating temperatures from large data sets uses the same principle – I assume that the errors in the data set are distributed randomly and thus do not bias the absolute temperature or trend. That is the normal way which scientists deal with large data sets which have no systematic bias.
Zeke apparently can’t comprehend this, and believes that absolute temperatures can’t be calculated with data sets which have missing data. His approach is to hide the missing data by calculating anomalies for each month. In doing this, he loses vast amounts of important information – like what I have been uncovering.
Zeke and Mosher think I should the same grossly flawed methodology which they use, and thus come up with the same wrong numbers which they do.

Note that Steve Goddard has not provided a single quote or citation to back up his claims about what Zeke and Mosher “think”, or what they comprehend, or what their “approach” is. It’s all just mud-slinging, which proves nothing except that Goddard can sling mud with the best of the climate alarmists. And you, Eliza, are mindlessly re-slinging Goddard’s mud for him, without a single question about his claims.
As near as I can tell, both Zeke and Mosh are scientists and intelligent, honest men. They provide their data and their code for everyone to examine and find fault with. And they apply those same standards to studies from both sides of the aisle. I can’t ask for more than that from any scientist, and most climate scientists don’t provide anything like that.
So while I may and do disagree at times with both Zeke and Mosh, and although Mosh’s sparse posting style often drives me spare, I would strongly advise that people neither ignore nor dismiss their claims. Neither one is a fool, an alarmist, or a fraud. They, like me, are just honest guys doing the best they know how.
w.
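The methodological dispute quoted above, averaging absolute temperatures versus averaging station anomalies, can be illustrated with a toy sketch. The numbers below are invented for illustration (they are not USHCN data), and the sketch takes no side on whose results are right; it only shows the mechanism: when a station with a different baseline climate drops out, an average of absolute temperatures jumps, while an average of per-station anomalies does not.

```python
# Two hypothetical stations with identical trends but different baselines.
warm = [20.0, 20.1, 20.2, 20.3]   # degC over four periods
cold = [ 5.0,  5.1,  5.2, None]   # last period missing

def absolute_mean(*stations):
    """Average whatever absolute readings exist in each period."""
    out = []
    for vals in zip(*stations):
        present = [v for v in vals if v is not None]
        out.append(sum(present) / len(present))
    return out

def anomaly_mean(*stations):
    """Convert each station to anomalies from its own mean, then average."""
    anoms = []
    for s in stations:
        present = [v for v in s if v is not None]
        base = sum(present) / len(present)
        anoms.append([None if v is None else v - base for v in s])
    return absolute_mean(*anoms)

print(absolute_mean(warm, cold))  # last value jumps to 20.3: a spurious step
print(anomaly_mean(warm, cold))   # trend stays smooth despite the gap
```

This is the standard argument for anomaly-based averaging when station records have gaps; whether the information lost in the process matters, as Goddard asserts, is the point actually in dispute.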

Stephen Richards

Tonyb
You make some good points in your post. In science, normally, we are forced to take past data for what they are. If we understand the measuring techniques and the instruments, it may be possible to recreate the data and make a fair comparison against the latest methods and measuring instruments. I do not get the feeling that this is what government agencies do.

Willis Eschenbach says:
June 8, 2014 at 3:08 am
Sea salt, mayhap?

Jimbo

It would be interesting to compare the new, state-of-the-art system with the older monitoring stations over the last 10 years, and to see how accurate the adjustments applied to the older stations turn out to be.