An 'inconvenient result' – July 2012 not a record breaker according to data from the new NOAA/NCDC U.S. Climate Reference Network

I decided to do something myself that NOAA has so far refused to do: compute a CONUS average temperature for the United States from the new ‘state of the art’ United States Climate Reference Network (USCRN). After spending millions of dollars to deploy this new network from 2002 to 2008, NOAA still reports the U.S. national average temperature from the old one. As readers may recall, I have demonstrated that the old COOP/USHCN network used to monitor U.S. climate is a mishmash of urban, semi-urban, rural, airport and non-airport stations, some of which are sited precariously in observers’ backyards, in parking lots, near air conditioner vents, on airport tarmac, and in urban heat islands. This is backed up by the 2011 GAO report spurred by my work.

Here is today’s press release from NOAA, “State of the Climate” for July 2012 where they say:

The average temperature for the contiguous U.S. during July was 77.6°F, 3.3°F above the 20th century average, marking the hottest July and the hottest month on record for the nation. The previous warmest July for the nation was July 1936 when the average U.S. temperature was 77.4°F. The warm July temperatures contributed to a record-warm first seven months of the year and the warmest 12-month period the nation has experienced since recordkeeping began in 1895.

OK, that average temperature for the contiguous U.S. during July is easy to replicate and calculate using NOAA’s USCRN network of stations, shown below:

Map of the 114 climate stations in the USCRN; note the even distribution.
In case you aren’t familiar with this network and why it exists, let me cite NOAA/NCDC’s reasoning for its creation. From the USCRN overview page:

The U.S. Climate Reference Network (USCRN) consists of 114 stations developed, deployed, managed, and maintained by the National Oceanic and Atmospheric Administration (NOAA) in the continental United States for the express purpose of detecting the national signal of climate change. The vision of the USCRN program is to maintain a sustainable high-quality climate observation network that 50 years from now can with the highest degree of confidence answer the question: How has the climate of the nation changed over the past 50 years? These stations were designed with climate science in mind. Three independent measurements of temperature and precipitation are made at each station, insuring continuity of record and maintenance of well-calibrated and highly accurate observations. The stations are placed in pristine environments expected to be free of development for many decades. Stations are monitored and maintained to high standards, and are calibrated on an annual basis. In addition to temperature and precipitation, these stations also measure solar radiation, surface skin temperature, and surface winds, and are being expanded to include triplicate measurements of soil moisture and soil temperature at five depths, as well as atmospheric relative humidity. Experimental stations have been located in Alaska since 2002 and Hawaii since 2005, providing network experience in polar and tropical regions. Deployment of a complete 29 station USCRN network into Alaska began in 2009. This project is managed by NOAA’s National Climatic Data Center and operated in partnership with NOAA’s Atmospheric Turbulence and Diffusion Division.

So clearly, USCRN is an official effort, sanctioned, endorsed, and accepted by NOAA, and is of the highest quality possible. Here is what a typical USCRN station looks like:

USCRN Station at the Stroud Water Research Center, Avondale, PA

A few other points about the USCRN:

  • Temperature is measured with triple redundant air aspirated sensors (Platinum Resistance Thermometers) and averaged between all three sensors. The air aspirated shield exposure system is the best available.
  • Temperature is measured continuously and logged every 5 minutes, ensuring a true capture of Tmax/Tmin
  • All stations were sited per Leroy 1999 siting specs, and are Class 1 or Class 2 stations by that siting standard. (see section 2.2.1 here of the USCRN handbook PDF)
  • The data goes through quality control, to ensure an errant sensor hasn’t biased the values, but is otherwise unchanged.
  • No stations are near any cities, nor do they have local biases of any kind that I have observed in any of my visits to them.
  • Unlike the COOP/USHCN network where they fought me tooth and nail, NOAA provided station photographs up front to prove the “pristine” nature of the siting environment.
  • All data is transmitted digitally via satellite uplink direct from the station.

So this means that:

  1. There are no observer or transcription errors to correct.
  2. There is no time of observation bias, nor need for correction of it.
  3. There is no broad scale missing data, requiring filling in data from potentially bad surrounding stations. (FILNET)
  4. There is no need for bias adjustments between equipment types, since all equipment is identical.
  5. There is no need for urbanization adjustments, since all stations are rural and well sited.
  6. There are no regular sensor errors, thanks to air aspiration and triple redundant lab grade sensors. Any errors detected in one sensor are identified and managed using the other two, ensuring quality data (see the sketch after this list).
  7. Due to the near perfect geospatial distribution of stations in the USA, there isn’t a need for gridding to get a national average temperature.
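
To illustrate point 6 above: I don’t know NOAA’s exact quality-control algorithm, but a minimal sketch of one common approach, comparing each of the three redundant readings to their median and dropping any sensor that has drifted, would look something like this (the function name and threshold are mine, purely illustrative):

```python
# Hypothetical sketch (not NOAA's actual QC code): combine three redundant
# temperature readings, discarding any sensor that disagrees strongly with
# the median of the three. Threshold and names are illustrative only.
from statistics import median

def combine_triplet(readings, max_spread=0.3):
    """readings: three temperatures in deg C from the redundant sensors."""
    if len(readings) != 3:
        raise ValueError("expected exactly three sensor readings")
    med = median(readings)
    good = [r for r in readings if abs(r - med) <= max_spread]
    # If one sensor has drifted, the other two still define the value.
    return sum(good) / len(good)

print(combine_triplet([21.42, 21.44, 23.90]))  # drifted third sensor ignored -> ~21.43
```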

Knowing this, I wondered why NOAA has never offered a CONUS monthly temperature from this new network. So, I decided that I’d calculate one myself.

The procedure for a CONUS monthly average temperature from USCRN:

  1. Download each station data set from here: USCRN Quality Controlled Datasets.
  2. Exclude stations that are part of the USHCN-M (modernized USHCN) or USRCRN-Lite networks, which are not part of the 114-station USCRN master set.
  3. Exclude stations that are not part of the CONUS (HI and AK).
  4. Load all July USCRN 114 station data into an Excel Spreadsheet, available here: CRN_CONUS_stations_July2012_V1.2
  5. Note stations with missing monthly data. There were three in July 2012: Elgin, AZ (4 missing days); Avondale, PA (5 missing days); and McClellanville, SC (7 missing days). Set their data aside to be dealt with separately.
  6. Do sums and calculate CONUS area averages from the Tmax, Tmin, Tavg and Tmean data provided for each station (see the sketch after this list).
  7. Do a separate calculation to see how much difference the stations with missing/partial data make for the entire CONUS.
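
For readers who would rather not use Excel, here is a minimal sketch of the averaging in step 6, in Python. It assumes you have already pulled one monthly value per CONUS station out of the USCRN files into a simple two-column CSV (station name, monthly temperature in °F); the file name and layout are my own convenience format, not NOAA’s, since the USCRN product files themselves are fixed-width text.

```python
# Minimal sketch of the straight (unweighted, ungridded) CONUS average used above.
# Assumes a hand-built CSV "uscrn_july2012.csv" with lines like:
#   AZ Elgin 5 S,78.2
# One value per CONUS USCRN station, in deg F. Stations with missing days can be
# listed or omitted to reproduce the 111- vs 114-station cases.
import csv

def conus_average(path):
    temps = []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if len(row) != 2:
                continue  # skip blank or malformed lines
            temps.append(float(row[1]))
    return sum(temps) / len(temps), len(temps)

avg, n = conus_average("uscrn_july2012.csv")
print(f"CONUS simple average over {n} stations: {avg:.2f} F")
```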

Here are the results:

USA Monthly Mean for July 2012:   75.72°F 

(111 stations)

USA Monthly Average for July 2012:   75.51°F 

(111 stations)

USA Monthly Mean for July 2012:   75.74°F 

(114 stations, 3 w/ partial missing data, difference  0.02)

USA Monthly Average for July 2012:   75.55°F 

(114 stations, 3 w/ partial missing data, difference  0.04)

============================

Comparison to NOAA’s announcement today:

Using the old network, NOAA says the USA Average Temperature for July 2012 is: 77.6°F

Using the NOAA USCRN data, the USA Average Temperature for July 2012 is: 75.5°F

The new USCRN comes in 2.1°F cooler than the old, problematic network.

This puts July 2012, according to the best official climate monitoring network in the USA, at 1.9°F below the 77.4°F July 1936 USA average temperature cited in the NOAA press release today: not a record by any measure. Dr. Roy Spencer suggested earlier today that he didn’t think it was a record either, saying:

So, all things considered (including unresolved issues about urban heat island effects and other large corrections made to the USHCN data), I would say July was unusually warm. But the long-term integrity of the USHCN dataset depends upon so many uncertain factors, I would say it’s a stretch to call July 2012 a “record”.

This result also strongly suggests that a well-sited network of stations, as the USCRN was designed from inception to be, is totally free of the errors, biases, adjustments, siting issues, equipment issues, and UHI effects that plague the older COOP/USHCN network, a mishmash of problems the new USCRN was designed to solve.

It suggests Watts et al. 2012 is on the right track when it comes to pointing out the temperature measurement differences between stations with and without such problems. I don’t suggest that my method is a perfect comparison to the older COOP/USHCN network, but the fact that my numbers come close, within the bounds of the positive temperature bias errors noted in Leroy 1999, and that the more “pristine” USCRN network is cooler for absolute monthly temperatures (as would be expected), suggests my numbers aren’t an unreasonable comparison.

NOAA never mentions this new pristine USCRN network in any press releases on climate records or trends, nor do they calculate and display a CONUS value for it. Now we know why. The new “pristine” data it produces is just way too cool for them.

Look for a regular monthly feature using the USCRN data at WUWT. Perhaps NOAA will then be motivated to produce their own monthly CONUS Tavg values from this new network. They’ve had four years to do so since it was completed.

UPDATE: Some people asked what the difference is between the mean and average temperature values. In the monthly data files from USCRN, there are these two values:

T_MONTHLY_MEAN

T_MONTHLY_AVG

http://www.ncdc.noaa.gov/crn/qcdatasets.html

The mean is the monthly (max+min)/2, and the average is the average of all the daily averages.
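
To make the distinction concrete, here is a small sketch of the two calculations as I understand them from the USCRN documentation (my reading, not an official definition):

```python
# Sketch of the two monthly temperature values, given per-day data for one month.
def t_monthly_mean(daily_max, daily_min):
    # the monthly (max + min) / 2: midpoint of the averaged daily maxima and minima
    return (sum(daily_max) / len(daily_max) + sum(daily_min) / len(daily_min)) / 2

def t_monthly_avg(daily_avg):
    # the average of all the daily average temperatures (from the sub-daily readings)
    return sum(daily_avg) / len(daily_avg)
```

Note that averaging the daily (max+min)/2 midpoints and taking the midpoint of the averaged daily maxima and minima give the same number, so the sketch uses the latter. The two quantities above generally differ by a few tenths of a degree, which is consistent with the gap between the Mean and Average figures in the results.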

UPDATE2: I’ve just sent this letter to NCDC – to ncdc.info@ncdc.noaa.gov

Hello,

I apologize for not providing a proper name in the salutation, but none was given on the contact section of the referring web page.

I am attempting to replicate the CONUS  temperature average of 77.6 degrees Fahrenheit for July 2012, listed in the August 8th 2012, State of the Climate Report here: http://www.ncdc.noaa.gov/sotc/

Pursuant to that, would you please provide the following:

1. The data source of the surface temperature record used.

2. The list of stations used from that surface temperature record, including any exclusions and reasons for exclusions.

3. The method used to determine the CONUS average temperature, such as simple area average, gridded average, altitude corrections, bias corrections, etc. Essentially what I’m requesting is the method that can be used to replicate the resultant 77.6F CONUS average value.

4. A flowchart of the procedures in step 3 if available.

5. Any other information you deem relevant to the replication process.

Thank you sincerely for your consideration.

Best Regards,

Anthony Watts

===================================================

Below is the response I got when sending to the email address provided in the SOTC release; some email addresses are redacted to prevent spamming.

===================================================

—–Original Message—–
From: mailer-daemon@xxxx.xxxx.xxx
Date: Thursday, August 09, 2012 3:22 PM
To: awatts@xxxxxxx.xxx
Subject: Undeliverable: request for methods used in SOTC press release
Your message did not reach some or all of the intended recipients.
   Sent: Thu, 9 Aug 2012 15:22:43 -0700
   Subject: request for methods used in SOTC press release
The following recipient(s) could not be reached:
ncdc.info@ncdc.noaa.gov
   Error Type: SMTP
   Error Description: No mail servers appear to exists for the recipients address.
   Additional information: Please check that you have not misspelled the recipients email address.
hMailServer

===============================

UPDATE3: 8/10/2012. This may put the issue to rest about straight averaging -vs- some corrected method. From http://www.ncdc.noaa.gov/temp-and-precip/us-climate-divisions.php

It seems they are using TCDD (simple average) still. I’ve sent an email to verify…hopefully they get it.


Traditional Climate Divisional Database

Traditionally, the monthly values for all of the Cooperative Observer Network (COOP) stations in each division are averaged to compute divisional monthly temperature and precipitation averages/totals. This is valid for values computed from 1931 to the present. For the 1895-1930 period, statewide values were computed directly from stations within each state. Divisional values for this early period were computed using a regression technique against the statewide values (Guttman and Quayle, 1996). These values make up the traditional climate division database (TCDD).


Gridded Divisional Database

The GHCN-D 5km gridded divisional dataset (GrDD) is based on a similar station inventory to the TCDD; however, new methodologies are used to compute temperature, precipitation, and drought for United States climate divisions. These new methodologies include the transition to a grid-based calculation, the inclusion of many more stations from the pre-1930s, and the use of NCDC’s modern array of quality control algorithms. These are expected to improve the data coverage and the quality of the dataset, while maintaining the current product stream.

The GrDD is designed to address the following general issues inherent in the TCDD:

  1. For the TCDD, each divisional value from 1931-present is simply the arithmetic average of the station data within it, a computational practice that results in a bias when a division is spatially undersampled in a month (e.g., because some stations did not report) or is climatologically inhomogeneous in general (e.g., due to large variations in topography).
  2. For the TCDD, all divisional values before 1931 stem from state averages published by the U.S. Department of Agriculture (USDA) rather than from actual station observations, producing an artificial discontinuity in both the mean and variance for 1895-1930 (Guttman and Quayle, 1996).
  3. In the TCDD, many divisions experienced a systematic change in average station location and elevation during the 20th Century, resulting in spurious historical trends in some regions (Keim et al., 2003; Keim et al., 2005; Allard et al., 2009).
  4. Finally, none of the TCDD’s station-based temperature records contain adjustments for historical changes in observation time, station location, or temperature instrumentation, inhomogeneities which further bias temporal trends (Peterson et al., 1998).

The GrDD’s initial (and more straightforward) improvement is to the underlying network, which now includes additional station records and contemporary bias adjustments (i.e., those used in the U.S. Historical Climatology Network version 2; Menne et al., 2009).

The second (and far more extensive) improvement is to the computational methodology, which now addresses topographic and network variability via climatologically aided interpolation (Willmott and Robeson, 1995). The outcome of these improvements is a new divisional dataset that maintains the strengths of its predecessor while providing more robust estimates of areal averages and long-term trends.

The NCDC’s Climate Monitoring Branch plans to transition from the TCDD to the more modern GrDD by 2013. While this transition will not disrupt the current product stream, some variances in temperature and precipitation values may be observed throughout the data record. For example, in general, climate divisions with extensive topography above the average station elevation will be reflected as having a cooler climatology. A preliminary assessment of the major impacts of this transition can be found in Fenimore et al., 2011.
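
Item 1 in the GrDD list above (undersampling bias in the simple TCDD average) is easy to demonstrate with toy numbers. The station values below are made up for illustration, not NCDC data:

```python
# Toy illustration of spatial undersampling bias in a simple divisional average.
# Made-up monthly means (deg F) for four stations in one climate division;
# the last is a cool mountain site.
stations = {"valley_a": 78.0, "valley_b": 77.5, "foothill": 74.0, "mountain": 65.0}

full_month = sum(stations.values()) / len(stations)
# Same division, but the mountain station misses the month entirely:
reporting = {k: v for k, v in stations.items() if k != "mountain"}
short_month = sum(reporting.values()) / len(reporting)

print(f"all stations reporting: {full_month:.2f} F")     # 73.62 F
print(f"mountain station missing: {short_month:.2f} F")  # 76.50 F (spuriously warmer)
```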

===============================

Comments (260)
Duncan B (UK)
August 8, 2012 11:30 pm

Love that ‘way too cool!’ sign off Anthony.
Must be frustrating not to show off their shiny new toys.

Earle Williams
August 8, 2012 11:33 pm

I’m curious how the USCRN July 2012 mean or average would compare to the Class 1 & 2 stations in USHCN. There could be some thermo spatial biasing if the numbers and distributions aren’t similar.

STuartMcL
August 8, 2012 11:35 pm

Simples.
In the next round of adjustments, they will just adjust everything in all the other datasets down by 2.1 degrees.
Problem solved

Andrew
August 8, 2012 11:36 pm

I wonder what we would do if we didn’t have dedicated people like Anthony and others to inform the public what really is happening out there. What is so disturbing is that the media doesn’t ask the same questions but instead just accepts what is spoonfed to them. Two years ago I accepted all information unconditionally that was distributed by organisations like NOAA, I trusted them like you would with your own doctor. Now we all need second opinions and WUWT is doing just that and more.

Ray Boorman
August 8, 2012 11:40 pm

Brilliant work Anthony, how long till we hear complaints from the Team about you misusing/misunderstanding the data?
REPLY: About 30 minutes, “team” member Nick Stokes has already got his knickers in a twist, see below. – Anthony

Editor
August 8, 2012 11:48 pm

So they have ignored Anthony’s excellent research because, presumably, it does not fit in with their preconceived notions of AGW.
An anecdotal story here: Our house in the UK has a North-South orientation; we have a small weather station with the external temperature sensor permanently in the shade at the front of the house (facing North). Our house in Spain has an East-West orientation; our weather station there has the external temperature sensor permanently in the shade in a window recess in the West-facing back yard. The back yard has terracotta tiles, and in the summer it is impossible to walk on these tiles in bare feet as they get too hot.
Any day in the UK the temperature (usually) rises slowly as the day progresses and falls slowly as the sun sinks and finally sets; maximum temperature is usually at about 16:00 hours. In Spain in the summer, the temperature as picked up by the sensor rises steadily until about 14:00; it can then rise by 7-10 degrees Celsius in the next 3-4 hours as the terracotta tiles are heated by the radiant heat from the almost overhead sun. Common sense tells me that this is an artificial rise, and it certainly feels a lot cooler in the grassed pool area across the road in the shade of the trees.
Hopefully, as more people read Anthony’s research, the realisation will spread that the incorrect siting of weather stations is the main reason for temperature increases globally.

David
August 8, 2012 11:56 pm

Could I ask what you define as “mean” and “average”? Mathematically, I thought they were the same. Perhaps you are referring to a “mid-range”? Refer to : http://mathcentral.uregina.ca/qq/database/QQ.09.00/julie1.html

August 8, 2012 11:57 pm

This is comparing the absolute temperatures of two different networks. You don’t know whether, for example, the USCRN network has relatively more high altitude stations. And while you say that USCRN has “near perfect geospatial distribution” (measured?), the validity of the comparison will depend on whether the old network is comparably distributed.
That’s why climate scientists generally prefer to deal with anomalies. Otherwise you can’t compare across networks.
REPLY: And anomalies are in the eye of the beholder, pick your own baseline, like Hansen does. Record temperatures for single months reported to the public, as NOAA did today, are not useful when reported as anomalies. The fact is that a better sited and maintained network shows a cooler result. And, that network wouldn’t exist if NOAA didn’t realize what a mess the old network was in. I know you’d just prefer it if this post would go away, so you can continue in that warm cocoon of institutionalized thinking you live in, but it isn’t going to happen.
If NOAA has issues with the presentation, let them put out a CONUS monthly value, my bet is that they won’t. They’ve had 4 years to do it since the network was completed, and they are still sitting on their butts citing the old multi-mangled COOP network data. – Anthony

August 8, 2012 11:59 pm

Look for a regular monthly feature using the USCRN data at WUWT.

If you do that, how much you want to bet they either cut off access to the data or begin to “adjust” it for some reason?

Bob Koss
August 9, 2012 12:00 am

There don’t seem to be any closely paired stations west of eastern Montana. Do you average the temperatures from the individual sets of paired stations? If not, it is probably the thing to do before calculating the CONUS temperature. Or, if the pairs are reliably similar, eliminate one of each pair before calculating. It likely won’t make much difference, but would obviate complaints about areas containing paired stations being given undue weight.

James McCauley
August 9, 2012 12:06 am

Proud to be a WUWTer! This post will be shared with my Senators R. Portman and (aah-hemm) Sherrod Brown, etc. Someone needs to forward to Inhofe, et al.

Venter
August 9, 2012 12:08 am

The first defender of indefensible has already made his appearance with his usual obscure defences. Anthony is talking about what NOAA stated as the ” temperature ” and not anomaly. They stated that ” temperature ” from the old network and kept quiet about their own modern network which shows temperatures over 2 degrees F cooler, got it? Of course you know it very well and are wilfully obfuscating, true to form.
We wait next for Mosher to come with ” Hmm, do this calculation vs. that calculation and use this dataset and see the results ” kind of fly by snark.
REPLY: You can bet on it. Let NOAA put out a monthly CONUS Tavg value then if they don’t like what I’m doing. – Anthony

The Ghost Of Big Jim Cooley
August 9, 2012 12:11 am

Just wanted to echo Andrew and Ray, above. Without people like Anthony and Willis (and many others) we’d have no idea of the truth. Just a small thank you for dedicating your time to showing the actuality, rather than all the underhandedness that goes on in ‘science’ now (although it actually always has!). I love science, but it appears often that it is only a little above religion when it comes to what is real. The people who practice the ‘three-card-tricks’ in climate science don’t seem to understand the damage they are doing to science itself. It’s actually very sad.

Esko
August 9, 2012 12:11 am

How do you know that the 1936 temperature was measured in rural areas? How do you know that it is comparable to your rural area calculations?
REPLY: The US was far less urbanized in 1936, far less population, far less airports, and most airports were small affairs rather than the big concrete megaplexes of today. Read your history, ORD for example. – Anthony

August 9, 2012 12:18 am

Apparently using the old weather system allows them to make the usual claim of CAGW. It truly is a sad situation where a system set up specifically to monitor the temperature accurately is not used. One does wonder if this is an act of manipulation, intentionally instigated to promote the Global Warming hysteria. This is becoming extremely frustrating, as we can no longer rely on the Government specialized departments to demonstrate any standards or impartiality.

Peter Miller
August 9, 2012 12:21 am

Reliable data is as rare as rocking horse poo in today´s climate science, while unreliable, or adjusted, ´homogenised´, cherry picked data is the norm.
This is a classic instance of what happens when you compare reliable data with unreliable data – the scary, supposed problem of global warming goes away. But what would happen to all that funding if the scary supposed problem disappeared?
Answer: It would disappear too.
Solution: Keep using the unreliable data.
I know it would be a lot of work and would probably produce approximately the same result as previously, but would it be possible to apply some sort of area-of-influence weighting around each station? Some kind of polygonal shape weighting.

SandyExDerbyNowLimousin
August 9, 2012 12:30 am

The BBC are covering this in an unattributed story with no right of reply, I wonder why? Possibly because they know it’s a bit flakey?
http://www.bbc.co.uk/news/world-us-canada-19187115

August 9, 2012 12:32 am

It’s difficult to understand what is an average temperature: if you put one hand in a bucket of hot water and the other one in ice cold water will an average temperature mean anything?
If you mix 1 kg of boiling water with 1 kg of ice cold water you may expect a resulting temperature of 50 °C. But you have to mix it. So far, we cannot mix Houston with Minneapolis (any way you can’t mess with Texas)!
If a measuring station is at sea level (Tampa, FL) and another one at an altitude of one mile (Denver, CO) what is the signification of temperature averaged between these two?
Temperature anomalies (the difference between a measured temperature from a timed average for the same station) are understandable and these differences can be treated as cohort for statistical analysis. That’s what we see on all hockey stick or flat diagrams.

REPLY:
Probably the best question to ask is this – how does NOAA calculate the area average absolute temperatures (not anomalies) for the CONUS they use in those press releases, combining that mishmash of dissimilar COOP stations? As far as I can tell they have not published the exact method they use for those. – Anthony

Alex Heyworth
August 9, 2012 12:32 am

Further to your reply to Esko, Anthony, there is a reason why they used to be called airfields (in Anglo usage, at any rate – I don’t know about US usage for the 30s and 40s). Remember all that footage of Spitfires bumping over the turf?

Policy Guy
August 9, 2012 12:38 am

Anthony,
Nothing gives me greater pleasure than seeing an unconnected third party produce results from their own top notch system that contradicts their published results. What idiocy do they operate under? Are they immune to scientific data inquiry? This begs the question, are they supposed to support Hansen regardless of their superior data – so they have to ignore better data in favor of massaged data, but not tell anyone that is what they are doing?
So tell the story correctly. You have an army behind you.

Climate Refugee
August 9, 2012 12:46 am

With global temperature anomaly barely positive and US “extremely hot”, what would the global temp anomaly be with US temp anomaly?

JPG
August 9, 2012 12:48 am

I realise that Hawaii and Alaska are part of the USA, but surely when talking about a US temperature these should be excluded as they are completely different to the rest of the US. It would be like including the Falkland Islands in the temperature of England.
REPLY: They are – see the steps- Anthony

Policy Guy
August 9, 2012 12:49 am

Excuse me,
why are we using the acronym PNAS at all? This was previously stated much earlier. It’s offensive. Is this on purpose?

JPG
August 9, 2012 12:52 am

Ah…my mistake. I missed that in the steps you listed. Ignore my last comment.

Allan MacRae
August 9, 2012 12:55 am

http://www.drroyspencer.com/wp-content/uploads/UAH_LT_1979_thru_July_2012.png
To:
Heads of Departments,
Proceedings, National Academy of Sciences (PNAS)
Dear PNAS Heads:
UAH Global Temperature Update for July 2012: +0.28C,
COOLER than June, 2012: +0.37 deg.
If one wants to argue about GLOBAL warming, should one not look first at GLOBAL temperatures?
Respectfully, Allan

Vince Causey
August 9, 2012 1:14 am

Is Nick Stokes suggesting that the new stations are at a higher altitude than the old stations, or is he just grasping at straws? Nick, if you have analysed the altitudes of every station and compared them with the old, then you may have some grounds for complaint. Have you?

Jimbo
August 9, 2012 1:19 am

NOAA never mentions this new pristine USCRN network in any press releases on climate records or trends, nor do they calculate and display a CONUS value for it. Now we know why.

You can bet your bottom dollar they looked at it. And if it showed a record you can bet your bottom dollar they would have announced it as coming from an untainted, pristine source. This is one of the reasons why scepticism should exist. Never take anyone’s word for it, as the Royal Society’s motto says.

August 9, 2012 1:26 am

Allan MacRae said @ August 9, 2012 at 12:55 am

Dear PNAS Heads:

Was that deliberate? I fell off my chair laughing 🙂
Nice work BTW Anthony. Yer blood’s worth bottling…

Allan MacRae
August 9, 2012 1:27 am

OT but worthy of discussion:
[snip – sure, but on another thread]

August 9, 2012 1:27 am

In reply to Nick – I think the reason why anomalies were picked was to ‘escape’ the degree of error encountered in earlier measurements. In other words, the difference between two temperatures over time could be considered more accurate than the absolute.
Given that, logically, when a reliable absolute temperature measurement network becomes available – ALL BETS ARE OFF. The walk on the ‘wild side’ of historical temperature analysis and all that implies with adjustments, tweaks and tunes is finished. The new network seems to be of sufficient quality to warrant a ‘reset’ in thinking on temperature analysis in the US.
The question should be more one of why is there such a large discrepancy between directly measured reality and the adjusted historical temperature record – why has it strayed so far from reality to flip from a record high to a probable low in comparison??

Ian of Fremantle Australia
August 9, 2012 1:36 am

Nick Stokes @11.57 pm If as you state “That’s why climate scientists generally prefer to deal with anomalies” why doesn’t the NOAA report anomalies rather than or of course as well as, absolute temperatures and why don’t they use the CONUS network? The sceptics will argue that the CONUS network isn’t giving such high temperatures. Surely NOAA could easily refute that by publishing the CONUS results. So why don’t they? Can’t you see that by not doing so this erodes the credibility of the CAGW proponents?

cunningstuff
August 9, 2012 1:52 am

I just wanted to say, I love redundancy. Also, great blog, I rarely comment, but I am lurking and enjoying it, you have great respect from me.

Athelstan.
August 9, 2012 1:55 am

British Television – last night, Ch4 news was reporting from the parched USA and it did say something to the effect of: “hottest July ever”.
They must have a direct line to God/NOAA
http://www.channel4.com/news/is-climate-change-to-blame-for-historic-us-drought

SanityP
August 9, 2012 1:56 am

Sadly, this topic will never get the attention it needs in any of the msm.

TC
August 9, 2012 2:08 am

Anthony, why not write to the NOAA and ask them point blank why they are not using the USCRN data in their press releases? Put your letter/email on the public record (WUWT) and let’s see what they can come up with. They may well have a sound rationale to justify what they’re doing. On the other hand, they may not ….

Bloke down the pub
August 9, 2012 2:15 am

I’m glad to see their investment in the new kit is finally being put to good use, though I suspect they’re wishing they hadn’t started.
When I first saw the location of the site you illustrated at the start of this post, I thought it was the Stroudwater that is near me, but the only thing of interest there is the canal. See here

Maus
August 9, 2012 2:18 am

keith: “The walk on the ‘wild side’ of historical temperature analysis and all that implies with adjustments, tweaks and tunes is finished.”
I think you rather optimistically underestimate ingenuity. From the recent TOBS discussion we’ve learned that the climate folks haven’t been able to read a thermometer and sort out the difference between the midpoint of a range and an average for nearly four decades. And that’s completely aside the notion that the atmosphere is an active ‘heating’ source based on albedo corrected black-body models of the Earth as a lightbulb. That is, a hollow sphere completely enclosing the sun at a distance of 2AU. And this rather than anything trivially or even in the neighborhood of correct by modelling the average temperature as lit by the sun from, you know, the side at a value 86K greater for the irradiated hemisphere on a tidally locked sphere.
I’d love to join you in your enthusiasm, but if the entire field cannot sort out basic mathematics or how to read numbers off a dial for these many decades? I’ll lay my bets on the continued success of NOAA keeping two sets of books.

rogerknights
August 9, 2012 2:22 am

The question should be more one of why is there such a large discrepancy between directly measured reality and the adjusted historical temperature record – why has it strayed so far from reality to flip from a record high to a probable low in comparison??

Yeah. Maybe because:
“An avalanche of answers must be found too fast.”
“To ask the question is to know the answer.”

rogerknights
August 9, 2012 2:31 am

Another question: Why wasn’t something like this set up 25 years ago, not five years ago? Since AGW was a Big Deal by then, and the cost of accurate ground monitoring was peanuts in comparison to the cost of satellite monitoring, why not spend a penny? It would have been useful just for calibrating the satellites. (Hmmm … maybe those satellites should now be re-calibrated downward.)
Possibly because the Team wasn’t interested in ground truth. (There’s a good title for future threads on the topic of this new network.)

rogerknights
August 9, 2012 2:34 am

Another question. Why haven’t countries in Europe established a completely automated setup like this either? A little funding from the EU would have done the trick. Same reason?

Nick Kermode
August 9, 2012 2:36 am

Is it possible to compare temperature measurements from the 1930’s to measurements from stations sited and online from 2002? Is any homogenization necessary?

Nick Stokes
August 9, 2012 2:48 am

Vince Causey says: August 9, 2012 at 1:14 am
“Is Nick Stokes suggesting that the new stations are at a higher altitude than the old stations, or is he just grasping at straws? Nick, if you have analysed the altitudes of every station and compared them with the old, then you may have some grounds for complaint. Have you?”

I hadn’t. It’s usually something you’d need to do before comparing the mean of two sets of stations. But since you asked, I did.
The mean altitude of the CRN stations was 2223 ft, or 667.6 m.
The mean altitude of the USHCN stations in “ushcn-v2-stations.txt” (all USHCN) was 1681 ft, or 512 m.
The difference, 155.6 m, at a lapse rate of 6 °C/km, is 0.93°C, or 1.7°F. Very close to Anthony’s 2.1, and due just to the altitude difference.

REPLY:
You are assuming that they are only using USHCN stations for their press release national average temperature value. They actually provide no reference to the method. As far as I can tell, they could be using a mixture of USHCN and GHCN stations, or parts of, or the entire COOP network. They don’t specify how they come up with the number. They don’t show any source, data, or methods.
The fact that they don’t use this network at all for any public advisement is the most telling. History of actions has shown us that when the warmist crowd has a warmer result, they trumpet it as proof of AGW. They fact we’ve not seen any references to the USCRN in releases suggests they don’t favor the result.
I considered applying a lapse rate calculation. The problem with applying a lapse rate calculation is that, depending on humidity, either the moist adiabatic lapse rate (1.5°C/1,000 ft) or the dry adiabatic lapse rate (3.0°C/1,000 ft) applies. Some days, depending on synoptic conditions, it might be moist, others it might be dry. To do it correctly, you’d have to link in the vagaries of weather for each station.
If NOAA publishes their method for calculation, I can then replicate it. As far as I can tell, they have not applied a lapse rate to the stations, but OTOH there’s no evidence either way, since they don’t publish their method. – Anthony
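
A note on the arithmetic: the size of any altitude correction depends strongly on which lapse rate is assumed. Here is a back-of-the-envelope sketch using the mean-altitude gap quoted above, under the moist, standard environmental, and dry rates mentioned in the reply (purely illustrative arithmetic, not anyone’s published method):

```python
# Back-of-the-envelope: temperature offset implied by a mean-altitude gap between
# two networks, under different assumed lapse rates. Numbers are illustrative only.
altitude_gap_m = 155.6          # quoted USHCN-vs-CRN mean altitude difference
rates_c_per_km = {
    "moist adiabatic (~1.5 C/1000 ft)": 4.9,
    "environmental (6 C/km)": 6.0,
    "dry adiabatic (~3.0 C/1000 ft)": 9.8,
}
for name, rate in rates_c_per_km.items():
    delta_c = altitude_gap_m / 1000.0 * rate
    print(f"{name}: {delta_c:.2f} C = {delta_c * 1.8:.2f} F")
```

The implied offset ranges from roughly 1.4°F (moist) to 2.7°F (dry), which is why the choice of lapse rate matters so much to this argument.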

TimC
August 9, 2012 2:48 am

Priceless – all the hard work is reaping just rewards!
Modest contribution is in the tip jar …

dearieme
August 9, 2012 2:58 am

“Esko says:
How do you know that the 1936 temperature was measured in rural areas? How do you know that it is comparable to your rural area calculations?”
In addition to Anthony’s excellent reply, let me add that you miss the point. I dare say nobody knows how to correct the 1936 data for any imperfections of measurement and siting, but Anthony’s work is sufficient of itself to prove (and I mean prove) that comparison of the shonky 2012 data set with the 1936 data set does NOT securely demonstrate a record high temperature for July.

August 9, 2012 3:04 am

Is not “mean” the same as “average”? Is one of the terms meant to be “median” or mid-range? Reference : http://www.mathsteacher.com.au/year8/ch17_stat/02_mean/mean.htm

wayne
August 9, 2012 3:07 am

Anthony,
■ Temperature is measured continuously and logged every 5 minutes, ensuring a true capture of Tmax/Tmin
That is why it is hotter in 2012 than in the 1930s… they were not measuring Tmax every five minutes in the ’30s. I have downloaded the Oklahoma City hourly records daily since June 22nd, and never was the highest hourly maximum what was recorded for the maximum of the day; the maximum was consistently two degrees Fahrenheit greater than that of the highest HOUR, but evidently they count 5-minute mini-microbursts of heat today instead. I guess hourly averages are not even hot enough for them (yeah, blame it on CO2). That, by itself, invalidates all records being recorded today to me; I don’t care how sophisticated their instruments are… the recording methods themselves have changed, and anyone can see it in the “3-Day Climate History” hourly readouts given for every city on their pages. Don’t believe me? See for yourself what is going on in the maximums. Minimums rarely show this effect, for cold is the absence of thermal energy, not the energy which can peak up for a few minutes, much more than cold readings.
You’re a meteorologist, how do you see this discrepancy?

August 9, 2012 3:13 am

Are not “mean” and “average” the same term? Do you mean “median” or “mid-range” instead?
Reference : http://www.mathsteacher.com.au/year8/ch17_stat/02_mean/mean.htm

Dodgy Geezer
August 9, 2012 3:23 am


“.. Never take anyones word for it as the Royal Society’s motto says…”
Though, famously, that is NOT what the actual Royal Society says. Their motto may say that, but the Royal Society says ‘Respect the Data’ (in other words, do not seek to investigate what we have told you is true.)…

August 9, 2012 3:26 am

I am a little confused. What is the difference between:
USA Monthly Mean
USA Monthly Average
Mean and average are synonyms. If you are using each to distinguish something, it would be nice if you defined how you’re using the terms.

cedarhill
August 9, 2012 3:26 am

Since it appears NOAA is “political”, one of the committees in Congress should hold a hearing on this very subject regarding why, after a lot of tax money has been spent on “pristine” stations, NOAA chooses not to inform the people that paid for it.

Bill Marsh
August 9, 2012 3:28 am

So I’m an American taxpayer and I’m sitting here reading that NOAA spent millions of my tax dollars four years ago to create this new network and they aren’t using it? Sounds like the IG of NOAA should be investigating this as an example of waste of government funds (IGs are tasked with investigating fraud, waste, and abuse in their respective agencies). The IGs of other agencies will investigate things like this with a far lower dollar value.

Graham
August 9, 2012 3:42 am

Another question pertinent when comparing temperatures to 1936 would be how accurate 1936 thermometers were, where they were sited and how their temperature was recorded.
To my knowledge getting 0.01degC/F accuracy from a 1930s thermometer was impossible, another reason to dismiss the meaningless babble from the priests of pseudo-science.

Peter in MD
August 9, 2012 3:52 am

It would be interesting to see how many “old” stations are near enough to any of these new stations and compare what they each report for the same time period.

John Doe
August 9, 2012 3:55 am

Wonderful, Anthony. Take full advantage of it. At 2.1°F lower than COOP-USHCN, record lows ought to balance record highs, making yet another point.

JJB MKI
August 9, 2012 4:11 am

Stokes
Last time I looked, the GISS data set for England was constructed from a selected homogenised set of over 70 stations in the early 20th century, spanning both rural and urban locations, narrowing down to about a dozen stations, all located at busy airports in the present day, with the information presented as anomalies. No obvious reason given for the data cull btw, as the culled stations did not stop reporting. By your own logic, it would be fallacious to use GISS to claim warming over this period.
J Burns

August 9, 2012 4:17 am

Anthony:
I am a little confused here with your terminology. You say
USA Monthly Mean
USA Monthly Average
Mean and average are synonymous. If you are using them to express a conceptual distinction, could you please define the terms?
— Sinan

JJB MKI
August 9, 2012 4:17 am

Where have we seen this before? Government agency drunk on hubris and funding spend a large amount of money on a new project designed to empirically prove AGW. New project delivers results that fail to prove AGW. Government agency pretend project never existed, rush back to safety of computer models until the data can be statistically massaged. ARGO / Envisat anyone?

Fred
August 9, 2012 4:42 am

Love the idea of a USCRN temp like Roy Spencer does for the sat record.
I am a PhD, know how hard this is, and still do not trust the ever increasing temp offered by Hansen. This ignores the clear bias of the key temp players at NOAA and NASA.
Also, please start using both poles in the ice reports…

starzmom
August 9, 2012 4:50 am

In reply to Nick–no one is talking about anomalies except you. NOAA said July 2012 was the hottest ever at 77.6 degrees, and Anthony showed that a better network came in at 75.something (depending on how you calculate). Where is the anomaly in that?

Editor
August 9, 2012 4:54 am

So, are you going to redo all that for the previous Julys? My guess is given the widespread heat, this July is warmer than the others, but worth doing anyway.

Dodgy Geezer
August 9, 2012 5:00 am


“..You can bet your bottom dollar they looked at it. And if it showed a record you can bet your bottom dollar they would have announced it as coming from an untainted, pristine source. ..”
Since the USCRN network is ‘new’, then ALL data on it will probably be a record. I’m surprised that NOAA haven’t tried that one….

August 9, 2012 5:03 am

Nick Stokes said (August 8, 2012 at 11:57 pm):
“…This is comparing the absolute temperatures of two different networks. You don’t know whether, for example, the USCRN network has relatively more high altitude stations. And while you say that USCRN has “near perfect geospatial distribution” (measured?), the validity of the comparison will depend on whether the old network is comparably distributed.
That’s why climate scientists generally prefer to deal with anomalies. Otherwise you can’t compare across networks…”
This is why people suggested that the new USCRN stations should be compared to the closest COOP/USHCN stations over the same period of time (say, for example, four years).
That way, any biases in the old stations can be identified, making it easier to merge the records.
Anthony, since you appear to have data on both, how about a comparison of the new USCRN stations to the closest COOP/USHCN stations (including their Leroy 2010 ratings)?

Editor
August 9, 2012 5:05 am

Nick Stokes says:
August 8, 2012 at 11:57 pm

… the validity of the comparison will depend on whether the old network is comparably distributed.
That’s why climate scientists generally prefer to deal with anomalies. Otherwise you can’t compare across networks.

I thought people liked to use anomalies because they’re easier to compare between different months. Suppose the average April temperature is 50°F. Even within the same old USHCN network where you’d say the spatial distribution is unchanged or the satellite record, people talk about anomalies because that allows comparison over time.
The CRN record is too short to have a really good mean to compare against, so computing an anomaly would be problematic. Perhaps if we took monthly averages from CRN and subtract the anomaly of the “compliant” USHCN stations, you’d have a less noisy record until CRN can stand on its own.

Editor
August 9, 2012 5:13 am

Bob Koss says:
August 9, 2012 at 12:00 am

There don’t seem to be any closely paired stations west of eastern Montana. Do you average the temperatures from the individual sets of paired stations?

The intent of the paired station is to test the hypothesis that two stations close together (a few miles for the Durham NH stations) will report similar data under similar conditions. I assume things like thunderstorms and sea breezes will produce mismatched data at Durham from time to time.
It would be appropriate to discard one station or average the pair.
There are also some stations that are mal-sited as experiments to see how they track better ones. After all, I sure can’t find a Leroy #1 site at my mountain property.
One is sited at Mauna Loa where the CO2 measurements are made. There may not be any others.

Steve Keohane
August 9, 2012 5:22 am

This is excellent Anthony. Thanks for letting us know. Sounds like this is just what is needed for a climate reference base.

Nick Stokes
August 9, 2012 5:24 am

Update. In my calc (2.48 am) of the average altitude of CRN stations, I hadn’t excluded AK and HI, and the form in which they were returned (from this page) included some duplicate values. Fixing this brought the average altitude to 2263 ft, or 690 m. That makes the altitude difference 178 m and the consequent difference (at a lapse rate of 6 °C/km) between USHCN and CRN 1.9°F.
REPLY: Maybe if your assumptions are correct, but you are just making guesses, see my note above. One of the things I’ve been doing the past few months is looking at the daily temps from CRN stations -vs- some USHCN and GHCN stations nearby. Since NOAA doesn’t tell us how they calculate the CONUS Tavg for their press release, we don’t know if they apply a lapse rate adjustment or not or whether they use moist adiabatic lapse rate or dry adiabatic lapse rate.
Once they publish their method, we’ll know if your approach has any merit. – Anthony

Tom in Florida
August 9, 2012 5:29 am

Stokes does make a valid point. Comparing raw data from different sets is not correct when looking for changes. If one is concerned about how much of a change has happened then anomalies are used because it really doesn’t matter what the actual data is, it is the change that matters. So the purpose of using the old temperature sets when looking for changes over time can be valid. However, in this case, the ability to accurately compare the temperature changes over time from the old COOP/USHCN network is totally compromised by siting issues, adjustments to data and the reliability of the people reporting the readings. Of course if the anomaly from the corrupted set tends to favor your point of view, you are more likely to use it and hope no one notices the problems with it. (et tu Nick?).
I believe it is better to use the new system and start over. We will know quite soon if there is a warming problem.

Kev-in-Uk
August 9, 2012 5:32 am

Gawd, I wonder if Nick Stokes has any idea how stupid he sounds (writes!)? It’s really simple Nick, there has been created a nice spanking new instrument set and subsequent data retrieval system which is all state of the art. Get lost with your anomaly usage rubbish – that’s just a way of hiding bad data via some assumed ‘self cancelling’ arrangement of errors (which is a bogus assumption to start with!) – the trouble is, this new system has NO errors (to speak of) and thus in real terms is the new reference ‘line’ – or at least darn well should be!
Anthony is perfectly correct and justified in asking the question of how a supposed perfect system does NOT actually confirm a ‘derived’ record temperature – and clearly shows said ‘derived’ data is obviously flawed …..
Excellent work Anthony – though I’d suggest that replying to Stokes was rather futile – horses to water and all that…..

Coach Springer
August 9, 2012 5:32 am

Thanks very much.
I’m with cedarhill. Inhofe or someone should start an inquiry just to draw attention to the whole measurement issue. When a government agency rushes out with numbers they know – or purposely ignore – are contradicted or at least seriously questioned by better measurements, compares them to measurements that have been adjusted downward contrary to the measurement bias from urbanization, *and* doesn’t report the contradictions with equal prominence, they are committing misleading propaganda. Seriously, if a company tried this type of non-disclosure in financial statements, they’d be hauled up by the SEC and investors would be suing for misrepresentation and winning. (Of course, the chair of the SEC is a warmist herself and would be disinclined to take action if in charge of policing disclosure by government scientists.)

pochas
August 9, 2012 5:37 am

Anthony deserves lots of credit for having the courage to be a critic of the waste and prevarication that comes from Big Government in charge of anything. I hope he will persist, because Big Government will resist shutting down that old useless network and saving the expense needed to maintain it. Big Government wants only to get bigger.

August 9, 2012 5:40 am

I, too, wondered what the difference between mean and average was. While I have a limited understanding of statistics, using averages and anomalies seems problematic. Averages, as another comment noted, are notoriously “average”, showing virtually nothing about the input data itself. Anomalies, I thought, were values that did not fit the “average” and may or may not have any significance. A pattern of anomalies, at some point, becomes just a pattern. This obsession with global averages seems like smoke and mirrors. It’s hot in the US and snowing in Norway. So what? Since we have these giant supercomputers, maybe we need to do data on many, many cities and towns, then compare the ups and downs of the various locations to other locations and look for trends upward and downward. It makes more sense, though I am sure there is some statistician who can explain why this is not being used.

Editor
August 9, 2012 5:41 am

George says:
August 8, 2012 at 11:59 pm

Look for a regular monthly feature using the USCRN data at WUWT.
If you do that, how much you want to bet they either cut off access to the data or begin to “adjust” it for some reason?

From the start of http://www.ncdc.noaa.gov/oa/about/open-access-climate-data-policy.pdf :

NOAA/National Climatic Data Center Open Access to Physical Climate Data Policy
December 2009
The basic tenet of physical climate data management at NOAA is full and open data access. All raw physical climate data available from NOAA’s various climate observing systems as well as the output data from state-of-the-science climate models are openly available in as timely a manner as possible. The timeliness of such data is dependent upon its receipt, coupled with the associated quality control procedures necessary to ensure that the data are valid. In addition, the latest versions of all derived data sets are made available to the public. NOAA also provides access to all of its major climate-related model simulations.

The NCDC lists “Examples of Potential Benefits” at http://www.ncdc.noaa.gov/crn/programoverview.html and refers to “Commercial sector” and ten US gov’t entities but leaves off “Citizen science.” If they do restrict access to us, it would be fun to stage a Frankenstein-like storming of NCDC HQ with pitchforks, torches, and a video production company.
Perhaps someone would like to take on the task of getting NCDC to include “Citizen science” in their list. A good candidate would be someone active in the US Variable Star Observing Program, which may be the purest example (after the NWS Coop observer program, modulo the term “high”) of citizens providing high quality data to a government research program.

Alan D McIntire
August 9, 2012 5:47 am

I’m confused by “monthly mean” and “monthly average”. “MEAN” IS the arithmetic average.
Do you mean “monthly median” = 1/2 (Hi + Lo) for monthly average?

donald penman
August 9, 2012 5:52 am

I look forward to your paper on US stations being published and also the USCRN data being featured every month. We all know that you and others have put many years of work into this project and deserve a more even handed review from the media than you have so far had. We have to hope that the truth is given a chance to be heard and not just what some people would like to believe is going wrong with the Earth’s climate (AGW).

Colin Porter
August 9, 2012 5:53 am

Would it not be a good idea to take Google Earth screen shots of all USCRN station locations now, plus Google Earth Street View where available? Then in years to come, as urban development envelops some of these stations and NOAA/NCDC claim that these are quality sites not requiring UHI compensation adjustments, Anthony Watts Junior will be able to prove otherwise.
REPLY: They already have this in GE on the USCRN web page so yes you can get the pics now, but there’s no streets near these stations, so street view is out.
I suspect we won’t see much change, the leases are on places like national parks and nature reserves. – Anthony

Spence_UK
August 9, 2012 5:54 am

While I agree that care is required comparing apples and oranges (e.g. these two networks), the disparity doesn’t surprise me at all. Whilst the likes of Nick Stokes et al can hand wave about anomalies and adjustments dealing with this, the reality is this delta – 2.1 deg F – is not a fixed delta but varies with time, and the reality is that the various adjustments and statistical tricks used to generate the temperature series can only remove a percentage of this error.
Which is why I’ve argued that the REAL confidence intervals – even for the anomalies – are some large fraction of that 2.1 F – probably of the order of 1F or so (and I mean 1-sigma here). You can come up with clever algorithms that hide this error, but the corrections are imperfect and arguments will always continue about how effective they truly are. The only realistic way to deal with that today is to widen the CIs.

August 9, 2012 5:54 am

Hmm, well I would have said that using the old network to compare this year’s temperatures with temperatures gathered in the past /using the old network/ would be a more accurate comparison than comparing old network temps from the past with new network temps from the present.
I applaud the work and thought that went into the new network as described. However, comparing data from that network with data collected from the old network is apples and oranges. And I agree that the new network data is/should be more accurate than the old. It just isn’t a good comparison. I have other thoughts on Global Warming but that isn’t the subject of this discussion.

August 9, 2012 5:56 am

[SNIP: Yes it is OT. Please submit to Tips and Notes or some other more appropriate thread. -REP]

scadsobees
August 9, 2012 6:00 am

Anthony, you made one glaring error – you didn’t apply any adjustments to that dataset!! Once you do that, the average temps will come out to around 78.4F, warmer than it has been for 3500 years!!

BitBucket
August 9, 2012 6:12 am

Could you work out the peak temperature in 1936 using the class 1/2 stations of the time (if known)? Then there would be a better basis for comparison.

Ed_B
August 9, 2012 6:14 am

I don’t understand why Nick, who is described above as a “team” member, has not already done this analysis and posted it. /sarc off

August 9, 2012 6:16 am

Funnest and most satisfying thing I’ve read in quite awhile. Thanks Anthony!

Editor
August 9, 2012 6:17 am

Is there any way we can compare the sites in USCRN with sites in nearby areas in earlier years?
At least then we could look at the change in temps, and compare to the change given by USHCN.

August 9, 2012 6:25 am

Oh geez … who’s in charge over there at NOAA?
Are they operating ‘headless’ (like “Mike the Headless Chicken”)?

Bob Koss
August 9, 2012 6:30 am

Anthony,
Looking at your spreadsheet I noticed some stations have their temperature data incorrectly assigned to another station with the same base name. It only occurs when one station has a single digit appended while the other one has two digits appended. Your spreadsheet is doing an ascii sort while the CRN folder structure lists common names with appended digits by numerical value. All the data is present in the spreadsheet, so it doesn’t affect the column figures, but in these few cases it is assigned to the opposite station with the same name. It’s trivial in most cases. Just thought I’d give you a heads-up.
Here are the ones I found in the spreadsheet which are reversed from the folder structure.
GA Newton 8 W
GA Newton 11 SW
NC Asheville 8 SSW
NC Asheville 13 S
NE Lincoln 8 ENE
NE Lincoln 11 SW
REPLY: I thought I had dealt with this sorting issue early on, but I’ll check again. You are right, it doesn’t affect the outcome – Anthony
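As an illustration of the sorting issue Bob Koss describes, here is a minimal Python sketch of a natural-sort key that compares the appended digits numerically rather than by ASCII. It is a hypothetical helper for readers reproducing the spreadsheet, not the logic of Anthony’s actual workbook:

import re

def natural_key(name):
    # Split "GA Newton 11 SW" into text and number chunks so the appended
    # distance digits compare numerically (8 before 11) instead of by ASCII.
    return [int(tok) if tok.isdigit() else tok.lower()
            for tok in re.split(r"(\d+)", name)]

stations = ["GA Newton 11 SW", "GA Newton 8 W",
            "NC Asheville 13 S", "NC Asheville 8 SSW"]
print(sorted(stations, key=natural_key))
# ['GA Newton 8 W', 'GA Newton 11 SW', 'NC Asheville 8 SSW', 'NC Asheville 13 S']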

gregole
August 9, 2012 6:42 am

Thanks Anthony!
Just first rate work and it is appreciated. Also speaking personally, it is just great that you provided, essentially, instructions for how you derived your work so each of us can replicate these results going forward and don’t have to rely on the official and all too often, inaccurate results trumpeted in the media and by alarmists.

Sus
August 9, 2012 6:52 am

[snip fake email address, proxy server, policy violation]

Bill
August 9, 2012 6:58 am

Anthony,
What is the difference between the average and the mean in the way you do your calculations? Is average obtained by dividing by n-1 and mean by n?
If you include error bars, I don’t see that the difference between 1936 and 2012 for July would be significant. Adjusting temperatures for Tobs or any other factor should increase the error bars, and those should always be reported – even in a newspaper, although if the newspaper decides to leave them off, that is their call. But a real scientist would be giving the media the error bars.

Editor
August 9, 2012 6:58 am

Meanwhile July in the UK was 1.0C colder than average. This does not seem to have made headline news at the BBC. Wonder why?
http://notalotofpeopleknowthat.wordpress.com/2012/08/09/uk-weather-reportjuly-2012/

Bill
August 9, 2012 7:05 am

Actually I meant the opposite: did you get mean by dividing by n-1 giving a slightly larger number and average by dividing by n, giving a slightly smaller number?

Rick K
August 9, 2012 7:07 am

Great work, Anthony! Perhaps your “monthly feature using the USCRN data” could be compiled and listed in your reference pages…

climatebeagle
August 9, 2012 7:17 am

> Temperature is measured continuously and logged every 5 minutes, ensuring a true capture of Tmax/Tmin
Is this 5 minute data available? I could only find references to hourly data for USCRN:
“Each USCRN station has three thermometers which report independent temperature measurements each hour. ”
http://www.ncdc.noaa.gov/crn/elements.html#temp

REPLY:
It is not available AFAIK, but this document might shed some light on the 5 minute data and how it is used: http://www1.ncdc.noaa.gov/pub/data/uscrn/products/hourly02/README.txt
-Anthony

JimB
August 9, 2012 7:18 am

Nick Stokes has a good point: if you are looking for trends, you need to use the database you already have. The new system cannot be compared to the old *for purposes of trend analysis*. But for an absolute temperature as of now, the new system is perfectly fine, thank you.
REPLY: I’m not looking at trends, just a single monthly number to compare to the monthly number in the NOAA press release. – Anthony

Bill
August 9, 2012 7:19 am

@ Nick Stokes
Nick,
Is the lapse rate really identical at every spot on the planet and is it really linear such that a site that is 100 m higher in Colorado between the two networks will have the same deltaT as a site in New Orleans that is 100 m higher?
At any rate, you are correct that this has to be taken into account and that the difference in temp. is probably much smaller than the 2.1 F Anthony was rightly upset over. However, even the 0.2 degrees difference is exactly the same as the difference between 1936 and 2012. Plus you need to add in ALL of the proper errors to get error bars.
My problem is that “scientists” go along with people like Borenstein and deliberately allow scary misrepresentations to be made. This further exacerbates the distrust that skeptics rightly have of the politicization and hyperbole that have become so common in the climate debate.

Doug Proctor
August 9, 2012 7:22 am

Paul Homewood says:
August 9, 2012 at 6:58 am
Meanwhile July in the UK was 1.0C colder than average. This does not seem to have made headline news at the BBC. Wonder why?
Paul,
Because it is not news when 15 old ladies are helped across a busy street, but it is when one young girl is leered at in the subway.
There’s observation bias, recording bias and reporting bias. Bad things – or “potentially” bad things, get the benefit of all three.

BobM
August 9, 2012 7:33 am

Anthony, with all the work you’re doing on this would a new temperature record be appropriate? Perhaps the WASSP (WAtts Surface Stations Project) Temperature?

Rober Doyle
August 9, 2012 7:34 am

Anthony,
If your download process for the new network is correct, the erroneous July press release may reveal the reason why there has not been a switch to the new network. We would have “the coldest [pick a month] on record”. I predict a February switch.
http://www.youtube.com/watch?v=2SlwV7mtsmw

August 9, 2012 7:35 am

Anthony’s contribution here is to flag the NOAA announcement as coming not from the USCRN, their Climate Reference Standard.
Now it is true that USCRN doesn’t have a long enough history to establish a trend. However, USCRN is long enough to SUPPORT a claim that this is the HOTTEST July. For if it is the hottest July since 1936, then it must be the hottest July since USCRN was set up.
So is July 2012’s USCRN value hotter than every other July in the USCRN record? I’m not asking anyone to do the work; someone needs to answer it, and NOAA is paid to do it.

mtl4u2
August 9, 2012 7:39 am

“If NOAA has issues with the presentation, let them put out a CONUS monthly value”
Why should they use your method? Because you say so?
REPLY: Nooo, they can use any method they want, but they should produce a monthly value. The fact that they aren’t is the most telling part. – Anthony

Jeff T
August 9, 2012 7:43 am

Comparing two different measurement networks in the way that this post does is so obviously wrong that it damages your credibility.
REPLY: And yet USCRN exists for that very purpose, to compare:
http://www.ncdc.noaa.gov/crn/programoverview.html
Performance Measures
The USCRN was established to help detect climate change in the United States. In order to assess the performance of the network in addressing this goal a performance measure (PM) was developed. This PM is an assessment of how closely the current and past configuration of the network captures the “true” national temperature and precipitation signal as defined by an area-averaged time series of annual temperature and precipitation derived from 4000 U.S. Cooperative Observer Program (COOP) Network stations scattered across the continental U.S. The configuration of the USCRN for a given point in time is used to select stations from the 4000 COOP station network, one station for each operating USCRN site (the one physically closest in location), and the time series derived from these stations is compared, statistically, to the time series derived from all 4000 stations. The result is a “variance explained” that measures how closely the “USCRN” time series follows the “true” time series.

John F. Hultquist
August 9, 2012 7:44 am

Nick Stokes says @ 2:48 and 5:24 regarding elevations
at a lapse rate of 6 °C/km, . . .
Seems this is an average of averages of lapse rates, both unsaturated (dry) and saturated (wet). I don’t know what else one would do with this issue, so I’ve no complaint. However, this just adds to the uncertainty of knowing what average temperature is. Also, thanks for the contribution and the update.
—————————
Stuff pedantic: The symbolism of six degrees Celsius should not look the same as six Celsius degrees. The first is a temperature, the second is a change. That most everyone does this incorrectly does not make it right. The same type of thing happens with A.M. and P.M. being used with 12:00 o’clock. But who cares about such truffles? Oops! Wrong word.

John@EF
August 9, 2012 7:44 am

REPLY: I understand exactly how anomalies work. Different baselines give different offsets for anomaly values. – Anthony
====
Seriously …. So what? Sus is exactly right, yours is a non-point. The only impact of baseline selection is the scale on the Y axis. When you compare two temperature series with different base periods, one simply converts them to a common base period. There’s no impact on the data line shape or trend.
Stokes’ point regarding station altitude seems valid too, at least must be considered before asserting claims. If you’re accusing him of “guessing”, you certainly are as well.
REPLY: And all that is fine, because we aren’t talking about anomalies, nor trends, but absolute temperatures for ONE MONTH. So it’s all just pointless distraction. Since NOAA doesn’t tell us what method they use to calculate the US monthly CONUS average temp, we are all guessing at that. The most important point here is that they aren’t using this network to try to provide any sort of sanity check to the poorly sited mishmashed highly adjusted train wreck that is the COOP/USHCN/GHCN networks – Anthony

David C. Greene
August 9, 2012 7:45 am

Great contribution, Anthony! Nick Stokes’ (corrected) comment about differences in station altitude and lapse rate makes sense to me. The paucity of quality sites at lower altitude can be verified using Google Earth. My favorite measure of long term trend is ocean heat content as obtained from the Argo buoys. Even there, (long term) time-averaging is necessary because of the annual variation in the “global” average heat content. Last I looked Loehle found recent cooling, while the Argo project’s Josh found no significant change in ocean heat content.
I am looking forward to publication of the Watts et al paper showing the sliced and diced data from the contiguous US surface stations.

RobertInAz
August 9, 2012 8:02 am

Looking at the NCDC Map here,
http://www.ncdc.noaa.gov/oa/climate/research/cag3/cag3.html
I see “hotspots” over St. Louis, Sioux Falls, Montgomery. Other “hotspots” are not so closely tied to cities. One wonders about the sensors in these areas.

Rattus Norvegicus
August 9, 2012 8:09 am

Tony, you might try clicking on the “datasets” link from the NCDC page you linked to:
“Temperature – USHCN Version 2 Adjusted Data”
Nick is right you are wrong.
REPLY: My name is Anthony.
Show me how NOAA calculates the value in the press release, how they deal with the fact that weather will change the adiabatic lapse rate for each station on a daily basis, and how they account for that. If NOAA shows how they calculate the Tavg for the CONUS, I can duplicate it. But they aren’t showing it, so neither you nor Nick knows the answer. That’s the point of this whole exercise: to get that answer and apply it to the USCRN.
No curiosity with either of you gents, just tribal derision. You aren’t even happy at the prospect that the temperature might not be as bad as proclaimed. – Anthony

Bob Koss
August 9, 2012 8:11 am

Alan D McIntire and other queries on Tmean & Taverage,
Here is how they define the CRN values.
cols 41 — 47 [7 chars] T_MONTHLY_MAX
The maximum air temperature, in degrees C, for the month. This maximum
is calculated as the average of all available day-maximums. To be
valid there must be less than 4 consecutive day maximums missing,
and no more than 5 total day maximums missing.
cols 49 — 55 [7 chars] T_MONTHLY_MIN
The minimum air temperature, in degrees C, for the month. This minimum
is calculated as the average of all available day-minimums. To be
valid there must be less than 4 consecutive day minimums missing,
and no more than 5 total day minimums missing.
cols 57 — 63 [7 chars] T_MONTHLY_MEAN
The mean temperature, in degrees C, calculated using the typical
historical approach of (T_MONTHLY_MAX + T_MONTHLY_MIN) / 2
cols 65 — 71 [7 chars] T_MONTHLY_AVG
The average air temperature, in degrees C, for the month. This average
is calculated using all available day-averages, each derived from
24 one-hour averages. To be valid there must be less than 4 consecutive
day averages missing, and no more than 5 total day averages missing.
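To make the mean-versus-average distinction concrete (and to answer the n versus n-1 question raised earlier: neither statistic uses n-1), here is a minimal Python sketch of the two quoted definitions. The daily values are invented and the missing-day validity rules are ignored:

def monthly_mean_and_avg(day_max, day_min, day_avg):
    # T_MONTHLY_MEAN uses the historical (Tmax + Tmin) / 2 convention, while
    # T_MONTHLY_AVG is the plain mean of the day-averages (each of which the
    # CRN builds from 24 one-hour averages).
    t_monthly_max = sum(day_max) / len(day_max)    # average of day maximums
    t_monthly_min = sum(day_min) / len(day_min)    # average of day minimums
    t_monthly_mean = (t_monthly_max + t_monthly_min) / 2.0
    t_monthly_avg = sum(day_avg) / len(day_avg)    # average of day averages
    return t_monthly_mean, t_monthly_avg

# Three invented days of (max, min, 24-hour-average) values in deg C:
mean_c, avg_c = monthly_mean_and_avg([33.0, 35.5, 31.2],
                                     [18.0, 20.1, 17.4],
                                     [25.0, 27.3, 23.9])
print(round(mean_c, 2), round(avg_c, 2))   # the two need not agree

The two numbers need not agree, because (Tmax + Tmin)/2 weighs only the daily extremes while the day-average draws on the full 24-hour record.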

John@EF
August 9, 2012 8:18 am

“REPLY: And all that is fine, because we aren’t talking about anomalies, nor trends, but absolute temperatures for ONE MONTH.”
===
The point being, baseline selection has no impact whatsoever on the data no matter how it’s used – you appear to believe differently.
***
“… So its all just pointless distraction. …”
===
Exactly my sentiment, although applied a bit differently.

August 9, 2012 8:20 am

Nick, what stations were in use in 1936 compared to 2012 for NOAA and what was the altitude difference?

HowardG
August 9, 2012 8:20 am

RE: Mean and Average data (are they not the same?).
AW averages the data as provided. See how the source defines their data.
From the USCRN/USRCRN FTP MONTHLY STREAM
ftp://ftp.ncdc.noaa.gov/pub/data/uscrn/products/monthly01
cols 41 — 47 [7 chars] T_MONTHLY_MAX
The maximum air temperature, in degrees C, for the month. This maximum
is calculated as the average of all available day-maximums. To be
valid there must be less than 4 consecutive day maximums missing,
and no more than 5 total day maximums missing.
cols 49 — 55 [7 chars] T_MONTHLY_MIN
The minimum air temperature, in degrees C, for the month. This minimum
is calculated as the average of all available day-minimums. To be
valid there must be less than 4 consecutive day minimums missing,
and no more than 5 total day minimums missing.
cols 57 — 63 [7 chars] T_MONTHLY_MEAN
The mean temperature, in degrees C, calculated using the typical
historical approach of (T_MONTHLY_MAX + T_MONTHLY_MIN) / 2
cols 65 — 71 [7 chars] T_MONTHLY_AVG
The average air temperature, in degrees C, for the month. This average
is calculated using all available day-averages, each derived from
24 one-hour averages. To be valid there must be less than 4 consecutive
day averages missing, and no more than 5 total day averages missing.

August 9, 2012 8:22 am

Excellent, Anthony. Thanks!
When the political climate cools in the near future we’ll get the USCRN data published?

chris y
August 9, 2012 8:38 am

Spence_UK says-
“Which is why I’ve argued that the REAL confidence intervals – even for the anomalies – is some large fraction of that 2.1 F – probably of the order of 1F or so (and I mean 1-sigma here).”
I absolutely agree. Every adjustment applied to the raw data *widens* the Confidence Interval. The list includes TOBS, altitude, UHI, equipment changes, location shifts, etc.
For example, Steven Goddard has shown an adjustment of 3.1 F between raw and twiddled July temperatures for USHCN.
All of these temperature adjustments provide fertile ground for confirmation bias at NOAA.
The claim made is that July 2012 is the hottest based on an absolute temperature, but defining ‘the temperature of what?’ is an unresolvable problem that climate science brushes under the table.
The situation is quite sad. The USHCN raw data has been adjusted by far more than the purported CACC warming.
Compared to other countries, the US has a gold standard temperature network.
Compared to ocean measurements, global land data is the gold standard.
Compared to paleo reconstructions of temperature, global land and ocean data is the gold standard.
Yet we have climate activists claiming astonishing accuracy when comparing temperatures over millennial periods.

JanF
August 9, 2012 8:41 am

Temperature is measured with triple redundant air aspirated sensors (Platinum Resistance Thermometers) and averaged between all three sensors. The air aspirated shield exposure system is the best available.

Are they just averaging the 3 sensors, or only when the difference is limited? When one sensor is faulty it can ruin the average.
REPLY: There’s a sanity check on each sensor, to prevent just that – Anthony

August 9, 2012 8:44 am

“…I wondered why NOAA has never offered a CONUS monthly temperature from this new network….”
I think they’ve mentioned it a bit:
“…The vision of the USCRN program is to maintain a sustainable high-quality climate observation network that 50 years from now can with the highest degree of confidence answer the question: How has the climate of the nation changed over the past 50 years?…”
So, 50 years from now, they’ll look into it.
Also, most databases use that “30 year point” to define an averaging period, with a 5 year running average.
Maybe, since this is the “climate observation network”, there hasn’t been enough time to observe the climate. Anything less than 30 years is just weather.
REPLY: Exactly, which is why you can use it on a one month scale – Anthony

Owen in Ga
August 9, 2012 8:46 am

Curiosity here: Are they calculating average station temperature as a mean of all 288 recorded daily measurements? That seems like a much better number than the old (Tmin+Tmax)/2, and would seem to track the energy budget somewhat better. Are they recording solar insolation on 5 minute intervals as well? Of course logic would indicate that, on average, 144 of those (fewer in summer, more in winter) should be very near 0.
Seems like a fairly robust network, but still a little sparse in some places where a number of microclimates can occur over relatively short distances. (Do we really need data on every little microclimate, or just areas free of UHI and land use issues to establish baseline temperatures?) On the twinned stations, are they using the second station as a check on the first, or are they oversampling that location (i.e. do the paired stations report independently or as a single site)?
What kind of security do they have at the site? I ask this because down here we had one of the old weather stations quit reporting, and when the folks went out to find out what broke, they discovered the whole station had been destroyed/removed by scrap metal scavengers. Any explanation as to why those few stations had gaps? Any technology can break, so I am not implying foul play, just wondering if they have established a mean time between failure for these things.

Rattus Norvegicus
August 9, 2012 8:55 am

Anthony,
Click on the national bit in the image map and then select First Year 1930, Last Year 2012, Period July, Mean temperature, Table, Sort By Rank.
Then look at the data. Seems to agree with the article, modulo the rounding, which doesn’t make any difference to the rank.
REPLY: A link to which page you are referring to would be helpful, and we aren’t talking about rank, but absolute temperature for the month of July 2012 – Anthony

August 9, 2012 8:58 am

Nick Stokes brings up a problem that I have had with the old network for a while, namely that the stations are predominantly in lower parts of the individual states. The average elevation in the United States is 2,500 ft which lies above both of the averages that he quotes.
Incidentally, I did look at the variation in temperature with elevation for each of the states at Bit Tooth Energy (see for example Michigan), and it varies quite considerably, since I suspect that the changing elevations also bring other factors into play in controlling the resulting relationship. On average it was 0.02 deg F per ft, much higher than the rate that Nick quotes, but I suspect that this has much to do with the fact that the stations with the lower elevations are also those with the larger populations.
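For readers following the lapse-rate thread, here is a minimal Python sketch of the kind of altitude reduction being discussed. The 6.5 °C/km rate and the toy station list are assumptions for illustration, not NOAA’s (unpublished) method or Nick’s exact figures:

LAPSE_RATE_C_PER_M = 0.0065   # ~6.5 C per km; an assumed standard-atmosphere value

def to_sea_level(temp_c, elevation_m, lapse_rate=LAPSE_RATE_C_PER_M):
    # Reduce a station temperature to a nominal sea-level value. Actual lapse
    # rates vary with weather, moisture and location, which is exactly the
    # objection several commenters raise to a blanket correction.
    return temp_c + lapse_rate * elevation_m

# Toy stations as (monthly mean in C, elevation in m) -- invented values
stations = [(25.0, 10.0), (18.0, 1600.0), (22.0, 300.0)]
raw_mean = sum(t for t, _ in stations) / len(stations)
reduced_mean = sum(to_sea_level(t, z) for t, z in stations) / len(stations)
print(f"raw mean: {raw_mean:.2f} C   sea-level reduced mean: {reduced_mean:.2f} C")

The point several commenters make is that the chosen lapse rate is itself uncertain and weather-dependent, so any “reduced” average inherits that uncertainty.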

Editor
August 9, 2012 9:02 am

Amidst all the discussion about anomalies/trends/absolute values, we are forgetting a key point.
There was a simple reason why USCRN was introduced – the old system was not of a good enough quality. We don’t appear to have enough data from USCRN yet to see if trends are different to USHCN, but it must be clear that the accuracy of the latter is simply not good enough to compare current temperatures with those of 80 years ago to tenths of a degree.

Rattus Norvegicus
August 9, 2012 9:02 am

Anthony, you provided the link in your previous post on this subject. The table provides both rank and absolute temperature (my reference to the rounding…).
REPLY: I provided several links, would it be too much trouble to point to the exact one you are referring to and what image map you are speaking of? – Anthony

Owen in Ga
August 9, 2012 9:10 am

I see a lot of hand waving about “comparing trends between unlike networks.” My strong contention is that that argument undermines and invalidates the old networks the CAGW folks putting those arguments forward consider as gospel, because the old COOP network in 1936 is not even close to being the “same” COOP network of 2012. Find the same stations, read the same way, on the same equipment, with the same surrounding population densities, with the same surrounding land use, measured at the same times of day between 1936 and 2012 and you can compare them accurately. Anything else is rife with expectation bias and guesstimate adjustments.

Editor
August 9, 2012 9:14 am

Interestingly, Virginia is the only state to post a record temperature in July.
And what does the NCDC summary say?
The temperature trend for the period of record (1895 to present) is 0.0 degrees Fahrenheit per decade.
http://www.ncdc.noaa.gov/oa/climate/research/cag3/va.html

Pamela Gray
August 9, 2012 9:15 am

But, the AGW crowd will say, the high quality set all set records!
Duh. Newly installed sensors set records all the time.

August 9, 2012 9:17 am

Oops! Sorry, the result that I quoted should have read 0.02 degF/m, not per foot – that’s the trouble with doing things in a hurry. This translates into about double the value that Nick quotes, but it varies quite considerably from state to state.

JJ
August 9, 2012 9:17 am

The headline of this article is in no way supported by the content of the article.
For July 2012 to not be a record breaker according to the USCRN dataset, July 2012 would have to not be the warmest July in the USCRN dataset. Is that the case? Given that USCRN only goes back a few years, I doubt it.
NOAA’s claim is that July 2012 is hotter than July 1936. You can’t refute that claim by comparing just NOAA 2012 to USCRN 2012. USCRN 2012 is 2F cooler than NOAA’s 2012? So what? If USCRN 1936 were also 2F cooler than NOAA’s 2012, then NOAA’s claim would still be valid. You can’t refute a comparison between apples with a single orange.
Moot question of course, given that there is no USCRN 1936. You don’t have the data you need to say what you want to say, but you are saying it anyway.
Leave that crap to the Team.

August 9, 2012 9:18 am

a layman’s attempt to explain anomalies: anomalies are variances from the normal. Since our climate is, and always has been, in a constant state of CHANGE, there is NO starting point to measure an anomaly from – there is NO “normal”; it constantly changes.
Since there is no normal there is no baseline, and any claim of an anomaly is based on NOTHING in reality except the person’s attempt to paint a false picture by just selecting something and calling it the baseline.
Mark Twain said long ago there are liars, damned liars and statistics… the use of anomalies in the discussion of weather is an example of statistical LYING!

Jim G
August 9, 2012 9:22 am

James McCauley says:
August 9, 2012 at 12:06 am
“Proud to be a WUWTer! This post will be shared with my Senators R. Portman and (aah-hemm) Sherrod Brown, etc. Someone needs to forward to Inhofe, et al.”
Don’t waste your time on Sherrod Brown. In a town hall meeting which I attended in 1993 in Medina, OH, when I believe he was running for congress, he indicated that the crime problems in our country would not be corrected “until there was a more equal distribution of wealth”, right out of Carl Marx (also a favorite theme of Barack Obama aka Barry Soetoro), take your pick. Since the real goal of today’s “climate science” is government control and directing tax dollars to administration friends, as with Obamacare, people like Sherrod Brown are not open to facts or real science, only what furthers their power and control.

Jim G
August 9, 2012 9:30 am

JJ says:
August 9, 2012 at 9:17 am
The headline of this article is in no way supported by the content of the article.
“For July 2012 to not be a record breaker according to the USCRN dataset, July 2012 would have to not be the warmest July in the USCRN dataset. Is that the case? Given that USCRN only goes back a few years, I doubt it.
NOAA’s claim is that July 2012 is hotter than July 1936. You can’t refute that claim by comparing just NOAA 2012 to USCRN 2012. USCRN 2012 is 2F cooler than NOAA’s 2012? So what? If USCRN 1936 were also 2F cooler than NOAA’s 2012, then NOAA’s claim would still be valid. You can’t refute a comparison between apples with a single orange.
Moot question of course, given that there is no USCRN 1936. You dont have the data you need to say what you want to say, but you are saying it anyways.
Leave that crap to the Team.”
Though the article does point out a few other issues, like the non-use/non-publication of higher quality data, sadly, I must agree with your comment. Let’s not play their games by over-attributing relevance to non-comparable data sets.

DR
August 9, 2012 9:31 am

Anthony,
Is USCRN the same as USHCN-M ?

REPLY:
No, they have a different sensor deployment. – Anthony

Jim G
August 9, 2012 9:34 am

Owen in Ga says:
August 9, 2012 at 9:10 am
“I see a lot of hand waving about “comparing trends between unlike networks.” My strong contention is that that argument undermines and invalidates the old networks the CAGW folks putting those arguments forward consider as gospel, because the old COOP network in 1936 is not even close to being the “same” COOP network of 2012. Find the same stations, read the same way, on the same equipment, with the same surrounding population densities, with the same surrounding land use, measured at the same times of day between 1936 and 2012 and you can compare them accurately. Anything else is rife with expectation bias and guesstimate adjustments.”
True! But we should not play their game, as JJ said.

Man Bearpig
August 9, 2012 9:35 am

It seems quite obvious that NOAA should not get any funding for their temperature data anymore. They can not even perform simple statistics and prefer to use the system that gives them the result they want rather than report the truth. Shame on them. Lobby your representatives, senators, whatever.

Dave_G
August 9, 2012 9:38 am

Do comparisons of previous, recent monthly records, show the same differences?

Elmer
August 9, 2012 9:54 am

It is not really the average temperature unless the slope from minimum to maximum is linear, which it never is.

August 9, 2012 10:03 am

Climate Refugee
The GISS global temperature anomalies explain 33% (adjusted r-squared) of the US HCN anomalies, 1895-2011. Figures are likely to be similar for other comparisons.

August 9, 2012 10:16 am

My curiosity got the better of me, and so I plotted individual state values of the temperature decline rate against average elevation, to see how they changed across the country.
I can’t post that plot here, but put it into a short post on Bit Tooth Energy. It turns out that the rate of temperature change is itself sensitive to elevation, with the larger impacts occurring in those states near sea level, again providing evidence of the impact of sea temperatures on the nearby land values.

August 9, 2012 10:20 am

Michel says:
August 9, 2012 at 12:32 am
It’s difficult to understand what is an average temperature: if you put one hand in a bucket of hot water and the other one in ice cold water will an average temperature mean anything?
If you mix 1 kg of boiling water with 1 kg of ice cold water you may expect a resulting temperature of 50 °C. But you have to mix it. So far, we cannot mix Houston with Minneapolis (anyway, you can’t mess with Texas)!
If a measuring station is at sea level (Tampa, FL) and another one at an altitude of one mile (Denver, CO), what is the significance of a temperature averaged between these two?
Temperature anomalies (the difference of a measured temperature from a time average for the same station) are understandable, and these differences can be treated as a cohort for statistical analysis. That’s what we see on all hockey stick or flat diagrams.
REPLY: Probably the best question to ask is this – how does NOAA calculate the area average absolute temperatures (not anomalies) for the CONUS they use in those press releases, combining that mishmash of dissimilar COOP stations? As far as I can tell they have not published the exact method they use for those. – Anthony

RE-REPLY
I understand your curiosity. But even if it’s made by NOAA it does not make any sense to average
an intensive property like temperature between different sites. A whole discussion about a wrong approach – Michel

Crispin in Waterloo
August 9, 2012 10:23 am

The Canadian CBC reported the record July heat in the US as fact.
BTW, a large solar panel like the one next to the CONUS temperature measurement station produces a lot of (solar) heat that would otherwise not be there. They are nearly flat black and turn reflective grass into a >2 kW heat source.

David Meriwether
August 9, 2012 10:24 am

I’m curious if the old COOP/USHCN stations could be filtered down to the set that were there in July,1936 and still there in an essentially unchanged condition in July, 2012 to calculate a direct comparison between the two points in time. There would probably be too many uncertainties and potential anomalies to know for sure if the results are valid. There could be too few stations too. But it would be interesting to see.

August 9, 2012 10:31 am

“That’s why climate scientists generally prefer to deal with anomalies”
This is really the problem. So long as the “climate scientists” get to define what an anomaly is (not to mention the freedom to fudge the records of absolute temps on which “anomalies” are based), they have complete control of the reality of temperature reporting. Why should we believe their reports of anomalies if they won’t even address the critical siting and time-of-observation issues recently raised by numerous people?

JC
August 9, 2012 10:33 am

The lapse rate information of any given location is meaningless when considering ground temperatures without also considering geographical differences. Altitude alone does not control temperature. The city where I live is 500 feet lower in altitude than the city where I work. They are only 40 miles apart yet where I work is consistently 2 to 5 degrees warmer. Both cities are about the same distance inland. Just averaging altitude will tell you nothing.

TC in the OC
August 9, 2012 10:34 am

Tom in Florida says:
August 9, 2012 at 5:29 am
Stokes does make a valid point. Comparing raw data from different sets is not correct when looking for changes.
Tom brings up a good point about keeping to the same data set when comparing US temps and I will tell you why.
I like and greatly appreciate Anthony’s work on pointing out all of the hi jinks that NOAA does to the data but on this one issue of July 2012 being the hottest month ever I think everyone is approaching this in the wrong way. We need to use the same data that the warmists are using so there is no way they can wriggle out of this.
In the 76 years since July 1936 we have been told over and over that CO2 has increased exponentially and this has caused “runaway” global warming that if not stopped will cause the end of the world or something like that.
I say we embrace this and shout it from the roof tops…the US average temp has increased 0.2° F in 76 years.
OMG…really…0.2° F in 76 years!!! Where is the runaway global warming?
We need to tell everyone this shocking truth and then ask why are we so freaking worried about 0.2° F rise in temperature in 76 years. And then ask “why was it so hot in 1936 if CO2 is the climate driver.”
0.2° F in 76 years…really!!!

August 9, 2012 10:41 am

Just looking at the frequency distribution of Northern Hemisphere temperatures in GHCNv3, I cannot see a drastic change in the shape & location of the distribution other than the fact that the number of observations varies. See Dude, don’t tell me it’s raining!

JJ
August 9, 2012 10:46 am

Owen in Ga says:
I see a lot of hand waving about “comparing trends between unlike networks.” My strong contention is that that argument undermines and invalidates the old networks the CAGW folks putting those arguments forward consider as gospel, because the old COOP network in 1936 is not even close to being the “same” COOP network of 2012.

Very good point.
Stokes’ red herring about anomalies ignores the fact that simply using anomalies does not necessarily provide for legitimate cross-comparison between networks. The most common example of a failure in that vein is the use of different base periods for the anomalies, either for comparisons between two datasets or two periods within one dataset. But if the network (or simply the properties thereof) changes from the base period to the period of analysis within a single dataset, the effect could be very similar…

cms
August 9, 2012 10:47 am

Jim G. it is Karl Marx not Carl and he was not really interested in the distribution of wealth. That is more a socialist bugaboo. His analysis was a lot more sophisticated than that. It was the distribution of power exemplified by ownership of capital that was his major concern.

August 9, 2012 10:56 am

One media outlet should find the “story” in Anthony’s blog releases. Amazingly enough, Wall Street Journal, maintains a split editorial “personality” on this issue:
Today:
http://online.wsj.com/article/SB10000872396390443991704577577242186369820.html?mod=googlenews_wsj Max Taves: “July Was Hottest Month on Record”
And 5 days ago:
http://online.wsj.com/article/SB10000872396390444405804577558973445002552.html Matt Ridley on “How Bias Heats up the Warming Debate”
From Ridley’s three-part article:

I argued last week that the way to combat confirmation bias—the tendency to behave like a defense attorney rather than a judge when assessing a theory in science—is to avoid monopoly. So long as there are competing scientific centers, some will prick the bubbles of theory reinforcement in which other scientists live.
For constructive critics, this is the problem with modern climate science. They don’t think it’s a conspiracy theory, but a monopoly that clings to one hypothesis (that carbon dioxide will cause dangerous global warming) and brooks less and less dissent. Again and again, climate skeptics are told they should respect the consensus, an admonition wholly against the tradition of science.

joshv
August 9, 2012 10:59 am

Sorry Anthony, I love you man, but you are wrong here. The high quality network has a different average altitude from the full network, and thus absolute temperatures are not comparable.
Your complaints about not knowing if the high quality network is adjusted are just backpedaling. They aren’t adjusting the high quality site averages down to sea level – they just aren’t. So there is some altitude effect in your average. Nick Stokes might be wrong that it’s 2 deg F – but there is an effect, and scientifically it means that the two temperatures just aren’t comparable.
From your quote:
“The vision of the USCRN program is to maintain a sustainable high-quality climate observation network that 50 years from now can with the highest degree of confidence answer the question: How has the climate of the nation changed over the past 50 years? ”
The high quality network is not meant to be used the way you are using it. It is meant to observe long term trends in a pristine environment. When NOAA observes some long term trends using this network, I assume that they will then publish some results.
REPLY: if you can show me how NOAA derives the CONUS Tavg used in the press release, I can duplicate it with CRN. So far nobody has been able to show how NOAA arrives at that result, and how/if they compensate for altitude. – Anthony

Rattus Norvegicus
August 9, 2012 11:22 am

My best guess is that it is a straight average. BTW, is 2012 a record in the CRN record? Because that is the proper comparison.

davidmhoffer
August 9, 2012 11:24 am

Bill Parsons says:
August 9, 2012 at 10:31 am
“That’s why climate scientists generally prefer to deal with anomalies”
This is is really the problem. So long as the “climate scientists” get to define what an anomaly is (not to mention the freedom to fudge the records of absolute temps on which “anomalies” are based), they have a complete control of the reality of temperature reporting.
>>>>>>>>>>>>>>>>>>>>>>
It is SO much worse than that!
What we’re supposedly trying to figure out is whether CO2 increases cause an energy imbalance at the earth’s surface. We measure CO2’s effects in w/m2, which does NOT have a linear relationship with temperature (see the Stefan-Boltzmann Law in any physics text if you want the explanation).
So…
an anomaly of +1C at -40 = 2.9 w/m2
an anomaly of +1C at 0 = 4.6 w/m2
an anomaly of +1C at +40 = 7.0 w/m2
Comparing anomalies from very different temperature ranges tells us pretty much nothing about CO2’s supposed effects at earth surface. Consider for example 2 degrees of warming at -40 which happens at the same time as 1 degree of cooling at +40. According to the “average temperature” we’d conclude that the earth was warmer by 1/2 degree. But based on w/m2, we’d actually be at a LOWER energy level.
Anomalies distract us from the real issue…. which is w/m2.
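The three figures above follow directly from the Stefan-Boltzmann law; a minimal Python sketch of the arithmetic (dF ≈ 4σT³·ΔT):

SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def flux_change_per_degree(temp_c, delta_c=1.0):
    # Approximate radiative flux change for a small temperature anomaly,
    # from the derivative of F = sigma * T^4, i.e. dF ~= 4 * sigma * T^3 * dT.
    t_kelvin = temp_c + 273.15
    return 4.0 * SIGMA * t_kelvin ** 3 * delta_c

for t in (-40.0, 0.0, 40.0):
    print(f"+1 C anomaly at {t:+5.0f} C  ->  {flux_change_per_degree(t):.1f} W/m^2")
# prints roughly 2.9, 4.6 and 7.0 W/m^2, matching the figures above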

Frank K.
August 9, 2012 11:31 am

From NOAA’s own press release, the difference between 2012 (77.6 F) and 1936 (77.4 F) is minuscule and well within anyone’s error estimate for this kind of temperature measurement. So all we can really say is that the “average” temperature (as defined non-uniquely by NOAA algorithms) for July 2012 is comparable to (really the same as) July 1936 in the continental United States. That’s it.
Of course, please remember (as we are reminded whenever we cite any climate records for the U.S.) that the U.S. represents only about 2% of the Earth’s surface. Is this year remarkably warm globally? Not really…

Mindbuilder
August 9, 2012 11:34 am

Anthony, you need to update your original post right away with the lapse rate calculation prominently at the top. Don’t wait to do the calculations yourself; state Nick Stokes’s numbers as speculative until you can finish your calculations. If you don’t do this as quickly as possible, you will be even more guilty than those who decided to “hide the decline”. You need to correct your mistakes, if any, as quickly as possible. If you delay, it makes you look like an oil company shill.
REPLY: Oh, please. People call me an ‘oil shill’ and bunches of other derogatory names no matter what I do or say. Typically that’s anonymous cowards like yourself. First, show me how NOAA calculated the national average. Did they do a straight average, or an altitude weighted one? Until we know, it’s all just speculation about what the correct method to match theirs is. For all we know, they may just use a straight area average. Nick Stokes’s method doesn’t deal with weather variation, which on the short time scale (1 month) may over/under correct. The point of this post, lost on you and Stokesy in the attempts to play “gotcha”, is that NOAA has this new network which they could use to correctly weight the problems with the old troublesome uber adjusted network, or to publish a new result outright. Four years after it was complete, they don’t say a peep. That’s the point. Be as upset as you wish. – Anthony

Mindbuilder
August 9, 2012 11:38 am

They really should put some of the old temperature measuring devices alongside the modern ones in the CRN, perhaps just with digital thermometers. They also need to expand the network globally.

Paul K2
August 9, 2012 11:40 am

Anthony, How did you grid the USCRN station data? TIA.

JJ
August 9, 2012 11:57 am

Rattus Norvegicus says:
BTW, is 2012 a record in the CRN record, because that is the proper comparison.

No, it isn’t. That would only be a proper comparison if their periods of record were comparable. They are not.

Paul K2
August 9, 2012 12:03 pm

Rattus: Yes, very good idea. Use all the USCRN July data from each station since 2008 to calculate the baseline average for that station. Then calculate the value for that station for each July, and compare the anomalies. July 2012 will likely be the highest for many of the stations.
Then the anomalies can be averaged using a gridded procedure to get an estimate of the USCRN July 2012 CONUS anomaly.
Finally, the USCRN CONUS anomalies for each July can be compared to the anomalies reported by the NCDC to see if the trends match. The baselines of course, will still be different, but the trends should be comparable.
If one had the time and inclination, the baseline for 2008-2012 for the NCDC data could be determined, and adjusted out, to get a better comparison.
The result is likely to be very close.

August 9, 2012 1:01 pm

Wouldn’t this also give them an excellent means to “adjust” the old network in a manner that matches reality? If the new network is designed not to have all the issues they are constantly trying to adjust for in the old network, then they merely have to adjust the old network results so that they match the new network. They can then revise all the old results accordingly.

August 9, 2012 1:19 pm

Here is how I’d suggest comparing USHCN and USCRN to see which July is the “hottest”:
Take all USHCN stations, turn them into anomalies relative to a particular baseline period (unlike in Hansen’s paper, for these purposes the choice isn’t particularly important), assign them to 2.5×3.5 lat/lon grids, average anomalies within grid cells, apply a land mask, and weight each grid cell by its resulting area to create a CONUS-weighted average anomaly.
Do the same process for USCRN. Now take the resulting anomalies and fit them together over a common period of overlap (say, 2004 to 2008). Now you have a more apples-to-apples way to compare a 1930s USHCN temperature with a 2012 USCRN temperature, given that the stations in both networks likely have different absolute temperatures due to elevation, spatial coverage, siting, etc.
REPLY: The CRN network wasn’t complete until 2008. From 2002 to 2008 there were large spatial distribution gaps, so calculation of grids/anomalies is really problematic. Four years of complete data isn’t enough to calculate a meaningful baseline from. Besides, this article is about absolute temperatures, not anomalies. What we really need to know is how NOAA determines their CONUS area average for a month. Once that is known, then I can replicate with CRN. IMHO that’s the right way, not trying to calc anomalies to compare to an absolute number issued by NOAA. – Anthony
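For readers who want to experiment with the gridding approach Zeke outlines, here is a minimal Python sketch of the cell-averaging and area-weighting arithmetic. The station anomalies are invented, cos(latitude) stands in for the cell area, and the land mask is omitted, so it illustrates the method only, not either network’s actual product:

import math
from collections import defaultdict

def gridded_anomaly(stations, dlat=2.5, dlon=3.5):
    # stations: iterable of (lat, lon, anomaly_c). Anomalies are averaged
    # within each lat/lon cell, then cells are combined with a cos(latitude)
    # area weight. A real implementation would also apply a CONUS land mask.
    cells = defaultdict(list)
    for lat, lon, anom in stations:
        cells[(math.floor(lat / dlat), math.floor(lon / dlon))].append(anom)
    num = den = 0.0
    for (ilat, _), anoms in cells.items():
        cell_lat = (ilat + 0.5) * dlat               # cell-centre latitude
        weight = math.cos(math.radians(cell_lat))    # simple area proxy
        num += weight * sum(anoms) / len(anoms)
        den += weight
    return num / den

# Invented station anomalies (lat, lon, anomaly in C) for illustration only:
print(gridded_anomaly([(35.2, -111.7, 1.4), (44.9, -93.2, 0.8), (30.1, -97.7, 1.1)]))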

Jim G
August 9, 2012 1:20 pm

cms
Spelling aside, from each according to his means to each according to his needs, says it all and the concept that crime is purely a socioeconomic problem along with the socialist theories of economics have not worked out well even when applied in a more benevolent fashion than in the USSR, just look at Europe’s general economic condition, and where the USA is headed. The gentlemen to whom I was referring are both socialists irrespective of the accuracy of the quote from Marx, C or K as I have seen it both ways. You may be right but it is of little consequence. Communism did not even work in the early Christian Church as there will always be those who freeload on such a system, see St. Peter’s letter on this in the Bible.

Mindbuilder
August 9, 2012 1:27 pm

@Anthony – If you object to how NOAA is doing things then explain that in your post, but you know that many people will not think of an altitude adjustment to temperature and your original post makes NOAA look very bad if you don’t mention that. You don’t want to give bad impressions of people for false reasons, because that makes you look very bad. If they deserve to look bad, then explain accurate reasons why. Bring to the front every significant thing that casts doubt on your theory. That is the scientific way and the way of any rational person who cares about their credibility. At Real Climate they would have probably just deleted or altered my or Nick Stokes post, but I think you have more integrity than that. Update your original post, let it show, and do it quick.
[REPLY: Anthony has already responded to your concerns. Continuing this approach qualifies as badgering and thread-bombing. You’ve had your say, now drop it or be snipped. -REP]

Kev-in-UK
August 9, 2012 1:28 pm

Tom in Florida says:
August 9, 2012 at 5:29 am
>>Stokes does make a valid point. Comparing raw data from different sets is not correct when looking for changes>>
Excuse me? – so if I have, say, a dataset of measured adult heights in one county, and another in a separate county, and I see that one set shows increasing height with time, and the second set also shows increasing height – what do the two datasets confirm? That there is a gradual increase in height! They don’t mix; they only independently corroborate the ‘observation’! What’s the flipping problem?
Anthony decides to look and see if one dataset confirms/correlates with the other – they don’t! So either the hypothesis is wrong or one of the datasets is wrong. Now, I don’t know which is right or wrong without trawling through all the data and mechanisms, etc, etc – but based on Anthony’s description, I’d hazard a guess that it’s the historical adjusted data that’s a bit iffy. What’s to argue about? As I see it, there isn’t really an apples and oranges thing here – lapse rate adjustment or whatever – if there is an underlying trend (of increasing temps, and specifically ‘record’ July temps) it would surely be apparent in any RAW data? It is not apparent in the spanky new dataset – Watts up with that?

Rattus Norvegicus
August 9, 2012 1:30 pm

JJ,
The USCRN and the USHCN networks are in no sense comparable — the sets of stations are entirely disjoint. All you can say is that they are different because, well, they are. Paul’s suggestion is good, although there really is not enough data available for trend computation, but computing the anomalies is a good idea.

Paul K2
August 9, 2012 1:32 pm

Not entirely, but yes, the fully adjusted results from the old network should match the USCRN network gridded anomaly. And from the work done by Menne in a paper in 2010 using partial data up through 2009, they matched beautifully. But now for many of the USCRN stations, we have over five years of data.
This has always been the weakest point in the SurfaceStations project logic. If one really wanted to quantify the siting issues, just compare the gridded anomalies from the old network with the anomalies from the USCRN stations in and surrounding that grid.

Dean Chancey
August 9, 2012 1:34 pm

I work in the health-care field. Another profession known for creative use of statistics for personal gain…
All testing should be (but rarely is) held to a common standard wherein all measurements must be sensitive, specific AND meaningful. When held to this standard, medical testing is abysmal at best.
I’m oddly uplifted to see that we (as a profession) are not alone in this performance. In reality, I’m saddened to see that any other field which claims to be “scientific” is as bad as we are. Congratulations “climatology” – you’ve made it.

Skeptic
August 9, 2012 1:36 pm

It’s getting to the point that I do not believe ANYTHING that comes out of the government’s mouth anymore. These liars are practicing a faith-based religion every bit as superstitious as any religion or superstition attributed to deities, and they will do anything to boost their religion and downplay anything that does not jibe with their “scriptures”.

Mindbuilder
August 9, 2012 1:53 pm

Maybe somebody should file a Freedom of Information Act request for the NOAA method of calculating the temps.

Dell from Michigan
August 9, 2012 1:54 pm

Interestingly, July 2012 saw the highest level of solar flare activity of any July in the past 10 years, and probably of any other month (although I haven’t had a chance to go through the data for every single month).
Note especially the last 3 columns of Solar flares.
http://www.solen.info/solar/old_reports/2012/july/indices.html
http://www.solen.info/solar/old_reports/2011/july/indices.html
http://www.solen.info/solar/old_reports/2010/july/indices.html
http://www.solen.info/solar/old_reports/2009/july/indices.html
http://www.solen.info/solar/old_reports/2008/july/indices.html
http://www.solen.info/solar/old_reports/2007/july/indices.html
http://www.solen.info/solar/old_reports/2006/july/indices.html
http://www.solen.info/solar/old_reports/2005/july/indices.html
http://www.solen.info/solar/old_reports/2004/july/indices.html
http://www.solen.info/solar/old_reports/2003/july/indices.html
For solar activity data since 2003 here are monthly statistics.
http://www.solen.info/solar/old_reports/
Interestingly, July 2008 didn’t see a single flare, and was 2.63 degrees colder (by NOAA standards) than July 2012.
July 2009, which is the lowest July of the past decade, saw only 2 small class c flares.
Is it a coincidence that when the Sun flares up, temps on Earth go up, and when the sun stops flaring, temps on Earth go down?

JJ
August 9, 2012 1:57 pm

Rattus Norvegicus says:
The USCRN and the USHCN networks are in no sense comparable — the sets of stations are entirely disjoint. All you can say is that they are different because, well, they are.

They are comparable in some senses, but not all such comparisons are legitimate or meaningful. They are not at all comparable with respect to period of record. The period of record of CRN is so short as to render any claim of “it’s a record!” (as you suggested be made) absolutely meaningless.
When backed by HCN, such claims are merely essentially meaningless.
Paul’s suggestion is good, although there really is not enough data available for trend computation, but computing the anomalies is a good idea.
No, that is pointless.
The proper use of CRN wrt HCN is to point to the former and say “if we had done that 150 years ago, we’d be a lot further along toward finding a correct answer to the wrong question than we are now”, while pointing at the latter and laughing derisively.

Paul K2
August 9, 2012 2:14 pm

Actually the USCRN has plenty of data to identify siting issues, and measurement problems, because you can use daily or weekly data. It doesn’t take 30 years of data to identify measurement problems; five years of daily data should be plenty.
And regarding sampling requirements: if I recall correctly, only 13 properly sited USCRN station sites in the CONUS would give reasonably accurate anomalies. The 107 USCRN stations are plenty, and in fact oversample the region.
The claims made in the Watts et al. 2012 draft would have been easy to verify if the USCRN data had been used.

Marlow Metcalf
August 9, 2012 2:15 pm

I would like to know how this set of 600 stations compares.
“Unadjusted data of long period stations in GISS show a virtually flat century scale trend
Posted on October 24, 2011 by Anthony Watts”
http://wattsupwiththat.com/2011/10/24/unadjusted-data-of-long-period-stations-in-giss-show-a-virtually-flat-century-scale-trend/
“There are several examples of long-running temperature records that fail to show any
substantial long-term warming signal; examples are the Central England Temperature record and the one from Hohenpeissenberg, Bavaria. It therefore seemed of interest to look for long-running US stations in the GISS dataset. Here, I selected for stations that had continuously reported at least one monthly average value (but usually many more) for each year between 1900 and 2000. This criterion yielded 335 rural stations and 278 non-rural ones.

August 9, 2012 2:16 pm

Kev-in-UK,
To extend your toy example a bit, if you were comparing heights in two different countries and sampled 100 people in each country in a non-random manner, you would have to make sure to control for factors like age (or altitude) that are correlated with height (or absolute temperature), otherwise you wouldn’t really be comparing like groups and would draw incorrect conclusions from your data.

joshv
August 9, 2012 2:18 pm

I have to say Anthony, that I find your responses, and the responses of your moderators to be disappointing. You are getting defensive and irrational. You simply have to admit that the two networks really are not comparable via absolute temperature measurements, nonsense about not knowing the adjustments done to one or the other of the networks doesn’t help you any. If you know less about how one number was made, I’d think that would make it even less comparable to another number.
What you’ve done is the equivalent of answering the claim “2012 was the hottest month on record in Chicago” with “No it isn’t, it’s two degrees cooler in Springfield!” It’s a non sequitur – so what if a different network produces a different absolute number?
Also, all the conspiratorial stuff about NOAA not using the new network really makes you look bad. The new network simply doesn’t have enough data to be of much use. If you have some evidence that it is producing a statistically significant lower warming trends, which is being ignored, by all means produce that evidence.
I agree that I think it’s odd that NOAA is publishing absolute temperatures for one month, for 2% of the landmass of the world, when they’ve spent all this time telling us global warming is long term, and well, global, and about changes in temperature.
REPLY: The CRN has enough data to do one month, and that’s all I’m talking about in this post. All these other issues are overreaching. I’ll be thrilled to adjust my method once we can find out how NOAA creates their CONUS Tavg. Until they do, all concerns about matching networks are speculative. Bear in mind that Gavin Schmidt once said that all we need is about 50 stations. At 111, I think we have more than enough to get a good CONUS reading. My interest is getting the procedure from NOAA, and then we’ll revisit. – Anthony
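For what it is worth, the arithmetic of a straight (unweighted) CONUS average from per-station monthly means is trivial; here is a minimal Python sketch, with invented values standing in for the real monthly01 figures. This is an assumption about the simplest possible method, not NOAA’s unpublished procedure or Anthony’s exact spreadsheet:

# Hypothetical July 2012 monthly means in deg C, keyed by station name.
# The station names are real USCRN sites but the values are invented;
# real numbers would come from the monthly01 files linked earlier.
july_2012_c = {
    "AZ Elgin 5 S": 23.4,
    "MT Dillon 18 WSW": 19.1,
    "FL Everglades City 5 NE": 28.2,
}

def straight_average_f(monthly_c):
    # Unweighted mean over stations, converted to deg F.
    mean_c = sum(monthly_c.values()) / len(monthly_c)
    return mean_c * 9.0 / 5.0 + 32.0

print(f"{straight_average_f(july_2012_c):.1f} F")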

August 9, 2012 2:19 pm

I have a naive question about this: Anthony is comparing data from a supposedly near-perfectly sited climate station network to the temperature record that NOAA reported. I sense something wrong here. Anthony takes good “unbiased” data that says it was 75F degrees. He then compares that to the historical record of BIASED temperatures over the last 100 years. He then tries to plug his temperature into that BIASED record to see where the “unbiased” value will fit.
Would not a better way to handle the “unbiased” data be to compare it to the data pulled from the Watts et al 2012 paper?
.. just a thought.

Tom in Florida
August 9, 2012 2:21 pm

Kev-in-UK says:
August 9, 2012 at 1:28 pm
Tom in Florida says:
August 9, 2012 at 5:29 am
>>Stokes does make a valid point. Comparing raw data from different sets is not
correct when looking for changes>>
“Excuse me? – so if…..”
Kev,
First of all that was only the lead in line of my post. I hope you went back and read the whole thing. Now, you cannot look for anomalies by comparing corrupted data to uncorrupted data when the data was taken by different methods, which is what Stokes was saying. That is a valid point. What Stokes was also implying was that it is OK to use corrupted data for anomalies as long as the corruption continues throughout the whole data set. Now that is silly when looking for what is actually happening in the real world as Anthony was doing. I went on to say that when a data set is as corrupted as the old COOP/USHCN network is, it cannot be relied upon to be used for anything.

Theo Goodwin
August 9, 2012 2:27 pm

Who is/was responsible for creating USCRN? How is he doing these days?
It will be really interesting to see the twists, turns, and flips as Hansen and friends do whatever they must to make these readings from USCRN go away.
REPLY: The USCRN was created by Tom Karl of NCDC, the current director. – Anthony

FijiDave
August 9, 2012 2:38 pm

When I went looking for the USCRN sites as per Anthony’s attached pdf file, I couldn’t find even one after looking at about a dozen of them. Then it dawned on me that the positions given for the sites are as useless as hip pockets on a singlet, as 45.2 N 113.0 W (Bannack State Park (Old Freight Road Site)), for example, is six nautical miles from 45.1 N 113.0 W. So just rounding to one decimal place, not to mention typos, can put the position miles out.
As the furore over temperatures is down to hundredths of a degree, why on earth can’t geographical positions be given to (at least) hundredths of a degree?
Great article, BTW, Anthony. Thank you!
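FijiDave’s six-nautical-mile figure checks out: one tenth of a degree of latitude spans about six nautical miles. A minimal Python sketch using the haversine formula, with the coordinates quoted above:

import math

def haversine_nm(lat1, lon1, lat2, lon2):
    # Great-circle distance in nautical miles between two lat/lon points.
    r_nm = 3440.065   # mean Earth radius in nautical miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r_nm * math.asin(math.sqrt(a))

# A tenth of a degree of latitude is about six nautical miles:
print(haversine_nm(45.2, -113.0, 45.1, -113.0))   # ~6.0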

Theo Goodwin
August 9, 2012 2:39 pm

Anthony’s brilliant achievement is the following:
“The most important point here is that they aren’t using this network to try to provide any sort of sanity check to the poorly sited mishmashed highly adjusted train wreck that is the COOP/USHCN/GHCN networks – Anthony”
The ball is in NOAA/NCDC’s court. Until they respond to Anthony’s question, there is no point in attempting to debate the standards for comparison of the two networks. NOAA/NCDC has to clearly define USCRN and then state the important relationships between the two networks.
Shame on anyone for reporting a record high temperature for July when some conflicting data are omitted on purpose. Shame on NOAA/NCDC for their tardiness with USCRN. The person responsible for the decisions not to report these matters should explain himself/herself.

A. Scott
August 9, 2012 2:49 pm

Appears to me that if NOAA is not altitude-adjusting EVERY station – regardless of CRN or USHCN – then there is no accurate temp record at all.
And even this, as others have noted, isn’t really sufficient, as true altitude is not the accurate measure – rather, density altitude on any given day and hour will be the true measure.

Kev-in-UK
August 9, 2012 3:40 pm

Tom in Florida says:
August 9, 2012 at 2:21 pm
Yes Tom – I realised your argument, but I simply did not agree with your opening statement.
@Zeke,
that’s correct of course; I’m sure you realise I was trying to use a single simple metric analogy (ignoring other variables) in order to illustrate that different datasets can be used to see ‘trends’ – especially since, as in this case, they are supposed to be ‘seeing’ the same darned metric, i.e. Tmax/Tmin!
It matters not if one set of data is altitude corrected – if the trend is there it will still be there after correction as all corrections on any given station will (or should) be the same…..unless they move the thing! The whole metric is supposed to be average record temp increases – any half decent rural station would be expected to show this (if it’s there!), so surely a top quality dataset of top quality well sited stations and top quality instruments MUST be expected to show this simple alleged UNDERLYING trend? Yet they don’t and are NOT reported…..

Bruce
August 9, 2012 4:00 pm

The altitude calculation done by Nick Stokes is not useful; it is the wrong metric.
Far more useful would be a calculation of weighted average altitude for each set of stations. Some stations are close to many other stations; some stations are off by themselves, and in the gridding process carry a larger area weight.
I have not done the more useful calculation. For all I know, the temperature delta will be even greater. I just know the work to date has no value in this discussion. I also understand Nick did what another poster proposed he do; taking the other poster’s words literally. Nick probably already knows his number is not useful.
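A minimal Python sketch of the weighted-average-altitude idea Bruce describes, with entirely made-up station names, elevations, and represented areas (how each station’s area weight would actually be derived – grid cell, Voronoi cell, or otherwise – is left open in the comment):

 # Weight each station's elevation by the area it represents in the gridding,
 # rather than giving every station equal weight. Values below are placeholders.
 stations = [
     # (name, elevation_m, represented_area_km2)
     ("A", 1200.0, 9000.0),
     ("B",  300.0, 2500.0),   # clustered station -> smaller area weight
     ("C",  800.0, 7000.0),
 ]

 def area_weighted_elevation(stations):
     total_area = sum(a for _, _, a in stations)
     return sum(elev * a for _, elev, a in stations) / total_area

 simple_mean = sum(elev for _, elev, _ in stations) / len(stations)
 print(round(simple_mean, 1), round(area_weighted_elevation(stations), 1))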

August 9, 2012 4:04 pm

I hope someone is keeping a back-up of all of the USCRN numbers. Sooner or later, these will have to be “adjusted” to fit the CAGW meme. The RAW data will have to disappear…

Sus
August 9, 2012 4:21 pm

[snip fake email address, proxy server, policy violation]

pcknappenberger
August 9, 2012 4:45 pm

Comparing absolute temperatures for different time periods using a variable network of observing stations is extremely dicey. For example, in Anthony’s comparison potential difficulties involve elevation differences (as suggested by Nick Stokes), areal averaging (northern (colder) stations may dominate a non-areal average), different sensors (CRN sensors are aspirated, a lot of USHCN stations, especially back in the 1930s, were not—aspirated sensors, I think, tend to record lower temperatures than naturally aspirated CRS thermometers) and there are others. The three I listed may impart a cold bias in the CRN observations compared with the USHCN observations.
But similar (and additional) problems undoubtedly are present when trying to compare USHCN CONUS temperatures in 2012 with those in 1936 (or any other time period). Maybe Nick could calculate the average station elevation in the 1936 network for comparison.
So, Anthony’s point is a good one—until we know how NCDC calculates the CONUS absolute temperatures, it is impossible to judge the validity of the NOAA press release—either internally, or via external comparisons such as to the CRN observations.
If I were doing it, I would probably grid the data, establish an average absolute temperature for each gridcell for some baseline period, and then only deal with anomalies from that point onwards, adding, as Hansen does at GISS, the anomaly to the baseline average to get the absolute temperature at any point in time. This method is not perfect, as the variability of the anomalies may change as stations come and go (or are otherwise modified) within each gridcell, but it is less sensitive to station changes than other methods.
But, until we know how NCDC solves all these issues my take home message from Anthony’s post is that comparing absolute temperatures over time is extremely non-robust—something not really emphasized in the NOAA press release and resulting media coverage.
Just my two cents.
-Chip
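A minimal Python sketch of the grid-then-anomaly approach Chip outlines above (the 2.5-degree cell size, the baseline handling, and all station values are assumptions for illustration; this is not NCDC’s actual procedure):

 # 1) assign stations to grid cells, 2) use each cell's baseline-period mean,
 # 3) express the current month as anomalies from that baseline, 4) average cells.
 from collections import defaultdict

 def cell_of(lat, lon, size=2.5):
     """Map a station to a lat/lon grid cell of `size` degrees (assumed size)."""
     return (int(lat // size), int(lon // size))

 def gridded_anomaly(obs, baselines, size=2.5):
     """obs: list of (lat, lon, temp) for one month; baselines: {cell: baseline temp}."""
     cells = defaultdict(list)
     for lat, lon, t in obs:
         cells[cell_of(lat, lon, size)].append(t)
     anomalies = [sum(ts) / len(ts) - baselines[c]
                  for c, ts in cells.items() if c in baselines]
     # Unweighted cell average; a real calculation would area-weight the cells.
     return sum(anomalies) / len(anomalies) if anomalies else float("nan")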

August 9, 2012 5:13 pm

Maus: “I think you rather optimistically underestimate ingenuity. From the recent TOBS discussion we’ve learned that the climate folks haven’t been able to read a thermometer and sort out the difference between the midpoint of a range and an average for nearly four decades. And that’s completely aside the notion that the atmosphere is an active ‘heating’ source based on albedo corrected black-body models of the Earth as a lightbulb. That is, a hollow sphere completely enclosing the sun at a distance of 2AU. And this rather than anything trivially or even in the neighborhood of correct by modelling the average temperature as lit by the sun from, you know, the side at a value 86K greater for the irradiated hemisphere on a tidally locked sphere. I’d love to join you in your enthusiasm, but if the entire field cannot sort out basic mathematics or how to read numbers off a dial for these many decades? I’ll lay my bets on the continued success of NOAA keeping two sets of books.”
Agreed, but the new data set has made the temperature illusion that much harder to pull off; they need to reconcile two sets of data now if the newer, more accurate, data set becomes common knowledge (basically share this post as far and wide as you can). As I said, the need to explain the difference (in terms of a historical ‘high’ as compared to an actual low) becomes key.
I vote for a monthly update, if not a dedicated page on this site for USCRN – given this site’s ranking it will be easily found…

Darren Potter
August 9, 2012 5:16 pm

My two cents.
I am somewhat disappointed to learn the following: “No stations are near any cities.”
Would it not be prudent to have an equal number of stations located in cities for the purpose of monitoring the Urban Heat Island effect?
Wouldn’t this information be useful for coming up with scientific/statistical bias values that could be used to correct for the Urban Heat Island effect on non-USCRN stations? Especially for working with historic values where a station was encroached upon by an expanding city.
Wouldn’t this information also be useful for predicting the impact of the growing number of expanding cities?
Such data could help us understand when the density of a city starts to negatively impact itself: more people, more air conditioners, more local heat, requiring air conditioners to run longer or be more powerful (in BTUs transferred), and thus even more local heat generated.

John F. Hultquist
August 9, 2012 5:19 pm

TC in the OC says:
August 9, 2012 at 10:34 am
“In the 76 years since July 1936 we have been told over and over that CO2 has increased exponentially . . .”

Up? Yes. Exponentially? That seems to be a stretch. See here:
http://www.esrl.noaa.gov/gmd/ccgg/trends/
Not that it apparently makes any difference. See here:
http://notrickszone.com/2012/08/07/epic-warmist-fail-modtran-doubling-co2-will-do-nothing-to-increase-long-wave-radiation-from-sky/
~ ~ ~ ~
In Ellensburg, WA the Max. temp. on Tuesday was 103 °F. and today it is 86 °F. At this rate of cooling we expect local lakes to freeze over by next Wednesday. Thursday at the latest.

Darren Potter
August 9, 2012 5:32 pm

FijiDave says: “As the furore on temperatures is down to hundredths of a degree,”
Which once again brings up the whole issue of AGW Alarmists predicting future temperatures out to hundredths of a degree, based upon current temperatures that are barely accurate to a degree, and past temperatures that are doubtfully accurate to several degrees.
Question for historical scientists: Has there been any investigation into the accuracy of thermometers of the 1800s & 1900s? Were the thermometers of the past accurate at one end, both ends, linearly accurate, or all over the place by varying degrees?

August 9, 2012 6:00 pm

Look at http://www1.ncdc.noaa.gov/pub/data/cirs/state.README for a description of methodology.

cms
August 9, 2012 6:53 pm

Sorry, Jim G. Take my word for it or Google it: it is Karl. Also, you have not done even the basic research on Marx and the quote you are fond of. http://en.wikipedia.org/wiki/From_each_according_to_his_ability,_to_each_according_to_his_need You will note his analysis has nothing to do with yours.

Matt
August 9, 2012 6:58 pm

I apologize if, in the flurry of comments, I missed this proposal. It would seem to me that the best method to settle the question of July record temperatures would be to look at the stations that existed in 1936, continue to report, are well sited, and are not affected by urban effects. I feel this would be superior to comparing new networks to old ones, or attempting to somehow correct for siting, elevation, or sensor differences. While the new, well-sited stations will have great long-term benefits, I feel using them solely as a justification for or against a record temperature for a month in which they did not exist is not valid. If indeed this becomes the hottest month on record, then when the question comes up again in 2018 we’ll have highly accurate sensors deployed to either support or refute that conclusion, but not against 76-year-old records.

Simon
August 9, 2012 7:02 pm

The two sets of weather stations are in different places. Therefore the averages are different. Nick has calculated the average altitude of the two sets and the historical network set was on average 178 m higher. NOAA reports the historic-set average so that it is directly comparable with previous years. [Snip. Do not do that again. ~dbs, mod.]

Editor
August 9, 2012 7:57 pm

Anthony reports:
Subject: Undeliverable: request for methods used in SOTC press release
Your message did not reach some or all of the intended recipients.
Oh my. I was going to suggest they might have meant info@…. but at their contact page http://www.ncdc.noaa.gov/oa/about/ncdccontacts.html they have names like ncdc.orders and ncdc.webmaster, so it appears to me that ncdc.info has passed into history.
Maybe you’ll have to go all formal at http://www.rdc.noaa.gov/~foia/index.html
OTOH, Tom Karl promises at http://www.ncdc.noaa.gov/oa/about/welcomefromdirector.html that “As stated above, the Center is a service organization. I invite you to explore our climate, radar, and satellite resources. We continuously endeavor to improve the services provided and welcome your comments and suggestions. I can assure you we review each one. Please direct your comments and suggestions to ncdc.webmaster@noaa.gov. ”
At the very least, the ncdc webmaster would appreciate hearing that there are some references to ncdc.info they haven’t purged yet.

HowardG
August 9, 2012 9:22 pm

Bob Koss (at August 9, 2012 at 12:00 am) asked if the paired stations could be calculated as single sources.
I took the liberty of doing that and found the result was a negative 0.18°F correction, making the July average, using the NOAA USCRN data, 75.3°F vice 75.5°F.
Also, the two stations in Goodwell OK (in the western panhandle) are about 3 miles apart and should also be considered a pair, making 8 CONUS pairs. All the individual paired data was very similar, as would be expected. I simply used the average of each pair’s data as a single station value.
To Moderator: Excel File available for the asking. I simply made a copy of the existing worksheet and adapted it so you can switch between the two. And converted some of the fixed data to calculated and highlighted where I changed formulas.
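A minimal Python sketch of the pair-collapsing step HowardG describes (station names, temperatures, and the single pair below are illustrative placeholders, not his actual spreadsheet values):

 # Average each co-located pair into one value before taking the CONUS mean,
 # so that paired stations are not double-counted.
 july_means = {
     "Goodwell_OK_1": 79.4, "Goodwell_OK_2": 79.6,   # ~3 miles apart -> one pair
     "Stillwater_OK": 81.2,
     "Boulder_CO": 70.1,
 }
 pairs = [("Goodwell_OK_1", "Goodwell_OK_2")]

 def conus_mean_with_pairs(temps, pairs):
     paired_names = {n for p in pairs for n in p}
     values = [sum(temps[n] for n in p) / len(p) for p in pairs]        # one value per pair
     values += [t for n, t in temps.items() if n not in paired_names]   # unpaired stations
     return sum(values) / len(values)

 print(round(conus_mean_with_pairs(july_means, pairs), 2))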

August 9, 2012 9:22 pm

It would seem that the secondary email address is the one to use
Climate Services and Monitoring Division
NOAA/National Climatic Data center
151 Patton Avenue
Asheville, NC 28801-5001
fax: +1-828-271-4876
phone: +1-828-271-4800
email: ncdc.info@ncdc.noaa.gov
To request climate data, please E-mail:ncdc.orders@ncdc.noaa.gov

Brian H
August 9, 2012 9:34 pm

Alan D McIntire says:
August 9, 2012 at 5:47 am
I’m confused by “monthly mean” and “monthly average” . “MEAN” IS the arithmetic average.
Do you mean “monthly median” = 1/2 (Hi + Lo) for monthly average?

Damn straight.
That’s how I learned it in school, and I’m fracked if I’ll buy the smearing of terminology necessary to cover the AGW climatology rot.
Median is midpoint. Mean is total of all data points divided by the number of data points. No fuzzing allowed.

August 9, 2012 9:59 pm

did anyone answer wayne’s comment? It is very important. Here it is again:
Anthony,
■ Temperature is measured continuously and logged every 5 minutes, ensuring a true capture of Tmax/Tmin
That is why it is hotter in 2012 than in the 1930s… they were not measuring Tmax every five minutes in the ’30s. I have downloaded the Oklahoma City hourly records daily since June 22nd, and never was the highest hourly maximum what was recorded as the maximum of the day; the recorded maximum was consistently two degrees Fahrenheit greater than that of the highest HOUR, but evidently they count 5-minute mini-microbursts of heat today instead. I guess hourly averages are not even hot enough for them (yeah, blame it on CO2). That, by itself, invalidates all records being recorded today to me; I don’t care how sophisticated their instruments are… the recording methods themselves have changed, and anyone can see it in the “3-Day Climate History” hourly readouts given for every city on their pages. Don’t believe me? See for yourself what is going on in the maximums. Minimums rarely show this effect, for cold is the absence of thermal energy, not energy which can peak up for a few minutes; maximums are affected much more than cold readings.
You’re a meteorologist, how do you see this discrepancy?

August 9, 2012 10:23 pm

Matt, your idea about “looking at the stations that existed in 1936, continue to report, are well sited, and not affected by urban effects” to determine if there is any warming since 1936 would not account for the problem “wayne” brought up about the way new sensors can measure Tmax as a short-duration burst, while the old sensors could not. This suggests to me that there is no way to determine if it is warmer now than in the 1930s (unless, of course, some stations maintained the same technology from the 1930s right up to the present (do any?) and also followed the criteria you suggested).

Len
August 9, 2012 10:41 pm

I fear that the new data will somehow be corrupted to produce the same spurious warming as the old station data. I can also almost see a new paper from NOAA “A new calibration of the…”
Anthony, your unadjusted data will be priceless when they adjust the new data to show that the new dataset shows warming too.
Thanks for a nice analysis and showing another portion of the truth.

Jeff Alberts
August 9, 2012 10:59 pm

It’s really too bad that averaging temperatures is meaningless.

August 9, 2012 11:02 pm

cols 57 — 63 [7 chars] T_MONTHLY_MEAN
The mean temperature, in degrees C, calculated using the typical
historical approach of (T_MONTHLY_MAX + T_MONTHLY_MIN) / 2

WHAT ?!?!?!? This is supposed to be the USCRN: US Climate REFERENCE Network.
A network of stations that measures temperatures every 5 minutes. A high-priced network of the highest quality stations. Most scientists and statisticians would expect “Mean” to be the centroid of all temperatures sampled, approximately 8640 data points / month = (12 points/hr * 24 hr/day * 30 day/month).
But NOAA has the GALL to define “Mean” to be the mid point between the single warmest and single coldest measured temperature in the month ??
That should not be called T_MONTHLY_MEAN.
Nor should it be T_MONTHLY_MEDIAN
Maybe it should be called T_MONTHLY_MIDDLE.
but I think truth in advertising requires it to be T_MONTHLY_MUDDLE.
Truly, I am shocked that anyone, much less the curators of the Climate Reference Network, would apply the term “Mean” to only two outlier points in a dataset of 8640 data points. NOAA credibility goes “Crash and Burn”.

Bill
August 10, 2012 2:29 am

Re: NCDC email address…
From the other article “Dear NOAA and Seth, which 1930′s were you comparing to when you say July 2012 is the record warmest?” the USA graphics indicate the correct email address is just noaa.gov without the leading ncdc:
ncdc.info@noaa.gov
NOT the @ncdc.noaa.gov
Cheers and hope this helps

Nick Stokes
August 10, 2012 2:38 am

Stephen Rasey says: August 9, 2012 at 11:02 pm
“But NOAA has the GALL to define “Mean” to be the mid point between the single warmest and single coldest measured temperature in the month ??”

No, you’ve read it wrongly. T_MONTHLY_MAX is the average for the month of the daily maxima, and T_MONTHLY_MIN is the average of the daily minima. T_MONTHLY_MEAN is the mean of those two numbers.
The fact is that these files have over a century of data. For the majority there are not 8640 data points, but just 3 per day (Max, Min and temp at time of reading). That’s all we have. For modern instrumentation, they emulate that measure. Otherwise comparison with the past would not work.
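A minimal Python sketch of the monthly quantities as Nick describes them (the daily values are illustrative, and the variable names are mine, not the USCRN file format):

 # T_MONTHLY_MAX is the mean of the daily maxima, T_MONTHLY_MIN the mean of the
 # daily minima, and T_MONTHLY_MEAN the midpoint of those two monthly averages.
 daily_max = [33.1, 34.0, 31.5, 35.2]   # daily Tmax for the month (made up, deg C)
 daily_min = [18.0, 19.2, 17.4, 20.1]   # daily Tmin

 t_monthly_max = sum(daily_max) / len(daily_max)
 t_monthly_min = sum(daily_min) / len(daily_min)
 t_monthly_mean = (t_monthly_max + t_monthly_min) / 2

 print(round(t_monthly_max, 2), round(t_monthly_min, 2), round(t_monthly_mean, 2))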

Gail Combs
August 10, 2012 4:08 am

Alex Heyworth says:
August 9, 2012 at 12:32 am
Further to your reply to Esko, Anthony, there is a reason why they used to be called airfields (in Anglo usage, at any rate – I don’t know about US usage for the 30s and 40s). Remember all that footage of Spitfires bumping over the turf?
________________________________
The Raleigh-Durham International Airport didn’t even exist in 1934 and 1936: the General Assembly of North Carolina chartered the Raleigh-Durham Aeronautical Authority in 1939, and the name was changed in 1945 to the Raleigh-Durham Airport Authority. Before that there was only a tiny airfield near the city of Raleigh.

Gail Combs
August 10, 2012 4:46 am

JJB MKI says:
August 9, 2012 at 4:11 am
Stokes
Last time I looked, the GISS data set for England was constructed from a selected homogenised set of over 70 stations in the early 20th century, spanning both rural and urban locations, narrowing down to about a dozen stations, all located at busy airports in the present day, with the information presented as anomalies. No obvious reason given for the data cull btw, as the culled stations did not stop reporting. By your own logic, it would be fallacious to use GISS to claim warming over this period.
_______________________________
To add to that, see the Station Dropout vs Temperature graph. The number of stations used to determine global warming has not been constant over the entire 170-year period.

Digging in the Clay The ‘Station drop out’ problem
… From 1880 onwards there is a more or less linear increase in the no. of reporting stations (i.e. stations that have raw data) from 1880 to about 1950 when the number reaches a little over 3300. After this point, within the space of four years, there is a sudden expansion in the number to over 4500, which then reaches a peak of 5348 stations in 1966. Its worth mentioning that there are 7250 records in the GHCN station inventory file (v2.temperature.inv) some of which are for ‘Ships’ but it is clear from this peak count that there isn’t raw temperature data available in the GHCN v2.mean file for all the stations listed in the NOAA GHCN station inventory file.
After peaking in 1966 the total raw data station count then declines in a more or less linear fashion to about 3750 in 1989. Over the next couple of years there is a sudden ‘drop out’ of stations from the total station count to about 1900 in 1992….

Therefore there is no reason not to use the new pristine high quality station 1&2 data instead of the munged-up data set with a large number of 3&4&5 stations. Unless of course the whole objective of the exercise is something different than reporting the weather.
“The whole aim of practical politics is to keep the populace alarmed (and hence clamorous to be led to safety) by menacing it with an endless series of hobgoblins, all of them imaginary.” ~ H. L. Mencken
And do not forget GISS was caught data tampering before: GISS caught red-handed manipulating data to produce Arctic Climate History Revision.

Gail Combs
August 10, 2012 6:17 am

Another set of interesting graphs at Digging in the Clay show the number of North & Central America stations and how that changes over time and how much of the “World” is actually just North America temperature. Graph 1 and Graph 2
For North America reading off the graph, the data set goes from ~ 200 data stations in 1840 to ~ 100 stations now with a maximum of over 2000. IF GISS is using ~ 100 stations now, how is that different than the U.S. Climate Reference Network (USCRN) consisting of 114 stations?
If they ditched over two thousand data stations why would they not now be using this highest quality data set?
Also, as Anthony has suggested, if all the data adjustments were done correctly and the station siting is correct, then there should be very little difference between these two data sets. After all, these data sets are BOTH supposed to give us the temperature of the USA.
Then there is the fact that the monthly surface temperature updates for the contiguous U.S., based upon 280 International Surface Hourly (ISH) stations which have reasonably complete temperature records since 1973, and the “TEAM’s” calculated data sets are diverging. graph
differences graph
SO for those who are suggesting an apples-and-oranges problem: a separate analysis, done by someone besides Anthony, picks up the same problems.

….A few of the findings:
1) Essentially all of the +0.20 deg. C/decade average warming trend over the U.S. in the last 40 years computed from the CRUTem3 dataset (which the IPCC relies upon for its official global warming pronouncements) evaporates after population adjustment (no claim is made for countries other than the U.S.)
2) Even without any adjustments, the ISH data have a 20% lower warming trend than the CRUTem3 data, a curious result since the CRUTem3 dataset is supposedly adjusted for urban heat island effects….

G David
August 10, 2012 6:33 am

I ran this past a Warmist and his reply was:
” that’s an oranges / apples comparison. You’re comparing 76-year-old records from all US weather stations with records from a new, specialised climate monitoring network. It’d be like measuring the length of a football field with an old fibreglass tape, and comparing it to the measurement made with a high-precision laser rangefinder. You’re going to see a difference in the reported number, despite them being measurements of the exact same thing.
Have you adjusted (I believe “homogenised” is the term climate scientists use) the USCRN results to match those from the old temperature series? Or vice versa?
Yes, the USCRN network is designed to give the best possible results, free of any local influences (other than the weather!). But, given that we’re interested in the trend here, homogenised results from the old network (which correct, as much as possible, for those “errors, biases, adjustments, siting issues, equipment issues, and UHI effects” you talk about) still give us useful information about the long-term trend, from prior to the USCRN coming into existence (which was only 12 years ago, IIRC).
Or is it your position that, because we didn’t have a super-high-quality climate monitoring network in place, that we should completely ignore any climate data gathered prior to the USCRN being commissioned?
Never mind that the USHCN data correlates very well with the satellite data, which is completely unaffected by those problems with the surface data you’re talking about…”

DCA
August 10, 2012 6:44 am

I was looking at the NOAA site for adjustments,
http://www.ncdc.noaa.gov/oa/climate/research/ushcn/ushcn.html#KDK88
For urbanization they reference Karl et al. 1988, but when searching Google Scholar the link gives an “Error – Page Not Found”. Does anyone have a valid link for the paper?

Bill Illis
August 10, 2012 7:05 am

The UAH July satellite temperatures for the Lower US 48 are out and do not show anything remarkable at +0.91C.
For those that do not know this, UAH lower troposphere does provide a very good match to the NOAA/NCDC’s temperature record for the Contiguous US.
The NOAA/NCDC has, however, a 27.2% higher trend than UAH. This is “another” check on the NCDC’s temperature adjustments which are obviously too high and are not justifiable.
This chart shows there was nothing noteworthy about July, 2012. The anomaly was down from a few months earlier and UAH has a much lower anomaly in July.
http://s10.postimage.org/rtykuwrgp/US_UAH_vs_USHCN_V2_July2012.png
And then the daily Global UAH satellite temps back to 2010 – showing nothing remarkable happening at all last month – flat.
http://s11.postimage.org/8j5gu2nf7/Daily_UAH_Temp_July2012.png

pcknappenberger
August 10, 2012 7:23 am

Steve (at August 9, 2012 at 6:00 pm),
I am pretty sure that is not how they compute the national average. The national average *used* to be calculated in that manner by aggregating the individual climate division data, but NCDC stopped doing it that way a number of years ago. Consequently, the national average calculated from climate divisions does not match the national average that NCDC commonly reports (the underlying temperature data is not the same…the climate division datasets only use a TOBS adjustment).
So the national data (code 110) in the file whose README you point to does *not* derive from the other data in the file. I thought that NCDC made an annotation about this fact after I discussed it with them several years ago, but I can’t seem to find it.
As you might guess, the national average temperature since 1895 warms up quite a bit more slowly if aggregated from the climate division data than in the NCDC “official” national temperature record–the construction about which Anthony is inquiring (but we do know that it is constructed using the fully adjusted USHCN v2 data).
-Chip

August 10, 2012 8:04 am

But….. But…. When Obama became president all this was supposed to change! /sarc

wayne
August 10, 2012 8:31 am

Harris at August 9, 2012 at 9:59 pm
Thanks, Tom, for taking the moment to understand what I was saying; it seems few grasped it. That aspect began worrying me when I read a story, I think here at WUWT last year, where an anomalously high record in the climate record was recorded in Kansas at 1 a.m. at 122°F. It was called a microburst and only lasted a few minutes, but our new instruments are so sensitive in time, the temporal axis, that this was actually recorded as the high for the day. The hourly average was much lower – I can’t remember exactly, but it was something like 101°F. Recording that freak temperature for historic reasons is fine, but I care little in the climate sense what a five-minute slice of temperature was.
So I started looking at this hourly in June here in OKC, and sure enough, the maximums were usually, not always, but usually 1 to 2°F higher than the highest hourly temperature. Once again, that is fine for history books of curious peaks, but the shame is that this is the temperature that is carried on into the climate records, not the highest hourly temperature. That to me is wrong and is another factor that is skewing our ‘maximum highs’ and ‘minimum lows’. They are now basically instantaneous, where in the past they were hourly or even from bi-daily readings.
One day here in July NOAA recorded 113°F as the maximum, but I watched continuously for the two-hour period from my patio and it never got over 111°F, even in the shade above my 200 sq. ft. concrete patio that is in full sunlight. Talk about a contaminated reading – my patio could not be worse! That is when I knew something was amiss. I don’t question that at the airport for a few minutes it actually reached 113°F; I’m a sailplane pilot and I realize this phenomenon of hot downdrafts (read: potential temperature) does frequently occur, but they only last minutes, and when averaged into an hour they are rather meaningless… yet that’s what the nightly news will report and that is what you will see in the climate records.
In the 1930s they probably were not even aware of this effect, but our current high-speed, high-accuracy temperature devices detect these bursts.
Well, thanks for realizing the magnitude this can have on our climate records. Are we really 1°C warmer now than then? Are we having more new highs? I doubt it, and have evidence to back that up; it’s in the NOAA hourly records. It’s all a matter of ‘time’ and the way the records are being recorded.

Rattus Norvegicus
August 10, 2012 8:35 am

Here is how it works, from an email sent in reply to my question:
station data => divisional data => area weighted to the larger climate regions => national number
And there you have the method.
REPLY: Show the whole email. I simply don’t trust you given your history here. – Anthony

Rattus Norvegicus
August 10, 2012 8:37 am

Yes, it was using the TCDD, they haven’t made the transition yet.
REPLY: I’ll take NOAA’s reply as final. NSIDC once slipped in a new algorithm with nobody at NSIDC brass noticing; we bloggers had to point it out. – Anthony

August 10, 2012 8:41 am

Gail Combs,
Station dropout! Now that makes me nostalgic. Try this data instead: http://curryja.files.wordpress.com/2012/07/fig2.jpg

August 10, 2012 8:43 am

DCA,
The Karl et al reference is out of date. USHCN v2 doesn’t have any explicit UHI adjustment, and relies (albeit imperfectly) on the Pairwise Homogenization Algorithm to remove any urban bias.

Rattus Norvegicus
August 10, 2012 8:57 am

Anthony,
Here:
“Hi John,
No problem. TCDD is still being used this year…we expect to transition to GrDD next year.
Right now, the station data ===> divisional data ===> area weighted to the larger climate regions ===> national number.
Have a look at
http://www.ncdc.noaa.gov/temp-and-precip/us-climate-divisions.php
REPLY: That’s not a complete email. What was the request, and who sent the reply? Show the whole email, please, it is public record since it was NOAA correspondence. – Anthony

August 10, 2012 9:01 am

For those of you who are confused about “mean” and “average” and “median”: Mean is the value of (Tmax+Tmin) / 2. Average is the sum of all Tmax readings plus the sum of all Tmin readings divided by the total number of readings. Median is the value of the mid point of a set of values.

greymouser70
August 10, 2012 9:14 am

For those of you confused by Mean, Average and Median I offer the following. Mean is simply (Tmax+Tmin)/2. Average is the sum of all Tmax plus the sum of all Tmin divided by the total number of data pairs. Median is the middle number of a set of data points. If you have an even number of points (e.g. 6), then the median is the average of the values of points three and four. If you have 7 data points then the median is the value of point 4.

greymouser70
August 10, 2012 9:17 am

Actually it should be: mean is the second definition and average is the first definition

Rattus Norvegicus
August 10, 2012 9:19 am

It was Scott Stephens, and that is the entire contents of the reply.
[Then show the context with earlier emails. ~dbs, mod.]

pcknappenberger
August 10, 2012 9:54 am

The NCDC national average is *not* computed from the same data used to produce climate division (CD) averages. So Rattus’ chain is incorrect… unless NCDC is computing an alternate set of CD averages using fully-adjusted USHCN v2 that they are not reporting.
That the national average is not derived from the traditional NCDC CD data can be verified from the dataset itself.
The data is here and the areal weightings are here. When I calculate a national average for the US for July based on the weighted regional averages, I get 78.0 deg F, which is different from the 77.6 that is being advertised.
Using NCDC’s CD data produces a different national timeseries than the one that NCDC makes available.
-Chip
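A minimal Python sketch of the area-weighted aggregation Chip is checking (region names, July temperatures, and weights are placeholders for illustration, not the actual NCDC climate-division data or weightings file he links):

 # Divisional/regional averages combined with areal weights into a national number.
 regional_july = {          # deg F (made up)
     "Northeast": 73.5, "Southeast": 81.0, "South": 84.2,
     "Midwest": 78.8, "West": 75.4,
 }
 areal_weight = {           # fraction of CONUS area (made up; must sum to 1)
     "Northeast": 0.08, "Southeast": 0.12, "South": 0.18,
     "Midwest": 0.27, "West": 0.35,
 }

 national = sum(regional_july[r] * areal_weight[r] for r in regional_july)
 print(round(national, 1))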

pcknappenberger
August 10, 2012 10:06 am

Agreed, Anthony!
-Chip

August 10, 2012 10:14 am

@ Nick Stokes says: Aug 10 at 2:38 am
The fact is that these files have over a century of data. For the majority there are not 8640 data points, but just 3 per day (Max, Min and temp at time of reading). That’s all we have. For modern instrumentation, they emulate that measure. Otherwise comparison with the past would not work.
No, Nick, you have it wrong. The subject is the new USCRN, the Climate REFERENCE Network created with automation to record temperatures every 5 minutes. The USCRN was created to correct the errors of past practice! With the USCRN, we should be doing things right. We should be using all 8640 readings each month, not just the min and max of yesterday.
What you described is a red carpet to adjusting USCRN data to match the much longer historical records of COOP. We must not do that. With USCRN we do it right from day one.

wayne
August 10, 2012 10:33 am

@greymouser70, I think you meant the mid-range, or average of the extremes, when referring to (Tmax+Tmin)/2. A ‘mean’ and a common ‘average’ are defined as the same thing: the sum of all values divided by the number of values.

wayne
August 10, 2012 11:07 am

@ Nick Stokes says: Aug 10 at 2:38 am
The fact is that these files have over a century of data. For the majority there are not 8640 data points, but just 3 per day (Max, Min and temp at time of reading). That’s all we have. For modern instrumentation, they emulate that measure. Otherwise comparison with the past would not work.

No Nick, that is not the same. [my bold] You are assuming that ‘maximum temperatures’ were measured and recorded in the same manner in both periods. I really doubt that each station had someone standing out in a hot field in the afternoon waiting for the ‘absolute’ one-minute maximum to occur. But that is exactly what these platinum resistance wire temperature measurements do. They have basically no thermal mass compared with old glass thermometers and can record instantaneous maximum (or minimum) temperatures no matter how short the duration of that “maximum” is. Many of the new “record temperatures” are merely better, more sensitive, and much faster-to-equalize temperature measurements. It’s just simple sense that this is actually occurring.

highflight56433
August 10, 2012 11:24 am

Looking at the map Anthony posted, it appears those stations are evenly distributed as well as being quality assured. However, no doubt if a selection were taken with a higher percentage of stations per area in the Midwest and Southeast, and a lower percentage per area in the West and Northwest, then the data would be weighted toward warmer areas of the country.
However, the people misrepresenting the truth are usually slow to produce how they got their numbers, as we frequently see in this field of science.

JJ
August 10, 2012 12:35 pm

Simon says:
The two sets of weather stations are in different places. Therefore the averages are different. Nick has calculated the average altitude of the two sets and the historical network set was on average 178 m higher. NOAA reports the historic-set average so that it is directly comparable with previous years.

If USCRN and USHCN are not comparable because they are two sets of weather stations in different places, then how are USHCN 1936 and USHCN 2012 comparable? They are also two sets of weather stations in different places…

August 10, 2012 12:36 pm

So, based on what wayne is saying, even if temperatures in the 30’s were exactly the same as now, today’s would be recorded as higher because we can now record short-duration peaks that would have been missed in the 30s.
Do people agree with this?
If so, this tells me that we simply do not know if we are now warmer or colder than in the 30s, unless there are some stations that maintained the same type of equipment between the 30s and today – are there?
I am told the accuracy of the temps in the part of the record where they used thermometers was +/- 0.5 deg C. Is that right? What is the accuracy now? When were the sensors in the old COOP/USHCN network moved over to the new, more accurate sensors?
Sorry that these are likely old questions, but I am about to give a talk and want to have these things straight in my mind.

JJ
August 10, 2012 12:55 pm

G David says:
I ran this past a Warmist and his reply was:
” that’s an oranges / apples comparison. You’re comparing 76-year-old records from all US weather stations with records from a new, specialised climate monitoring network.

That is true. It is also true that NOAA is comparing 76 year old records from the 1936 network against records from a different, newer 2012 network.
“It’d be like measuring the length of a football field with an old fibreglass tape, and comparing it to the measurement made with a high-precision laser rangefinder.”
Actually, it is more like measuring a football field in 1936 with an old kinky metal tape, measuring it again in 2012 with an old frayed fiberglass tape and calling the comparison “good” … and then measuring it again in 2012 with a laser rangefinder and complaining about the comparison.
“Have you adjusted (I believe “homogenised” is the term climate scientists use) the USCRN results to match those from the old temperature series?”
Probably – because ‘adjusting’ better data to match worse data is what they tend to do. Can’t imagine that they have forsaken this opportunity.
“Or vice versa?”
Not possible. You can “adjust” inconsistent data to “match” better data all you want. It will still be inconsistent.
” But, given that we’re interested in the trend here, …”
Dutiful regurgitation of the standard warmist talking point, but we are not talking about trend here. We are talking about a rank comparison.
“Or is it your position that, because we didn’t have a super-high-quality climate monitoring network in place, that we should completely ignore any climate data gathered prior to the USCRN being commissioned?”
Completely? No. But we should absolutely ignore it when it is not sufficient to the task at hand. That would include
“Never mind that the USHCN data correlates very well with the satellite data, …”
No it doesn’t. The different surface records give different rankings from each other, which are different from the satellite rankings – rankings being what we are talking about now. Of course, they also give different trends…

JJ
August 10, 2012 1:09 pm

Tom Harris says:
So, based on what wayne is saying, even if temperatures in the 30′s were exactly the same as now, today’s would be recorded at higher because we now can record short duration peaks that would have been missed in the 30s.
Do people agree with this?

I’m not sure that is a valid argument. The new sensors may have a faster response time, but the response time of the system is determined by the sampling interval. Given that the recording interval is given as 5 minutes, that is likely the sampling interval as well.
So, how does the response time of a max/min LIG thermometer compare to 5 minutes?
There are likely numerous other instrumentation issues that could be affected by transient temp spikes. One of the fundamental deficiencies of the historic networks is the reliance on the faulty (Tmax+Tmin)/2 = Tmean idea of average temperature. That is very sensitive to transient spikes, and also to more diffuse differences in temp distribution. Everyone knows that this deficiency exists, but no one wants to face it because they want to use those data.
It is very similar to the way that no one faces the fact that we are arguing global heat content in terms of surface temperature…

August 10, 2012 1:31 pm

So NOAA is using “Mean” and “Average” backwards?
I can sit still for “Average” being (Min+Max)/2.
The Mean is almost always meant to be a centroid of all measurements. So when you are making two measurements a day, min and max, OK, I can grudgingly accept “mean”.
But that is not the convention adopted by NOAA?
As I understand it,
T_MONTHLY_MEAN = (T_MONTHLY_MAX + T_MONTHLY_MIN) / 2
(which really should be written
T_MONTHLY_MEAN = (T_MONTHLY_AVGMAX + T_MONTHLY_AVGMIN) / 2 )
T_MONTHLY_AVG = the average of the 24 × (days in month) hourly average (or mean!) temperature readings.
So it doesn’t apply when all you have is a min/max station? In my book, this is closer to the true meaning of “mean”.
Mean ought to be the integration of all data points in the record, divided by the time frame. It should never have been adopted as the midpoint of the min–max outliers on any time scale. What’s done is done. This confusing and irregular terminology is another example of how poorly the whole system has been set up.

wayne
August 10, 2012 1:32 pm

Anthony, very kindly, I have to disagree that this is a non-issue. The reason is looking at the final results and what is actually coming out of the climate system. I have used two sources:
http://w1.weather.gov/data/obhistory/KOKC.html
This is the last three days hourly data. You have to be quick to capture this level of data for it doesn’t seem this data further back is available to the public.
and
http://www.srh.noaa.gov/oun/climate/get_f6.php
For monthly maximum and minimum temperatures with other data.
For August 2012 so far:

 dy max min …..
 --  ----   ----  ------------------
 1 112  79  …
 2 112  82  …
 3 113  84  …
 4 109  80  …
 5  99  77  …
 6 105  77  …
 7 106  76  …
 8 101  76  …
 9 103  71  …

This is what is reported and I assume this is what is passed on up stream to create the national climatology data.
Now here is what you would get if you merely took the hourly maximum from the hourly data:

 dy max  …..
 --  ----   ----
 1 111
 2 111
 3 112
 4 108
 5  97
 6 104
 7 104
 8  99
 9 102

Now from what I can tell so far, it is the top version that gets passed along, and this must be the maximum from some small slice of an hour.
That is all I wanted someone to realize. I think it is relevant when you are comparing maximum temperatures far back in time, when sub-hourly measurements were not even being made. How would you ever reconcile this except by ignoring these instantaneous temperatures and sticking with the hourly averages?
So I think it is *reported* warmer in 2012 compared to the 1930’s partially due to something this simple. This is not to marginalize what this thread is about but just to add credence to your point.
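A minimal Python sketch of the comparison wayne is doing by hand, using the numbers from his two tables above:

 # Reported daily maximum vs. the highest hourly reading for the same day (deg F).
 reported_max = [112, 112, 113, 109, 99, 105, 106, 101, 103]
 hourly_max   = [111, 111, 112, 108, 97, 104, 104,  99, 102]

 diffs = [r - h for r, h in zip(reported_max, hourly_max)]
 print(diffs)                     # [1, 1, 1, 1, 2, 1, 2, 2, 1]
 print(sum(diffs) / len(diffs))   # on average ~1.3 F of sub-hourly excess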

August 10, 2012 1:40 pm

I still don’t understand.
Let’s say the new thermocouples can pick up temperature heat bursts of, say, 1/10 of a second duration (Does anyone know how short a spike they can resolve?), then, if no filtering was done on the data, a 1/10th second heat spike could make it through into the record of the highest temperature that day.
So, I assume (but don’t know – does anyone?), that they must filter the data so that temperature spikes less than a certain time duration are filtered out. Is this so? If so, how long does a spike have to last before it is accepted into the final data for that station?
If the thermal inertia of a thermometer in the 30s was such that it effectively took an average temp over a, say, 3 min period, then the only way one could compare those readings with today would be if today’s data was filtered so that very short duration spikes were removed and the data averaged over the 3 min, with all the short time duration bursts removed. Do they do that?

August 10, 2012 1:43 pm

If the Stevenson boxes used the Six registering thermometer, it might be interesting to hear how these were reset. I get that they would only register one maximum and one minimum temperature per day – the limits that the liquids were pushed by internal pressures for that period. Still, a Wiki article points out that the thermometer was prone (actually, “notorious”) for a few design flaws:
From article, “Six’s thermometer”:

The Six’s thermometer is notoriously known for separations in the mercury column, in particular after shipment, though accidental knocks have been known causes as well. Separations can usually be corrected by swinging the thermometer as is done to reset a mercury clinical thermometer;

http://en.wikipedia.org/wiki/Maximum_minimum_thermometer
Beyond any chemistry issues that could affect their function, I can’t help but wonder how human error might have biased a long term record – say, forgetting to shake (or otherwise reset) the thermometer columns back into place every day. I suppose both the same max / min temps would be carried across to the second day, so maybe that’s not such an issue. Just curious.

JJ
August 10, 2012 3:04 pm

Tom Harris says:
I still don’t understand.
Let’s say the new thermocouples can pick up temperature heat bursts of, say, 1/10 of a second duration (Does anyone know how short a spike they can resolve?), then, if no filtering was done on the data, a 1/10th second heat spike could make it through into the record of the highest temperature that day.

Your problem is that you are assuming the sampling rate is the same as the thermocouple response time. If the thermocouple can respond to an event of 1/10 second duration, but is only sampled once every 5 minutes then a 1/10 second event is only going to get recorded about 1 in every 3000 times it occurs – i.e. when it happens to occur at the moment a sample is taken.
Above, someone says the recording rate is 1 record every 5 minutes. Not given is the sampling rate, or the method of aggregating sample values into a record, if there is more than 1 sample per record.
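The arithmetic behind JJ’s “1 in every 3000”, as a short Python sketch (the 0.1-second spike duration is the hypothetical value from the comment, not a sensor spec):

 # Chance that a 0.1-second spike coincides with a once-per-5-minutes sample instant.
 spike_duration_s = 0.1
 sampling_interval_s = 5 * 60          # one sample every 5 minutes = 300 s

 chance_per_occurrence = spike_duration_s / sampling_interval_s
 print(chance_per_occurrence)          # ~0.00033, i.e. about 1 in 3000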

August 10, 2012 3:15 pm

FYI, all, Anthony just told me that, when you count the thermal mass of the PRT case holding the device, the response times of LIG and PRT are not very different. So this seems like a non-issue.

DCA
August 10, 2012 4:17 pm

JJ says:
August 10, 2012 at 12:55 pm
“Actually, it is more like measuring a football field in 1936 with an old kinky metal tape, measuring it again in 2012 with an old frayed fiberglass tape and calling the comparison “good” … and then measuring it again in 2012 with a laser rangefinder and complaining about the comparison.”
As an old land surveyor, I like to mostly lurk, but let me give my two cents worth. Being an old land surveyor, I could measure a football field with the old kinky metal tape (chain) just as accurately as with the laser rangefinder. The reason I say this is that I looked up the accuracy of a laser rangefinder and found it’s ±10 cm, or 0.33 feet.
Real chains, called Gunter’s chains, were used in the 18th and 19th centuries by surveyors, but when the 100- and 200-foot steel tapes were introduced in the early 20th century they were still called chains. Surveyors, at least in Kansas, use feet as the basic unit, but with a base-ten system of tenths and hundredths of feet. This is because most old land deeds use these units, so it’s easy to keep things consistent.
You are right about the use of fiberglass tapes today, but we use them for approximate measurements, and for something the size of a football field we discovered their accuracy is ±0.5 ft. When I first started in the 70’s we still used chains, and I actually did stake out a football field once with a chain, and our accuracy was ±0.35 ft, or almost as accurate as today’s rangefinders. Now I know the old kinky chain will have issues, but there are techniques used to calibrate it to get an accurate measurement. Even the old frayed fiberglass tape that stretches easily can be calibrated if you know what you’re doing.
A better analogy would be to compare the metal or fiberglass tapes to either a theodolite-electronic distance meter (EDM) or the latest and most accurate GPS geosystems. We are able to get ±0.05 ft/1.5 cm for the EDM and ±0.02 ft/0.5 cm for the GPS.
I’m sure everyone gets your point but I thought I’d educate you a little about taking accurate distance measurements. We are also able to get accurate vertical measurements too with a +-1 cm accuracy for both electronic systems.

wayne
August 10, 2012 4:40 pm

Harris says:
August 10, 2012 at 3:15 pm
“FYI, all, Anthony just told me that, when you count the thermal mass of the PRT case holding the device, the response times of LIG and PRT are not very different. So this seems like a non-issue.”
OK, I’ll buy that for now, but it seems most people I know, if they knew what they were really comparing, would hardly care whether the average of the warmest few minutes now was a degree or a fraction of a degree higher than the average of the warmest few minutes back in the ’30s.
I seem to have always assumed that these numbers were at least the warmest average ‘hour’ so those fast fluctuations with warm winds were just averaged out of the record but it seems I was incorrect. Learn something every day.
REPLY:
I had looked at the specs on the PRT, and due to it being encased, it has a thermal mass. From the spec sheet:
Time Constant: 63% of thermal response in 13 sec when immersed from 20°C air into 50°C water flowing at 0.2 m/s
http://www1.ncdc.noaa.gov/pub/data/uscrn/documentation/site/sensors/airtemperature/Descriptions/summarycurrentairtempsensor.doc
So it’s not far off the old LIG response time of 10-30 seconds due to its thermal mass. All three PRT’s have to be within 0.3 of each other for it to be a real value that passes QC.
-Anthony
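A minimal Python sketch of what a 13-second time constant implies for a short heat burst, using a simple first-order lag model (T′ = (T_air − T)/τ); this is an illustration of the spec Anthony quotes, not a model of the actual USCRN processing:

 # A burst much shorter than tau is heavily attenuated; "63% in 13 s" gives tau.
 import math

 TAU = 13.0   # seconds, from the sensor spec quoted above

 def fraction_of_step_seen(duration_s, tau=TAU):
     """Fraction of a sudden air-temperature step the sensor registers
     if the step only lasts `duration_s` seconds."""
     return 1.0 - math.exp(-duration_s / tau)

 for d in (1, 13, 60, 300):
     print(d, round(fraction_of_step_seen(d), 2))
 # 1 s burst -> ~7% of its amplitude; 13 s -> 63%; 5 min -> essentially all of it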

wayne
August 10, 2012 7:31 pm

Thank you Anthony. Didn’t mean to raise a meaningless question. I just always considered some thermal mass, to a certain degree, to be good, for that in itself helps in averaging out fast and spurious peaks and troughs in the temperature readings. Seems everyone else (climatologists) tend to like fast thermometers to maintain those fast fluctuations but that does magnify the extremes on both sides. Thanks again for the input. (Also never thought I would even spend a summer with an electronic thermometer outdoors next to me most of the time, gives you a new perspective on these issues)

JJ
August 10, 2012 8:36 pm

DCA,
Your $0.02 is appreciated. I routinely use both survey grade GPS and a laser theodolite (total station) in my work. It was that laser range finder that I was thinking of above.
The kinky steel tape I was picturing wouldn’t have been an old surveyor’s chain. You old surveyors tend to appreciate your equipment and care for it a bunch better than some of our field crews. Several tenths here and there doesn’t really affect their work, and the battered remains of a dozen tapes in our storeroom are testament to the fact that they know it. The fiberglass? Good grief. If one of those bedraggled 100m reels has more than 297 ft left on it, that’s the “good” one.
That is about how I see weather data. The equipment (either the individual instruments or networks of same) may be capable of X level of accuracy, but that isn’t going to be achieved if the operators understand that the goal of the effort is consistent with a lower standard of rigor …

Carter
August 11, 2012 6:21 am

[Snip. Repeated ‘denial’ comments. ~dbs, mod.]

Mike Bromley the Kurd
August 11, 2012 6:25 am

Even if their assertion was ‘accurate’ it is only accurate by +0.2 degree. Ooooh! Scary, scary, scary, and so far inside the bounds of instrument error as to be meaningless. Basically the SAME RESULT as July 1936. In other words, it took 76 years for all that huffing and puffing of supposed AGW to REWARM the planet to the same temperature. And then Nick Stokes gets his knickers in a knot? C’mon, Nick, you need better tweezers to pick fly shytte out of black pepper. What an inane waste of effort. This is the “warmest ever” that all the warmists are bleating about. Let’s see: 0.2/377K=0.053050397878%. This is so bloody ludicrous as to make a person glad that he is old.

Carter
August 11, 2012 12:02 pm

[Snip. This is a potholer-free blog. Read the archives to see why. ~dbs, mod.]

August 11, 2012 2:35 pm

The analogy used previously with the tape and laser is rather interesting. It would be akin to saying that the old sailing ships were more efficient than the new technology model because they were wind powered and therefore better – forgetting about the number of crew required to handle them. It has already been stated that the temps taken way back in the early 1900s were random at best, taken at different times at worst. Both those effects would vary the temperature results.
Now we have technology making adjustments using algorithms, and the claim stands that this would not make any difference, and yet the new system in place is claimed to be affected by elevation, when in fact it still only records the temps in order to record warming or cooling. That entire argument just demonstrates that cherry-picking by warmists is a preference and does not really require substantiation as long as it shows their required end result. It is truly becoming quite a farce.

August 12, 2012 7:26 am

The Washington Post article on US temperature (Sunday, August 12th) presents a NOAA temperature graph overlaid with a straight line showing a steady increase in temperature from 1900 all the way to 2010. The global cooling betwixt the 40s and 70s, and the flat global temp from about 1998 till now doesn’t show up. Is this a regional issue, or would the better temperature records you are using show this difference?

August 12, 2012 7:27 am

The Washington Post (August 11th) shows a NOAA temperature graph with an overlaid straight-line average which increases steadily from 1900 to 2010. Is it the case that there is merely a regional discrepancy with global temperatures reported elsewhere, namely the cooling during the 40s to 70s and the flat temperatures from about 1998 to 2010?

August 12, 2012 8:10 am

Forget my (just earlier) query. The new improved stations can’t do anything about the earlier NOAA temp records

August 12, 2012 1:43 pm

Looks as if all 3 of the above can be deleted. My bad. But I did send a link to this site to The Washington Post author, and mentioned the 2 degree discrepancy. He acknowledged, and is investigating. I’d still like to know why the NOAA graph (in the Post) shows a steady increase. Is that a problem with their curve fitting? I can’t believe the US has been showing a temp increase for 15 years when the global temp has been flat. (Also, same issue, what about the 1940s to 1970s ?)

Nia
August 13, 2012 6:04 am

Pardon me if this has been mentioned; I couldn’t wade through all the comments. But I don’t see the big deal here — so, the average of a brand new data set doesn’t match the old data set. Wow! Hold the presses! (And do you think this may be why they aren’t using the new data set yet – to do so would require adjustment, and since that’s a dirty word here, I’m surprised you object.)

Kforestcat
August 13, 2012 11:39 am

Dear climatebeagle (August 9, 2012 at 7:17 am) & Anthony’s reply:
You asked whether the 5-minute temperature data is available at NOAA’s site. The technical answer is yes. To get this data:
1) Go to the “Observations” section here: http://www.ncdc.noaa.gov/crn/observations.htm
2) Pick an individual station
3) Under “Station Information” pick “Sensor Data”
4) Then pick “Temperature”
At this point you can see the individual 5-minute sensor readings as well as the calculated value for the site – as a table. You have to manually grab the data and insert it into, say, Excel… so it’s not exactly user friendly, and it would take forever to get a month’s worth of data for a single station. But it can, technically, be done.
NOAA also has a price sheet for detailed climate data. See the bottom of the page here: http://www.ncdc.noaa.gov/crn/qcdatasets.html.
Where NOAA states:

“Some 5-minute data are available in the NCDC Climate Data Online system in the Quality Controlled Local Climate Data Products for USCRN stations listed at
http://cdo.ncdc.noaa.gov/qclcd/QCLCD.
Bulk transfers of 5-minute data for research purposes are best handled by NCDC Customer Service at:
http://www.ncdc.noaa.gov/oa/about/ncdcordering.html.
USCRN contacts can also help direct persons with special requests; contacts are listed at the bottom link on the blue navigation bar to the left.”

Obviously the data would be a gold mine for those of us interested in TOB issues and any bias that the use of a daily Tmax and Tmin would have (in comparison to simply using the average of the five-minute readings). But the price of a full year’s worth of data for all stations looks to be in the range of $100 and is a bit above my level of affordability.
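For anyone who does obtain the 5-minute files, here is a minimal sketch of the Tmax/Tmin-versus-true-mean comparison described above. It is not NOAA’s method; it assumes one station’s readings have already been exported to a CSV with hypothetical column names “timestamp” and “temp_c”:

```python
import pandas as pd

# Hypothetical export of one station's 5-minute readings: the file name and the
# columns "timestamp" (local standard time) and "temp_c" are assumed, not NOAA's.
df = pd.read_csv("uscrn_5min_one_station.csv", parse_dates=["timestamp"])
df["date"] = df["timestamp"].dt.date

daily = df.groupby("date").agg(
    t_max=("temp_c", "max"),        # warmest 5-minute average of the day
    t_min=("temp_c", "min"),        # coolest 5-minute average of the day
    true_mean=("temp_c", "mean"),   # mean of all 5-minute readings
)
daily["minmax_mean"] = (daily["t_max"] + daily["t_min"]) / 2.0
daily["bias"] = daily["minmax_mean"] - daily["true_mean"]

# Positive bias means the traditional (Tmax+Tmin)/2 daily mean runs warm
# relative to the full-resolution daily mean for this station.
print(daily["bias"].describe())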
Regards, Kforestcat

August 14, 2012 7:59 pm

Reblogged this on The GOLDEN RULE and commented:
Keeping an eye on another global scam, that of introducing political and social control on the pretext of Catastrophic Anthropogenic Global Warming. As we have come to expect, valuable information from WUWT proving the lack of scientific evidence for the introduction of carbon controls and taxes.

GaryM
August 14, 2012 10:44 pm

Several days late to this thread, so perhaps no one will see this comment, but I have a question. The comparison between the CRN and USHCN networks seems to be to determine the accuracy of the average reported by NOAA, as compared to the “real” average temperature of the CONUS. The point being that the newer, better sited CRN sites better reflect “true” temperatures needing no adjustments.
Nick Stokes then complained that the CRN sites were at a higher average altitude than the USHCN sites, making the comparison invalid without adjustment for altitude. But, again, it seems the whole point of the comparison is to determine the accuracy of the USHCN sites against real temps.
As a later commenter pointed out, the U.S. Geological Survey estimates the mean U.S. elevation at 2500 feet, while Nick Stokes claims the average elevation of CRN sites is 2263 feet, and the USHCN sites’ average elevation is 1,681 feet.
Wouldn’t that mean that, based on Stokes’ analysis, the CRN sites overstate the real average temperature since they are on average 237 feet below the real mean elevation of the U.S.? And wouldn’t that also mean that the USHCN sites overstate temperature even more since they are on average 819 feet below the mean elevation of the U.S.?
Not only are the CRN stations better sited to avoid urbanization and other influences, but it seems to me they are much better sited as far as average elevation as well. I think Nick Stokes’ argument makes this article an even stronger critique of the NOAA announcement, whatever the full method of their calculation.
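To put rough numbers on the elevation argument (a back-of-envelope sketch only, not from the post), one can apply a standard environmental lapse rate of roughly 3.6 °F per 1000 ft (6.5 °C/km) to the elevation gaps quoted above:

```python
# Back-of-envelope only: convert the quoted elevation gaps into an implied warm
# bias, assuming a uniform lapse rate of ~3.6 °F per 1000 ft (6.5 °C/km).
LAPSE_F_PER_1000FT = 3.6

mean_us_elev_ft = 2500   # USGS mean U.S. elevation cited in the comment
network_elev_ft = {
    "USCRN": 2263,       # Nick Stokes' figure for the CRN sites
    "USHCN": 1681,       # Nick Stokes' figure for the USHCN sites
}

for name, elev in network_elev_ft.items():
    warm_bias = (mean_us_elev_ft - elev) / 1000.0 * LAPSE_F_PER_1000FT
    print(f"{name}: {mean_us_elev_ft - elev} ft below the mean elevation, "
          f"implying roughly {warm_bias:.1f} °F of warm bias")
```

On that crude assumption the CRN sites would read on the order of 0.9 °F warmer, and the USHCN sites nearly 3 °F warmer, than stations at the USGS mean elevation – the direction GaryM describes.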

August 16, 2012 9:30 am

RE: your Update #3. From NOAA: The NCDC’s Climate Monitoring Branch plans to transition from the TCDD to the more modern GrDD by 2013. While this transition will not disrupt the current product stream, some variances in temperature and precipitation values may be observed throughout the data record.
For what time period will results of TCDD and GrDD overlap?
How long will TCDD remain available for study?

August 16, 2012 10:02 am

There was a subtle change in focus in the course of the main post and updates.
Anthony Watts rightly pointed out that USCRN data is seldom used by NOAA or other researchers when making “warmest ever…” pronouncements and similar record claims. Despite having a very short history, USCRN should be used to confirm or contrast statements and conclusions based upon the dirtier, more problematic, more adjusted USHCN.
The subtle change above was in the discussion of the transition from TCDD to GrDD. These are global databases. USCRN is a high-quality but non-global network; even if it is included in GrDD, it will be swamped by the more poorly sited stations. The Fenimore et al. 2011 paper linked above makes no mention of USCRN or a “Reference” Network. It does say this:

The GrDD’s initial (and more straightforward) improvement is to the underlying network, which now includes additional station records and contemporary bias adjustments (i.e., those used in the U.S. Historical Climatology Network version 2; Menne et al., 2009).

So I have to wonder if transition to a new database with new adjustments to old untrustworthy, UHI contaminated stations, is really an exercise in rearranging the deck chairs on the Titanic. When analyzing GrDD, check for holes below the waterline.

obahama
August 16, 2012 10:12 am

This from Stu Ostro’s Twitter stream: https://nes.ncdc.noaa.gov/pls/prod/f?p=100:1:4202314326918058::::P1_ARTICLE_SEARCH:360

Ty
August 20, 2012 7:38 am

Hi all. This is my first post but I’ve been reading for a while. I hope I’m not too late to the party…
Wayne’s comment about hourly temperature readings in the 30s vs daily max piqued my interest. It seems it was ultimately determined to be a non-issue but if so I don’t understand why. The USCRN data descriptions explain the sampling rate and how the data is averaged and, to some extent “passed up.”
Excerpts of relevant descriptions are below. To summarize, as I understand it, the max temp is recorded every 5 minutes from the average of samplings taken every 10 seconds. If so, then any spike that lasted at least 10 seconds would be captured and result in a higher maximum than if the sampling rate were 5 minutes or anything less frequent.
Here are the descriptions.
– – – – –
NOTE: Each of the descriptions below also includes the following note on the site, which I eliminated here for brevity: “Note: USCRN/USRCRN stations have multiple co-located temperature sensors that record independent measurements. This value is a single temperature number that is calculated from the multiple independent measurements.”
From their hourly data set:
T_CALC: Average temperature, in degrees C, during the last 5 minutes of the hour.
T_HR_AVG: Average temperature, in degrees C, during the entire hour.
T_MAX: Maximum temperature, in degrees C, during the hour. [Note] The independent measurements are the maximum for each sensor of 5-minute average temperatures measured every 10 seconds during the hour.
From their daily data set:
T_DAILY_MAX: Maximum temperature, in degrees C, during the day. [Note] The independent measurements are the maximum for each sensor of 5-minute average temperatures measured every 10 seconds during the day.
From their monthly set:
T_MONTHLY_MAX: The maximum air temperature, in degrees C, for the month. This maximum is calculated as the average of all available day-maximums. To be valid there must be less than 4 consecutive day maximums missing, and no more than 5 total day maximums missing.
– – – –
It appears that T_CALC has the data needed to compute a T_DAILY_HR_MAX based on an hourly reading (specifically, the average of 10-second readings for the last 5 minutes of each hour), and from that to compute a T_MONTHLY_HR_MAX. If it would be of any value, when time allows I want to do this and see how it compares to the monthly average they are now computing. If that would be a waste of time, I won’t bother. Can anyone let me know?
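A minimal sketch of the comparison Ty proposes, assuming one station’s hourly records have been pulled into a CSV (the file name and the local-date column “lst_date” are assumptions; T_CALC and T_MAX are the documented field names):

```python
import pandas as pd

# Hypothetical CSV of one station's hourly records. "lst_date" is an assumed
# local-date column; T_CALC and T_MAX are the fields documented above.
hourly = pd.read_csv("uscrn_hourly_one_station.csv")

by_day = hourly.groupby("lst_date").agg(
    t_daily_hr_max=("T_CALC", "max"),   # max of the 24 once-an-hour 5-minute averages
    t_daily_max=("T_MAX", "max"),       # max of all 5-minute averages in the day
)
by_day["diff"] = by_day["t_daily_max"] - by_day["t_daily_hr_max"]

# "diff" shows how much a 5-minute-resolution maximum exceeds a maximum built
# from a single reading per hour -- the effect Wayne and Ty are asking about.
print(by_day["diff"].describe())
```

The monthly comparison would follow by averaging each column over the days of the month, mirroring the T_MONTHLY_MAX definition quoted above.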

wayne
August 21, 2012 3:57 pm

Ty, I think any information in this area would be welcomed. It sounds like you might even have access to the 5-minute data, and it would be great to see what that data looked like in the late afternoon of a record day, such as Aug. 3 in Oklahoma, at either KOKC (Will Rogers Int’l) or KPWA (Wiley Post Airport). Glad to see someone sees what I seem to question. It’s not that the new sensors are incorrect; it is just that the older Stevenson-type cages may have been much slower to respond, and so didn’t register fast transient ups and downs in the temperature on hot afternoons. Those transients seem to occur on mildly gusty days, especially with cumulus clouds present. You can see the temperature vacillate when either between or underneath the clouds marking the thermal bases. Gets a bit into the vertical ‘potential temperature’ changes in the thermals. Oklahoma’s a great place for this if you’re into the sport of sailplanes. (I haven’t flown since the 80’s and really miss it; your engine *is* these temperature variances, and they are always in flux, minute by minute.)