An 'inconvenient result' – July 2012 not a record breaker according to data from the new NOAA/NCDC U.S. Climate Reference Network

I decided to do something myself that so far NOAA has refused to do: calculate a CONUS average temperature for the United States from the new ‘state of the art’ United States Climate Reference Network (USCRN). After spending millions of dollars to deploy this new network from 2002 to 2008, NOAA still reports the U.S. national average temperature from data gathered by the old one. As readers may recall, I have demonstrated that the old COOP/USHCN network used to monitor U.S. climate is a mishmash of urban, semi-urban, rural, airport, and non-airport stations, some of which are sited precariously in observers’ backyards, parking lots, near air conditioner vents, on airport tarmac, and in urban heat islands. This is backed up by the 2011 GAO report spurred by my work.

Here is today’s press release from NOAA, “State of the Climate” for July 2012 where they say:

The average temperature for the contiguous U.S. during July was 77.6°F, 3.3°F above the 20th century average, marking the hottest July and the hottest month on record for the nation. The previous warmest July for the nation was July 1936 when the average U.S. temperature was 77.4°F. The warm July temperatures contributed to a record-warm first seven months of the year and the warmest 12-month period the nation has experienced since recordkeeping began in 1895.

OK, that average temperature for the contiguous U.S. during July is easy to replicate and calculate using NOAA’s USCRN network of stations, shown below:

Map of the 114 climate stations in the USCRN, note the even distribution.

In case you aren’t familiar with this network and why it exists, let me cite NOAA/NCDC’s reasoning for its creation. From the USCRN overview page:

The U.S. Climate Reference Network (USCRN) consists of 114 stations developed, deployed, managed, and maintained by the National Oceanic and Atmospheric Administration (NOAA) in the continental United States for the express purpose of detecting the national signal of climate change. The vision of the USCRN program is to maintain a sustainable high-quality climate observation network that 50 years from now can with the highest degree of confidence answer the question: How has the climate of the nation changed over the past 50 years? These stations were designed with climate science in mind. Three independent measurements of temperature and precipitation are made at each station, insuring continuity of record and maintenance of well-calibrated and highly accurate observations. The stations are placed in pristine environments expected to be free of development for many decades. Stations are monitored and maintained to high standards, and are calibrated on an annual basis. In addition to temperature and precipitation, these stations also measure solar radiation, surface skin temperature, and surface winds, and are being expanded to include triplicate measurements of soil moisture and soil temperature at five depths, as well as atmospheric relative humidity. Experimental stations have been located in Alaska since 2002 and Hawaii since 2005, providing network experience in polar and tropical regions. Deployment of a complete 29 station USCRN network into Alaska began in 2009. This project is managed by NOAA’s National Climatic Data Center and operated in partnership with NOAA’s Atmospheric Turbulence and Diffusion Division.

So clearly, USCRN is an official effort, sanctioned, endorsed, and accepted by NOAA, and is of the highest quality possible. Here is what a typical USCRN station looks like:

USCRN Station at the Stroud Water Research Center, Avondale, PA

A few other points about the USCRN:

  • Temperature is measured with triple redundant air aspirated sensors (Platinum Resistance Thermometers) and averaged between all three sensors. The air aspirated shield exposure system is the best available.
  • Temperature is measured continuously and logged every 5 minutes, ensuring a true capture of Tmax/Tmin.
  • All stations were sited per Leroy 1999 siting specs, and are Class 1 or Class 2 stations by that siting standard. (see section 2.2.1 here of the USCRN handbook PDF)
  • The data goes through quality control, to ensure an errant sensor hasn’t biased the values, but is otherwise unchanged.
  • No stations are near any cities, nor do they have local biases of any kind that I have observed on any of my visits to them.
  • Unlike the COOP/USHCN network where they fought me tooth and nail, NOAA provided station photographs up front to prove the “pristine” nature of the siting environment.
  • All data is transmitted digitally via satellite uplink direct from the station.

So this means that:

  1. There are no observer or transcription errors to correct.
  2. There is no time of observation bias, nor need for correction of it.
  3. There is no broad scale missing data, requiring filling in data from potentially bad surrounding stations. (FILNET)
  4. There is no need for bias adjustments for equipment types, since all equipment is identical.
  5. There is no need for urbanization adjustments, since all stations are rural and well sited.
  6. Regular sensor errors are avoided by air aspiration and triple-redundant lab-grade sensors; any error detected in one sensor is identified and managed using the other two, ensuring quality data.
  7. Due to the near perfect geospatial distribution of stations in the USA, there isn’t a need for gridding to get a national average temperature.

Knowing this, I wondered why NOAA has never offered a CONUS monthly temperature from this new network. So, I decided that I’d calculate one myself.

The procedure for a CONUS monthly average temperature from USCRN:

  1. Download each station data set from here: USCRN Quality Controlled Datasets.
  2. Exclude stations that are part of the USHCN-M (modernized USHCN) or USRCRN-Lite networks, which are not part of the 114 station USCRN master set.
  3. Exclude stations that are not part of the CONUS (HI and AK).
  4. Load all July data for the 114 USCRN stations into an Excel spreadsheet, available here: CRN_CONUS_stations_July2012_V1.2
  5. Note stations with missing monthly data. There were three in July 2012: Elgin, AZ (4 missing days), Avondale, PA (5 missing days), and McClellanville, SC (7 missing days). Set their data aside to be dealt with separately.
  6. Do sums and calculate CONUS area averages from the Tmax, Tmin, Tavg and Tmean data provided for each station.
  7. Do a separate calculation to see how much difference the stations with missing/partial data make for the entire CONUS.
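As a sketch, the simple unweighted averaging in the steps above looks like this in Python. The station values here are made up for illustration; the real inputs come from the USCRN monthly files, which flag missing data with a -9999 sentinel (a convention mimicked here):

```python
# Sketch of the CONUS averaging procedure, with hypothetical station values
# standing in for the real USCRN monthly file data.

def conus_average(station_values):
    """Simple (unweighted) mean across stations, skipping missing values.

    USCRN files flag missing data with -9999; we mimic that convention here.
    """
    valid = [v for v in station_values if v != -9999]
    if not valid:
        raise ValueError("no valid station data")
    return sum(valid) / len(valid)

# Hypothetical July Tavg values (deg F) for a handful of stations,
# including one missing-data sentinel.
july_tavg = [75.1, 78.3, 72.9, -9999, 76.4]

print(round(conus_average(july_tavg), 2))
```

The same helper would be run once per column (Tmax, Tmin, Tavg, Tmean), and once with and once without the three partial-data stations to get the differences reported below.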

Here are the results:

USA Monthly Mean for July 2012: 75.72°F
(111 stations)

USA Monthly Average for July 2012: 75.51°F
(111 stations)

USA Monthly Mean for July 2012: 75.74°F
(114 stations, 3 with partial missing data; difference 0.02°F)

USA Monthly Average for July 2012: 75.55°F
(114 stations, 3 with partial missing data; difference 0.04°F)


Comparison to NOAA’s announcement today:

Using the old network, NOAA says the USA Average Temperature for July 2012 is: 77.6°F

Using the NOAA USCRN data, the USA Average Temperature for July 2012 is: 75.5°F

The new USCRN comes in 2.1°F cooler than the old, problematic network.

This puts July 2012, according to the best official climate monitoring network in the USA, at 1.9°F below the 77.4°F July 1936 average cited in today’s NOAA press release: not a record by any measure. Dr. Roy Spencer suggested earlier today that he didn’t think it was either, saying:

So, all things considered (including unresolved issues about urban heat island effects and other large corrections made to the USHCN data), I would say July was unusually warm. But the long-term integrity of the USHCN dataset depends upon so many uncertain factors, I would say it’s a stretch to call July 2012 a “record”.

This result also strongly suggests that a well-sited network of stations, as the USCRN was designed from inception to be, is free of the errors, biases, adjustments, siting issues, equipment issues, and UHI effects that plague the older COOP/USHCN network, the very mishmash of problems the new USCRN was designed to solve.

It also suggests Watts et al. 2012 is on the right track in pointing out the temperature measurement differences between stations with and without such problems. I don’t claim that my method is a perfect comparison to the older COOP/USHCN network, but my numbers come close, within the bounds of the positive temperature bias errors noted in Leroy 1999, and the more “pristine” USCRN network runs cooler in absolute monthly temperature (as would be expected), so the comparison isn’t unreasonable.

NOAA never mentions this new pristine USCRN network in any press releases on climate records or trends, nor do they calculate and display a CONUS value for it. Now we know why. The new “pristine” data it produces is just way too cool for them.

Look for a regular monthly feature using the USCRN data at WUWT. Perhaps NOAA will then be motivated to produce their own monthly CONUS Tavg values from this new network. They’ve had four years to do so since it was completed.

UPDATE: Some people questioned what is the difference between the mean and average temperature values. In the monthly data files from USCRN, there are these two values:



The mean is the monthly (max+min)/2, and the average is the average of all the daily averages.
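A toy Python example of why the two values differ: the midpoint of the max and min generally does not equal the average of all readings, because a day’s temperature curve is not symmetric. The readings below are hypothetical:

```python
# The "mean" (midpoint of max and min) vs. the "average" (mean of all
# samples) for one hypothetical set of 5-minute readings.

readings = [60, 61, 64, 70, 78, 82, 80, 72, 66, 62]

midpoint = (max(readings) + min(readings)) / 2   # "mean" in the USCRN sense
true_avg = sum(readings) / len(readings)         # "average" of all samples

print(midpoint, true_avg)  # 71.0 vs 69.5 for this made-up day
```

Summed over a month, small asymmetries like this account for the 0.2°F gap between the mean and average values reported above.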

UPDATE2: I’ve just sent this letter to NCDC – to


I apologize for not providing a proper name in the salutation, but none was given on the contact section of the referring web page.

I am attempting to replicate the CONUS  temperature average of 77.6 degrees Fahrenheit for July 2012, listed in the August 8th 2012, State of the Climate Report here:

Pursuant to that, would you please provide the following:

1. The data source of the surface temperature record used.

2. The list of stations used from that surface temperature record, including any exclusions and reasons for exclusions.

3. The method used to determine the CONUS average temperature, such as simple area average, gridded average, altitude corrections, bias corrections, etc. Essentially what I’m requesting is the method that can be used to replicate the resultant 77.6F CONUS average value.

4. A flowchart of the procedures in step 3 if available.

5. Any other information you deem relevant to the replication process.

Thank you sincerely for your consideration.

Best Regards,

Anthony Watts


Below is the response I got to the email address provided in the SOTC release, some email addresses redacted to prevent spamming.


—–Original Message—–
Date: Thursday, August 09, 2012 3:22 PM
Subject: Undeliverable: request for methods used in SOTC press release
Your message did not reach some or all of the intended recipients.
   Sent: Thu, 9 Aug 2012 15:22:43 -0700
   Subject: request for methods used in SOTC press release
The following recipient(s) could not be reached:
   Error Type: SMTP
   Error Description: No mail servers appear to exists for the recipients address.
   Additional information: Please check that you have not misspelled the recipients email address.


UPDATE3: 8/10/2012. This may put to rest the issue of straight averaging vs. some corrected method. From

It seems they are using TCDD (simple average) still. I’ve sent an email to verify…hopefully they get it.

Traditional Climate Divisional Database

Traditionally, the monthly values for all of the Cooperative Observer Network (COOP) stations in each division have been averaged to compute divisional monthly temperature and precipitation averages/totals. This is valid for values computed from 1931 to the present. For the 1895-1930 period, statewide values were computed directly from stations within each state. Divisional values for this early period were computed using a regression technique against the statewide values (Guttman and Quayle, 1996). These values make up the traditional climate division database (TCDD).

Gridded Divisional Database

The GHCN-D 5km gridded divisional dataset (GrDD) is based on a similar station inventory as the TCDD; however, new methodologies are used to compute temperature, precipitation, and drought for United States climate divisions. These new methodologies include the transition to a grid-based calculation, the inclusion of many more stations from the pre-1930s, and the use of NCDC’s modern array of quality control algorithms. These are expected to improve the data coverage and the quality of the dataset, while maintaining the current product stream.

The GrDD is designed to address the following general issues inherent in the TCDD:

  1. For the TCDD, each divisional value from 1931-present is simply the arithmetic average of the station data within it, a computational practice that results in a bias when a division is spatially undersampled in a month (e.g., because some stations did not report) or is climatologically inhomogeneous in general (e.g., due to large variations in topography).
  2. For the TCDD, all divisional values before 1931 stem from state averages published by the U.S. Department of Agriculture (USDA) rather than from actual station observations, producing an artificial discontinuity in both the mean and variance for 1895-1930 (Guttman and Quayle, 1996).
  3. In the TCDD, many divisions experienced a systematic change in average station location and elevation during the 20th Century, resulting in spurious historical trends in some regions (Keim et al., 2003; Keim et al., 2005; Allard et al., 2009).
  4. Finally, none of the TCDD’s station-based temperature records contain adjustments for historical changes in observation time, station location, or temperature instrumentation, inhomogeneities which further bias temporal trends (Peterson et al., 1998).

The GrDD’s initial (and more straightforward) improvement is to the underlying network, which now includes additional station records and contemporary bias adjustments (i.e., those used in the U.S. Historical Climatology Network version 2; Menne et al., 2009).

The second (and far more extensive) improvement is to the computational methodology, which now addresses topographic and network variability via climatologically aided interpolation (Willmott and Robeson, 1995). The outcome of these improvements is a new divisional dataset that maintains the strengths of its predecessor while providing more robust estimates of areal averages and long-term trends.

The NCDC’s Climate Monitoring Branch plans to transition from the TCDD to the more modern GrDD by 2013. While this transition will not disrupt the current product stream, some variances in temperature and precipitation values may be observed throughout the data record. For example, in general, climate divisions with extensive topography above the average station elevation will reflect a cooler climatology. A preliminary assessment of the major impacts of this transition can be found in Fenimore et al., 2011.
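Issue 1 in the NCDC list, the undersampling bias in a simple arithmetic station average, is easy to demonstrate with a toy Python example (all values hypothetical):

```python
# Toy example of the TCDD undersampling bias: a division's arithmetic
# station average shifts when a cool high-elevation station fails to
# report, even though the climate hasn't changed.

def division_avg(stations):
    return sum(stations) / len(stations)

full_month = [75.0, 76.0, 74.0, 60.0]   # three valley stations + one mountain station
short_month = [75.0, 76.0, 74.0]        # same division, mountain station missing

print(division_avg(full_month))    # 71.25
print(division_avg(short_month))   # 75.0 -- a +3.75 deg jump from sampling alone
```

A grid-based calculation avoids this by interpolating to fixed grid points rather than averaging whatever stations happened to report that month.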


Duncan B (UK)

Love that ‘way too cool!’ sign off Anthony.
Must be frustrating not to show off their shiny new toys.

Earle Williams

I’m curious how the USCRN July 2012 mean or average would compare to the Class 1 & 2 stations in USHCN. There could be some thermo spatial biasing if the numbers and distributions aren’t similar.


In the next round of adjustments, they will just adjust everything in all the other datasets down by 2.1 degrees.
Problem solved


I wonder what we would do if we didn’t have dedicated people like Anthony and others to inform the public what really is happening out there. What is so disturbing is that the media doesn’t ask the same questions but instead just accepts what is spoonfed to them. Two years ago I accepted all information unconditionally that was distributed by organisations like NOAA, I trusted them like you would with your own doctor. Now we all need second opinions and WUWT is doing just that and more.

Ray Boorman

Brilliant work Anthony, how long till we hear complaints from the Team about you misusing/misunderstanding the data?
REPLY: About 30 minutes, “team” member Nick Stokes has already got his knickers in a twist, see below. – Anthony

So they have ignored Anthony’s excellent research because, presumably it does not fit in with their preconceived notions of AGW.
An anecdotal story here: Our house in the UK has a North-South orientation, we have a small weather station with the external temperature sensor permanently in the shade at the front of the house (facing North). Our house in Spain has an East-West orientation, our weather station there has the external temperature sensor permanently in the shade in a window recess in the West facing back yard. The back yard has terracotta tiles and in the summer it is impossible to walk on theses tiles in bare feet as they get too hot.
Any day in the UK the temperature (usually) rises slowly as the day progresses and falls slowly as the sun sinks and finally sets maximum temperature is usually about 16:00 hours. In Spain in the summer, the temperature as picked up by the sensor rises steadily until about 14:00 it then can rise by 7-10 degrees Celsius in the next 3-4 hours as the terracotta tiles heat up by the radiant heat from the almost overhead sun. Common sense tells me that this is an artificial rise, and it certainly feels a lot cooler in the grassed pool area across the road in the shade of the trees.
Hopefully, as more people read Anthony’ research, the realisation that the incorrect siting of weather centres is the main reason for temperature increases globally.


Could I ask what you define as “mean” and “average”? Mathematically, I thought they were the same. Perhaps you are referring to a “mid-range”? Refer to :

This is comparing the absolute temperatures of two different networks. You don’t know whether, for example, the USCRN network has relatively more high altitude stations. And while you say that USCRN has “near perfect geospatial distribution” (measured?), the validity of the comparison will depend on whether the old network is comparably distributed.
That’s why climate scientists generally prefer to deal with anomalies. Otherwise you can’t compare across networks.
REPLY: And anomalies are in the eye of the beholder, pick your own baseline, like Hansen does. Record temperatures for single months reported to the public, as NOAA did today, are not useful when reported as anomalies. The fact is that a better sited and maintained network shows a cooler result. And, that network wouldn’t exist if NOAA didn’t realize what a mess the old network was in. I know you’d just prefer it if this post would go away, so you can continue in that warm cocoon of institutionalized thinking you live in, but it isn’t going to happen.
If NOAA has issues with the presentation, let them put out a CONUS monthly value, my bet is that they won’t. They’ve had 4 years to do it since the network was completed, and they are still sitting on their butts citing the old multi-mangled COOP network data. – Anthony

Look for a regular monthly feature using the USCRN data at WUWT.

If you do that, how much you want to bet they either cut off access to the data or begin to “adjust” it for some reason?

Bob Koss

There don’t seem to be any closely paired stations west of eastern Montana. Do you average the temperatures from the individual sets of paired stations? If not, it is probably the thing to do before calculating the CONUS temperature. Or, if the pairs are reliably similar, eliminate one of each pair before calculating. It likely won’t make much difference, but would obviate complaints about areas containing paired stations being given undue weight.
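Bob Koss’s pair-collapsing suggestion can be sketched in a few lines (the station IDs and temperatures are hypothetical):

```python
# Collapse each pair of co-located stations to a single value before taking
# the overall average, so paired sites don't get double weight.

def collapse_pairs(values, pairs):
    """values: station id -> temperature; pairs: list of (a, b) paired ids."""
    out = dict(values)
    for a, b in pairs:
        out[a] = (values[a] + values[b]) / 2  # one value for the pair
        del out[b]
    return out

temps = {"A1": 74.0, "A2": 74.4, "B": 78.0, "C": 71.0}  # A1/A2 are a pair
collapsed = collapse_pairs(temps, [("A1", "A2")])

unweighted = sum(temps.values()) / len(temps)
pair_aware = sum(collapsed.values()) / len(collapsed)
print(unweighted, pair_aware)
```

As the comment predicts, the two results differ only slightly when the paired stations read nearly the same, but collapsing first removes the objection about double-counted areas.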

James McCauley

Proud to be a WUWTer! This post will be shared with my Senators R. Portman and (aah-hemm) Sherrod Brown, etc. Someone needs to forward to Inhofe, et al.


The first defender of indefensible has already made his appearance with his usual obscure defences. Anthony is talking about what NOAA stated as the ” temperature ” and not anomaly. They stated that ” temperature ” from the old network and kept quiet about their own modern network which shows temperatures over 2 degrees F cooler, got it? Of course you know it very well and are wilfully obfuscating, true to form.
We wait next for Mosher to come with ” Hmm, do this calculation vs. that calculation and use this dataset and see the results ” kind of fly by snark.
REPLY: You can bet on it. Let NOAA put out a monthly CONUS Tavg value then if they don’t like what I’m doing. – Anthony

The Ghost Of Big Jim Cooley

Just wanted to echo Andrew and Ray, above. Without people like Anthony and Willis (and many others) we’d have no idea of the truth. Just a small thank you for dedicating your time to showing the actuality, rather than all the underhandedness that goes on in ‘science’ now (although it actually always has!). I love science, but it appears often that it is only a little above religion when it comes to what is real. The people who practice the ‘three-card-tricks’ in climate science don’t seem to understand the damage they are doing to science itself. It’s actually very sad.


How do you know that the 1936 temperature was measured in rural areas? How do you know that it is comparable to your rural area calculations?
REPLY: The US was far less urbanized in 1936, far less population, far less airports, and most airports were small affairs rather than the big concrete megaplexes of today. Read your history, ORD for example. – Anthony

Apparently using the old weather system allows them to make the usual claim of CAGW. It truly is a sad situation where a system set up specifically to monitor the temperature accurately is not used. One does wonder if this is an act of manipulation, intentionally instigated to promote the Global Warming hysteria. This is becoming extremely frustrating, as we can no longer rely on the Government specialized departments to demonstrate any standards or impartiality.

Peter Miller

Reliable data is as rare as rocking horse poo in today´s climate science, while unreliable, or adjusted, ´homogenised´, cherry picked data is the norm.
This is a classic instance of what happens when you compare reliable data with unreliable data – the scary, supposed problem of global warming goes away. But what would happen to all that funding if the scary supposed problem disappeared?
Answer: It would disappear too.
Solution: Keep using the unreliable data.
I know it would be a lot of work and would probably produce approximately the same result as previously, but would it be possible to apply some sort of area-of-influence weighting around each station? Some kind of polygonal shape weighting.


The BBC are covering this in an unattributed story with no right of reply, I wonder why? Possibly because they know it’s a bit flakey?

It’s difficult to understand what is an average temperature: if you put one hand in a bucket of hot water and the other one in ice cold water will an average temperature mean anything?
If you mix 1 kg of boiling water with 1 kg of ice cold water you may expect a resulting temperature of 50 °C. But you have to mix it. So far, we cannot mix Houston with Minneapolis (anyway, you can’t mess with Texas)!
If a measuring station is at sea level (Tampa, FL) and another one at an altitude of one mile (Denver, CO) what is the signification of temperature averaged between these two?
Temperature anomalies (the difference between a measured temperature and a long-term average for the same station) are understandable, and these differences can be treated as a cohort for statistical analysis. That’s what we see on all hockey stick or flat diagrams.

Probably the best question to ask is this – how does NOAA calculate the area average absolute temperatures (not anomalies) for the CONUS they use in those press releases, combining that mishmash of dissimilar COOP stations? As far as I can tell they have not published the exact method they use for those. – Anthony

Alex Heyworth

Further to your reply to Esko, Anthony, there is a reason why they used to be called airfields (in Anglo usage, at any rate – I don’t know about US usage for the 30s and 40s). Remember all that footage of Spitfires bumping over the turf?

Policy Guy

Nothing gives me greater pleasure than seeing an unconnected third party produce results from their own top notch system that contradicts their published results. What idiocy do they operate under? Are they immune to scientific data inquiry? This begs the question, are they supposed to support Hansen regardless of their superior data – so they have to ignore better data in favor of massaged data, but not tell anyone that is what they are doing?
So tell the story correctly. You have an army behind you.

Climate Refugee

With global temperature anomaly barely positive and US “extremely hot”, what would the global temp anomaly be with US temp anomaly?


I realise that Hawaii and Alaska are part of the USA, but surely when talking about a US temperature these should be excluded as they are completely different to the rest of the US. It would be like including the Falkland Islands in the temperature of England.
REPLY: They are – see the steps- Anthony

Policy Guy

Excuse me,
why are we using the acronym PNAS at all? This was previously stated much earlier. It’s offensive. Is this on purpose?


Ah…my mistake. I missed that in the steps you listed. Ignore my last comment.

Allan MacRae
Heads of Departments,
Proceedings of the National Academy of Sciences (PNAS)
Dear PNAS Heads:
UAH Global Temperature Update for July 2012: +0.28C,
COOLER than June, 2012: +0.37 deg.
If one wants to argue about GLOBAL warming, should one not look first at GLOBAL temperatures?
Respectfully, Allan

Vince Causey

Is Nick Stokes suggesting that the new stations are at a higher altitude than the old stations, or is he just grasping at straws? Nick, if you have analysed the altitudes of every station and compared them with the old, then you may have some grounds for complaint. Have you?


NOAA never mentions this new pristine USCRN network in any press releases on climate records or trends, nor do they calculate and display a CONUS value for it. Now we know why.

You can bet your bottom dollar they looked at it. And if it showed a record you can bet your bottom dollar they would have announced it as coming from an untainted, pristine source. This is one of the reasons why sceptics should exist. Never take anyone’s word for it, as the Royal Society’s motto says.

Allan MacRae said @ August 9, 2012 at 12:55 am

Dear PNAS Heads:

Was that deliberate? I fell off my chair laughing 🙂
Nice work BTW Anthony. Yer blood’s worth bottling…

Allan MacRae

OT but worthy of discussion:
[snip – sure, but on another thread]

In reply to Nick – I think the reason why anomalies were picked was to ‘escape’ the degree of error encountered in earlier measurements. In other words, the difference between two temperatures over time could be considered more accurate than the absolute.
Given that, logically, when a reliable absolute temperature measurement network becomes available – ALL BETS ARE OFF. The walk on the ‘wild side’ of historical temperature analysis and all that implies with adjustments, tweaks and tunes is finished. The new network seems to be of sufficient quality to warrant a ‘reset’ in thinking on temperature analysis in the US.
The question should be more one of why there is such a large discrepancy between directly measured reality and the adjusted historical temperature record: why has it strayed so far from reality as to flip from a record high to a probable low in comparison?

Ian of Fremantle Australia

Nick Stokes @11.57 pm If as you state “That’s why climate scientists generally prefer to deal with anomalies” why doesn’t the NOAA report anomalies rather than or of course as well as, absolute temperatures and why don’t they use the CONUS network? The sceptics will argue that the CONUS network isn’t giving such high temperatures. Surely NOAA could easily refute that by publishing the CONUS results. So why don’t they? Can’t you see that by not doing so this erodes the credibility of the CAGW proponents?


I just wanted to say, I love redundancy. Also, great blog, I rarely comment, but I am lurking and enjoying it, you have great respect from me.


British Television – last night, Ch4 news was reporting from the parched USA and it did say something to the effect of: “hottest July ever”.
They must have a direct line to God/NOAA


Sadly, this topic will never get the attention it needs in any of the msm.


Anthony, why not write to the NOAA and ask them point blank why they are not using the USCRN data in their press releases? Put your letter/email on the public record (WUWT) and let’s see what they can come up with. They may well have a sound rationale to justify what they’re doing. On the other hand, they may not ….

Bloke down the pub

I’m glad to see their investment in the new kit is finally being put to good use, though I suspect they’re wishing they hadn’t started.
When I first saw the location of the site you illustrated at the start of this post, I thought it was the Stroudwater that is near me, but the only thing of interest there is the canal. See here


keith: “The walk on the ‘wild side’ of historical temperature analysis and all that implies with adjustments, tweaks and tunes is finished.”
I think you rather optimistically underestimate ingenuity. From the recent TOBS discussion we’ve learned that the climate folks haven’t been able to read a thermometer and sort out the difference between the midpoint of a range and an average for nearly four decades. And that’s completely aside the notion that the atmosphere is an active ‘heating’ source based on albedo corrected black-body models of the Earth as a lightbulb. That is, a hollow sphere completely enclosing the sun at a distance of 2AU. And this rather than anything trivially or even in the neighborhood of correct by modelling the average temperature as lit by the sun from, you know, the side at a value 86K greater for the irradiated hemisphere on a tidally locked sphere.
I’d love to join you in your enthusiasm, but if the entire field cannot sort out basic mathematics or how to read numbers off a dial for these many decades? I’ll lay my bets on the continued success of NOAA keeping two sets of books.


The question should be more one of why there is such a large discrepancy between directly measured reality and the adjusted historical temperature record: why has it strayed so far from reality as to flip from a record high to a probable low in comparison?

Yeah. Maybe because:
“An avalanche of answers must be found too fast.”
“To ask the question is to know the answer.”


Another question: Why wasn’t something like this set up 25 years ago, not five years ago? Since the AGW was a Big Deal by then, and the cost of accurate ground monitoring was peanuts in comparison to the cost of satellite monitoring, why not spend a penny? It would have been useful just for calibrating the satellites. (Hmmm … maybe those satellites should now be re-calibrated downward now.)
Possibly because the Team wasn’t interested in ground truth. (There’s a good title for future threads on the topic of this new network.)


Another question. Why haven’t countries in Europe established a completely automated setup like this either? A little funding from the EU would have done the trick. Same reason?

Nick Kermode

Is it possible to compare temperature measurements from the 1930’s to measurements from stations sited and online from 2002? Is any homogenization necessary?

Vince Causey says: August 9, 2012 at 1:14 am
“Is Nick Stokes suggesting that the new stations are at a higher altitude than the old stations, or is he just grasping at straws? Nick, if you have analysed the altitudes of every station and compared them with the old, then you may have some grounds for complaint. Have you?”

I hadn’t. It’s usually something you’d need to do before comparing the mean of two sets of stations. But since you asked, I did.
The mean altitude of the CRN stations was 2223 ft, or 667.6 m.
The mean altitude of the USHCN stations in “ushcn-v2-stations.txt” (all USHCN) was 1681 ft, or 512 m.
The difference, 155.6 m, at a lapse rate of 6 °C/km, is 0.93°C, or 1.7°F. Very close to Anthony’s 2.1, and due just to the altitude difference.

You are assuming that they are only using USHCN stations for their press release national average temperature value. They actually provide no reference to the method. As far as I can tell, they could be using a mixture of USHCN and GHCN stations, or parts of, or the entire COOP network. They don’t specify how they come up with the number. They don’t show any source, data, or methods.
The fact that they don’t use this network at all for any public advisement is the most telling. History has shown us that when the warmist crowd has a warmer result, they trumpet it as proof of AGW. The fact that we’ve not seen any references to the USCRN in releases suggests they don’t favor the result.
I considered applying a lapse rate calculation. The problem is that, depending on humidity, either the moist adiabatic lapse rate (1.5°C/1,000 ft) or the dry adiabatic lapse rate (3.0°C/1,000 ft) applies. Some days, depending on synoptic conditions, it might be moist; others it might be dry. To do it correctly, you’d have to link in the vagaries of weather for each station.
If NOAA publishes their method for calculation, I can then replicate it. As far as I can tell, they have not applied a lapse rate to the stations, but OTOH there’s no evidence either way, since they don’t publish their method. – Anthony
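The altitude arithmetic in this exchange is easy to replicate. A back-of-the-envelope sketch in Python, using the mean altitudes quoted above (the function names are illustrative, and the fixed 6 °C/km environmental rate is the simplification the reply warns about; a real correction would need per-station, per-day humidity to pick moist vs dry adiabatic rates):

```python
# Back-of-the-envelope altitude correction between two station networks.
# The altitudes (in feet) and the 6 C/km lapse rate are the illustrative
# figures quoted in the thread; this is a sketch, not NOAA's method.

FT_TO_M = 0.3048

def altitude_correction_c(alt_high_ft, alt_low_ft, lapse_c_per_km=6.0):
    """Temperature offset in deg C implied by an altitude difference
    at a fixed lapse rate."""
    diff_m = (alt_high_ft - alt_low_ft) * FT_TO_M
    return diff_m / 1000.0 * lapse_c_per_km

def c_to_f_delta(delta_c):
    """Convert a temperature *difference* (not an absolute temperature)
    from Celsius to Fahrenheit."""
    return delta_c * 9.0 / 5.0

# Mean altitudes quoted above: USCRN ~2223 ft, USHCN ~1681 ft
delta_c = altitude_correction_c(2223, 1681)
print(round(delta_c, 2), "C =", round(c_to_f_delta(delta_c), 2), "F")
```

Run as-is, the quoted feet values give roughly a 1 °C (about 1.8 °F) offset from altitude alone, in the same ballpark as the ~0.93 °C / 1.7 °F figures computed from the metre values in the comment above.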


Priceless – all the hard work is reaping just rewards!
Modest contribution is in the tip jar …


“Esko says:
How do you know that the year 1936 temperature was measured in rural areas? How do you know that it is comparable to your rural area calculations?”
In addition to Anthony’s excellent reply, let me add that you miss the point. I dare say nobody knows how to correct the 1936 data for any imperfections of measurement and siting, but Anthony’s work is sufficient in itself to prove (and I mean prove) that comparison of the shonky 2012 data set with the 1936 data set does NOT securely demonstrate a record high temperature for July.

Is not “mean” the same as “average”? Is one of the terms meant to be “median” or mid-range?


■ Temperature is measured continuously and logged every 5 minutes, ensuring a true capture of Tmax/Tmin
That is why it is hotter in 2012 than in the 1930s: they were not measuring Tmax every five minutes in the ’30s. Since June 22nd I have downloaded the Oklahoma City hourly records daily, and the highest hourly reading was never what was recorded as the maximum for the day; the recorded maximum was consistently two degrees Fahrenheit greater than that of the highest HOUR. Evidently they now count 5-minute mini-microbursts of heat instead. I guess hourly averages are not even hot enough for them (yeah, blame it on CO2). That, by itself, invalidates all records being recorded today to me, and I don’t care how sophisticated their instruments are: the recording methods themselves have changed, and anyone can see it in the “3-Day Climate History” hourly readouts given for every city on their pages. Don’t believe me? See for yourself what is going on in the maximums. Minimums rarely show this effect, since cold is the absence of thermal energy rather than energy that can spike upward for a few minutes the way heat readings can.
You’re a meteorologist, how do you see this discrepancy?


Dodgy Geezer

“.. Never take anyone’s word for it, as the Royal Society’s motto says…”
Though, famously, that is NOT what the actual Royal Society now says. Their motto may say that, but the Royal Society itself says ‘Respect the Data’ (in other words, do not seek to investigate what we have told you is true)…

I am a little confused. What is the difference between:
USA Monthly Mean
USA Monthly Average
Mean and average are synonyms. If you are using each to distinguish something, it would be nice if you defined how you’re using the terms.
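Since the thread keeps circling mean, average, median and mid-range, here is a minimal sketch (with made-up hourly readings) of how the three statistics differ on the same data. The likely source of the confusion is that the climatological daily “mean” temperature is conventionally the mid-range, (Tmax + Tmin)/2, rather than a true arithmetic average of all readings:

```python
# Mean vs median vs mid-range on the same made-up set of hourly
# temperatures (deg F). The climatological daily "mean" is usually
# the mid-range of Tmax and Tmin, not the average of all readings.
temps = [68, 70, 74, 81, 88, 93, 95, 91, 84, 77, 72, 69]

mean = sum(temps) / len(temps)                 # arithmetic average
srt = sorted(temps)
n = len(srt)
median = (srt[n // 2 - 1] + srt[n // 2]) / 2   # even count: middle pair
mid_range = (max(temps) + min(temps)) / 2      # (Tmax + Tmin) / 2

print(mean, median, mid_range)                 # three different numbers
```

On this invented day the three values come out roughly 80.2, 79 and 81.5 °F respectively, which is why it matters which one a “USA Monthly Mean” actually reports.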


Since it appears NOAA is “political”, one of the committees in Congress should hold a hearing on this very subject: why, after a lot of tax money has been spent on a “pristine” network, NOAA chooses not to inform the people who paid for it.