I decided to do something myself that so far NOAA has refused to do: compute a CONUS average temperature for the United States from the new ‘state of the art’ United States Climate Reference Network (USCRN). After spending millions of dollars to install this new network from 2002 to 2008, they still report data from the old one when they give a U.S. national average temperature. As readers may recall, I have demonstrated that the old COOP/USHCN network used to monitor U.S. climate is a mishmash of urban, semi-urban, rural, airport and non-airport stations, some of which are sited precariously in observers’ backyards, in parking lots, near air conditioner vents, on airport tarmac, and in urban heat islands. This is backed up by the 2011 GAO report spurred by my work.
Here is today’s press release from NOAA, “State of the Climate” for July 2012 where they say:
The average temperature for the contiguous U.S. during July was 77.6°F, 3.3°F above the 20th century average, marking the hottest July and the hottest month on record for the nation. The previous warmest July for the nation was July 1936 when the average U.S. temperature was 77.4°F. The warm July temperatures contributed to a record-warm first seven months of the year and the warmest 12-month period the nation has experienced since recordkeeping began in 1895.
OK, that average temperature for the contiguous U.S. during July is easy to replicate and calculate using NOAA’s USCRN network of stations, shown below:
![crn_map[1]](http://wattsupwiththat.files.wordpress.com/2012/08/crn_map1.jpg)
The U.S. Climate Reference Network (USCRN) consists of 114 stations developed, deployed, managed, and maintained by the National Oceanic and Atmospheric Administration (NOAA) in the continental United States for the express purpose of detecting the national signal of climate change. The vision of the USCRN program is to maintain a sustainable high-quality climate observation network that 50 years from now can with the highest degree of confidence answer the question: How has the climate of the nation changed over the past 50 years? These stations were designed with climate science in mind. Three independent measurements of temperature and precipitation are made at each station, insuring continuity of record and maintenance of well-calibrated and highly accurate observations. The stations are placed in pristine environments expected to be free of development for many decades. Stations are monitored and maintained to high standards, and are calibrated on an annual basis. In addition to temperature and precipitation, these stations also measure solar radiation, surface skin temperature, and surface winds, and are being expanded to include triplicate measurements of soil moisture and soil temperature at five depths, as well as atmospheric relative humidity. Experimental stations have been located in Alaska since 2002 and Hawaii since 2005, providing network experience in polar and tropical regions. Deployment of a complete 29 station USCRN network into Alaska began in 2009. This project is managed by NOAA’s National Climatic Data Center and operated in partnership with NOAA’s Atmospheric Turbulence and Diffusion Division.
So clearly, USCRN is an official effort, sanctioned, endorsed, and accepted by NOAA, and is of the highest quality possible. Here is what a typical USCRN station looks like:

A few other points about the USCRN:
- Temperature is measured with triple redundant air aspirated sensors (Platinum Resistance Thermometers) and averaged between all three sensors. The air aspirated shield exposure system is the best available.
- Temperature is measured continuously and logged every 5 minutes, ensuring a true capture of Tmax/Tmin
- All stations were sited per Leroy 1999 siting specs, and are Class 1 or Class 2 stations by that siting standard. (See section 2.2.1 of the USCRN handbook PDF.)
- The data goes through quality control, to ensure an errant sensor hasn’t biased the values, but is otherwise unchanged.
- No stations are near any cities, nor do they have local biases of any kind that I have observed on any of my visits to them.
- Unlike the COOP/USHCN network where they fought me tooth and nail, NOAA provided station photographs up front to prove the “pristine” nature of the siting environment.
- All data is transmitted digitally via satellite uplink direct from the station.
So this means that:
- There are no observer or transcription errors to correct.
- There is no time of observation bias, nor need for correction of it.
- There is no broad scale missing data, requiring filling in data from potentially bad surrounding stations. (FILNET)
- There is no need for bias adjustments for equipment types, since all equipment is identical.
- There is no need for urbanization adjustments, since all stations are rural and well sited.
- There are no regular sensor errors, thanks to air aspiration and triple redundant lab grade sensors. Any error detected in one sensor is identified and checked against the other two, ensuring quality data.
- Due to the near perfect geospatial distribution of stations in the USA, there isn’t a need for gridding to get a national average temperature.
Knowing this, I wondered why NOAA has never offered a CONUS monthly temperature from this new network. So, I decided that I’d calculate one myself.
The procedure for a CONUS monthly average temperature from USCRN:
- Download each station data set from here: USCRN Quality Controlled Datasets.
- Exclude stations that are part of the USHCN-M (modernized USHCN) or USRCRN-Lite networks, which are not part of the 114-station USCRN master set.
- Exclude stations that are not part of the CONUS (HI and AK)
- Load all July USCRN 114 station data into an Excel Spreadsheet, available here: CRN_CONUS_stations_July2012_V1.2
- Note stations that have missing monthly data. There were three in July 2012: Elgin, AZ (4 missing days); Avondale, PA (5 missing days); and McClellanville, SC (7 missing days). Set their data aside to be dealt with separately.
- Do sums and calculate CONUS area averages from the Tmax, Tmin, Tavg and Tmean data provided for each station.
- Do a separate calculation to see how much difference the stations with missing/partial data make for the entire CONUS.
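For readers who want to check the arithmetic, the per-station averaging in the steps above can be sketched in a few lines of Python. The station records, field names, and temperatures below are illustrative only, not the actual USCRN file layout:

```python
# Minimal sketch of the simple (unweighted) CONUS averaging described above.
# Station names and temperatures are made up for illustration; the real
# USCRN monthly files use a fixed-width column format.

def conus_average(stations, field):
    """Arithmetic average of one field across stations, skipping missing data."""
    values = [s[field] for s in stations if s[field] is not None]
    return sum(values) / len(values)

stations = [
    {"name": "Station A", "t_mean": 75.1, "t_avg": 74.9},
    {"name": "Station B", "t_mean": 78.3, "t_avg": 78.0},
    {"name": "Station C", "t_mean": None, "t_avg": None},  # missing month: set aside
]

print(round(conus_average(stations, "t_mean"), 2))  # 76.7
print(round(conus_average(stations, "t_avg"), 2))   # 76.45
```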
Here are the results:
USA Monthly Mean for July 2012: 75.72°F
(111 stations)
USA Monthly Average for July 2012: 75.51°F
(111 stations)
USA Monthly Mean for July 2012: 75.74°F
(114 stations, 3 w/ partial missing data, difference 0.02)
USA Monthly Average for July 2012: 75.55°F
(114 stations, 3 w/ partial missing data, difference 0.04)
============================
Comparison to NOAA’s announcement today:
Using the old network, NOAA says the USA Average Temperature for July 2012 is: 77.6°F
Using the NOAA USCRN data, the USA Average Temperature for July 2012 is: 75.5°F
The new USCRN reads 2.1°F cooler than the old, problematic network.
This puts July 2012, according to the best official climate monitoring network in the USA, at 1.9°F below the 77.4°F July 1936 USA average temperature cited in today’s NOAA press release: not a record by any measure. Dr. Roy Spencer suggested earlier today that he didn’t think so either, saying:
So, all things considered (including unresolved issues about urban heat island effects and other large corrections made to the USHCN data), I would say July was unusually warm. But the long-term integrity of the USHCN dataset depends upon so many uncertain factors, I would say it’s a stretch to call July 2012 a “record”.
This result also strongly suggests that a well sited network of stations, as the USCRN was designed from inception to be, is free of the errors, biases, adjustments, siting issues, equipment issues, and UHI effects that plague the older COOP/USHCN network, the very mishmash of problems the new USCRN was designed to solve.
It suggests Watts et al. 2012 is on the right track in pointing out the temperature measurement differences between stations with and without such problems. I don’t claim that my method is a perfect comparison to the older COOP/USHCN network. But the fact that my numbers come close, within the bounds of the positive temperature bias errors noted in Leroy 1999, and that the more “pristine” USCRN network is cooler in absolute monthly temperatures (as would be expected), suggests my numbers aren’t an unreasonable comparison.
NOAA never mentions this new pristine USCRN network in any press releases on climate records or trends, nor do they calculate and display a CONUS value for it. Now we know why. The new “pristine” data it produces is just way too cool for them.
Look for a regular monthly feature using the USCRN data at WUWT. Perhaps NOAA will then be motivated to produce their own monthly CONUS Tavg values from this new network. They’ve had four years to do so since it was completed.
UPDATE: Some people questioned what is the difference between the mean and average temperature values. In the monthly data files from USCRN, there are these two values:
T_MONTHLY_MEAN
T_MONTHLY_AVG
http://www.ncdc.noaa.gov/crn/qcdatasets.html
The mean is the monthly (max+min)/2, and the average is the average of all the daily averages.
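The distinction can be shown with a toy example (the daily values below are invented; the two statistics are computed per the definitions above):

```python
# Toy illustration of the two monthly statistics, assuming one
# (daily max, daily min, daily average) triple per day. Values are made up.

days = [
    (90.0, 60.0, 73.0),
    (95.0, 65.0, 78.5),
    (92.0, 63.0, 76.0),
]

# T_MONTHLY_MEAN: average of the daily (max + min) / 2 midpoints
t_monthly_mean = sum((mx + mn) / 2 for mx, mn, _ in days) / len(days)

# T_MONTHLY_AVG: average of the full daily averages
t_monthly_avg = sum(avg for _, _, avg in days) / len(days)

print(t_monthly_mean)           # 77.5
print(round(t_monthly_avg, 2))  # 75.83
```

Note the two numbers differ whenever the daily midpoint differs from the true daily average, which is exactly the gap between the two columns in the USCRN files.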
UPDATE2: I’ve just sent this letter to NCDC – to ncdc.info@ncdc.noaa.gov
Hello,
I apologize for not providing a proper name in the salutation, but none was given on the contact section of the referring web page.
I am attempting to replicate the CONUS temperature average of 77.6 degrees Fahrenheit for July 2012, listed in the August 8th 2012, State of the Climate Report here: http://www.ncdc.noaa.gov/sotc/
Pursuant to that, would you please provide the following:
1. The data source of the surface temperature record used.
2. The list of stations used from that surface temperature record, including any exclusions and reasons for exclusions.
3. The method used to determine the CONUS average temperature, such as simple area average, gridded average, altitude corrections, bias corrections, etc. Essentially what I’m requesting is the method that can be used to replicate the resultant 77.6F CONUS average value.
4. A flowchart of the procedures in step 3 if available.
5. Any other information you deem relevant to the replication process.
Thank you sincerely for your consideration.
Best Regards,
Anthony Watts
===================================================
Below is the response I got to the email address provided in the SOTC release, some email addresses redacted to prevent spamming.
===================================================
—–Original Message—–
From: mailer-daemon@xxxx.xxxx.xxx
Date: Thursday, August 09, 2012 3:22 PM
To: awatts@xxxxxxx.xxx
Subject: Undeliverable: request for methods used in SOTC press release
Your message did not reach some or all of the intended recipients.
Sent: Thu, 9 Aug 2012 15:22:43 -0700
Subject: request for methods used in SOTC press release
The following recipient(s) could not be reached:
ncdc.info@ncdc.noaa.gov
Error Type: SMTP
Error Description: No mail servers appear to exists for the recipients address.
Additional information: Please check that you have not misspelled the recipients email address.
hMailServer
===============================
UPDATE3: 8/10/2012. This may put the issue to rest about straight averaging -vs- some corrected method. From http://www.ncdc.noaa.gov/temp-and-precip/us-climate-divisions.php
It seems they are using TCDD (simple average) still. I’ve sent an email to verify…hopefully they get it.
Traditional Climate Divisional Database
Traditionally, climate division values have been computed by averaging the monthly values for all of the Cooperative Observer Network (COOP) stations in each division to produce divisional monthly temperature and precipitation averages/totals. This is valid for values computed from 1931 to the present. For the 1895-1930 period, statewide values were computed directly from stations within each state. Divisional values for this early period were computed using a regression technique against the statewide values (Guttman and Quayle, 1996). These values make up the traditional climate division database (TCDD).
Gridded Divisional Database
The GHCN-D 5km gridded divisional dataset (GrDD) is based on a similar station inventory as the TCDD; however, new methodologies are used to compute temperature, precipitation, and drought for United States climate divisions. These new methodologies include the transition to a grid-based calculation, the inclusion of many more stations from the pre-1930s, and the use of NCDC’s modern array of quality control algorithms. These are expected to improve the data coverage and the quality of the dataset, while maintaining the current product stream.
The GrDD is designed to address the following general issues inherent in the TCDD:
- For the TCDD, each divisional value from 1931-present is simply the arithmetic average of the station data within it, a computational practice that results in a bias when a division is spatially undersampled in a month (e.g., because some stations did not report) or is climatologically inhomogeneous in general (e.g., due to large variations in topography).
- For the TCDD, all divisional values before 1931 stem from state averages published by the U.S. Department of Agriculture (USDA) rather than from actual station observations, producing an artificial discontinuity in both the mean and variance for 1895-1930 (Guttman and Quayle, 1996).
- In the TCDD, many divisions experienced a systematic change in average station location and elevation during the 20th Century, resulting in spurious historical trends in some regions (Keim et al., 2003; Keim et al., 2005; Allard et al., 2009).
- Finally, none of the TCDD’s station-based temperature records contain adjustments for historical changes in observation time, station location, or temperature instrumentation, inhomogeneities which further bias temporal trends (Peterson et al., 1998).
The GrDD’s initial (and more straightforward) improvement is to the underlying network, which now includes additional station records and contemporary bias adjustments (i.e., those used in the U.S. Historical Climatology Network version 2; Menne et al., 2009).
The second (and far more extensive) improvement is to the computational methodology, which now addresses topographic and network variability via climatologically aided interpolation (Willmott and Robeson, 1995). The outcome of these improvements is a new divisional dataset that maintains the strengths of its predecessor while providing more robust estimates of areal averages and long-term trends.
The NCDC’s Climate Monitoring Branch plans to transition from the TCDD to the more modern GrDD by 2013. While this transition will not disrupt the current product stream, some variances in temperature and precipitation values may be observed throughout the data record. For example, in general, climate divisions with extensive topography above the average station elevation will be reflected as cooler climatology. A preliminary assessment of the major impacts of this transition can be found in Fenimore et al., 2011.
The altitude calculation done by Nick Stokes is not useful; it is the wrong metric.
Far more useful would be a calculation of weighted average altitude for each set of stations. Some stations are close to many other stations; some stations are off by themselves, and in the gridding process carry a larger area weight.
I have not done the more useful calculation. For all I know, the temperature delta will be even greater. I just know the work to date has no value in this discussion. I also understand Nick did what another poster proposed he do; taking the other poster’s words literally. Nick probably already knows his number is not useful.
I hope someone is keeping a back-up of all of the USCRN numbers. Sooner or later, these will have to be “adjusted” to fit the CAGW meme. The RAW data will have to disappear…
[snip fake email address, proxy server, policy violation]
Comparing absolute temperatures for different time periods using a variable network of observing stations is extremely dicey. For example, in Anthony’s comparison potential difficulties involve elevation differences (as suggested by Nick Stokes), areal averaging (northern (colder) stations may dominate a non-areal average), and different sensors (CRN sensors are aspirated, while a lot of USHCN stations, especially back in the 1930s, were not—aspirated sensors, I think, tend to record lower temperatures than naturally aspirated CRS thermometers), and there are others. The three I listed may impart a cold bias in the CRN observations compared with the USHCN observations.
But similar (and additional) problems undoubtedly are present when trying to compare USHCN CONUS temperatures in 2012 with those in 1936 (or any other time period). Maybe Nick could calculate the average station elevation in the 1936 network for comparison.
So, Anthony’s point is a good one—until we know how NCDC calculates the CONUS absolute temperatures, it is impossible to judge the validity of the NOAA press release—either internally, or via external comparisons such as to the CRN observations.
If I were doing it, I would probably grid the data, establish an average absolute temperature for each gridcell for some baseline period, and then only deal with anomalies from that point onwards. Then, as Hansen does at GISS, just add the anomaly to the baseline average to get the absolute temperature at any point in time. This method is not perfect, as the variability of the anomalies may change as stations come and go (or are otherwise modified) within each gridcell, but it is less sensitive to station changes than other methods.
But, until we know how NCDC solves all these issues my take home message from Anthony’s post is that comparing absolute temperatures over time is extremely non-robust—something not really emphasized in the NOAA press release and resulting media coverage.
Just my two cents.
-Chip
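A minimal sketch of the gridcell-and-anomaly method Chip outlines, using two made-up gridcells and no area weighting:

```python
# Hypothetical two-gridcell example of the anomaly approach: compute each
# cell's departure from its baseline climatology, then recover an absolute
# estimate by adding the mean anomaly back to the mean baseline.

baseline = {"cell_1": 72.0, "cell_2": 68.5}  # baseline-period absolutes, deg F
current = {"cell_1": 74.1, "cell_2": 70.0}   # this month's gridcell values

anomalies = {c: current[c] - baseline[c] for c in baseline}
mean_anomaly = sum(anomalies.values()) / len(anomalies)
mean_baseline = sum(baseline.values()) / len(baseline)

# GISS-style absolute estimate: baseline plus anomaly
print(round(mean_baseline + mean_anomaly, 2))  # 72.05
```

Because station dropouts mostly cancel in the anomaly step, this is less sensitive to network changes than averaging raw absolutes.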
Maus: ”I think you rather optimistically underestimate ingenuity. From the recent TOBS discussion we’ve learned that the climate folks haven’t been able to read a thermometer and sort out the difference between the midpoint of a range and an average for nearly four decades. And that’s completely aside the notion that the atmosphere is an active ‘heating’ source based on albedo corrected black-body models of the Earth as a lightbulb. That is, a hollow sphere completely enclosing the sun at a distance of 2AU. And this rather than anything trivially or even in the neighborhood of correct by modelling the average temperature as lit by the sun from, you know, the side at a value 86K greater for the irradiated hemisphere on a tidally locked sphere. I’d love to join you in your enthusiasm, but if the entire field cannot sort out basic mathematics or how to read numbers off a dial for these many decades? I’ll lay my bets on the continued success of NOAA keeping two sets of books.”
Agreed, but the new data set has made the temperature illusion that much harder to pull off; they need to reconcile two sets of data now if the newer, more accurate, data set becomes common knowledge (basically share this post as far and wide as you can). As I said, the need to explain the difference (in terms of a historical ‘high’ as compared to an actual low) becomes key.
I vote for a monthly update, if not a dedicated page on this site for USCRN – given this site’s ranking it will be easily found…
My two cents.
I am somewhat disappointed to learn the following: “No stations are near any cities.”
Would it not be prudent to have an equal number of stations located in cities for the purpose of monitoring the Urban Heat Island effect?
Wouldn’t this information be useful for coming up with scientific/statistical bias values that could be used to correct for the Urban Heat Island effect on non-USCRN stations? Especially for working with historic values where a station was encroached upon by an expanding city.
Wouldn’t this information also be useful for predicting the impact of the growing number of expanding cities?
Such data could help us to understand when the density of a city starts to negatively impact itself. More people, more air conditioners, more local heat, requiring air conditioners to run longer or more powerful (BTU’s transferred) air conditioners, thus even more local heat generated.
TC in the OC says:
August 9, 2012 at 10:34 am
“In the 76 years since July 1936 we have been told over and over that CO2 has increased exponentially . . .”
Up? Yes. Exponentially? That seems to be a stretch. See here:
http://www.esrl.noaa.gov/gmd/ccgg/trends/
Not that it apparently makes any difference. See here:
http://notrickszone.com/2012/08/07/epic-warmist-fail-modtran-doubling-co2-will-do-nothing-to-increase-long-wave-radiation-from-sky/
~ ~ ~ ~
In Ellensburg, WA the Max. temp. on Tuesday was 103 °F. and today it is 86 °F. At this rate of cooling we expect local lakes to freeze over by next Wednesday. Thursday at the latest.
FijiDave says: “As the furore on temperatures is down to hundredths of a degree…”
Which once again brings up the whole issue of AGW alarmists predicting future temperatures out to hundredths of a degree, based upon current temperatures that are barely accurate to a degree, and past temperatures that are doubtfully accurate to several degrees.
Question for Historical-Scientists: Has there been any investigation into the accuracy of the thermometers of the 1800s and 1900s? Were the thermometers of the past accurate at one end, at both ends, linearly accurate, or all over the place by varying degrees?
Look at http://www1.ncdc.noaa.gov/pub/data/cirs/state.README for a description of methodology.
Sorry Jim G. Take my word for it or google it, It is Karl. Also you have not done even the basic research on Marx and the quote you are fond of. http://en.wikipedia.org/wiki/From_each_according_to_his_ability,_to_each_according_to_his_need You will note his analysis has nothing to do with yours.
I apologize if, in the flurry of comments, I missed this proposal. It would seem to me that the best method to settle the question of July record temperatures would be to look at the stations that existed in 1936, continue to report, are well sited, and are not affected by urban effects. I feel this would be superior to comparing new networks to old ones, or attempting to somehow correct for siting, elevation, or sensor differences. While the new, well sited stations will have great long-term benefits, I feel using them solely as a justification for or against a monthly temperature record set at a time when they did not exist is not valid. If indeed this becomes the hottest month on record and the question comes up again in, say, 2018, we’ll have highly accurate sensors deployed to either support or refute that conclusion, but not against 76-year-old records.
The two sets of weather stations are in different places. Therefore the averages are different. Nick has calculated the average altitude of the two sets, and the historical network set was on average 178m higher. NOAA reports the historic set average so that it is directly comparable with previous years. [Snip. Do not do that again. ~dbs, mod.]
Anthony reports:
Subject: Undeliverable: request for methods used in SOTC press release
Your message did not reach some or all of the intended recipients.
Oh my. I was going to suggest they might have meant info@…. but at their contact page http://www.ncdc.noaa.gov/oa/about/ncdccontacts.html they have names like ncdc.orders and ncdc.webmaster, so it appears to me that ncdc.info has passed into history.
Maybe you’ll have to go all formal at http://www.rdc.noaa.gov/~foia/index.html
OTOH, Tom Karl promises at http://www.ncdc.noaa.gov/oa/about/welcomefromdirector.html that “As stated above, the Center is a service organization. I invite you to explore our climate, radar, and satellite resources. We continuously endeavor to improve the services provided and welcome your comments and suggestions. I can assure you we review each one. Please direct your comments and suggestions to ncdc.webmaster@noaa.gov. ”
At the very least, the ncdc webmaster would appreciate hearing that there are some references to ncdc.info they haven’t purged yet.
Bob Koss (at August 9, 2012 at 12:00 am) asked if the paired stations could be calculated as single sources.
I took the liberty of doing that and found the result was a negative 0.18°F correction, making the July average using the NOAA USCRN data 75.3°F vice 75.5°F.
Also, the two stations in Goodwell, OK (in the western panhandle) are about 3 miles apart and should also be considered a pair, making 8 CONUS pairs. All the individual paired data were very similar, as would be expected. I simply used the average of each pair’s data as a single station value.
To Moderator: Excel File available for the asking. I simply made a copy of the existing worksheet and adapted it so you can switch between the two. And converted some of the fixed data to calculated and highlighted where I changed formulas.
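The pair-collapsing step described above can be sketched as follows (station names and temperatures are invented for illustration):

```python
# Collapse co-located station pairs to single values before averaging, so
# a pair is not double-counted in the CONUS mean. All numbers are made up.

readings = {
    "Goodwell OK #1": 79.2,
    "Goodwell OK #2": 79.4,  # ~3 miles from #1, so treated as its pair
    "Elgin AZ": 77.0,
}
pairs = [("Goodwell OK #1", "Goodwell OK #2")]

collapsed = dict(readings)
for a, b in pairs:
    va, vb = collapsed.pop(a), collapsed.pop(b)
    collapsed[a + " (paired)"] = (va + vb) / 2  # pair becomes one station

conus = sum(collapsed.values()) / len(collapsed)
print(round(conus, 2))  # 78.15
```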
It would seem that the secondary email address is the one to use
Climate Services and Monitoring Division
NOAA/National Climatic Data center
151 Patton Avenue
Asheville, NC 28801-5001
fax: +1-828-271-4876
phone: +1-828-271-4800
email: ncdc.info@ncdc.noaa.gov
To request climate data, please E-mail:ncdc.orders@ncdc.noaa.gov
Damn straight.
That’s how I learned it in school, and I’m fracked if I’ll buy the smearing of terminology necessary to cover the AGW climatology rot.
Median is midpoint. Mean is total of all data points divided by the number of data points. No fuzzing allowed.
Did anyone answer wayne’s comment? It is very important. Here it is again:
Anthony,
■ Temperature is measured continuously and logged every 5 minutes, ensuring a true capture of Tmax/Tmin
That is why it is hotter in 2012 than in the 1930′s… they were not measuring Tmax’s every five minutes in the ’30′s. I have downloaded daily since June 22nd the Oklahoma City hourly records and never were the highest hourly maximum what was recorded for the maximum of the day, the maximum was consistently two degrees Fahrenheit greater than that of the highest HOUR but evidently they count 5 minute mini-microbursts of heat today instead. I guess hourly averages are not even hot enough for them (yeah, blame it on CO2). That, by itself, invalidates all records being recorded today to me, I don’t care how sophisticated their instruments are… the recording methods themselves have changed and anyone can see it in the “3-Day Climate History”, the hourly readouts, given at every city on their pages. Don’t believe me, see it for yourself what is going on in the maximums. Minimums rarely show this effect for cold is the absence of thermal energy, not the energy which can peak up for a few minutes, much more than cold readings.
You’re a meteorologist, how do you see this discrepancy?
Matt, your idea about “looking at the stations that existed in 1936, continue to report, are well sited, and not affected by urban effects” to determine if there is any warming since 1936 would not account for the problem “wayne” brought up about the way new sensors can measure Tmax as a short duration burst, while the old sensors could not. This suggests to me that there is no way to determine whether it is warmer now than in the 1930s (unless, of course, some stations maintained the same technology from the 1930s right up to the present (do any?) and also followed the criteria you suggested).
I fear that the new data will somehow be corrupted to produce the same spurious warming as the old station data. I can also almost see a new paper from NOAA “A new calibration of the…”
Anthony, your unadjusted data will be priceless when they adjust the new data to show that the new dataset shows warming too.
Thanks for a nice analysis and showing another portion of the truth.
It’s really too bad that averaging temperatures is meaningless.
cols 57 — 63 [7 chars] T_MONTHLY_MEAN
The mean temperature, in degrees C, calculated using the typical
historical approach of (T_MONTHLY_MAX + T_MONTHLY_MIN) / 2
WHAT ?!?!?!? This is supposed to be the USCRN: US Climate REFERENCE Network.
A network of stations that measure temperatures every 5 minutes. A high-priced network of the highest quality stations. Most scientists and statisticians would expect “Mean” to be the centroid of all temperatures sampled, approximately 8640 data points/month (12 points/hr × 24 hr/day × 30 days/month).
But NOAA has the GALL to define “Mean” to be the mid point between the single warmest and single coldest measured temperature in the month ??
That should not be called T_MONTHLY_MEAN.
Nor should it be T_MONTHLY_MEDIAN
Maybe it should be called T_MONTHLY_MIDDLE.
but I think truth in advertising requires it to be T_MONTHLY_MUDDLE.
Truly, I am shocked that anyone, much less the curators of the Climate Reference Network, would apply the term “Mean” to only two outlier points in a dataset of 8640 data points. NOAA credibility goes “Crash and Burn”.
Re: NCDC email address…
From the other article “Dear NOAA and Seth, which 1930′s were you comparing to when you say July 2012 is the record warmest?” the USA graphics indicate the correct email address is just noaa.gov without the leading ncdc:
ncdc.info@noaa.gov
NOT the @ncdc.noaa.gov
Cheers and hope this helps
Stephen Rasey says: August 9, 2012 at 11:02 pm
“But NOAA has the GALL to define “Mean” to be the mid point between the single warmest and single coldest measured temperature in the month ??”
No, you’ve read it wrongly. T_MONTHLY_MAX is the average for the month of the daily maxima, and T_MONTHLY_MIN is the average of the daily minima. T_MONTHLY_MEAN is the mean of those two numbers.
The fact is that these files have over a century of data. For the majority there are not 8640 data points, but just 3 per day (Max, Min and temp at time of reading). That’s all we have. For modern instrumentation, they emulate that measure. Otherwise comparison with the past would not work.
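Nick’s description can be illustrated with a toy roll-up from sub-daily samples (two short days of invented readings standing in for a month of 5-minute data):

```python
# Derive daily max/min from sub-daily samples, then build the monthly
# statistics the way Nick describes: T_MONTHLY_MAX is the average of the
# daily maxima, T_MONTHLY_MIN the average of the daily minima, and
# T_MONTHLY_MEAN the midpoint of those two. All readings are invented.

daily_samples = [
    [60.0, 72.0, 90.0, 75.0],  # day 1: a handful of 5-minute readings
    [65.0, 80.0, 95.0, 78.0],  # day 2
]

maxima = [max(day) for day in daily_samples]
minima = [min(day) for day in daily_samples]

t_monthly_max = sum(maxima) / len(maxima)   # 92.5
t_monthly_min = sum(minima) / len(minima)   # 62.5
t_monthly_mean = (t_monthly_max + t_monthly_min) / 2
print(t_monthly_mean)  # 77.5
```

So the modern 5-minute data is reduced to daily max/min first, emulating the historical measure rather than averaging all 8640 monthly samples.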
Alex Heyworth says:
August 9, 2012 at 12:32 am
Further to your reply to Esko, Anthony, there is a reason why they used to be called airfields (in Anglo usage, at any rate – I don’t know about US usage for the 30s and 40s). Remember all that footage of Spitfires bumping over the turf?
________________________________
The Raleigh-Durham International Airport didn’t even exist in 1934 and 1936: the General Assembly of North Carolina chartered the Raleigh-Durham Aeronautical Authority in 1939, and the name was changed in 1945 to the Raleigh-Durham Airport Authority. Before that there was a tiny airfield near the city of Raleigh.