USHCN2 unadjusted and adjusted CONUS max and min temperatures – click for larger image
Background for new readers: The US Historical Climatology Network is a hand-picked set of 1221 weather stations around the USA used to monitor climate. The network is currently being surveyed by a team of volunteers at www.surfacestations.org
When I visited the National Climatic Data Center a couple of weeks ago, one of the presentations made to me was on the upcoming implementation of the USHCN2 adjustments to the NOAA surface temperature record. Many have been critical of the USHCN1 adjustments, and for good reason: they completely miss some documented and undocumented station moves and biases that we have shown to be present via photography from the www.surfacestations.org project.
The goal of USHCN2 appears to be mostly to find undocumented change points in the data, since from my discussions at NCDC it became clear to me that the metadata they have on hand is not always accurate (or, in some instances, even submitted by NWS personnel to NCDC at all). The weak link is the field reporting of metadata. They recognize that.
NCDC is thus faced with the task of finding and correcting such change points so that the bias from a site move to a warmer or cooler measurement environment doesn’t show up as a false trend in the data. In some cases, such moves are easy to spot in the data, such as the USHCN station in Lampasas, TX, which was moved from an observer’s back yard to a parking lot 30 feet from the main highway through downtown. The changepoint made it all the way through the NOAA data and into GISS, as shown by the GISS graph below:
Click to see the full-sized GISS record
Matt Menne of NOAA is leading the USHCN2 implementation, and was kind enough to present a slide show to me showing how it works.
You can see the PowerPoint here: watts-visit (PPT, 6.1 MB)
To get an idea of the differences, here is a summary from the slide show:
USHCN1: originally released in 1987, 1221 stations
Addresses:
- Time of observation bias (Karl et al. 1986; Vose et al. 2003)
- Station History Changes (Karl and Williams 1987)
- MMTS instrument change (Quayle et al. 1991)
- Urbanization (Karl et al. 1988)
USHCN2: to be released in 2008, 1218 stations (actually, more stations have been closed than this count suggests)
Addresses:
- Time of observation bias (Karl et al. 1986; Vose et al. 2003)
- Station History and Undocumented Changes (Menne and Williams, Journal of Climate, in review)
While it seems that USHCN2 will be an improvement in detecting and correcting undocumented station moves and biases, it remains to be tested in a real-world scenario where an undocumented station move is known to have occurred but hasn’t been reported by NWS COOP managers for inclusion in the NCDC MMS metadatabase.
Fortunately, during my week-long survey trip in NC and TN that coincided with my NCDC visit, I found two such stations that have in fact been moved, with significant differences in their surroundings, but whose moves have not been reported to NCDC.
Matt Menne has graciously agreed to run a blind test on the data for these two stations I’ve located with undocumented changes to see if the new USHCN2 algorithms can in fact detect a changepoint and correct for it. I’ll keep you posted.
A good portion of the changepoint detection work can be traced back to:
Lund, R., and J. Reeves, 2002: Detection of undocumented changepoints: a revision of the two-phase regression model. J. Climate, 15, 2547-2554.
Abstract:
Changepoints (inhomogeneities) are present in many climatic time series. Changepoints are physically plausible whenever a station location is moved, a recording instrument is changed, a new method of data collection is employed, an observer changes, etc. If the time of the changepoint is known, it is usually a straightforward task to adjust the series for the inhomogeneity. However, an undocumented changepoint time greatly complicates the analysis. This paper examines detection and adjustment of climatic series for undocumented changepoint times, primarily from single site data. The two-phase regression model techniques currently used are demonstrated to be biased toward the conclusion of an excessive number of unobserved changepoint times. A simple and easily applicable revision of this statistical method is introduced.
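To make the mechanics concrete, here is a minimal sketch in Python of the two-phase regression scan the abstract describes. The function is my own construction, not NCDC’s code, and, as the paper stresses, the maximized statistic must be judged against Lund and Reeves’ corrected critical values rather than a standard F table:

```python
import numpy as np

def two_phase_fstat(y):
    """Scan every admissible changepoint time c and return the largest
    two-phase regression F statistic (in the spirit of Lund & Reeves 2002).
    Null model: one straight line through the whole series.
    Alternative: separate lines before and after c."""
    n = len(y)
    t = np.arange(n, dtype=float)
    # Null model: a single trend line fit to the whole series
    resid0 = y - np.polyval(np.polyfit(t, y, 1), t)
    sse0 = np.sum(resid0 ** 2)
    best_f, best_c = -np.inf, None
    for c in range(3, n - 3):  # keep at least 3 points in each segment
        r1 = y[:c] - np.polyval(np.polyfit(t[:c], y[:c], 1), t[:c])
        r2 = y[c:] - np.polyval(np.polyfit(t[c:], y[c:], 1), t[c:])
        sse1 = np.sum(r1 ** 2) + np.sum(r2 ** 2)
        # The two-phase fit has 2 extra parameters; n - 4 residual d.o.f.
        f = ((sse0 - sse1) / 2.0) / (sse1 / (n - 4))
        if f > best_f:
            best_f, best_c = f, c
    return best_f, best_c
```

Because the changepoint time c is itself chosen to maximize F, comparing the result against ordinary F(2, n-4) percentiles overstates significance; that inflation is precisely the bias the paper corrects.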
I talked with Robert Lund at length last summer at Roger Pielke’s conference about USHCN2 and his view then was that the NOAA implementation wasn’t as “robust as he’d hoped” (my description based on the conversation).
Station moves are only part of the potential biases that can creep into the surface record of a station. USHCN2 will not catch and correct for things like:
- Gradual UHI increase in the surrounding area
- Tree shading/vegetation growth/loss near the sensor increasing/decreasing gradually
- A gradual buildup of surface elements around the sensor, such as buildings, asphalt, concrete, etc. Though if an asphalt parking lot suddenly went in close by, that would likely show as a step, which may be detected.
- Drift of the temperature sensor (+/-) over time.
- Other low-frequency changes that don’t show up as a step function in the data
When I queried Matt Menne during the presentation at NCDC about the sensitivity of the new algorithm to detect changepoints, he suggested it would have about a 0.5°C step threshold. I had hoped it would be more sensitive.
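To put that 0.5°C figure in context, here is a small simulation of my own construction, a deliberately crude stand-in for NCDC’s actual pairwise method, assuming roughly 0.5°C of year-to-year noise in a single station’s annual means. It first calibrates a simple mean-shift scan on step-free series (which doubles as a false-alarm check), then counts how often inserted steps of various sizes are caught:

```python
import numpy as np

rng = np.random.default_rng(0)

def max_t(y):
    """Largest two-sample t statistic over all split points:
    a bare-bones mean-shift (step) changepoint scan."""
    n, best = len(y), 0.0
    for c in range(5, n - 5):  # keep at least 5 points on each side
        a, b = y[:c], y[c:]
        pooled = np.sqrt(((len(a) - 1) * a.var(ddof=1)
                          + (len(b) - 1) * b.var(ddof=1)) / (n - 2))
        t = abs(a.mean() - b.mean()) / (pooled * np.sqrt(1 / len(a) + 1 / len(b)))
        best = max(best, t)
    return best

n, sigma, trials = 80, 0.5, 500   # 80 annual means, 0.5 C noise (assumed)
# Calibrate the 5% false-alarm level on series with no step at all
crit = np.quantile([max_t(rng.normal(0, sigma, n)) for _ in range(trials)], 0.95)
for step in (0.25, 0.5, 1.0):     # step sizes in deg C, inserted mid-series
    hits = sum(max_t(rng.normal(0, sigma, n) + step * (np.arange(n) >= n // 2)) > crit
               for _ in range(trials))
    print(f"{step} C step: detected {100 * hits / trials:.0f}% of the time")
```

Steps of about 0.25°C mostly slip through, while 1°C steps are caught nearly every time; that makes a working threshold in the neighborhood of 0.5°C plausible for noisy single-station series, even if I had hoped for better.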
While I look forward to seeing the improvements that USHCN2 has to offer, I feel that what also needs to be done is a historic time-line reconstruction for each USHCN site to help locate other possible change points that may be missed by the USHCN2 algorithms. I feel that a qualitative analysis can help zero in on potential errors in the station record. NCDC’s claim that they can statistically adjust the poorly sited locations so as to add skillful trend information is contradicted by some of Roger Pielke’s papers, so it will be interesting to see if the test cases I’m submitting will be detected or not.
Here are the citations for several of Pielke’s papers on the problems with the land surface temperature record as applied to long-term trends:
That is part of the reason NCDC invited me for a visit: they feel the USHCN station surveys are a valuable tool to help them judge the health of the sites and network. They look forward to the completion of the surfacestations.org USHCN survey.


This is beyond silly. Why have an algorithm (a one-word oxymoron, by the way) triggered on a 0.5 degree centigrade change threshold when direct annual or more frequent observation would be more effective in identifying circumstances that would alter the climatic observations? Why use a formula or model when direct empirical observation is so obviously superior? Laziness, perhaps?
Thanks Anthony
Well, let us hope that these changes will give us “skeptics” more confidence in the numbers, although I see no reason why UAH and RSS figures should not be used to determine trends at this point.
Jerry
The more adjustment I see, the more I believe in MAN-MADE global warming. Wink, wink, nudge, nudge
I guess that I am lost. The “adjusted” maps appear warmer than the unadjusted maps (more red, less blue). What is being reported now – adjusted or unadjusted? Is the goal of the project to “fix” adjustment errors? What is the basic flaw in the unadjusted maps? I have read enough to know that the sites are not conducive to giving accurate readings – implications mostly that they are too warm. Is this the source of the unadjusted information?
As you can see, I am totally lost.
Here’s what I just don’t get: if you look at Anthony’s four plots at the top, the unadjusted trends are all cooler than the adjusted trends, pretty much everywhere in the country. Yet it’s painfully obvious to anyone who looks at any actual data (pictures, individual temperature trends) of the USHCN stations that the predominant non-climatic influences on stations are urbanization and siting (both positive biases), which should mean that the overall USHCN adjustments should make trends slightly cooler, if anything.
I think the TOBS adjustment is too large, and has the wrong sign in many cases, and the UHI/microclimate adjustments are too small. I’m betting the US temperature trend is 30-50% overstated, just as McKitrick and Michaels showed by correlation with economic indicators in their paper. And if NOAA would open their eyes they’d see it too, instead of burying their heads in computer-modeled corrections instead of actual data.
REPLY: FYI the four plots at the top were provided by NCDC in the PowerPoint presentation available in the link in the article. I captured the image and presented it.
USHCN2 sounds wonderful, but it is not even in service yet and we are already hearing about adjustments. I must agree with Mike. It doesn’t take a genius to take a picture of an original installation and surrounding area and compare it to the current site structure. If the site has changed, then maybe adjustments, but not until then. If the data indicates some sort of change, send a person or team out to the site to find the reason. Any other method is foolish, and we will end up with what we have now under Jim Hansen’s handling of the USHCN.
I stress: go to the site and physically inventory what has changed and what the bias is. Then, if the site can’t be repaired, move it, and document the move and the reason why (SITE COMPROMISED). We need to measure the data properly or it will sneak up on us, and then we are in trouble.
I am not a scientist, but this is something that sets me off. I get beat up on other blogs about my demand for common sense in measurement, but it is the only policy that gets good results. You must go see what is wrong; anything else is just guessing, and that is what we are doing now.
My 2 cents.
Bill
You can make your own graphs at their site with the USHCN2 data here.
http://www.ncdc.noaa.gov/oa/climate/research/cag3/na.html
Anthony,
Not very precise, but my eyeball says they warmed the max temps substantially, and cooled the min temps in the west and warmed them in the southeast?? Guess I shouldn’t be surprised, based on the excellent work you and the other volunteers are doing!!
Makes me wonder even more what the average is covering.
Anyone planning an analysis??
REPLY: This is all preliminary stuff, and it was provided as an example to me. I agree that the adjustments look a bit lumpy, but I’ll wait for the final output before I begin a criticism.
If the stations I’ve found with undocumented moves aren’t detected, then I’ll have real cause for complaint.
Let’s play around with NCDC’s graph maker and its trend lines.
Let’s use the span between the top two warmest years, 1934-1998:
http://climvis.ncdc.noaa.gov/tmp/graph-May1322:53:081667785644.gif
Trend is 0.01 degF/Decade
1895-1933
http://climvis.ncdc.noaa.gov/tmp/graph-May1322:58:106560974121.gif
Trend is 0.21 degF/Decade
1999-2007
http://climvis.ncdc.noaa.gov/tmp/graph-May1323:00:306210327148.gif
Trend is 0.22 degF/Decade
1895-2007
http://climvis.ncdc.noaa.gov/tmp/graph-May1323:06:482291259765.gif
Trend is 0.11 degF/Decade
Let’s try this one.
1921-1980
http://climvis.ncdc.noaa.gov/tmp/graph-May1323:20:243466186523.gif
Trend is -0.12 degF/Decade
1895-1920
http://climvis.ncdc.noaa.gov/tmp/graph-May1323:22:589812011718.gif
Trend is -0.12 degF/Decade
1981-2007
http://climvis.ncdc.noaa.gov/tmp/graph-May1323:25:454860534667.gif
Trend is 0.59 degF/Decade
Pick your cherry, just don’t choke on the pits.
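If you want to reproduce this kind of endpoint sensitivity yourself, here is a quick sketch; the series below is an invented stand-in, so substitute the real CONUS annual means pulled from NCDC’s graph maker linked above:

```python
import numpy as np

def decadal_trend(years, temps, start, end):
    """OLS trend in deg F per decade over the inclusive window [start, end]."""
    m = (years >= start) & (years <= end)
    return 10.0 * np.polyfit(years[m], temps[m], 1)[0]

# Invented stand-in for the CONUS annual means (mild trend plus noise);
# replace with the real values downloaded from NCDC
years = np.arange(1895, 2008)
temps = (52.0 + 0.011 * (years - 1895)
         + np.random.default_rng(1).normal(0.0, 0.8, years.size))

for a, b in [(1934, 1998), (1895, 1933), (1999, 2007), (1895, 2007)]:
    print(f"{a}-{b}: {decadal_trend(years, temps, a, b):+.2f} degF/decade")
```

Short windows with noisy endpoints swing the fitted slope wildly, which is the whole point of the exercise above.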
I would like to see a network of sensors mounted 100 feet above ground level on rural communications towers across the country. I would expect that to be far enough above the small things such as shade and air conditioner exhaust that can impact ground level sensors.
Well, at least the adjustments are not consistent with AGW, given that the max temperatures have been increased more than the minimums.
I can remember reading on another site about how interpolations were better than direct observations because observations could be corrupted by deliberate human manipulation or laziness. At the very least you need some original observations to interpolate from; 0.5 C is a big interpolation threshold; it would be interesting to know whether the majority of the exclusions were +ve or -ve anomalies, and how many below-average anomalies were replaced with a positive interpolation and vice versa.
I see a blemish to the far left, just below the equator, on the latest solar photo (13 May 17:41). Is this anything significant?
Appears to be a small sunspot forming (Cycle 23?).
http://www.spaceweather.com/
Surely Lampasas MUST be removed from the GISS data?
REPLY: Any sensible person would think that, but sadly, no.
Seems somewhat medieval and a waste of money/resources to insist on using ground stations in limited locations on land to attempt to determine global temperature.
Prior to the space age it was all we had, but since 1979 we have had a better, more cost-effective method. Why do NOAA and NASA, of all agencies, continue to insist on ground-based, limited-coverage, easily biased systems in preference to the available satellite data (three separate, and I assume independent, sources)?
Only reason I can think is that the ground data is more amenable to manipulation to allow them to provide the data they want.
I agree with John Bunt, given what you have demonstrated with USHCN, an adjustment that shows greater warming than the actual measurements is unlikely to be correct.
I think they hope no one will notice that they have introduced an unsupported fudge factor.
I agree with Bill, you have to maintain the quality by having a look at what is happening to the sites.
And Crosspatch has a great idea: put a fiberglass beam 100′ up every cell tower and mount a sensor that takes hourly readings and phones them in once a day to a central location. Each would have a known altitude above sea level to account for any problems that causes.
Anthony,
Besides the blind test of your two stations with moves, there ought to be a couple more with no moves to check for false positive results. Actually, a whole suite of stations with known provenance and a variety of changes really is required to test the proposal adequately. There probably are a couple of dozen suitable candidates already surveyed in SurfaceStations.
A step threshold sensitivity of +/- 0.5 C is about the accuracy of the old min-max mercury thermometers, and I’d really like to see it proven that the new adjustments will be even this good. There’s been some research in regime-change analysis in ecosystem studies to detect environmental changes soon after they happen so that management practices can be altered effectively. It would be appropriate for NCDC to test their model against lots of different algorithms that have a track record.
Off topic:
The April GISTEMP anomaly has just been released. The figure is 0.41. This is lower than their March figure of 0.60. (This had been previously announced as 0.67.)
I just looked at the chart maker for USHCN2 and found it quite interesting: it seems that, on average in the U.S., January 2006 was (eyeballing it) 3.5 to nearly 4.0 degrees warmer than January 1998, with a 0.84 degF/decade trend.
January 1996 – 2008 Data Values:
January 2008: 30.70 degF Rank: 4
January 2007: 31.45 degF Rank: 5
January 2006: 39.52 degF Rank: 13
January 2005: 33.45 degF Rank: 8
January 2004: 30.45 degF Rank: 2
January 2003: 33.00 degF Rank: 7
January 2002: 35.02 degF Rank: 11
January 2001: 31.63 degF Rank: 6
January 2000: 34.10 degF Rank: 9
January 1999: 34.28 degF Rank: 10
January 1998: 35.52 degF Rank: 12
January 1997: 30.48 degF Rank: 3
January 1996: 30.22 degF Rank: 1
January 1977 – 1933 Average = NaN degF
January 1996 – 2008 Trend = 0.84 degF / Decade
It will not display the graph, but it sure makes the past decade a wonder. I think there are some real adjustments being made; I am glad these are just preliminary findings.
Bill Derryberry
The presentation shows that the TOBs adjustment will add 0.2C to the trend (from 1920 on for both the Maximum and Minimum temperature).
The new Homogenization adjustment will add a further 0.45C to the Maximum temperature and 0.0C to the Minimum temperature trend (from roughly 1900 on).
The total adjustments will add approximately 0.65C to the Maximum raw data trend and 0.2C to the Minimum temperature trend.
I note the previous charts I have seen had only adjusted the average trend by 0.55F up to 1999, so these new adjustments add a further 0.25C to the previous adjustments (assuming I can do the math right when averaging the Maximum and Minimum).
Given there is no increase in the adjusted average US temperature data since 1934, please show us the raw data as well (since the trend is artificially increased by approximately 0.5C to 0.6C).
And Anthony, please E-mail NOAA and ask them to provide the average temperature data (and not obscure everything with separate Maximum and Minimum trends which are supposed to be averaged out.)
I always have a problem with anyone interpolating such an important metric. But if the database has been corrupted, as suggested by all the discoveries made by Tony and his team, something has to be done. An analysis of past data can isolate the “outliers” and/or generated figures, but application of any formula to correct these figures must be open and above board.
Given the importance of this task and the fact many eyes will be watching (thanks to Tony et al), perhaps we might just see something come out of this after all. AS LONG AS HANSEN AND HIS COHORTS DON’T HAVE THEIR DIRTY FINGERS IN THE MIX. I trust him like I trust his good buddy, Goofy Gore!
We should also note that if they raise past temperature and trends with faulty numbers, the current cooling will look even more dramatic. On the flip side, if they lower them too much, the current cooling will look rather plebeian.
Jack Koenig, Editor
The Mysterious Climate Project
http://www.climateclinic.com
Sorry, I did the math wrong: the new adjustments add 0.45C to the raw data trend (not 0.5C to 0.6C as I wrote above), but that is still up 0.15C from the previous adjustment regime.
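Laying the arithmetic out explicitly (all inputs are readings from the presentation, in deg C unless noted), a quick check:

```python
tobs = 0.2                        # TOBs adjustment to both max and min trends
homog_max, homog_min = 0.45, 0.0  # homogenization adjustments to max and min
adj_max = tobs + homog_max        # 0.65 C added to the max trend
adj_min = tobs + homog_min        # 0.20 C added to the min trend
adj_avg = (adj_max + adj_min) / 2 # adjustment to the average trend
prev_avg = 0.55 * 5.0 / 9.0       # the earlier 0.55 degF figure, in C
print(round(adj_avg, 2), round(adj_avg - prev_avg, 2))  # about 0.43 and 0.12
```

That gives roughly 0.43 C total and about 0.12 C over the earlier regime, in the same ballpark as the rounded figures above.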
I can understand site changes, concrete pads and the like, but how do we get ‘undocumented’ station moves? Somebody had to give the order, fork over a few bucks, etc. Unless these things have grown legs, there should be some kind of record. After all, the taxpayers eventually foot the bill, and bureaucracies thrive on paperwork.
REPLY: NWS and NCDC are like families with kids. They have the same parent (NOAA) but they don’t always cooperate fully.
Any statistician will tell you 10 samples is a huge improvement over 1 sample. 100 samples is a modest improvement over 10 samples. 1000 samples adds little over 100 samples.
The point being, we don’t need 1200 stations of questionable quality to tell us the temperature trend; we need 100 good stations. The other 1100 stations will just tell us how badly they are corrupting the trend shown by the 100 good stations.
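That diminishing return follows from how the standard error of a mean shrinks with sample size. A minimal illustration, assuming independent stations (real stations are spatially correlated, which only strengthens the point):

```python
import numpy as np

sigma = 1.0  # per-station noise, arbitrary units
for n in (1, 10, 100, 1000):
    print(f"{n:>4} stations: standard error {sigma / np.sqrt(n):.3f}")
```

Going from 1 to 10 stations cuts the error by a factor of about 3.2; going from 100 to 1000 buys the same factor again, but on an error that is already small.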