USHCN Version 2 – prelims, expectations, and tests

USHCN2 unadjusted and adjusted CONUS max and min temperatures – click for larger image

Background for new readers: The US Historical Climatology Network is a hand-picked set of 1221 weather stations around the USA used to monitor climate. The network is currently being surveyed by a team of volunteers at www.surfacestations.org.

When I visited the National Climatic Data Center a couple of weeks ago, one of the presentations made to me was on the upcoming implementation of the USHCN2 adjustments to the NOAA surface temperature record. Many have been critical of the USHCN1 adjustments, and for good reason – they completely miss some documented and undocumented station moves and biases that we have shown to be present via photography from the www.surfacestations.org project.

The goal of USHCN2 appears to be mostly the detection of undocumented change points in the data, since from my discussions at NCDC it became clear that the metadata they have on hand are not always accurate (and in some instances have not even been submitted by NWS personnel to NCDC). The weak link is the field reporting of metadata, and they recognize that.

NCDC is thus faced with the task of finding and correcting such change points so that the bias from a site move to a warmer or cooler measurement environment doesn’t show up as a false trend in the data. In some cases such moves are easy to spot in the data, such as the USHCN station in Lampasas, TX, which was moved from an observer’s back yard to a parking lot location 30 feet from the main highway through downtown. The changepoint made it all the way through the NOAA data into GISS, as shown by the GISS graph below:

[Figure: Lampasas, TX USHCN station plot – click to see the full sized GISS record]
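To make concrete how a step like the Lampasas move can masquerade as warming, here is a minimal synthetic sketch (my own illustration, not NCDC or GISS code; the years, values, and step size are all made up): a flat, noisy series with a single warm step partway through acquires a spurious positive trend when fit naively.

```python
# Minimal illustration (synthetic data, not the Lampasas record): a single
# undocumented step change injects a spurious linear trend into an otherwise
# flat temperature series.
import numpy as np

rng = np.random.default_rng(7)
years = np.arange(1950, 2008)
temps = 20.0 + rng.normal(0, 0.3, len(years))   # flat climate plus noise, degC
temps[years >= 1990] += 1.0                     # station moved to a warmer spot

slope_per_year = np.polyfit(years, temps, 1)[0]
print(f"apparent trend: {10 * slope_per_year:+.2f} degC/decade")   # spurious warming
```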

Matt Menne of NOAA is leading the USHCN2 implementation, and was kind enough to present a slide show for me showing how it works.

You can see the PowerPoint here: watts-visit (PPT, 6.1 MB)

To get an idea of the differences, here is a summary from the slide show:

USHCN1 – originally released in 1987, 1221 stations

Addresses:

  • Time of observation bias (Karl et al. 1986; Vose et al. 2003)
  • Station History Changes (Karl and Williams 1987)
  • MMTS instrument change (Quayle et al. 1991)
  • Urbanization (Karl et al. 1988)

 

USHCN2 – to be released in 2008, 1218 stations (actually more stations have been closed than this)

Addresses:

  • Time of observation bias (Karl et al. 1986; Vose et al. 2003)
  • Station History and Undocumented Changes (Menne and Williams, Journal of Climate, in review)

While it seems that USHCN2 will be an improvement in detecting and correcting undocumented station moves and biases, it remains to be tested in a real-world scenario where an undocumented station move is known to have occurred but hasn’t been reported by NWS COOP managers for inclusion in the NCDC MMS metadatabase.

Fortunately, during my week-long survey trip in NC and TN that coincided with my NCDC visit, I found two such stations that have in fact been moved, with significant differences in their surroundings, but whose moves have not been reported to NCDC.

Matt Menne has graciously agreed to run a blind test on the data for these two stations I’ve located with undocumented changes to see if the new USHCN2 algorithms can in fact detect a changepoint and correct for it. I’ll keep you posted.

A good portion of the changepoint detection work can be traced back to:

Lund, R., and J. Reeves, 2002: Detection of undocumented changepoints: a revision of the two-phase regression model. J. Climate, 15, 2547-2554.

Abstract:

Changepoints (inhomogeneities) are present in many climatic time series. Changepoints are physically plausible whenever a station location is moved, a recording instrument is changed, a new method of data collection is employed, an observer changes, etc. If the time of the changepoint is known, it is usually a straightforward task to adjust the series for the inhomogeneity. However, an undocumented changepoint time greatly complicates the analysis. This paper examines detection and adjustment of climatic series for undocumented changepoint times, primarily from single site data. The two-phase regression model techniques currently used are demonstrated to be biased toward the conclusion of an excessive number of unobserved changepoint times. A simple and easily applicable revision of this statistical method is introduced.
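For readers who want to see the mechanics, here is a toy sketch of the two-phase regression idea the abstract describes: for each candidate break point, fit separate lines before and after the break and compare against a single line with an F-type statistic, taking the largest value over all candidates. This is my own simplified illustration on synthetic data (not the Lund and Reeves code, and not the Menne and Williams USHCN2 algorithm), and it glosses over the paper’s central point, namely what critical value the maximum F must exceed to avoid declaring too many changepoints.

```python
# Toy two-phase regression changepoint search (in the spirit of Lund and
# Reeves 2002). Synthetic data; the critical-value question is ignored here.
import numpy as np

def two_phase_f(y):
    """Max F-statistic comparing one straight-line fit (null) against
    separate straight-line fits before/after each candidate changepoint."""
    n = len(y)
    t = np.arange(n, dtype=float)
    sse_null = np.sum((y - np.polyval(np.polyfit(t, y, 1), t)) ** 2)
    best_f, best_c = 0.0, None
    for c in range(3, n - 3):                     # keep a few points on each side
        sse_alt = 0.0
        for seg_t, seg_y in ((t[:c], y[:c]), (t[c:], y[c:])):
            r = seg_y - np.polyval(np.polyfit(seg_t, seg_y, 1), seg_t)
            sse_alt += np.sum(r ** 2)
        f = ((sse_null - sse_alt) / 2) / (sse_alt / (n - 4))   # 2 extra parameters
        if f > best_f:
            best_f, best_c = f, c
    return best_f, best_c

rng = np.random.default_rng(42)
years = np.arange(1950, 2008)
temps = 15.0 + rng.normal(0, 0.3, len(years))
temps[30:] += 0.8                                 # undocumented move in 1980
f_max, c_hat = two_phase_f(temps)
print(f"max F = {f_max:.1f} at year {years[c_hat]}")
```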

I talked with Robert Lund at length last summer at Roger Pielke’s conference about USHCN2 and his view then was that the NOAA implementation wasn’t as “robust as he’d hoped” (my description based on the conversation).

Station moves are only part of the potential biases that can creep into the surface record of a station. USHCN2 will not catch and correct for things like:

  • Gradual UHI increase in the surrounding area
  • Tree shading/vegetation growth/loss near the sensor increasing/decreasing gradually
  • A gradual buildup of surface elements around the sensor, such as buildings, asphalt, concrete, etc. Though if an asphalt parking lot suddenly went up close by, that would likely show as a step, which may be detected.
  • Drift of the temperature sensor +/- over time.
  • Other low frequency changes that don’t show a step function in the data

When I queried Matt Menne during the presentation at NCDC about the sensitivity of the new algorithm to detect changepoints, he suggested it would have about a 0.5°C step threshold. I had hoped it would be more sensitive.
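To illustrate why the items in the list above are hard for any changepoint test to catch, here is a rough sketch contrasting an abrupt move with a gradual bias of the same total size. The 0.5°C figure echoes the threshold quoted above; the sliding-window rule is a crude stand-in of my own, not the actual USHCN2 pairwise algorithm.

```python
# Crude illustration: a step-change test can flag an abrupt station move but
# miss a gradual bias (e.g., slow UHI growth) of the same total magnitude.
# The 0.5 degC threshold is the figure quoted above; the detection rule is a
# stand-in, not the USHCN2 algorithm.
import numpy as np

def max_window_jump(y, window=10):
    """Largest difference between the means of adjacent windows."""
    return max(abs(y[i:i + window].mean() - y[i - window:i].mean())
               for i in range(window, len(y) - window))

rng = np.random.default_rng(0)
n = 60
stepped = 15 + rng.normal(0, 0.2, n)
stepped[n // 2:] += 0.8                                         # abrupt move to a warmer site
gradual = 15 + rng.normal(0, 0.2, n) + np.linspace(0, 0.8, n)   # same total bias, but gradual

for name, series in (("step", stepped), ("gradual", gradual)):
    jump = max_window_jump(series)
    print(f"{name}: max jump = {jump:.2f} degC, flagged = {jump > 0.5}")
```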

While I look forward to seeing the improvements that USHCN2 has to offer, I feel that what also needs to be done is a historic time-line reconstruction for each USHCN site to help locate other possible change points that the USHCN2 algorithms may miss. Such a qualitative analysis can help zero in on potential errors in the station record. NCDC’s claim that they can statistically adjust the poorly sited locations so as to add skillful trend information is contradicted by some of Roger Pielke Sr.’s papers, so it will be interesting to see whether the test cases I’m submitting are detected or not.

Here are the cites for several of Pielke’s papers on the problems with the land surface temperature record as applied to long term trends:

 
Pielke Sr., R.A., C. Davey, D. Niyogi, S. Fall, J. Steinweg-Woods, K. Hubbard, X. Lin, M. Cai, Y.-K. Lim, H. Li, J. Nielsen-Gammon, K. Gallo, R. Hale, R. Mahmood, S. Foster, R.T. McNider, and P. Blanken, 2007: Unresolved issues with the assessment of multi-decadal global land surface temperature trends. J. Geophys. Res., 112, D24S08, doi:10.1029/2006JD008229.
 
Pielke Sr., R.A., J. Nielsen-Gammon, C. Davey, J. Angel, O. Bliss, N. Doesken, M. Cai, S. Fall, D. Niyogi, K. Gallo, R. Hale, K.G. Hubbard, X. Lin, H. Li, and S. Raman, 2007: Documentation of uncertainties and biases associated with surface temperature measurement sites for climate change assessment. Bull. Amer. Meteor. Soc., 88:6, 913-928.
 
Lin, X., R.A. Pielke Sr., K.G. Hubbard, K.C. Crawford, M. A. Shafer, and T. Matsui, 2007: An examination of 1997-2007 surface layer temperature trends at two heights in Oklahoma. Geophys. Res. Letts., 34, L24705, doi:10.1029/2007GL031652.
 

That is part of the reason NCDC invited me for a visit: they feel the USHCN station surveys are a valuable tool to help them judge the health of the sites and the network, and they look forward to the completion of the surfacestations.org USHCN survey.

45 Comments
Mike
May 13, 2008 6:28 pm

This is beyond silly. Why have an algorithm (a one-word oxymoron, by the way) triggered on a 0.5 degree centigrade change threshold when direct annual or more frequent observation would be more effective in identifying circumstances that would alter the climatic observations? Why use a formula or model when direct empirical observation is so obviously superior? Laziness, perhaps?

Jerry
May 13, 2008 6:29 pm

Thanks Anthony
Well let us hope that these changes will give us “skeptics” more confidence in the numbers, although I find no reason why UAH and RSS figures should not be used to determine trends at this point.
Jerry

Vic Sage
May 13, 2008 6:44 pm

The more adjustment I see, the more I believe in MAN-MADE global warming. Wink, wink, nudge, nudge

John Bunt
May 13, 2008 7:20 pm

I guess that I am lost. The “adjusted” maps appear warmer than the unadjusted maps (more red, less blue). What is being reported now – adjusted or unadjusted? Is the goal of the project to “fix” adjustment errors? What is the basic flaw in the unadjusted maps? I have read enough to know that the sites are not conducive to giving accurate readings – implications mostly that they are too warm. Is this the source of the unadjusted information?
As you can see, I am totally lost.

ChuckC
May 13, 2008 7:25 pm

Here’s what I just don’t get – if you look at Anthony’s four plots at the top, the unadjusted trends are all cooler than the adjusted trends, pretty much everywhere in the country. Yet it’s painfully obvious to anyone who looks at any actual data (pictures, individual temperature trends) of the USHCN stations that the predominant non-climatic influences on stations are urbanization and siting (both positive biases), which should mean that the overall USHCN adjustments should make trends slightly cooler, if anything.
I think the TOBS adjustment is too large, and has the wrong sign in many cases, and the UHI/microclimate adjustments are too small. I’m betting the US temperature trend is 30-50% overstated, just as Michaels and McKitrick showed by correlation with economic indicators in their paper. And if NOAA would open their eyes they’d see it too, instead of burying their heads in computer modeling corrections instead of actual data.
REPLY: FYI the four plots at the top were provided by NCDC in the PowerPoint presentation available in the link in the article. I captured the image and presented it.

May 13, 2008 7:46 pm

USHCN2 sounds wonderful, but it’s not even in service yet and we are already hearing about adjustments. I must agree with Mike. It doesn’t take a genius to take a picture of an original installation and surrounding area and compare it to the current site structure. If the site has changed, then maybe adjustments – but not until then. If the data indicates some sort of change, send a person or team out to the site to find the reason. Any other method is foolish, and we will end up with what we have now with Jim Hansen’s handling of the USHCN.
I stress: go to the site and physically inventory what has changed and what the bias is. Then if the site can’t be repaired, move it, and document the move and the reason why (SITE COMPROMISED). We need to properly measure the data or it will sneak up on us and then we are in trouble.
I am not a scientist but this is something that sets me off. I get beat up about my demand for the use of common sense in measurement on other blogs, but it is the only policy that gets good results. You must go see what is wrong; anything else is just guessing. That is what we are doing now.
My 2 cents.
Bill

Brian D
May 13, 2008 7:46 pm

You can make your own graphs at their site with the USHCN 2 data here.
http://www.ncdc.noaa.gov/oa/climate/research/cag3/na.html

KuhnKat
May 13, 2008 8:25 pm

Anthony,
Not very precise, but my eyeball says they warmed the max temps substantially and cooled the min temps in the west and warmed them in the southeast?? Guess I shouldn’t be surprised based on the excellent work you and the other volunteers are doing!!
Makes me wonder even more what the average is covering.
Anyone planning an analysis??
REPLY: This is all preliminary stuff, and it was provided as an example to me. I agree that the adjustments look a bit lumpy, but I’ll wait for the final output before I begin a criticism.
If the stations I’ve found with undocumented moves aren’t detected, then I’ll have real cause for complaint.

Brian D
May 13, 2008 8:29 pm

Let’s play around with NCDC’s graph maker with trend lines.
Let’s use the top 2 warmest years, 1934-1998
http://climvis.ncdc.noaa.gov/tmp/graph-May1322:53:081667785644.gif
Trend is 0.01 degF/Decade
1895-1933
http://climvis.ncdc.noaa.gov/tmp/graph-May1322:58:106560974121.gif
Trend is 0.21 degF/Decade
1999-2007
http://climvis.ncdc.noaa.gov/tmp/graph-May1323:00:306210327148.gif
Trend is 0.22 degF/Decade
1895-2007
http://climvis.ncdc.noaa.gov/tmp/graph-May1323:06:482291259765.gif
Trend is 0.11 degF/Decade
Let’s try this one.
1921-1980
http://climvis.ncdc.noaa.gov/tmp/graph-May1323:20:243466186523.gif
Trend is -0.12 degF/Decade
1895-1920
http://climvis.ncdc.noaa.gov/tmp/graph-May1323:22:589812011718.gif
Trend is -0.12 degF/Decade
1981-2007
http://climvis.ncdc.noaa.gov/tmp/graph-May1323:25:454860534667.gif
Trend is 0.59 degF/Decade
Pick your cherry, just don’t choke on the pits.
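For anyone who wants to reproduce this kind of exercise offline: the per-decade trend NCDC’s plotter reports is just the slope of an ordinary least-squares fit, scaled by ten. A minimal sketch follows; the series here is synthetic, so the numbers will not match the plots linked above.

```python
# Compute a degF/decade linear trend for arbitrary start/end years, in the
# style of the NCDC plotter. Synthetic annual values, not the actual CONUS data.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1895, 2008)
annual_f = 0.011 * (years - 1895) + rng.normal(0, 0.8, len(years))   # degF, made up

def trend_per_decade(y0, y1):
    mask = (years >= y0) & (years <= y1)
    return 10 * np.polyfit(years[mask], annual_f[mask], 1)[0]

for y0, y1 in [(1934, 1998), (1895, 2007), (1981, 2007)]:
    print(f"{y0}-{y1}: {trend_per_decade(y0, y1):+.2f} degF/decade")
```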

crosspatch
May 13, 2008 11:31 pm

I would like to see a network of sensors mounted 100 feet above ground level on rural communications towers across the country. I would expect that to be far enough above the small things such as shade and air conditioner exhaust that can impact ground level sensors.

cohenite
May 13, 2008 11:34 pm

Well, at least the adjustments are not consistent with AGW, given that the max has increased more than the min.
I can remember reading on another site about how interpolations were better than direct observations because observations could be corrupted by deliberate human manipulation or laziness. At the very least you need some original observations to interpolate from; 0.5C is a big interpolation threshold; it would be interesting to know whether the majority of the exclusions were +ve or -ve anomalies, and how many below-average anomalies were replaced with a positive interpolation and vice versa.

Pierre Gosselin
May 14, 2008 12:24 am

I see a blemish to the far left, just below the equator, on the latest solar photo (13 May 17:41). Is this anything significant?

Pierre Gosselin
May 14, 2008 12:32 am

Appears to be a small sunspot forming (Cycle 23?).
http://www.spaceweather.com/

rex
May 14, 2008 1:07 am

Surely Lampasas MUST be removed from the GISS data?
REPLY: Any sensible person would think that, but sadly, no.

Bill
May 14, 2008 5:21 am

Seems somewhat medieval and a waste of money/resources to insist on using ground stations in limited locations on land to attempt to determine global temperature.
Prior to the space age it was all we had, but since 1979 we have a better, more cost-effective method. Why do NOAA and NASA, of all agencies, continue to insist on using ground-based, limited-coverage, easily biased systems in preference to the available satellite data (three separate, and I assume independent, sources)?
Only reason I can think is that the ground data is more amenable to manipulation to allow them to provide the data they want.

Wondering Aloud
May 14, 2008 5:50 am

I agree with John Bunt, given what you have demonstrated with USHCN, an adjustment that shows greater warming than the actual measurements is unlikely to be correct.
I think they hope no one will notice that they have introduced an unsupported fudge factor.

Bob_L
May 14, 2008 6:01 am

I agree with Bill, you have to maintain the quality by having a look at what is happening to the sites.
And Crosspatch has a great idea: put a fiberglass beam 100′ up every cell tower and mount a sensor that takes hourly readings and phones them in once a day to a central location. Each would have a known altitude above sea level to account for any problems that causes.

Gary
May 14, 2008 6:08 am

Anthony,
Besides the blind test of your two stations with moves, there ought to be a couple more with no moves to check for false positive results. Actually, a whole suite of stations with known provenance and a variety of changes really is required to test the proposal adequately. There probably are a couple of dozen suitable candidates already surveyed in SurfaceStations.
A step threshold sensitivity of ±0.5°C is about the accuracy of the old min-max mercury thermometers, and I’d really like to see it proven that the new adjustments will be even this good. There’s been some research in regime change analysis in ecosystem studies to detect environmental changes soon after they happen so that management practices can be altered effectively. It would be appropriate for NCDC to test their model against lots of different algorithms that have a track record.

Patrick Hadley
May 14, 2008 6:49 am

Off topic:
The April GISTEMP anomaly has just been released. The figure is 0.41. This is lower than their March figure of 0.60 (which had previously been announced as 0.67).

May 14, 2008 6:56 am

I just looked at the chart maker for USHCN 2 and found it quite interesting. It seems that, on average in the U.S., January 2006 was (eyeball) 3.5 to nearly 4.0 degrees warmer than January 1998, with a 0.84°F/decade trend.
January 1996 – 2008 Data Values:
January 2008: 30.70 degF Rank: 4
January 2007: 31.45 degF Rank: 5
January 2006: 39.52 degF Rank: 13
January 2005: 33.45 degF Rank: 8
January 2004: 30.45 degF Rank: 2
January 2003: 33.00 degF Rank: 7
January 2002: 35.02 degF Rank: 11
January 2001: 31.63 degF Rank: 6
January 2000: 34.10 degF Rank: 9
January 1999: 34.28 degF Rank: 10
January 1998: 35.52 degF Rank: 12
January 1997: 30.48 degF Rank: 3
January 1996: 30.22 degF Rank: 1
January 1977 – 1933 Average = NaN degF
January 1996 – 2008 Trend = 0.84 degF / Decade
It will not display the graph but sure makes the past decade a wonder. I think there are some real adjustments being made; I am glad these are just preliminary findings.
Bill Derryberry

Bill Illis
May 14, 2008 6:58 am

The presentation shows that the TOBs adjustment will add 0.2C to the trend (from 1920 on for both the Maximum and Minimum temperature).
The new Homogenization adjustment will add a further 0.45C to the Maximum temperature and 0.0C to the Minimum temperature trend (from roughly 1900 on).
The total adjustments will add approximately 0.65C to the Maximum raw data trend and 0.2C to the Minimum temperature trend.
I note the previous charts I have seen had only adjusted the average trend by 0.55F up to 1999, so these new adjustments add a further 0.25C to the previous adjustments (assuming I can do the math right when averaging the Maximum and Minimum).
Given there is no increase in the adjusted average US temperature data since 1934, please show us the raw data as well (given the trend is artificially increased by approximately 0.5C to 0.6C.)
And Anthony, please E-mail NOAA and ask them to provide the average temperature data (and not obscure everything with separate Maximum and Minimum trends which are supposed to be averaged out.)

May 14, 2008 7:00 am

I always have a problem with anyone interpolating such an important metric. But if the database has been corrupted, as suggested by all the discoveries made by Tony and his team, something has to be done. An analysis of past data can isolate the “outliers” and/or generated figures, but application of any formula to correct these figures must be open and above board.
Given the importance of this task and the fact many eyes will be watching (thanks to Tony et al), perhaps we might just see something come out of this after all. AS LONG AS HANSEN AND HIS COHORTS DON’T HAVE THEIR DIRTY FINGERS IN THE MIX. I trust him like I trust his good buddy, Goofy Gore!
We should also note that if they raise past temperature and trends with faulty numbers, the current cooling will look even more dramatic. On the flip side, if they lower them too much, the current cooling will look rather plebeian.
Jack Koenig, Editor
The Mysterious Climate Project
http://www.climateclinic.com

Bill Illis
May 14, 2008 7:04 am

Sorry, I did the math wrong, the new adjustments add 0.45C to the raw data trend (not 0.5C to 0.6C as I wrote above) but still up 0.15C from the previous adjustment regime.
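For what it’s worth, the corrected figure follows from a simple average of the max and min adjustments quoted in the comment above (assuming the average-temperature adjustment is just that mean):

```python
# Rough check of the averaging above: a ~0.65 degC max-trend adjustment and a
# ~0.2 degC min-trend adjustment average to roughly the 0.45 degC figure quoted.
max_adj, min_adj = 0.65, 0.20
print((max_adj + min_adj) / 2)   # 0.425 degC
```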

Retired Engineer
May 14, 2008 7:44 am

I can understand site changes, concrete pads and the like, but how do we get ‘undocumented’ station moves? Somebody had to give the order, fork over a few bucks, etc. Unless these things have grown legs, there should be some kind of record. After all, the taxpayers eventually foot the bill, and bureaucracies thrive on paperwork.
REPLY: NWS and NCDC are like families with kids. They have the same parent (NOAA) but they don’t always cooperate fully.

Philip_B
May 14, 2008 8:07 am

Any statistician will tell you 10 samples is a huge improvement over 1 sample. 100 samples is a modest improvement over 10 samples. 1000 samples has no real significance over 100 samples.
The point being we don’t need 1200 stations of questionable quality to tell us the temperature trend, we need a 100 good stations. The other 1100 stations will just tell us how badly they are corrupting the trend shown by the 100 good stations.
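Philip_B’s point rests on how the standard error of a mean shrinks with sample size. A quick numeric check follows; it assumes independent, identically distributed samples, which real stations (spatially correlated and of varying quality) are not, so it illustrates the diminishing returns rather than settling how many stations are actually needed.

```python
# Standard error of a sample mean falls as 1/sqrt(n): going from 100 to 1000
# independent samples only cuts the uncertainty by about a factor of 3.
# Assumes i.i.d. samples, which real climate stations are not.
import math

sigma = 1.0   # arbitrary per-station standard deviation
for n in (1, 10, 100, 1000):
    print(f"n = {n:4d}: standard error = {sigma / math.sqrt(n):.3f}")
```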

rex
May 14, 2008 8:19 am
May 14, 2008 9:54 am

rex said: “This might keep J Hansen on the ball for May
http://discover.itsc.uah.edu/amsutemps/ go to 600mb graph and Gisstemp 0.32C with sea data and the correct 250 Km radius. 0.http://data.giss.nasa.gov/cgi-bin/gistemp/do_nmap.py?year_last=2008&month_last=4&sat=4&sst=1&type=anoms&mean_gen=04&year
Rex, your URLs don’t seem to work.
Jack Koenig, Editor
The Mysterious Climate Project
http://www.climateclinic.com

Sam
May 14, 2008 10:00 am

I don’t see how the credibility of USHCN data and metrics is in any way enhanced by “adjusting” the raw data. Why is it these adjustments always seem to go in one direction, i.e., adding to any supposed warming trends?
If data is for some reason incorrect or contaminated in some manner, wouldn’t the best approach be to throw it out and not use it? Why not determine which rural stations are empirically sound and exclude data from any site that is not sound on its own merits?
This really seems very ridiculous to me and just moves from one problem situation to another.

An Inquirer
May 14, 2008 10:32 am

Reply to Philip_B (08:07:02):
“Any statistician will tell you 10 samples is a huge improvement over 1 sample. 100 samples is a modest improvement over 10 samples. 1000 samples has no real significance over 100 samples.
. . . we don’t need 1200 stations . . .”
I believe your point would be valid if we were talking about samples from the same population. However, the station in Lampasas may be from a different population than the one at Detroit Lakes. Yes, they are both on the same planet, but climate trends in one place may not match climate trends in another place. I have not seen a good statistical discussion of how many “samples” are needed to determine global climate direction. Of course, that raises the question whether it is legitimate to talk about THE global climate when climate changes might be regional.

SteveSadlov
May 14, 2008 10:33 am

Wow. Given that a statistically significant portion of the span of 1896 – 2006 data would include the 1930s (and earlier) … seeing these adjustments, it looks like a bit of splainin’ that Dr. Hansen should be called upon to do.

crosspatch
May 14, 2008 11:52 am

I wanted to add that in thinking about my suggestion that a network be built on communications towers, I put a lot of thought into why it is done as it is currently. I believe the methodology surrounding these observations got its start in the late 19th century. At that time there were no radio towers. The primary concern would have been to obtain a reliable stream of data. That means the data measurement had to be recorded by a person who could gain access to it safely in any weather. If an observer of the 1890’s were required to climb a telegraph pole in freezing rain, I have a feeling they might have missed some data points or possibly made up data for insertion into the record. So you had to make the placement such that it was reasonably accessible to the observer under all conditions.
Today we have electronic devices that can measure and transmit data either by wire or wireless communications. Those data can now be automatically recorded and stored by remote electronic devices and reviewed at the leisure of the observer. There is no longer any requirement that the data be accessible every day. Access would be required only in case of malfunction and having a pair of measuring devices in each housing would allow a visit to be scheduled.
So basically, being able to walk to the thermometer is no longer a requirement today and taking the measurement 100 feet into the troposphere might give a better idea of “climate” vs. local micro-climate.

Patrick Hadley
May 14, 2008 12:24 pm

Perhaps Philip B can explain why opinion pollsters like to get a sample of over 1000 if it has no more real significance than one of 100.

May 14, 2008 1:46 pm

REPLY: I already posted but thanks, see main page.
Off topic, but important.
THE HEARTLAND INSTITUTE
19 South LaSalle Street #903
Chicago, IL 60603
phone 312/377-4000 · fax 312/377-5000
http://www.heartland.org
MEDIA ADVISORY: Polar Bear Decision Defies Scientific Evidence
Author: Dan Miller
Published by: The Heartland Institute
Published in: News Releases
Publication date: May 2008
(Chicago, IL — May 14, 2008) The U.S. Department of the Interior decided today to list polar bears as a “threatened” species under the Endangered Species Act. The decision was based on predictions that future global warming will negatively affect polar bear populations.
Experts contacted by The Heartland Institute note global temperatures have not risen in the past 10 years, and scientists with the United Nations’ Intergovernmental Panel on Climate Change predict temperatures will cool for at least the next 10 years. Moreover, polar bear populations have been increasing during recent decades.
The expert statements below can be quoted directly, or the experts can be contacted for additional information at the telephone numbers and email addresses provided below.
“The U.S. Fish and Wildlife Service has just taken its place alongside Miss Cleo and the Psychic Friends Network in terms of a complete divorce from scientific reality. FWS apparently believes it has the clairvoyance to forecast sharp declines in polar bear populations even though temperatures for most of the past 10,000 years have been warmer than today and polar bears have flourished. Moreover, global polar bear populations have been rising for decades, even as temperatures have recovered from the end of the Little Ice Age 100 years ago.
“The only plausible basis for ruling polar bears as threatened is blind faith in alarmist computer models that have been no more accurate than Chicken Little’s claim that the sky is falling. Compare the alarmist computer models to the real world. Global temperatures have not risen one bit during the past decade. Before that, for 30 of the preceding 50 years, global temperatures fell. And now even IPCC scientists are predicting global temperatures will cool for at least the next decade.
“Only by completely ignoring real-world scientific evidence and jumping head-first into the world of special-interest group propaganda can one justify listing polar bears as a threatened species.”
James M. Taylor
Senior Fellow for Environment Policy
The Heartland Institute
taylor@heartland.org
941/776-5690
“This decision represents a conflict between politics and science. Polar bear populations have been increasing in recent decades, so there is no current problem. The concern is based on forecasts. However, the government forecasts used to support the decision violate basic scientific principles, and thus provide no scientific support for the listing.
“There are no scientific forecasts that would suggest a reduction in polar bear populations. It would be improper, then, to designate polar bears as endangered. Application of proper forecasting methods suggests a small short-term rise in polar bear populations followed by a leveling off. We provide full disclosure to support these statements at publicpolicyforecasting.com and at theclimatebet.com. In the long term, science will prevail.”
Scott Armstrong
Professor
Wharton School of the University of Pennsylvania
armstrong@wharton.upenn.edu
215/898-5087
“Canadians, who manage two-thirds of all polar bear populations, just reviewed their listing status and decided not to up-list the bear to a more serious status. Activists are attempting to politically interfere and change that reasonable and informed decision so today’s U.S. listing would not look extreme, unwarranted, and political, which it is.
“The listing is lunacy because carbon dioxide emissions–the real target of activists–are surging worldwide, and unless all other countries cut their carbon emissions, atmospheric concentrations will continue to rise even if the entire West shuts down its emissions. If the United States were to go 100 percent CO2 emissions-free, just the projected growth in China’s and India’s emissions would replace U.S. ‘savings’ in about a decade.
“The self-inflicted economic wound of making the use of carbon fuels more expensive in the United States than in China will merely transfer carbon emissions and jobs to that regime, which already has one of the worst environmental records in the world, and will deploy the profits toward the continued expansion of its own network of uniquely dirty, coal-fired power stations, to the detriment of the environment, without any benefit to the climate or polar bears whatsoever.”
Robert Ferguson
President
Science and Public Policy Institute
bferguson@sppinstitute.org
703/753-7846

Editor
May 14, 2008 6:41 pm

Question for Patrick Hadley… where did you find the April GISS temp? I use http://data.giss.nasa.gov/gistemp/tabledata/GLB.Ts+dSST.txt and it still only goes through March. While we’re at it, are there any other “pre-announcement” URLs for the other sites? I’m referring to the main data URLs…
Hadley CRU http://www.cru.uea.ac.uk/cru/data/temperature/hadcrut3gl.txt
RSS ftp://ftp.ssmi.com/msu/monthly_time_series/rss_monthly_msu_amsu_channel_tlt_anomalies_land_and_ocean_v03_1.txt
and UAH http://vortex.nsstc.uah.edu/public/msu/t2lt/tltglhmam_5.2

Editor
May 14, 2008 7:33 pm

A suggestion for our own USHCN2 project along the lines of an open source project. To quote Eric Raymond (a Linux developer), “given enough eyeballs, all bugs are shallow”. Another relevant quote is an old Russian proverb that Ronald Reagan used a lot during nuclear arms negotiations with Russia… “Trust, but verify”.
– start with 50 volunteers, each of whom is assigned a couple of dozen stations. This should cover all the stations.
– each volunteer will plot the graph for each of their stations, and note which, if any, of them seems to have a step change
– any stations that are flagged at this stage will be looked at by a second group to confirm that a step change happened
– the second group will issue a list of suspect stations
I’m not sure exactly what to do after that. I suggest a two-step process. Submit the list and documentation to NCDC and/or GISS, via Anthony. If NCDC/GISS don’t act on it, they’ll hand us a major PR coup. We can go public with the list and show people that “the top ten all time hottest years in the USA” are the result of garbage data.

Editor
May 14, 2008 10:39 pm

Never mind that question about the GISS temperature. I forgot to clear my cache, and was picking up last month’s version. So much for my geekdom status.

steven mosher
May 15, 2008 6:02 am

giss is in. also note that march was adjusted downward.

An Inquirer
May 15, 2008 8:29 am

Reply to Patrick Hadley (12:24:01) :
“Perhaps Philip B can explain why opinion pollsters like to get a sample of over 1000 if it has no more real significance than one of 100.”
I will take a crack at it. The first reason is to enable analysts to look at subgroups within the general population. If we take a sample of 1000, we still want to get over 100 samples (for example) from each race, and if we want to look at how white women look at an issue versus black men, then we need a valid sample by race and gender. You can see how we quickly get over 1000 in required total sample size if the analysis for each subgroup is to be valid. The BLS surveys 30,000 households to get valid statistics for each state, for each race, for each gender, for each metropolitan area.
The second reason is that a 96% confidence level is not good enough to make a projection on national T.V. election night. You need to be well over 99%.
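A quick numeric illustration of both points: under the usual normal approximation for an estimated proportion, the margin of error shrinks with the square root of the sample size and widens with the confidence level, which is why pollsters wanting subgroup breakdowns and high confidence need samples well over 1000. (My own sketch; the z-values for 95% and 99% are hard-coded.)

```python
# Margin of error for an estimated proportion p at sample size n, using the
# normal approximation. z = 1.96 for ~95% confidence, 2.576 for ~99%.
import math

def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 1000):
    for label, z in (("95%", 1.96), ("99%", 2.576)):
        print(f"n = {n:4d}, {label}: +/- {100 * margin_of_error(n, z=z):.1f} points")
```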

Pamela Gray
May 15, 2008 6:53 pm

The private weather station in Joseph, Oregon records that the current unseasonably (NOAA description, not mine) high temperature was still 3 degrees cooler than this same time last year. The NOAA prediction was 10 degrees hotter than it actually was (86 versus 76).
Are these stations that form the bulk of the data private? I don’t think the Joseph station is on the grid.

Evan Jones
Editor
May 15, 2008 8:38 pm

What the heck is this?
Obviously they are making a hash out of SHAP (and probably FILNET, as well).
It is an outrage! I shall tell everybody!

Evan Jones
Editor
May 15, 2008 8:46 pm

What is the basic flaw in the unadjusted maps?
The basic flaw in the unadjusted maps is that they show TOO MUCH WARMING.
Why do you ask? #B^U

Evan Jones
Editor
May 15, 2008 10:13 pm

Sorry, I did the math wrong, the new adjustments add 0.45C to the raw data trend (not 0.5C to 0.6C as I wrote above) but still up 0.15C from the previous adjustment regime.
Christ. Half again worse than their previous outrage. I picked over those adjustments step by step and they were a scandal to the jaybirds. Looks as if I am going to have to do the same for this USHCN2 adjustment outrage.
I have been to the NOAA page. They have seen the error of their ways. It was easy to pick apart USHCN1. It was ridiculous on the face of it. But they are much cleverer this time. They graphed each item of the overall adjustment for anyone to see. It was one of the most-cited graphs by the skeptics. And for good reason.
But there ain’t no graphs on THIS page. Just a phony-baloney bromide job gussied up to sound reasonable.
I found no obvious link to any powerpoint presentation. If it’s there, they are hiding it very well. I can’t find it.
Rev, would you PLEASE post the link to the powerpoint.
I need to find the same adjustment data for USHCN2 that they provided for version 1. I need to pin down the adjustments piece by piece the same way they had it in USHCN1. Even those maps only have the overall, not the piece-by-piece. You can’t pin them on it alone – except in tandem with USHCN1, which, at the least, I intend to do.
“Trust, but verify”.
Not possible.
The more I think about this the madder I get.
This is not going to go unsung. I swear it.
REPLY: The link to the powerpoint is there in the post as “watts-visit”
but here is the URL:
http://wattsupwiththat.files.wordpress.com/2008/05/watts-visit.ppt

Patrick Hadley
May 16, 2008 6:48 am

I am grateful for the reply from “An Inquirer”. I agree, I think, that the size of the sample needed will vary greatly depending on the task.
Imagine you were asked (in the days before planes and satellites) to produce a relief map of the United States – how many readings would you need to take? You would not need to measure the height of absolutely every square inch, but just make a sample. But, on the other hand, a sample of just 100 measures of height above sea-level would not give a decent map. I would guess that you would need a sample containing many millions of measurements before a detailed map could be produced.
Perhaps you were asked to work out the average height above sea level of the USA. In how many places would you need to take data in order to come up with a reasonable estimate? To my simple mind that problem seems very similar to trying to find the average temperature – except of course that, unlike height, temperature is constantly changing.

Evan Jones
Editor
May 16, 2008 9:04 am

Thanks very much for the link. They’re doing a snow job on you, you know. And they’re being very subtle about it. It all leads in the opposite direction of the final bottom line.
I strongly suspect that they are justifying the “good” stations and the “bad” not by adjusting the bad stations down, but by adjusting the good stations up. That is the only way I can explain the bottom line. (They are also burying it under offset rather than absolute values.)
They also use USHCN2 comparisons without noting the change from the USHCN1 procedure. They come up with a much higher upward adjustment than last time and they do not show how much of the adjustment comes from each step in the procedure. That makes me very, very suspicious.
I am continuing the analysis. But so far, I don’t trust the historical network. And if it weren’t for RSS and UAH, I wouldn’t trust them with the current tracking either.

August 11, 2008 6:01 am

[…] (the national version is available but there is no ready access to the individual stations yet). As Anthony Watts commented after his visit to NCDC at their invitation to discuss his efforts to document siting issues: the new change point […]