On ‘denying’ Hockey Sticks, USHCN data, and all that – part 2

In part one of this essay which you can see here, I got quite a lot of feedback on both sides of the climate debate. Some people thought that I was spot on with my criticisms, while others thought I had sold my soul to the devil of climate change. It is an interesting life when I am accused of being in cahoots with both “big oil” and “big climate” at the same time. That aside, in this part of the essay I am going to focus on areas of agreement and disagreement and propose a solution.

In part one of the essay we focused on the methodology that created a hockey-stick-style graph from missing data. Because the missing data caused a faulty spike at the end, Steve McIntyre commented, suggesting that it was more like the Marcott hockey stick than Mann’s:

Steve McIntyre says:

Anthony, it looks to me like Goddard’s artifact is almost exactly equivalent in methodology to Marcott’s artifact spike – this is a much more exact comparison than Mann. Marcott’s artifact also arose from data drop-out.

However, rather than conceding the criticism, Marcott et al have failed to issue a corrigendum and their result has been widely cited.

In retrospect, I believe McIntyre is right in making that comparison. Data dropout is the central issue here, and when it occurs it can create all sorts of statistical abnormalities.

Despite some spirited claims in comments in part one about how I’m “ignoring the central issue”, I don’t dispute that data is missing from many stations; I never have.

It is something that has been known about for years and is actually expected in the messy data-gathering process of volunteer observers, electronic systems that don’t always report, and equipment and/or sensor failures. In fact, there is likely no weather network in existence that has perfect data with none missing. Even the new U.S. Climate Reference Network, designed to be state-of-the-art and as close to perfect as possible, has a small amount of missing data due to failures of uplinks or other electronic issues, seen in red:


Source: http://www.ncdc.noaa.gov/crn/newdaychecklist?yyyymmdd=20140101&tref=LST&format=web&sort_by=slv

What is in dispute is the methodology, and the methodology, as McIntyre observed, created a false “hockey stick” shape much like we saw in the Marcott affair:


After McIntyre corrected the methodology used by Marcott, dealing with faulty and missing data, the result looked like this:



McIntyre pointed this out in comments in part 1:

In Marcott’s case, because he took anomalies at 6000BP and there were only a few modern series, his results were an artifact – a phenomenon that is all too common in Team climate science.

So, clearly, the correction McIntyre applied to Marcott’s data made the result better, i.e. more representative of reality.

That’s the same sort of issue that we saw in Goddard’s plot; data was thinning near the endpoint of the present.
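The endpoint artifact is easy to reproduce. Here is a minimal sketch (all station values invented for illustration) of how a simple average of absolute temperatures spikes when a cool station stops reporting, even though no station actually warmed:

```python
# Sketch: how data dropout fakes an endpoint spike when averaging absolutes.
# All numbers are invented for illustration.

years = list(range(2000, 2015))

# Two stations with flat climates: one warm desert site, one cool mountain site.
warm = {y: 20.0 for y in years}             # reports every year
cool = {y: 5.0 for y in years if y < 2012}  # drops out after 2011

averages = []
for y in years:
    reports = [s[y] for s in (warm, cool) if y in s]
    averages.append(sum(reports) / len(reports))

# Through 2011 the average is 12.5; once the cool station drops out it
# jumps to 20.0 -- a 7.5 degree "spike" with no real warming at all.
print(averages[0], averages[-1])
```

That jump is purely a station-composition effect, the same mechanism McIntyre describes for the Marcott spike.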


[ Zeke has more on that here: http://rankexploits.com/musings/2014/how-not-to-calculate-temperatures-part-3/ ]

While I would like nothing better than to be able to use raw surface temperature data in its unadulterated “pure” form to derive a national temperature and to chart the climate history of the United States (and the world), the fact is that the national USHCN/co-op network and the GHCN are in such bad shape, and have become so heterogeneous, that this is no longer possible with the raw data set as a whole.

These surface networks have had so many changes over time (stations moved, times of observation changed, equipment changed, maintenance problems, encroachment by micro-site biases and/or UHI) that using the raw data for all stations on a national or even global scale gives you a result that is no longer representative of the actual measurements; there is simply too much polluted data.

A good example of polluted data can be found at the Las Vegas, Nevada USHCN station:


Here, growth of the city and its population has resulted in a clear and undeniable UHI signal at night, gaining 10°F since measurements began. It is studied and acknowledged by the “sustainability” department of the city of Las Vegas, as seen in this document. Dr. Roy Spencer, in his blog post, called it “the poster child for UHI” and wonders why NOAA’s adjustments haven’t removed this problem. It is a valid and compelling question. But at the same time, we know the raw data from Las Vegas has been polluted by the UHI signal, so is it representative in a national or global climate presentation?


The same trend is not visible in the daytime Tmax temperature; in fact, it appears there has been a slight downward trend since the late 1930s and early 1940s:


Source for data: NOAA/NWS Las Vegas


The question then becomes: Would it be okay to use this raw temperature data from Las Vegas without any adjustments to correct for the obvious pollution by UHI?

From my perspective, the thermometer at Las Vegas has done its job faithfully. It has recorded what actually occurred as the city has grown. It has no inherent bias; the change in its surroundings has biased it. The issue arises when you start using stations like this to search for the posited climate signal from global warming. Since the nighttime temperature increase at Las Vegas is almost an order of magnitude larger than the signal posited to exist from carbon dioxide forcing, that AGW signal would clearly be swamped by the UHI signal. How would you find it? If I were searching for a climate signal by examining stations, rather than throwing blind automated adjustments at them, I would most certainly remove Las Vegas from the mix: its raw data is unreliable because it has been badly, and likely irreparably, polluted by UHI.

Now before you get upset and claim that I don’t want to use raw data or as some call it “untampered” or unadjusted data, let me say nothing could be further from the truth. The raw data represents the actual measurements; anything else that has been adjusted is not fully representative of the measurement reality no matter how well-intentioned, accurate, or detailed those adjustments are.

But, at the same time, how do you separate all the other biases that have not been dealt with (like Las Vegas) so you don’t end up creating national temperature averages with imperfect raw data?

That, my friends, is the $64,000 question.

To answer that question, we have a demonstration. Over at the blackboard blog, Zeke has plotted something that I believe demonstrates the problem.

Zeke writes:

There is a very simple way to show that Goddard’s approach can produce bogus outcomes. Let’s apply it to the entire world’s land area, instead of just the U.S., using GHCN monthly:

Averaged Absolutes

Egads! It appears that the world’s land has warmed 2C over the past century! It’s worse than we thought!

Or we could use spatial weighting and anomalies:


Gridded Anomalies

Now, I wonder which of these is correct? Goddard keeps insisting that it’s the first, and that evil anomalies just serve to manipulate the data to show warming. But so it goes.

Zeke wonders which is “correct”. Is it Goddard’s method of plotting all the “pure” raw data, or is it Zeke’s method of using gridded anomalies?

My answer is: neither of them is absolutely correct.

Why, you ask?

It is because both contain stations like Las Vegas that have been compromised by changes in their environment, the station itself, the sensors, the maintenance, time-of-observation changes, data loss, etc. In both cases we are plotting a huge mishmash of station biases that have not been dealt with.
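To see why the two methods give such different answers in the first place, consider a toy example (all numbers invented): two stations sharing the same real trend, with the cooler one dropping out partway through. Averaging absolutes produces a spurious jump; averaging each station’s anomalies against its own baseline recovers only the real trend. Note that neither approach touches the station-quality biases discussed here.

```python
# Toy comparison of "averaged absolutes" vs. anomalies under station dropout.
# All numbers are invented for illustration.

years = list(range(2000, 2015))
baseline = range(2000, 2010)  # each station's own reference period

# A warm station and a cool station, both with the same true +0.1 C/yr trend;
# the cool station stops reporting after 2011.
warm = {y: 20.0 + 0.1 * (y - 2000) for y in years}
cool = {y: 5.0 + 0.1 * (y - 2000) for y in years if y < 2012}

def anomaly_series(station):
    """Subtract the station's own baseline-period mean from every value."""
    base = sum(station[y] for y in baseline) / len(baseline)
    return {y: t - base for y, t in station.items()}

anoms = [anomaly_series(s) for s in (warm, cool)]

def network_mean(series_list, y):
    vals = [s[y] for s in series_list if y in s]
    return sum(vals) / len(vals)

absolute_2011 = network_mean([warm, cool], 2011)  # (21.1 + 6.1) / 2 = 13.6
absolute_2012 = network_mean([warm, cool], 2012)  # 21.2: cool station gone
anomaly_2011 = network_mean(anoms, 2011)
anomaly_2012 = network_mean(anoms, 2012)

# Absolutes jump 7.6 C when the cool station drops out;
# anomalies step up by only the true 0.1 C trend.
print(absolute_2012 - absolute_2011)
print(anomaly_2012 - anomaly_2011)
```

This is the mechanics behind Zeke’s two plots: anomalies plus gridding remove the composition artifact, but they do nothing about a polluted station’s own trend.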

NOAA tries to deal with these issues, but their effort falls short. Part of the reason it falls short is that they are trying to keep every bit of data and adjust it in an attempt to make it useful, and to me that is misguided, as some data is just beyond salvage.

In most cases, the cure from NOAA is worse than the disease, which is why we see things like the past being cooled.

Here is another plot from Zeke just for the USHCN, which shows Goddard’s method “Averaged Absolutes” and the NOAA method of “Gridded Anomalies”:

Goddard and NCDC methods 1895-2013

[note: the Excel code I posted was incorrect for this graph, and was for another graph Zeke produced, so it was removed, apologies – Anthony]

Many people claim that the “Gridded Anomalies” method cools the past, and increases the trend, and in this case they’d be right. There is no denying that.

At the same time, there is no denying that the entire CONUS USHCN raw data set contains all sorts of imperfections: biases, UHI, data dropouts, and a whole host of problems that remain uncorrected. It is a Catch-22; on one hand the raw data has issues, while on the other, at a bare minimum, some sort of infilling and gridding is needed to produce a representative signal for the CONUS, but in producing that, new biases and uncertainties are introduced.

There is no magic bullet that always hits the bullseye.

I’ve known and studied this for years; it isn’t a new revelation. The key point here is that both Goddard and Zeke (and by extension BEST and NOAA) are trying to use the ENTIRE USHCN dataset, warts and all, to derive a national average temperature. Neither method produces a totally accurate representation of the national temperature average. Keep that thought.

While both methods have flaws, Goddard raised one good and important point: the rate of data dropout in USHCN is increasing.

When data gets lost, it is infilled with data from nearby stations, and that’s an acceptable procedure, up to a point. The question is: have we reached a point of no confidence in the data because too much has been lost?

John Goetz asked the same question as Goddard in 2008 at Climate Audit:

How much Estimation is too much Estimation?

It is still an open question, and one without a good answer yet.

But at the same time as we are seeing more and more data loss, Goddard is claiming “fabrication” of lost temperature data in the final product while also advocating using the raw surface temperature data for a national average. From my perspective, you can’t argue for both. If the raw data is becoming less reliable due to data loss, how can we use it by itself to reliably produce a national temperature average?

Clearly, with the mess the USHCN and GHCN are in, the raw data won’t accurately produce a result representative of the true climate-change signal of the nation, because it is so horribly polluted with so many other biases. There are easily hundreds of stations in the USHCN that have been compromised the way Las Vegas has, making the raw data, as a whole, mostly useless.

So in summary:

Goddard is right to point out that there is increasing data loss in USHCN and it is being increasingly infilled with data from surrounding stations. While this is not a new finding, it is important to keep tabs on. He’s brought it to the forefront again, and for that I thank him.

Goddard is wrong to say we can use all the raw data to reliably produce a national average temperature because the same data is increasingly lossy and is also full of other biases that are not dealt with. [ added: His method allows for biases to enter that are mostly about station composition, and less about infilling see this post from Zeke]

As a side note, claiming “fabrication” in a nefarious way doesn’t help, and generally turns people off to open debate on the issue, because the process of infilling missing data wasn’t designed with any nefarious motive; it was designed to make the monthly data usable when small data dropouts occur, as we discussed in part 1 when we showed the B-91 form with missing data from a volunteer observer. Claiming “fabrication” only puts up walls, and frankly, if we are going to enact any change to how things get done in climate data, new walls won’t help us.

Biases are common in the U.S. surface temperature network

This is why NOAA/NCDC spends so much time applying infills and adjustments; the surface temperature record is a heterogeneous mess. But in my view, this process of trying to save messed-up data is misguided and counter-productive, and it causes heated arguments (like the one we are experiencing now) over the validity of such infills and adjustments, especially when many of them seem to operate counter-intuitively.

As seen in the map below, there are thousands of temperature stations in the U.S. co-op and USHCN networks. By our Surface Stations survey, at least 80% of the USHCN is compromised by micro-site issues in some way, and by extension, that large-sample survey of the USHCN subset should translate to the larger co-op network.


When data drops out of USHCN stations, data from nearby neighbor stations is infilled to make up the missing values, but when 80% or more of your network is compromised by micro-site issues, chances are all you are doing is infilling missing data with compromised data. I explained this problem years ago using a water-bowl analogy, showing how the true temperature signal gets “muddy” when data from surrounding stations is used to infill missing data:


The real problem is that the increasing amount of data dropout in USHCN (and in the co-op network and GHCN) may be reaching a point where it adds a majority of biased signal from nearby problematic stations. Imagine a well-sited, long-period rural station near Las Vegas that has its missing data infilled using Las Vegas data; you know the result will be warmer when that happens.
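A toy sketch of that infilling problem (station values and the infill rule are invented for illustration; NOAA’s actual procedures are more elaborate than a simple neighbor mean):

```python
# Sketch: infilling a rural station's gap from neighbors, one of which
# carries a UHI bias. All values and the simple mean-of-neighbors rule
# are invented for illustration.

rural = {2010: 14.0, 2011: None, 2012: 14.1}   # 2011 is missing

# Neighboring stations for the missing year: two rural, one UHI-polluted city.
neighbors_2011 = {"rural_a": 14.0, "rural_b": 14.1, "vegas": 17.5}

infilled = sum(neighbors_2011.values()) / len(neighbors_2011)
rural[2011] = infilled

# The infilled value (15.2) runs well above the station's own record
# simply because one neighbor is compromised.
print(round(infilled, 2))
```

When most of the available neighbors are themselves compromised, every gap filled this way imports some of their bias into a station that never measured it.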

So, what is the solution?

How do we get an accurate surface temperature for the United States (and the world) when the raw data is full of uncorrected biases and the adjusted data does little more than smear those station biases around when infilling occurs? Some of our friends say a barrage of statistical fixes is all that is needed, but there is also another, simpler way.

Dr. Eric Steig, at “Real Climate”, in a response to a comment about Zeke Hausfather’s 2013 paper on UHI shows us a way.

Real Climate comment from Eric Steig (response at bottom)

We did something similar (but even simpler) when it was being insinuated that the temperature trends were suspect, back when all those UEA emails were stolen. One only needs about 30 records, globally spaced, to get the global temperature history. This is because there is a spatial scale (roughly a Rossby radius) over which temperatures are going to be highly correlated for fundamental reasons of atmospheric dynamics.

For those who don’t know what the Rossby radius is, see this definition.

Steig claims 30 station records are all that is needed globally. In a comment some years ago (now probably lost in the vastness of the Internet), Dr. Gavin Schmidt said something similar: that about “50 stations” would be all that is needed.

[UPDATE: Commenter Johan finds what may be the quote:

I did find this Gavin Schmidt quote:

“Global weather services gather far more data than we need. To get the structure of the monthly or yearly anomalies over the United States, for example, you’d just need a handful of stations, but there are actually some 1,100 of them. You could throw out 50 percent of the station data or more, and you’d get basically the same answers”

http://earthobservatory.nasa.gov/Features/Interviews/schmidt_20100122.php ]

So if that is the case, and one of the most prominent climate researchers on the planet (and his associate) says we need only somewhere between 30 and 50 stations globally, why is NOAA spending all this time trying to salvage bad data from hundreds if not thousands of stations in the USHCN, and also in the GHCN?

It is a question nobody at NOAA has ever really been able to answer for me. It is certainly important to keep the records from all these stations for local climate purposes, but why try to keep them in the national and global datasets when Real Climate scientists say that just a few dozen good stations will do just fine?
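The “few dozen stations” claim is easy to sanity-check with synthetic data. In the sketch below (signal and noise levels invented), the shared regional signal stands in for the spatial correlation Steig describes, and the local station noise mostly averages out even in a 30-station subset:

```python
import random

# Sketch: if stations share a strongly correlated regional signal, a small
# random subset recovers nearly the same history as the full network.
# Signal and noise levels are invented for illustration.

random.seed(42)
years = range(100)
regional = [0.01 * y for y in years]   # shared trend: +1.0 over the record

# 1000 stations = shared regional signal + independent local station noise.
stations = [[t + random.gauss(0, 0.5) for t in regional] for _ in range(1000)]

def network_mean(subset):
    return [sum(s[y] for s in subset) / len(subset) for y in years]

full = network_mean(stations)
small = network_mean(random.sample(stations, 30))

# Largest yearly disagreement between 30 stations and 1000:
worst = max(abs(a - b) for a, b in zip(full, small))
print(worst)  # a small fraction of the 1.0 trend
```

Of course this only holds if the subset stations are unbiased; a handful of Las Vegases among the 30 would dominate the answer, which is why station quality matters more than station count.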

There is precedent for this: the U.S. Climate Reference Network, which has just a fraction of the stations in the USHCN and the co-op network:


NOAA/NCDC is able to derive a national temperature average from these few stations just fine, and without the need for any adjustments whatsoever. In fact they are already publishing it:


If it were me, I’d throw out most of the USHCN and co-op stations with problematic records rather than try to salvage them with statistical fixes, and instead locate the best stations with long records, no moves, and minimal site biases, and use those as the basis for tracking the climate signal. By doing so, not only do we eliminate a whole bunch of make-work with questionable and uncertain results, we also end all the complaints of data falsification and the quibbling over whose method really finds the “holy grail of the climate signal” in the U.S. surface temperature record.

Now you know what Evan Jones and I have been painstakingly doing for the last two years since our preliminary siting paper was published here at WUWT and we took heavy criticism for it. We’ve embraced those criticisms and made the paper even better. We learned back then that adjustments account for about half of the surface temperature trend:

We are in the process of bringing our newest findings to publication. Some people might complain we have taken too long. I say we have one chance to get it right, so we’ve been taking extra care to effectively deal with all criticisms from then, and criticisms we have from within our own team. Of course if I had funding like some people get, we could hire people to help move it along faster instead of relying on free time where we can get it.

The way forward:

It is within our grasp to locate and collate stations in the USA and the world that have as long an uninterrupted record, and as much freedom from bias, as possible, and to make that a new climate data subset. I’d propose calling it the Un-Biased Global Historical Climate Network, or UBGHCN. That may or may not be a good name, but you get the idea.

We’ve found at least this many good stations in the USA that meet the criteria of being reliable and without any need for major adjustments of any kind, including the time-of-observation change (TOB). Some do require the cooling-bias correction for MMTS conversion, but that is well known and a static value that doesn’t change with time. Chances are, a similar set of 50 stations could be located worldwide. The challenge is metadata, some of which is non-existent publicly, but with crowd-sourcing such a project might be doable, and then we could fulfill Gavin Schmidt’s and Eric Steig’s vision of a much simpler set of climate stations.
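For what it’s worth, a static MMTS correction like the one mentioned above is trivial to apply. A sketch (the offset value and conversion year are invented for illustration; the real correction differs by station and by study):

```python
# Sketch: a static MMTS conversion correction is just a constant offset
# applied from the conversion date onward. The 0.05 C offset and the
# conversion year are invented for illustration.

MMTS_OFFSET = 0.05       # assumed cooling bias of the MMTS sensor, in C
CONVERSION_YEAR = 1985   # assumed year the station switched to MMTS

def correct_mmts(record):
    """record: {year: temperature in C}. Add the offset only after conversion."""
    return {y: (t + MMTS_OFFSET if y >= CONVERSION_YEAR else t)
            for y, t in record.items()}

raw = {1984: 20.00, 1985: 19.90, 1986: 19.95}
fixed = correct_mmts(raw)
print(fixed[1984], round(fixed[1985], 2))  # 1984 unchanged, 1985 corrected
```

Because the correction is a known constant rather than a time-varying statistical estimate, it doesn’t introduce the kind of uncertainty the homogenization adjustments do.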

Wouldn’t it be great to have a simpler, known-reliable set of stations rather than this mishmash which goes through the statistical blender every month? NOAA could take the lead on this, but chances are they won’t. I believe it is possible to do independently of them, and it is a place where climate skeptics can make a powerful contribution, one far more productive than the arguments over adjustments and data dropout.




Yup! good stuff.

What you haven’t quite addressed, which Steve Goddard nails a lot
Is the changing of adjustments to past records that we’ve got
There are TOBS adjustments, fair enough, but then adjusted TOBS
And more TOBS changes, “fixing history” with major probs
===|==============/ Keith DeHavelle

There is evidence that a lot of the “Estimated” USHCN data, that Steve Goddard has found, has nothing to do with station dropout.
At Luling, Texas, which just happens to be at the top of the USHCN Final dataset, there are ten months in 2013 which are shown as “Estimated”. Yet station records indicate that full daily data is available for every single month.
So why have they been estimated?
Worse still, the temperature estimates are more than 1C higher than the real actual measurements. This should not be TOBS, as this is usually applied to historic temperatures, and not current ones.

As I said in the comments for part one. Zeke may advocate anomalies, but in the Final monthly dataset (I repeat FINAL) there are only 51 stations with 360 values without an E flag from 1961-1990.
There is only ONE station with 360 values with no flags.
Only 64% of the 2013 values do not have an E flag.

[snip – off-topic and we also don’t do bargaining here about what you might do if we do something -mod]


I pointed out the Goddard/Marcott analogy on the 24th, twice, and was roundly attacked for it since my name isn’t McIntyre, attacked for my nefarious motives that are merely to strongly shun mistakes, outlandish conspiracy theories and outspoken crackpot comments on mainstream skeptical blogs that afford the usual hockey stick team members newfound media attention with very lasting damage to skepticism and also strongly hinder my ability to reach out to reasonable people who are strongly averse to extremism as nearly all normal people happen to be for very good reason.
When I’m attacked like this I say you’re on your own now boys. The thousands of comments a week on these skeptical blogs represent a vast opportunity cost as Al Gore tutored activists continue to blanket news sites with comments, often only opposed by amateur hour skeptics who are readily shot down. I worked much harder for this than the vast majority of you mere blog citizens, and I’m sick of fighting two fronts.

Several things one learns from studying statistics: 1) one gets increases in accuracy as sample size increases, but it is a matter of diminishing returns; you don’t need a large sample to get a pretty good fix on the population’s characteristics, and 2) biased samples will kill you.

Brian R

A couple of things. Has anybody done a comparison between the old USHCN and new USCRN data? I know the USCRN is a much shorter record but it should be telling about data quality of the USHCN. Also if a majority of the measured temperature increase is from UHI affecting the night time temps, why not use TMAX temps only?. It seems to figure that if “Global Warming” was true it should effect daytime temps a much as night time.


Thanks Anthony
Nice and thorough job of explaining things.
Looking forward to the revised paper.
Regards Ed

Doug Danhoff

Anyone who has paid attention to weather and climate for the past fifty years knows that there has not been a spike in temperature. GISS claims a .6 C rise over the last decade or two, and that is obviously a fabrication. Are they thinking that that is about as big a lie as they can tell without getting too much flack? In reality temperatures have trended down for thousands of years once the “adjustments” are removed. I have been told by many that I will never convince the liars by calling them liars, but I am far past the point of worrying about them, their feelings, or their sensibilities.
All I want now , and will never get, is their appropriate punishment for the consequences of their destructive self serving participation in the biggest science scam of our lifetime.

Brian R,
NCDC lets you compare USHCN and USCRN here:
They are very similar over the period from 2004 to present (USCRN actually has a slightly higher trend). However, Goddard’s misconceptions notwithstanding, there have been very few adjustments to temperatures after 2004, so it’s difficult to really say whether raw or homogenized data agrees more with USCRN.
If you look at satellite records, raw data agrees better with RSS and homogenized data agrees better with UAH over the U.S. from 1979-present.


Often Steve graphs data only from stations with long histories, so in a sense he is already doing what you suggest. I just don’t like it when people claiming to be scientists take daily averages, spatial averages, monthly averages, geographic averages, and then average all the averages together; you lose all frame of reference and the margin of error explodes, even more so if you insist on using anomalies.


if I remember correctly UAH 6 will be released soon, matching #RSS more closely.
[Nbr 3RSS (^”#” ?) perhaps? .mod]


I find it odd that the past is always colder and the recent era always warmer after the adjustments. ALWAYS.
And I think many of you are very naïve to believe NOAA and GISS and NASA wouldn’t doctor the data to make the Climate Liars (global warmers) position look correct. The people running these organizations are appointed by a President who has decreed that the science is settled. I seriously doubt one of his appointed hacks is going to release data that doesn’t support the President. More than likely they are there to reinforce the President’s position, by hook or crook.
Stop being so damn gullible. Global warming has nothing to do with science. It’s a political agenda being imposed by people who have no inclination to play by the rules like the skeptics.


Paul Homewood says:
June 26, 2014 at 11:54 am

Good work Paul. Is there anyone out there comparing estimated temperatures with actual recorded measurements, like you did with Luling, Texas? Increasing the temperatures of a rural site by 2.26C since 1934 seems indefensible. It is doing the opposite of adjusting for UHI. I would like to hear their explanation if they have one.


What is the justification for adjusting past values, and is there any way to convey the increasing level of statistical uncertainty in the USHCN values, like confidence intervals or error bars on charts?


I did find this Gavin Schmidt quote:
“Global weather services gather far more data than we need. To get the structure of the monthly or yearly anomalies over the United States, for example, you’d just need a handful of stations, but there are actually some 1,100 of them. You could throw out 50 percent of the station data or more, and you’d get basically the same answers”

Alexej Buergin

It is a bit OT, but I still would like to know if the annual mean temperature in Reykjavik in 1940 is, as the Icelanders say, 5°C, or, as GISS and Stokes say, 3°C?

A C Osborn

I cannot believe that you agree that 50-60 stations (even perfect stations) can give you the average temperature of the world.
How many countries are there in the world?
Do you believe that on average 1 thermometer per country would do it?

Otter (ClimateOtter on Twitter)

I seem to recall someone- was it Paul Homewood? – saying that adjustments had been made to daily temperatures all the way back into the 1880s. 1) Why would they need to make such adjustments, and 2) why would they make the distant past even cooler than was recorded, while appearing to warm everything after 1963?

Zeke: “there has been very little adjustments to temperatures after 2004”
Because all the dirty work was done in cooling the past. The trend is still up. And adjusting the past continues.


What needs to stop is “activist scientists” saying we need to “act now”, while they have a “data” set that is so full of holes no rational person would use it. Just looking at the stations Geographically near me you get stuff like this:

USH00xxxxxx 2004  -695     -217      482      942     1668a    1865a    2141     1931     1795a    1097      601a     -72h       0
USH00xxxxxx 2005  -323     -100       96      938     1286     2285     2351     2249a    1851a    1174a     583a    -337a       0
USH00xxxxxx 2006   268c    -202      284a    1053     1505     1882     2367b    2161     1602      938a     563b     317a       0
USH00xxxxxx 2007   -83b    -764      421b     769     1664a    2012a    2092c    2184c    1848     1475      426a     -34f       0
USH00xxxxxx 2008  -220     -376a      21     1057a    1249a    2131a    2189     2048a    1807c     981a     367a    -140a       0
USH00xxxxxx 2009  -789d    -183      329b     924c    1492a    1944E    1978E    2119E    1700E     930E     710E    -146E       0
USH00xxxxxx 2010  -455E    -436E     443E    1174E    1688E    2164E    2349E    2249E    1777E    1144E     481E    -433E       0
USH00xxxxxx 2011  -606E    -311E     214E     932E    1583E    2064E    2476E    2110E    1707E    1080E     750E     211E       0
USH00xxxxxx 2012   -87E      61E    1011E     839E    1823E    2095E    2462E    2091E    1629E    1058E     349E     251E       0
USH00xxxxxx 2013  -189E    -295E      53E     848E    1675E    2008E    2219E    2022E    1693E    1154E     300E    -105E       0
USH00xxxxxx 2014  -793E    -699E    -169E     929E    1564E    1989E   -9999    -9999    -9999    -9999    -9999    -9999        0

No reported data since 2009, but a “full record” nonetheless. Instead of making up data, places like this need to either be dropped, or only the truncated history used. Or you just beg for people to claim nefarious intent. I’ll stick with the opinion Goddard helped me form: none of you know whether it’s getting warmer, colder, or staying the same. He doesn’t either, how could he?
According to the paper records that you can see here: http://www.ncdc.noaa.gov/IPS/coop/coop.html the above referenced station was “officially” closed in 2011, and yet has data up until today. This is what we are supposed to accept as “good science”? Argue all you want, it isn’t. What happened to scientists who could say: “I don’t know”? Because none of you do.
REPLY: since you didn’t identify what station it is, I can’t check. However, the “closure” you see in 2011 might actually be a station move, and the data since then is from the new station. Sometimes observers die, or decide they don’t want to do it anymore. If you can tell me what station it is (name and the number, or optionally lat/lot) I can check and tell you if that is the case. – Anthony

G. Karst

What “circa 1950ish futurist” would have predicted that by the year 2014, our relatively advanced civilization, would not have sorted out global surface temperatures… yet? GK

Eugene WR Gallun

I just semi-finished this today. Surprisingly, it is on topic, so I will post it. Professor Jones is English, so I tried to write something the English might enjoy (or maybe groan loudly about). Americans might not appreciate that, right behind “Rule Britannia”, the song “Jerusalem” holds second place in the national psyche of England.
The English Prometheus
To tell the tale as it began
An ego yearned
Ambition burned
Inside a quiet little man
No one had heard of Phillip Jones
Obscure to fame
(And likewise blame)
The creep of time upon his bones
Men self-deceive when fame is sought
Their fingers fold
Their ego told
That fire is what their fist has caught
So self-deceived, with empty hand
Jones made it plain
That Hell would reign
In England’s green and pleasant land
Believe! Believe! In burning heat!
In mental fight
To get it right
I’ve raised some temps and some delete!
And with his arrows of desire
He pierced the backs
Of any hacks
That asked the question — where’s the fire?
East Anglia supports him still
Whitewash was used
The truth abused
Within that dark Satanic mill
Prometheus or Burning Man?
The truth will out
And none will doubt
The evil that this imp began
Eugene WR Gallun

Lance Wallace

Anthony says
“Wouldn’t it be great to have a simpler and known reliable set of stations…We’ve found at least this many good stations in the USA that meet the criteria of being reliable…
Anthony, your crowd-source approach has already found most or all of the US stations that meet (at the 1 or 2 level) the siting criteria. Why not publish your best list? Then let the crowd (skeptics and lukewarmers alike) comment on individual stations (e.g., using their specialized knowledge to say whether a station deserves to be there), with a final selection made in a few months after all comments are received. That would constitute a “consensus” best set for the US.
It would be great if the Surface Stations project could be expanded to be global, but realistically there seems no chance.


It’s so obvious that you should only use quality measurements. Identify fewer than 100 very good stations, document them thoroughly, and use only them. There is no point in throwing thousands of bad ones into the equation. That’s what HARRY_READ_ME.TXT is all about: a complete and unbelievable mess.
When you have those 100 stations, the next step is to use only a subset of them. Pick 10 at random to simulate a perfect proxy reconstruction. When you do a number of those, will you get different results? If you do, there is no chance whatsoever that an actual proxy reconstruction (trees, corals, whatever) can tell us anything reliable about past temperatures.
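The subset test described above can be sketched as a quick simulation; the 100-station network, its 0.01 °C/yr trend, and the noise level are all invented for illustration:

```python
import random
import statistics

random.seed(42)

# Hypothetical annual means for 100 well-sited stations: a shared
# regional signal (a 0.01 C/yr trend) plus station-specific noise.
years = list(range(1950, 2011))
regional = {y: 0.01 * (y - 1950) for y in years}
station_temps = [
    {y: regional[y] + random.gauss(0.0, 0.3) for y in years}
    for _ in range(100)
]

def network_mean(stations):
    """Average the chosen stations year by year."""
    return {y: statistics.mean(s[y] for s in stations) for y in years}

# Pick 10 stations at random, several times, and compare the trends.
trends = []
for trial in range(3):
    m = network_mean(random.sample(station_temps, 10))
    trend = (m[2010] - m[1950]) / 60.0   # crude endpoint trend, C/yr
    trends.append(trend)
    print(f"trial {trial}: trend = {trend:.4f} C/yr")
```

If repeated 10-station draws give materially different trends, the commenter’s point follows: a network that sparse cannot pin down the signal.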

A C Osborn

What the raw data shows is that CO2 has absolutely nothing to do with the temperature increases.
Temps were stable at 12 degrees from 1900 to 1950.
They jumped over 1.5 degrees in a year or two to over 13.5 degrees, then slowly fell back to 12.25 degrees over the next 50 years.
They jumped 1.75 degrees in another year, fell back over a degree within 10 years, and have remained stable at 13 degrees for about 10 years.
Warmists would not want the world to see this graph.

“Goddard is wrong to say we can use all the raw data to reliably produce a national average temperature because the same data is increasingly lossy and is also full of other biases that are not dealt with.” ~AW
But that is not exactly what Goddard is saying. Goddard is saying that all adjustments always go in favor of global warming. Always. And we can never leave the G** D**** past alone. The past always cools. 1940 is a moving Fracking Target! Come on now; the past continues to change? Is this one of those very tiresome scifi novels about time travel?
Goddard is not telling you how to come up with a national average. He is telling you that the present method of doing that is not sane. It does not pass a sanity test. (I would use stronger language but the mods tell me that the software will then stuff my comment in moderation)
At present they are using F**udulent methods to push a national agenda of de-industrialization on us against our wills.

Las Vegas is a bit of an outlier due to the fact that within three or four decades a desert was changed into a green paradise. More vegetation does in fact trap heat (see my table on minima for Las Vegas). The removal of trees [vegetation] does exactly the opposite: it causes cooling (see my table on minima for Tandil, Argentina).
As to how to balance a sample of weather stations, I have explained that too.

A couple of issues and a repeated request I made two years ago
Issue 1
USHCN is a tiny fraction of all the daily data available for the US.
Issue 2.
Over two years ago you “published” a draft of your paper.
That paper relied on ‘rating’ some 700 stations.
My request two years ago was that you release the station ratings under a non-disclosure agreement.
That is, I will sign a licence to not use your data for any publication or transmit it to anyone.
I will use that data for one purpose only:
To create an automated process for rating stations.
You don’t even have to send the full data. In fact I prefer to only have half of the stations.
Half of the stations from each category ranking. This should be enough for me to build a classification tree. Then when you publish your paper I will get the other half to test the tree.
REPLY: While it is tempting, as you know I’ve been burned twice by releasing data prior to publication. Once by Menne in 2009 (though Karl made him do it) and again by your boss at BEST, Muller, who I had an agreement with in writing (email) not to use my data except for publishing a paper. Two weeks later he was parading it before Congress and he came up with all sorts of bullshit justification for that. I’ll never, ever, trust him again.
My concern is that you being part of BEST, that “somehow” the data will find its way back to Muller again, even if you yourself are a man of integrity. Shit happens, but only if you give it an opportunity and I’m not going to. Sorry. – Anthony
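The train-on-half, verify-on-the-held-out-half protocol proposed above can be sketched with a toy decision stump (a one-node classification tree). Everything here is a synthetic stand-in, not real surfacestations.org metadata: the two features, the ratings, and the 30 m rule are invented for illustration.

```python
import random

random.seed(0)

# Synthetic station metadata: (distance_to_building_m, over_asphalt).
# A station is rated "good" when it sits far from buildings and not
# over asphalt -- purely illustrative numbers and rules.
def make_station():
    dist = random.uniform(0, 60)
    asphalt = random.random() < 0.4
    good = dist > 30 and not asphalt
    return (dist, asphalt, "good" if good else "poor")

stations = [make_station() for _ in range(200)]
train, test = stations[:100], stations[100:]

def fit_stump(data):
    """Pick the distance threshold that best separates good from poor."""
    best_thr, best_acc = None, 0.0
    for thr in range(0, 61, 5):
        acc = sum((d > thr and not a) == (label == "good")
                  for d, a, label in data) / len(data)
        if acc > best_acc:
            best_thr, best_acc = thr, acc
    return best_thr

thr = fit_stump(train)
test_acc = sum((d > thr and not a) == (label == "good")
               for d, a, label in test) / len(test)
print(f"threshold={thr} m, held-out accuracy={test_acc:.2f}")
```

The point of the split is exactly what Mosher describes: the half withheld until publication measures whether the learned rule generalizes, rather than merely memorizing the rated stations.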


You asked: “So if that is the case, and one of the most prominent climate researchers on the planet (and his associate) says we need only somewhere between 30-50 stations globally…why is NOAA spending all this time trying to salvage bad data from hundreds if not thousands of stations in the USHCN, and also in the GHCN?”
So they can tell a story, as opposed to presenting the data. The data doesn’t tell the story they want told.
Another good question is why so much time and energy is wasted discussing the mashing of numbers that don’t give any meaningful information, across all of the temp series and their statistical masters. Maybe there is a new term in the field: climatological masturbation.

@Col Mosby at 12:11 pm
1) one gets increases in accuracy as sample size increases, but it is a matter of diminishing returns – you don’t need a large sample to get a pretty good fix on the population’s characteristics,
On the other hand, you need a very large sample to measurably reduce the reported uncertainty. This is the attraction to use 10,000 stations when 100 will get you close with unacceptable (to politicians) uncertainty.
The rub, I suspect, is that they are not adequately modeling the uncertainty ADDED to the analysis by including thousands of infill data points.
Watts: If it were me, I’d throw out most of the USHCN and co-op stations with problematic records rather than try to salvage them with statistical fixes, and instead, try to locate the best stations with long records, no moves, and minimal site biases and use those as the basis for tracking the climate signal.
Yes, sir. At the very least, accept that as an INDEPENDENT analytical process. It ought to achieve the same result as the more complicated NOAA, NCDC, BEST, CRU approaches. If they are not similar, THERE’s your real uncertainty.
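The diminishing-returns point is worth making concrete: the standard error of a mean shrinks as 1/√n, so quadrupling the station count only halves the uncertainty. This is a generic statistical sketch, not temperature data:

```python
import random
import statistics

random.seed(1)

# Empirically confirm SE ~ sd/sqrt(n) for means of independent,
# unbiased readings.  Purely a statistical illustration.
population_sd = 1.0

def se_of_mean(n, trials=1000):
    """Empirical standard error of the mean of n random readings."""
    means = [statistics.mean(random.gauss(0, population_sd) for _ in range(n))
             for _ in range(trials)]
    return statistics.stdev(means)

for n in (100, 400, 1600):
    theory = population_sd / n ** 0.5
    print(f"n={n:5d}  empirical SE={se_of_mean(n):.3f}  theory={theory:.3f}")
```

Note the caveat in the comment above still stands: this shrinkage only applies to random error, and says nothing about uncertainty added by infilled data points.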


Could you please read before hitting the submit button?


That’s interesting, your comments are in many ways a mirror of mine.
I’m just going to roll my eyes at the whole “fabricated” vs “infilled” thing and move on. Word games are silly. Data is interesting.
Also, Steve’s character has been questioned, as has yours. Again, eyeroll. I will accept results from Satan himself if they can be independently replicated by skeptical observers. Ad hominems are boring. Data is interesting.
I applaud your idea for a simplified network, but unless Rand Paul is elected I do not think it is remotely plausible that it will ever be adopted by the powers-that-be. OTOH, should WUWT start publishing its own such temperature, I will upgrade my applause to a standing ovation.
Here’s a point I’ve made a few times now, that I like more and more the more I consider it: isn’t it possible, even likely given the UHI problems you, McIntyre, and others have pointed out, that Steve’s simple average, despite all its major flaws, is more accurate than what USHCN is officially reporting? And I think we can probably prove that with correlations to proxies like Great Lakes ice, economic reports, etc.

By the way, I have a new post up looking in more detail about potential biases in Goddard’s averaged absolutes approach for the U.S.: http://rankexploits.com/musings/2014/how-not-to-calculate-temperatures-part-3/


As far as I can tell, each station has its own climatology from which the anomalies are calculated.
So when a station does not report and is infilled using a nearby station, the nearby station’s climatology is used; the anomaly is just moved across to the missing station.
Now as far as I can tell this will alter the climatology of the infilled station toward that of the nearby station, and because RECENT data is favoured over older data, this kicks up a red flag with regard to the past climatology of the infilled station.
The algorithm is trained to spot problems, and because it trusts new data more than old data, it changes the old data. This is reversed again when real data is finally reported: the climatology calculation is done again, and the past data is adjusted again.
Man o man, talk about giving a junkie the keys to the pharmacy.
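For reference, anomaly-based infilling in its simplest form looks like the few lines below; the station values and climatologies are invented, and real pairwise-homogenization code is far more involved than this:

```python
# Minimal sketch of anomaly-based infilling: take the anomaly from a
# reporting neighbour and add it to the missing station's own
# climatology.  All numbers are invented for illustration.
climatology = {"A": 15.0, "B": 12.0}    # long-term June means, deg C
reported    = {"A": 16.1, "B": None}    # station B failed to report

neighbour_anomaly = reported["A"] - climatology["A"]   # about +1.1 C
infilled_B = climatology["B"] + neighbour_anomaly
print(f"infilled B = {infilled_B:.1f} C")
```

The commenter’s concern is about which climatology gets used in step two: add the neighbour’s anomaly to B’s own climatology and B keeps its local character; carry the neighbour’s climatology across and B’s history starts to drift toward its neighbour’s.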

Bad Andrew

“Maybe there is a new term in the field: climatological masturbation.”
I call it “Squiggology.”
That’s when you apply advanced stupidity to make better squiggles in graphs that have no relevance to real life.

Frank K.

“This is because there is a spatial scale (roughly a Rossby radius) over which temperatures are going to be highly correlated for fundamental reasons of atmospheric dynamics.”
Are there papers which support this assertion? I would find this very hard to believe in general. In fact, the original Hansen and Lebedeff (1987) paper (despite their claim in the abstract) found very poor correlation over many parts of the globe. Read it for yourself…

Frank K.

Zeke – any valid links to NCDC data processing software (TOBS etc.) – or are you too busy??

“Steig claims 30 station records are all that are needed globally.”
I disagree. The problem is choosing which stations best represent the region. Microclimates often have varying trends, which are most pronounced in minimum temperatures. Maximum temperatures will likely provide better trend data; furthermore, maximum temperatures provide the more reliable measure of heat accumulation. However, droughts and land-use changes can lower heat capacity, which can then raise maximum temperatures with much LESS heat.


Mark Stoval (@Mark Stoval) @ 1.22pm
Totally agree with your post !!


1. I would not trust Steig for anything. Isn’t he the fellow who smeared one Antarctic station all over Antarctica and said the whole place was warming? A completely debunked piece of work, I believe by CA.
2. That said it seems that WUWT and Real Science have more in common than not.
3. All the adjustments are towards warming…not possible.
4. Goddard is correct.
5. WUWT is partly correct, but only in this case, and overlooks the BIG picture of hundreds of other examples of adjustments upwards.
6. http://notalotofpeopleknowthat.wordpress.com/2014/06/26/massive-temperature-adjustments-at-luling-texas/ is correct
7. LOL


Goddard is not trying to compute some temperature….
…he’s talking about the way they are doing it
His 40% was referring to infilling…..which he calls the F word
..his 30% was not some snapshot waiting on someone to mail in their results in a week like you played it
his 30% was for the YEAR 2013..and was not talking about waiting for some stations to mail it in….it was 40% stations dropped…..gone…..bombed back to the stone age….not in existence
REPLY: “it was 40% stations dropped…..gone…..bombed back to the stone age….not in existence”
Nope, sorry. You are 110% wrong about that, many of those stations are still reporting. His method looks at every data point, so if for example Feb 2014 daily data from Feb 14th and 27th are missing because the observer didn’t write anything down for those days, that gets counted. The station reported the rest of February, March, etc. There are a lot of stations like that, where observers may miss a couple of days per month. That’s a whole different animal than being “gone…..bombed back to the stone age….not in existence”.

Eugene WR Gallun @1:14 pm,
You have a real talent.

Gunga Din

Mr Layman here.
Am I in the ballpark when I say the sites and stations are being used to arrive at something they were never designed for?
I mean, an airport station was put there to give ground conditions to the pilots landing the planes.
Most, if not all, other stations were set up to give local conditions for local use (and weather forecasting, of course).
But now their readings are being used for global temperatures. Square peg, round hole.
It would seem to me that only satellites are in a position to give a good idea of a global temperature. But I don’t think even they can read the deep oceans.
To paraphrase an old song, “Does anybody really know what temp it is?”


Col Mosby: “Several things one learns from studying statistics: 1) one gets increases in accuracy as sample size increases, but it is a matter of diminishing returns – you don’t need a large sample to get a pretty good fix on the population’s characteristics, and 2) biased samples will kill you.”
I think I need to correct you on ‘1’ above. One actually may achieve increased precision as sample size increases, assuming sample variation is random. Accuracy and precision are independent characteristics of data samples. Accuracy can be no better than that of each individual observed value, as determined by instrument calibration, the coupling of the sensor to the process measured, and the ability to record values. Precision can be greater than accuracy, meaning that in some situations you may observe values with measurement increments finer than the accuracy of the measurement. In other situations, precision may be poorer than the accuracy.
‘2’ above attempts to reconcile this disparity between accuracy and precision by calling accuracy errors ‘bias’. There are many other kinds of errors besides what we would normally label as bias. One source of inaccuracy could simply be that the instrument used to collect an observation was never designed to provide high accuracy or even repeatability of measurements. Issues such as short- and long-term drift, aging, hysteresis, and non-linearity in each of these, as well as the quality of initial calibration and calibration markings, all introduce errors within an instrument that are not removed by averaging observations. Likewise, things that affect the coupling of a sensor to a measured process can introduce errors in observations that do not average out over time.
So, simply assuming errors are a combination of random noise plus a static bias is quite invalid. Never accept claims of increased accuracy of observations based upon averaging, or statistically massaging data in any way. A claim of improved precision may be justified in some cases. However, accuracy is always that of each individual observation included in the statistical reduction.
I am pleased that Anthony acknowledges that the USHCN is doing its job correctly as designed. It was never intended as a fractional-degree Fahrenheit or Celsius long-term climate trend instrumentation system. That this system shows human impacts and large Urban Heat Island temperature trends is both correct and desired. People really are interested in what the temperature actually is, not necessarily what it would have been if humans hadn’t had an effect on it. The USCRN is our first real shot at looking for that long-term trend. Let’s hope that trend will stay positive and not negative!
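The accuracy-versus-precision distinction above can be shown in a few lines: averaging many readings from a biased instrument narrows the spread (precision) but leaves the systematic error (accuracy) untouched. The bias and noise figures here are arbitrary:

```python
import random
import statistics

random.seed(7)

# A thermometer that reads 0.5 C high: no amount of averaging of its
# own readings will remove that systematic offset.
true_temp, bias, noise_sd = 20.0, 0.5, 0.2

def read():
    return true_temp + bias + random.gauss(0, noise_sd)

mean_1000 = statistics.mean(read() for _ in range(1000))
print(f"mean of 1000 readings = {mean_1000:.3f}  (truth = {true_temp})")
# The spread of the mean is ~noise_sd/sqrt(1000), about 0.006 C,
# yet the mean still sits ~0.5 C above the truth.
```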

“The question is, have we reached a point of no confidence in the data because too much has been lost?”
There is always a question of how much data you need. It’s familiar in continuum science. But there are answers.
We’re trying to get an average for the whole US. That is, to calculate an integral over space. Numerical integration is a well studied problem. You have a formula for a function that provides an interpolate everywhere, based on the data, and you integrate that function by calculus. It will probably be used as a weighted sum expression, but that is the basis for it.
So how much data do you need? Basically, enough so you can almost adequately interpolate each value from its neighbours. That’s testable. I say almost, because if you can interpolate with fully adequate accuracy, you have more data than you need. But if the accuracy is very poor, then even with the info about that point added, there are likely areas remaining that are poorly interpolated.
That’s why it’s so silly to talk of infilling as “faking” data. The whole concept that we’re dealing with (average US) is based on interpolation. And there is enough US data that known data is, in monthly average, close to its interpolate from other data.
It’s also yet another reason for using anomalies. Real temperature is the sum of climatology and anomaly. You can interpolate anomaly; it varies fairly smoothly. Climatology varies with every hill. So to find temperature for the whole US, you average anomaly, and add it to locally known climatology.
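The decomposition Stokes describes can be sketched as follows; the three stations and their numbers are invented, and a real analysis would spatially interpolate the anomaly field rather than take a plain mean:

```python
import statistics

# temperature = climatology + anomaly: anomalies vary smoothly, so a
# few stations can estimate the regional anomaly, while climatology
# stays local to each site.  All numbers invented for illustration.
climatology = {"valley": 14.0, "hilltop": 9.0, "coast": 12.5}
current     = {"valley": 15.2, "hilltop": 10.1, "coast": 13.6}

anomalies = {s: current[s] - climatology[s] for s in climatology}
regional_anomaly = statistics.mean(anomalies.values())

# Absolute temperature anywhere = local climatology + shared anomaly.
estimate_hilltop = climatology["hilltop"] + regional_anomaly
print(f"regional anomaly = {regional_anomaly:.2f} C")
print(f"hilltop estimate = {estimate_hilltop:.2f} C")
```

Note how the per-station anomalies cluster near +1.1 °C even though the absolute temperatures span 5 °C; that smoothness is what makes anomaly averaging tractable where absolute-temperature averaging is not.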

Gunga Din

dbstealey says:
June 26, 2014 at 2:12 pm
Eugene WR Gallun @1:14 pm,
You have a real talent.

At times I’ve done parodies of poems and songs. That means that somebody else did the hard work of meter, rhyme etc. I just have some fun with it.

Owen in GA

Ok, You are definitely in cahoots with big oil climate…(or is that big climate oil)../sarc
Pollution of records is a very difficult problem. I still don’t agree with changing the past data in the time series, especially if those changes are based on interpolated estimates. There really needs to be some sort of way to automate some of these night time rises or declines, but how to do it without very good station metadata I don’t know. It just seems like whatever we do we are guessing. Then we misapply the law of large numbers to say there is no way that many stations could all be wrong, and apply tiny error bars on populations that still have large unresolved systematic errors and biases. Then the activists who have been trying to deindustrialize us since the beginning of the industrial revolution take that false certainty and exclaim “it’s worse than we thought, we are all going to die!!!!!”

“Steig claims 30 station records are all that are needed globally”
He actually suggested 60, divided into halves that you could compare. There was a thread at Jeff Id’s; this comment of Eric’s is a good starting point.
I got involved, and did an actual calculation with 60 stations. One can ask for them to be rural, with at least 90 years of data, etc. It wasn’t bad, and the result was not far from a full station calc. Later I updated with better area weighting here and here.
Zeke and I have done tests with just rural, excluding airports etc. It just doesn’t make a big difference.