On the "march of the thermometers"

I’ve been away from WUWT this weekend, recovering from a cold and spending time with visiting family, so I’m just now getting back to regular posting. Recently on the web there has been a lot of discussion around the dropping of climatic weather stations, aka “the march of the thermometers,” as Joe D’Aleo and I reported in this compendium report on issues with surface temperature records.

Most of the station dropout issue covered in that report is based on the hard work of E. M. Smith, aka “chiefio,” who has been aggressively working through the data bias issues that develop when thermometers are dropped from the Global Historical Climatology Network (GHCN). My contribution to the study of the dropout issue was essentially zero, as I focused on what I’ve been studying for the past three years: the USHCN. The USHCN has had a few station dropout issues, mostly due to closure, but nothing compared to the magnitude of what has happened in the GHCN.

That said, the GHCN station dropout Smith has been working on is a significant event: the inventory has gone from about 7,000 stations worldwide to about 1,000 now, with lopsided spatial coverage of the globe. According to Smith, there has also been an affinity for retaining airport stations over other kinds of stations. His count shows 92% of GHCN stations in the USA are sited at airports, versus about 41% worldwide.

The dropout issue has been known for quite some time. Here’s a video that WUWT contributor John Goetz made in March 2008 that shows the global station dropout over time. You might want to hit the pause button at time 1:06 to see what the recent global inventory looks like.

The question that is being debated is how that dropout affects the outcome of absolutes, averages, and trends. Some say that while the data bias issues show up in absolutes and averaging, dropout doesn’t affect trends at all when anomaly methods are applied.
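To make the two positions concrete, here is a minimal sketch with entirely synthetic numbers (two hypothetical stations, invented values; not data from the report). It illustrates the claim above under its key assumption: that the dropped station shares the same underlying trend as the survivors. In that case the absolute average jumps at the dropout, but the anomaly average does not.

```python
# Illustrative sketch (synthetic data): compare absolute averaging with
# anomaly averaging when a cold station drops out mid-record.
import numpy as np

years = np.arange(1950, 2011)
rng = np.random.default_rng(0)

# Two hypothetical stations with the SAME 0.02 C/yr trend but very
# different absolute levels (one cold/high-altitude, one warm).
cold = -5.0 + 0.02 * (years - 1950) + rng.normal(0, 0.3, years.size)
warm = 15.0 + 0.02 * (years - 1950) + rng.normal(0, 0.3, years.size)

# Simulate dropout: the cold station stops reporting after 1990.
cold_masked = np.where(years <= 1990, cold, np.nan)

# Absolute method: average the raw temperatures of whatever stations report.
absolute_mean = np.nanmean(np.vstack([cold_masked, warm]), axis=0)

# Anomaly method: subtract each station's own 1951-1980 mean first.
base = (years >= 1951) & (years <= 1980)
anoms = np.vstack([cold_masked - np.nanmean(cold_masked[base]),
                   warm - np.nanmean(warm[base])])
anomaly_mean = np.nanmean(anoms, axis=0)

# The absolute series jumps ~10 C at the dropout; the anomaly series doesn't.
jump_abs = absolute_mean[years == 1991][0] - absolute_mean[years == 1990][0]
jump_anom = anomaly_mean[years == 1991][0] - anomaly_mean[years == 1990][0]
print(jump_abs, jump_anom)
```

Note the hedge: if the dropped stations had *different* trends from the survivors, the anomaly average would still shift, which is exactly the point in dispute.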

Over at Lucia’s Blackboard blog there have been a couple of posts on the issue that raise some questions about methods. I’d like to thank both Lucia Liljegren and Zeke Hausfather for exploring the issue in an “open source” way: all the methods and code used have been posted at Lucia’s blog, which enables a number of people to examine and replicate the issue independently. That’s good.

E. M. Smith (“chiefio”) has completed a very detailed response to the issues raised there and elsewhere. You can read his essay here.

His essay is lengthy; I recommend giving yourself more than a few minutes to take it all in.

Joe D’Aleo and I will have more to say on this issue also.



239 Comments
March 8, 2010 5:09 am

Satellite derived global mean temps show trends that are similar to thermometer temp trends. If station dropout were causing an artificial warming, then how does one explain satellite data?
Are you arguing about a few trees while missing the forest?

Frank K.
March 8, 2010 5:32 am

“But the climate scientists do it differently. They do two things that prevent that bias. One is the use of anomalies. That is, you form the global mean by averaging differences of station temps from their local mean over a fixed period. If you are only looking at temperatures relative to those means, it scarcely matters whether stations being dropped are hot or cold. It only matters whether they are rising relative to that local long term mean.”
Nick – could you please explain how the use of “anomalies” makes any sense thermodynamically? What does a “world average temperature” arrived at with this method really mean? Thanks.

roger
March 8, 2010 5:35 am

re: Philhippos
Not sure if UK stations have been checked for siting; it would be interesting to contribute if it hasn’t been done. They are generally at RAF bases, which have been around since prop planes, so nobody would have considered the exhaust from jets. The areas of tarmac may also have changed over 50 years or so.
Roger

Editor
March 8, 2010 5:56 am

Nick Stokes (04:31:38) :
re Report “Summary point 5:
There has been a severe bias towards removing higher-altitude, higher-latitude, and rural stations, leading to a further serious overstatement of warming.”
Again you have to read it in the context of what E.M.Smith is saying. He is looking (for the moment anyway) at biases in the raw data that go into GIStemp. The statement above is correct in that context, as the data that therefore goes into GIStemp is warming. The report may well take it out of context and infer that this also causes the global average temperature to warm, but EMS is not saying that (yet anyway), as he has not yet got to that part of his analysis.
On the other hand, as this post outlines, stations from cooler areas (high altitude, high latitude) have greater seasonal temperature variations and are more susceptible to temperature extremes. I understand that much of “Global Warming” has actually been shown to be “Winter warming”; minimum temperatures are not as low. When global temperatures are falling (as now) this may mean lower lows in stations with this type of location, and we should therefore not lose them from the record.
If anyone is in any doubt about the cyclical nature of climate and its effect on the temperature data recorded worldwide, take a look at this post on
mapping global warming (Figures 8, 9 and 10 in particular)
Also Mark (04:31:39) : said it well – I concur.

March 8, 2010 5:57 am

Re: Frank K. (Mar 8 05:32),
Well, Celsius itself is an “anomaly” – it’s the difference between a temperature and the freezing point of water. And it’s fine for most thermodynamics. Anomalies can tell you about change in world average temperature, which is generally proportional to change in heat content (except for phase change etc).

Tim Clark
March 8, 2010 6:01 am

Nick Stokes (02:22:28) :
E.M.Smith does not like anomalies, and likes to do his analysis with absolute temperatures. In that world, the “march of the thermometers” towards the Equator, or wherever, may have caused a real temperature bias.
But the climate scientists do it differently. They do two things that prevent that bias. One is the use of anomalies. That is, you form the global mean by averaging differences of station temps from their local mean over a fixed period. If you are only looking at temperatures relative to those means, it scarcely matters whether stations being dropped are hot or cold. It only matters whether they are rising relative to that local long term mean.

jack mosevich
March 8, 2010 6:11 am

Do land based thermometers matter any more what with Dr. Roy’s satellite measurements?

B.D.
March 8, 2010 6:12 am

The other main protection is gridding. The global average is not an average of stations. It’s an average of grid cell averages, which you can see on the GISS plots.
It should be noted that gridding causes the influence of the dense (e.g. N. America) networks to be reduced. With gridding, the globe is significantly warmer now than the 1930s. Without it, the globe is barely warmer.
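B.D.’s point about gridding can be shown with a toy numerical sketch (the station counts and anomaly values here are invented for illustration, not real GISS data): a plain average over all stations is dominated by the densely sampled region, while averaging grid-cell means gives each cell one vote.

```python
# Sketch (illustrative numbers only): why a plain station average
# over-weights densely sampled regions, and how gridding corrects it.
import numpy as np

# Hypothetical anomalies: 100 stations in one well-sampled cell
# (say +0.2 C) and 5 stations in a sparsely sampled cell (say +1.0 C).
dense_cell = np.full(100, 0.2)
sparse_cell = np.full(5, 1.0)

stations = np.concatenate([dense_cell, sparse_cell])
plain_mean = stations.mean()          # dominated by the dense cell: ~0.24

# Gridded mean: average the per-cell averages, one vote per cell.
grid_mean = np.mean([dense_cell.mean(), sparse_cell.mean()])  # 0.6
print(plain_mean, grid_mean)
```

The same arithmetic explains B.D.’s remark that gridding reduces the influence of the dense North American network on the global figure.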

Tim Clark
March 8, 2010 6:17 am

Nick Stokes (02:22:28) :
E.M.Smith does not like anomalies, and likes to do his analysis with absolute temperatures. In that world, the “march of the thermometers” towards the Equator, or wherever, may have caused a real temperature bias.
But the climate scientists do it differently. They do two things that prevent that bias. One is the use of anomalies. That is, you form the global mean by averaging differences of station temps from their local mean over a fixed period. If you are only looking at temperatures relative to those means, it scarcely matters whether stations being dropped are hot or cold. It only matters whether they are rising relative to that local long term mean.
Uh, Nick, I think you need to rethink that statement. We’re not talking absolute station temp here. If you drop stations that are showing no trend or a cooling trend and leave mostly stations that have a warming trend (airports), you bias the trend upward, regardless of gridding.
But the climate scientists do it differently.
Yes, we have noticed they are somewhat……kinky.

Tim Clark
March 8, 2010 6:30 am

Forgive the double post, it’s a Monday.

Frank K.
March 8, 2010 6:32 am

Nick Stokes (05:57:35) :
Re: Frank K. (Mar 8 05:32),
“Well, Celsius itself is an ‘anomaly’ – it’s the difference between a temperature and the freezing point of water. And it’s fine for most thermodynamics.”
Actually, the appropriate temperature scales for thermodynamics are Kelvin or Rankine degrees, as you know, since these are absolute scales. Try using Celsius (or Fahrenheit) in the ideal gas law…
“Anomalies can tell you about change in world average temperature, which is generally proportional to change in heat content (except for phase change etc).”
Is that correct? The heat content is given by the first law of thermodynamics, which can be written for a closed system as:
dE/dt = Q-W
where E is the sensible+potential+kinetic energies, Q is the heat transfer, and W is the work transfer. If you integrate this equation over a given time interval, you see that the change in energy E is relative to an ** initial ** state E(t=0), not to some time-averaged state!
Anomalies are useful for characterizing trends and interpolating data, but one shouldn’t endow them with any thermodynamic meaning. Moreover, this idea that you ** can’t ** use absolute temperatures to characterize global surface temperatures is just silly.

March 8, 2010 6:38 am

“Gridding” is not a solution to missing data. One cannot manufacture statistical precision by averaging an existing dataset into subgroups. The only way to create extra statistical precision is by adding more independent (and hopefully, identically distributed) data. The solution to missing data is to use an analytical method that does not rely on a uniformly populated data space.
This is similar to “naive” analysis of econometric and financial data (my fields) in which data gaps are “filled forward.” Although that operation is defensible as a prediction of what value the missing data would actually have taken it is *not* useful when performing an analysis of the entire dataset. In creates an autocorrelation in the process which leads to misleading error analysis.

March 8, 2010 6:38 am

vjones (05:56:30)
” I understand that much of “Global Warming” has actually been shown to be “Winter warming”; minimum temperatures are not as low.”
That would be consistent with a simple acceleration of the longitudinal progression of air circulation systems.
Faster west/east movement such as we did see during the period 1975 to 2000 would mean that on average air would spend less time over the continents where fastest cooling occurs.
Slower west/east movement would allow continental interiors to cool off more and lead to a cessation of the apparent warming effect.
Are we quite sure that all we have been observing is not just a simple function of the speed of movement of air masses around the globe ?
Losing the stations most likely to be affected would be a neat method of ‘hiding a decline’ now that the speed appears to have fallen once more.

Pascvaks
March 8, 2010 6:39 am

Tis not the thermometer that is the problem..
Tis the reader of the thermometer that is the problem.
Tis not the number on the thermometer that is the problem..
Tis the number the computer program crunches that is the problem.

March 8, 2010 6:57 am

Well, Celsius itself is an “anomaly” – it’s the difference between a temperature and the freezing point of water. And it’s fine for most thermodynamics.

Nope, thermodynamics is based on absolute temperature scales. It must be this way, otherwise the results are dependent on the relative scales used.

Sarnac
March 8, 2010 7:01 am

Simple airport temperature anomaly test …
1: Identify a list of temperature sensors at airports inside the US
2: Look for a temperature trend DROP on 9/12 and 9/13 2001 when US airports were completely closed down compared to 9/10 2001, when airports were at regular-use levels.
3: Compare these temperatures to the change on 9/10 to 9/12 & 9/13 of 2000, 2002, 2003, … up to 2009
Granted, these are different days, and 2-3 days later in the early fall, but if the drop from 9/10 to 9/12 in 2001 is statistically significant compared to the change in all the other years, then we can say those airports are jet-usage-biased.

Sarnac
March 8, 2010 7:22 am

Re: vjones (05:56:30) :
“I understand that much of “Global Warming” has actually been shown to be ‘Winter warming’; minimum temperatures are not as low.”
So if the mins rise but the maxes don’t, this sounds like some form of validation of Willis Eschenbach’s Thunderstorm Thermostat Hypothesis (and its follow-up, sense-and-sensitivity)
So …
1: the planet warms (coming out of the Little Ice Age) (IMHO without statistically significant human help, but that is irrelevant to this argument)
2: assume everywhere warms evenly (absurd but useful for trivial analysis)
3: but where the planet starts to locally overheat, it follows Eschenbach’s logic and locally-thunderstorms, blocking energy absorption by putting up a sun-reflecting “umbrella” of white-topped thunderclouds, increasing local albedo.
THIS IS WONDERFUL

geo
March 8, 2010 7:25 am

It seems to me the range of skeptics runs to three main lines of thought:
1). The globe isn’t warming at all –we’re measuring it wrong (siting, land use, dropouts, whatever).
2). The globe is warming somewhat, but not as much as we think because we’re measuring it wrong, and what is left if you could do it right falls well within the range of expected natural variability.
3). The globe is warming somewhat, but not as much as we think because we’re measing it wrong, and after you take out what is reasonable to expect for natural variability, the C02 warming signal is much less (to the point of not being a threat in any urgent timeframe and a smallish fraction of what IPCC assigns to C02).
Reading Chiefio’s article, he seems to be firmly in camp #1. I respect that he gets there by intense “data diving” of thermometer data. I also recognize that if an analysis seems to conflict with the “real world”, it’s likely there’s a flaw with the analysis rather than the real world, even if I can’t put my finger on exactly what it is –the famous “bumble-bees can’t fly; here’s the analysis proving it” scenario.
I generally find myself in catagory 3.

Richard M
March 8, 2010 7:30 am

I believe Nick Stokes has admitted he has never read chiefio’s reports yet tells us that he knows exactly what he has done. Sounds exactly like Nick’s work with Miskolczi.
In other words, he’s so embedded in groupthink that he refuses to objectively read something that might change his beliefs. Is that about right, Nick?

carrot eater
March 8, 2010 7:50 am

Tim Clark (06:17:00) :
” We’re not talking absolute station temp here.”
It’s all you see as you flip through EM Smith’s work, or the SPPI report. So clearly, somebody is talking absolute station temps. EM Smith calls it ‘measuring the data’, or somesuch.
“If you drop stations that are showing no trend or a cooling trend and leave mostly stations that have a warming trend (airports), you bias the trend upward, regardless of gridding.”
Indeed you would. Which is exactly the point of the analysis of Zeke, clear climate code, and Tamino. Before the time of the station number drop, the global trends calculated from the dropped stations and the surviving stations are the same.

carrot eater
March 8, 2010 7:53 am

Dan Hughes (06:57:37) :
“Nope, thermodynamics is based on absolute temperature scales. It must be this way, otherwise the results are dependent on the relative scales used.”
Depends on what you’re doing. In many cases, simply knowing the relative change or difference in temperature is enough. If all you want to know is whether something is increasing or decreasing in temperature, then relative changes are all you need.

March 8, 2010 7:56 am

geo (07:25:50),
It appears that one of your three scenarios is correct. There may be some overlap, because we don’t have all the answers [and we certainly don’t have sufficient data due to its being “lost,” repeatedly adjusted, fabricated, etc.]
I’m with Prof Lindzen [probably between #2 and #3, with 3 being most likely]. Lindzen thinks the climate sensitivity is below 1, which makes the effect of CO2 insignificant, even if it doubles from here, which is very unlikely.
That’s why the alarmist crowd is flailing around, looking for an alternate excuse to force Cap & Trade. Methane currently seems to be their fallback position.

Sarnac
March 8, 2010 8:26 am

oops … last comment incomplete … hit tab instead of turning off CAPS, then space and accidentally posted … (WUWT could really use a preview button as the first tab after the comment-post-box) … continuing …
THIS IS WONDERFUL
Instead of overheating the planet, we instead are un-freezing after the LIA …
Places that are warm won’t get that much warmer if at all, but the likely will get wetter (and grow more crops and other plants) … and interestingly, the sahara is turning green
(darn … WUWT link has quotes IN the URL, don’t know if my href= can use single-quotes, so here it is again: http://wattsupwiththat.com/2009/12/16/another-al-gore-reality-check-“rising-tree-mortality”)
Places that are colder will get warmer (so they can extend their growing season … and drop more plants and crops).
So the thunderstorms dynamically thermo-regulate, we get more food just as we seem to really need it (human populations are expected to peak between 8B and 10Billion around 2050(wikipedia) to 2070(Nature, $, 2001)) … and hopefully this isn’t the warm-bounce before the end of the Holocene interglacial optimum

March 8, 2010 8:39 am

carrot eater (07:53:33) :
Nope, thermodynamics is based on absolute temperature scales. All thermodynamics textbooks will clearly state this fact early on in the text.
As to the correctness on my statement I will point to all textbooks that are presently in use at all universities on the entire planet.
Now, you point me to texts that say that relative temperature scales can be use in thermodynamics. As Frank K noted above, try using C or F in the perfect gas law.

mikef2
March 8, 2010 8:47 am

Hi Carrot Eater,
I’ve just posted a similar comment over at Lucias, as I’m serially lurking today.
I’m really trying hard to get my head arround this issue, which at Lucias seems to be “Smith has got it wrong” but I think the Lucia crowd are starting from a position of acceptenance of the raw data, where Smith is not, so your comment above makes me ask again, because I’m really not sure, if you could explain to me how such a thing as the Campbell island scenario can be handled by the anomoly method.
As I understand it, EMSmith has looked at raw data for New Zealand including Campbell island and this shows no significant temp trend.
Campbell island is then dropped from the network, and without it New Zealand now shows a positive warming trend. So he is showing that its the choice of data that creates the bias.
How can any anomaly method get around this….the Temp trend today shows a positive trend compared to the baseline which had Campbell island in it (as it is bound to as campbell island was a ‘cold’ input) but surely todays trend is just a statistical artifact because Campbell is no longer recorded.
Isn’t this fundementally biasing the result?

Verified by MonsterInsights