I’ve been away from WUWT this weekend, recovering from a cold and spending time with visiting family, so I’m just now getting back to regular posting. Recently on the web there has been a lot of activity and discussion around the dropping of climatic weather stations, aka “the march of the thermometers,” which Joe D’Aleo and I reported on in this compendium report on issues with surface temperature records.
Most of the station dropout issue covered in that report is based on the hard work of E. M. Smith, aka “chiefio“, who has been aggressively working through the data bias issues that develop when thermometers have been dropped from the Global Historical Climate Network. My contribution to the study of the dropout issue was essentially zero, as I focused on contributing what I’ve been studying for the past three years, the USHCN. USHCN has had a few station dropout issues, mostly due to closure, but nothing compared to the magnitude of what has happened in the GHCN.
That said, the GHCN station dropout Smith has been working on is a significant event, going from an inventory of 7000 stations worldwide to about 1000 now, and with lopsided spatial coverage of the globe. According to Smith, there’s also been an affinity for retaining airport stations over other kinds of stations. His count shows 92% of GHCN stations in the USA are sited at airports, with about 41% worldwide.
The dropout issue has been known for quite some time. Here’s a video that WUWT contributor John Goetz made in March 2008 that shows the global station dropout issue over time. You might want to hit the pause button at time 1:06 to see what recent global inventory looks like.
The question being debated is how that dropout affects the outcome of absolutes, averages, and trends. Some say that while the data bias issues show up in absolutes and averages, they don’t affect trends at all when anomaly methods are applied.
Over at Lucia’s Blackboard blog there have been a couple of posts on the issue that raise some questions about methods. I’d like to thank both Lucia Liljegren and Zeke Hausfather for exploring the issue in an “open source” way. All the methods and code used have been posted there, which enables a number of people to examine and replicate the analysis independently. That’s good.
E. M. Smith (“chiefio”) has completed a very detailed response to the issues raised there and elsewhere. You can read his essay here.
His essay is lengthy; I recommend giving yourself more than a few minutes to take it all in.
Joe D’Aleo and I will have more to say on this issue also.
To be fair to Stokes, he did mention that they were looking at changes in temperature, and delta T is the same on the Celsius (relative) and Kelvin (absolute) scales.
Carrot Eater,
If all you want to know is whether something is increasing or decreasing in temperature, then relative changes are all you need.
What is that ‘something’? Is it an object, a location? If so then you can only determine an increase/decrease from a continuous series of readings taken at that point. A break in the readings means that you can only make the delta determination within the time-period of the continuous sequence. And so, you will have to start again with the start of the next unbroken sequence.
I’ll give you a simple analog. Say I am the weather recorder for my village, but I go on holiday every year for the month of August, and so no records are taken. It follows that my records are useless for determining the average annual village temperature!
And, in comparison or in concert with other local village records, my records can ONLY be included in creating averages of monthly temperatures where I have data. As I have no records for any August, my records cannot be used for annual averaging. And any attempt to overcome this problem by filling in or averaging by reference to other local site records will result in an UNKNOWABLE ERROR in the result.
…or to use Lucia’s Toy Planet analogy…
I’ve got 10 temp readings on Toy Island and they average 21C
I take one away so I’ve only got 9 readings and now my average is 22C
Therefore I’ve warmed by 1C
…unless the one I took away was always ‘too cold’ and the temp was actually always 22C…..in which case I’m at the same temp as I always was but I now think I’m at +1C
???????
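Here is a minimal sketch of that arithmetic in Python (the numbers are invented for illustration, not real station data): if you average absolute readings, dropping the cold station shifts the mean by a degree; if each station is compared to its own baseline first, dropping it does not, by itself, shift the average anomaly.

    # Toy Island: ten hypothetical readings in deg C; one station runs "too cold".
    readings = [22.0] * 9 + [12.0]
    mean_all = sum(readings) / len(readings)        # 21.0 C
    mean_after_drop = sum(readings[:9]) / 9         # 22.0 C -- looks like +1 C of "warming"
    print(mean_all, mean_after_drop)

    # Anomaly view: subtract each station's own baseline before averaging.
    baselines = readings[:]   # assume each station's baseline equals its normal reading
    anomalies = [r - b for r, b in zip(readings, baselines)]
    print(sum(anomalies) / 10, sum(anomalies[:9]) / 9)   # both 0.0 -- no spurious step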
Off topic: apparently carbon dioxide is depleting the oceans’ oxygen: http://news.yahoo.com/s/mcclatchy/20100307/sc_mcclatchy/3444187
— I’m no expert but couldn’t this be more easily blamed on other kinds of pollution like fertilizer runoff?
geo (07:25:50) :
It seems to me the range of skeptics runs to three main lines of thought: …
Another camp: warming isn’t necessarily evil (and probably good for mankind), and even if we could do something about it, we probably shouldn’t.
Or another: it is impossible to distinguish “naturally” occurring CO2 from man-caused emissions, and ridiculous to even have a conversation about it.
If losing thermometers “doesn’t matter” then why not just remove all except one and use the one remaining thermometer to plot the graphs.
This could be in a big “uber-grid” representing the whole world – and we have already learned that thermometers can be dropped from a grid without affecting the grid.
@Sarnac (07:01:02) :
Oooh, I kinda like that one! There may be some other adjustments to look at to take changing weather patterns out (say a front is so ungracious as to choose just that moment to move through). You’d probably want to compare the temp changes at other sites close to those airports to confirm their temps were reasonably close to each other on 9/10 vs. 9/12-9/13.
Doesn’t help with all that tarmac, though… just the operation of the jets themselves, and all those cars going to and fro in the close vicinity (which wouldn’t have been there on those days).
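For what it’s worth, a rough sketch of how such a check might be scripted (the paired-station file, its columns, and the date windows are illustrative assumptions, not an existing dataset):

    import pandas as pd

    # Hypothetical file: one row per station-day with columns
    #   pair_id, date, site_type ("airport" or "neighbor"), tmean (deg C)
    df = pd.read_csv("paired_daily_means.csv", parse_dates=["date"])

    # Difference between each airport and its nearby non-airport neighbor, per day.
    wide = df.pivot_table(index=["pair_id", "date"], columns="site_type", values="tmean")
    wide["diff"] = wide["airport"] - wide["neighbor"]

    dates = wide.index.get_level_values("date")
    normal = wide.loc[(dates >= "2001-09-08") & (dates <= "2001-09-10"), "diff"].mean()
    grounded = wide.loc[(dates >= "2001-09-12") & (dates <= "2001-09-13"), "diff"].mean()
    print(normal - grounded)   # positive: airports ran warmer than neighbors while planes were flying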
Was the HO-83 hygrothermometer issue taken into account for all this? Weren’t they replaced during roughly the same time frame as the ‘march of the thermometers’?
I have no idea how GISS manages to justify this:
http://gallery.surfacestations.org/main.php?g2_view=core.DownloadItem&g2_itemId=57846&g2_serialNumber=2
when the raw data for Ashland, Oregon looks like this:
http://www.robertb.darkhorizons.org/TempGr/AshOre1.GIF
other than to use the former as an excuse to drop stations, after monkeying with the data.
Smokey (07:56:34) :
Methane, running all round my brain. They say it will kill you faster than previously imagined, but they won’t say when.
Smokey,
Why do you favour Lindzen’s estimate of climate sensitivity over the large number of studies which have estimated it to be much higher, particularly in light of the response by Trenberth et al? And what makes you think that the “alarmists” want to force Cap & Trade on everyone, and why would they?
Stephen Wilde (06:38:36) :
“Are we quite sure that all we have been observing is not just a simple function of the speed of movement of air masses around the globe ?”
Interesting – I hadn’t thought of that angle or heard it discussed anywhere, but it seems inherently sensible to consider it.
geo (07:25:50) :
From a position of being a warmist (several years ago, before my skeptic husband challenged me to actually look at the science) I moved to Cat 3, then to Cat 1 (as I dug deeper). I’m probably now in Cat 2, but basically open to whatever I find the data says.
Ref – geo (07:25:50) :
“It seems to me the range of skeptics runs to three main lines of thought:
1). The globe isn’t warming at all – we’re measuring it wrong (siting, land use, dropouts, whatever).
2). The globe is warming somewhat, but not as much as we think because we’re measuring it wrong, and what is left if you could do it right falls well within the range of expected natural variability.
3). The globe is warming somewhat, but not as much as we think because we’re measuring it wrong, and after you take out what is reasonable to expect for natural variability, the CO2 warming signal is much less (to the point of not being a threat in any urgent timeframe and a smallish fraction of what the IPCC assigns to CO2).
_______________________
geo, I’m a 4).
4.) The globe is warming somewhat, the way it always does following a “Little Ice Age” type of event; one day it will cool again, the way it always does following a Warming type of event; the thermometers we’re using are OK; the computers we’re using are OK; it’s the computer programmers and Chicken Littles in the World that you have to look out for, and squash whenever they start squawking; it’s also the Fat Alberts of the World that you need to throw rotten fruit and vegetables at, whenever they pull into town with their snakeoil cures and rainmaker gizmos.
I will presume to summarize what E. M. Smith is doing.
1) GISS and similar programs have taken raw temperature DATA as input and, after treatment, come out with an OUTPUT that says the world is catastrophically warming.
2) To check this prophecy:
a) he looked at the programs coming out with the prophecy of catastrophe
b) he looked at the DATA that are used by the programs to come out with the prophecy
It is the raw DATA that are being described as “the march of the thermometers”.
If, for example, he looked at the raw data and found that it was numbers from the New York phone book, it would be clear to all that no matter what the program did, the output would be nonsense.
He did look at the data, and he found that systematically cold places, etc., were dropped, and that the DATA entering as input to the program, before any manipulation, are biased towards the prophecy of warming. This is before looking at grid averages, etc., and whether there are errors within the programs. The DATA are biased.
Biases have to be corrected to use the data meaningfully.
He has not found, in the programs that prophesy catastrophe and would stampede the western world into economic stagnation, any effort to acknowledge and correct for these biases.
If a Pamina or a Papageno has a program that corrects for these biases, it cannot correct the IPCC report that has used the biased data to project catastrophe unless the world repents with a major “mea culpa”.
That’s about it, as far as the march of the thermometers goes.
Dan Hughes (08:39:39) :
Take a look in your thermo textbooks, and show me where it says that it matters whether you express a change in temperature in Kelvin or Celsius.
Yes, you need the absolute temperature in the gas laws. You need the absolute temperature for Stefan-Boltzmann law.
You definitely don’t need absolute temperatures to express how much a temperature has risen or dropped over time.
And this is exactly what we’re doing here – seeing how much a temperature has risen or dropped over time.
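For example, a reading that climbs from 10 °C to 11 °C has climbed from 283.15 K to 284.15 K; the change is 1 degree either way, because the 273.15 offset cancels in the subtraction.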
Jack Hughes (09:14:25) :
“If losing thermometers “doesn’t matter” then why not just remove all except one and use the one remaining thermometer to plot the graphs.”
Losing thermometers matters if it leaves you undersampled.
Extreme examples, to illustrate the point:
Say you have 1,000 weather stations in North Dakota. All of them say that North Dakota is warming at some x C/decade, +/- some variance. Losing a few of these stations won’t make any significant difference; you simply don’t need that many weather stations to get a good idea of the temperature trends in North Dakota. This is oversampling.
On the other hand, say you only have three thermometers in Brazil. Say northern Brazil is warming, and two of your thermometers are there. Southern Brazil is cooling, and you had one thermometer there, but it broke. Now you have a problem because of the lost weather station. It leaves you undersampled: not enough measurements to describe the variations in trend over space.
What Zeke Hausfather, the Clear Climate Code project and Tamino have done is show that by using only the stations that remain after 1992, you get the same global results as you get by using the stations that had dropped off. This tells you that, overall, the missing and the current stations were not different in their trends before 1990.
Now, if you look hard enough, you might find some region where the dropped stations were behaving differently from the surviving ones, before 1990. This would be a problem if you wanted to know the trends at that particular location. But we’ve already seen that this would be the exception to the rule.
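A hedged sketch of that kind of check (the station table and the kept/dropped flag are illustrative assumptions; the actual analyses are the ones posted at the Blackboard and in Clear Climate Code):

    import numpy as np

    def trend_per_decade(years, anoms):
        """Least-squares slope of annual anomalies, in degrees per decade."""
        return np.polyfit(years, anoms, 1)[0] * 10.0

    def compare_groups(stations):
        """stations: dict of id -> (years, anomalies, still_reporting_after_1992)."""
        kept, dropped = [], []
        for years, anoms, survives in stations.values():
            pre = [(y, a) for y, a in zip(years, anoms) if y < 1990]
            if len(pre) < 20:                 # require a reasonably long pre-1990 record
                continue
            ys, az = zip(*pre)
            (kept if survives else dropped).append(trend_per_decade(ys, az))
        # Similar group means suggest the dropout did not bias the pre-1990 trend.
        return np.mean(kept), np.mean(dropped)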
Now, some parts of the earth were possibly undersampled both before and after 1990. The Arctic comes to mind. Maybe some parts of Africa. So adding measurements from there would be helpful.
anna v (10:12:12) :
“He did look at the data and he found that systematically cold places etc were dropped and the DATA entering as input to the program, before any manipulation, are biased towards the prophecy of warming.”
This is precisely why you need to use anomalies. If you use anomalies (and everybody does), then simply dropping a cold place will not make the average warmer. It just won’t.
However, dropping a place that was cooling will make the average trend warmer.
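Both points can be seen in a tiny made-up example (two hypothetical stations in the same grid cell; numbers invented for illustration):

    import numpy as np

    years = np.arange(1950, 1991)
    warm_station = 15.0 + 0.02 * (years - 1950)   # warm place, warming at +0.2 C/decade
    cold_station = -5.0 - 0.02 * (years - 1950)   # cold place, cooling at -0.2 C/decade

    def anomaly(series):
        return series - series[:30].mean()          # each station vs its own 1950-79 baseline

    both = (anomaly(warm_station) + anomaly(cold_station)) / 2
    print(np.polyfit(years, both, 1)[0] * 10)                    # ~0.0 C/decade with both stations
    print(np.polyfit(years, anomaly(warm_station), 1)[0] * 10)   # ~+0.2 C/decade once the cooling one drops out
    # Dropping a station merely because it is cold would not move either number,
    # since the anomaly step removes each station's absolute level.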
Anna V
That’s how I understand it too.
I just can’t get my head around how lots of seriously bright people want to ignore that part and go off on tangents about their elaborate math to make these anomalies. Cool, great, good luck to them. It makes no difference if the data is contaminated BEFORE they start… do they just not want to see what is painfully obvious, staring them in the face?
And the vibe back is that it’s just cherry picking a few sites to cause a stir… but the actual number of reporting sites used is shockingly low, so these ‘cherry picked’ sites start to take on some meaning.
Just to help Nick Stokes 2:22:18 and subsequent comments, and to build on Graham Giller 6:38:15, data manipulation to create anomalies doesn’t add any information to your data set, and gridding removes it.
So using anomalies isn’t a silver bullet that removes bias (removing or adjusting your dataset for corrupt data may).
In fact, the big risk with using anomalies is that the normalisation of the data seduces people into thinking that they have converted a data set from something that is clearly derived from sub-climates with quite different characteristics into something much more homogeneous.
They haven’t. To repeat: there is no more and no less information, and its quality stays the same (at least on the assumption you are using one-to-one deterministic transformations of your data).
Better for this reason alone not to use anomalies, because the point where you think it is simplifying things is the very point at which you are making unwarranted assumptions about your data (e.g. combining observations into averages without thinking about the underlying distribution of the individual observations).
Sarnac (07:01:02) : “Simple airport temperature anomaly test …”
I like it.
Re mikef2 (08:47:25) :
I do think this is the issue: if the bias (false warming or cooling over time, within the stations chosen and dropped that compose a given grid, compared with including all the available stations within the grid over the same period and arriving at a different anomaly and/or mean temperature) is in the GIStemp temperatures, then something is allowing the bias to leak through.
andrew adams (09:46:35) : “And what makes you think that the “alarmists” want to force Cap & Trade on everyone…”
Not every “alarmist” favors C&T. Hansen, IIRC, prefers taxing fossil fuels instead. But the almost universal “warmist” goal is to reduce CO2 emissions by making carbon combustion more expensive, forcing a switch to other energy sources that would otherwise be more expensive (some include nuclear in the list, others don’t).
“…and why would they?”
Cap & trade has the potential to make a lot of money for people on the inside, like Al Gore (carbon credits), commodities brokers, financial institutions, and not least the oil companies (who always make money when the price of oil rises). “Making” money out of a politically-created market (i.e. out of literally nothing) is a financial wizard’s dream; it’s even better than inflated mortgage-backed securities.
The CO2 emission reduction exercise is pointless, of course, because rising emissions from China, India, and other developing countries will negate any reductions by the gullible, who will end up paying for their reductions and for the dislocations (if any) their reductions were supposed to prevent. Guess Who ends up paying the bill?
Anyone know what the reason is for the “pulse” of blue reporting stations at the start of each decade? The number of blue stations goes up dramatically in years ending in zero and then drops immediately back to the norm. I don’t know if it’s important but at the very least it seems weird.
Nick Stokes,
OK, let’s say you were using a gridding system, and that particular grid was fairly “cold”. If thermometers were removed from higher elevations and more rural areas, whereas thermometers at lower elevations and in more urban areas were kept, should that not make the overall grid “less cold” than it was with all of the thermometers included?
If not, why not?
Re: carrot eater (Mar 8 10:31),
And this is exactly what we’re doing here – seeing how much a temperature has risen or dropped over time.
You have to realize that as far as global warming or cooling goes, which is the real problem we are being forced to face, it is the value of the temperature in Kelvin that is important. That is what turns temperature from a measure of whether to expect ice on the road or melting tarmac into a heat gauge. It is excess or lack of heat that will tell us whether the planet is warming or cooling.
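To see why the absolute value matters for the heat side of the argument, a back-of-the-envelope Stefan-Boltzmann calculation (blackbody assumed, purely illustrative): the same 1 K rise corresponds to a different change in radiated flux depending on where you are on the Kelvin scale.

    SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

    def flux(t_kelvin):
        return SIGMA * t_kelvin ** 4

    print(flux(251.0) - flux(250.0))   # ~3.5 W/m^2 extra for a 1 K rise near 250 K
    print(flux(301.0) - flux(300.0))   # ~6.1 W/m^2 extra for the same 1 K rise near 300 K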
An attempt has been made to make a heat budget for the earth, but as far as I can see in the 2008 Trenberth paper it was done using satellite data, not temperature data.
The temperature is measured at 2 m in the atmosphere. Most of the heat content, or lack thereof, is in the surface layers of ocean and land. If there were no winds, no evaporation, etc., the surface temperature and the 2 m temperature would be in equilibrium, and one could use the 2 m temperature to gauge the heat radiated.
Because there are convection currents, storm systems, etc., and large bodies of atmosphere are moved over surface areas whose temperature they do not match, the correspondence of energy with temperature is broken. Thus there could be large changes, in a non-linear manner, because of planet-wide redistributions of heat or the lack thereof. When one comes to calculating anomalies over the average temperature of the region, the correlation with what is really happening with the heating and cooling of the planet is third hand. It could be that the temperatures are absolutely stable and the anomalies show heating because of peculiar redistributions of heat (PDO, ENSO, etc.), as seen in http://nsidc.org/images/arcticseaicenews/20100303_Figure4.png .
If anybody has a link to the time dependence of “radiation energy in minus radiation energy out” from satellites, I would really like to see it.