The Smoking Gun At Darwin Zero

by Willis Eschenbach

People keep saying “Yes, the Climategate scientists behaved badly. But that doesn’t mean the data is bad. That doesn’t mean the earth is not warming.”

Darwin Airport – by Dominic Perrin via Panoramio

Let me take the second objection first. The earth has generally been warming since the Little Ice Age, around 1650, and there is broad agreement that it has warmed since then (see e.g. Akasofu). Climategate doesn’t affect that.

The other question, the integrity of the data, is different. People say “Yes, they destroyed emails, and hid from Freedom of Information requests, and messed with proxies, and fought to keep other scientists’ papers out of the journals … but that doesn’t affect the data, the data is still good.” Which sounds reasonable.

There are three main global temperature datasets. One is at the CRU, the Climatic Research Unit of the University of East Anglia, where we’ve been trying to get access to the raw numbers. One is at NOAA/GHCN, the Global Historical Climatology Network. The final one is at NASA/GISS, the Goddard Institute for Space Studies. The three groups take raw data and “homogenize” it to remove artifacts such as the 2C jump that appears when a station is moved to a warmer location. The three global temperature records are usually called CRU, GISS, and GHCN. Both GISS and CRU, however, get almost all of their raw data from GHCN, and all three produce very similar global historical temperature records from it.

So I’m still on my multi-year quest to understand the climate data. You never know where this data chase will lead. This time, it has landed me in Australia. I got to thinking about Professor Wibjorn Karlen’s statement about Australia that I quoted here:

Another example is Australia. NASA [GHCN] only presents 3 stations covering the period 1897-1992. What kind of data is the IPCC Australia diagram based on?

If any trend it is a slight cooling. However, if a shorter period (1949-2005) is used, the temperature has increased substantially. The Australians have many stations and have published more detailed maps of changes and trends.

The folks at CRU told Wibjorn that he was just plain wrong. Here’s what they said is correct: the record Wibjorn was talking about, Fig. 9.12 in the UN IPCC Fourth Assessment Report, showing Northern Australia:

Figure 1. Temperature trends and model results in Northern Australia. Black line is observations (from Fig. 9.12 of the UN IPCC Fourth Assessment Report). Covers the area from 110E to 155E and from 30S to 11S. Based on the CRU land temperature data.

One of the things revealed in the released CRU emails is that the CRU basically uses the Global Historical Climatology Network (GHCN) dataset for its raw data. So I looked at the GHCN dataset. There I found three stations in Northern Australia, as Wibjorn had said, and nine stations in all of Australia, that cover the period 1900-2000. Here is the average of the GHCN unadjusted data for those three Northern stations, from AIS:

Figure 2. GHCN Raw Data, All 100-yr stations in IPCC area above.
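For those who want to follow along at home, here’s roughly how I’d compute that sort of regional average in Python. It’s only a sketch: it assumes the GHCN v2 “v2.mean” fixed-width layout (11-character station ID, a duplicate digit, a 4-character year, then twelve 5-character monthly means in tenths of a degree C, with -9999 for missing), and the station IDs and file name below are placeholders, not the actual GHCN identifiers.

# Rough sketch: average raw GHCN v2 monthly records into annual anomalies.
# Assumes the v2.mean fixed-width layout described above; station IDs and
# file name are placeholders only.
from collections import defaultdict

STATIONS = {"50194120000", "50194132000", "50194300000"}  # hypothetical IDs
BASELINE = range(1961, 1991)                              # anomaly base period

def read_v2_mean(path):
    """Return {station: {year: mean annual temperature in deg C}}."""
    data = defaultdict(dict)
    with open(path) as f:
        for line in f:
            stn, year = line[:11], int(line[12:16])
            if stn not in STATIONS:
                continue
            months = [int(line[16 + 5 * i:21 + 5 * i]) for i in range(12)]
            good = [m / 10.0 for m in months if m != -9999]
            if len(good) >= 10:                           # need most of the year
                data[stn][year] = sum(good) / len(good)
    return data

def anomalies(series):
    base = [t for y, t in series.items() if y in BASELINE]
    ref = sum(base) / len(base)
    return {y: t - ref for y, t in series.items()}

data = read_v2_mean("v2.mean")                            # placeholder path
stn_anoms = {stn: anomalies(s) for stn, s in data.items()}
years = sorted({y for s in stn_anoms.values() for y in s})
regional = {}
for y in years:
    vals = [s[y] for s in stn_anoms.values() if y in s]
    regional[y] = sum(vals) / len(vals)                   # simple equal-weight mean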

So once again Wibjorn is correct: this looks nothing like the corresponding IPCC temperature record for Australia. But it’s too soon to tell. Professor Karlen is only showing three stations. Three is not a lot, but that’s all of the century-long Australian records we have in the IPCC-specified region. OK, we’ve seen the longest station records, so let’s throw more records into the mix. Here’s every station in the IPCC-specified region with temperature records that extend to the year 2000, no matter when they started; that’s 30 stations.

Figure 3. GHCN Raw Data, All stations extending to 2000 in IPCC area above.

Still no similarity with IPCC. So I looked at every station in the area. That’s 222 stations. Here’s that result:

Figure 4. GHCN Raw Data, all 222 stations in IPCC area above.

So you can see why Wibjorn was concerned. This looks nothing like the UN IPCC data, which came from the CRU, which was based on the GHCN data. Why the difference?

The answer is, these graphs all use the raw GHCN data. But the IPCC uses the “adjusted” data. GHCN adjusts the data to remove what it calls “inhomogeneities”. So on a whim I thought I’d take a look at the first station on the list, Darwin Airport, so I could see what an inhomogeneity might look like when it was at home. And I could find out how large the GHCN adjustment for Darwin inhomogeneities was.

First, what is an “inhomogeneity”? I can do no better than quote from GHCN:

Most long-term climate stations have undergone changes that make a time series of their observations inhomogeneous. There are many causes for the discontinuities, including changes in instruments, shelters, the environment around the shelter, the location of the station, the time of observation, and the method used to calculate mean temperature. Often several of these occur at the same time, as is often the case with the introduction of automatic weather stations that is occurring in many parts of the world. Before one can reliably use such climate data for analysis of longterm climate change, adjustments are needed to compensate for the nonclimatic discontinuities.

That makes sense. The raw data will have jumps from station moves and the like. We don’t want to think it’s warming just because the thermometer was moved to a warmer location. Unpleasant as it may seem, we have to adjust for those as best we can.

I always like to start with the rawest data, so I can understand the adjustments. At Darwin there are five separate individual station records that are combined to make up the final Darwin record. These are the individual records of stations in the area, which are numbered from zero to four:

Figure 5. Five individual temperature records for Darwin, plus station count (green line). This raw data is downloaded from GISS, but GISS uses the GHCN raw data as the starting point for its analysis.

Darwin does have a few advantages over other stations with multiple records. There is a continuous record from 1941 to the present (Station 1). There is also a continuous record covering a century. Finally, the stations are in very close agreement over the entire period of the record. In fact, where there are multiple stations in operation they are so close that you can’t see the records behind Station Zero.

This is an ideal station, because it also illustrates many of the problems with the raw temperature station data.

  • There is no one record that covers the whole period.
  • The shortest record is only nine years long.
  • There are gaps of a month and more in almost all of the records.
  • It looks like there are problems with the data at around 1941.
  • Most of the datasets are missing months.
  • For most of the period there are few nearby stations.
  • There is no one year covered by all five records.
  • The temperature dropped over a six year period, from a high in 1936 to a low in 1941. The station did move in 1941 … but what happened in the previous six years?

In resolving station records, it’s a judgment call. First off, you have to decide if what you are looking at needs any changes at all. In Darwin’s case, it’s a close call. The record seems to be screwed up around 1941, but not in the year of the move.

Also, although the 1941 temperature shift seems large, I see a similar-sized shift from 1992 to 1999. Looking at the whole picture, I think I’d vote to leave it as it is; that’s always the best option when you don’t have other evidence. First, do no harm.

However, there’s a case to be made for adjusting it, particularly given the 1941 station move. If I decided to adjust Darwin, I’d do it like this:

Figure 6. A possible adjustment for Darwin. Black line shows the total amount of the adjustment, on the right scale, and shows the timing of the change.

I shifted the pre-1941 data down by about 0.6C. We end up with little change end to end in my “adjusted” data (shown in red); it’s neither warming nor cooling, although the adjustment does reduce the apparent cooling in the raw data. Post-1941, where the other records overlap, they are very close, so I wouldn’t adjust them in any way. Why should we? They all show exactly the same thing.
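For the record, the “adjustment” I’m describing is nothing fancy; in code it’s a one-liner. Here’s a sketch (the 1941 breakpoint and the 0.6C offset come from the discussion above; the data structure and names are placeholders):

def step_adjust(series, breakpoint_year=1941, offset=-0.6):
    """Shift every value before the breakpoint by a constant offset (deg C).

    The single-step adjustment discussed above: pre-1941 values move down by
    about 0.6C, later values are left alone. 'series' is {year: temperature}.
    """
    return {year: (temp + offset if year < breakpoint_year else temp)
            for year, temp in series.items()}

# e.g. darwin_zero_adjusted = step_adjust(darwin_zero_raw)   # placeholder names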

OK, so that’s how I’d homogenize the data if I had to, but I vote against adjusting it at all. It only changes one station record (Darwin Zero), and the rest are left untouched.

Then I went to look at what happens when the GHCN removes the “inhomogeneities” to “adjust” the data. Of the five raw datasets, the GHCN discards two, likely because they are short and duplicate existing longer records. The three remaining records are first “homogenized” and then averaged to give the “GHCN Adjusted” temperature record for Darwin.

To my great surprise, here’s what I found. To explain the full effect, I am showing this with both datasets starting at the same point (rather than ending at the same point as they are often shown).

Figure 7. GHCN homogeneity adjustments to Darwin Airport combined record

YIKES! Before getting homogenized, temperatures in Darwin were falling at 0.7 Celsius per century … but after the homogenization, they were warming at 1.2 Celsius per century. And the adjustment that they made was over two degrees per century … when those guys “adjust”, they don’t mess around. And the adjustment has an odd shape, rising first in steps and then climbing to stop at about 2.4C.
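For anyone wondering where the per-century numbers come from: they’re just ordinary least-squares slopes fitted to the annual series. Here’s a quick sketch, assuming the annual anomalies are in a Python dict keyed by year (the variable names are placeholders):

import numpy as np

def trend_per_century(series):
    """Least-squares linear trend of an annual series, in degrees C per century."""
    years = np.array(sorted(series))
    temps = np.array([series[y] for y in years])
    slope_per_year = np.polyfit(years, temps, 1)[0]
    return 100.0 * slope_per_year

# trend_per_century(raw_darwin) vs. trend_per_century(adjusted_darwin)
# should come out near -0.7 and +1.2 if the figure above is right.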

Of course, that led me to look at exactly how the GHCN “adjusts” the temperature data. Here’s what they say:

GHCN temperature data include two different datasets: the original data and a homogeneity-adjusted dataset. All homogeneity testing was done on annual time series. The homogeneity-adjustment technique used two steps.

The first step was creating a homogeneous reference series for each station (Peterson and Easterling 1994). Building a completely homogeneous reference series using data with unknown inhomogeneities may be impossible, but we used several techniques to minimize any potential inhomogeneities in the reference series.

In creating each year’s first difference reference series, we used the five most highly correlated neighboring stations that had enough data to accurately model the candidate station.

The final technique we used to minimize inhomogeneities in the reference series used the mean of the central three values (of the five neighboring station values) to create the first difference reference series.

Fair enough, that all sounds good. They pick five neighboring stations, and average them. Then they compare the average to the station in question. If it looks wonky compared to the average of the reference five, they check any historical records for changes, and if necessary, they homogenize the poor data mercilessly. I have some problems with what they do to homogenize it, but that’s how they identify the inhomogeneous stations.
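Here’s my reading of that recipe in code. It’s a rough sketch only; it is not GHCN’s actual program, and I’ve simplified the neighbour selection to a plain correlation ranking over already-aligned annual series:

import numpy as np

def reference_series(candidate, neighbours, n_select=5):
    """Rough sketch of the first-difference reference series described above.

    'candidate' and each entry of 'neighbours' are annual series (1-D arrays)
    aligned to the same years. Picks the n_select most correlated neighbours,
    then for each year averages the central three of their five first
    differences, and accumulates the result. A simplified reading of
    Peterson and Easterling (1994), not GHCN's actual code.
    """
    corrs = [np.corrcoef(candidate, nb)[0, 1] for nb in neighbours]
    best = [neighbours[i] for i in np.argsort(corrs)[-n_select:]]
    diffs = np.array([np.diff(nb) for nb in best])       # shape (5, n_years - 1)
    central3 = np.sort(diffs, axis=0)[1:-1, :]           # drop min and max each year
    ref_diff = central3.mean(axis=0)
    return np.concatenate([[0.0], np.cumsum(ref_diff)])  # origin is arbitrary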

OK … but given the scarcity of stations in Australia, I wondered how they would find five “neighboring stations” in 1941 …

So I looked it up. The nearest station that covers the year 1941 is 500 km away from Darwin. Not only is it 500 km away, it is the only station within 750 km of Darwin that covers the 1941 time period. (It’s also a pub, Daly Waters Pub to be exact, but hey, it’s Australia, good on ya.) So there simply aren’t five stations to make a “reference series” out of to check the 1936-1941 drop at Darwin.
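Checking the “how many stations within X km” question is easy enough if you have the GHCN station inventory of latitudes and longitudes. Here’s a rough sketch using the standard haversine great-circle formula; the Darwin coordinates are approximate and the inventory format is an assumption:

import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2.0 * r * math.asin(math.sqrt(a))

DARWIN = (-12.42, 130.89)    # approximate Darwin Airport coordinates

def stations_within(inventory, radius_km=750.0):
    """inventory: iterable of (station_id, lat, lon); returns IDs inside the radius."""
    return [sid for sid, lat, lon in inventory
            if haversine_km(DARWIN[0], DARWIN[1], lat, lon) <= radius_km]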

Intrigued by the curious shape of the average of the homogenized Darwin records, I then went to see how they had homogenized each of the individual station records. What made up that strange average shown in Fig. 7? I started at zero with the earliest record. Here is Station Zero at Darwin, showing the raw and the homogenized versions.

Figure 8. Darwin Zero Homogeneity Adjustments. Black line shows amount and timing of adjustments.

Yikes again, double yikes! What on earth justifies that adjustment? How can they do that? We have five different records covering Darwin from 1941 on. They all agree almost exactly. Why adjust them at all? They’ve just added a huge, artificial, totally imaginary trend to the last half of the raw data! Now it looks like the IPCC diagram in Figure 1, all right … but a six-degree-per-century trend? And in the shape of a regular stepped pyramid climbing to heaven? What’s up with that?

Those, dear friends, are the clumsy fingerprints of someone messing with the data Egyptian style … they are indisputable evidence that the “homogenized” data has been changed to fit someone’s preconceptions about whether the earth is warming.

One thing is clear from this. People who say that “Climategate was only about scientists behaving badly, but the data is OK” are wrong. At least one part of the data is bad, too. The Smoking Gun for that statement is at Darwin Zero.

So once again, I’m left with an unsolved mystery. How and why did the GHCN “adjust” Darwin’s historical temperature to show radical warming? Why did they adjust it stepwise? Do Phil Jones and the CRU folks use the “adjusted” or the raw GHCN dataset? My guess is the adjusted one since it shows warming, but of course we still don’t know … because despite all of this, the CRU still hasn’t released the list of data that they actually use, just the station list.

Another odd fact: the GHCN adjusted Station 1 to match Darwin Zero’s strange adjustment, but they left Station 2 (which covers much of the same period, and as per Fig. 5 is in excellent agreement with Station Zero and Station 1) totally untouched. They only homogenized two of the three. Then they averaged them.

That way, you get an average that looks kinda real, I guess, it “hides the decline”.

Oh, and for what it’s worth, care to know the way that GISS deals with this problem? Well, they only use the Darwin data after 1963, a fine way of neatly avoiding the question … and also a fine way to throw away all of the inconveniently colder data prior to 1941. It’s likely a better choice than the GHCN monstrosity, but it’s a hard one to justify.

Now, I want to be clear here. The blatantly bogus GHCN adjustment for this one station does NOT mean that the earth is not warming. It also does NOT mean that the three records (CRU, GISS, and GHCN) are generally wrong either. This may be an isolated incident; we don’t know. But every time the data gets revised and homogenized, the trends keep increasing. Now GISS does their own adjustments. However, as they keep telling us, they get the same answer as GHCN gets … which makes their numbers suspicious as well.

And CRU? Who knows what they use? We’re still waiting on that one, no data yet …

What this does show is that there is at least one temperature station where the trend has been artificially increased to give a false warming where the raw data shows cooling. In addition, the average raw data for Northern Australia is quite different from the adjusted, so there must be a number of … mmm … let me say “interesting” adjustments in Northern Australia other than just Darwin.

And with the Latin saying “Falsus in uno, falsus in omnibus” (false in one thing, false in all) as our guide, until all of the station “adjustments” are examined, adjustments of CRU, GHCN, and GISS alike, we can’t trust anyone using homogenized numbers.

Regards to all, keep fighting the good fight,

w.

FURTHER READING:

My previous post on this subject.

The late and much missed John Daly, irrepressible as always.

More on Darwin history, it wasn’t Stevenson Screens.

NOTE: Figures 7 and 8 updated to fix a typo in the titles. 8:30PM PST 12/8 – Anthony

December 9, 2009 2:23 pm

CodeTech (13:55:17) :
Tom in Texas (13:26:56) :
BTW, how do you decompress a UNIX .Z file using Windows? I want to check that what NOAA posts as raw data is truly raw.
WinRar does it…
Thanks. Will try it.

geo
December 9, 2009 2:39 pm

What I want to know is: does GHCN make those kinds of adjustments in an . . . ahh . . . “artisanal” (to borrow a wonderful application of the word from Steve McI) manner, individually, one human studying the individual records for a station and making the changes . . . or does this happen when some impersonal computer applies an algorithm wholesale across its dataset, with a human rarely looking at the outliers and going “oh, gee, that can’t be an acceptable result”? And even if they do the latter, do they have any ability to change that one result and make the change “stick” for the future?
The two models of how this happens (individual review vs. mass computer processing) do make a difference, it seems to me, both as to intent in the face of an obviously wrong result, and in finding out “who did this?” and how to fix it going forward.
If this is the work of a mass of interns one station at a time over years . . . that’d really be one heckuva mess to untangle now. If it is the work of a computer program, it can probably be nudged somehow to kick out outliers on some criteria or other, to be adjusted individually if necessary, with another file identifying them by station id to do something along the lines of “don’t do your usual algorithm here, because it sucks – do this instead”.

Slartibartfast
December 9, 2009 2:40 pm

Oh, I just have Cygwin installed. Presto! You have a lot of *nix capabilities at your fingertips.

geo
December 9, 2009 2:44 pm

I guess what I’m wondering is did a human do that on purpose, or is this yet another application of the 80/20 rule of computer programming? (i.e. 80 percent of the work to do 20 percent of the cases)

Joel D
December 9, 2009 2:46 pm

http://scienceblogs.com/deltoid/2009/12/willis_eschenbach_caught_lying.php?utm_source=sbhomepage&utm_medium=link&utm_content=channellink
Willis,
You have got to love this post. You published your findings yesterday and already they have proven them “false”. Now they make some claim that you doctored data. You were trying to find how much of an adjustment to the data was necessary to go from the raw data to the “corrected” data. How would you do this without trying to make modifications to the raw data? Everything in their article is pure crap and they have quite obviously not even taken the time to read your article. What is wrong with these people?

John Smelt
December 9, 2009 2:57 pm

I’m sitting here watching the BBC following the warmist party line – no mention of opposing views of course.
I cannot help but think that all this, when coupled with the events in the UK of the last 12 years, is a precursor to Orwell’s 1984: a series of excuses to govern and dominate the proles.

geo
December 9, 2009 3:00 pm

Oh, the 80/20 rule could apply to manual, on-paper application of the algorithm as well, by someone who doesn’t have the authority, respect, and/or knowledge to challenge it.
I just wonder about the *actual* mechanics of how these adjustments get made. Is it Phil Jones and Gavin Schmidt poring over each one and coming to an agreement, with Michael Mann called in to break ties? Is it at least a climate PhD working from a script who has the ability to go back to the script writer and say “look, your algo needs work – this is a ridiculous result when I follow it for this case”? Or is it interns working from a 20-page script of what to do, with not a chance in the world they’d tell anyone “this is rubbish!”, or that anyone in authority would listen to them if they did? Or is a computer program doing it?
How does that *actually get done*? I want to know that before I start making decisions on incompetence vs bias.

zosima
December 9, 2009 3:06 pm

@OP
You make basic mistakes that undermine your results. Some very basic background:
http://data.giss.nasa.gov/gistemp/
For example, your complaint about 500 km separation is simply facile:
“The analysis method was documented in Hansen and Lebedeff (1987), showing that the correlation of temperature change was reasonably strong for stations separated by up to 1200 km, especially at middle and high latitudes.”
Avoiding the mistakes of previous generations is exactly why scientific research is left to people with specializations in these fields. If you insist on doing amateur analysis, I would suggest you start with the most recent publications and follow the citations backward. That way you can understand exactly how and why the corrections are applied, rather than just guessing based on one input and their output.
@Street
Normal distribution will only apply if the measurements are samples from a random variable. You cannot assume this, and in this assumption’s absence your analysis is false.

David
December 9, 2009 4:08 pm

Joel D, did you actually read that article? It’s pretty clear on the reasons why using the raw data without adjustments is going to give you bad results.

Methow Ken
December 9, 2009 4:12 pm

O.K.: Think we can now reasonably submit that WUWT has gone mainstream; at least in some quarters:
For those who may not have noticed, James Delingpole at the UK Telegraph now has a major piece up dated 8 Dec, where he quotes extensively from this thread start (including a graph) AND links directly to it. See:
http://blogs.telegraph.co.uk/news/jamesdelingpole/100019301/climategate-another-smoking-gun/
Not only that: The above piece was prominently featured in the ”Climate Change Debate” headline section on RealClearPolitics. Regardless of desperate attempts by AGW partisans to subvert and suppress it, I think the message is starting to get out to the wider world. . . .

Mike
December 9, 2009 4:29 pm

I’m waiting for RealClimate’s response. Perhaps you can prod them.
At the moment, as a casual observer I notice this: plots 2–5 look the same as the IPCC plot 1. All you have to do is look at the time period they have in common. It seems a bit shady that IPCC chose not to show pre-1920 data, which indicates a cooling trend, but presumably their full-time quantitative analyses take this into account.

M. Johnson
December 9, 2009 4:39 pm

As someone who works with analysis of scientific data for a living, the thing that most strikes me about the removal of “inhomogeneities” is that it is a technique that is generally thought necessary only for small data sets. When dealing with a sufficiently large data set, random errors cancel themselves out. For example, for every station moved to a warmer location, one would expect there would be one moved to a cooler location – so why correct at all?
Surely the surface temperature record of the entire world is a sufficiently large data set to analyze, at least once, without “correction” for random errors.
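A toy illustration of the point (purely made-up step biases, not real station data): give each of many stations a random step bias of plus or minus 0.5C and the network-average bias collapses toward zero as the station count grows.

import numpy as np

rng = np.random.default_rng(0)
for n_stations in (10, 100, 1000, 10000):
    # each station gets a random step bias, equally likely warm or cool
    biases = rng.choice([-0.5, 0.5], size=n_stations)
    print(n_stations, round(float(biases.mean()), 3))
# the network-average bias shrinks roughly as 1/sqrt(n): a big enough network
# needs no correction at all, IF the errors really are random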

Ken
December 9, 2009 4:52 pm

Do you realize what you have found here? The “homogenized” data matches exactly the “fudge factor” code in the CRU source code. Here is the code:
;
; Apply a VERY ARTIFICAL correction for decline!!
;
yrloc=[1400,findgen(19)*5.+1904]
valadj=[0.,0.,0.,0.,0.,-0.1,-0.25,-0.3,0.,-0.1,0.3,0.8,1.2,1.7,2.5,2.6,2.6,$
2.6,2.6,2.6]*0.75 ; fudge factor
(…)
;
; APPLY ARTIFICIAL CORRECTION
;
yearlyadj=interpol(valadj,yrloc,x)
densall=densall+yearlyadj
What this does is establish an array that artificially injects increases in temperature that will automatically turn the data into a hockey stick. The hockey stick it creates matches exactly the homogenization applied to Station Zero at Darwin, shown above in the raw and homogenized versions.
Many people trying to debunk the source code say it is common practice in modeling to include ad hoc code for test purposes that is not used to publish actual data. This demonstrates a real-life application of the “fudge factor” code.
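For anyone without IDL, here is roughly the same calculation in Python (my own rough translation, assuming IDL’s findgen(19) gives 0 through 18 and interpol does linear interpolation; the yearly time axis is hypothetical):

import numpy as np

yrloc = np.concatenate([[1400.0], 1904.0 + 5.0 * np.arange(19)])   # 20 knot years
valadj = 0.75 * np.array([0, 0, 0, 0, 0, -0.1, -0.25, -0.3, 0, -0.1,
                          0.3, 0.8, 1.2, 1.7, 2.5, 2.6, 2.6, 2.6, 2.6, 2.6])

x = np.arange(1904, 1995)                    # hypothetical yearly time axis
yearlyadj = np.interp(x, yrloc, valadj)      # IDL interpol ~ linear interpolation
# densall = densall + yearlyadj              # the correction added to the series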

December 9, 2009 5:46 pm

“Warwick Hughes (13:29:22) :
I think Willis is not correct to assume CRU have used the GHCN Darwin Zero. I also think it is wrong to assume Jones / CRU have simply used the GHCN station data. See my take over at:
http://www.warwickhughes.com/blog/?p=357
Well, thanks Warwick, much appreciate that.
Refer Steve Short (17:18:27) :
“(2) Can we rely on the accuracy (?) of an interpretation (?) that ‘the emails’ (remarkable isn’t it we at least all know exactly what these are 😉 suggest (?) that the CRU database relies (?) on GHCN (whew)?
Willis :
“That’s what Phil Jones said, and until they release their data, there’s no way to verify it.”
Exactly. Yet more evidence this Phil Jones is one ultra-slippery character. Glad I never had him as a ‘peer reviewer’. He doesn’t even have a manicured beard either – shame on him!

Slartibartfast
December 9, 2009 6:09 pm

Yes, verily. But if you’re Tim Lambert, it’s more about demonizing people you disagree with.

Street
December 9, 2009 6:09 pm

M. Johnson – Exactly my reasoning as well. I’d love to hear a plausible reason why it’s necessary to make positive adjustments to more stations than get negative adjustments. Has anyone ever run a calculation of what the net adjustment is globally, and what the distribution looks like?
Another thing pointed out in the article:
“The final technique we used to minimize inhomogeneities in the reference series used the mean of the central three values (of the five neighboring station values) to create the first difference reference series.”
That seems to be a sensible approach to automated adjustment. However, I think problems may crop up if the areas covered by the individual stations are not equal. You’re more likely to get multiple stations in metro areas. Let’s say you have a cluster of 5 stations around a city. All of them should exhibit similar heat island effects, right? If the algorithm references the 5 nearest stations, this cluster would only reference itself and confirm its common heat island effect as ‘real’. No real problem there.
The problem is that stations which are not clustered (likely when they are rural) may use reference stations in these metro clusters for adjustment. This may result in the positive trend due to the heat island effect being ‘transmitted’ into the rural station via the adjustment process.
@zosima – The random variable is the difference in temperature bias between the old and new locations and the old and new instrumentation. You can probably argue that newer instruments might be designed to prevent solar heating and run cooler, but location changes should exhibit a random bias on temps, shouldn’t they?

carrot eater
December 9, 2009 6:12 pm

Willis,
Some questions, then.
How is anomaly defined in Fig 6 and 7? What is your baseline period? Is it a coincidence that Fig 7 starts at an anomaly of zero, or did you adjust the whole plot downwards to make it so?
You seem to recognise there was a station move in 1941. I find your ‘judgment call’ to not make an adjustment there to be quite odd. I’d suggest repeating Fig 6 and 7 with an adjustment for the station move, to see what it looks like. Further, did you ask what sorts of site adjustments might have been made after that time?
