The Smoking Gun At Darwin Zero

by Willis Eschenbach

People keep saying “Yes, the Climategate scientists behaved badly. But that doesn’t mean the data is bad. That doesn’t mean the earth is not warming.”

Darwin Airport - by Dominic Perrin via Panoramio

Let me start with the second objection first. The earth has generally been warming since the Little Ice Age, around 1650. There is general agreement that the earth has warmed since then. See e.g. Akasofu (http://www.iarc.uaf.edu/highlights/2007/akasofu_3_07/Earth_recovering_from_LIA.pdf). Climategate doesn’t affect that.

The other question, the integrity of the data, is different. People say “Yes, they destroyed emails, and hid from Freedom of Information Acts, and messed with proxies, and fought to keep other scientists’ papers out of the journals … but that doesn’t affect the data, the data is still good.” Which sounds reasonable.

There are three main global temperature datasets. One is at the CRU, the Climatic Research Unit of the University of East Anglia, where we’ve been trying to get access to the raw numbers. One is at NOAA/GHCN, the Global Historical Climate Network. The final one is at NASA/GISS, the Goddard Institute for Space Studies. The three groups take raw data, and they “homogenize” it to remove things like when a station was moved to a warmer location and there’s a 2C jump in the temperature. The three global temperature records are usually called CRU, GISS, and GHCN. Both GISS and CRU, however, get almost all of their raw data from GHCN. All three produce very similar global historical temperature records from the raw data.

So I’m still on my multi-year quest to understand the climate data. You never know where this data chase will lead. This time, it has led me to Australia. I got to thinking about Professor Wibjorn Karlen’s statement about Australia that I quoted here (http://wattsupwiththat.com/2009/11/29/when-results-go-bad/):

Another example is Australia. NASA [GHCN] only presents 3 stations covering the period 1897-1992. What kind of data is the IPCC Australia diagram based on?

If any trend it is a slight cooling. However, if a shorter period (1949-2005) is used, the temperature has increased substantially. The Australians have many stations and have published more detailed maps of changes and trends.

The folks at CRU told Wibjorn that he was just plain wrong. Here is the record they say is right, the one Wibjorn was asking about: Fig. 9.12 in the UN IPCC Fourth Assessment Report, showing Northern Australia:

Figure 1. Temperature trends and model results in Northern Australia. Black line is observations (from Fig. 9.12 of the UN IPCC Fourth Assessment Report). Covers the area from 110E to 155E, and from 30S to 11S. Based on the CRU land temperature data.

One of the things that was revealed in the released CRU emails is that the CRU basically uses the Global Historical Climate Network (GHCN) dataset for its raw data. So I looked at the GHCN dataset. There, I find three stations in North Australia, as Wibjorn had said, and nine stations in all of Australia, that cover the period 1900-2000. Here is the average of the GHCN unadjusted data for those three Northern stations, from AIS (http://www.appinsys.com/GlobalWarming/climate.aspx):

Figure 2. GHCN Raw Data, All 100-yr stations in IPCC area above.
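For anyone who wants to check this kind of thing at home, the averaging itself is nothing fancy: for each year, average whatever stations reported that year. A minimal sketch in Python, with made-up numbers for illustration (the real GHCN files are monthly records per station):

```python
# Average several station temperature series into one regional series.
# Stations rarely cover identical years, so average only the values present.
def average_stations(series_list):
    """series_list: list of dicts mapping year -> mean temperature (degrees C).
    Returns a dict mapping year -> average across the stations reporting that year."""
    years = sorted({y for s in series_list for y in s})
    return {
        y: sum(s[y] for s in series_list if y in s)
           / sum(1 for s in series_list if y in s)
        for y in years
    }

# Invented example: two stations with partial overlap
a = {1900: 28.1, 1901: 28.3, 1902: 28.0}
b = {1901: 27.9, 1902: 28.2, 1903: 28.4}
avg = average_stations([a, b])
# 1901 and 1902 average both stations; 1900 and 1903 use the single station available
```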

So once again Wibjorn is correct: this looks nothing like the corresponding IPCC temperature record for Australia. But it’s too soon to tell. Professor Karlen is only showing three stations, and three is not a lot of stations, but that’s all of the century-long Australian records we have in the IPCC-specified region. OK, we’ve seen the longest station records, so let’s throw more records into the mix. Here’s every station in the UN IPCC-specified region with temperature records that extend up to the year 2000, no matter when they started: 30 stations in all.

Figure 3. GHCN Raw Data, All stations extending to 2000 in IPCC area above.

Still no similarity with IPCC. So I looked at every station in the area. That’s 222 stations. Here’s that result:

Figure 4. GHCN Raw Data, All 222 stations in IPCC area above.

So you can see why Wibjorn was concerned. This looks nothing like the UN IPCC data, which came from the CRU, which was based on the GHCN data. Why the difference?

The answer is, these graphs all use the raw GHCN data. But the IPCC uses the “adjusted” data. GHCN adjusts the data to remove what it calls “inhomogeneities”. So on a whim I thought I’d take a look at the first station on the list, Darwin Airport, so I could see what an inhomogeneity might look like when it was at home. And I could find out how large the GHCN adjustment for Darwin inhomogeneities was.

First, what is an “inhomogeneity”? I can do no better than quote from GHCN:

Most long-term climate stations have undergone changes that make a time series of their observations inhomogeneous. There are many causes for the discontinuities, including changes in instruments, shelters, the environment around the shelter, the location of the station, the time of observation, and the method used to calculate mean temperature. Often several of these occur at the same time, as is often the case with the introduction of automatic weather stations that is occurring in many parts of the world. Before one can reliably use such climate data for analysis of longterm climate change, adjustments are needed to compensate for the nonclimatic discontinuities.

That makes sense. The raw data will have jumps from station moves and the like. We don’t want to think it’s warming just because the thermometer was moved to a warmer location. Unpleasant as it may seem, we have to adjust for those as best we can.

I always like to start with the rawest data, so I can understand the adjustments. At Darwin there are five separate individual station records that are combined to make up the final Darwin record. These are the individual records of stations in the area, which are numbered from zero to four:

DATA SOURCE: http://data.giss.nasa.gov/cgi-bin/gistemp/findstation.py?datatype=gistemp&data_set=0&name=darwin

Figure 5. Five individual temperature records for Darwin, plus station count (green line). This raw data is downloaded from GISS, but GISS use the GHCN raw data as the starting point for their analysis.

Darwin does have a few advantages over other stations with multiple records. There is a continuous record from 1941 to the present (Station 1). There is also a continuous record covering a century. Finally, the stations are in very close agreement over the entire period of the record. In fact, where there are multiple stations in operation they are so close that you can’t see the records behind Station Zero.

This is an ideal station, because it also illustrates many of the problems with the raw temperature station data.

  • There is no one record that covers the whole period.
  • The shortest record is only nine years long.
  • There are gaps of a month and more in almost all of the records.
  • It looks like there are problems with the data at around 1941.
  • Most of the datasets are missing months.
  • For most of the period there are few nearby stations.
  • There is no one year covered by all five records.
  • The temperature dropped over a six year period, from a high in 1936 to a low in 1941. The station did move in 1941 … but what happened in the previous six years?

Resolving station records is a judgment call. First off, you have to decide whether what you are looking at needs any changes at all. In Darwin’s case, it’s a close call. The record seems to be screwed up around 1941, but not in the year of the move.

Also, although the 1941 temperature shift seems large, I see a similar-sized shift from 1992 to 1999. Looking at the whole picture, I think I’d vote to leave it as it is; that’s always the best option when you don’t have other evidence. First, do no harm.

However, there’s a case to be made for adjusting it, particularly given the 1941 station move. If I decided to adjust Darwin, I’d do it like this:

Figure 6. A possible adjustment for Darwin. Black line shows the total amount of the adjustment, on the right scale, and shows the timing of the change.

I shifted the pre-1941 data down by about 0.6C. We end up with little change end to end in my “adjusted” data (shown in red); it’s neither warming nor cooling. However, it reduces the apparent cooling in the raw data. Post-1941, where the other records overlap, they are very close, so I wouldn’t adjust them in any way. Why should we adjust those, when they all show exactly the same thing?
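For the record, here is what that single-step adjustment looks like in code. This is a sketch, not anyone’s official method: the 0.6C offset and the 1941 break year are the values I discuss above, and the temperature numbers are invented for illustration.

```python
# Single-step adjustment for a documented station move: shift everything
# before the break year down by a fixed offset, leave the rest alone.
BREAK_YEAR = 1941
OFFSET = 0.6  # degrees C, subtracted from pre-move data

def adjust_for_move(series, break_year=BREAK_YEAR, offset=OFFSET):
    """series: dict mapping year -> temperature (C). Returns the adjusted series."""
    return {y: (t - offset if y < break_year else t) for y, t in series.items()}

# Invented example values
raw = {1936: 29.5, 1940: 29.2, 1941: 28.6, 1950: 28.7}
adj = adjust_for_move(raw)
# Pre-1941 values drop by 0.6; 1941 onward is untouched.
```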

OK, so that’s how I’d homogenize the data if I had to, but I vote against adjusting it at all. My adjustment only changes one station record (Darwin Zero); the rest are left untouched.

Then I went to look at what happens when the GHCN removes the “in-homogeneities” to “adjust” the data. Of the five raw datasets, the GHCN discards two, likely because they are short and duplicate existing longer records. The three remaining records are first “homogenized” and then averaged to give the “GHCN Adjusted” temperature record for Darwin.

To my great surprise, here’s what I found. To explain the full effect, I am showing this with both datasets starting at the same point (rather than ending at the same point as they are often shown).

Figure 7. GHCN homogeneity adjustments to Darwin Airport combined record

YIKES! Before getting homogenized, temperatures in Darwin were falling at 0.7 Celsius per century … but after the homogenization, they were warming at 1.2 Celsius per century. And the adjustment that they made was over two degrees per century … when those guys “adjust”, they don’t mess around. And the adjustment is an odd shape, first going stepwise, then climbing to stop at roughly 2.4C.
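By the way, the “degrees per century” figures here are just ordinary least-squares trends on the annual means, scaled up by 100. A quick sketch with synthetic data, to show the arithmetic:

```python
# Ordinary least-squares slope over annual means, scaled to degrees per century.
def trend_per_century(years, temps):
    n = len(years)
    my = sum(years) / n
    mt = sum(temps) / n
    slope = (sum((y - my) * (t - mt) for y, t in zip(years, temps))
             / sum((y - my) ** 2 for y in years))  # degrees C per year
    return slope * 100  # degrees C per century

# Synthetic series cooling at exactly 0.7 C per century:
years = list(range(1900, 2000))
temps = [28.0 - 0.007 * (y - 1900) for y in years]
print(round(trend_per_century(years, temps), 2))  # -0.7
```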

Of course, that led me to look at exactly how the GHCN “adjusts” the temperature data. Here’s what they say in An Overview of the GHCN Database (http://www.ncdc.noaa.gov/oa/climate/ghcn-monthly/images/ghcn_temp_overview.pdf):

GHCN temperature data include two different datasets: the original data and a homogeneity-adjusted dataset. All homogeneity testing was done on annual time series. The homogeneity-adjustment technique used two steps.

The first step was creating a homogeneous reference series for each station (Peterson and Easterling 1994). Building a completely homogeneous reference series using data with unknown inhomogeneities may be impossible, but we used several techniques to minimize any potential inhomogeneities in the reference series.

In creating each year’s first difference reference series, we used the five most highly correlated neighboring stations that had enough data to accurately model the candidate station.

The final technique we used to minimize inhomogeneities in the reference series used the mean of the central three values (of the five neighboring station values) to create the first difference reference series.

Fair enough, that all sounds good. They pick five neighboring stations, and average them. Then they compare the average to the station in question. If it looks wonky compared to the average of the reference five, they check any historical records for changes, and if necessary, they homogenize the poor data mercilessly. I have some problems with what they do to homogenize it, but that’s how they identify the inhomogeneous stations.
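To make the quoted procedure concrete, here is a loose sketch of the first-difference reference series idea: take the year-to-year differences of the five neighbor series, drop the highest and lowest each year, and average the central three. This is my simplified reading of the quoted text, not Peterson and Easterling’s actual code.

```python
# Simplified sketch of the GHCN reference-series step quoted above.
def reference_first_differences(neighbors):
    """neighbors: five lists of annual mean temperatures over the same years.
    Returns the reference first-difference series (one value per year pair)."""
    ref = []
    for i in range(1, len(neighbors[0])):
        diffs = sorted(s[i] - s[i - 1] for s in neighbors)
        central_three = diffs[1:4]  # discard the lowest and highest difference
        ref.append(sum(central_three) / 3)
    return ref

# Invented example: five neighbors whose one-year changes are 0.1 through 0.5
neighbors = [[0.0, 0.1], [0.0, 0.2], [0.0, 0.3], [0.0, 0.4], [0.0, 0.5]]
ref = reference_first_differences(neighbors)  # mean of 0.2, 0.3, 0.4
```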

OK … but given the scarcity of stations in Australia, I wondered how they would find five “neighboring stations” in 1941 …

So I looked it up. The nearest station that covers the year 1941 is 500 km away from Darwin. Not only is it 500 km away, it is the only station within 750 km of Darwin that covers the 1941 time period. (It’s also a pub, Daly Waters Pub to be exact, but hey, it’s Australia, good on ya.) So there simply aren’t five stations to make a “reference series” out of to check the 1936-1941 drop at Darwin.
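Checking station distances is just great-circle arithmetic. A quick sketch using the haversine formula; the coordinates are my approximate values for Darwin Airport and Daly Waters, not taken from any station inventory:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))  # 6371 km = mean earth radius

# Approximate coordinates: Darwin Airport and Daly Waters
d = haversine_km(-12.42, 130.89, -16.25, 133.37)
print(round(d))  # roughly 500 km
```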

Intrigued by the curious shape of the average of the homogenized Darwin records, I then went to see how they had homogenized each of the individual station records. What made up that strange average shown in Fig. 7? I started at zero with the earliest record. Here is Station Zero at Darwin, showing the raw and the homogenized versions.

Figure 8. Darwin Zero Homogeneity Adjustments. Black line shows amount and timing of adjustments.

Yikes again, double yikes! What on earth justifies that adjustment? How can they do that? We have five different records covering Darwin from 1941 on, and they all agree almost exactly. Why adjust them at all? They’ve just added a huge, entirely artificial trend to the last half of the raw data! Now it looks like the IPCC diagram in Figure 1, all right … but a six-degree-per-century trend? And in the shape of a regular stepped pyramid climbing to heaven? What’s up with that?

Those, dear friends, are the clumsy fingerprints of someone messing with the data Egyptian style … they are indisputable evidence that the “homogenized” data has been changed to fit someone’s preconceptions about whether the earth is warming.

One thing is clear from this. People who say that “Climategate was only about scientists behaving badly, but the data is OK” are wrong. At least one part of the data is bad, too. The Smoking Gun for that statement is at Darwin Zero.

So once again, I’m left with an unsolved mystery. How and why did the GHCN “adjust” Darwin’s historical temperature to show radical warming? Why did they adjust it stepwise? Do Phil Jones and the CRU folks use the “adjusted” or the raw GHCN dataset? My guess is the adjusted one since it shows warming, but of course we still don’t know … because despite all of this, the CRU still hasn’t released the list of data that they actually use, just the station list.

Another odd fact: the GHCN adjusted Station 1 to match Darwin Zero’s strange adjustment, but they left Station 2 (which covers much of the same period, and as per Fig. 5 is in excellent agreement with Station Zero and Station 1) totally untouched. They only homogenized two of the three. Then they averaged them.

That way, you get an average that looks kinda real, I guess, it “hides the decline”.

Oh, and for what it’s worth, care to know the way that GISS deals with this problem? Well, they only use the Darwin data after 1963, a fine way of neatly avoiding the question … and also a fine way to throw away all of the inconveniently colder data prior to 1941. It’s likely a better choice than the GHCN monstrosity, but it’s a hard one to justify.

Now, I want to be clear here. The blatantly bogus GHCN adjustment for this one station does NOT mean that the earth is not warming. It also does NOT mean that the three records (CRU, GISS, and GHCN) are generally wrong. This may be an isolated incident; we don’t know. But every time the data gets revised and homogenized, the trends keep increasing. Now, GISS does their own adjustments. However, as they keep telling us, they get the same answer as GHCN gets … which makes their numbers suspicious as well.

And CRU? Who knows what they use? We’re still waiting on that one, no data yet …

What this does show is that there is at least one temperature station where the trend has been artificially increased to give a false warming where the raw data shows cooling. In addition, the average raw data for Northern Australia is quite different from the adjusted, so there must be a number of … mmm … let me say “interesting” adjustments in Northern Australia other than just Darwin.

And with the Latin saying “Falsus in uno, falsus in omnibus” (false in one thing, false in everything) as our guide, until all of the station “adjustments” are examined, adjustments of CRU, GHCN, and GISS alike, we can’t trust anyone using homogenized numbers.

Regards to all, keep fighting the good fight,

w.

FURTHER READING:

My previous post on this subject: http://wattsupwiththat.com/2009/11/29/when-results-go-bad/

The late and much missed John Daly, irrepressible as always: http://www.john-daly.com/darwin.htm

More on Darwin history, it wasn’t Stevenson Screens: http://www.warwickhughes.com/blog/?p=302#comment-23412

NOTE: Figures 7 and 8 updated to fix a typo in the titles. 8:30PM PST 12/8 – Anthony

The Smoking Gun At Darwin Zero

People keep saying “Yes, the Climategate scientists behaved badly. But that doesn’t mean the data is bad. That doesn’t mean the earth is not warming.”

Let me start with the second objection first. The earth has generally been warming since the Little Ice Age, around 1650. There is general agreement that the earth has warmed since then. See e.g. <a href=” http://www.iarc.uaf.edu/highlights/2007/akasofu_3_07/Earth_recovering_from_LIA.pdf”>Akasufo</a>. Climategate doesn’t affect that.

The second question, the integrity of the data, is different. People say “Yes, they destroyed emails, and hid from Freedom of information Acts, and messed with proxies, and fought to keep other scientists’ papers out of the journals … but that doesn’t affect the data, the data is still good.” Which sounds reasonable.

There are three main global temperature datasets. One is at the CRU, Climate Research Unit of the University of East Anglia, where we’ve been trying to get access to the raw numbers. One is at NOAA/GHCN, the Global Historical Climate Network. The final one is at NASA/GISS, the Goddard Institute for Space Studies. The three groups take raw data, and they “homogenize” it to remove things like when a station was moved to a warmer location and there’s a 2C jump in the temperature. The three global temperature records are usually called CRU, GISS, and GHCN. Both GISS and CRU, however, get almost all of their raw data from GHCN. All three produce very similar global historical temperature records from the raw data.

So I’m still on my multi-year quest to understand the climate data. You never know where this data chase will lead. This time, it has ended me up in Australia. I got to thinking about Professor Wibjorn Karlen’s statement about Australia that I quoted <a href=”http://wattsupwiththat.com/2009/11/29/when-results-go-bad/“>here</a>:

Another example is Australia. NASA [GHCN] only presents 3 stations covering the period 1897-1992. What kind of data is the IPCC Australia diagram based on?

If any trend it is a slight cooling. However, if a shorter period (1949-2005) is used, the temperature has increased substantially. The Australians have many stations and have published more detailed maps of changes and trends.

The folks at CRU told Wibjorn that he was just plain wrong. Here’s what they said is right, the record that Wibjorn was talking about, Fig. 9.12 in the UN IPCC Fourth Assessment Report, showing Northern Australia:

Figure 1. Temperature trends and model results in Northern Australia. Black line is observations (From Fig. 9.12 from the UN IPCC Fourth Annual Report). Covers the area from 110E to 155E, and from 30S to 11S. Based on the CRU land temperature.) Data from the CRU.

One of the things that was revealed in the released CRU emails is that the CRU basically uses the Global Historical Climate Network (GHCN) dataset for its raw data. So I looked at the GHCN dataset. There, I find three stations in North Australia as Wibjorn had said, and nine stations in all of Australia, that cover the period 1900-2000. Here is the average of the GHCN unadjusted data for those three Northern stations, from <a href=http://www.appinsys.com/GlobalWarming/climate.aspx>AIS</a>:

Figure 2. GHCN Raw Data, All 100-yr stations in IPCC area above.

So once again Wibjorn is correct, this looks nothing like the corresponding IPCC temperature record for Australia. But it’s too soon to tell. Professor Karlen is only showing 3 stations. Three is not a lot of stations, but that’s all of the century-long Australian records we have in the IPCC specified region. OK, we’ve seen the longest stations record, so lets throw more records into the mix. Here’s every station in the UN IPCC specified region which contains temperature records that extend up to the year 2000 no matter when they started, which is 30 stations.

Figure 3. GHCN Raw Data, All stations extending to 2000 in IPCC area above.

Still no similarity with IPCC. So I looked at every station in the area. That’s 222 stations. Here’s that result:

Figure 4. GHCN Raw Data, All stations extending to 2000 in IPCC area above.

So you can see why Wibjorn was concerned. This looks nothing like the UN IPCC data, which came from the CRU, which was based on the GHCN data. Why the difference?

The answer is, these graphs all use the raw GHCN data. But the IPCC uses the “adjusted” data. GHCN adjusts the data to remove what it calls “inhomogeneities”. So on a whim I thought I’d take a look at the first station on the list, Darwin Airport, so I could see what an inhomogeneity might look like when it was at home. And I could find out how large the GHCN adjustment for Darwin inhomogeneities was.

First, what is an “inhomogeneity”? I can do no better than quote from GHCN:

Most long-term climate stations have undergone changes that make a time series of their observations inhomogeneous. There are many causes for the discontinuities, including changes in instruments, shelters, the environment around the shelter, the location of the station, the time of observation, and the method used to calculate mean temperature. Often several of these occur at the same time, as is often the case with the introduction of automatic weather stations that is occurring in many parts of the world. Before one can reliably use such climate data for analysis of longterm climate change, adjustments are needed to compensate for the nonclimatic discontinuities.

That makes sense. The raw data will have jumps from station moves and the like. We don’t want to think it’s warming just because the thermometer was moved to a warmer location. Unpleasant as it may seem, we have to adjust for those as best we can.

I always like to start with the rawest data, so I can understand the adjustments. At Darwin there are five separate individual station records that are combined to make up the final Darwin record. These are the individual records of stations in the area, which are numbered from zero to four:

DATA SOURCE: http://data.giss.nasa.gov/cgi-bin/gistemp/findstation.py?datatype=gistemp&data_set=0&name=darwin

Figure 5. Five individual temperature records for Darwin, plus station count (green line). This raw data is downloaded from GISS, but GISS use the GHCN raw data as the starting point for their analysis.

Darwin does have a few advantages over other stations with multiple records. There is a continuous record from 1941 to the present (Station 1). There is also a continuous record covering a century. finally, the stations are in very close agreement over the entire period of the record. In fact, where there are multiple stations in operation they are so close that you can’t see the records behind Station Zero.

This is an ideal station, because it also illustrates many of the problems with the raw temperature station data.

  • There is no one record that covers the whole period.

  • The shortest record is only nine years long.

  • There are gaps of a month and more in almost all of the records.

  • It looks like there are problems with the data at around 1941.

  • Most of the datasets are missing months.

  • For most of the period there are few nearby stations.

  • There is no one year covered by all five records.

  • The temperature dropped over a six year period, from a high in 1936 to a low in 1941. The station did move in 1941 … but what happened in the previous six years?

In resolving station records, it’s a judgment call. First off, you have to decide if what you are looking at needs any changes at all. In Darwin’s case, it’s a close call. The record seems to be screwed up around 1941, but not in the year of the move.

Also, although the 1941 temperature shift seems large, I see a similar sized shift from 1992 to 1999. Looking at the whole picture, I think I’d vote to leave it as it is, that’s always the best option when you don’t have other evidence. First do no harm.

However, there’s a case to be made for adjusting it, particularly given the 1941 station move. If I decided to adjust Darwin, I’d do it like this:

Figure 6 A possible adjustment for Darwin. Black line shows the total amount of the adjustment, on the right scale, and shows the timing of the change.

I shifted the pre-1941 data down by about 0.6C. We end up with little change end to end in my “adjusted” data (shown in red), it’s neither warming nor cooling. However, it reduces the apparent cooling in the raw data. Post-1941, where the other records overlap, they are very close, so I wouldn’t adjust them in any way. Why should we adjust those, they all show exactly the same thing.

OK, so that’s how I’d homogenize the data if I had to, but I vote against adjusting it at all. It only changes one station record (Darwin Zero), and the rest are left untouched.

Then I went to look at what happens when the GHCN removes the “in-homogeneities” to “adjust” the data. Of the five raw datasets, the GHCN discards two, likely because they are short and duplicate existing longer records. The three remaining records are first “homogenized” and then averaged to give the “GHCN Adjusted” temperature record for Darwin.

To my great surprise, here’s what I found. To explain the full effect, I am showing this with both datasets starting at the same point (rather than ending at the same point as they are often shown).

Figure 7. GHCN homogeneity adjustments to Darwin Airport combined record

YIKES! Before getting homogenized, temperatures in Darwin were falling at 0.7 Celcius per century … but after the homogenization, they were warming at 1.2 Celcius per century. And the adjustment that they made was over two degrees per century … when those guys “adjust”, they don’t mess around. And the adjustment is an odd shape, with the adjustment first going stepwise, then climbing roughly to stop at 2.4C.

Of course, that led me to look at exactly how the GHCN “adjusts” the temperature data. Here’s what they say in <a href=http://www.ncdc.noaa.gov/oa/climate/ghcn-monthly/images/ghcn_temp_overview.pdf>An Overview of the GHCN Database</a>:

GHCN temperature data include two different datasets: the original data and a homogeneity- adjusted dataset. All homogeneity testing was done on annual time series. The homogeneity- adjustment technique used two steps.

The first step was creating a homogeneous reference series for each station (Peterson and Easterling 1994). Building a completely homogeneous reference series using data with unknown inhomogeneities may be impossible, but we used several techniques to minimize any potential inhomogeneities in the reference series.

In creating each year’s first difference reference series, we used the five most highly correlated neighboring stations that had enough data to accurately model the candidate station.

The final technique we used to minimize inhomogeneities in the reference series used the mean of the central three values (of the five neighboring station values) to create the first difference reference series.

Fair enough, that all sounds good. They pick five neighboring stations, and average them. Then they compare the average to the station in question. If it looks wonky compared to the average of the reference five, they check any historical records for changes, and if necessary, they homogenize the poor data mercilessly. I have some problems with what they do to homogenize it, but that’s how they identify the inhomogeneous stations.

OK … but given the scarcity of stations in Australia, I wondered how they would find five “neighboring stations” in 1941 …

So I looked it up. The nearest station that covers the year 1941 is 500 km away from Darwin. Not only is it 500 km away, it is the only station within 750 km of Darwin that covers the 1941 time period. (It’s also a pub, Daly Waters Pub to be exact, but hey, it’s Australia, good on ya.) So there simply aren’t five stations to make a “reference series” out of to check the 1936-1941 drop at Darwin.

Intrigued by the curious shape of the average of the homogenized Darwin records, I then went to see how they had homogenized each of the individual station records. What made up that strange average shown in Fig. 7? I started at zero with the earliest record. Here is Station Zero at Darwin, showing the raw and the homogenized versions.

Figure 8 Darwin Zero Homogeneity Adjustments. Black line shows amount and timing of adjustments.

Yikes again, double yikes! What on earth justifies that adjustment? How can they do that? We have five different records covering Darwin from 1941 on. They all agree almost exactly. Why adjust them at all? They’ve just added a huge artificial totally imaginary trend to the last half of the raw data! Now it looks like the IPCC diagram in Figure 1, all right … but a six degree per century trend? And in the shape of a regular stepped pyramid climbing to heaven? What’s up with that?

Those, dear friends, are the clumsy fingerprints of someone messing with the data Egyptian style … they are indisputable evidence that the “homogenized” data has been changed to fit someone’s preconceptions about whether the earth is warming.

One thing is clear from this. People who say that “Climategate was only about scientists behaving badly, but the data is OK” are wrong. At least one part of the data is bad, too. The Smoking Gun for that statement is at Darwin Zero.

So once again, I’m left with an unsolved mystery. How and why did the GHCN “adjust” Darwin’s historical temperature to show radical warming? Why did they adjust it stepwise? Do Phil Jones and the CRU folks use the “adjusted” or the raw GHCN dataset? My guess is the adjusted one since it shows warming, but of course we still don’t know … because despite all of this, the CRU still hasn’t released the list of data that they actually use, just the station list.

Another odd fact, the GHCN adjusted Station 1 to match Darwin Zero’s strange adjustment, but they left Station 2 (which covers much of the same period, and as per Fig. 5 is in excellent agreement with Station Zero and Station 1) totally untouched. They only homogenized two of the three. Then they averaged them.

That way, you get an average that looks kinda real, I guess, it “hides the decline”.

Oh, and for what it’s worth, care to know the way that GISS deals with this problem? Well, they only use the Darwin data after 1963, a fine way of neatly avoiding the question … and also a fine way to throw away all of the inconveniently colder data prior to 1941. It’s likely a better choice than the GHCN monstrosity, but it’s a hard one to justify.

Now, I want to be clear here. The blatantly bogus GHCN adjustment for this one station does NOT mean that the earth is not warming. It also does NOT mean that the three records (CRU, GISS, and GHCN) are generally wrong either. This may be an isolated incident, we don’t know. But every time the data gets revised and homogenized, the trends keep increasing. Now GISS does their own adjustments. However, as they keep telling us, they get the same answer as GHCN gets … which makes their numbers suspicious as well.

And CRU? Who knows what they use? We’re still waiting on that one, no data yet …

What this does show is that there is at least one temperature station where the trend has been artificially increased to give a false warming where the raw data shows cooling. In addition, the average raw data for Northern Australia is quite different from the adjusted, so there must be a number of … mmm … let me say “interesting” adjustments in Northern Australia other than just Darwin.

And with the Latin saying “Falsus in unum, falsus in omis” (false in one, false in all) as our guide, until all of the station “adjustments” are examined, adjustments of CRU, GHCN, and GISS alike, we can’t trust anyone using homogenized numbers.

Regards to all, keep fighting the good fight,

w.

FURTHER READING:

My <a href=” http://wattsupwiththat.com/2009/11/29/when-results-go-bad/”>previous </a>post on this subject.

The late and much missed John Daly, irrepressible <a href=” http://www.john-daly.com/darwin.htm”>as always</a>.

More on Darwin history, it wasn’t <a href="http://www.warwickhughes.com/blog/?p=302#comment-23412">Stevenson Screens</a>.

909 Comments
Michael
December 8, 2009 12:18 am

John F. Kennedy was assassinated Friday, November 22, 1963. I think that’s when all the shenanigans began.

Vote Quimby
December 8, 2009 12:31 am

Anyone can tell you Darwin and Northern Australia are the tropical part of the country, where it’s always warm! 😉

December 8, 2009 12:31 am

Wow, just amazing work and effort. This has just got to get publicity in the MSM somehow to spark an investigation and a rework of the temperature record.

tokyoboy
December 8, 2009 12:33 am

So the forged hokey sticks are everywhere?

December 8, 2009 12:33 am

This is exactly the kind of public, open examination of the raw and adjusted data that needs to be done for ALL stations globally to establish, once and for all, IF the entire earth is warming un-naturally. (and I have yet to see any definitive proof that the current warming is un-natural).
IF after that process has been done, out in the open for ALL interested parties to see, examine and pick holes in, and IF that shows un-natural warming, THEN I would be more than happy for John Travolta, Leonardo DiCaprio and Al Gore to give up their private jets to save the earth!

david atlan
December 8, 2009 12:35 am

Impressive detective work, congratulations!
I wish that journalists today would work this way, instead of simply copy-pasting some press releases and showing a few pictures of polar bears looking lost in the water…

debreuil
December 8, 2009 12:35 am

Sure that UFO sighting was faked. But look at the other 2500 UFO sightings, they obviously couldn’t all be faked (very tired of hearing that argument regarding both the data and the ‘scientists’).
Not sure why I still find these things so shocking; I think it’s mostly that they didn’t even use some made-up excuse and hide things in complex math. They literally just move the line by hand and then submit it. Usually cheating is ‘going against the spirit of the game’, but I guess sometimes it is just cheating.
Excellent work figuring all this out, kudos.

December 8, 2009 12:36 am

Michael, we do not need wild conspiracy theories to distract us. If you want wild conspiracy theories, go to a warmist site. As a heads up, I believe that their latest, baseless, conspiracy theory involves Russians hacking the servers at CRU!

Nick Stokes
December 8, 2009 12:36 am

Darwin was extensively bombed in Feb 1942, which may explain the 1941 issue.

dearieme
December 8, 2009 12:37 am

Feed ’em to the crocodiles. Crooks!

Michael
December 8, 2009 12:37 am

Wasn’t the E-mail leak 1 day before Kennedy’s assassination date?

manfred
December 8, 2009 12:40 am

Peterson’s adjustments … climate science is really a small world.
Peterson is also the person with the deliberately untrue statements in his NOAA internal ‘talking points’ about Anthony Watts’s study and, of course, he is well represented in the CRU emails.

Henrik
December 8, 2009 12:45 am

I am speechless! And I am getting very angry…
Thank you for your work – you are a true scientist!

tallbloke
December 8, 2009 12:48 am

Top work Willis, can’t see them wriggling out of this one. That’s cooked data.

December 8, 2009 12:51 am

Michael. Or could it have been in December 1942 when the British radio station in Hong Kong picked up radio traffic about the forthcoming attack on Pearl Harbour and decyphered it – and it was made known to Roosevelt, who kept it to himself in order to allow a way in to the war? Ooohh!

singularian
December 8, 2009 12:51 am

In Australia they call it Climb-mate change.
up, up and away.

Ben M
December 8, 2009 12:51 am

Darwin was bombed in February, 1942. There was a build-up of military presence prior to that date (and certainly afterwards), so perhaps that has something to do with this anomalous 1941-data.

Stephen Wilde
December 8, 2009 12:51 am

It’s interesting that at least parts of the Southern Hemisphere appear to show a slight cooling during the recent period of an active sun once artificial ‘adjustments’ are stripped out.
I have seen recent empirical evidence that counterintuitively indicates that a more turbulent flow of energy from the sun cools the stratosphere rather than warming it and that the cooling effect in the stratosphere exceeds the value of any warming effect on the seas and troposphere from any small increase in solar power.
http://www.nasa.gov/topics/earth/features/AGU-SABER.html
Thus overall during the 20th Century for rural and unadjusted uncorrupted sites we might see a cooling trend especially in the southern hemisphere where land heating from the small increase in solar power is less significant.
However it will still depend on the precise balance between energy release from oceans to troposphere and energy transfer from troposphere to stratosphere and thence to space.
Nevertheless I think we urgently need to get a precise grip on the temperature trend from suitably ‘pure’ sites as soon as possible.
I think there may be surprises in store in the light of that observed effect of a more turbulent solar flow actually increasing the rate of energy loss to space from the stratosphere.

Jack Green
December 8, 2009 12:52 am

This is why we need the FOIA at NASA. Get the Senators to read this. Right on. Hot Air has a good link on this latest bit of corruption and its “end justifies the means” argument.
http://hotair.com/greenroom/archives/2009/12/07/the-first-sign-of-corruption/
Thanks Willis and Anthony.

December 8, 2009 12:54 am

It can’t be a one-off. The link below is to a graph from NOAA’s website, and it shows that over 0.5 degrees F of warming is all down to the adjustments. I find this hard to reconcile with common sense – surely they would have to adjust down for UHI effects?
http://www.ncdc.noaa.gov/img/climate/research/ushcn/ts.ushcn_anom25_diffs_urb-raw_pg.gif

Adrian Taylor
December 8, 2009 1:00 am

Willis, A fascinating article. Many thanks for doing the work…
one question, how does the data from ground stations in australia compare to the data from satellite ? I can’t find the data to compare.
It certainly looks like there is some pretty clumsy homogenising going on. How many ground stations are used in the ipcc figures ? I wonder how long it would take to re-assess all of them as you have done ?
Thanks again
regards
Adrian

Bryn
December 8, 2009 1:01 am

For the record, Darwin was bombed by the Japanese (using more bombs than were used on Pearl Harbor) in Feb 1942. The town’s population was only 2000 at that time and would hardly warrant adjustments for UHI. The town was razed by cyclone Tracy on Christmas Day in 1974, by which time the population was 42000. Currently the town has a population of over 120,000 (see Wikipedia records).
It seems neither catastrophe resulted in station moves — something I expected to read about when I first started to follow this contribution.
I congratulate the author on his careful analysis and comparison of the raw and “value added” records. This sort of commentary cannot be ignored and needs explanation by the “powers that be”.
Also remember that Darwin is well within the tropics (12 degS). Would such large shifts in temperature (alleged) be expected at that latitude?

outoftown
December 8, 2009 1:02 am

Willis – ty for the time and effort you have put into your research on the above work – this sort of work is where the real fraudulent nature of the shonky scientists will be revealed – the code holds the answers – it’s people like you, Willis, that will out this lot – once again, thank you

Rhys Jaggar
December 8, 2009 1:02 am

I think that this paper should be emailed to every single delegate in Copenhagen, every single Cabinet Minister in every country in the world which uses Representative Government and every editor of every national newspaper and media station with one simple question:
‘This paper judiciously, ruthlessly, relentlessly and scientifically demonstrates the true nature of the difference between raw data and adjusted data where temperature records are concerned. Given that the raw data shows cooling whilst adjusted temperature shows rapid warming, would you agree that until ALL raw data for ALL sites used in the three global temperature record organisations GISS, CRU and GHCN are made public, examined critically by INDEPENDENT parties and the nature of the adjustments understood, that NO DECISIONS CONCERNING GLOBAL ACTION ON SUPPOSED GLOBAL WARMING SHOULD TAKE PLACE?’
And if they say no, I think we know what the attitude to all those folks is to rigorous science.
Not in my back yard, buster.
Well done, keep it up and produce 50 to 100 similar analyses.
Whatever that shows up. Criminal fraud or the occasional mistake. Well-meaning mistakes or the greatest scam in history.
And make all those politicians out there face the consequences. Resignations, imprisonments, whatever.
It’s time for the gloves to come off…..

michel
December 8, 2009 1:02 am

The whole thing clearly needs to be gone through with a fine-tooth comb, station by station, with the histories and the adjustments taken into account, and it needs to be done with double-blind methodology. What is critical is that the person doing the adjustment to the readings must not know what the effect of those adjustments will be on the temperature. I don’t know exactly how you do that, but it’s the only way to get an unbiased set of adjustments based solely on the merits of the station histories.
The ones who adjust must do it by objective criteria, which have to be tested in advance to make sure that they are robust between different operators, to verify that the methodology is sound, and they must adjust without knowing the effect. It is like medical double-blind studies: those rating the patients’ symptoms do so without knowing whether the patient is part of the drug group or part of the control group. Only then are the adjustments applied. The results might then be superior to raw unadjusted data.
Or maybe you just have to decide that the raw data is all we have, and that we cannot improve on it, uncertain as it is, and so you have to accept a larger measure of uncertainty in the conclusions drawn than any of us would like.

RC Saumarez
December 8, 2009 1:03 am

This is appalling. Speaking as a signal processor, this is pseudoscience. The GHCN should release all their data so that the nature of the adjustments can be seen. They should have to validate the adjustments.
Why? My guess is that the idea of global warming is so ingrained that anything that doesn’t conform to it is regarded as an error.
It is astonishing that the MSM and the wider scientific community haven’t really understood what is going on, and our economies are going to be reshaped on the basis of evidence such as this. The political mainstream in the UK will not engage in any argument about AGW: “the science is settled”. Maybe in 5 years’ time, when the Arctic ice remains normal and the Earth hasn’t warmed up, common sense may prevail.
We live in an age of enlightenment.

John Graham
December 8, 2009 1:04 am

You will find the problem in 1941 was due to the first Japanese air attack on Darwin, when most of the population headed south as fast as possible.

tallbloke
December 8, 2009 1:04 am

I hope we are going to see Anthony’s work on surface data published not too long after Copenhagen.

Rhys Jaggar
December 8, 2009 1:04 am

Send it to every delegate at Copenhagen, every President, Prime Minister or Cabinet Minister, every TV station, every school and every media mogul.
Tell ’em that the data in Darwin stinks. Stinks of shit.
And as sewage recycling is high on the Copenhagen agenda, you’d like to stick your nosey ass into another 100 stations in the GHCN record.

vdb
December 8, 2009 1:05 am

An adjustment trend slope of 6 degrees C per century? Maybe the thermometer was put in a fridge, and they cranked the fridge up by a degree in 1930, 1950, 1962 and 1980?

Neville
December 8, 2009 1:05 am

Willis, I’ve just looked at the BOM site here in Australia, and 2 stations cover the period 1882 to 2009.
The first is the Darwin Post Office, 1882 to 1941, no. 014016, and then Darwin Airport, 1941 to 2009, no. 014015.
The average mean temp (high) is 32.7 °C for the PO and 32.0 °C for the airport.

David Mellon
December 8, 2009 1:05 am

I am not surprised with your findings but I am very impressed with the work you have done and shared with us. Whenever someone withholds scientific data, it is natural to ask a lot of questions. In this case I have so many questions for the three climate information holders I am not sure where to start. Thank you!

December 8, 2009 1:07 am

This behavior keeps popping up over and over and I can’t believe that they are brazen enough to hide it in plain sight. I’ve been looking over data from historical weather stations where I live (Kamloops BC, Canada) and I find variations between stations even in a small area, which fits with the siting of the stations; i.e. airport temperatures appear higher. Since I started looking into climatology, I’ve been comparing “official” temperatures with my house thermometer and there are differences of a few degrees. Using USB temperature monitors this summer I found it difficult to compute the average temperature of even my yard, as each physical location appears to have its own unique temporal heat signature. This is amplified by adjacent plants in the summer, and the only time one sees homogeneity is in the winter, when every place in my yard not close to the heated house is uniformly cold (about -10 C today).
I suggest that we do some distributed processing by choosing weather stations where we live and performing the same type of data analysis. I’m in the process of writing a scraping program for Environment Canada weather sites, and then it’s just a matter of averaging daily temperatures to get monthly values and comparing them to “adjusted” values. This would be similar to Anthony’s surface stations project. If anyone has data analysis software already written to deal with averaging/displaying the data I’d be interested in getting it, as while I like programming as a hobby, I don’t want to reinvent the wheel unless I absolutely have to.
I’ll choose Kamloops BC as my part of the project.
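The monthly-averaging step described in the comment above can be sketched in a few lines of Python; the dates and temperatures are made-up examples, not Environment Canada data:

```python
# Collapse daily mean temperatures into monthly means, as a first step
# before comparing against an "adjusted" record. Values are invented.
from collections import defaultdict
from statistics import mean

daily = [
    ("2009-11-01", 2.1), ("2009-11-02", 1.8), ("2009-11-30", -0.5),
    ("2009-12-01", -8.2), ("2009-12-02", -10.0),
]

by_month = defaultdict(list)
for date, temp in daily:
    by_month[date[:7]].append(temp)  # group by "YYYY-MM"

monthly = {month: round(mean(temps), 2) for month, temps in by_month.items()}
print(monthly)
```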

Jason
December 8, 2009 1:07 am

According to breakfast TV in the UK, the Met have just released data that proves the last decade is the warmest on record.

Invariant
December 8, 2009 1:09 am

Brilliant!

ForestGirl
December 8, 2009 1:11 am

Thanks for the brilliant and painstaking work. Just one question: is there any information about the actual locations of the stations? That is, does the homogenization account for *actual knowledge* about the possibly changing environment around the stations? If, for example, Station 2, added around 1950, was in a field surrounded by trees whereas Station 1 was on the runway…

Peter Plail
December 8, 2009 1:11 am

Hey, I’ve seen some adjustments similar to that somewhere else:
[0.,0.,0.,0.,0.,-0.1,-0.25,-0.3,0.,-0.1,0.3,0.8,1.2,1.7,2.5,2.6,2.6,2.6,2.6,2.6]*0.75 ; fudge factor

December 8, 2009 1:13 am

Willis, greetings from the lucky country and – awesome post, thanks.
As you say “So I’m still on my multi-year quest to understand the climate data. You never know where this data chase will lead.”
I have the same fascination, I’m just glad you have done all the hard work and turned it into a something clear and digestible.
I’m sure most informed skeptics believe the world has been warming for 150 years. But when you see a post like this you do start to wonder. Maybe the modern warming is a “regional phenomenon”.. UAH does show much less warming in the southern hemisphere..
Are you going to do more posts on this subject? Please!

Nick Stokes
December 8, 2009 1:13 am

Here is a little history of the effect of the 1942 bombing on Darwin’s met operations.

UK Sceptic
December 8, 2009 1:14 am

I think the entire temperature record needs to be overhauled before politicians commit us to a path of economic meltdown.

Chilled Out
December 8, 2009 1:20 am

Your excellent analysis of this manipulation of raw data at Darwin shows what all programmers and business analysts know: a computer model will give the results it is programmed to give. In this case the artificial increase in weather station temperature readings appears fraudulent.
An obvious question is: “Where is the scientific research that can justify or legitimise the level of GHCN adjustments shown in your Figure 8?”
In any scientific study you inevitably have to sample datasets and sometimes remove bias from results. But such changes would normally be carefully documented and tested to demonstrate that they were not biasing the results. You would also expect error bars or confidence levels to be clearly identified so that the “accuracy” of the results can be assessed. Given the level of manipulation you have revealed on this site, the uncertainty equals or exceeds the alleged temperature rise.
The behaviour of the climate scientists involved in this work does not inspire confidence – the apparent lack of transparency and their refusal to share data or clarify their sources and methods points to chicanery and deception rather than open scientific enquiry and debate.

TomVonk
December 8, 2009 1:22 am

I must say that this is methodologically impressive. Really nothing to say about the analysis.
But why isn’t there a program that’d do a systematic screening?
It would take all the unadjusted data in a given region, then the adjusted data, and compute the adjustment.
That seems a rather easy way to check whether the Darwin craziness appears in many other cases or not.
I know that I probably underestimate the amount of coding, but from the functional analysis point of view it really doesn’t seem a very hard task.
At least not as compared to the problems of atmospheric non-equilibrium dynamics.
If such a program doesn’t exist, it would certainly be worth writing.
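The systematic screening proposed above really is a small program. A hypothetical Python sketch (station names, series, and the 0.1-degree-per-step threshold are all invented; a real version would read the GHCN raw and adjusted files):

```python
# Screen stations for suspicious adjustment trends: fit a straight line
# to (adjusted - raw) per station and flag slopes above a threshold.
def trend_per_step(series):
    """Ordinary least-squares slope of a series against its index."""
    n = len(series)
    xs = range(n)
    mx = sum(xs) / n
    my = sum(series) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, series))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Invented example data: A gets a steadily growing adjustment, B none.
stations = {
    "STATION_A": {"raw": [26.8, 26.6, 26.4, 26.2], "adj": [25.5, 26.0, 26.6, 27.2]},
    "STATION_B": {"raw": [14.1, 14.2, 14.0, 14.3], "adj": [14.1, 14.2, 14.0, 14.3]},
}

flagged = []
for name, s in stations.items():
    adjustment = [a - r for a, r in zip(s["adj"], s["raw"])]
    if abs(trend_per_step(adjustment)) > 0.1:  # arbitrary screening threshold
        flagged.append(name)

print(flagged)
```

Stations flagged this way would still need the kind of manual, history-aware inspection done in the article; the program only narrows the search.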

Martin Brumby
December 8, 2009 1:24 am

The ‘adjustments’ in Fig. 7 wouldn’t be based on the atmospheric CO2 levels at Mauna Loa, by any chance?
That would be a neat way of ‘adjusting’ the data. (Data Rape, I’d call it).
After all, we have Pershing’s evidence that the ‘science is incredibly robust’.
Well, he got the ‘incredibly’ bit right.

Pteradactyl
December 8, 2009 1:25 am

Will we ever get a politician brave enough to stand and fight the ever more obvious corruption in the world of climate change? Look how the Saudi minister has been vilified for speaking out about it; yes, he may have oil interests, and that has been jumped on, but the climate change gang also have big business on their side for the renewable energy products. No-one disagrees that the climate is varying, but it is natural and we have to prepare for whatever it is going to do. Let’s stop talking about ‘greenhouse gas’ being the cause; once it is proven that CO2 is not the problem, are they then going to blame the only ‘real’ greenhouse gas – water vapour – clouds!
Thank you Anthony for keeping the real world sane!

yonason
December 8, 2009 1:26 am

What they are hiding, apart from just “the decline”:
“Iceland Temperatures Higher In Both Roman & Medieval Warming Periods Than Present Temps Peer-Research Confirms”
http://www.c3headlines.com/2009/12/iceland-temperatures-higher-in-both-roman-medieval-warming-periods-than-present-temps-peerresearch-c.html
“Climategate: Is There Evidence That NASA/GISS Researchers Have Fabricated Global Warming? If There’s Smoke, It’s Usually A Fire “
http://www.c3headlines.com/2009/12/climategate-is-there-evidence-that-nasagiss-researchers-have-fabricated-global-warming-if-theres-smoke.html
(Or, context is everything.)
“The Climate Liars: Obama Administration Claims Fossil Fuels Kills Millions – A 100% Lie, Opposite of All Known Health Facts & Statistics”
http://www.c3headlines.com/2009/11/the-climate-liars-obama-administration-claims-fossil-fuels-kills-millions-a-100-lie-opposite-of-all-.html
And, so very much more.
The data is the data. The only reason to “adjust” it is to hide the fact that it was bad data to begin with. On top of that, the “adjustments,” made by the same people who couldn’t do the measurements properly, are only likely to multiply rather than “correct” any “errors” in the data. There is no reason to trust people who have been lying for decades. They have been doing it so long, it’s second nature to them. They no longer care about or know how to tell the truth.

December 8, 2009 1:32 am

Thank you so much for this. I’m currently learning R, although it’s slow going due to other commitments, but it’s exactly these types of articles that someone like me needs. It looks like we’re losing the political fight, so the only way to respond is with the science.
From what I’ve seen we have E.M. Smith’s work, A.J. Strata, a site called CCC, Steve McIntyre and Jean S over at CA, and I’m sure various others, and of course your good self, all working in various ways on the various datasets.
Can a way be found to get you all together, plus interested parties willing to do some work (like me), to really work on this and produce a single temperature record, but rather than rehash CRU’s, GHCN’s or GISS’s code in something like R, actually come up with a new set of rules for adjusting temperatures.
I’ve seen yourself, among others, complain about the way they adjust for TOBS and FILNET, and now we have this article, demonstrably showing other shenanigans going on. Whatever we came up with would probably need to be peer-reviewed to get the methodology accepted, but at least there’d be something we could all trust.
I’d be willing to put in work, I have plenty of spare bandwidth on a fast shared server, and skills in web programming, but that said it would probably be better co-ordinated from here or CA, as you already have the presence and the interested parties coming here.
Unless something like this is done, we’ll see the Met spending three years using the same code and coming up with almost exactly the same dataset, and we’ll have lost.

Patrik
December 8, 2009 1:34 am

I must contend that no homogenisation at all should be used on a global scale!
The averaging will sort itself out, since what one is showing is averages.
All homogenisation will lead to distorted data on a global scale.
Of course if one specifically needs to collect trends for a smaller region, then it could be necessary to homogenise, but never on a global scale.

Capn Jack Walker
December 8, 2009 1:35 am

As a walker in this nation, I have mixed with people who keep rainfall records in Australia. Australia is a harsh, hot place, mostly desert or desert borderline; it’s not temp, it’s rainfall we watch. Hot is hot, cold is cold.
No one dicks with rainfall measure. Not done.
No one dicks with station data.
Australia needs Stevenson screens Aus wide.

LB
December 8, 2009 1:37 am

OT, but Jack the Insider of The Australian has a Climategate blog. He is usually very good but completely misses the point, perhaps more knowledgeable chaps/chapettes would like to set the record straight:
http://blogs.theaustralian.news.com.au/jacktheinsider/index.php/theaustralian/comments/climategate_lame_by_any_other_name/

Jack Green
December 8, 2009 1:38 am

Stephen Wilde (00:51:39) : Nasa’s SABER satellite is on to something. Thanks Stephen. This program needs to be extended.

December 8, 2009 1:41 am

Wow. More manufactured fakeness than a million Hollywood blockbusters! Not a smoking gun, not a nuclear explosion: the birth of a new GALAXY! When this hits the fan, Copenhagen will become “broken wagon” – the wheels are falling off! Great job!

yonason
December 8, 2009 1:44 am

RE my yonason (01:26:02) :
I said, “The only reason to “adjust” it is to hide the fact that it was bad data to begin with.” By that I meant, of course, that if “adjustments” really were needed, then the data was bad. However, when the data is good, that’s even worse, because we are no longer dealing with incompetence, but premeditated deliberate deception.

rcrejects
December 8, 2009 1:46 am

Could this perhaps explain the reluctance of CRU to release the data?

harpo
December 8, 2009 1:48 am

Greetings from Australia.
Nobody in the current Australian government cares a damn about whether the temperature has gone up, down or sideways. They want to implement a tax so they can collect money from the rich polluters (their words, not mine) and give it to the poor working man (that’s my simplistic reading of it).
Climate Change, Climate Change, Climate Change, Tax will fix it, Tax will fix it, Tax will fix it… Climate Change, Climate Change…..
Now will you critical thinkers just give up and love Big Brother. 2 + 2 = 5, remember…. 2 + 2 has always equalled 5…..
As an engineer educated in Australia in the 80’s it both breaks my heart and scares the [snip] out of me…
(Interestingly, before 1984, Orwell’s 1984 was required reading for all year 11/12(?) students in Victoria… now I can’t find anybody under the age of 40 who has read it)

Andrew P
December 8, 2009 1:49 am

“Those, dear friends, are the clumsy fingerprints of someone messing with the data Egyptian style … they are indisputable evidence that the “homogenized” data has been changed to fit someone’s preconceptions about whether the earth is warming.”
Wow. So GHCN blatantly adjusts the raw data to create a post-war warming trend where none existed. And the GISS data appears to match GHCN’s (once it has also been ‘adjusted’). And CRU won’t release their raw data, which doesn’t inspire much confidence – not that I had much in them after recent developments.
From this example, and given the implications for the world economy currently being discussed in Copenhagen, I think that the precautionary principle should be adopted, and all adjusted data from GHCN, GISS and CRU should be classed as fraudulent, until proven otherwise. Willis’ essay should be sent to every politician and journalist possible.

December 8, 2009 1:50 am

“Barry Foster (00:51:02) :
Michael. Or could it have been in December 1942 when the British radio station in Hong Kong picked up radio traffic about the forthcoming attack on Pearl Harbour and decyphered it – and it was made known to Roosevelt, who kept it to himself in order to allow a way in to the war? Ooohh!”
Seeing how the Japanese attack on Pearl Harbour took place a year earlier, I think this is unlikely.
Seriously, though – this is great work and demonstrates exactly why climate science must be conducted openly and with free access not only to the raw data, but the methodology used to analyse it.
As a layman I look at the raw data and the “homogenized” version and can only assume that “homogenized” actually means massaged to fit a political preconception.

John in NZ
December 8, 2009 1:55 am

Thank you Willis.
I really learn a lot from your posts.

Jack Green
December 8, 2009 1:56 am

I think NASA knows that their CO2 models are flawed. Note the comment in this paper (thanks Stephen Wilde) that SABER is direct measuring CO2 ratios where GCM models are being used in climate simulations. Interesting that one hand doesn’t know what the other is doing or do they?
Abstract from:
http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20090004446_2009001269.pdf
The Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) experiment is one of four instruments on NASA’s Thermosphere-Ionosphere-Energetics and Dynamics (TIMED) satellite. SABER measures broadband infrared limb emission and derives vertical profiles of kinetic temperature (Tk) from the lower stratosphere to approximately 120 km, and vertical profiles of carbon dioxide (CO2) volume mixing ratio (vmr) from approximately 70 km to 120 km. In this paper we report on SABER Tk/CO2 data in the mesosphere and lower thermosphere (MLT) region from the version 1.06 dataset. The continuous SABER measurements provide an excellent dataset to understand the evolution and mechanisms responsible for the global two-level structure of the mesopause altitude. SABER MLT Tk comparisons with ground-based sodium lidar and rocket falling sphere Tk measurements are generally in good agreement. However, SABER CO2 data differs significantly from TIME-GCM model simulations. Indirect CO2 validation through SABER-lidar MLT Tk comparisons and SABER-radiation transfer comparisons of nighttime 4.3 micron limb emission suggest the SABER-derived CO2 data is a better representation of the true atmospheric MLT CO2 abundance compared to model simulations of CO2 vmr.

Stacey
December 8, 2009 1:57 am

Dear Willis
This is a great post, and it demonstrates something that I have always intuitively felt about the treatment of data and the way the graphs are deliberately drawn to create alarm.
I know you looked at the CET and would appreciate a link, which I have lost.

Deadman
December 8, 2009 2:01 am

There’s a typo in the penultimate paragraph: “in omnis” is Latin for “in all.”

Ken Seton
December 8, 2009 2:02 am

Willis – Great read and easy to follow.
You do us a great service.
Thanks from Sydney (Aus).

Scott of Melb Australia
December 8, 2009 2:12 am

I have just forwarded this link to one of our more science-savvy opposition senators in Australia.
(you know the ones that revolted against the Carbon tax here in Australia and voted it down)
Plus had to include Andrew Bolt hoping it will get into his column in the MSM tomorrow.
Nothing like giving them a little Ammo.
Great article
Scott

KeithGuy
December 8, 2009 2:12 am

Excellent work Willis,
When it comes to the agreement between the three main global temperature datasets, the term “self-fulfilling prophecy” comes to mind.
Someone somewhere started with the notion that there has been a warming of something like 0.7 of a degree over the twentieth century, and the CRU, GISS, and GHCN have manipulated their data, using three different, contrived methodologies, until it agrees with their pre-conceived ideal.
It appears to me that the whole exercise of reconstructing historic global temperatures owes very little to science and much more to “clever” statistics.

supercritical
December 8, 2009 2:13 am

Raw data is just that.
And, anybody subsequently approaching the raw data must have an a-priori motivation, which must be made explicit.
If there are gaps, and bits of the raw data that are unsuitable for the purposes of the current study, then why not leave them out altogether?
IF the motivation is to determine whether or not today’s ambient global air temperatures are hotter or colder than they were, then a continuous record is NOT required. Rather, as long as there were matching continuous sequences of a few years, this would be sufficient for the purpose.
So why do the climate scientists need a ‘continuous record’? And for what purpose are they trying to create an artefactual proxy of the real raw data? And in so doing, aren’t they creating a subjective fiction? An artefact? A man-made simulation?
Isn’t this similar to producing a marked-up copy of the dead-sea scrolls, with the corrections in felt-tipped pen, and missing bits added-in in biro, and then calling it ‘the definitive data-set’ ?

Geoff Sherrington
December 8, 2009 2:14 am

There is an unknown in the equation.
The Darwin data were collected by the Bureau of Meteorology, who have their own sets of “adjustments”. I am trying to discover if the Darwin data sent to GHCN are unadjusted or Australian-adjusted. By coincidence, I have been working on Darwin records for some months. There was an early station shift from the Post Office to the Regional Office near the town in 1962 (which would have gradually built up some UHI) to the airport, which in 1940 was way, way out of town but which is now surrounded by suburbs on most sides, so UHI is complicit again.
There is a BOM record of data overlap in the period 1967 to 1973. Overall, the Tmax was about 0.5 deg C higher at the airport and the Tmin was about 0.5 deg C lower at the airport during these 7 years. The Tmax averaged 31.5 deg C and 32.1 deg C at Office and Airport respectively. The Tmin averaged 23.8 and 23.2 at Office and Airport. Of course, if you take the mean, the Office is the same as the airport.
However, my problem is that I do not know if the Office and the Airport use raw or Australian adjusted data. I suspect the latter. If you can tell me how to display graphs on this blog I’ll put up a spaghetti graph of 5 different versions of annual Taverage at Darwin, 1886 to 2008. The worst years show a difference between adjusters of about 3.5 deg C, with KNMI GHCN version 2 adjusted being lower than recent BOM official figures.
I still do not know if any of us has seen truly raw data for Darwin.
Or from any other Australian station.

December 8, 2009 2:16 am

http://www.bbc.co.uk/blogs/ethicalman/2009/03/obama_will_circumvent_congress_to_limit_us_emissio.html
Democracy supplanted by the will of Obama and the unelected EPA of America.
See what John Podesta, Obama’s top adviser has to say.

Aligner
December 8, 2009 2:23 am

An excellent article.
But it doesn’t stop there. In order to arrive at a “global average”, gridding is used, with backfilling of sparse areas by methods such as averaging or interpolating from “nearby” stations, taking no account of topography, etc. Any error such as this therefore carries more weight than it would in areas where records are more prolific.
And nowhere is the margin of error thus introduced accounted for. It has always seemed to me that the degree of warming being measured is probably less than the margin of error of the temperature record itself, especially when SSTs from bucket measurements are added. Add in the UHI effect (even if you adjust for that too) and the margin for error increases again.
Ultimately, therefore, IMHO the whole exercise becomes meaningless and making alarmist observations, let alone blaming CO2, preposterous.
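The gridding concern above can be seen in a toy example. This is a hedged sketch, not CRU/GISS/GHCN's actual gridding code, and the coordinates and anomaly values are invented: when an empty cell is infilled by inverse-distance weighting from “nearby” stations, a lone station's error flows straight into the infilled value.

```python
# Toy inverse-distance infill of an empty grid cell from nearby stations.
# Not any group's actual gridding code -- just a sketch of how an error
# in a lone "nearby" station passes straight into a sparse cell.

def idw(cell_xy, stations):
    """Inverse-distance-weighted anomaly for a cell with no station of its own."""
    cx, cy = cell_xy
    num = den = 0.0
    for (sx, sy), anom in stations:
        d = ((sx - cx) ** 2 + (sy - cy) ** 2) ** 0.5
        w = 1.0 / d  # assumes no station sits exactly on the cell centre
        num += w * anom
        den += w
    return num / den

stations = [((0.0, 0.0), 0.2), ((4.0, 0.0), 0.3)]
print(round(idw((2.0, 0.0), stations), 2))  # 0.25: midpoint is a plain average

# With only ONE station in range, the cell simply inherits its value,
# spurious adjustments and all:
print(round(idw((2.0, 0.0), [((0.0, 0.0), 1.9)]), 2))  # 1.9
```

With two equidistant stations the cell is a plain average; with one, whatever is wrong with that single record is copied into the cell unchecked.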

Donald (Australia)
December 8, 2009 2:24 am

It would be interesting to feed this through to Copenhagen, and have some brave soul present it to the assembled zombies.
A clove of garlic, sunlight, or the sight of a wooden stake could not arouse more panic, or howls of anger.

December 8, 2009 2:26 am

Lets all “homogenize” our data
Into chunks of bits and pieces,
Lets forget which way is up or down
And randomize our thesis,
So black is white and white is brown
And purple wears a hat,
And when our data’s goose is cooked,
We’ll all say, “How HOT is that?”
.
.
©2009 Dave Stephens
http://www.caricaturesbydave.com

skylarker
December 8, 2009 2:29 am

From a tyro sceptic. Thank you for this excellent paper. More please.

Rob
December 8, 2009 2:38 am

This could explain the MSM reaction: he owns ALL the papers in Australia. This country basically has no freedom of the press anymore.
copied from another site
“Phil Kean wonders why Sky gives so much time to the global warming scare. Perhaps it could be because it is owned by News International which is run by James Murdoch who is married to a climate change fanatic. Kathryn Hufschmid runs the Clinton Climate Initiative.
.
I understand that News International also owns a number of newspapers in this country. I don’t suppose that the fact that the boss’s wife is an AGW nutter has any influence on the editorial policy of those newspapers.
.
It almost makes me wish that Daddy Rupert still had personal control of the media in this country.

Phillip Bratby
December 8, 2009 2:43 am

Onwards and upwards.
Great work Willis; much appreciated amidst all the BS surrounding Copenhagen.

December 8, 2009 2:44 am

The lack of transparency is the problem. The adjustments should be completely disclosed for all stations, including reasons for those adjustments. You have to be careful drawing conclusions without knowing why the adjustments were made. It certainly looks suspicious. In Torok, S. and Nicholls, N., 1996, An historical temperature record for Australia, Aust. Met. Mag. 45, 251-260 – which I think was the first paper developing a “High Quality” dataset (not sure that is how I would personally describe it given the Australian data and station history, but moving along…) – one example of the adjustments made to the 224 stations used in that paper is given, for Mildura. The adjustments and reasons (see p. 257):
<1989 -0.6 Move to higher, clearer ground
<1946 -0.9 Move from Post Office to Airport
<1939 +0.4 New screen
<1930 +0.3 Move from park to Post Office
1943 +1.0 Pile of dirt near screen during construction of air-raid shelter
1903 +1.5 Temporary site one mile east
1902 -1.0 Problems with shelter
1901 -0.5 Problems with shelter
1900 -0.5 Problems with shelter
1892 +1.0 Temporary site
1890 -1.0 Detect
“Detect” refers to use of the Detect program (see paper). The “<” symbol indicates that the adjustment was made to all years prior to the indicated year.
The above gives an idea of the type of adjustments used in that paper and the number of adjustments made to the data. For the 224 candidate stations, 2,812 adjustments were made in total. A couple of points: the adjustments are subjective by their very nature. Use of overlapping multi-station data can assist. I have concerns about the size of the errors these multiple adjustments introduce, but I am certainly no expert. I wonder what the error bar is on the final plot when we are talking of average warming in the tenths of a degree C over a century. The stations were really never designed to provide the data that they are being used for, but that is well known.
My point is that without the detailed station metadata it might be too early to draw a conclusion. This is why we need to know what adjustments were made to each station and the reasons. Surely this data exists (if it doesn’t then the entire adjusted data series is useless, as it can’t be scrutinised by other scientists – maybe they did a CRU with it!?) and if it does, why is it not made public or at the very least made available to researchers? Have the data keepers been asked for this? I am assuming they have.
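To see how the stepwise adjustments quoted above combine, here is a minimal sketch. It is illustrative only, not the Detect program or the BoM's actual procedure, and the raw value is invented; it applies the Mildura deltas, where a “&lt;YEAR” entry shifts every year strictly before YEAR and a bare year shifts that single year.

```python
# Sketch of how Torok & Nicholls-style step adjustments accumulate
# (illustrative only -- not the Detect program or the BoM's procedure).
# A "<YEAR" entry shifts every year strictly before YEAR; a bare year
# shifts that single year only.

adjustments = [  # (year, delta_degC, applies_to_all_prior_years)
    (1989, -0.6, True),   # "<1989  move to higher, clearer ground"
    (1946, -0.9, True),   # "<1946  move from Post Office to Airport"
    (1939, +0.4, True),   # "<1939  new screen"
    (1930, +0.3, True),   # "<1930  move from park to Post Office"
    (1943, +1.0, False),  # "1943   pile of dirt near screen"
]

def adjust(year, raw):
    """Return the adjusted value for one year of the raw series."""
    t = raw
    for y, delta, all_prior in adjustments:
        if (all_prior and year < y) or (not all_prior and year == y):
            t += delta
    return t

# A year before 1930 collects all four "<" steps: -0.6 -0.9 +0.4 +0.3 = -0.8
print(round(adjust(1920, 20.0), 1))  # 19.2
```

The point is that early years can carry several compounded corrections at once, which is why the reasons for each one matter.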

Charles. U. Farley
December 8, 2009 2:45 am

From FOIA2009.zip/documents/osborn-tree6/tsdependent/compute_neff.pro
; ***Although there are no programming errors, as far as I know, the
; ***method would seem to be in error, since neff(raw) is always greater
; ***than neff(hi) plus neff(low) – which shouldn’t be true, otherwise
; ***some information has somehow been lost. For now, therefore, run
; ***compute_neff for unfiltered series, then for low-pass series, and
; ***subtract the results to obtain the neff for the high-pass series!

Ryan Stephenson
December 8, 2009 2:46 am

Can I please correct you? You keep using the phrase “raw data”. Averaged figures are not “raw data”. Stevenson screens record maximum and minimum DAILY temperatures. This is the real RAW data.
When you do an analysis of temperature data over one year you should always show it as a distribution. It will have a mean and a standard deviation. Take the UK. It may have a mean annual temperature of 15 Celsius with a standard deviation of 20 Celsius.
Without the distribution the warmists can say “The mean of 2001 was 0.1 Celsius higher than the mean of 2000. This is significant – we are heating the planet”. With the distribution you would say “The mean of 2001 was 0.1 Celsius higher than 2000, but since the standard deviation of the annual distribution is 20 Celsius, we cannot consider this statistically significant”.
If we had the REAL raw data we could almost certainly show that the off-trend averages of the last few decades were of no statistical significance anyway, before we even got into the nitty-gritty of fudges to the data. By using slack language to describe mean annual temperatures as “raw data” we are falling into a trap set by the warmists.
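The point about distributions can be made concrete with a rough sketch. The numbers below are invented (not real UK or Darwin data): instead of quoting two bare annual means, compare two years of daily readings with Welch's t-test, which judges the 0.1 °C shift against the standard error of the sample rather than eyeballing the means.

```python
# Rough sketch (invented numbers, not real station data): compare two
# years of daily temperature readings instead of two bare annual means.
# With a daily spread of ~20 degC, a 0.1 degC shift in the annual mean
# is judged against the standard error, not just quoted as "higher".
import math
import random

random.seed(0)
year_a = [random.gauss(15.0, 20.0) for _ in range(365)]
year_b = [random.gauss(15.1, 20.0) for _ in range(365)]  # 0.1 degC warmer

def welch_t(x, y):
    """Welch's t statistic for two independent samples."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    vx = sum((v - mx) ** 2 for v in x) / (len(x) - 1)
    vy = sum((v - my) ** 2 for v in y) / (len(y) - 1)
    return (my - mx) / math.sqrt(vx / len(x) + vy / len(y))

t = welch_t(year_a, year_b)
print(f"t = {t:.2f}")  # compare |t| with ~1.96 for 5% significance
```

With a spread of ~20 °C and one year of daily data on each side, the standard error is roughly 1.5 °C, so a 0.1 °C difference in means is far too small to register as significant.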

December 8, 2009 2:46 am

Willis,
Please email me your last graph as I would like to use it in public lectures, with attribution. A low resolution one would look shoddy. david.archibald@westnet.com.au
With thanks

pyromancer76
December 8, 2009 2:48 am

Amazing. Couldn’t sleep (West Coast); when I began reading WUWT there was only one comment. Now 55. When I finish commenting, probably over 200.
Anthony and Willis Eschenbach, masterful work, a skillful exposé of the purposeful fraud. This becomes a whodunnit escapade and I am beginning to want to know WHEN it began in earnest. When was the temperature data of Darwin doctored? Who ordered it? Sometime between 1998 and 2001 (the change in IPCC reports)?
I no longer believe “they probably began with good moral purposes from a desire to save humankind”. This deed was foul from the beginning. The 2008 U.S. election cycle had to be part of the “plan”. Too much fraud; too much unexplained; too much money from financial types; too much money from overseas; the ballot boxes stuffed or votes changed at the end in too many areas, not just the “normal” expected areas of voting fraud (such as Chicago) in American history that goes back into the 19th century. This was/is massive.
harpo (01:48:28) : “Greetings from Australia.
Nobody in the current Australian government cares a damn about whether the temperature has gone up down or side ways. They want to implement a tax so they can collect money from the rich polluters (their words not mine) and give it to the poor working man (that’s my simplistic reading of it).”
Harpo seems to have a handle on the matter, or at least on the “rationale”. The “they” who are implementing the tax are also getting large salaries, excellent medical care, fantastic retirement bundles, and jet set perks (Copenhagen anyone, with free prostitute services?). “They” also can direct this tax money any which way “they” want. This “they” also includes corporations that are no longer making a profit on their products so they are turning to trading fees and largesse from the “they” their money helped elect; Enron seemed to begin this kind of trading scam. It is like vultures descending on the savings and retirements of the hard-working developed-world individuals and families — now that manufacturing and its collateral industries have left for China, India and other parts of the world.
Keep up the good work. Maybe “we” can “save the world” from the “they”.

Charles. U. Farley
December 8, 2009 2:50 am

FOI2009.zip\FOIA\documents\ipcc-tar-master
Lot of dissent displayed.
What happened to it all?

December 8, 2009 2:56 am

Great analysis.
The scale of the deception just keeps getting bigger.
Incredible.

Ripper
December 8, 2009 3:02 am

Warwick Hughes has a bit more info on this
http://www.warwickhughes.com/blog/?p=317

December 8, 2009 3:05 am

The last step up seems to be around 1979. It would be interesting to see if any other upsteps have happened since then, once satellite data went online.

Phillip Bratby
December 8, 2009 3:10 am

The Met Office has released data as a zip file at http://www.metoffice.gov.uk/climatechange/science/monitoring/subsets.html. It doesn’t look very user-friendly.

sHx
December 8, 2009 3:13 am

Superb case study! Absolutely superb! It is also a lesson to other scientists on how to articulate issues in simple, layman’s terms. Sir, I salute your communication skills.
And I am sure the people from the crocodile country would be delighted to hear that a world class blogger is putting Darwin on to the world map.

Richard
December 8, 2009 3:16 am

So lets sum up then:
IPCC data = GHCN data = GISS data = CRU data
The official line (and that of genuine believers): CRU data is reliable. Why? Because it “independently” shows the same profile and trends as the “independent” GHCN data and the GISS data.
Actually the 3 data sets are very nearly one and the same.
[While on the subject] – but hey, the satellite data also show the same trends and same profile since 1979. But a little bell rings – somewhere I read that satellite data is “calibrated” with ground data. For all practical purposes there is only one set of ground data: GHCN. So does the satellite data unwittingly correspond to the ground data because it is calibrated with them?
But then again I read that the satellite temperatures correspond with the balloon radiosonde temps. So maybe that cannot be?
Dr Spencer if you read this could you comment please?
GHCN Aussie adjusted data big warming since 1950. (Funnily enough this is the period also when AGW started and was identified by the IPCC.) Raw data shows cooling. (Sounds familiar? NIWA?)
Darwin adjustments – oh oh….

Robinson
December 8, 2009 3:17 am

They want to implement a tax so they can collect money from the rich polluters (their words not mine) and give it to the poor working man (that’s my simplistic reading of it).

I don’t believe this. Even an economic illiterate knows that increased costs on business are passed on to the consumer. The margins don’t change, do they? Not to mention the fact that the poor working man (in the UK at least) will have to holiday at Bognor Regis rather than in Alicante, and he’ll find it increasingly difficult to have his heating on in the winter, etc. Not really the kind of policy platform designed to improve the life of the poor, unwashed masses, is it?
No, I think this is more to do with energy security (vital national interest), coupled with a lot of highly stupid activist scientists. I would use the word opportunistic, but I don’t think it strong enough.

December 8, 2009 3:25 am

I have often wanted to check out the long-term Darwin record but never got around to it. Since I was a boy (which is some time ago) I have noticed Darwin’s temperature on the evening news is always between 32-34 C. A perfect place to test climate change.
Thanks for the great work….this could grow.

Peter Pond
December 8, 2009 3:28 am

I have the good fortune to have been born in Darwin, not so many years after the Japanese bombs stopped falling.
Darwin Post Office site temp records (1882-1942) show a mean max of 32.6C and a mean min of 23.6C. The monthly mean max temps ranged from 30.6C in January to 34.2C in November (just before the monsoon arrives). It is 24 metres above sea level.
Darwin Airport site temp records (1941 – 2009) show a mean max of 32.0C and a mean min of 23.2C. The monthly mean max temps range from 30.5C in January to 33.3C in November. It is 30 metres above sea level.
Both sites are close to the sea on three sides.
It can be easily seen that (a) being tropical, temps in Darwin do not vary all that much over the year and (b) that the temps were slightly higher in the earlier years – i.e. there would appear to have been some cooling since the early part of the 20th century.

jamesafalk
December 8, 2009 3:28 am

I’ve never said this to a datahead man (or any other sort) before… but I think I love you. This is about as clear, convincing and robust a paper as anyone could put together given the available data.
Is that a Copenhagen herring I smell, or just something altogether fishy?

Nick
December 8, 2009 3:38 am

Why homogenise?
I see no need to homogenise at all. If you have a record starting in 1950 and ending in 1980, then you can fit a line and get the trend for that period. Is it up or down?
Likewise for the adjacent sites.
1. There is no need to join up different temperature records to make one.
2. Picking one site and not others is a cherry pick. Why exclude the other sites from the records if you use them for adjusting another site? There is no reason.
3. Missing months are relatively easy to fix. Create an average seasonal record. Average all Jan, Feb, Mar, … figures. If you miss Feb’s figure you can interpolate based on this curve.
4. Accuracy should improve as you get closer to today. Anything else is evidence of fraud. So if the adjustments become larger as we get closer to today, it’s fraud.
5. I’m not sure what you do about the UHI. It seems to me that the adjustments are large positive adjustments. At the same time the alarmists say it’s a small effect. It should be a negative effect. It’s not consistent.
6. Surveys of sites are the only way of deciding if they are fit for purpose.
There is a research paper that can easily be done in the US. With Anthony’s data, you can show that the rural/urban classification NASA bases on lights etc. is wrong.
Nick
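Nick's point 3 (filling a missing month from the seasonal curve) might, in its simplest form, look something like the sketch below. The monthly values are invented, standing in for real station data, and the fill uses the plain climatological mean for that calendar month rather than anything fancier.

```python
# Sketch of point 3: fill a missing month from the station's own
# seasonal climatology (the long-term mean for that calendar month).
# Dummy numbers -- not real Darwin data.

records = {  # year -> 12 monthly means; None marks a missing month
    1950: [31.5, 31.2, 31.8, 32.0, 31.0, 29.5, 29.0, 30.1, 31.7, 32.9, 33.1, 32.4],
    1951: [31.7, None, 31.6, 31.8, 30.8, 29.3, 29.2, 30.3, 31.9, 33.0, 33.2, 32.2],
    1952: [31.3, 31.0, 31.9, 32.1, 31.1, 29.6, 28.8, 30.0, 31.5, 32.8, 33.0, 32.5],
}

def monthly_climatology(records):
    """Mean of each calendar month across all years that have data for it."""
    clim = []
    for m in range(12):
        vals = [yr[m] for yr in records.values() if yr[m] is not None]
        clim.append(sum(vals) / len(vals))
    return clim

def fill_gaps(records):
    """Replace each missing month with its climatological mean."""
    clim = monthly_climatology(records)
    return {y: [clim[m] if v is None else v for m, v in enumerate(yr)]
            for y, yr in records.items()}

filled = fill_gaps(records)
print(round(filled[1951][1], 2))  # 31.1: Feb 1951 <- mean of Feb 1950 and Feb 1952
```

The design choice here is deliberate: the gap is filled from the station's own record, so no neighbouring-station homogenisation is needed for an isolated missing month.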

December 8, 2009 3:39 am

Willis – hats off to you – an amazing post and superb clear analysis of the data.
Have blogged/linked/twittered and sent to the Opposition Leader here in the UK.

Fair Go
December 8, 2009 3:40 am

If we don’t have documentation for each and every adjustment made to homogenize the data, such homogenization must be considered suspect. Similarly, that audit trail must be externally audited before that presumption of suspicion is lifted! If the UEA/CRU don’t have that information they simply haven’t been doing their job, and one has to wonder what they’ve been doing to earn their grants. The process of adjusting has the smell of “synthesising data”, as we used to call making up results for high school science pracs.
Repetitive and unexciting work? Perhaps, but that’s what lies at the heart of much data collection in experimental science.

KimW
December 8, 2009 3:42 am

A clear and well-reasoned argument that shows what a proper analysis should be. How did the CRU get so far from what is so clearly outlined here? My thanks.

P Gosselin
December 8, 2009 3:46 am

Dave UK:
– Podesta:
Leadership role my fanny!
This is what we call authoritarianism based on Stalinist science.
Next it’s:
1. water (a water crisis is currently being manufactured)
2. food (meat, sugar and fat)
3. information
Is there a Paul Revere left in USA?

Mike in London
December 8, 2009 3:50 am

Thank you, Mr Eschenbach, for a superb article – a tribute to genuine investigation and perseverance. It is the sort of thing that the “quality” papers in the UK would once have done themselves in the “Special Investigations” they so love to pursue, but for the subject now being utterly inimical to their current editorial positions. The lack of journalistic investigation of Climategate and the whole AGW movement is rapidly emerging as the 2nd most scandalous aspect (after the suborning of the scientific method itself) of the whole global warming farrago.
I am not very hopeful that the global warming juggernaut, even after Climategate, can be prevented from causing at least temporary economic damage to the planet, rich and poor nations alike. But resistance is not futile; the true position is gradually being established with work such as yours, and I am quite sure that the future will universally recognise the voices of sanity such as yours and McIntyre’s that were raised at this time of global warming hysteria; just as the MSM will look back on it as their time of greatest shame.

Jim Greig
December 8, 2009 3:52 am

I know what we can do with Guantanamo. We can fill it with all the AGW disciples who are terrorizing the Earth with their fraudulent data!

hillbilly76
December 8, 2009 3:54 am

As a 7th generation Tasmanian I’m proud that many scientists have taken up the work of the Tasmania-based late great John L Daly. I recommend his “What’s Wrong With the Surface Record?” at http://www.john-daly.com/ges/surftmp/surftemp.htm as a great resource. Read about the Low Head ground station and how the scientists ignored the changed circumstances there even when told. Also see http://www.john-daly.com/cru-index.htm for his email exchanges (not leaked) with East Anglia CRU head Phil Jones after John had caught them out in an obvious mistake. It is a great insight into the mindset of those scientists and very relevant to the current Climategate scandal. No wonder that on hearing of John’s death Jones callously told Mike Mann that “in an odd way, that’s cheering news”!

boballab
December 8, 2009 4:04 am

Is it just me, or does the raw data in Fig. 2 and Fig. 6 look like it corresponds to the GCMs’ non-CO2-forced blue section? You know, what the temps would be if there were no CO2 forcing; they just look correlated to my Mk 1 eyeball while scrolling back and forth. Willis, have you tried superimposing the raw data on the model non-CO2-forced temp graph? It would be funny if they did match, because then their own model would show that the only man-made warming is adjustments to the raw data.

December 8, 2009 4:05 am

“The earth has generally been warming since the Little Ice Age, around 1650. There is general agreement that the earth has warmed since then.”
Especially steady has been Copenhagen: http://i45.tinypic.com/ele3bp.jpg

Richard
December 8, 2009 4:06 am

I have been a lurker on this website for some time and continue to be impressed with the quality of analyses presented here.
Keep up the great work!!

Turboblocke
December 8, 2009 4:11 am

Satellite measurements are calibrated against SI standard measures or by in situ temperature measurements.

Campbell Oliver
December 8, 2009 4:12 am

I’m very interested in following the discussions here. But there’s one thing I don’t understand, and I know that this is going to sound really seriously dumb, but I do want to know. It’s this. When people talk about an “anomaly”, I understand that this means a deviation from some value (probably some average, or a single reference value). But it seems that this value is never mentioned. So, my question is this: is there some standard definition of the value upon which temperature (and other) anomalies are based? If so, what is it? If not, how do people know what the actual temperature for some point is, given the value for the anomaly at that point?
Many thanks for any pointers to some 101 (or even a kid’s pre-101).
PS – I’ve tried googling “temperature anomaly definition” etc., with no luck.
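For what it's worth: an “anomaly” is the departure from the same station's (or grid cell's) own long-term mean over an agreed base period; GISS, for example, uses 1951-1980 as its base. A minimal sketch, with invented annual means:

```python
# Minimal sketch of how a temperature anomaly is computed: subtract the
# station's own mean over a fixed base period (here 1951-1980) from each
# year's value.  The annual means are invented for illustration.

annual = {1950: 25.1, 1955: 25.3, 1960: 25.0, 1965: 25.2,
          1970: 25.4, 1975: 25.3, 1980: 25.5, 1985: 25.8}

base_years = [y for y in annual if 1951 <= y <= 1980]
baseline = sum(annual[y] for y in base_years) / len(base_years)

anomalies = {y: round(t - baseline, 2) for y, t in annual.items()}
print(anomalies[1985])  # 0.52: warmer than the 1951-1980 mean
```

Because each series is referenced to its own baseline, anomalies from stations with very different absolute temperatures can be compared and averaged, which is the usual reason the absolute value goes unmentioned.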

December 8, 2009 4:14 am

Further to my last post,
This item was aired on UK TV 9 months ago. Yesterday’s statement by the EPA reminded me of it.
That Obama and Podesta are using the EPA as a tool to bypass the democratic process in America.
Judging by what the EPA had to say things are going to plan for Obama.
Suppression of debate and oppression of democracy are what is being used.
People should fear that more than MGW.

December 8, 2009 4:15 am

.
And Darwin is a good location to see what is reeaally happening to the climate. Darwin was and is hundreds of miles from anywhere, and so its temperatures will be unaffected by urban growth (as long as someone did not build a barbie next to the station). Darwin is as ‘pure’ in climate terms as you are going to get.
Can everyone forward this item onto your local parliamentarians and media. This IS important.
.

KeithGuy
December 8, 2009 4:18 am

Richard (03:16:38)
“So does the satellite data unwittingly correspond to the ground data because it is calibrated with them?”
I believe that satellite temperature data is calibrated by comparison with independent and concurrent thermometer data, but it is interesting that, now that we have more confidence in our global temperature metrics (except GISS…?), global warming seems to have stopped; instead, the manipulation is being applied retrospectively in an attempt to reduce early 20th-century temperatures.
As Churchill once said: “It is all right to rat, but you can’t re-rat.”

Arnost
December 8, 2009 4:19 am

David Archibald (02:46:51);
Geoff Sherrington (02:14:27);
Geoff Sharp (03:25:34);
And Willis;
Please be aware that there is no continuous station in Darwin from 1880 to 1962 (as per Sherro’s post) or 1991 as per GISS (station zero).
The station of record was Darwin Post Office from 1882 until it suffered a direct hit from a Japanese bomb during the first raid on 19 February 1942. (The postmaster Hurtle Bald, his wife and daughter and 7 post office staff members were all killed instantly, and the post office itself was utterly destroyed.) The station of record from then on was Darwin Airport (which had about a year’s overlap with the Post Office at that time).
So (as per Willis’ graph above) Station Zero is in itself a splice of at least two stations (The Post Office and presumably the Airport – but I have no explanation of why it ends in 1991…)
Warwick Hughes did a post up on this about a month ago: http://www.warwickhughes.com/blog/?p=302#comments
Where there is a photograph of the Stevenson Screen at the PO from the 1880’s…
And I did one at Weatherzone at the same time:
http://forum.weatherzone.com.au/ubbthreads.php?ubb=showflat&Number=795794#Post795794
Where I have links to BoM data for the stations plus a link to some interesting stuff John Daly did a while back on Darwin.
cheers
Arnost

Stephen Wilde
December 8, 2009 4:25 am

Jack Green (01:56:09)
Thanks for pointing up the importance of the SABER observations.
I’ve gone into some detail on the potential implications here:
http://climaterealists.com/attachments/database/The%20Missing%20Climate%20Link.pdf
mostly in the second half.
It’s been somewhat overshadowed by the Climategate publicity but I’m hoping it will be noted more widely in due course.

wobble
December 8, 2009 4:28 am

Wow! It’s bad enough to use highly aggressive step function adjustments when “correcting” for station moves. But these continuous adjustments are inexcusable.

Turboblocke
December 8, 2009 4:33 am

“K … but given the scarcity of stations in Australia, I wondered how they would find five “neighboring stations” in 1941 …
So I looked it up. The nearest station that covers the year 1941 is 500 km away from Darwin. Not only is it 500 km away, it is the only station within 750 km of Darwin that covers the 1941 time period. (It’s also a pub, Daly Waters Pub to be exact, but hey, it’s Australia, good on ya.) So there simply aren’t five stations to make a “reference series” out of to check the 1936-1941 drop at Darwin.”
Apart from “MIDDLE POINT” http://www.bom.gov.au/climate/averages/tables/cw_014090.shtml
and Darwin Post Office http://www.bom.gov.au/climate/averages/tables/cw_014016.shtml
and CAPE DON http://www.bom.gov.au/climate/averages/tables/cw_014008.shtml to name but 3.
Ho hum.

December 8, 2009 4:33 am

“Unless something like this is done, we’ll see the Met spending three years using the same code and coming up with almost exactly the same dataset, and we’ll have lost.”
Agreed, more importantly, it will also be the science that is lost and we will be plunged into a new dark age.
No matter how many times that the Met office run the same data through the same code, they will get the same result. That is why the raw data and the code need to be independently analysed.

wobble
December 8, 2009 4:34 am

Mike in London (03:50:58) :
“” just as the MSM will look back on it as their time of greatest shame.””
I think it’s time to tell the MSM and other scientists that it’s time for them to get on the right side of this now.
Climategate has given them the excuse to claim that they were duped, but if they don’t switch sides now then they are part of the duping and will be held responsible for their shame.

Arthur Glass
December 8, 2009 4:35 am

” This has just got to get publicity in the msm somehow to spark an investigation and rework of the temperature record.”
There is slim hope that your average journalista or talking head has the attention span to follow such a beautifully crafted and lucid argument.
Want to try it on Boxer and Waxman?

KeithGuy
December 8, 2009 4:39 am

So the Australian raw data shows no warming and we already know that the USA was warmer in the 1930s. If this continues we’ll be left with one thermometer, maintained by a peasant farmer somewhere in Siberia, that shows warming, and the whole of the 20th Century temperature reconstructions will be based on this…
… but haven’t I heard that one already? Remember Yamal?

Hugh
December 8, 2009 4:42 am

A very clear, well written article. Congratulations and keep up the good work! But still this whole thing is driving me crazy. When will the MSM wake up?
Hugh

maz2
December 8, 2009 4:44 am

Gore’s immolation vs Polar bear convicts.
…-
“*The cold snap is expected to continue all week with daytime highs ranging from -12 C on Thursday to a bitterly cold weekend that will see the high drop to -25 C. The normal high for this time of year is -6 C.”
…-
“UEA asks for local support over ‘Climategate’
Bosses at the University of East Anglia insisted last night the under- fire institution could ride out the storm of the climategate scandal – but called on the support of people in Norfolk and Norwich to help them through the most damaging row in its history.”
http://www.freerepublic.com/focus/f-news/2402716/posts
…-
“*Alberta deep freeze saps power
Record high energy usage in province as mercury plummets”
http://cnews.canoe.ca/CNEWS/Canada/2009/12/08/12077286-sun.html
…-
“Hudson Bay jail upgraded for wayward polar bears
Cnews ^ | December 8, 2009 | Canadian Press
WINNIPEG — Manitoba is spending more money to upgrade a polar bear jail in Churchill. Conservation Minister Bill Blaikie says the province is spending $105,000 to improve the jail’s walls and main entrance.
The compound is used to house wayward polar bears that get too close to the town or return to the community after being scared away.”

JP
December 8, 2009 4:46 am

This subject was covered in a CA thread some years ago. I believe it came up when someone discovered the TOBS adjustment that NOAA began using. The TOBS adjusted the 1930s down, but the 1990s up. Someone calculated that the TOBS accounted for 25-30% of the rise in global temps.
The question about adjusting local station data to adjacent data also came up, especially concerning grid cells. If San Fran and Palm Beach are in the same grid cell, how does one extrapolate and average? One station is affected by maritime polar air masses; the other continental tropical. If the environment is not homogeneous, how can one extrapolate at all? One would be mixing apples and oranges. In California this wouldn’t be much of a problem (there are plenty of adjacent stations in proximity to San Fran and Palm Beach); however, in places like South America (where Steve McIntyre found GISS only uses 6 reporting stations), or Africa, this problem is very real. If one must apply such drastic adjustments to the raw data, why even use raw data at all? Why not just say “this is what is really going on – the weather observers these last 6 decades were either drunk or incompetent.”
The answer to all of this is simple. Rely on RSS/UAH data. Yes, the records go back only to 1979, and there are geographical limits. But the idea that we can find a global climate signal through thermometer records, and that we can measure that signal to the tenth or hundredth of a degree, is absurd. Thermometers only measure microsite data at a single point in time. Supposedly the thermometers are measuring ambient air temps (which they do not; I don’t think sling psychrometers are used anymore). And supposedly the temperatures are measured over green grass, away from the shade, and away from things like patios, parking lots, and buildings.
Surface temps traditionally have the single purpose of assisting weather forecasters in making mesoscale and macroscale forecasts. They can provide general trends in tracking broad climate changes. Surface temps do not have the precision that people like Jones, Hansen et al. say they do. If they did, how come the data must be sliced, diced, and obliterated by our climate experts?

Editor
December 8, 2009 4:47 am

When you say “throw away all of the inconveniently colder data prior to 1941”, do you mean “warmer”?

Chris Fountain
December 8, 2009 4:47 am

I just have a question about the dates covered in this analysis. How was it that there was a thermometer at the Darwin Airport about 20 years before the Wright brothers’ flight? Were we Australians so prescient that we built one in anticipation?

David Holliday
December 8, 2009 4:48 am

It amazes me that this is still being debated. The warmist argument has already been disproven by the recent climate behavior of the Earth. CO2 has risen and temperature has not. How much simpler can it get?
Also, I don’t know why studies like these, which show CO2 levels were 4 to 5 times higher during the Medieval Warm Period than they are now, don’t get more press. It’s obviously not possible to attribute increased CO2 levels in that time period to human activity. And equally obviously, the temperature eventually came down, so no runaway warming was caused by CO2.
The correlation issue is the Achilles heel of the warmist argument. Regardless of the current temperature, the correlation showing CO2 as a driver of warming doesn’t hold up.
On an aside, I keep hearing the argument that 8 of the last 10 years are the warmest on record, but I also remember an article which said that NASA had revised their data to show 1933 as the warmest on record. Considering just the last 100 years, what are the warmest years on record?

Robin Kool
December 8, 2009 4:48 am

A bit OT:
Climategate is convincing for anyone who follows it – anyone who reads the articles here.
But when I explain it to friends, they keep coming back to one thought that makes it hard for them to get their minds around it: “How could the scientific community let this happen?”
They know that many politicians are ignorant and corruptible, and that many activists are ignorant and extreme. And they know that most people are ignorant and naive.
But why didn’t the scientists speak up?
And I still find that hard to explain.
I tell my friends that these scientists at CRU and NASA publish neither the raw measurements nor how they adjust them. And they are shocked and ask: “How can that be true? The whole scientific community would demand to see them.”
I tell my friends that the paleoclimatologists who come up with the hockey sticks, on which the whole case for the uniqueness of the warming in the last decades of the 20th century rests, have hijacked the peer review process.
They ask me why the scientists who were pushed out didn’t protest and, if they did, why the scientific community didn’t stand up and put things right.
I tell them that many scientific organizations are controlled by small groups of activists who claim there is scientific consensus over catastrophic AGW. And again they ask me how that could happen. Why don’t the thousands of scientists who are members get rid of them?
I think that if we want the public to understand Climategate, we need to be able to answer these questions satisfactorily.

imapopulist
December 8, 2009 4:49 am

Are there any Patrick Henrys left in America?

KevinUK
December 8, 2009 4:49 am

Ken Hall (00:33:54) :
“This is exactly the kind of public, open examination of the raw and adjusted data that needs to be done for ALL stations globally to establish, once and for all, IF the entire earth is warming un-naturally. (and I have yet to see any definitive proof that the current warming is un-natural).”
Ken,
Watch this space!! Someone 🙂 is very close to doing exactly that!
Next step after that: what happens if we apply some different, far more scientifically justifiable (so, realistic) alternative homogeneity adjustments? Does the blade of the ‘hockey stick’ go away?
If so, what on earth are all those poor GCMs going to use when they are ‘spun up’ with gridded datasets that no longer have a pre-determined warming trend in them? Will the ‘flux adjustments’ have to make a re-appearance in the AOGCMs?
What will poor Tom Wigley and Sarah Raper do when MAGICC doesn’t have any ‘unprecedented warming’ model outputs to fit itself to?
KevinUK

ventana
December 8, 2009 4:50 am

“Inconveniently cooling data prior to 1941”
Shouldn’t that be warming?

December 8, 2009 4:50 am

Willis Eschenbach, is this the only site you examined? Or did you examine many before you found one that appears was blatantly rigged?
I just wonder because, of all the thousands of sites available, it would seem unlikely that the first one examined in this detail would be a ‘rigged’ one, IF the record was generally sound. If the record was generally “fixed around the theory”, then most, if not all, of them will be dodgy.
If this is the only one you examined, then you have a 100% record of dodgy data manipulation for every site examined.

December 8, 2009 4:50 am

Lovely to see that the figure in the IPCC Fourth Assessment Report starts in 1900, conveniently just after the previous warm period data finishes.

DocMartyn
December 8, 2009 4:52 am

I had a look at Alice Springs, lovely dataset, daily records with only a few days blank; flat as a pancake. Projection between 0.3-0.6 degrees per decade.

rukidding
December 8, 2009 4:55 am

For those that are interested here is the temperature graphs for Darwin from the 1880 to today.
As has been mentioned elsewhere, Darwin was bombed in Feb 1942, which destroyed the post office, so my guess would be that that was when record keeping moved to the airport, where it would appear it is today.
First graph 1880-1942
http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_display_type=dataGraph&p_stn_num=14016&p_nccObsCode=36&p_month=13
Second graph 1942-2009
http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_display_type=dataGraph&p_stn_num=14015&p_nccObsCode=36&p_month=13
Hope the linking works

ShowsOn
December 8, 2009 4:57 am

You haven’t explained why you don’t think the homogenised figures are accurate. Also, if you use a completely different data set from the Australian Bureau of Meteorology, a warming trend at Darwin since the late 1970s is clearly evident:
http://www.bom.gov.au/cgi-bin/climate/hqsites/site_data.cgi?variable=meanT&area=nt&station=014015&dtype=anom&period=annual&ave_yr=T

Knut Witberg
December 8, 2009 5:01 am

The scale on the left side is not the same as on the right side. Error?

Gregory
December 8, 2009 5:05 am

The Darwin Zero series shown above has large and relentless upward adjustments starting in 1930. But those are today’s adjustments, as they stand according to current practice. The entire station history could be readjusted some entirely different way if, for example, a historic fact like a previously forgotten station move were discovered tomorrow.
Anybody who has been awake recently knows that global cooling was the big scare in the 1970s, although that one didn’t catch on with the public nearly so well. Climate scientists then, as now, had to be homogenizing station data for all the reasons listed in this article. It would be fascinating to see if their adjustments then tended to confirm the cooling they expected to see, just as their adjustments now do the opposite. Is the GHCN data stored in such a way that one could see what the homogenized Darwin Zero series looked like in 1975, with the adjustments as they stood at that time?

Chad Woodburn
December 8, 2009 5:07 am

“But the data is still good.” How do they know since CRU kept the raw data secret all this time? Every scientist who says, “but the data is still good” has to either have seen the data and evaluated it, or they have to have complete faith in the data-keepers. But we know they have NOT seen or evaluated the data that CRU has kept hidden. And as for faith in Jones et al.? The emails show that only gullible people could continue to have faith in them.
An answer like “But the data is still good” is pure propaganda; it is not even close to being science.

Turning Tide
December 8, 2009 5:11 am

A proposal
Would it be possible for the knowledgeable people here to publish a “recipe” telling ordinary folk how to do this sort of analysis, comparing the raw GHCN data with the “cooked” version?
I’m sure there’d be enough volunteers lurking on the blogs who would be willing to carry out this analysis, then we’d be able to provide a definitive answer to the point that Willis makes at the end of this article: “This may be an isolated incident, we don’t know.”
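For what it’s worth, the core of such a recipe is only a few lines once the raw and homogenized annual means for a station have been extracted. A minimal Python sketch (the GHCN file parsing is omitted, and the numbers below are invented for illustration, not real Darwin data):

```python
def trend_per_century(years, temps):
    """Ordinary least-squares linear trend, returned in degrees C per century."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(temps) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, temps))
             / sum((x - mean_x) ** 2 for x in years))  # degrees per year
    return slope * 100.0

# Toy annual means for one station (invented numbers):
years = list(range(1900, 1910))
raw = [29.0, 28.9, 29.1, 28.8, 29.0, 28.9, 29.0, 28.8, 28.9, 29.0]
adjusted = [28.0, 28.1, 28.3, 28.4, 28.6, 28.7, 28.9, 29.0, 29.2, 29.3]

# The difference between the two trends is the trend added by homogenization.
print("raw trend (C/century):     ", round(trend_per_century(years, raw), 2))
print("adjusted trend (C/century):", round(trend_per_century(years, adjusted), 2))
```

Run over every station in a country, the same comparison would yield a map of where, and by how much, the adjustments change the trend.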

RC Saumarez
December 8, 2009 5:13 am

I just looked at the example the Met Office has released. The provenance is the CRU!
I’m not surprised the Met Office is going to do a 3 year re-investigation of raw temperature records. They’ve been taken for a ride as well as the rest of us.

December 8, 2009 5:14 am

I see Richard Black is back in the groove
http://news.bbc.co.uk/1/hi/sci/tech/8400905.stm
This decade ‘warmest on record’ …

kdk33
December 8, 2009 5:15 am

This has probably been asked…
Does anyone foresee an academic (or three or four) reviewing the raw data – objectively! – or will this be left to volunteers working pro bono? (Or maybe Willis gets paid?)
Point is, is there any way to organize activities and crank through the data a bit faster? (I recognize you don’t find people who can do this loitering at the TEXACO.)
Just a thought.

KeithGuy
December 8, 2009 5:16 am

Knut Witberg (05:01:07) :
“The scale on the left side is not the same as on the right side. Error?”
I think the scale on the right refers to the adjustments.

rukidding
December 8, 2009 5:22 am

Very interesting ShowsOn that graph shows temperatures for times before 1942 when the temperature was being recorded at station 14016 the post office.
So how do you reconcile your graph, which shows a steady rise for station 14015, with the graph I linked above, which shows a reasonably flat record over the same period, when they both come from the BOM?

ThinkingScientist
December 8, 2009 5:24 am

The shape of the difference function you are getting in figure 8 looks remarkably similar to the artificial fudge factor in the briffa_Sep98_d.pro file from the released documents. The artificial correction has puzzled me because of the swing down before it starts to ramp up. I cannot see why you would apply a function of this shape at all.

jgm2
December 8, 2009 5:25 am

The black line as presented in the IPCC report has been deliberately started at a low (1910) in the actual raw data, as opposed to the real start of the data set which, from Figure 2, appears to be 1880. So they’ve deliberately left out one whole degree centigrade of cooling from 1880 to 1910, just so they (IPCC) can print a graph showing a 0.5 C rise from 1910 to 2000 and a ‘shock’ 1 C increase from 1950-2000.
Oh come on chaps.
And I’m new to this so can somebody explain what are the blue and red overlays on the Fig 9.12 in the IPCC data? Maximum and minimum of all the data sets (3?)(30?) in the input? Maximum and minimum temperature predictions from various models?
In either case why is the actual black line outside the shaded zone for a few years either side of 1950?

rukidding
December 8, 2009 5:33 am

This may be of some interest: it is a temperature graph for a small place called Menindee, which is in the far west of New South Wales (Australia) and, I would think, would not suffer from any UHI effect.
http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_display_type=dataGraph&p_stn_num=47019&p_nccObsCode=36&p_month=13
Notice a reasonably flat graph. And no maximum temp for 1998; in fact, if you check every capital city airport temperature in Australia for 1998 (the hottest year ever), only Darwin recorded a record maximum.
So how come Australia missed out? Or is it just that, because we are down under, we are forgotten about?

December 8, 2009 5:34 am

5:29 a.m. PST … The temp right here, right now, in Susanville in northern CA is -10F or -23C, yet the national temp map with isotherms and the pretty blue colors that is regularly updated is showing us toasty warm at between +20F and +25F. On the way in to work this morning, my driver’s pick-up registered -14F. What the???

KeithGuy
December 8, 2009 5:35 am

KeithGuy (05:16:08) :
Knut Witberg (05:01:07) :
“The scale on the left side is not the same as on the right side. Error?”
I think the scale on the right refers to the adjustments.
I see what you mean (fig 7 and 8). It does give an exaggerated impression of the adjustments.
I now see that the BBC are making a big play out of the recently released data showing that this year is the 5th hottest on record and that the last decade is the hottest ever.
I don’t doubt it, but what’s important is that the trend now shows cooling.

imapopulist
December 8, 2009 5:37 am

This is truly a smoking gun. There is no amount of rationalization that can be used to justify the changes made to the raw data sets. Anyone who is objective will be able to see this.
Now someone needs to take this excellent work and summarize it to a point where the average person can easily discern how temperature data is being manipulated.
I always suspected the most manipulation would take place in the remote corners of the Earth, where unscrupulous scientists thought they could get away with it.

Phil A
December 8, 2009 5:37 am

So … how many temperature stations will people have to do this for before they admit just how large the problem is, and just how badly they have betrayed science? Not to mention betraying all the real scientists who have used this kind of adjusted data in good faith and will soon realise just how many years of their lives have been wasted in analysing meaningless numbers.

Olen
December 8, 2009 5:39 am

Global warming looks like a fraud when data are misused by scientists, and politicians want to use it for massive tax and control.

Editor
December 8, 2009 5:47 am

Thanks for the compliments. Also, I greatly appreciate the questions. This is enjoyable research, and I strive to present it clearly; it brings me great satisfaction. I will answer these as time permits.
w.

J.Hansford
December 8, 2009 5:47 am

Speaking of GISS and James Hansen… Here is an interview with him on the Australian Broadcasting Corporation (ABC)… The program is Lateline and the interviewer is Tony Jones (AGW cheerleader and rabid leftist).
http://www.abc.net.au/lateline/content/2008/s2764523.htm

December 8, 2009 5:54 am

Just a note – Steve Mc on CA is pointing out that he’s had several old posts along the same lines.
Perhaps a bit of blog archeology would be useful at this stage to see if the data massaging is the same?

Ripper
December 8, 2009 5:54 am

Yep, ShowsOn is quoting the new climate site, which has been adjusted.
Marble Bar, Kalgoorlie, Meekatharra, and Southern Cross have all lost 1.5 degrees C in the 1920-1940 period, resulting in most of Australia’s warming being in-filled over 1/3 of the most sparsely populated area of the continent.
These are supposedly the same
http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_display_type=dataGraph&p_stn_num=012074&p_nccObsCode=36&p_month=13
http://reg.bom.gov.au/cgi-bin/climate/hqsites/site_data.cgi?variable=maxT&area=wa&station=012074&period=annual

Editor
December 8, 2009 5:55 am

TheSkyIsFalling (02:44:02), you raise a good issue:

The lack of transparency is the problem. The adjustments should be completely disclosed for all stations, including the reasons for those adjustments. You have to be careful drawing conclusions without knowing why the adjustments were made. It certainly looks suspicious. In Torok, S. and Nicholls, N., 1996, An historical temperature record for Australia, Aust. Met. Mag. 45, 251-260 (which I think was the first paper developing a “High Quality” record; not sure that is how I would personally describe it given the Australian data and station history, but moving along…), one example of the adjustments to the 224 stations used in that paper is given, for Mildura. The adjustments and reasons (see p. 257):
<1989 -0.6 Move to higher, clearer ground
<1946 -0.9 Move from Post Office to Airport
<1939 +0.4 New screen
<1930 +0.3 Move from park to Post Office
1943 +1.0 Pile of dirt near screen during construction of air-raid shelter
1903 +1.5 Temporary site one mile east
1902 -1.0 Problems with shelter
1901 -0.5 Problems with shelter
1900 -0.5 Problems with shelter
1892 +1.0 Temporary site
1890 -1.0 Detect

My point is that without the detailed station metadata it might be too early to draw a conclusion. This is why we need to know what adjustments were made to each station, and the reasons for them. Surely this data exists (if it doesn’t, then the entire adjusted data series is useless, as it can’t be scrutinised by other scientists – maybe they did a CRU with it!?), and if it does, why is it not made public, or at the very least made available to researchers? Have the data keepers been asked for this? I am assuming they have.

While there are valid reasons for adjustments as you point out, adjusting a station from a 0.7C per century cooling to a 6 C per century warming is a complete and utter fabrication. We don’t have the whole story yet, but we do know that Darwin hasn’t warmed at 6 C per century. My only conclusion is that someone had their thumb on the scales. It’s not too early for that conclusion … it’s too late.
w.
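As a footnote on how a list like the Mildura one operates: my reading (an assumption on my part, not taken from the paper) is that each “&lt;year delta” entry shifts every reading before that year, so the corrections accumulate as you go back in time. A quick Python sketch with invented numbers:

```python
def apply_step_adjustments(series, adjustments):
    """series: dict of year -> annual mean temp (deg C).
    adjustments: (cutoff_year, delta) pairs; each delta is added to every
    value recorded before the cutoff (the '<1989 -0.6' style of entry).
    Returns a new dict; the raw series is left untouched."""
    adjusted = dict(series)
    for cutoff, delta in adjustments:
        for year in adjusted:
            if year < cutoff:
                adjusted[year] += delta
    return adjusted

# Two step changes of the Mildura kind: a -0.6 move before 1989,
# and a -0.9 Post Office to Airport move before 1946.
raw = {1940: 24.0, 1950: 24.0, 1990: 24.0}
adj = apply_step_adjustments(raw, [(1989, -0.6), (1946, -0.9)])
# 1940 collects both deltas, 1950 only the first, 1990 neither.
print({year: round(temp, 1) for year, temp in adj.items()})
```

Note how even individually defensible corrections compound: the earliest part of this toy record ends up 1.5 C below the raw readings, which is exactly why the full list of adjustments and reasons matters.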

wws
December 8, 2009 5:55 am

Robin Kool wrote:
“I tell that many scientific organizations are controlled by small groups of activists who support claim there is scientific consensus over catastrophic AGW. And again they ask me how that could happen. Why don’t the thousands of scientists who are members get rid of them?
I think that if we want the public to understand Climategate, we need to be able to answer these questions satisfactorily.”
A small group of activists have always been able to take control of any situation where the majority is apathetic and splintered – study the Bolsheviks in 1917, who took over a country even though they had barely 10% support. And in these organizations you question, even though many scientists are members, they are controlled by only a handful of people at the top. Once an organization is corrupted (as the APC is currently) the only alternative is probably for those who disagree to quit and start their own organization – and this could take years if not decades before it achieves equal recognition.
Furthermore, the climatologists weren’t alone – they had government and the media on their side, the two most powerful weapons that scientists are afraid of. Annoy government and lose all your funding; annoy the media, get a negative story, and lose your chance at tenure and a career. That trifecta of power was unassailable. Then add in the fact that most scientists are specialists, not generalists, and thus as long as the controversy was outside their own little spheres of speciality, they felt they could ignore it. How much trouble could it cause for them, after all?
A lot more than they thought it could, it’s turning out.

December 8, 2009 5:56 am

Outstanding work, clearly explained and illustrated! I can only imagine the self-delusion and groupthink that went into all these “adjustments” at the GHCN, all earnestly applied in service to science. Those early warming revelations in the 1990’s were heady times, when almost any researcher could derive important new observations of the data. I’m sure it all seemed so very right.
Like phrenology.
But upon inspection by a disinterested outsider who isn’t caught up in the machine, it doesn’t even pass the sniff test.
Our culture has only begun to wake from this delusion, with many still falling more tightly into its grip. Not even George Orwell could have anticipated the EPA’s move to regulate carbon dioxide as a pollutant. It will take literally thousands of revelations like this one to reverse the tide.

boballab
December 8, 2009 5:56 am

Went to the Met site, and they admit in their FAQ sheet that it isn’t 100% Grade A raw data. Then they try to spin why it isn’t the raw data (the dog ate it in the 1980s), but never fear, we know it’s good because it’s from CRU, GISS and NCDC and “peer reviewed”!
My god, that’s like hauling out a counterfeit $20 bill to prove that you’re not a counterfeiter.
Sorry, Met Office: you need to show 100% raw data, with no adjustments, no peer review, and no more appeals to authority. You also need to publish the code used to make your adjustments. With what you have published you have not shown there is nothing there; in fact, you showed the opposite when you admitted the data is gone. No data to back up your claim? Then it gets trashed. I give the Met Office a B for effort, a C for propaganda effect (since most people will not look at, nor understand, what the FAQ says), and a big fat F for proving there is man-made global warming. At best, with good code and the raw data, you could have proved warming, but not causation by man.

Editor
December 8, 2009 6:01 am

David Archibald (02:46:51) :

Willis,
Please email me your last graph as I would like to use it in public lectures, with attribution. A low resolution one would look shoddy. david.archibald [at westnet.com.au
With thanks

It’s just a screenshot from AIS, as referenced above. Select by Country, Australia, unadjusted, average stations, smoothed, plot anomalies, plot all stations.
If you still need it let me know. I cleaned up your internet address in my reply, spambots, don’cha know …
w.

ozspeaksup
December 8, 2009 6:02 am

Willis, a HUUUGE hug! Last week I found a BOM page showing 3 graphs of OLD datasets, and they were remarkably level over a long timeframe, 20s onwards. Yet when I went back to my history I find it’s not there… well, not the same page, but…
I did get a page with data and some files my PC cannot translate.
I posted the link elsewhere today to ask for help, so I am going to add the link and let you see what you can make of it all.
BOM advises they are updating and removing…gee how very convenient!
ftp://ftp2.bom.gov.au/anon/home/bmrc/perm/climate/temperature/annual/
the charts I saw before had Kalgoorlie in WA listed and it had gotten cooler from a very high time in 1930 /1.. and Longreach in Qld was another.
Again, thanks heaps for your effort. I will be sharing this page around Aus and o/s!
PS: the missing info in the ’30s/’40s would be the Depression years and wartime.

Editor
December 8, 2009 6:05 am

Someone wrote :

They want to implement a tax so they can collect money from the rich polluters …

Please, dear friends, can we keep politics and taxes and economics on another thread? Your comments are fascinating, but this is a science blog.
Thanks for your cooperation,
w.

December 8, 2009 6:07 am

I added some more to your excellent analysis
http://strata-sphere.com/blog/index.php/archives/11787
It is quite clear why GISS needs to fudge the data – as I explain. Anthony should put out a challenge for people to do more of this on the GISS data, using the same formats, etc.

Editor
December 8, 2009 6:09 am

Stacey (01:57:20) :

Dear Willis

I know you looked at the CET and would appreciate a link, which I have lost?

Sure, it’s here. I’d forgotten I’d written that.

mathman
December 8, 2009 6:10 am

Now we know.
I was certain there had to be a good reason for losing the raw data, as has been claimed by the Jones group.
The good reason for having no raw data to use to either validate or invalidate the various IPCC reports is in the graphs shown in this thread.
The so-called homogenization is in fact blatant fraud.
One starts off with the conclusion: AGW must be “scientifically” proved in order to implement worldwide Carbon taxes. Such AGW is not found in the raw data. Darwin Station 0 is an instance of such raw data. How does one solve such an evident problem? One manipulates the raw data in order to arrive at the pre-determined conclusion.
This is an instance of one picture being worth a thousand words. Alas, my browser does not allow me to superpose the various graphs. Would it be possible to present all of the graphs to a uniform scale, so that superposition would allow a better compare/contrast?
This could best be done by the author, with the use of the original tables of information. The alternative is for the author to provide us with the tables of data used to generate the graphs, for us to use with our own graphing programs.

December 8, 2009 6:14 am

I’m no climate scientist, but the ones making the news lately aren’t either. The homogenized data, skimpy tree ring data, and stopping the release of raw data were just a part of their bad science. They are simply criminals who should be prosecuted.
I’m no climate scientist, but I did spend last night reading WUWT.

Oslo
December 8, 2009 6:15 am

Great work!
Seems we are slowly getting there.
And your own line sums it up nicely: “when those guys “adjust”, they don’t mess around”!

John S
December 8, 2009 6:18 am

I never knew that ACORN was in the climate temperature business.

infimum
December 8, 2009 6:23 am

Akasofu’s name is spelled wrong.
[Thanks, fixed. ~dbs, mod.]

Bunyip
December 8, 2009 6:24 am

Give us credit for honesty down here in the Great South Land. When our blokes ginger up the numbers they explain how they do it — peer-reviewed, of course.
Check this out, for example; the liberties they took make my brain ache.
http://www.giub.unibe.ch/~dmarta/publications.dir/Della-Marta2004.pdf
This little beauty of deduction explains all the lurks while also suggesting that the Australian records contain even more egregious examples of data goosing. In addition to the six degrees of difference (latitude and longitude) and 300 metres (plus or minus) elevation deemed acceptable in the selection of “appropriate” substitute stations, the authors explain that they sometimes go even further. Quote:
“…these limits were relaxed to include stations within eight degrees of latitude and longitude and 500 metre altitude difference…” (bottom of pg. 77)
Oy!
I do wish someone smarter than I could take a good, hard look at the above document.

Andrew
December 8, 2009 6:29 am

” Your comments are fascinating, but this is a science blog.”
If the topic is AGW related, it has very little to do with science. If we keep to your terms, we wouldn’t be able to talk about any of these issues. No science here as far as I can tell.

December 8, 2009 6:30 am

“(Interestingly, before 1984, Orwell’s 1984 was required reading for all year 11/12(?) students in Victoria… now I can’t find anybody under the age of 40 who has read it)”
Even though I am 40, I was never required to read it, although I did read it for the first time in 2004 and it scared the [snip] out of me too!
I too have found few people who have read it. In fact I do not know of a single ‘X-factor, celebrity dance, jungle got talent on ice’ reality TV viewer that has read it. I wonder if there is a correlation there? Hmmmmmmmmmmmm.

Anna
December 8, 2009 6:32 am

Thanks Willis, keep up the good work!
I made a graph of the annual mean temperatures of Sweden, just like Wibjorn Karlen did. The result sure doesn’t look anything like the graph for the Nordic area in the IPCC report!
Try it yourselves : http://www.smhi.se/hfa_coord/nordklim/index.php?page=dataset
This needs to be done for all raw data there is, and where there are strange differences we need to demand a reasonable explanation!

December 8, 2009 6:32 am

Willis,
“While there are valid reasons for adjustments as you point out, adjusting a station from a 0.7C per century cooling to a 6 C per century warming is a complete and utter fabrication. We don’t have the whole story yet, but we do know that Darwin hasn’t warmed at 6 C per century. My only conclusion is that someone had their thumb on the scales. It’s not too early for that conclusion … it’s too late.”
Apologies, I read the per-century trend as I thought I saw it: +0.6/100 yrs rather than +6/100 yrs! I am in total agreement with you. I also realise you are more than well aware of everything I posted, but I thought I would throw it in anyway as information for others, in case the actual nature of the changes was of general interest to those who (like myself until recently) were totally unaware of this fiddling with the data. I am actually outraged over this and love the work you did/are doing. Can’t wait to see more!

Paul
December 8, 2009 6:41 am

Perhaps there is nothing dishonest or silly here, but when you won’t release data or method details, what are people expected to think? Until the raw data and the method of “correcting” it are made fully public, as scientific method requires, the correct thing to do from a methods standpoint is to treat this data as junk. It can’t be reproduced, so it isn’t science.

Pamela Gray
December 8, 2009 6:42 am

My Democratic representatives aren’t smart enough to read this stuff for themselves (along with one or two Repubs believe it or not). Every time I have sent a letter I get back a nearly identical talking points response from the lot of them. I never thought I would ever be reduced to just wanting to throw them all out, or even not vote at all. This is truly making my left leaning, registered Democrat, AND patriotic Irish blood boil!

Basil
Editor
December 8, 2009 6:44 am

w,
Fascinating bit of work. Maybe you could comment, either here, or in an update to the post above, on the following, taken from the Easterling and Petersen paper you quote from:
A great deal of effort went into the homogeneity adjustments. Yet the effects of the homogeneity adjustments on global average temperature trends are minor (Easterling and Peterson 1995b).
Do they ever put a figure on to just how “minor” this effect is on the global average temperature trends? Are they referring to this?
http://www.ncdc.noaa.gov/img/climate/research/ushcn/ts.ushcn_anom25_diffs_urb-raw_pg.gif
Or are they referring to something else?
However, on scales of half a continent or smaller, the homogeneity adjustments can have an impact. On an individual time series, the effects of the adjustments can be enormous.

Duh. I think you’ve demonstrated that very well.
These adjustments are the best we could do given the paucity of historical station history metadata on a global scale.

Well, maybe we need a global average that is without the adjustments.
But using an approach based on a reference series created from surrounding stations means that the adjusted station’s data is more indicative of regional climate change and less representative of local microclimatic change than an individual station not needing adjustments.

Important admission, and qualification.
Therefore, the best use for homogeneity-adjusted data is regional analyses of long-term climate trends (Easterling et al. 1996b). Though the homogeneity-adjusted data are more reliable for long-term trend analysis, the original data are also available in GHCN and may be preferred for most other uses given the higher density of the network.

I’m not persuaded about the usefulness, even for regional analysis. I think any use must look at before-and-after comparisons, like you’ve done here, before assuming anything about the usefulness of the adjustments.

December 8, 2009 6:46 am

“Isn’t this similar to producing a marked-up copy of the dead-sea scrolls, with the corrections in felt-tipped pen, and missing bits added-in in biro, and then calling it ‘the definitive data-set’ ?”
This is a perfect analogy.
There is NO WAY that one can move a weather station 20 miles from a valley, or shore location, to the side of a mountain and then continue to validate the temperature record just by making an arbitrary correction. The entire weather patterns between those locations will be entirely different and the temperature record will not follow the same pattern, but at a different set average temperature.
The fact is that the temperature record is sooooo messed up that there is no way to determine a constant increase in temperature from the raw data, so it appears that they have made the data fit the science and hidden the fraud in the way they use the ‘necessary’ adjustments. All the people involved in the fraud agree with the outcome; they are all insiders in the man-made climate change religion, and so they all peer-review each other’s data and methodology and sign it off as sound. After all, they all get the same amount of warming. It is entirely conclusion-led science, AKA propaganda!
Another analogy is that the climatologists are saying: OK, the car has a scratch on the door and a dent in the boot, and the tyres may be a little bald, depending on how you define bald, but the car is basically still sound and entirely roadworthy. We are saying, SHOW ME THE ENGINE! We got a glimpse under the bonnet and did not see one.
This article is someone sneaking under the car to get a peek into the engine compartment and seeing no engine there yet they still want to force us to buy the car!

Kristian
December 8, 2009 6:49 am

Before Climategate, I thought the science was dubious and the aim political but the temperature data correct, at least for the last hundred years or so.
Right now I realise I was being too naive; the data seems to be flawed just like the rest of the “science”!

Charles. U. Farley
December 8, 2009 6:49 am

Source file= Jones+Anders
Jones data to= 1998
Says it all. More BS than a herd of buffalo.

Zhorx
December 8, 2009 6:50 am

It’s eerie just how similar the graphs of the adjustments are to the “fudge factor”; compare for yourself:
http://wattsupwiththat.com/2009/12/05/the-smoking-code-part-2/

December 8, 2009 6:52 am

ozspeakup – that data set on the ftp site at BOM is, I think, the old Torok data set I referred to in an earlier comment. Check out the method.txt file if it is there and do a search for the paper mentioned online and you’ll find what you seek.

TerryBixler
December 8, 2009 6:52 am

Thanks Willis and Anthony.
It was a very depressing day yesterday, thinking about Pearl and the EPA. I can only hope that truth will win out; it has in the past, and it may in the future. (I even feel that hope and change are tainted words, so I did not use them.)

Editor
December 8, 2009 6:53 am

Campbell Oliver (04:12:34) :

I’m very interested in following the discussions here. But there’s one thing I don’t understand, and I know that this is going to sound really seriously dumb, but I do want to know. It’s this. When people talk about an “anomaly”, I understand that this means a deviation from some value (probably some average, or a single reference value). But it seems that this value is never mentioned. So, my question is this: is there some standard definition of the value upon which temperature (and other) anomalies are based? If so, what is it? If not, how do people know what the actual temperature for some point is, given the value for the anomaly at that point?
Many thanks for any pointers to some 101 (or even a kid’s pre-101).
PS – I’ve tried googling “temperature anomaly definition” etc., with no luck.

The only dumb questions are the ones you don’t ask. An anomaly can be taken around any point. Usually it is an average over a specified period, like 1961-1990. In this case it is an anomaly around the average of the dataset.
w.
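To make the reply concrete, here is a minimal sketch (all numbers invented for illustration) of how an anomaly is computed against a 1961–1990 baseline:

```python
# Hypothetical annual mean temperatures, in degrees C
temps = {1961: 14.1, 1975: 14.3, 1990: 14.5, 2005: 14.9}

# Baseline: the average over the reference period 1961-1990
baseline_vals = [v for y, v in temps.items() if 1961 <= y <= 1990]
baseline = sum(baseline_vals) / len(baseline_vals)

# Anomaly: each temperature minus that baseline
anomalies = {y: round(v - baseline, 2) for y, v in temps.items()}
print(anomalies)  # 2005 comes out about +0.6 above the baseline
```

The absolute temperatures drop out; only the departures from the chosen baseline are reported, which is why the baseline itself is rarely shown on the graphs.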

Pamela Gray
December 8, 2009 6:57 am

This thing about not having raw data anymore. I am confused about that. There is raw unadjusted station data that apparently can still be had by any Susie Q or Tommy T out there. Isn’t that the raw data? So who needs raw data from the Met? Correct me if I’m wrong, but the station survey is able to capture the raw data from each station and display it here. Isn’t that raw data? And easily obtained? What if we decided that our next challenge was to tabulate and average climate-zone station data (totally unadjusted) and run our own graphs here at WUWT? And I do mean by climate zone, so that we are not averaging together apples and oranges. Then let THEM argue about our methods.

Amabo
December 8, 2009 6:59 am

Turns out Pachauri doesn’t want to investigate the emails after all.

Steve Fitzpatrick
December 8, 2009 7:00 am

Very nice work Willis.
How much time did you devote to the efforts described in your post? Just wondering how much effort would be involved to do the same thing for a random selection of a hundred or so stations around the world.

Jeremy
December 8, 2009 7:00 am

More BBC propaganda. Husky dogs may not have employment and face a bleak future in a warmer world. This is really pathetic. Teams of husky dogs (which pull a sled) were replaced by motorized machines called snowmobiles (or, in Canada, Ski-Doos) over 50 years ago.
http://news.bbc.co.uk/2/hi/programmes/hardtalk/8399823.stm
What does it matter – tell enough lies, keep repeating them, and eventually people will believe them.
It is the same with polar bears – they are not in trouble or drowning at all (the population has actually increased five-fold since we restricted hunting).

Ryan Stephenson
December 8, 2009 7:00 am

So the Copenhagen summit has just heard that the last 9 years are the warmest on record by a long way.
You’d have trouble selling that one in the UK. 2003 experienced a particularly hot Summer due to the collision of two high pressure weather systems over Northern Europe. Even the warmists aren’t suggesting that was anything more than a weather event. But since 2003 Britain has had increasingly severe winters and overcast cool summers. Last winter we had snow so bad it brought the country to a standstill.
It is like listening to a bunch of ranting lunatics of the kind that carry boards with “The End is Nigh” on them.

December 8, 2009 7:04 am

Last week while updating my website (http://www.waclimate.net) with temperatures across Western Australia for November, I noticed something peculiar about August 2009 on the BoM website…
http://www.bom.gov.au/climate/dwo/IDCJDW0600.shtml
The mean min and max temps for August had all gone up by about half a degree C since first being posted by the BoM on Sep 1.
Below are the min and max temps for the 32 WA locations I monitor, with the BoM website data as recorded from Sep 1 to Nov 17 on the left, and the new figures on the BoM website since Nov 17 on the right …
August 2009 (mean min/max, old -> new)
Albany: 9.0 16.2 -> 9.4 16.6
Balladonia: 5.0 20.7 -> 5.5 21.1
Bridgetown: 5.7 15.7 -> 6.2 16.1
Broome: 14.6 29.2 -> 15.1 29.7
Bunbury: 8.2 16.7 -> 8.7 17.2
Busselton: 8.7 17.0 -> 9.2 17.4
Cape Leeuwin: 11.8 16.2 -> 12.2 16.6
Cape Naturaliste: 10.5 16.7 -> 11.0 17.1
Carnarvon: 11.4 23.2 -> 11.8 23.6
Derby: 15.0 32.7 -> 15.6 33.2
Donnybrook: 6.7 17.2 -> 7.2 17.6
Esperance: 8.3 17.7 -> 8.8 18.1
Eucla: 7.9 21.5 -> 8.4 21.9
Eyre: 4.3 21.6 -> 4.5 22.0
Geraldton: 9.5 20.0 -> 10.0 20.5
Halls Creek: 16.1 32.6 -> 16.6 33.0
Kalgoorlie: 6.8 20.3 -> 7.2 20.7
Katanning: 6.1 14.7 -> 6.5 15.1
Kellerberrin: 5.3 18.6 -> 5.6 18.9
Laverton: 7.5 22.4 -> 7.9 22.9
Marble Bar: 13.8 31.1 -> 14.3 31.5
Merredin: 6.1 17.7 -> 6.5 18.1
Mt Barker: 6.8 15.6 -> 7.0 15.8
Northam: 6.2 18.4 -> 6.6 18.7
Onslow: 13.8 27.7 -> 14.3 28.1
Perth: 8.8 18.5 -> 9.3 18.9
Rottnest Island: 12.4 17.3 -> 12.9 17.7
Southern Cross: 4.6 18.1 -> 5.0 18.6
Wandering: 5.3 16.1 -> 5.6 16.6
Wiluna: 7.5 24.8 -> 7.7 25.2
Wyndham: 18.3 34.0 -> 18.8 34.4
York: 5.6 17.9 -> 5.9 18.3
I’ve questioned the BoM on what happened and received this reply …
“Thanks for pointing this problem out to us. Yes, there was a bug in the Daily Weather Observations (DWO) on the web, when the updated version replaced the old one around mid November. The program rounded temperatures to the nearest degree, resulting in mean maximum/minimum temperature being higher. The bug has been fixed since and the means for August 2009 on the web are corrected.”
I’m still scratching my head, partly because the bug only affected August, not any other month including September or October. There’s been no change to the August data on the BoM website since I pointed out the problem and they’re still the higher temps.
So if anybody has been monitoring any Western Australia sites at all (or other states?) via the BoM website, be aware that your August 2009 temperature data may be wrong, depending upon whether you recorded it before or since Nov 17, and it’s not yet known what’s right and what’s wrong.
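The BoM reply above blames rounding to the nearest degree, though true round-to-nearest would not bias a mean upward; rounding each daily value *up* to the next whole degree before averaging, however, would produce roughly the +0.5 °C shift described. A small sketch with invented data illustrating that mechanism:

```python
import math

# Invented daily maxima recorded to tenths of a degree
daily_max = [16.1, 16.3, 16.7, 15.9, 16.4, 16.2]

# Correct monthly mean: average the raw observations
correct_mean = sum(daily_max) / len(daily_max)

# Hypothetical bug: each value rounded up to a whole degree before averaging
buggy_mean = sum(math.ceil(t) for t in daily_max) / len(daily_max)

print(round(correct_mean, 2))               # 16.27
print(round(buggy_mean - correct_mean, 2))  # shift of roughly +0.57
```

The exact nature of the BoM bug is unknown; the point is only that where rounding happens in the pipeline (before or after averaging) can move a monthly mean by around half a degree.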

December 8, 2009 7:13 am

Dang, these “adjustments” look like Michael Mann’s work and that is not a compliment.

MattN
December 8, 2009 7:14 am

F-word…

Leon Brozyna
December 8, 2009 7:17 am

A fine piece of detective work.
Now all that remains to be done is to translate the terms into language a very simple layman (like a journalist or politician) can understand. Let’s try this – the raw data are the real temperatures while the adjusted data are what some scientists think the recorded temperatures should be.
It’s no wonder all the public normally sees is the adjusted data. Don’t want to confuse people with too many facts. They might start acting smart, like asking embarrassing questions such as, “How many adjustments are compound adjustments?” {Adjustments on top of adjustments on top of adjustments and so on.}

Alan the Brit
December 8, 2009 7:20 am

Barry Foster (00:51:02) :
That should read December 1941; December 1942 was a whole year out, & our colonial cousins would not have needed any warning by then – they would have figured it out themselves from the carnage left by the Japanese carrier fleet!
That was a great post & it certainly looks like somebody has been telling porkies! I always understood that “homogenising” was what they did to milk to make it sterile for public consumption! Not far off the mark.

Gary Pearse
December 8, 2009 7:20 am

I see that the surfacestations project of Anthony’s has to be expanded to include what is done with the data, because it seems that if the readings aren’t to the AGWers’ liking they “homogenize” them, and if they are concerned that one might find the homogenization beyond reasonable, they are tempted to throw the raw data away! We had better get at this quickly.

UC
December 8, 2009 7:24 am

“http://www.metoffice.gov.uk/climatechange/science/monitoring/subsets.html. It doesn’t look very user-friendly.”
But /94/941200 is
Number= 941200
Name= DARWIN AIRPORT
Country= AUSTRALIA
non-adjusted I guess

December 8, 2009 7:25 am

Unbelievable. First, they put the thermometer at an airport. Then they skew the readings upwards. Then they merge all the skewed and biased readings by a highly suspicious algorithm into one sleazy mess called the global temperature dataset.
I am convinced that with unfudged SST data and high-quality station data, even far fewer stations than the thousand used for GISTEMP/HadCRUT, the global dataset would look like the Arctic record with less amplitude: the ’40s equal to present times.
http://www.junkscience.com/MSU_Temps/Arctic75Nan.html

December 8, 2009 7:26 am

Wow.
Thank you for all your hard work, Willis – for the clear explanations, the graphs, and the smoking gun.
I can’t believe these ‘adjustments’ – with steps like those one could prove anything at all.
And that’s ‘science’?
I must have been absent when they taught that in college …

Arthur Glass
December 8, 2009 7:26 am

In view of the (quoted) current (as of 10:00 AM EST) headline on Climate Depot, may this curmudgeon of a former English teacher remind everyone that ‘breath’ is a noun. (Proposed new motto for the EPA: ‘Every breath you take, I’ll be watching you.’) The verb is ‘breathe’.
There are, of course, phonemic changes in the pronunciation: in the verb, the dental fricative represented by the spelling /th/ is ‘voiced’, i.e. produced with vibration of the vocal cords. Since in English voiced consonants invite lengthening of preceding vowels, this is ‘long’ /ea/. All of this is in contrast with the unvoiced /th/ and ‘short’ /ea/ in the noun ‘breath’.
This variation in medial vowel sounds is called in linguistics ‘guna’, from the Sanskrit. It is a persisting characteristic of Indo-European languages, and often has grammatical significance, as in irregular verbs in English, e.g. ‘lead’ is present tense, ‘led’ is past. In languages with complex rules for producing verb stems, e.g classical Greek, guna is crucial.
Of course in the case of classical Greek, as well as of many modern languages, we have to deal with the phenomenon of ‘declension’, whereas in English we ‘hide the declension.’
Whew! That was a long way to go for a punchline!

Mike
December 8, 2009 7:29 am

Falsus in uno, falsus in omnibus. When in Rome, do as the Romans do 😉

Atomic Hairdryer
December 8, 2009 7:29 am

Ok, my chin hurts.
Once from my jaw dropping and hitting the desk on seeing the magnitude of the man-made global warming, then from looking at the severity of the Darwin bombing. Great article, and thanks also for adding the local history.

David
December 8, 2009 7:34 am

Darwin was attacked by the Japanese during WW2. So, accurate temperature recording may have been a lower priority during the early 1940’s.

Andy
December 8, 2009 7:41 am

Story in the BBC today.
“We’ve seen above average temperatures in most continents, and only in North America were there conditions that were cooler than average,” said WMO secretary-general Michel Jarraud.
It’s interesting how North America, with the most stations and technology, is the only one that shows cooling. It’s warming everywhere else. Naturally.
These people are shameless in their manipulation.
http://news.bbc.co.uk/2/hi/science/nature/8400905.stm

kwik
December 8, 2009 7:41 am

BLIMEY!!!!

geronimo
December 8, 2009 7:49 am

OT I know, but news just coming in at the Guardian has a leaked document of the proposals which is causing uproar at Copenhagen.
http://www.guardian.co.uk/

3x2
December 8, 2009 7:49 am

Having edited and graphed up a lot of N. European stations from v2.mean, I don’t think that what you have found in Darwin is in any way unique.
My current theory about the large differences between v2.mean and GISS (who knows with HadCRUT) is that they are not the result of malice, as many believe, but of the kind of bulk processing operations made easy by the computing power available.
Let me explain. In much the same way that individuals are lost in the bulk processing operations found in everyday activities (the “I am not a number!” type), the same can be said of individual stations when processing so many records. That is to say, what is being processed is lost; only the results are important.
Just to give a quick sense of the scale: v2.mean has some 596,000 entries. That is almost 600,000 years’ worth of annual records. mean_adj has some 422,000, so let’s round up a little and say the difference is 200,000 years. Each year has 12 points; that is 2,400,000 monthly means (let’s not go to daily max/min).
So in some way 2.4 million points have, by some means, disappeared. My point here is simply that it would take a very determined individual to hand-process 600,000 records down to 400,000, examining 7,200,000 data points along the way.
I have hand-edited about 160 “local” stations, and tedious doesn’t even begin to describe the experience.
So I’m left with my “warming as an artifact of bulk data operations” theory: bulk processes which attempt, very badly, to make sense of individual stations. Nobody wants to go back over the results and check what has happened to individual stations where the end result (global average) is within “expectations”. The code would only be checked where there was plainly “something wrong” with the results. The processing code would then be changed and the whole job re-run until “expectations” are met.
It is interesting to look at v2.mean Iceland and the same stations via GISS. They are very much the same and I believe that this may be because Iceland neatly escapes many of the adjustment processes that you identify in N. Australia.

JustPassing
December 8, 2009 7:50 am

Copenhagen climate summit in disarray after ‘Danish text’ leak
The UN Copenhagen climate talks are in disarray today after developing countries reacted furiously to leaked documents that show world leaders will next week be asked to sign an agreement that hands more power to rich countries and sidelines the UN’s role in all future climate change negotiations.
http://www.guardian.co.uk/environment/2009/dec/08/copenhagen-climate-summit-disarray-danish-text
Download it here:
http://www.guardian.co.uk/environment/2009/dec/08/copenhagen-climate-change

George B
December 8, 2009 7:52 am

Much of what we know was built on theory.
As part of the scientific process, many theories have been proven wrong and new ones adopted.
Al Gore and many in our government are gaining power, money, and influence by supporting this false data. We now have an EPA that believes it has more power and influence than God and Country combined!
We are in the process of turning over our freedom, liberty, and wealth to a new Religion.

Richard Sharpe
December 8, 2009 7:53 am

Wow, I have not seen that airport for a long while. Last time was in 1996, but the most memorable time was just after Christmas, 1974, when I was helping get people onto planes.
It has changed a lot.

December 8, 2009 7:54 am

“David (07:34:50) :
Darwin was attacked by the Japanese during WW2. So, accurate temperature recording may have been a lower priority during the early 1940’s.”
Or higher if the temperature is of any importance to airplanes, tanks and troops…

Ian
December 8, 2009 7:55 am

** Applause **
Diligent & well researched piece.
Please keep up the superbly detailed work.
Best regards

December 8, 2009 7:55 am

Thanks Willis for this fine piece of work. Was that really the first Australian site you looked at in detail?

phlogiston
December 8, 2009 7:57 am

I’ve seen the light – global warming really is anthropogenic!
The globe itself is probably not warming, certainly not any more, but the global temperature record is another matter altogether.

AdderW
December 8, 2009 8:01 am

geronimo (07:49:33) :
OT I know, but news just coming in at the Guardian has a leaked document of the proposals which is causing uproar at Copenhagen.
http://www.guardian.co.uk/

Are we certain that this is not a hack? We wouldn’t want to get this wrong now, would we 🙂

3x2
December 8, 2009 8:03 am

RE : HadCRUT and your FOI requests.
GISS seems to perform the adjustments and stats “on the fly”, mainly using v2.mean as a base. There doesn’t seem to be a bulk list left behind to compare to the original (v2.mean), so bulk comparisons are impossible.
But from what I have read in the Climategate files, CRU seems to store its adjustments in a database, the adjustments and stats being two separate processes. Perhaps this is why there is no chance of us ever seeing the data as used by CRU: it would allow bulk, station-by-station comparisons with the “raw” data (v2.mean?).

Varco
December 8, 2009 8:05 am

“http://www.metoffice.gov.uk/climatechange/science/monitoring/subsets.html. It doesn’t look very user-friendly.”
The explanations given by the Met office are illuminating…
http://www.metoffice.gov.uk/climatechange/science/monitoring/subsets.html
“Question 4. How can you be sure that the global temperature record is accurate?
The methodology is peer reviewed. There are three independent sets of global temperature that all clearly show the rise in global temperatures over the last 150 years. Also we can observe today that other aspects of climate are changing including reductions in Arctic seaice and glacier volume, and changes in phenological records, for example the dates on which leaves, flowers and migratory birds appear.”
So there you go: definitive proof of data accuracy via the dates upon which migratory birds appear. If any journalists who own a garden are reading this blog, perhaps the above comment will assist in understanding why so many people are sceptical of the alleged ‘science’ from these institutions.
Does anyone else think it is curious that being ‘peer reviewed’ is cited alongside migratory bird timing as proof of accuracy? Perhaps the Met Office thinks being ‘peer reviewed’ is no longer enough after the Climategate emails cast doubt on the process?
It is sad that reputable institutions have fallen so low…

December 8, 2009 8:07 am

Kudos, Willis, for your timely exposure of BAHD — Biased Anthropogenic Homogenization of Data.
Bob

BHenry
December 8, 2009 8:08 am

As a layperson it took me some time and study to understand and appreciate Eschenbach’s post. This is also true with similar posts on the subject of “climate change”. Of course it was not written for the layperson but for those who are engaged in the study of the subject. The advocates of anthropogenic climate change have gotten traction in the media by simplifying the subject so the average person can understand it. Any trial lawyer will tell you that you can’t persuade a jury of laypersons using technical language; you must state it in simpler language. I would like to see someone or some group with credible credentials issue public statements on the subject which are understandable.

Editor
December 8, 2009 8:13 am

Willis,
The truly raw data for the stations are in the daily temperature records (.dly files) on the GHCN FTP site. What is interesting is that when GHCN creates the monthly records that they (and GISS) use, they will throw out an entire month’s worth of data if a single daily reading is missing from that month.
When a month is missing from the record, GISS turns around and estimates it using a convoluted algorithm that depends heavily on the existing trend in the station’s data, thus reinforcing any underlying trend. GISS can estimate up to six months of missing data for a single year using this method.
It seems to me the best place to start is with the raw daily data and find out how many “missing” months have a small handful of days missing, and estimate the monthly average for those days, either by ignoring the missing days or interpolating them.
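The approach suggested in the last paragraph can be sketched in a couple of lines (data invented): estimate the monthly mean from whatever days are present rather than discarding the whole month.

```python
# Hypothetical month of daily mean temperatures; None marks a missing reading.
daily = [14.2, 14.8, None, 15.1, 14.6, 15.0]

# Instead of throwing the month away, average the days that do exist.
present = [t for t in daily if t is not None]
monthly_mean = sum(present) / len(present)

print(round(monthly_mean, 2))  # 14.74
```

A real version would also want a minimum-coverage rule (say, at least 25 of 31 days present) before accepting the estimate.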

Javelin
December 8, 2009 8:15 am

Excellent analysis – it looks like you did more work on this than they did in 10 years.
So, in summary: there are three global datasets (CRU, GISS, and GHCN) used to justify warming temperatures; however, they are all based on the same underlying data (GHCN), and this data requires adjusting because of changes to the weather stations and positions of thermometers. When you look at the underlying GHCN data in detail, every time they make these adjustments they adjust the temperature upwards – without justification. Simple as that.
When the raw data is published, every fool on the planet will be left standing in the altogether with their hands over their nuts. Now, everybody who wanted carbon taxes, raise your hands.

Henry chance
December 8, 2009 8:15 am

The only thing all this hard work proves is that they have a motive to change raw data.
I see they have a motive to fight the release of raw data.

JP
December 8, 2009 8:17 am

“This thing about not having raw data anymore. I am confused about that. There is raw unadjusted station data that apparently can still be had by any Susie Q or Tommy T out there. Isn’t that the raw data?”
This has bothered me for some time. Supposedly GISS, NOAA, and Hadley use the same stations, but they all come up with different temp reconstructions. GISS applies different adjustments to the same data than NOAA or Hadley, and vice versa. And I am not so sure they all use the same reporting stations. All perform very questionable and many times unpublished adjustments to different stations. To make sense of it all is impossible.

Tim Clark
December 8, 2009 8:17 am

Good analysis, Willis. One minor point; if it has already been addressed above, just ignore me. Where you have the phrase “and also a fine way to throw away all of the inconveniently colder data prior to 1941”, shouldn’t that be “inconveniently warmer data”?

Douglas Hoyt
December 8, 2009 8:27 am

According to Torok et al (2001), the UHI in small Australian towns can be expressed as
dT = 1.42 log(pop) -2.09
For Darwin with a population of 2,000, the UHI is 2.60 C.
For Darwin with a population of 120,000, the UHI is 5.12 C.
The net warming then is 2.52 C, which explains all the warming that Eschenbach shows in Figure 7. Presumably the rapid growth in Darwin population began in 1942 and was relatively constant before then.
It appears that no UHI correction has been made. If they implemented it, then the warming would totally disappear.
See http://noconsensus.wordpress.com/2009/11/05/invisible-elephants/
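Douglas Hoyt's figures can be checked directly from the quoted expression (reading log as log10, which is what reproduces his numbers):

```python
import math

def uhi(pop):
    """Torok et al. (2001) small-town UHI estimate, dT in degrees C."""
    return 1.42 * math.log10(pop) - 2.09

print(round(uhi(2_000), 2))                  # 2.6 C for a town of 2,000
print(round(uhi(120_000), 2))                # 5.12 C for a city of 120,000
print(round(uhi(120_000) - uhi(2_000), 2))   # net warming from growth: 2.52 C
```

The population figures are the ones the commenter assumed for Darwin, not independently verified here.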

TJA
December 8, 2009 8:27 am

http://news.bbc.co.uk/2/hi/science/nature/8400905.stm

In a separate move, the Met Office has released data from more than 1,000 weather stations that make up the global land surface temperature records.
The decision to make the information available is the latest consequence of the hacked e-mails affair.
“This subset release will continue the policy of putting as much of the station temperature record as possible into the public domain,” said the agency’s statement.
“As soon as we have all permissions in place we will release the remaining station records – around 5,000 in total – that make up the full land temperature record.

Love this quote:

“We’ve seen above average temperatures in most continents, and only in North America[where the numbers have been open to scrutiny and adjustment – TJA] were there conditions that were cooler than average,” said WMO secretary-general Michel Jarraud.
“We are in a warming trend – we have no doubt about it.”

Mike Haseler
December 8, 2009 8:28 am

It’s a sad day for British Science!

Roger
December 8, 2009 8:28 am

I think a major story will be the leaker, more so than the leak. You will find that the IPCC etc. will soon not be at all interested in investigating it. Neither will UEA. The investigators may be under intense pressure not to release information about the leak. This explains the intense desire of the AGW crowd to make sure the public thinks that the KGB/goblins etc. were responsible. I think that, if it is found that it was an internal leak, the IPCC and all associated with it might as well disband and go home.

PeterS
December 8, 2009 8:31 am

I have just asked the Met Office if the data they have made available is raw or processed (e.g. to correct non-warming). They referred me to this statement:
“The data that we are providing is the database used to produce the global temperature series. Some of these data are the original underlying observations and some are observations adjusted to account for non climatic influences, for example changes in observations methods.”
So now I know. It’s the non-climatic influences I’m worried about…
I hope you guys can find examples where clear comparisons can be made along the lines of the current post.

Robert Wood
December 8, 2009 8:31 am

A few months after Pearl Harbour, the Japanese bombed Darwin flat. Some good wrecks in the harbour 🙂

Methow Ken
December 8, 2009 8:36 am

Never was a single smoking gun more clearly and devastatingly exposed.
Major kudos to Mr. Eschenbach for his outstanding and meticulous effort.
But: Now what ?? . . .
Given the magnitude and extent of this demonstrated “artificial adjustment”, as already mentioned the “false in one, false in all” assumption is a fair starting point as far as reasonable suspicions go. But to have a shot at convincing the general public, let alone the MSM, I expect a significant number of other stations around the world will have to be shown to have undergone similar large and unjustified “extended warping” of the data.
This is what computers are good at (IF the software is professionally done):
Seems like it should be possible to write a program that would take as input:
[1] The totally unadjusted raw data for individual stations; and:
[2] The end-result data after all tweaking by CRU, GISS, & GHCN.
Minor SIDEBAR detail:
Of course 1st you have to actually GET both sets of data. . .
Ideal program output would then be graphical comparison along the lines of the excellent presentation in this thread start. . . . . Having said that, as someone who spent 25 years on a complex software engineering project, I immediately add:
Yes: I am aware that done right, this would be a non-trivial software project.
And: Also recognize as was pointed out in prior comment by TheSkyIsFalling that metadata giving reasons for real-world adjustments for individual stations would need to be reviewed.
OTOH:
Surely would not need to do a huge number of stations world wide to reasonably demonstrate and prove a pervasive smoking gun if similar results were common; i.e.:
Seems like a few dozen or so similar examples would start to be pretty overwhelming hard evidence.
Interesting times, indeed. . . .
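The program Methow Ken outlines could start as small as the sketch below (station series invented; a real run would feed in the GHCN raw and adjusted records he lists as inputs [1] and [2]): for each station, compare the linear trend of the raw series with that of the adjusted series and flag large divergences for graphing.

```python
def trend(series):
    """Ordinary least-squares slope of (year, temp) pairs, in degrees/year."""
    n = len(series)
    xs = [x for x, _ in series]
    ys = [y for _, y in series]
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in series)
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def flag_station(raw, adjusted, threshold=0.005):
    """True if adjustment changes the trend by more than `threshold` deg/yr."""
    return abs(trend(adjusted) - trend(raw)) > threshold

# Invented toy series: flat raw record vs an adjusted one with an added trend.
raw = [(1900 + i, 25.0) for i in range(100)]
adj = [(1900 + i, 25.0 + 0.01 * i) for i in range(100)]
print(flag_station(raw, adj))  # True
```

Flagged stations would then go through exactly the kind of manual metadata review mentioned above; the automation only narrows the list to the suspicious cases.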

kwik
December 8, 2009 8:37 am

On the CRU curve, I really don’t understand what I see. Well, I think the black line is the raw data, since you mention that under the graph. But the red and blue shaded areas? Model predictions? Or?

Richard Sharpe
December 8, 2009 8:40 am

Willis,
Where was station zero located?
I am reasonably familiar with Darwin and might know where the station was located.
However, I cannot think of anything that would cause them to be 0.6C high for such a long period of time and with what looks like a gradually declining trend from 1880 to 1940.
Are you just trying to hide the decline? 🙂
While that large jump in 1941 looks suspicious, I think it was later in the war that the Japanese bombed Darwin. (I would have to look it up, but as I recall Pearl Harbor was late in ’41, wasn’t it? Something like December 7, 1941. While the RAAF was in the region, probably largely operating from Tyndall, the US military presence in Darwin did not commence, I believe, until the US entered the war.) So I suspect that the bombing did not cause the problems.

bsharp
December 8, 2009 8:47 am

Mr. Eschenbach:
I appreciate your effort in this matter. Your post has been shared with a person who gives great authority to the existing ‘academic community’. That person has dismissed your findings as opinion and unsupported personal conjecture that the process is broken.
Part of our discussion has hinged on your statement “So I looked at the GHCN dataset.” While acknowledging that the blog venue doesn’t require the same level of source citation as a peer-reviewed journal, your sources have been questioned.
Could you provide a more detailed reference/ link to the HCN data in question [both raw and adjusted]? Thanks and regards.

JJ
December 8, 2009 8:48 am

Good grief. Enough with the unmitigated speculation and hyperbole.
Willis – excellent analysis, but you go a bridge too far with your conclusions.
“Yikes again, double yikes! What on earth justifies that adjustment? How can they do that? … Why adjust them at all?”
Those are very good questions. Making claims requires answers.
“They’ve just added a huge artificial totally imaginary trend to the last half of the raw data!”
You don’t know that. You should not claim to know that which you do not. That is Teamspeak; leave it to the Team.
You have turned up what we already knew: that the alleged ‘global warming’ trend is a function of the adjustments applied to the raw data (or, as in the case of UHI and similar effects, not applied) as much as or more so than of the raw data itself. That is worrying, but not necessarily illegitimate.
The example that you have done an excellent job of laying out here demonstrates that those who are making these adjustments need to have very good explanations for why they did so. Having stuck your neck out and called them dishonest, you had better pray that they don’t have good explanations for those adjustments. If they do, the best that is going to happen is that you, and by the broad brush we, are going to be made to look like a bunch of biased, ranting fools.
Perhaps you should stick to pointing out worrying potential issues (again, good job of that) and save the claims of gross incompetence and malfeasance for after the questions you raise have been answered.
JJ

December 8, 2009 8:48 am

Willis,
Very very nice. Not much else to say ‘cept wow.

pwl
December 8, 2009 8:48 am

Breathtaking! Clear and precise. Yikes and Double Yikes indeed!

Billy
December 8, 2009 8:49 am

Could somebody help me out here? In The Times (http://www.timesonline.co.uk/tol/news/environment/article6936328.ece), we have the claim that CRU’s data was destroyed:
‘In a statement on its website, the CRU said: “We do not hold the original raw data but only the value-added (quality controlled and homogenised) data.”
The CRU is the world’s leading centre for reconstructing past climate and temperatures. Climate change sceptics have long been keen to examine exactly how its data were compiled. That is now impossible. ‘
However, over on RealClimate you’ve got Gavin Schmidt making claims like this:
[Response: Lots of things get lost over time, but no data has been deleted or destroyed in any real sense. All of the raw data is curated by the relevant Met. Services. – gavin]
[Response: No. If that was done it would be heinous, but it wasn’t. The original data rests with the met services that provided it. – gavin]
So which is it? Does the data exist somewhere or not? If it does, then I don’t understand why the CRU even made such a dramatic announcement. Why didn’t they just say what Gavin Schmidt says above?
But on the other hand, reading this article it almost seems like the author DOES have access to raw data (and normalized data) which he used to calculate the Darwin adjustments. Is this the same data that CRU started with? If so, why not use this data to start with to reproduce CRU results? If the data does exist in some format I don’t see why there is any controversy at all.
Seems like somebody is being disingenuous, but I’m not sure who. This is something I find incredibly frustrating about this issue. The scientific issues are understandably opaque and subject to debate. That’s hard enough to get to the bottom of. But even simple things like “Was the data destroyed or not?” are subject to so much spin that it’s nearly impossible for somebody who’s trying to be objective to sort it all out.

stevemcintyre
December 8, 2009 8:52 am

Here’s a somewhat related post that I did on an Australian station a couple of years ago: http://www.climateaudit.org/?p=1489

Neo
December 8, 2009 8:54 am

Given the amount of money they are talking about just for “Cap-n-Tax” (not to mention the IPCC money), this distortion is criminal.

john
December 8, 2009 9:00 am

Given the thoroughness of Willis Eschenbach’s methodology in tracing the Darwin temperature record, and what looks like a successful survey of US surface temperature stations (www.surfacestations.org), maybe a similar effort could be developed to audit these three main temperature databases. Establish a single method/process for review of the record, with required data formats, etc., publish a manual and let the globe have at it.

December 8, 2009 9:01 am

Interesting:
From: Phil Jones
To: Kevin Trenberth
Subject: One small thing
Date: Mon Jul 11 13:36:14 2005
Kevin,
In the caption to Fig 3.6.2, can you change 1882-2004 to 1866-2004 and
add a reference to Konnen (with umlaut over the o) et al. (1998). Reference
is in the list. Dennis must have picked up the MSLP file from our web site,
that has the early pre-1882 data in. These are fine as from 1869 they are Darwin,
with the few missing months (and 1866-68) infilled by regression with Jakarta.
This regression is very good (r>0.8). Much better than the infilling of Tahiti, which
is said in the text to be less reliable before 1935, which I agree with.
Cheers
Phil
Prof. Phil Jones
Climatic Research Unit Telephone +44 (0) 1603 592090
School of Environmental Sciences Fax +44 (0) 1603 507784
University of East Anglia
Norwich Email p.jones@xxxxxxxxx.xxx
NR4 7TJ
UK
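The infilling-by-regression approach Jones describes (estimating missing months at one station from a well-correlated neighbour) can be sketched roughly as follows. This is a generic illustration, not the CRU’s actual code; the station values below are made up:

```python
import numpy as np

def infill_by_regression(target, neighbor):
    """Fill missing values (NaN) in `target` by ordinary least-squares
    regression against the `neighbor` series over their overlap."""
    target = np.asarray(target, dtype=float)
    neighbor = np.asarray(neighbor, dtype=float)
    overlap = ~np.isnan(target) & ~np.isnan(neighbor)
    # Fit target = a * neighbor + b on the overlapping months
    a, b = np.polyfit(neighbor[overlap], target[overlap], 1)
    # Correlation over the overlap (Jones cites r > 0.8 for Darwin/Jakarta)
    r = np.corrcoef(neighbor[overlap], target[overlap])[0, 1]
    filled = target.copy()
    gaps = np.isnan(target) & ~np.isnan(neighbor)
    filled[gaps] = a * neighbor[gaps] + b
    return filled, r

# Hypothetical monthly values; the third target value is missing
station_a = [0.1, 0.3, np.nan, 0.5, 0.2]
station_b = [0.2, 0.4, 0.35, 0.6, 0.3]
filled, r = infill_by_regression(station_a, station_b)
```

The gap is estimated from the fitted line rather than copied directly, so the infilled value inherits any systematic offset between the two stations.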

Ben
December 8, 2009 9:05 am

Wonderful text.
Minor typos :
– figure 4, legend : bad copy-and-paste of the legend of figure 3 (remove the reference to the year 2000) ;
– the right Latin saying is “Falsus in uno, falsus in omnibus”.

boballab
December 8, 2009 9:05 am

kwik (08:37:09) :
“On the CRU curve, I really don’t understand what I see. Well, I think the black line is the raw data, since you mention that under the graph. But the red and blue shaded area? Model predictions? Or?”
First, the black line in Figure 1 is labeled “observations”; however, that is not the raw observation. That is the observation after adjustment.
Second, from what I understand, the red area is what the model says the temp should be with CO2 forcing. The blue area is what the model says the temp should be without CO2 forcing.
Now when I looked at Fig 1 and Fig 2, to my Mark 1 eyeball the raw looks like it correlates closely to what the models say the temp would be WITHOUT CO2 forcing (blue shaded area).
Earlier I asked if Willis had laid the raw data over the graph in Fig 1 to see if it corresponds with the blue area. The reason being: if the IPCC’s own models without CO2 forcing match the raw and Willis’s reconstruction, that in turn gives credence to the idea that people are adjusting the raw to match the models’ red area – or, in other words, why you get that huge adjustment.
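The overlay check suggested above amounts to asking what fraction of the raw observations falls inside the model envelope. A minimal sketch, with entirely hypothetical numbers (the actual IPCC band edges would have to be digitized from Figure 1):

```python
def fraction_within_band(series, band_low, band_high):
    """Fraction of observations lying inside a model envelope,
    given per-year lower and upper band edges."""
    inside = sum(lo <= x <= hi
                 for x, lo, hi in zip(series, band_low, band_high))
    return inside / len(series)

# Hypothetical raw anomalies and a hypothetical 'no CO2 forcing' band
raw_anoms = [0.0, 0.1, -0.1, 0.05, 0.0]
band_low  = [-0.2, -0.2, -0.2, -0.2, -0.2]
band_high = [0.2, 0.2, 0.2, 0.2, 0.2]
frac = fraction_within_band(raw_anoms, band_low, band_high)
```

A high fraction for the blue (no-forcing) band and a low one for the red band would support boballab’s reading; a proper test would also account for observational uncertainty.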

Ben
December 8, 2009 9:11 am

(Also : 1850 instead of 1650 – second line after the photograph of the Darwin airport.)

December 8, 2009 9:11 am

Billy (08:49:21),
The obvious answer: Produce the raw data. The hand-written, signed and dated B-91 forms recording the daily temps at each surface station would be a good start.
JJ (08:48:13):
“…the alleged ‘global warming’ trend is a function of the adjustments applied to the raw data (or, as in the case of UHI and similar effects, not applied) as much or more so than the raw data. That is worrying, but not necessarily illegitimate.”
What smacks of illegitimacy is the fact that when the data is massaged, it almost always shows warming: click1, click2, click3.
For the true global temperature, a record of temperatures from rural sites uncontaminated by UHI would show little if any global warming: click1, click2 [blink gif – takes a few seconds to load].
The CRU, the IPCC, NOAA and the rest of the government-funded science offices are trying to show an alarming increase in global temperatures. They almost always use a y-axis in tenths of a degree to exaggerate any minor fluctuations. But by using a chart with a less scary y-axis, we can see that nothing unusual is occurring: click

Jeff
December 8, 2009 9:15 am

I think we (the internet community) can end this debate once and for all … Using the stations cherry-picked by the IPCC, we could set up station teams of internet volunteers to review the raw vs “value added” GHCN data and validate those adjustments … where an adjustment appears to have been applied without good reason, the team should attempt its own adjustment based on logical and justifiable reasoning …
This should allow the world to have a verified record set of actual temp measurement for at least the last 100 years … we don’t have that now …
step one would be to classify station location for appropriateness … bad sites would be marked for adjustment or exclusion … adjustments should never be averages they should be delta adjustments based on nearby reliable (i.e. non bad) sites …
An Army of Davids so to speak …
No reason this can’t happen within a year or two if someone can coordinate it …
Set up clearly defined rules on site validation …
Set up clearly defined adjustment methods to measure the warmists’ “value added” against …
Use those adjustment methods to re-adjust the egregious site adjustments …
Create a peer review process to allow a second, third and fourth set of eyes to validate the work done by the team …
Allow anyone to join a team … anyone … Warmists are Welcome 🙂
Team decisions should be a super majority i.e. >66%
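The “delta adjustments based on nearby reliable sites” idea above can be sketched roughly like this. The function and station values are illustrative assumptions, not any database’s actual homogenization method:

```python
def delta_adjust(bad_before, bad_after, neighbors_before, neighbors_after):
    """Estimate a step correction for a suspect station from the average
    change shown by nearby reliable stations over the same break.

    Rather than replacing the suspect record with a neighbor average,
    only the *difference* (delta) relative to the neighbors is corrected."""
    n = len(neighbors_before)
    # Mean change at the reliable neighbor sites across the break
    neighbor_delta = sum(a - b for a, b in
                         zip(neighbors_after, neighbors_before)) / n
    # Change at the suspect site across the same break
    station_delta = bad_after - bad_before
    # Correction removes only the step NOT shared with the neighbors
    return neighbor_delta - station_delta

# Illustrative annual means (deg C) before/after a suspected station move
adjustment = delta_adjust(
    bad_before=26.0, bad_after=27.5,      # suspect site jumped +1.5
    neighbors_before=[25.0, 24.5, 25.5],
    neighbors_after=[25.3, 24.8, 25.8],   # neighbors rose only +0.3
)
# `adjustment` would be applied to the suspect site's post-break data
```

Here the neighbors warmed 0.3 °C while the suspect site jumped 1.5 °C, so the sketch would subtract 1.2 °C from the post-break record, keeping the regionally shared 0.3 °C of warming.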

Phil A
December 8, 2009 9:16 am

“Some of these data are the original underlying observations and some are observations adjusted to account for non climatic influences, for example changes in observations methods.” – PeterS quoting the Met Office
And with no visibility whatsoever of what those “adjustments” were, despite knowing full well (or because they know?) that therein lies the principal suspicions of dodgy data manipulation.
Hm. I wonder if the Met Office even *have* records of how those adjustments were done? Or whether some of them were done with long-lost bits of code on long-dead computers and what they actually have is an archive of raw data A, an archive of adjusted data B and an embarrassing lack of repeatability of how they got from A to B in the first place. Maybe they’re looking at the likes of Darwin and going [snip] too – except they can’t and won’t admit they have a problem.
Or was that what ‘Harry’ was trying to do? Trying to replicate past adjustments in putting together the adjusted database? And did he ever “succeed”?

December 8, 2009 9:21 am

At what point do we declare that these temperature records and proxies have too little confidence to be of any use in determining if AGW exists?
Would a going-forward position be to use a well-understood set of measurements and monitor changes for the next X years to see which models (hypotheses) are working? It seems Copenhagen may fail, and the “catastrophe” hypothesis is dying under the weight of Climategate – so why not?

rickM
December 8, 2009 9:32 am

I think what is missing from the current “coverage”, so-called, is this kind of analysis, to beat back the claims that the science underpinning (I would love to use the word undermining) the CRUgate emails is sound.
What I typically see is a talking head who interviews a warmist, and the talking points are driven by the warmist – again, no debate.
1) Emails were hacked
2) The emails and specific comments have been taken out of context
3) The science is sound

beng
December 8, 2009 9:35 am

Fine work as usual, Willis.
******
8 12 2009
JP (04:46:12) :
This subject was covered in a CA thread some years ago. I believe it came up when someone discovered the TOBS adjustment that NOAA began using. The TOBS adjusted the 1930s down but the 1990s up. Someone calculated that the TOBS accounted for 25–30% of the rise in global temps.
******
The TOBS issue was the first thing I thought of, too, to explain the massive adjustments. They seem to use this as a catch-all adjustment, because by its nature the correction can be quite large in some specific instances (in both directions). The metadata to confirm this perhaps isn’t available – I don’t know.

Martin
December 8, 2009 9:36 am

Willis,
I don’t know where the NOAA URL is that gives individual station data (as opposed to data sets) so I went to the NASA/GISS site with the individual stations data.
http://data.giss.nasa.gov/gistemp/station_data/
The Darwin raw data there (http://data.giss.nasa.gov/cgi-bin/gistemp/findstation.py?datatype=gistemp&data_set=0&name=Darwin) seems to correspond with the raw data you showed in your graphs. But the homogenized data (at http://data.giss.nasa.gov/cgi-bin/gistemp/findstation.py?datatype=gistemp&data_set=2&name=Darwin) does not look like what you show as homogenized data.
Could you give a URL for the site where you got the data (and indicate which file if it is an ftp or otherwise multiple file listing)?
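Once both the raw and the homogenized files for a station are in hand, one simple comparison is to difference their linear trends. A rough sketch – the annual values below are made up, and the GHCN/GISS file parsing is omitted:

```python
import numpy as np

def trend_per_century(years, temps):
    """Ordinary least-squares trend, in degrees per 100 years."""
    slope, _intercept = np.polyfit(years, temps, 1)
    return slope * 100.0

# Made-up annual means for one station, raw vs homogenized
years = np.arange(1900, 1910)
raw   = np.array([25.0, 25.1, 24.9, 25.0, 25.2,
                  25.0, 25.1, 24.9, 25.0, 25.1])
# Pretend the homogenization added a linear warming of 0.05 C/year
homog = raw + 0.05 * (years - years[0])

raw_trend   = trend_per_century(years, raw)
homog_trend = trend_per_century(years, homog)
# homog_trend exceeds raw_trend by about 5 C/century in this toy case
```

The difference `homog_trend - raw_trend` is exactly the trend introduced by the adjustments, which is the quantity Willis plots for Darwin.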
Thanks.

Bill Webster
December 8, 2009 9:36 am

It gets worse…
The latest adjustment for Darwin airport is +2.4 deg C
It would appear that as new data for that station are added each month going forward, they will be immediately adjusted upwards by this large amount by the “scientists”. How can this be right, given that current temperature data from modern instruments can be expected to be more accurate than historical data?
Also, if the UHI effect is taken into account, any adjustment should be negative, not positive….
If this kind of fiddle is happening with current readings from many other stations around the world, then it is little wonder that the Met Office is claiming that the current decade is the hottest ever!