The Smoking Gun At Darwin Zero

by Willis Eschenbach


People keep saying “Yes, the Climategate scientists behaved badly. But that doesn’t mean the data is bad. That doesn’t mean the earth is not warming.”

Darwin Airport – by Dominic Perrin via Panoramio

Let me start with the second objection. The earth has generally been warming since the Little Ice Age, around 1650, and there is general agreement that it has warmed since then. See e.g. Akasofu (http://www.iarc.uaf.edu/highlights/2007/akasofu_3_07/Earth_recovering_from_LIA.pdf). Climategate doesn’t affect that.

The first question, the integrity of the data, is different. People say “Yes, they destroyed emails, and hid from Freedom of Information requests, and messed with proxies, and fought to keep other scientists’ papers out of the journals … but that doesn’t affect the data, the data is still good.” Which sounds reasonable.

There are three main global temperature datasets. One is at CRU, the Climatic Research Unit of the University of East Anglia, where we’ve been trying to get access to the raw numbers. One is at NOAA/GHCN, the Global Historical Climatology Network. The final one is at NASA/GISS, the Goddard Institute for Space Studies. The three groups take raw data and “homogenize” it to remove things like the 2C jump that appears when a station is moved to a warmer location. The three global temperature records are usually called CRU, GISS, and GHCN. Both GISS and CRU, however, get almost all of their raw data from GHCN. All three produce very similar global historical temperature records from the raw data.

So I’m still on my multi-year quest to understand the climate data. You never know where this data chase will lead. This time it has landed me in Australia. I got to thinking about Professor Wibjorn Karlen’s statement about Australia that I quoted here (http://wattsupwiththat.com/2009/11/29/when-results-go-bad/):

Another example is Australia. NASA [GHCN] only presents 3 stations covering the period 1897-1992. What kind of data is the IPCC Australia diagram based on?

If any trend it is a slight cooling. However, if a shorter period (1949-2005) is used, the temperature has increased substantially. The Australians have many stations and have published more detailed maps of changes and trends.

The folks at CRU told Wibjorn that he was just plain wrong. Here’s the record they say is right, the one Wibjorn was talking about: Fig. 9.12 in the UN IPCC Fourth Assessment Report, showing Northern Australia:

Figure 1. Temperature trends and model results in Northern Australia. Black line is observations (from Fig. 9.12 of the UN IPCC Fourth Assessment Report). Covers the area from 110E to 155E and from 30S to 11S. Based on the CRU land temperature data.

One of the things revealed in the released CRU emails is that CRU basically uses the Global Historical Climatology Network (GHCN) dataset for its raw data. So I looked at the GHCN dataset. There I find, as Wibjorn had said, three stations in North Australia that cover the period 1900-2000, and nine such stations in all of Australia. Here is the average of the GHCN unadjusted data for those three Northern stations, from AIS (http://www.appinsys.com/GlobalWarming/climate.aspx):

Figure 2. GHCN Raw Data, All 100-yr stations in IPCC area above.
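(For anyone who wants to follow along at home: below is a minimal Python sketch of this kind of regional averaging. It is my own illustration, not the exact method used for these figures, and the station names and values in it are invented.)

from collections import defaultdict

# Sketch: average several annual station series into one regional mean.
# Station names and values are invented; a real run would parse them
# from the GHCN raw data files.
stations = {
    "station_a": {1936: 26.8, 1937: 26.5, 1938: 26.3, 1939: 26.2},
    "station_b": {1936: 29.1, 1937: 29.0, 1938: 28.7},
    "station_c": {1936: 27.0, 1938: 26.6, 1939: 26.4},
}

def regional_mean(series_by_station):
    """Average all stations that report in a given year."""
    sums, counts = defaultdict(float), defaultdict(int)
    for series in series_by_station.values():
        for year, temp in series.items():
            sums[year] += temp
            counts[year] += 1
    return {year: sums[year] / counts[year] for year in sorted(sums)}

for year, mean in regional_mean(stations).items():
    print(year, round(mean, 2))

(Note that a plain mean of absolute temperatures jumps whenever a station enters or leaves the record, which is one reason averages are usually computed from anomalies, each station’s departure from its own baseline.)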

So once again Wibjorn is correct: this looks nothing like the corresponding IPCC temperature record for Australia. But it’s too soon to tell. Professor Karlen is only showing three stations. Three is not a lot of stations, but that’s all of the century-long Australian records we have in the IPCC-specified region. OK, we’ve seen the longest station records, so let’s throw more records into the mix. Here’s every station in the UN IPCC-specified region with temperature records that extend up to the year 2000, no matter when they started. That’s 30 stations.

Figure 3. GHCN Raw Data, All stations extending to 2000 in IPCC area above.

Still no similarity with IPCC. So I looked at every station in the area. That’s 222 stations. Here’s that result:

Figure 4. GHCN Raw Data, All 222 stations in IPCC area above.

So you can see why Wibjorn was concerned. This looks nothing like the UN IPCC data, which came from the CRU, which was based on the GHCN data. Why the difference?

The answer is, these graphs all use the raw GHCN data. But the IPCC uses the “adjusted” data. GHCN adjusts the data to remove what it calls “inhomogeneities”. So on a whim I thought I’d take a look at the first station on the list, Darwin Airport, so I could see what an inhomogeneity might look like when it was at home. And I could find out how large the GHCN adjustment for Darwin inhomogeneities was.

First, what is an “inhomogeneity”? I can do no better than quote from GHCN:

Most long-term climate stations have undergone changes that make a time series of their observations inhomogeneous. There are many causes for the discontinuities, including changes in instruments, shelters, the environment around the shelter, the location of the station, the time of observation, and the method used to calculate mean temperature. Often several of these occur at the same time, as is often the case with the introduction of automatic weather stations that is occurring in many parts of the world. Before one can reliably use such climate data for analysis of longterm climate change, adjustments are needed to compensate for the nonclimatic discontinuities.

That makes sense. The raw data will have jumps from station moves and the like. We don’t want to think it’s warming just because the thermometer was moved to a warmer location. Unpleasant as it may seem, we have to adjust for those as best we can.

I always like to start with the rawest data, so I can understand the adjustments. At Darwin there are five separate individual station records that are combined to make up the final Darwin record. These are the individual records of stations in the area, which are numbered from zero to four:

DATA SOURCE: http://data.giss.nasa.gov/cgi-bin/gistemp/findstation.py?datatype=gistemp&data_set=0&name=darwin

Figure 5. Five individual temperature records for Darwin, plus station count (green line). This raw data is downloaded from GISS, but GISS uses the GHCN raw data as the starting point for its analysis.

Darwin does have a few advantages over other stations with multiple records. There is a continuous record from 1941 to the present (Station 1). There is also a continuous record covering a century. Finally, the stations are in very close agreement over the entire period of the record. In fact, where multiple stations are in operation they are so close that you can’t see the records behind Station Zero.

This is an ideal station for our purposes, because it also illustrates many of the problems with the raw temperature station data:

  • There is no one record that covers the whole period.
  • The shortest record is only nine years long.
  • There are gaps of a month and more in almost all of the records (see the gap-finding sketch after this list).
  • It looks like there are problems with the data at around 1941.
  • Most of the datasets are missing months.
  • For most of the period there are few nearby stations.
  • There is no one year covered by all five records.
  • The temperature dropped over a six-year period, from a high in 1936 to a low in 1941. The station did move in 1941 … but what happened in the previous six years?
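As promised in the list above, here is a small Python sketch of how one might hunt for those month-or-longer gaps. The observed months are invented for illustration.

# Sketch: find gaps of one month or more in a monthly record.
months = [(1940, 11), (1940, 12), (1941, 3), (1941, 4)]  # invented

def find_gaps(observed):
    """Return (last seen, next seen, number of months missing) per gap."""
    gaps = []
    for (y1, m1), (y2, m2) in zip(observed, observed[1:]):
        missing = (y2 * 12 + m2) - (y1 * 12 + m1) - 1
        if missing > 0:
            gaps.append(((y1, m1), (y2, m2), missing))
    return gaps

print(find_gaps(months))  # [((1940, 12), (1941, 3), 2)] - Jan and Feb 1941 missing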

In resolving station records, it’s a judgment call. First off, you have to decide if what you are looking at needs any changes at all. In Darwin’s case, it’s a close call. The record seems to be screwed up around 1941, but not in the year of the move.

Also, although the 1941 temperature shift seems large, I see a similar-sized shift from 1992 to 1999. Looking at the whole picture, I think I’d vote to leave it as it is; that’s always the best option when you don’t have other evidence. First, do no harm.

However, there’s a case to be made for adjusting it, particularly given the 1941 station move. If I decided to adjust Darwin, I’d do it like this:

Figure 6. A possible adjustment for Darwin. The black line shows the total amount of the adjustment, on the right scale, and shows the timing of the change.

I shifted the pre-1941 data down by about 0.6C. We end up with little change end to end in my “adjusted” data (shown in red); it’s neither warming nor cooling. However, it reduces the apparent cooling in the raw data. Post-1941, where the other records overlap, they are very close, so I wouldn’t adjust them in any way. Why should we adjust those? They all show exactly the same thing.
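For concreteness, here is what that single-step adjustment amounts to in code. The 1941 breakpoint and the 0.6C offset are the ones discussed above; the temperature values are invented.

# Sketch: a single-step homogeneity adjustment. Everything before the
# breakpoint year is shifted down by a fixed offset; later data is untouched.
BREAK_YEAR = 1941  # year of the documented station move
OFFSET_C = 0.6     # size of the step, per the discussion above

raw = {1938: 26.9, 1939: 26.8, 1940: 26.7, 1941: 26.0, 1942: 26.1}  # invented

adjusted = {year: (temp - OFFSET_C if year < BREAK_YEAR else temp)
            for year, temp in raw.items()}
print(adjusted)  # pre-1941 values shifted down by 0.6C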

OK, so that’s how I’d homogenize the data if I had to, but I vote against adjusting it at all. It only changes one station record (Darwin Zero), and the rest are left untouched.

Then I went to look at what happens when the GHCN removes the “inhomogeneities” to “adjust” the data. Of the five raw datasets, the GHCN discards two, likely because they are short and duplicate existing longer records. The three remaining records are first “homogenized” and then averaged to give the “GHCN Adjusted” temperature record for Darwin.

To my great surprise, here’s what I found. To explain the full effect, I am showing this with both datasets starting at the same point (rather than ending at the same point as they are often shown).

Figure 7. GHCN homogeneity adjustments to Darwin Airport combined record

YIKES! Before getting homogenized, temperatures in Darwin were falling at 0.7 Celsius per century … but after the homogenization, they were warming at 1.2 Celsius per century. And the adjustment they made was over two degrees per century … when those guys “adjust”, they don’t mess around. And the adjustment is an odd shape, first moving stepwise, then climbing to level off at about 2.4C.
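The trends quoted here are ordinary least-squares slopes, expressed per century. Here is a Python sketch of how to check such a number; the series below is invented, so substitute the real Darwin annual means.

import numpy as np

# Sketch: least-squares trend of an annual series, in degrees C per century.
years = np.arange(1900, 2000)
rng = np.random.default_rng(0)
temps = 26.5 - 0.007 * (years - 1900) + rng.normal(0.0, 0.3, years.size)  # invented

slope_per_year = np.polyfit(years, temps, 1)[0]
print(f"trend: {100 * slope_per_year:+.2f} C per century")  # about -0.7 for this series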

Of course, that led me to look at exactly how the GHCN “adjusts” the temperature data. Here’s what they say in “An Overview of the GHCN Database” (http://www.ncdc.noaa.gov/oa/climate/ghcn-monthly/images/ghcn_temp_overview.pdf):

GHCN temperature data include two different datasets: the original data and a homogeneity-adjusted dataset. All homogeneity testing was done on annual time series. The homogeneity-adjustment technique used two steps.

The first step was creating a homogeneous reference series for each station (Peterson and Easterling 1994). Building a completely homogeneous reference series using data with unknown inhomogeneities may be impossible, but we used several techniques to minimize any potential inhomogeneities in the reference series.

In creating each year’s first difference reference series, we used the five most highly correlated neighboring stations that had enough data to accurately model the candidate station.

The final technique we used to minimize inhomogeneities in the reference series used the mean of the central three values (of the five neighboring station values) to create the first difference reference series.

Fair enough; that all sounds good. They pick five neighboring stations and average them. Then they compare the average to the station in question. If it looks wonky compared to the average of the reference five, they check any historical records for changes, and if necessary, they homogenize the poor data mercilessly. I have some problems with what they do to homogenize it, but that’s how they identify the inhomogeneous stations.
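In code, the quoted procedure looks roughly like the Python sketch below. This is my paraphrase of the Peterson and Easterling description, not GHCN’s actual implementation, and the five neighbor series are invented.

import numpy as np

# Sketch of a first-difference reference series: for each year, take the
# five neighbors' year-to-year differences, drop the highest and lowest,
# average the central three, then cumulate back into a series.
rng = np.random.default_rng(1)
years = np.arange(1950, 1961)
neighbors = rng.normal(26.0, 0.4, size=(5, years.size))  # invented neighbors

diffs = np.diff(neighbors, axis=1)  # first differences, shape (5, n-1)
diffs.sort(axis=0)                  # order the five values for each year
central3 = diffs[1:4].mean(axis=0)  # mean of the central three values
reference = np.concatenate(([0.0], np.cumsum(central3)))
print(reference.round(2))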

OK … but given the scarcity of stations in Australia, I wondered how they would find five “neighboring stations” in 1941 …

So I looked it up. The nearest station that covers the year 1941 is 500 km away from Darwin. Not only is it 500 km away, it is the only station within 750 km of Darwin that covers the 1941 time period. (It’s also a pub, Daly Waters Pub to be exact, but hey, it’s Australia, good on ya.) So there simply aren’t five stations to make a “reference series” out of to check the 1936-1941 drop at Darwin.
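That distance is easy to check from station coordinates with a great-circle formula. A Python sketch follows; the coordinates are approximate and only for illustration.

from math import radians, sin, cos, asin, sqrt

# Sketch: haversine (great-circle) distance between two stations, in km.
def haversine_km(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

# Darwin vs. Daly Waters, approximate coordinates
print(round(haversine_km(-12.4, 130.9, -16.3, 133.4)))  # roughly 500 km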

Intrigued by the curious shape of the average of the homogenized Darwin records, I then went to see how they had homogenized each of the individual station records. What made up that strange average shown in Fig. 7? I started at zero with the earliest record. Here is Station Zero at Darwin, showing the raw and the homogenized versions.

Figure 8. Darwin Zero homogeneity adjustments. The black line shows the amount and timing of the adjustments.

Yikes again, double yikes! What on earth justifies that adjustment? How can they do that? We have five different records covering Darwin from 1941 on. They all agree almost exactly. Why adjust them at all? They’ve just added a huge, artificial, totally imaginary trend to the last half of the raw data! Now it looks like the IPCC diagram in Figure 1, all right … but a six-degree-per-century trend? And in the shape of a regular stepped pyramid climbing to heaven? What’s up with that?

Those, dear friends, are the clumsy fingerprints of someone messing with the data Egyptian style … they are indisputable evidence that the “homogenized” data has been changed to fit someone’s preconceptions about whether the earth is warming.

One thing is clear from this. People who say that “Climategate was only about scientists behaving badly, but the data is OK” are wrong. At least one part of the data is bad, too. The Smoking Gun for that statement is at Darwin Zero.

So once again, I’m left with an unsolved mystery. How and why did the GHCN “adjust” Darwin’s historical temperature to show radical warming? Why did they adjust it stepwise? Do Phil Jones and the CRU folks use the “adjusted” or the raw GHCN dataset? My guess is the adjusted one since it shows warming, but of course we still don’t know … because despite all of this, the CRU still hasn’t released the list of data that they actually use, just the station list.

Another odd fact: the GHCN adjusted Station 1 to match Darwin Zero’s strange adjustment, but they left Station 2 (which covers much of the same period and, as per Fig. 5, is in excellent agreement with Station Zero and Station 1) totally untouched. They only homogenized two of the three. Then they averaged them.

That way, you get an average that looks kinda real, I guess, it “hides the decline”.

Oh, and for what it’s worth, care to know the way that GISS deals with this problem? Well, they only use the Darwin data after 1963, a fine way of neatly avoiding the question … and also a fine way to throw away all of the inconveniently colder data prior to 1941. It’s likely a better choice than the GHCN monstrosity, but it’s a hard one to justify.

Now, I want to be clear here. The blatantly bogus GHCN adjustment for this one station does NOT mean that the earth is not warming. Nor does it mean that the three records (CRU, GISS, and GHCN) are generally wrong. This may be an isolated incident; we don’t know. But every time the data gets revised and homogenized, the trends keep increasing. Now, GISS does its own adjustments. However, as they keep telling us, they get the same answer as GHCN gets … which makes their numbers suspicious as well.

And CRU? Who knows what they use? We’re still waiting on that one, no data yet …

What this does show is that there is at least one temperature station where the trend has been artificially increased to give a false warming where the raw data shows cooling. In addition, the average raw data for Northern Australia is quite different from the adjusted, so there must be a number of … mmm … let me say “interesting” adjustments in Northern Australia other than just Darwin.

And with the Latin maxim “Falsus in uno, falsus in omnibus” (false in one thing, false in everything) as our guide, until all of the station “adjustments” are examined, adjustments of CRU, GHCN, and GISS alike, we can’t trust anyone using homogenized numbers.

Regards to all, keep fighting the good fight,

w.

FURTHER READING:

My previous post on this subject: http://wattsupwiththat.com/2009/11/29/when-results-go-bad/

The late and much missed John Daly, irrepressible as always: http://www.john-daly.com/darwin.htm

More on Darwin history (it wasn’t Stevenson Screens): http://www.warwickhughes.com/blog/?p=302#comment-23412

NOTE: Figures 7 and 8 updated to fix a typo in the titles. 8:30PM PST 12/8 – Anthony


COMMENTS:
Michael
December 8, 2009 12:18 am

John F. Kennedy was assassinated Friday, November 22, 1963. I think that’s when all the shenanigans began.

Vote Quimby
December 8, 2009 12:31 am

Anyone can tell you Darwin and Northern Australia are the tropical part of the country, where it’s always warm! 😉

December 8, 2009 12:31 am

Wow, just amazing work and effort. This has just got to get publicity in the msm somehow to spark an investigation and rework of the temperature record.

tokyoboy
December 8, 2009 12:33 am

So the forged hokey sticks are everywhere?

December 8, 2009 12:33 am

This is exactly the kind of public, open examination of the raw and adjusted data that needs to be done for ALL stations globally to establish, once and for all, IF the entire earth is warming un-naturally. (and I have yet to see any definitive proof that the current warming is un-natural).
IF after that process has been done, out in the open for ALL interested parties to see, examine and pick holes in, and IF that shows un-natural warming, THEN I would be more than happy for John Travolta, Leonardo DiCaprio and Al Gore to give up their private jets to save the earth!

david atlan
December 8, 2009 12:35 am

Impressive detective work, congratulations!
I wish that journalists today would work this way, instead of simply copy-pasting some press releases and showing a few pictures of polar bears looking lost in the water…

debreuil
December 8, 2009 12:35 am

Sure that UFO sighting was faked. But look at the other 2500 UFO sightings, they obviously couldn’t all be faked (very tired of hearing that argument regarding both the data and the ‘scientists’).
Not sure why I still find these things so shocking; I think mostly because they didn’t even use some made-up excuse and hide things in complex math. They literally just move the line by hand and then submit it. Usually cheating is ‘going against the spirit of the game’, but I guess sometimes it is just cheating.
Excellent work figuring all this out, kudos.

December 8, 2009 12:36 am

Michael, we do not need wild conspiracy theories to distract us. If you want wild conspiracy theories, go to a warmist site. As a heads up, I believe that their latest, baseless, conspiracy theory involves Russians hacking the servers at CRU!

Nick Stokes
December 8, 2009 12:36 am

Darwin was extensively bombed in Feb 1942, which may explain the 1941 issue.

dearieme
December 8, 2009 12:37 am

Feed ’em to the crocodiles. Crooks!

Michael
December 8, 2009 12:37 am

Wasn’t the E-mail leak 1 day before Kennedy’s assassination date?

manfred
December 8, 2009 12:40 am

Peterson’s adjustments … climate science is really a small world.
Peterson is also the person who made the deliberately untrue statements in his NOAA internal ‘talking points’ about Anthony Watts’s study and, of course, he is well represented in the CRU emails.

Henrik
December 8, 2009 12:45 am

I am speechless! And I am getting very angry…
Thank you for your work – you are a true scientist!

tallbloke
December 8, 2009 12:48 am

Top work Willis, can’t see them wriggling out of this one. That’s cooked data.

December 8, 2009 12:51 am

Michael. Or could it have been in December 1942 when the British radio station in Hong Kong picked up radio traffic about the forthcoming attack on Pearl Harbour and decyphered it – and it was made known to Roosevelt, who kept it to himself in order to allow a way in to the war? Ooohh!

singularian
December 8, 2009 12:51 am

In Australia they call it Climb-mate change.
Up, up and away.

Ben M
December 8, 2009 12:51 am

Darwin was bombed in February 1942. There was a build-up of military presence prior to that date (and certainly afterwards), so perhaps that has something to do with this anomalous 1941 data.

December 8, 2009 12:51 am

It’s interesting that at least parts of the Southern Hemisphere appear to show a slight cooling during the recent period of an active sun once artificial ‘adjustments’ are stripped out.
I have seen recent empirical evidence that counterintuitively indicates that a more turbulent flow of energy from the sun cools the stratosphere rather than warming it and that the cooling effect in the stratosphere exceeds the value of any warming effect on the seas and troposphere from any small increase in solar power.
http://www.nasa.gov/topics/earth/features/AGU-SABER.html
Thus overall during the 20th Century for rural and unadjusted uncorrupted sites we might see a cooling trend especially in the southern hemisphere where land heating from the small increase in solar power is less significant.
However it will still depend on the precise balance between energy release from oceans to troposphere and energy transfer from troposphere to stratosphere and thence to space.
Nevertheless I think we urgently need to get a precise grip on the temperature trend from suitably ‘pure’ sites as soon as possible.
I think there may be surprises in store in the light of that observed effect of a more turbulent solar flow actually increasing the rate of energy loss to space from the stratosphere.

Jack Green
December 8, 2009 12:52 am

This is why we need the FOIA Nasa. Get the Senators to read this. Right on. Hot Air has a good link on this latest bit of corruption “end justifies the means” argument.
http://hotair.com/greenroom/archives/2009/12/07/the-first-sign-of-corruption/
Thanks Willis and Anthony.

December 8, 2009 12:54 am

It can’t be a one-off. The link below is to a graph from the NOAA’s website, and it shows that over 0.5 degrees F of warming is all down to the adjustments. I find this hard to reconcile with common sense – surely they would have to adjust down for UHI effects?
http://www.ncdc.noaa.gov/img/climate/research/ushcn/ts.ushcn_anom25_diffs_urb-raw_pg.gif

Adrian Taylor
December 8, 2009 1:00 am

Willis, a fascinating article. Many thanks for doing the work…
One question: how does the data from ground stations in Australia compare to the data from satellites? I can’t find the data to compare.
It certainly looks like there is some pretty clumsy homogenising going on. How many ground stations are used in the IPCC figures? I wonder how long it would take to re-assess all of them as you have done?
Thanks again
regards
Adrian

Bryn
December 8, 2009 1:01 am

For the record, Darwin was bombed by the Japanese (using more bombs than were used on Pearl Harbor) in Feb 1942. The town’s population was only 2000 at that time and would hardly warrant adjustments for UHI. The town was razed by cyclone Tracy on Christmas Day in 1974, by which time the population was 42000. Currently the town has a population of over 120,000 (see Wikipedia records).
It seems neither catastrophe resulted in station moves — something I expected to read about when I first started to follow this contribution.
I congratulate the author on his careful analysis and comparison of the raw and “value added” records. This sort of commentary cannot be ignored and needs explanation by the “powers that be”.
Also remember that Darwin is well within the tropics (12 degS). Would such large shifts in temperature (alleged) be expected at that latitude?

outoftown
December 8, 2009 1:02 am

Willis – ty for the time and effort you have put into your research on the above work – this sort of work is where the real fraudulent nature of the shonky scientists will be revealed – the code holds the answers – it’s people like you, Willis, that will out this lot – once again, thank you

Rhys Jaggar
December 8, 2009 1:02 am

I think that this paper should be emailed to every single delegate in Copenhagen, every single Cabinet Minister in every country in the world which uses Representative Government and every editor of every national newspaper and media station with one simple question:
‘This paper judiciously, ruthlessly, relentlessly and scientifically demonstrates the true nature of the difference between raw data and adjusted data where temperature records are concerned. Given that the raw data shows cooling whilst adjusted temperature shows rapid warming, would you agree that until ALL raw data for ALL sites used in the three global temperature record organisations GISS, CRU and GCHN are made public, examined critically by INDEPENDENT parties and the nature of the adjustments understood, that NO DECISIONS CONCERNING GLOBAL ACTION ON SUPPOSED GLOBAL WARMING SHOULD TAKE PLACE?’
And if they say no, I think we know what the attitude to all those folks is to rigorous science.
Not in my back yard, buster.
Well done, keep it up and produce 50 to 100 similar analyses.
Whatever that shows up. Criminal fraud or the occasional mistake. Well-meaning mistakes or the greatest scam in history.
And make all those politicians out there face the consequences. Resignations, imprisonments, whatever.
It’s time for the gloves to come off…..

michel
December 8, 2009 1:02 am

The whole thing clearly needs to be gone through with a fine-tooth comb, station by station, with the histories and the adjustments taken into account, and it needs to be done with double-blind methodology. What is critical is that the person doing the adjustment to the readings must not know what the effect of those adjustments will be on temperature. I don’t know exactly how you do that, but it’s the only way to get an unbiased set of adjustments based solely on the merits of the station histories.
The ones who adjust must do it by objective criteria, which have to be tested in advance to make sure that they are robust between different operators, to verify the methodology is sound, and they must adjust without knowing the effect. It is like medical double blind studies, those rating the patient symptoms do so without knowing whether the patient is part of the drug or part of the control group. Then they are applied. The results might then be superior to raw unadjusted data.
Or maybe you just have to decide that the raw data is all we have, and that we cannot improve on it, uncertain as it is, and so you have to accept a larger measure of uncertainty in the conclusions drawn than any of us would like.

RC Saumarez
December 8, 2009 1:03 am

This is appalling. Speaking as a signal processor, this is pseudo-science. The GHCN should release all their data so that the nature of the adjustments can be seen. They should have to validate the adjustments.
Why? My guess is that the idea of global warming is so ingrained that anything that doesn’t conform to it is regarded as an error.
It is astonishing that the MSM and the wider scientific community haven’t really understood what is going on, and that our economies are going to be reshaped on the basis of evidence such as this. The political mainstream in the UK will not engage in any argument about AGW: “the science is settled”. Maybe in 5 years’ time, when the Arctic ice remains normal and the Earth hasn’t warmed up, common sense may prevail.
We live in an age of enlightenment.

John Graham
December 8, 2009 1:04 am

You will find the problem in 1941 was due to the first Japanese air attack on Darwin and most of the population headed south as fast as possible

tallbloke
December 8, 2009 1:04 am

I hope we are going to see Anthony’s work on surface data published not too long after Copenhagen.

Rhys Jaggar
December 8, 2009 1:04 am

Send it to every delegate at Copenhagen, every President, Prime Minister or Cabinet Minister, every TV station, every school and every media mogul.
Tell ’em that the data in Darwin stinks. Stinks of shit.
And as sewage recycling is high on the Copenhagen agenda, you’d like to stick your nosey ass into another 100 stations in the GHCN record.

vdb
December 8, 2009 1:05 am

An adjustment trend slope of 6 degrees C per century? Maybe the thermometer was put in a fridge, and they cranked the fridge up by a degree in 1930, 1950, 1962 and 1980?

Neville
December 8, 2009 1:05 am

Willis I’ve just looked at the BOM site here in Australia and 2 stations cover the period 1882 to 2009.
The first is the Darwin post office 1882 to 1941 no. 014016 and Darwin airport 1941 to 2009 no. 014015.
The average mean temp (high) is 32.7C for the PO and 32.0C for the airport.

David Mellon
December 8, 2009 1:05 am

I am not surprised with your findings but I am very impressed with the work you have done and shared with us. Whenever someone withholds scientific data, it is natural to ask a lot of questions. In this case I have so many questions for the three climate information holders I am not sure where to start. Thank you!

December 8, 2009 1:07 am

This behavior keeps popping up over and over, and I can’t believe that they are brazen enough to hide it in plain sight. I’ve been looking over data from historical weather stations where I live (Kamloops BC, Canada) and I find variations between stations even in a small area, which fits with the siting of the stations; i.e. airport temperatures appear higher. Since I started looking into climatology, I’ve been comparing “official” temperatures to my house thermometer and there are differences of a few degrees. Using USB temperature monitors this summer I found it difficult to compute the average temperature of even my yard, as each physical location appears to have its own unique temporal heat signature. This is amplified by adjacent plants in the summer, and the only time one sees homogeneity is in the winter, where every place in my yard not close to the heated house is uniformly cold (about -10 C today).
I suggest that we do some distributed processing by choosing weather stations where we live and performing the same type of data analysis. I’m in the process of writing a scraping program for Environment Canada weather sites, and then it’s just a matter of averaging daily temperatures to get monthly values and comparing them to “adjusted” values. This would be similar to Anthony’s surface stations project. If anyone has data analysis software already written to deal with averaging/displaying the data I’d be interested in getting it, as while I like programming as a hobby, I don’t want to reinvent the wheel unless I absolutely have to.
I’ll choose Kamloops BC as my part of the project.

Jason
December 8, 2009 1:07 am

According to breakfast TV in the UK, the Met have just released data that proves the last decade is the warmest on record.

Invariant
December 8, 2009 1:09 am

Brilliant!

ForestGirl
December 8, 2009 1:11 am

Thanks for the brilliant and painstaking work. Just one question: Is there any information about the actual locations of the stations? That is, does the homogenization account for *actual knowledge* about the possibly changing environment around the stations? If, for example, Station 2, added around 1950, was in a field surrounded by trees whereas Station 1 was on the runway…

Peter Plail
December 8, 2009 1:11 am

Hey, I’ve seen some adjustments similar to that somewhere else:
[0.,0.,0.,0.,0.,-0.1,-0.25,-0.3,0.,-0.1,0.3,0.8,1.2,1.7,2.5,2.6,2.6,2.6,2.6,2.6]*0.75 ; fudge factor

December 8, 2009 1:13 am

Willis, greetings from the lucky country and – awesome post, thanks.
As you say “So I’m still on my multi-year quest to understand the climate data. You never know where this data chase will lead.”
I have the same fascination, I’m just glad you have done all the hard work and turned it into something clear and digestible.
I’m sure most informed skeptics believe the world has been warming for 150 years. But when you see a post like this you do start to wonder. Maybe the modern warming is a “regional phenomenon”.. UAH does show much less warming in the southern hemisphere..
Are you going to do more posts on this subject? Please!

Nick Stokes
December 8, 2009 1:13 am

Here is a little history of the effect of the 1942 bombing on Darwin’s met operations.

UK Sceptic
December 8, 2009 1:14 am

I think the entire temperature record needs to be overhauled before politicians commit us to a path of economic meltdown.

Chilled Out
December 8, 2009 1:20 am

Your excellent analysis of this manipulation of raw data at Darwin shows what all programmers and business analysts know: a computer model will give the results it is programmed to give. In this case the artificial increase in weather station temperature readings appears fraudulent.
An obvious question is: “Where is the scientific research that can justify or legitimise the level of GHCN adjustments shown in your Figure 8?”
In any scientific study you inevitably have to sample datasets and sometimes remove bias from results. But such changes would normally be carefully documented and tested to demonstrate that they were not biasing the results. You would also expect error bars or confidence levels to be clearly identified so that the “accuracy” of the results can be assessed. Given the level of manipulation you have revealed on this site, the uncertainty equals or exceeds the alleged temperature rise.
The behaviour of the climate scientists involved in this work does not inspire confidence – the apparent lack of transparency and their refusal to share data or clarify their sources and methods points to chicanery and deception rather than open scientific enquiry and debate.

TomVonk
December 8, 2009 1:22 am

I must say that this is methodologically impressive. Really nothing to say about the analysis.
But why isn’t there a program that’d do a systematic screening?
It would take all the unadjusted data in a given region, then the adjusted data, and compute the adjustment.
That seems a rather easy way to check whether the Darwin craziness appears in many other cases or not.
I know that I probably underestimate the amount of coding, but from the functional analysis point of view it really doesn’t seem a very hard task.
At least not as compared to the problems of atmospheric non-equilibrium dynamics.
If such a program doesn’t exist, it would certainly be worth doing.

Martin Brumby
December 8, 2009 1:24 am

The ‘adjustments’ in Fig. 7 wouldn’t be based on the atmospheric CO2 levels at Mauna Loa, by any chance?
That would be a neat way of ‘adjusting’ the data. (Data Rape, I’d call it).
After all, we have Pershing’s evidence that the ‘science is incredibly robust’.
Well, he got the ‘incredibly’ bit right.

Pteradactyl
December 8, 2009 1:25 am

Will we ever get a politician brave enough to stand and fight the ever more obvious corruption in the world of climate change? Look how the Saudi minister has been vilified for speaking out about it; yes, he may have oil interests, and that has been jumped on, but the climate change gang have also got big business on their side for the renewable energy products. No-one disagrees that the climate is varying, but it is natural and we have to prepare for whatever it is going to do. Let’s stop talking about ‘greenhouse gas’ being the cause. Once it is proven that CO2 is not the problem, are they then going to blame the only ‘real’ greenhouse gas – water vapour – clouds!
Thank you Anthony for keeping the real world sane!

yonason
December 8, 2009 1:26 am

What they are hiding, apart from just “the decline.’
“Iceland Temperatures Higher In Both Roman & Medieval Warming Periods Than Present Temps Peer-Research Confirms”
http://www.c3headlines.com/2009/12/iceland-temperatures-higher-in-both-roman-medieval-warming-periods-than-present-temps-peerresearch-c.html
“Climategate: Is There Evidence That NASA/GISS Researchers Have Fabricated Global Warming? If There’s Smoke, It’s Usually A Fire “
http://www.c3headlines.com/2009/12/climategate-is-there-evidence-that-nasagiss-researchers-have-fabricated-global-warming-if-theres-smoke.html
(Or, context is everything.)
“The Climate Liars: Obama Administration Claims Fossil Fuels Kills Millions – A 100% Lie, Opposite of All Known Health Facts & Statistics”
http://www.c3headlines.com/2009/11/the-climate-liars-obama-administration-claims-fossil-fuels-kills-millions-a-100-lie-opposite-of-all-.html
And, so very much more.
The data is the data. The only reason to “adjust” it is to hide the fact that it was bad data to begin with. On top of that, the “adjustments,” made by the same people who couldn’t do the measurements properly, are only likely to multiply rather than “correct” any “errors” in the data. There is no reason to trust people who have been lying for decades. They have been doing it so long, it’s second nature to them. They no longer care about or know how to tell the truth.

December 8, 2009 1:32 am

Thank you so much for this. I’m currently learning R, although it’s slow going due to other commitments, but it’s exactly these type of articles that someone like me needs. It looks like we’re losing the political fight, so the only way to respond is with the science.
From what I’ve seen we have E.M. Smith’s work, A.J. Strata, a site called CCC, Steve McIntyre and Jean S over at CA, and I’m sure various others, and of course your good self, all working in various ways on the various datasets.
Can a way be found to get you all together, plus interested parties willing to do some work (like me), to really work on this and produce a single temperature record, but rather than rehash CRU’s, GHCN’s or GISS’s code in something like R, actually come up with a new set of rules for adjusting temperatures.
I’ve seen yourself, among others, complain about the way they adjust for TOBS and FILNET, and now we have this article, demonstrably showing other shenanigans going on. Whatever was come up with would probably need to be peer-reviewed to get the methodology accepted, but at least there’d be something we could all trust.
I’d be willing to put in work, I have plenty of spare bandwidth on a fast shared server, and skills in web programming, but that said it would probably be better co-ordinated from here or CA, as you already have the presence and the interested parties coming here.
Unless something like this is done, we’ll see the Met spending three years using the same code and coming up with almost exactly the same dataset, and we’ll have lost.

Patrik
December 8, 2009 1:34 am

I must contend that no homogenisation at all should be used on a global scale!
The averaging will sort itself out, since what one is showing is averages.
All homogenisation will lead to distorted data on a global scale.
Of course if one specifically needs to collect trends for a smaller region, then it could be necessary to homogenise, but never on a global scale.

Capn Jack Walker
December 8, 2009 1:35 am

As a walker in this nation, I have mixed with people who keep rainfall records in Australia. Australia is a harsh, hot place, mostly desert or desert borderline; it’s not temp, it’s rainfall, we watch. Hot is hot, cold is cold.
No one dicks with rainfall measure. Not done.
No one dicks with station data.
Australia needs Stevenson screens Aus wide.

LB
December 8, 2009 1:37 am

OT, but Jack the Insider of The Australian has a Climategate blog. He is usually very good but completely misses the point, perhaps more knowledgeable chaps/chapettes would like to set the record straight:
http://blogs.theaustralian.news.com.au/jacktheinsider/index.php/theaustralian/comments/climategate_lame_by_any_other_name/

Jack Green
December 8, 2009 1:38 am

Stephen Wilde (00:51:39) : Nasa’s SABER satellite is on to something. Thanks Stephen. This program needs to be extended.

December 8, 2009 1:41 am

Wow. More manufactured fakeness than a million Hollywood blockbusters! Not a smoking gun, not a nuclear explosion: the birth of a new GALAXY! When this hits the fan, Copenhagen will become “broken wagon” – the wheels are falling off! Great job!

yonason
December 8, 2009 1:44 am

RE my yonason (01:26:02) :
I said, “The only reason to “adjust” it is to hide the fact that it was bad data to begin with.” By that I meant, of course, that if “adjustments” really were needed, then the data was bad. However, when the data is good, that’s even worse, because we are no longer dealing with incompetence, but premeditated deliberate deception.

rcrejects
December 8, 2009 1:46 am

Could this perhaps explain the reluctance of CRU to release the data?

harpo
December 8, 2009 1:48 am

Greetings from Australia.
Nobody in the current Australian government cares a damn about whether the temperature has gone up, down or sideways. They want to implement a tax so they can collect money from the rich polluters (their words not mine) and give it to the poor working man (that’s my simplistic reading of it).
Climate Change, Climate Change, Climate Change, Tax will fix it, Tax will fix it, Tax will fix it… Climate Change, Climate Change…..
Now will you critical thinkers just give up and love Big Brother? 2 + 2 = 5, remember … 2 + 2 has always equalled 5 …
As an engineer educated in Australia in the 80’s it both breaks my heart and scares the [snip] out of me…
(Interestingly, before 1984, Orwell’s 1984 was required reading for all year 11/12(?) students in Victoria… now I can’t find anybody under the age of 40 who has read it)

Andrew P
December 8, 2009 1:49 am

“Those, dear friends, are the clumsy fingerprints of someone messing with the data Egyptian style … they are indisputable evidence that the “homogenized” data has been changed to fit someone’s preconceptions about whether the earth is warming.”
Wow. So GHCN blatantly adjusts the raw data, to create a post-war warming trend where none existed. And the GISS data appears to match GHCN’s (once it has also been ‘adjusted’). And CRU won’t release their raw data, which doesn’t inspire much confidence – not that I had much in them after recent developments.
From this example, and given the implications for the world economy currently being discussed in Copenhagen, I think that the precautionary principle should be adopted, and all adjusted data from GHCN, GISS and CRU should be classed as fraudulent, until proven otherwise. Willis’ essay should be sent to every politician and journalist possible.

December 8, 2009 1:50 am

“Barry Foster (00:51:02) :
Michael. Or could it have been in December 1942 when the British radio station in Hong Kong picked up radio traffic about the forthcoming attack on Pearl Harbour and decyphered it – and it was made known to Roosevelt, who kept it to himself in order to allow a way in to the war? Ooohh!”
Seeing how the Japanese attack on Pearl Harbour took place a year earlier, I think this is unlikely.
Seriously, though – this is great work and demonstrates exactly why climate science must be conducted openly and with free access not only to the raw data, but the methodology used to analyse it.
As a layman I look at the raw data and the “homogenized” version and can only assume that “homogenized” actually means massaged to fit a political preconception.

John in NZ
December 8, 2009 1:55 am

Thank you Willis.
I really learn a lot from your posts.

Jack Green
December 8, 2009 1:56 am

I think NASA knows that their CO2 models are flawed. Note the comment in this paper (thanks Stephen Wilde) that SABER is directly measuring CO2 ratios where GCM models are being used in climate simulations. Interesting that one hand doesn’t know what the other is doing, or do they?
Abstract from:
http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20090004446_2009001269.pdf
The Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) experiment is one of four instruments on NASA’s Thermosphere-Ionosphere-Energetics and Dynamics (TIMED) satellite. SABER measures broadband infrared limb emission and derives vertical profiles of kinetic temperature (Tk) from the lower stratosphere to approximately 120 km, and vertical profiles of carbon dioxide (CO2) volume mixing ratio (vmr) from approximately 70 km to 120 km. In this paper we report on SABER Tk/CO2 data in the mesosphere and lower thermosphere (MLT) region from the version 1.06 dataset. The continuous SABER measurements provide an excellent dataset to understand the evolution and mechanisms responsible for the global two-level structure of the mesopause altitude. SABER MLT Tk comparisons with ground-based sodium lidar and rocket falling sphere Tk measurements are generally in good agreement. However, SABER CO2 data differs significantly from TIME-GCM model simulations. Indirect CO2 validation through SABER-lidar MLT Tk comparisons and SABER-radiation transfer comparisons of nighttime 4.3 micron limb emission suggest the SABER-derived CO2 data is a better representation of the true atmospheric MLT CO2 abundance compared to model simulations of CO2 vmr.

Stacey
December 8, 2009 1:57 am

Dear Willis
This is a great post and it demonstrates something that I have always intuitively felt about the treatment of data and the way the graphs are deliberately drawn to create alarm.
I know you looked at the CET and would appreciate a link, which I have lost.

Deadman
December 8, 2009 2:01 am

There’s a typo in the penultimate paragraph: in omnis is Latin for “in all.”

Ken Seton
December 8, 2009 2:02 am

Willis – Great read and easy to follow.
You do us a great service.
Thanks from Sydney (Aus).

Scott of Melb Australia
December 8, 2009 2:12 am

I have just forwarded this link to one of our more science-savvy opposition senators in Australia.
(you know the ones that revolted against the Carbon tax here in Australia and voted it down)
Plus had to include Andrew Bolt hoping it will get into his column in the MSM tomorrow.
Nothing like giving them a little Ammo.
Great article
Scott

KeithGuy
December 8, 2009 2:12 am

Excellent work Willis,
When it comes to the agreement between the three main global temperature data sets the term “self fulfilling prophecy” comes to mind.
Someone somewhere started with the notion that there has been a warming of something like 0.7 of a degree over the twentieth century, and the CRU, GISS, and GHCN have manipulated their data, using three different, contrived methodologies, until it agrees with their pre-conceived ideal.
It appears to me that the whole exercise of reconstructing historic global temperatures owes very little to science and much more to “clever” statistics.

supercritical
December 8, 2009 2:13 am

Raw data is just that.
And, anybody subsequently approaching the raw data must have an a-priori motivation, which must be made explicit.
If there are gaps, and bits of the raw data that are unsuitable for the purposes of the current study, then why not leave them out altogether?
IF the motivation is to determine whether or not today’s ambient global air temperatures are hotter or colder than they were, then a continuous record is NOT required. Rather, as long as there were matching continuous sequences of a few years, this would be sufficient for the purpose.
So why do the climate scientists need a ‘continuous record’? And for what purpose are they trying to create an artefactual proxy of the real raw data? And in so doing, aren’t they creating a subjective fiction? An artefact? A man-made simulation?
Isn’t this similar to producing a marked-up copy of the dead-sea scrolls, with the corrections in felt-tipped pen, and missing bits added-in in biro, and then calling it ‘the definitive data-set’ ?

Geoff Sherrington
December 8, 2009 2:14 am

There is an unknown in the equation.
The Darwin data were collected by the Bureau of Meteorology, who have their own sets of “adjustments”. I am trying to discover if the Darwin data sent to GHCN are unadjusted or Australian-adjusted. By coincidence, I have been working on Darwin records for some months. There was an early station shift from the Post Office to the Regional Office near the town (which would have gradually built up some UHI), then in 1962 to the airport, which in 1940 was way, way out of town but which is now surrounded by suburbs on most sides, so UHI is complicit again.
There is a BOM record of data overlap in the period 1967 to 1973. Overall, the Tmax was about 0.5 deg C higher at the airport and the Tmin was about 0.5 deg C lower at the airport during these 7 years. The Tmax averaged 31.5 deg C and 32.1 deg C at Office and Airport respectively. The Tmin averaged 23.8 and 23.2 at Office and Airport. Of course, if you take the mean, the Office is the same as the airport.
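(Checking the arithmetic: Office (31.5 + 23.8)/2 = 27.65 deg C, Airport (32.1 + 23.2)/2 = 27.65 deg C, so the means are identical because the max and min shifts offset exactly.)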
However, my problem is that I do not know if the Office and the Airport use raw or Australian adjusted data. I suspect the latter. If you can tell me how to display graphs on this blog I’ll put up a spaghetti graph of 5 different versions of annual Taverage at Darwin, 1886 to 2008. The worst years show a difference between adjusters of about 3.5 deg C, with KNMI GHCN version 2 adjusted being lower than recent BOM official figures.
I still do not know if any of us has seen truly raw data for Darwin.
Or from any other Australian station.

December 8, 2009 2:16 am

http://www.bbc.co.uk/blogs/ethicalman/2009/03/obama_will_circumvent_congress_to_limit_us_emissio.html
Democracy supplanted by the will of Obama and the unelected EPA of America.
See what John Podesta, Obama’s top adviser has to say.

Aligner
December 8, 2009 2:23 am

An excellent article.
But it doesn’t stop there. In order to arrive at a “global average” gridding is used with backfilling of sparse areas using methods such as averaging or interpolating from “nearby” stations, taking no account of topography, etc. Any error such as this therefore carries more weight than in areas where records are more prolific.
And nowhere is the margin of error introduced accounted for. It has always seemed to me that the degree of warming being measured is probably less than the margin of error of the temperature record itself, especially when SSTs from bucket measurements are added. Add in UHI effect (even if you adjust for that too) and the margin for error increases again.
Ultimately, therefore, IMHO the whole exercise becomes meaningless and making alarmist observations, let alone blaming CO2, preposterous.

Donald (Australia)
December 8, 2009 2:24 am

It would be interesting to feed this through to Copenhagen, and have some brave soul present it to the assembled zombies.
A clove of garlic, sunlight, or the sight of a wooden stake could not arouse more panic, or howls of anger.

December 8, 2009 2:26 am

Lets all “homogenize” our data
Into chunks of bits and pieces,
Lets forget which way is up or down
And randomize our thesis,
So black is white and white is brown
And purple wears a hat,
And when our data’s goose is cooked,
We’ll all say, “How HOT is that?”
.
.
©2009 Dave Stephens
http://www.caricaturesbydave.com

skylarker
December 8, 2009 2:29 am

From a tyro sceptic. Thank you for this excellent paper. More please.

Rob
December 8, 2009 2:38 am

This could explain the MSM reaction: he [Murdoch] owns ALL the papers in Australia. This country basically has no freedom of the press anymore.
copied from another site
“Phil Kean wonders why Sky gives so much time to the global warming scare. Perhaps it could be because it is owned by News International which is run by James Murdoch who is married to a climate change fanatic. Kathryn Hufschmid runs the Clinton Climate Initiative.
.
I understand that News International also owns a number of newspapers in this country. I don’t suppose that the fact that the boss’s wife is an AGW nutter has any influence on the editorial policy of those newspapers.
.
It almost makes me wish that Daddy Rupert still had personal control of the media in this country.

Phillip Bratby
December 8, 2009 2:43 am

Onwards and upwards.
Great work Willis; much appreciated amidst all the BS surrounding Copenhagen.

December 8, 2009 2:44 am

The lack of transparency is the problem. The adjustments should be completely disclosed for all stations, including the reasons for those adjustments. You have to be careful drawing conclusions without knowing why the adjustments were made. It certainly looks suspicious. In Torok, S. and Nicholls, N., 1996, An historical temperature record for Australia, Aust. Met. Mag. 45, 251-260, which I think was the first paper developing a “High Quality” dataset (not sure that is how I would personally describe it given the Australian data and station history, but moving along…), one example of the adjustments made to the 224 stations used in that paper is given, and it is for Mildura. The adjustments and reasons (see p.257):
<1989 -0.6 Move to higher, clearer ground
<1946 -0.9 Move from Post Office to Airport
<1939 +0.4 New screen
<1930 +0.3 Move from park to Post Office
1943 +1.0 Pile of dirt near screen during construction of air-raid shelter
1903 +1.5 Temporary site one mile east
1902 -1.0 Problems with shelter
1901 -0.5 Problems with shelter
1900 -0.5 Problems with shelter
1892 +1.0 Temporary site
1890 -1.0 Detect
“Detect” refers to use of the Detect program (see paper). The “<” symbol indicates that the adjustment was made to all years prior to the indicated year.
The above gives an idea of the type of adjustments used in that paper and the number of adjustments made to the data. For the 224 candidate stations, 2,812 adjustments were made in total. A couple of points: the adjustments are subjective by their very nature, though use of overlapping multi-station data can assist. I have concerns about the size of the errors these multiple adjustments introduce, but I am certainly no expert. I wonder what the error bar is on the final plot when we are talking of average warming in the tenths of a degree C over a century. The stations really never were designed to provide the data they are being used for, but that is well known.
My point is that without the detailed station metadata it might be too early to draw a conclusion. This is why we need to know what adjustments were made to each station and the reasons. Surely this data exists (if it doesn’t, then the entire adjusted data series is useless, as it can’t be scrutinised by other scientists – maybe they did a CRU with it!?), and if it does, why is it not made public or at the very least made available to researchers? Have the data keepers been asked for this? I am assuming they have.
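For anyone who wants to experiment, a minimal sketch of how a list of corrections like the Mildura one would be applied, assuming the convention described above (the station series here is hypothetical; the offsets are transcribed from the list, in deg C):

# Sketch: applying Torok & Nicholls-style step adjustments to an annual series.
# ("<", year, offset) adds offset to every year before that year;
# ("=", year, offset) adds offset to that single year only.
adjustments = [
    ("<", 1989, -0.6), ("<", 1946, -0.9), ("<", 1939, +0.4), ("<", 1930, +0.3),
    ("=", 1943, +1.0), ("=", 1903, +1.5), ("=", 1902, -1.0), ("=", 1901, -0.5),
    ("=", 1900, -0.5), ("=", 1892, +1.0), ("=", 1890, -1.0),
]

def apply_adjustments(series, adjustments):
    # series: dict mapping year -> raw annual mean temperature (deg C)
    adjusted = dict(series)
    for mode, year, offset in adjustments:
        for y in adjusted:
            if (mode == "<" and y < year) or (mode == "=" and y == year):
                adjusted[y] += offset
    return adjusted

# Every year before 1930 picks up -0.6 - 0.9 + 0.4 + 0.3 = -0.8 deg C from the
# "<" entries alone, which shows how a handful of step corrections compounds.
raw = {1920: 24.3, 1950: 24.0, 1995: 24.5}
print(apply_adjustments(raw, adjustments))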

Charles. U. Farley
December 8, 2009 2:45 am

From FOIA2009.zip/documents/osborn-tree6/tsdependent/compute_neff.pro
***Although there are no programming errors, as far as I know, the
; ***method would seem to be in error, since neff(raw) is always greater
; ***than neff(hi) plus neff(low) – which shouldn’t be true, otherwise
; ***some information has somehow been lost. For now, therefore, run
; ***compute_neff for unfiltered series, then for low-pass series, and
; ***subtract the results to obtain the neff for the high-pass series!

Ryan Stephenson
December 8, 2009 2:46 am

Can I please correct you. You keep using the phrase “raw data”. Averaged figures are not “raw data”. Stevenson screens record maximum and minimum DAILY temperatures. This is the real RAW data.
When you do an analysis of temperature data over one year then you should always show it as a distribution. It will have a mean and a standard deviation. Take the UK. It may have a mean annual temperature of 15 Celsius with a standard deviation of 20 Celsius.
Without the distribution the warmists can say “The mean of 2001 was 0.1 Celsius higher than the mean of 2000. This is significant – we are heating the planet”. With the distribution you would say “The mean of 2001 was 0.1 Celsius higher than 2000, but since the standard deviation of the annual distribution is 20 Celsius, we cannot consider this as being statistically significant”.
If we had the REAL raw data we could almost certainly show that the off-trend averages of the last few decades were of no statistical significance anyway, before we got into the nitty-gritty of fudges to the data. By using slack language to describe the mean annual temperatures as “Raw Data” we are falling into a trap set by the warmists.
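A minimal sketch of that comparison, on synthetic daily data with hypothetical numbers:

import random, statistics

random.seed(42)
# hypothetical daily mean temperatures for two years, the second 0.1 C warmer on average
year_a = [random.gauss(15.0, 5.0) for _ in range(365)]
year_b = [random.gauss(15.1, 5.0) for _ in range(365)]

diff = statistics.mean(year_b) - statistics.mean(year_a)
spread = statistics.stdev(year_a)      # the daily distribution's spread
sem = spread / 365 ** 0.5              # naive standard error of an annual mean
print(f"difference: {diff:+.2f} C, daily sd: {spread:.1f} C, s.e. of mean: {sem:.2f} C")
# Caveat: daily temperatures are strongly autocorrelated, so the effective sample
# size (the "neff" of the CRU code quoted just above) is well below 365, and the
# true standard error of the annual mean is correspondingly larger.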

December 8, 2009 2:46 am

Willis,
Please email me your last graph as I would like to use it in public lectures, with attribution. A low resolution one would look shoddy. david.archibald@westnet.com.au
With thanks

pyromancer76
December 8, 2009 2:48 am

Amazing. Couldn’t sleep (West Coast); when I began reading WUWT there was only one comment. Now 55. When I finish commenting, probably over 200.
Anthony and Willis Eschenbach, masterful work, a skillful exposé of the purposeful fraud. This becomes a whodunnit escapade and I am beginning to want to know WHEN it began in earnest. When was the temperature data of Darwin doctored? Who ordered it? Sometime between 1998 and 2001 (change in IPCC reports)?
I no longer believe “they probably began with good moral purposes from a desire to save humankind”. This deed was foul from the beginning. The 2008 U.S. election cycle had to be part of the “plan”. Too much fraud; too much unexplained; too much money from financial types; too much money from overseas; the ballot boxes stuffed or votes changed at the end in too many areas, not just the “normal” expected areas of voting fraud (such as Chicago) in American history that goes back into the 19th century. This was/is massive.
harpo (01:48:28) : “Greetings from Australia.
Nobody in the current Australian government cares a damn about whether the temperature has gone up, down or sideways. They want to implement a tax so they can collect money from the rich polluters (their words not mine) and give it to the poor working man (that’s my simplistic reading of it).”
Harpo seems to have a handle on the matter, or at least on the “rationale”. The “they” who are implementing the tax are also getting large salaries, excellent medical care, fantastic retirement bundles, and jet set perks (Copenhagen anyone, with free prostitute services?). “They” also can direct this tax money any which way “they” want. This “they” also includes corporations that are no longer making a profit on their products so they are turning to trading fees and largesse from the “they” their money helped elect; Enron seemed to begin this kind of trading scam. It is like vultures descending on the savings and retirements of the hard-working developed-world individuals and families — now that manufacturing and its collateral industries have left for China, India and other parts of the world.
Keep up the good work. Maybe “we” can “save the world” from the “they”.

Charles. U. Farley
December 8, 2009 2:50 am

FOI2009.zip\FOIA\documents\ipcc-tar-master
Lot of dissent displayed.
What happened to it all?

December 8, 2009 2:56 am

Great analysis.
The scale of the deception just keeps getting bigger.
Incredible.

Ripper
December 8, 2009 3:02 am

Warwick Hughes has a bit more info on this
http://www.warwickhughes.com/blog/?p=317

December 8, 2009 3:05 am

The last step up seems to be around 1979. It would be interesting to see if any other upsteps have happened since then, once satellite data went online.

Phillip Bratby
December 8, 2009 3:10 am

The Met Office has released data as a zip file at http://www.metoffice.gov.uk/climatechange/science/monitoring/subsets.html. It doesn’t look very user-friendly.

sHx
December 8, 2009 3:13 am

Superb case study! Absolutely superb! It is also a lesson to other scientists on how to articulate issues in simple, layman’s terms. Sir, I salute your communication skills.
And I am sure the people from the crocodile country would be delighted to hear that a world class blogger is putting Darwin on to the world map.

Richard
December 8, 2009 3:16 am

So let’s sum up then:
IPCC data = GHCN data = GISS data = CRU data
The official line (and that of genuine believers): CRU data is reliable. Why? Because it “independently” shows the same profile and trends as the “independent” GHCN data and the GISS data.
Actually the 3 data sets are very nearly one and the same.
[While on the subject] – but hey, the satellite data also show the same trends and same profile since 1979. But a little bell rings – somewhere I read that satellite data is “calibrated” with ground data. There is only one ground dataset for all practical purposes: GHCN. So does the satellite data unwittingly correspond to the ground data because it is calibrated with them?
But then again I read that the satellite temperatures correspond with the balloon radiosonde temps. So maybe that cannot be?
Dr Spencer if you read this could you comment please?
GHCN Aussie adjusted data shows big warming since 1950. (Funnily enough, this is also the period when AGW started and was identified by the IPCC.) Raw data shows cooling. (Sound familiar? NIWA?)
Darwin adjustments – oh oh….

Robinson
December 8, 2009 3:17 am

They want to implement a tax so they can collect money from the rich polluters (their words not mine) and give it to the poor working man (that’s my simplistic reading of it).

I don’t believe this. Even an economic illiterate knows that increasing costs on business are passed on to the consumer. The margins don’t change, do they? Not to mention the fact that the poor working man (in the UK at least) will have to holiday at Bognor Regis, rather than in Alicante, and he’ll find it increasingly difficult to have his heating on in the winter, etc. Not really the kind of policy platform designed to improve the life of the poor, unwashed masses, is it?
No, I think this is more to do with energy security (vital national interest), coupled with a lot of highly stupid activist scientists. I would use the word opportunistic, but I don’t think it strong enough.

December 8, 2009 3:25 am

I have often wanted to check out the long-term Darwin record but never got around to it. Ever since I was a boy (which is some time ago) I have noticed Darwin’s temperature on the evening news is always between 32-34 C. A perfect place to test climate change.
Thanks for the great work….this could grow.

Peter Pond
December 8, 2009 3:28 am

I have the good fortune to have been born in Darwin, not so many years after the Japanese bombs stopped falling.
Darwin Post Office site temp records (1882-1942) show a mean max of 32.6C and a mean min of 23.6C. The monthly mean max temps ranged from 30.6C in January to 34.2C in November (just before the monsoon arrives). It is 24 metres above sea level.
Darwin Airport site temp records (1941 – 2009) show a mean max of 32.0C and a mean min of 23.2C. The monthly mean max temps range from 30.5C in January to 33.3C in November. It is 30 metres above sea level.
Both sites are close to the sea on three sides.
It can be easily seen that (a) being tropical, temps in Darwin do not vary all that much over the year and (b) that the temps were slightly higher in the earlier years – i.e. there would appear to have been some cooling since the early part of the 20th century.

jamesafalk
December 8, 2009 3:28 am

I’ve never said this to a datahead man (or any other sort) before… but I think I love you. This is about as clear, convincing and robust a paper as anyone could put together given the available data.
Is that a Copenhagen herring I smell, or just something altogether fishy?

Nick
December 8, 2009 3:38 am

Why homogenise?
I see no need to homogenise at all. If you have a record starting in 1950 and ending in 1980, then you can fit a line and get the trend for that period. Is it up or down?
Likewise for the adjacent sites.
1. There is no need to join up different temperature records to make one.
2. Picking one site and not others is a cherry pick. Why exclude the other sites from the records if you use them for adjusting another site? There is no reason.
3. Missing months are relatively easy to fix. Create an average seasonal record. Average all Jan, Feb, Mar, … figures. If you miss Feb’s figure you can interpolate based on this curve (see the sketch after this list).
4. Accuracy should improve as you get closer to today. Anything else is evidence of fraud. So if the adjustments become larger as we get to today, it’s fraud.
5. I’m not sure what you do about the UHI. It seems to me that the adjustments are large positive adjustments. At the same time the alarmists are saying it’s a small effect. It should be a negative effect. It’s not consistent.
6. Surveys of sites are the only way of deciding if they are fit for purpose.
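A minimal sketch of the infilling in point 3, assuming a run of monthly means starting in January, with None marking a missing month:

def seasonal_fill(monthly):
    # monthly: consecutive monthly means starting in January; None = missing.
    # Assumes every calendar month has at least one recorded value.
    clim = []
    for m in range(12):
        vals = [v for i, v in enumerate(monthly) if i % 12 == m and v is not None]
        clim.append(sum(vals) / len(vals))
    return [clim[i % 12] if v is None else v for i, v in enumerate(monthly)]

year1 = [10.1, 11.8, 14.6, 18.2, 22.0, 25.1, 27.3, 26.8, 23.5, 19.0, 14.2, 11.0]
year2 = [10.4, None, 14.9, 18.0, 21.7, 25.4, 27.0, 27.1, 23.2, 18.8, 13.9, 10.7]
# the missing February in year 2 is filled with the average of all other Februaries
print(seasonal_fill(year1 + year2))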
There is a research paper that can be easily done in the US. With Anthony’s data, you can show that the rural/urban decisions of NASA, based on lights etc., are wrong.
Nick

December 8, 2009 3:39 am

Willis – hats off to you – an amazing post and superb clear analysis of the data.
Have blogged/linked/twittered and sent to the Opposition Leader here in the UK.

Fair Go
December 8, 2009 3:40 am

If we don’t have documentation for each and every adjustment made to homogenize the data, such homogenization must be considered suspect. Similarly that audit trail must be externally audited before the assumption of being suspect is changed! If the UEA/CRU don’t have that information they simply haven’t been doing their job and one has to wonder what they’ve been doing to earn their grants. The process of adjusting has the smell of “synthesising data” as we used to call making up results for high school science pracs.
Repetitive and unexciting work? Perhaps, but that’s what lies at the heart of much data collection in experimental science.

KimW
December 8, 2009 3:42 am

A clear and well reasoned argument that shows what a proper analysis should be. How did the CRU get so far from what is so clearly outlined here? My thanks.

P Gosselin
December 8, 2009 3:46 am

Dave UK:
– Podesta:
Leadership role my fanny!
This is what we call authoritarianism based on Stalinist science.
Next it’s:
1. water (a water crisis is currently being manufactured)
2. food (meat, sugar and fat)
3. information
Is there a Paul Revere left in USA?

Mike in London
December 8, 2009 3:50 am

Thank you, Mr Eschenbach, for a superb article – a tribute to genuine investigation and perseverance. It is the sort of thing that the “Quality” papers in the UK would once have done themselves in the “Special Investigations” they so love to pursue, but for the subject now being utterly inimical to their current editorial positions. The lack of journalistic investigation of Climategate and the whole AGW movement is rapidly emerging as the 2nd most scandalous aspect (after the suborning of the scientific method itself) of the whole global warming farrago.
I am not very hopeful that the global warming juggernaut, even after Climategate, can be prevented from causing at least temporary economic damage to the planet, rich and poor nations alike. But resistance is not futile; the true position is gradually being established with work such as yours, and I am quite sure that the future at least will universally recognise the voices of sanity such as yours and McIntyre’s that were raised at this time of Global Warming hysteria; just as the MSM will look back on it as their time of greatest shame.

Jim Greig
December 8, 2009 3:52 am

I know what we can do with Guantanamo. We can fill it with all the AGW disciples who are terrorizing the Earth with their fraudulent data!

hillbilly76
December 8, 2009 3:54 am

As a 7th generation Tasmanian I’m proud that many scientists have taken up the work of the Tasmania-based late great John L Daly. I recommend his “What’s Wrong With the Surface Record?” at http://www.john-daly.com/ges/surftmp/surftemp.htm as a great resource. Read about Low Head ground station and how the scientists ignored the changed circumstances there even when told. Also see http://www.john-daly.com/cru-index.htm for his email exchanges (not leaked) with East Anglia CRU head Phil Jones after John had caught them out in an obvious mistake. It is a great insight into the mindset of those scientists and very relevant to the current Climategate scandal. No wonder that on hearing of John’s death Jones callously told Mike Mann that “in an odd way, that’s cheering news”!

boballab
December 8, 2009 4:04 am

Is it just me, or does the raw data in Fig 2 and Fig 6 look like it corresponds to the GCMs’ non-CO2-forced blue section? You know, what the temps would be if there was no CO2 forcing. It just looks like they correlate, to my Mk1 eyeball, while scrolling back and forth. Willis, have you tried to superimpose the raw data on the model non-CO2-forced temp graph? It would be funny if they did match, because then their own model would show that the only man-made warming is adjustments to the raw data.

December 8, 2009 4:05 am

“The earth has generally been warming since the Little Ice Age, around 1650. There is general agreement that the earth has warmed since then.”
Especially steady has been Copenhagen: http://i45.tinypic.com/ele3bp.jpg

Richard
December 8, 2009 4:06 am

I have been a lurker on this website for some time and continue to be impressed with the quality of analyses presented here.
Keep up the great work!!

Turboblocke
December 8, 2009 4:11 am

Satellite measurements are calibrated against SI standard measures or by in situ temperature measurements.

Campbell Oliver
December 8, 2009 4:12 am

I’m very interested in following the discussions here. But there’s one thing I don’t understand, and I know that this is going to sound really seriously dumb, but I do want to know. It’s this. When people talk about an “anomaly”, I understand that this means a deviation from some value (probably some average, or a single reference value). But it seems that this value is never mentioned. So, my question is this: is there some standard definition of the value upon which temperature (and other) anomalies are based? If so, what is it? If not, how do people know what the actual temperature for some point is, given the value for the anomaly at that point?
Many thanks for any pointers to some 101 (or even a kid’s pre-101).
PS – I’ve tried googling “temperature anomaly definition” etc., with no luck.
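(For reference, the usual convention in sketch form: each dataset picks a fixed baseline period, for instance 1961-1990 for CRU and 1951-1980 for GISS, and reports every value as the departure from the average over that baseline. A hypothetical example:)

def anomaly(annual_temps, year, base=(1961, 1990)):
    # annual_temps: dict mapping year -> mean temperature for one station
    baseline = [t for y, t in annual_temps.items() if base[0] <= y <= base[1]]
    return annual_temps[year] - sum(baseline) / len(baseline)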

December 8, 2009 4:14 am

Further to my last post,
This item was aired on UK TV 9 months ago; yesterday’s statement by the EPA reminded me of it.
Obama and Podesta are using the EPA as a tool to bypass the democratic process in America.
Judging by what the EPA had to say things are going to plan for Obama.
Suppression of debate and oppression of democracy are what is being used.
People should fear that more than MGW.

December 8, 2009 4:15 am

.
And Darwin is a good location to see what is reeaally happening to the climate. Darwin was and is hundreds of miles from anywhere, and so its temperatures will be unaffected by urban growth (as long as someone did not build a barbie next to the station). Darwin is as ‘pure’ in climate terms as you are going to get.
Can everyone forward this item onto your local parliamentarians and media. This IS important.
.

KeithGuy
December 8, 2009 4:18 am

Richard (03:16:38)
“So does the satellite data unwittingly correspond to the ground data because it is calibrated with them?”
I believe that satellite temperature data is calibrated by comparison with independent and concurrent thermometer data, but it is interesting that since we now have more confidence in our global temperature metrics (except GISS…?) global warming seems to have stopped, instead the manipulation is being applied retrospectively in an attempt to reduce early 20th Century temperatures.
As Churchill once said: “It is all right to rat, but you can’t re-rat.”

Arnost
December 8, 2009 4:19 am

David Archibald (02:46:51);
Geoff Sherrington (02:14:27);
Geoff Sharp (03:25:34);
And Willis;
Please be aware that there is no continuous station in Darwin from 1880 to 1962 (as per Sherro’s post) or 1991 as per GISS (station zero).
The station of record was Darwin Post Office from 1882 until it suffered a direct hit from a Japanese bomb during the first raid on 19 February 1942. (The postmaster Hurtle Bald, his wife and daughter and 7 post office staff members were all killed instantly, and the post office itself was utterly destroyed). The station of record from then was Darwin Airport (which had about a year’s overlap with the Post Office at that time).
So (as per Willis’ graph above) Station Zero is in itself a splice of at least two stations (The Post Office and presumably the Airport – but I have no explanation of why it ends in 1991…)
Warwick Hughes did a post up on this about a month ago: http://www.warwickhughes.com/blog/?p=302#comments
Where there is a photograph of the Stevenson Screen at the PO from the 1880’s…
And I did one at Weatherzone at the same time:
http://forum.weatherzone.com.au/ubbthreads.php?ubb=showflat&Number=795794#Post795794
Where I have links to BoM data for the stations plus a link to some interesting stuff John Daly did a while back on Darwin.
cheers
Arnost

December 8, 2009 4:25 am

Jack Green (01:56:09)
Thanks for pointing up the importance of the SABER observations.
I’ve gone into some detail on the potential implications here:
http://climaterealists.com/attachments/database/The%20Missing%20Climate%20Link.pdf
mostly in the second half.
It’s been somewhat overshadowed by the Climategate publicity but I’m hoping it will be noted more widely in due course.

wobble
December 8, 2009 4:28 am

Wow! It’s bad enough to use highly aggressive step function adjustments when “correcting” for station moves. But these continuous adjustments are inexcusable.

Turboblocke
December 8, 2009 4:33 am

“K … but given the scarcity of stations in Australia, I wondered how they would find five “neighboring stations” in 1941 …
So I looked it up. The nearest station that covers the year 1941 is 500 km away from Darwin. Not only is it 500 km away, it is the only station within 750 km of Darwin that covers the 1941 time period. (It’s also a pub, Daly Waters Pub to be exact, but hey, it’s Australia, good on ya.) So there simply aren’t five stations to make a “reference series” out of to check the 1936-1941 drop at Darwin.”
Apart from “MIDDLE POINT” http://www.bom.gov.au/climate/averages/tables/cw_014090.shtml
and Darwin Post Office http://www.bom.gov.au/climate/averages/tables/cw_014016.shtml
and CAPE DON http://www.bom.gov.au/climate/averages/tables/cw_014008.shtml to name but 3.
Ho hum.
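For anyone wanting to check such distances themselves, a standard great-circle sketch (the coordinates below are approximate and purely illustrative):

import math

def distance_km(lat1, lon1, lat2, lon2):
    # haversine great-circle distance in kilometres
    R = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

darwin = (-12.4, 130.9)                    # approximate
print(distance_km(*darwin, -11.3, 131.8))  # Cape Don: roughly 150 km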

December 8, 2009 4:33 am

“Unless something like this is done, we’ll see the Met spending three years using the same code and coming up with almost exactly the same dataset, and we’ll have lost.”
Agreed, more importantly, it will also be the science that is lost and we will be plunged into a new dark age.
No matter how many times that the Met office run the same data through the same code, they will get the same result. That is why the raw data and the code need to be independently analysed.

wobble
December 8, 2009 4:34 am

Mike in London (03:50:58) :
“just as the MSM will look back on it as their time of greatest shame.”
I think it’s time to tell the MSM and other scientists that it’s time for them to get on the right side of this now.
Climategate has given them the excuse to claim that they were duped, but if they don’t switch sides now then they are part of the duping and will be held responsible for their shame.

Arthur Glass
December 8, 2009 4:35 am

” This has just got to get publicity in the msm somehow to spark an investigation and rework of the temperature record.”
There is slim hope that your average journalista or talking head has the attention span to follow such a beautifully crafted and lucid argument.
Want to try it on Boxer and Waxman?

KeithGuy
December 8, 2009 4:39 am

So the Australian raw data shows no warming and we already know that the USA was warmer in the 1930s. If this continues we’ll be left with one thermometer, maintained by a peasant farmer somewhere in Siberia, that shows warming, and the whole of the 20th Century temperature reconstructions will be based on this…
… but haven’t I heard that one already? Remember Yamal?

Hugh
December 8, 2009 4:42 am

A very clear, well written article. Congratulations and keep up the good work! But still this whole thing is driving me crazy. When will the MSM wake up?
Hugh

maz2
December 8, 2009 4:44 am

Gore’s immolation vs Polar bear convicts.
…-
“*The cold snap is expected to continue all week with daytime highs ranging from -12 C on Thursday to a bitterly cold weekend that will see the high drop to -25 C. The normal high for this time of year is -6 C.”
…-
“UEA asks for local support over ‘Climategate’
Bosses at the University of East Anglia insisted last night the under- fire institution could ride out the storm of the climategate scandal – but called on the support of people in Norfolk and Norwich to help them through the most damaging row in its history.”
http://www.freerepublic.com/focus/f-news/2402716/posts
…-
“*Alberta deep freeze saps power
Record high energy usage in province as mercury plummets”
http://cnews.canoe.ca/CNEWS/Canada/2009/12/08/12077286-sun.html
…-
“Hudson Bay jail upgraded for wayward polar bears
Cnews ^ | December 8, 2009 | Canadian Press
WINNIPEG — Manitoba is spending more money to upgrade a polar bear jail in Churchill. Conservation Minister Bill Blaikie says the province is spending $105,000 to improve the jail’s walls and main entrance.
The compound is used to house wayward polar bears that get too close to the town or return to the community after being scared away.”

JP
December 8, 2009 4:46 am

This subject was covered in a CA thread some years ago. I believe it came up when someone discovered the TOBS adjustment that NOAA began using. The TOBS adjusted the 1930s down, but the 1990s up. Someone calculated that TOBS accounted for 25-30% of the rise in global temps.
The question about adjusting local station data to adjacent data also came up, especially concerning grid cells. If San Fran and Palm Beach are in the same grid cell, how does one extrapolate and average? One station is affected by maritime polar air masses; the other by continental tropical. If the environment is not homogeneous, how can one extrapolate at all? One would be mixing apples and oranges. In California this wouldn’t be much of a problem (there are plenty of adjacent stations in proximity to San Fran and Palm Beach); however, in places like South America (where Steve McIntyre found GISS only uses 6 reporting stations) or Africa this problem is very real. If one must apply such drastic adjustments to the raw data, why even use raw data at all? Why not just say “this is what is really going on: the weather observers these last 6 decades were either drunk or incompetent.”
The answer to all of this is simple: rely on RSS/UAH data. Yes, the records go back only to 1979, and there are geographical limits. But the idea that we can find a global climate signal through thermometer records, and that we can measure that signal to the tenth or hundredth of a degree, is absurd. Thermometers only measure microsite data at a single point in time. Supposedly the thermometers are measuring ambient air temps (which they do not; I don’t think sling psychrometers are used anymore). And supposedly the temperatures are measured over green grass, away from the shade, and away from things like patios, parking lots, and buildings.
Surface temps traditionally have the single purpose of assisting weather forecasters in making up mesoscale and macroscale forecasts. They can provide general trends in tracking broad climate changes. But surface temps do not have the precision that people like Jones, Hansen et al. say they do. If they did, how come the data must be sliced, diced, and obliterated by our climate experts?

Editor
December 8, 2009 4:47 am

When you say “throw away all of the inconveniently colder data prior to 1941”, do you mean “warmer”?

Chris Fountain
December 8, 2009 4:47 am

I just have a question about the dates covered in this analysis. How was it that there was a thermometer at the Darwin Airport about 20 years before the Wright brothers’ flight? Were we Australians so prescient that we built one in anticipation?

David Holliday
December 8, 2009 4:48 am

It amazes me that this is still being debated. The warmist argument has already been disproven by the recent climate behavior of the Earth. CO2 has risen and temperature has not. How much simpler can it get?
Also, I don’t know why studies like these that show CO2 levels were 4 to 5 times higher during the Medieval Warm Period than they are now don’t get more press. It’s obviously not possible to attribute increased CO2 levels in that time period to human activity. And equally obviously, the temperature eventually came down, so no runaway warming was caused by CO2.
The correlation issue is the Achilles heel of the warmist argument. Regardless of the current temperature, the correlation showing CO2 as a driver of warming doesn’t hold up.
On an aside, I keep hearing the argument that 8 of the last 10 years are the warmest on record but I also remember an article in which it said that NASA had revised their data to show 1933 as the warmest on record. Considering just the last 100 years what are the warmest years on record?

Robin Kool
December 8, 2009 4:48 am

A bit OT:
Climategate is convincing for who follows it – who reads the articles here.
But when I explain it to friends, they keep coming back to one thought that makes it hard for them to get their mind around it: “How could the scientific community let this happen?”
They know that many politicians are ignorant and corruptible and that many activists are ignorant and extreme. And they know that most people are ignorant and naive.
But why didn’t the scientists speak up?
And I still find that hard to explain.
I tell my friends that these scientists of CRU and NASA don’t publish the raw measurements nor how they adjust them. And they are shocked and ask: “How can that be true. The whole scientific community would demand to see them.”
I tell my friends that the paleoclimatologists who come up with the hockey sticks, on which rests the whole case for the uniqueness of the warming in the last decades of the 20th century, have hijacked the peer review process.
They ask me why the scientists who were pushed out didn’t protest and if they did, why didn’t the scientific community stand up and put things right?
I tell them that many scientific organizations are controlled by small groups of activists who claim there is scientific consensus over catastrophic AGW. And again they ask me how that could happen. Why don’t the thousands of scientists who are members get rid of them?
I think that if we want the public to understand Climategate, we need to be able to answer these questions satisfactorily.

imapopulist
December 8, 2009 4:49 am

Are there any Patrick Henrys left in America?

KevinUK
December 8, 2009 4:49 am

Ken Hall (00:33:54) :
“This is exactly the kind of public, open examination of the raw and adjusted data that needs to be done for ALL stations globally to establish, once and for all, IF the entire earth is warming un-naturally. (and I have yet to see any definitive proof that the current warming is un-natural).”
Ken,
Watch this space!! Someone 🙂 is very close to doing exactly that!
Next step after that: what happens if we do some different, far more scientifically justifiable (so realistic) alternative homogeneity adjustments? Does the blade of the ‘hockey stick’ go away?
If so what on earth are all those poor GCMs going to use when they are ‘spun up’ using the gridded datasets that no longer have a pre-determined warming trend in them? Will the ‘flux adjustments’ have to make a re-appearance in the AOCGCMs?
What will poor Tom Wigley and Sarah Raper do when MAGICC doesn’t have any ‘unprecedented warming’ model outputs to fit itself to?
KevinUK

ventana
December 8, 2009 4:50 am

“Inconveniently cooling data prior to 1941”
Shouldn’t that be warming?

December 8, 2009 4:50 am

Willis Eschenbach, is this the only site you examined? Or did you examine many before you found one that appears to have been blatantly rigged?
I just wonder because of all the thousands of sites available, it would seem unlikely that the first one examined in this detail would be a ‘rigged’ one, IF the record was generally sound. If the record was generally “fixed around the theory” then most , if not all, of them will be dodgy.
If this is the only one you examined, then you have a 100% record of dodgy data manipulation for every site examined.

December 8, 2009 4:50 am

Lovely to see that the figure in the IPCC Fourth Assessment Report starts in 1900, conveniently just after the previous warm period data finishes.

DocMartyn
December 8, 2009 4:52 am

I had a look at Alice Springs, lovely dataset, daily records with only a few days blank; flat as a pancake. Projection between 0.3-0.6 degrees per decade.

rukidding
December 8, 2009 4:55 am

For those that are interested here is the temperature graphs for Darwin from the 1880 to today.
As has been mentioned elsewhere, Darwin was bombed in Feb 1942, which destroyed the post office, so my guess would be that that was when record keeping moved to the airport, where it would appear it is today.
First graph 1880-1942
http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_display_type=dataGraph&p_stn_num=14016&p_nccObsCode=36&p_month=13
Second graph 1942-2009
http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_display_type=dataGraph&p_stn_num=14015&p_nccObsCode=36&p_month=13
Hope the linking works

ShowsOn
December 8, 2009 4:57 am

You haven’t explained why you don’t think the homogenised figures are accurate. Also, if you use a completely different data set from the Australian Bureau of Meteorology, a warming trend at Darwin since the late 1970s is clearly evident:
http://www.bom.gov.au/cgi-bin/climate/hqsites/site_data.cgi?variable=meanT&area=nt&station=014015&dtype=anom&period=annual&ave_yr=T

Knut Witberg
December 8, 2009 5:01 am

The scale on the left side is not the same as on the right side. Error?

Gregory
December 8, 2009 5:05 am

The Darwin Zero series shown above has large and relentless upward adjustments starting in 1930. But those are today’s adjustments, as they stand according to current practice. The entire station history could be readjusted some entirely different way if, for example, a historic fact like a previously forgotten station move were discovered tomorrow.
Anybody who has been awake recently knows that global cooling was the big scare in the 1970s, although that one didn’t catch on with the public nearly so well. Climate scientists then, as now, had to be homogenizing station data for all the reasons listed in this article. It would be fascinating to see if their adjustments then tended to confirm the cooling they expected to see, just as their adjustments now do the opposite. Is the GHCN data stored in such a way that one could see what the homogenized Darwin Zero series looked like in 1975, with the adjustments as they stood at that time?

Chad Woodburn
December 8, 2009 5:07 am

“But the data is still good.” How do they know since CRU kept the raw data secret all this time? Every scientist who says, “but the data is still good” has to either have seen the data and evaluated it, or they have to have complete faith in the data-keepers. But we know they have NOT seen or evaluated the data that CRU has kept hidden. And as for faith in Jones et al.? The emails show that only gullible people could continue to have faith in them.
An answer like “But the data is still good” is pure propaganda; it is not even close to being science.

Turning Tide
December 8, 2009 5:11 am

A proposal
Would it be possible for the knowledgeable people here to publish a “recipe” telling ordinary folk how to do this sort of analysis, comparing the raw GHCN data with the “cooked” version?
I’m sure there’d be enough volunteers lurking on the blogs who would be willing to carry out this analysis, then we’d be able to provide a definitive answer to the point that Willis makes at the end of this article: “This may be an isolated incident, we don’t know.”

RC Saumarez
December 8, 2009 5:13 am

I just looked at the example the Met Office has released. The provenance is the CRU!
I’m not surprised the Met Office is going to do a 3-year re-investigation of raw temperature records. They’ve been taken for a ride as well as the rest of us.

December 8, 2009 5:14 am

I see Richard Black is back in the groove
http://news.bbc.co.uk/1/hi/sci/tech/8400905.stm
This decade ‘warmest on record’ …

kdk33
December 8, 2009 5:15 am

This has probably been asked…
Does anyone foresee an academic (or three or four) reviewing the raw data – objectively! – or will this be left to volunteers working pro bono? (Or maybe Willis gets paid?)
Point is, is there any way to organize activities and crank through the data a bit faster? (I recognize you don’t find people who can do this loitering at the TEXACO.)
Just a thought.

KeithGuy
December 8, 2009 5:16 am

Knut Witberg (05:01:07) :
“The scale on the left side is not the same as on the right side. Error?”
I think the scale on the right refers to the adjustments.

rukidding
December 8, 2009 5:22 am

Very interesting, ShowsOn, but that graph shows temperatures for times before 1942, when the temperature was being recorded at station 14016, the post office.
So how do you reconcile your graph, which shows a steady rise for station 14015, with the graph I linked above, which shows a reasonably flat record over the same period, when they both come from the BOM?

December 8, 2009 5:24 am

The shape of the difference function you are getting in figure 8 looks remarkably similar to the artificial fudge factor in the briffa_Sep98_d.pro file from the released documents. The artificial correction has puzzled me because of the swing down before it starts to ramp up. I cannot see why you would apply a function of this shape at all.

jgm2
December 8, 2009 5:25 am

The black line as presented in the IPCC report has been deliberately started at a low point (1910) in the actual raw data, as opposed to the real start of the data set which, from Figure 2, appears to be 1880. So they’ve deliberately missed out one whole degree centigrade of cooling from 1880 to 1910, just so they (IPCC) can print a graph showing a 0.5 deg C rise from 1910 to 2000 and a ‘shock’ 1 deg C increase from 1950-2000.
Oh come on chaps.
And I’m new to this, so can somebody explain what the blue and red overlays on Fig. 9.12 in the IPCC data are? Maximum and minimum of all the data sets (3? 30?) in the input? Maximum and minimum temperature predictions from various models?
In either case why is the actual black line outside the shaded zone for a few years either side of 1950?

rukidding
December 8, 2009 5:33 am

This may be of some interest: it is a temperature graph for a small place called Menindee, which is in the far west of New South Wales (Australia) and I would think would not suffer from any UHI effect.
http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_display_type=dataGraph&p_stn_num=47019&p_nccObsCode=36&p_month=13
Notice a reasonably flat graph. And no maximum temp for 1998; in fact, if you check every capital city airport temperature in Australia for 1998 (the hottest year ever), only Darwin recorded a record maximum.
So how come Australia missed out, or is it just because we are down under that we are forgotten about?

December 8, 2009 5:34 am

5:29 a.m. PST … The temp right here, right now, in Susanville in northern CA is -10F or -23C, yet the national temp map with isobars and the pretty blue colors that is regularly updated is showing us toasty warm at between +20F and +25F. On the way in to work this morning, my driver’s Pick-up registered a -14F. What the???

KeithGuy
December 8, 2009 5:35 am

KeithGuy (05:16:08) :
Knut Witberg (05:01:07) :
“The scale on the left side is not the same as on the right side. Error?”
I think the scale on the right refers to the adjustments.
I see what you mean (fig 7 and 8). It does give an exaggerated impression of the adjustments.
I now see that the BBC are making a big play out of the recently released data that shows that this year is the 5th hottest on record and that the last decade is the hottest ever.
I don’t doubt it, but what’s important is that the trend now shows cooling.

imapopulist
December 8, 2009 5:37 am

This is truly a smoking gun. There is no amount of rationalization that can be used to justify the changes made to the raw data sets. Anyone who is objective will be able to see this.
Now someone needs to take this excellent work and summarize it to a point where the average person can easily discern how temperature data is being manipulated.
I always suspected the most manipulation would take place in the remote corners of the Earth, where unscrupulous scientists thought they could get away with it.

Phil A
December 8, 2009 5:37 am

So … how many temperature stations will people have to do this for before they admit just how large the problem is, and just how badly they have betrayed science? Not to mention betraying all the real scientists who have used this kind of adjusted data in good faith and will soon realise just how many years of their lives have been wasted in analysing meaningless numbers.

Olen
December 8, 2009 5:39 am

Global warming looks like a fraud when data are misused by scientists, and when politicians want to use it for massive tax and control.

J.Hansford
December 8, 2009 5:47 am

Speaking of GISS and James Hansen… Here is an interview with him on the Australian Broadcasting Corporation (ABC). The program is Lateline and the interviewer is Tony Jones (AGW cheerleader and rabid leftist).
http://www.abc.net.au/lateline/content/2008/s2764523.htm

December 8, 2009 5:54 am

Just a note – Steve Mc on CA is pointing out that he’s had several old posts along the same lines.
Perhaps a bit of blog archeology would be useful at this stage to see if the data massaging is the same?

Ripper
December 8, 2009 5:54 am

Yep, ShowsOn is quoting the new climate site, which has been adjusted.
Marble Bar, Kalgoorlie, Meekatharra, and Southern Cross have all lost 1.5 degrees C in the 1920-1940 period, resulting in most of Australia’s warming being infilled across 1/3 of the most sparsely populated area of the continent.
These are supposedly the same station:
http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_display_type=dataGraph&p_stn_num=012074&p_nccObsCode=36&p_month=13
http://reg.bom.gov.au/cgi-bin/climate/hqsites/site_data.cgi?variable=maxT&area=wa&station=012074&period=annual

wws
December 8, 2009 5:55 am

Robin Kool wrote:
“I tell them that many scientific organizations are controlled by small groups of activists who claim there is scientific consensus over catastrophic AGW. And again they ask me how that could happen. Why don’t the thousands of scientists who are members get rid of them?
I think that if we want the public to understand Climategate, we need to be able to answer these questions satisfactorily.”
A small group of activists have always been able to take control of any situation where the majority is apathetic and splintered – study the Bolsheviks in 1917, who took over a country even though they had barely 10% support. And in these organizations you question, even though many scientists are members, they are controlled by only a handful of people at the top. Once an organization is corrupted (as the APC is currently) the only alternative is probably for those who disagree to quit and start their own organization – and this could take years if not decades before it achieves equal recognition.
Furthermore, the climatologists weren’t alone – they had government and the media on their side, the two most powerful weapons that scientists are afraid of. Annoy government and lose all your funding; annoy the media, get a negative story and lose your chance at tenure and a career. That trifecta of power was unassailable. Then, add in the fact that most scientists are specialists, not generalists, and thus as long as the controversy was outside their own little spheres of speciality, they felt that they needed to ignore it. How much trouble could it cause for them, after all?
A lot more than they thought it could, it’s turning out.

December 8, 2009 5:56 am

Outstanding work, clearly explained and illustrated! I can only imagine the self-delusion and groupthink that went into all these “adjustments” at the GHCN, all earnestly applied in service to science. Those early warming revelations in the 1990’s were heady times, when almost any researcher could derive important new observations of the data. I’m sure it all seemed so very right.
Like phrenology.
But upon inspection by a disinterested outsider who isn’t caught up in the machine, it doesn’t even pass the sniff test.
Our culture has only begun to wake from this delusion, with many still falling more tightly into its grip. Not even George Orwell could have anticipated the EPA’s move to regulate carbon dioxide as a pollutant. It will take literally thousands of revelations like this one to reverse the tide.

boballab
December 8, 2009 5:56 am

Went to the Met site, and they admit in their FAQ sheet that it isn’t 100% Grade A raw data. Then they try to spin why it isn’t the raw data, blaming the dog-ate-it excuse from the 1980s, but never fear, we know it’s good because it’s from CRU, GISS and NCDC and “peer reviewed”!
My god, that’s like hauling out a counterfeit $20 bill to prove that you’re not a counterfeiter.
Sorry, Met Office, you need to show 100% raw data: no adjustments, no peer review, no more appeals to authority. You also need to publish the code used to make your adjustments. With what you have published you have not shown there is nothing there; matter of fact, you showed the opposite when you admitted the data is gone. No data to back up your claim, then it gets trashed. I give the Met Office a B for effort, a C for propaganda effect, since most people will not look at nor understand what the FAQ says, and a big fat F for proving there is man-made global warming. At best, with good code and the raw data, you could have proved warming, but not causation to man.

ozspeaksup
December 8, 2009 6:02 am

willis, a HUUUGE hug! last week I found a BOM page showing 3 graphs of OLD datasets, and they were remarkably level over a long timeframe, 20s onwards. yet when I went back to my history I found it’s not there… well, not the same page, but…
I did get a page with data and some files my pc cannot translate?
I posted the link elsewhere today, to ask for help, so I am going to add the link and let you see what you can make of it all.
BOM advises they are updating and removing… gee, how very convenient!
ftp://ftp2.bom.gov.au/anon/home/bmrc/perm/climate/temperature/annual/
the charts I saw before had Kalgoorlie in WA listed, and it had gotten cooler from a very high point in 1930/31… and Longreach in Qld was another.
again, thanks heaps for your effort, I will be sharing this page around Aus and o/s!
ps: the missing info in the 30s/40s would be Depression years and wartime.

December 8, 2009 6:07 am

I added some more to your excellent analysis
http://strata-sphere.com/blog/index.php/archives/11787
It is quite clear why GISS needs to fudge the data – as I explain. Anthony should put out a challenge for people to do more of this on the GISS data, using the same formats, etc.

mathman
December 8, 2009 6:10 am

Now we know.
I was certain there had to be a good reason for losing the raw data, as has been claimed by the Jones group.
The good reason for having no raw data to use to either validate or invalidate the various IPCC reports is in the graphs shown in this thread.
The so-called homogenization is in fact blatant fraud.
One starts off with the conclusion: AGW must be “scientifically” proved in order to implement worldwide Carbon taxes. Such AGW is not found in the raw data. Darwin Station 0 is an instance of such raw data. How does one solve such an evident problem? One manipulates the raw data in order to arrive at the pre-determined conclusion.
This is an instance of one picture being worth a thousand words. Alas that my browser does not allow me to superpose the various graphs. Would it be possible to present all of the graphs to a uniform scale, so that superposition would allow a better compare/contrast?
This could best be done by the author, with the use of the original tables of information. The alternative is for the author to provide us with the tables of data used to generate the graphs, for us to use with our own graphing programs.

December 8, 2009 6:14 am

I’m no climate scientist, but the ones making the news lately aren’t either. The homogenized data, skimpy tree-ring data, and stopping the release of raw data were just a part of their bad science. They are simply criminals who should be prosecuted.
I’m no climate scientist, but I did spend last night reading WUWT.

Oslo
December 8, 2009 6:15 am

Great work!
Seems we are slowly getting there.
And your own line sums it up nicely: “when those guys “adjust”, they don’t mess around”!

John S
December 8, 2009 6:18 am

I never knew that ACORN was in the climate temperature business.

infimum
December 8, 2009 6:23 am

Akasofu’s name is spelled wrong.
[Thanks, fixed. ~dbs, mod.]

Spenc Canada
December 8, 2009 6:23 am
Bunyip
December 8, 2009 6:24 am

Give us credit for honesty down here in the Great South Land. When our blokes ginger up the numbers they explain how they do it — peer-reviewed, of course.
Check this out, for example – the liberties they took make my brain ache:
http://www.giub.unibe.ch/~dmarta/publications.dir/Della-Marta2004.pdf
This little beauty of deduction explains all the lurks while also suggesting that the Australian records contain even more egregious examples of data goosing. In addition to the six degrees of difference (latitude and longitude) and 300 metres (plus or minus) elevation deemed acceptable in the selection of “appropriate” substitute stations, the authors explain that they sometimes go even further. Quote:
“…these limits were relaxed to include stations within eight degrees of latitude and longitude and 500 metre altitude difference…” (bottom of pg. 77)
Oy!
I do wish someone smarter than I could take a good, hard look at the above document.

Andrew
December 8, 2009 6:29 am

“Your comments are fascinating, but this is a science blog.”
If the topic is AGW related, it has very little to do with science. If we keep to your terms, we wouldn’t be able to talk about any of these issues. No science here as far as I can tell.

December 8, 2009 6:30 am

“(Interestingly, before 1984, Orwell’s 1984 was required reading for all year 11/12(?) students in Victoria… now I can’t find anybody under the age of 40 who has read it)”
Even though I am 40, I was never required to read it, although I did read it for the first time in 2004 and it scared the [snip] out of me too!
I too have found few people who have read it. In fact I do not know of a single ‘X-factor, celebrity dance, jungle got talent on ice’ reality TV viewer that has read it. I wonder if there is a correlation there? Hmmmmmmmmmmmm.

Anna
December 8, 2009 6:32 am

Thanks Willis, keep up the good work!
I made a graph of the annual mean temperatures of Sweden, just like Wibjorn Karlen did. The result sure doesn’t look anything like the graph for the Nordic area in the IPCC report!
Try it yourselves : http://www.smhi.se/hfa_coord/nordklim/index.php?page=dataset
This needs to be done for all raw data there is, and where there are strange differences we need to demand a reasonable explanation!

December 8, 2009 6:32 am

Willis,
“While there are valid reasons for adjustments as you point out, adjusting a station from a 0.7C per century cooling to a 6 C per century warming is a complete and utter fabrication. We don’t have the whole story yet, but we do know that Darwin hasn’t warmed at 6 C per century. My only conclusion is that someone had their thumb on the scales. It’s not too early for that conclusion … it’s too late.”
Apologies – I read the per-century trend as I thought I saw it, +0.6C/100 yrs rather than +6C/100 yrs! I am in total agreement with you. I also realise you are more than well aware of everything I posted, but I thought I would throw it in anyway in case the actual nature of the changes was of general interest to those who (like myself until recently) were totally unaware of this fiddling with data. I am actually outraged over this and love the work you did/are doing. Can’t wait to see more!

Paul
December 8, 2009 6:41 am

Perhaps there is nothing dishonest or silly here, but when you won’t release data or method details, what are people expected to think? Until the raw data and the method of “correcting” it are made fully public, as scientific method requires, the correct thing to do is to treat this data as junk. It can’t be reproduced, so it isn’t science.

Pamela Gray
December 8, 2009 6:42 am

My Democratic representatives aren’t smart enough to read this stuff for themselves (along with one or two Repubs believe it or not). Every time I have sent a letter I get back a nearly identical talking points response from the lot of them. I never thought I would ever be reduced to just wanting to throw them all out, or even not vote at all. This is truly making my left leaning, registered Democrat, AND patriotic Irish blood boil!

Basil
Editor
December 8, 2009 6:44 am

w,
Fascinating bit of work. Maybe you could comment, either here or in an update to the post above, on the following, taken from the Easterling and Peterson paper you quote from:
A great deal of effort went into the homogeneity adjustments. Yet the effects of the homogeneity adjustments on global average temperature trends are minor (Easterling and Peterson 1995b).
Do they ever put a figure on to just how “minor” this effect is on the global average temperature trends? Are they referring to this?
http://www.ncdc.noaa.gov/img/climate/research/ushcn/ts.ushcn_anom25_diffs_urb-raw_pg.gif
Or are they referring to something else?
However, on scales of half a continent or smaller, the homogeneity adjustments can have an impact. On an individual time series, the effects of the adjustments can be enormous.

Duh. I think you’ve demonstrated that very well.
These adjustments are the best we could do given the paucity of historical station history metadata on a global scale.

Well, maybe we need a global average that is without the adjustments.
But using an approach based on a reference series created from surrounding stations means that the adjusted station’s data is more indicative of regional climate change and less representative of local microclimatic change than an individual station not needing adjustments.

Important admission, and qualification.
Therefore, the best use for homogeneity-adjusted data is regional analyses of long-term climate trends (Easterling et al. 1996b). Though the homogeneity-adjusted data are more reliable for long-term trend analysis, the original data are also available in GHCN and may be preferred for most other uses given the higher density of the network.

I’m not persuaded about the usefulness, even for regional analysis. I think any use must look at before-and-after comparisons, like you’ve done here, before assuming anything about the usefulness of the adjustments.

December 8, 2009 6:46 am

“Isn’t this similar to producing a marked-up copy of the dead-sea scrolls, with the corrections in felt-tipped pen, and missing bits added-in in biro, and then calling it ‘the definitive data-set’ ?”
This is a perfect analogy.
There is NO WAY that one can move a weather station 20 miles from a valley or shore location to the side of a mountain and then continue to validate the temperature record just by making an arbitrary correction. The weather patterns at those two locations will be entirely different, and the temperature record will not simply follow the same pattern offset to a different average temperature.
The fact is that the temperature record is sooooo messed up that there is no way to determine a constant increase in temperature from the raw data, so it appears that they have made the data fit the science and hidden the fraud in the way they use the ‘necessary’ adjustments. All the people involved in the fraud agree with the outcome; they are all insiders in the man-made climate change religion, so they all peer-review each other’s data and methodology and sign it off as sound. After all, they all get the same amount of warming. It is entirely conclusion-led science, AKA propaganda!
Another analogy is that the climatologists are saying: OK, the car has a scratch on the door and a dent in the boot, the tyres may be a little bald depending on how you define bald, but the car is basically still sound and entirely roadworthy. We are saying: SHOW ME THE ENGINE! We got a glimpse under the bonnet and did not see one.
This article is someone sneaking under the car to get a peek into the engine compartment and seeing no engine there – yet they still want to force us to buy the car!

Kristian
December 8, 2009 6:49 am

Before Climategate, I thought the science was dubious and the aim political, but the temperature data correct, at least for the last hundred years or so.
Now I realise I was being too naive: the data seems to be flawed just like the rest of the “science”!

Charles. U. Farley
December 8, 2009 6:49 am

Source file= Jones+Anders
Jones data to= 1998
Says it all. More BS than a herd of buffalo.

Zhorx
December 8, 2009 6:50 am

It’s eerie just how similar the graphs of the adjustments are to the “fudge factor”, compare for yourself:
http://wattsupwiththat.com/2009/12/05/the-smoking-code-part-2/

December 8, 2009 6:52 am

ozspeaksup – that data set on the ftp site at BOM is, I think, the old Torok data set I referred to in an earlier comment. Check out the method.txt file if it is there and do a search for the paper mentioned online and you’ll find what you seek.

TerryBixler
December 8, 2009 6:52 am

Thanks Willis and Anthony.
It was a very depressing day yesterday, thinking about Pearl Harbor and the EPA. I can only hope that truth will win out; it has in the past, as it may in the future (I even feel that hope and change are tainted words, so I did not use them).

Pamela Gray
December 8, 2009 6:57 am

This thing about not having raw data anymore – I am confused about that. There is raw unadjusted station data that apparently can still be had by any Susie Q or Tommy T out there. Isn’t that the raw data? So who needs raw data from the Met? Correct me if I’m wrong, but the surfacestations survey is able to capture the raw data from each station and display it here. Isn’t that raw data? And easily obtained? What if we decided that our next challenge was to tabulate and average climate-zone station data (totally unadjusted) and run our own graphs here at WUWT? And I do mean by climate zone, so that we are not averaging together apples and oranges. Then let THEM argue about our methods.

Amabo
December 8, 2009 6:59 am

Turns out Pachauri doesn’t want to investigate the emails after all.

Steve Fitzpatrick
December 8, 2009 7:00 am

Very nice work Willis.
How much time did you devote to the efforts described in your post? Just wondering how much effort would be involved to do the same thing for a random selection of a hundred or so stations around the world.

Jeremy
December 8, 2009 7:00 am

More BBC propaganda. Husky dogs may not have employment and face a bleak future in a warmer world. This is really pathetic. Teams of husky dogs (which pull a sled) were replaced by motorized machines called snowmobiles – or in Canada, a Ski-Doo – over 50 years ago.
http://news.bbc.co.uk/2/hi/programmes/hardtalk/8399823.stm
What does it matter – keep telling enough lies and keep repeating them and eventually people will believe.
It is the same with polar bears – they are not in trouble or drowning at all (the population has actually increased five-fold since we restricted hunting).

Ryan Stephenson
December 8, 2009 7:00 am

So the Copenhagen summit has just heard that the last 9 years are the warmest on record by a long way.
You’d have trouble selling that one in the UK. 2003 experienced a particularly hot Summer due to the collision of two high pressure weather systems over Northern Europe. Even the warmists aren’t suggesting that was anything more than a weather event. But since 2003 Britain has had increasingly severe winters and overcast cool summers. Last winter we had snow so bad it brought the country to a standstill.
It is like listening to a bunch of ranting lunatics of the kind that carry boards with “The End is Nigh” on them.

December 8, 2009 7:04 am

Last week while updating my website (http://www.waclimate.net) with temperatures across Western Australia for November, I noticed something peculiar about August 2009 on the BoM website…
http://www.bom.gov.au/climate/dwo/IDCJDW0600.shtml
The mean min and max temps for August had all gone up by about half a degree C since first being posted by the BoM on Sep 1.
Below are the min and max temps for the 32 WA locations I monitor, with the BoM website data at the top as recorded from Sep 1 to Nov 17, and below them the new figures on the BoM website since Nov 17 …
August 2009 (mean min/max, °C) – BoM website figures recorded Sep 1 to Nov 17 → figures on the BoM website since Nov 17:
Albany: 9.0/16.2 → 9.4/16.6
Balladonia: 5.0/20.7 → 5.5/21.1
Bridgetown: 5.7/15.7 → 6.2/16.1
Broome: 14.6/29.2 → 15.1/29.7
Bunbury: 8.2/16.7 → 8.7/17.2
Busselton: 8.7/17.0 → 9.2/17.4
Cape Leeuwin: 11.8/16.2 → 12.2/16.6
Cape Naturaliste: 10.5/16.7 → 11.0/17.1
Carnarvon: 11.4/23.2 → 11.8/23.6
Derby: 15.0/32.7 → 15.6/33.2
Donnybrook: 6.7/17.2 → 7.2/17.6
Esperance: 8.3/17.7 → 8.8/18.1
Eucla: 7.9/21.5 → 8.4/21.9
Eyre: 4.3/21.6 → 4.5/22.0
Geraldton: 9.5/20.0 → 10.0/20.5
Halls Creek: 16.1/32.6 → 16.6/33.0
Kalgoorlie: 6.8/20.3 → 7.2/20.7
Katanning: 6.1/14.7 → 6.5/15.1
Kellerberrin: 5.3/18.6 → 5.6/18.9
Laverton: 7.5/22.4 → 7.9/22.9
Marble Bar: 13.8/31.1 → 14.3/31.5
Merredin: 6.1/17.7 → 6.5/18.1
Mt Barker: 6.8/15.6 → 7.0/15.8
Northam: 6.2/18.4 → 6.6/18.7
Onslow: 13.8/27.7 → 14.3/28.1
Perth: 8.8/18.5 → 9.3/18.9
Rottnest Island: 12.4/17.3 → 12.9/17.7
Southern Cross: 4.6/18.1 → 5.0/18.6
Wandering: 5.3/16.1 → 5.6/16.6
Wiluna: 7.5/24.8 → 7.7/25.2
Wyndham: 18.3/34.0 → 18.8/34.4
York: 5.6/17.9 → 5.9/18.3
I’ve questioned the BoM on what happened and received this reply …
“Thanks for pointing this problem out to us. Yes, there was a bug in the Daily Weather Observations (DWO) on the web, when the updated version replaced the old one around mid November. The program rounded temperatures to the nearest degree, resulting in mean maximum/minimum temperature being higher. The bug has been fixed since and the means for August 2009 on the web are corrected.”
I’m still scratching my head, partly because the bug only affected August, not any other month including September or October. There’s been no change to the August data on the BoM website since I pointed out the problem and they’re still the higher temps.
So if anybody has been monitoring any Western Australia sites at all (or other states?) via the BoM website, be aware that your August 2009 temperature data may be wrong, depending upon whether you recorded it before or since Nov 17, and it’s not yet known what’s right and what’s wrong.
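For what it’s worth, a minimal Python sketch (synthetic numbers – not BoM data or code) shows why the explanation puzzles me: rounding each day to the nearest degree barely moves a monthly mean, while always rounding up shifts it by about half a degree, which is roughly the size of the jumps in the table above.

from math import ceil
import random

random.seed(1)
days = [random.uniform(8.0, 18.0) for _ in range(31)]  # synthetic daily minima for one month

exact = sum(days) / len(days)
nearest = sum(round(d) for d in days) / len(days)  # round each day to nearest degree
up = sum(ceil(d) for d in days) / len(days)        # always round up

print(f"exact mean:          {exact:.2f}")
print(f"nearest-degree mean: {nearest:.2f}  (bias near zero)")
print(f"rounded-up mean:     {up:.2f}  (bias near +0.5)")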

December 8, 2009 7:13 am

Dang, these “adjustments” look like Michael Mann’s work and that is not a compliment.

MattN
December 8, 2009 7:14 am

F-word…

Leon Brozyna
December 8, 2009 7:17 am

A fine piece of detective work.
Now all that remains to be done is to translate the terms into language a very simple layman (like a journalist or politician) can understand. Let’s try this – the raw data are the real temperatures while the adjusted data are what some scientists think the recorded temperatures should be.
It’s no wonder all the public normally sees is the adjusted data. Don’t want to confuse people with too many facts. They might start acting smart, like asking embarrassing questions such as, “How many adjustments are compound adjustments?” (Adjustments on top of adjustments on top of adjustments, and so on.)

Alan the Brit
December 8, 2009 7:20 am

Barry Foster (00:51:02) :
That should read December 1941, December 1942 was a whole year out & our colonial cousins would not have needed any warning by then, they would have figured it out themselves by the carnage left by the Japanese carrier fleet!
That was a great post & it certainly looks like somebody has been telling porkies! I always understood that “homogenising” was what they did to milk to make it sterile for public consumption! Not far off the mark.

Gary Pearse
December 8, 2009 7:20 am

I see that Anthony’s surfacestations project has to be expanded to include what is done with the data, because it seems that if the readings aren’t to the AGWers’ liking they “homogenize” them, and if they are concerned that one might find the homogenization way beyond reasonable, they are tempted to throw the raw data away! We had better get at this quickly.

UC
December 8, 2009 7:24 am

“http://www.metoffice.gov.uk/climatechange/science/monitoring/subsets.html. It doesn’t look very user-friendly.”
But /94/941200 is
Number= 941200
Name= DARWIN AIRPORT
Country= AUSTRALIA
non-adjusted I guess

December 8, 2009 7:25 am

Unbelievable. First, they put the thermometer at an airport. Then they skew the readings upwards. Then they merge all the skewed and biased readings, by a highly suspicious algorithm, into one sleazy mess called a global temperature dataset.
I am convinced that with unfudged SST data and high-quality station data – even far fewer stations than the thousands used for GISTEMP/HadCRUT – the global dataset would look like the Arctic record with less amplitude: the ’40s equal to present times.
http://www.junkscience.com/MSU_Temps/Arctic75Nan.html

December 8, 2009 7:26 am

Wow.
Thank you for all your hard work, Willis – for the clear explanations, the graphs, and the smoking gun.
I can’t believe these ‘adjustments’ – with steps like those one could prove anything at all.
And that’s ‘science’?
I must have been absent when they taught that in college …

Arthur Glass
December 8, 2009 7:26 am

In view of the (quoted) current (as of 10:00 AM EST) headline on Climate Depot, may this curmudgeon of a former English teacher remind everyone that ‘breath’ is a noun. (Proposed new motto for the EPA: ‘Every breath you take, I’ll be watching you.’) The verb is ‘breathe’.
There are, of course, phonemic changes in the pronunciation: in the verb, the dental fricative represented by the spelling /th/ is ‘voiced’, i.e. produced with vibration of the vocal cords. Since in English voiced consonants invite lengthening of preceding vowels, this is ‘long’ /ea/. All of this in contrast with the unvoiced /th/ and ‘short’ /ea/ in the noun ‘breath’.
This variation in medial vowel sounds is called in linguistics ‘guna’, from the Sanskrit. It is a persisting characteristic of Indo-European languages, and often has grammatical significance, as in irregular verbs in English, e.g. ‘lead’ is present tense, ‘led’ is past. In languages with complex rules for producing verb stems, e.g classical Greek, guna is crucial.
Of course in the case of classical Greek, as well as of many modern languages, we have to deal with the phenomenon of ‘declension’, whereas in English we ‘hide the declension.’
Whew! That was a long way to go for a punchline!

Mike
December 8, 2009 7:29 am

Falsus in uno, falsus in omnibus. When in Rome, do as the Romans do 😉

Atomic Hairdryer
December 8, 2009 7:29 am

Ok, my chin hurts.
Once from my jaw dropping and hitting the desk having seen the magnitude of the man-made global warming, then from looking at the severity of the Darwin bombing. Great article, and thanks also for adding the local history.

David
December 8, 2009 7:34 am

Darwin was attacked by the Japanese during WW2. So, accurate temperature recording may have been a lower priority during the early 1940’s.

Andy
December 8, 2009 7:41 am

Story in the BBC today.
“We’ve seen above average temperatures in most continents, and only in North America were there conditions that were cooler than average,” said WMO secretary-general Michel Jarraud.
It’s interesting how North America, with the most stations and technology, is the only one that shows cooling. It’s warming everywhere else. Naturally.
These people are shameless in their manipulation.
http://news.bbc.co.uk/2/hi/science/nature/8400905.stm

kwik
December 8, 2009 7:41 am

BLIMEY!!!!

geronimo
December 8, 2009 7:49 am

OT I know, but news just coming in at the Guardian has a leaked document of the proposals which is causing uproar at Copenhagen.
http://www.guardian.co.uk/

3x2
December 8, 2009 7:49 am

Having edited and graphed up a lot of N. European stations from v2.mean, I don’t think that what you have found in Darwin is in any way unique.
My current theory about the large differences between v2.mean and GISS (who knows with HadCRUT) is that they are not the result of malice as many believe but are a result of the kind of bulk processing operations made easy by the computing power available.
Let me explain. In much the same way that individuals are lost in the bulk processing operations found in everyday activities (the “I am not a number!” type), the same can be said of individual stations when processing so many records. That is to say, what is being processed is lost; only the results are important.
Just as a quick view of the scale: v2.mean has some 596,000 entries. That is almost 600,000 years’ worth of annual records. v2.mean_adj has some 422,000, so let’s round a little and say that the difference is 200,000 years. Each year has 12 points; that is 2,400,000 monthly means (let’s not go to daily max/min).
So by some means 2.4 million points have disappeared. My point here is simply that it would take a very determined individual to hand-process 600,000 records down to 400,000, examining 7,200,000 data points along the way.
I have hand-edited about 160 “local” stations, and tedious doesn’t even begin to describe the experience.
So I’m left with my “warming as an artifact of bulk data operations” theory – bulk operations which attempt, very badly, to make sense of individual stations. Nobody wants to go back over the results and check what has happened to individual stations where the end result (global average) is within “expectations”. The code would only be checked where there was plainly “something wrong” with the results. The processing code would then be changed and the whole job re-run until “expectations” are met.
It is interesting to look at v2.mean Iceland and the same stations via GISS. They are very much the same and I believe that this may be because Iceland neatly escapes many of the adjustment processes that you identify in N. Australia.
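For anyone who wants to reproduce the counts, a minimal Python sketch (assuming the usual GHCN v2.mean fixed-width layout – 12-character station/duplicate ID, 4-digit year, then twelve 5-character monthly means in tenths of a degree C, -9999 for missing – with the file names as downloaded):

def count_records(path):
    """Count station-year lines and non-missing monthly values in a v2.mean-style file."""
    years = values = 0
    with open(path) as f:
        for line in f:
            years += 1
            months = [line[16 + 5 * i : 21 + 5 * i] for i in range(12)]
            values += sum(1 for m in months if m.strip() and int(m) != -9999)
    return years, values

raw_years, raw_vals = count_records("v2.mean")
adj_years, adj_vals = count_records("v2.mean_adj")
print(f"station-years: raw {raw_years}, adjusted {adj_years}")
print(f"monthly values dropped by adjustment: {raw_vals - adj_vals}")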

JustPassing
December 8, 2009 7:50 am

Copenhagen climate summit in disarray after ‘Danish text’ leak
The UN Copenhagen climate talks are in disarray today after developing countries reacted furiously to leaked documents that show world leaders will next week be asked to sign an agreement that hands more power to rich countries and sidelines the UN’s role in all future climate change negotiations.
http://www.guardian.co.uk/environment/2009/dec/08/copenhagen-climate-summit-disarray-danish-text
Download here
http://www.guardian.co.uk/environment/2009/dec/08/copenhagen-climate-change%20here

George B
December 8, 2009 7:52 am

Much of what we know was built on theory.
As part of the scientific process, many theories have been proven wrong and new ones adopted.
Al Gore and many in our government are gaining power, money and influence by supporting these false data. We now have an EPA that believes it has more power and influence than God and Country combined!
We are in the process of turning over our freedom, liberty, and wealth to a new religion.

Richard Sharpe
December 8, 2009 7:53 am

Wow, I have not seen that airport for a long while. Last time was in 1996, but the most memorable time was just after Christmas, 1974, when I was helping get people onto planes.
It has changed a lot.

December 8, 2009 7:54 am

“David (07:34:50) :
Darwin was attacked by the Japanese during WW2. So, accurate temperature recording may have been a lower priority during the early 1940’s.”
Or higher if the temperature is of any importance to airplanes, tanks and troops…

Ian
December 8, 2009 7:55 am

** Applause **
Diligent & well researched piece.
Please keep up the superbly detailed work.
Best regards

December 8, 2009 7:55 am

Thanks Willis for this fine piece of work. Was that really the first Australian site you looked at in detail?

December 8, 2009 7:56 am
phlogiston
December 8, 2009 7:57 am

I’ve seen the light – global warming really is anthropogenic!
The globe itself is probably not warming, certainly not any more, but the global temperature record is another matter altogether.

AdderW
December 8, 2009 8:01 am

geronimo (07:49:33) :
OT I know, but news just coming in at the Guardian has a leaked document of the proposals which is causing uproar at Copenhagen.
http://www.guardian.co.uk/

Are we certain that this is not a hack, we wouldn’t want to get this wrong now, would we 🙂

3x2
December 8, 2009 8:03 am

RE : HadCRUT and your FOI requests.
GISS seems to perform the adjustments and stats “on the fly” mainly using v2.mean as a base. There doesn’t seem to be a bulk list left behind to compare to the original (v2.mean) so bulk comparisons are impossible.
But, from what I have read in the Climategate files, CRU seem to store their adjustments in a database, the adjustments and stats being two separate processes. Perhaps this is why there is no chance of us ever seeing the data as used by CRU: it would allow bulk, station-by-station comparisons with the “raw” data (v2.mean?).

Varco
December 8, 2009 8:05 am

“http://www.metoffice.gov.uk/climatechange/science/monitoring/subsets.html. It doesn’t look very user-friendly.”
The explanations given by the Met office are illuminating…
http://www.metoffice.gov.uk/climatechange/science/monitoring/subsets.html
“Question 4. How can you be sure that the global temperature record is accurate?
The methodology is peer reviewed. There are three independent sets of global temperature that all clearly show the rise in global temperatures over the last 150 years. Also we can observe today that other aspects of climate are changing including reductions in Arctic sea ice and glacier volume, and changes in phenological records, for example the dates on which leaves, flowers and migratory birds appear.”
So there you go – definitive proof of data accuracy via the dates upon which migratory birds appear. If any journalists who own a garden are reading this blog, perhaps the above comment will assist in understanding why so many people are sceptical of the alleged ‘science’ from these institutions.
Does anyone else think it is curious that being ‘peer reviewed’ is cited alongside migratory bird timing as proof of accuracy? Perhaps the Met Office think being ‘peer reviewed’ is no longer enough after the Climategate emails cast doubt on the process?
It is sad that reputable institutions have fallen so low…

December 8, 2009 8:07 am

Kudos, Willis, for your timely exposure of BAHD — Biased Anthropogenic Homogenization of Data.
Bob

BHenry
December 8, 2009 8:08 am

As a layperson it took me some time and study to understand and appreciate Eschenbach’s post. This is also true of similar posts on the subject of “climate change”. Of course it was not written for the layperson but for those who are engaged in the study of the subject. The advocates of anthropogenic climate change have gotten traction in the media by simplifying the subject so the average person can understand it. Any trial lawyer will tell you that you can’t persuade a jury of laypersons using technical language; you must state it in simpler terms. I would like to see someone or some group with credible credentials issue public statements on the subject which are understandable.

Editor
December 8, 2009 8:13 am

Willis,
The truly raw data for the stations are in the daily temperature records (.dly files) on the GHCN FTP site. What is interesting is that when GHCN creates the monthly records that they (and GISS) use, they will throw out an entire month’s worth of data if a single daily reading is missing from that month.
When a month is missing from the record, GISS turns around and estimates it using a convoluted algorithm that depends heavily on the existing trend in the station’s data, thus reinforcing any underlying trend. GISS can estimate up to six months of missing data for a single year using this method.
It seems to me the best place to start is with the raw daily data and find out how many “missing” months have a small handful of days missing, and estimate the monthly average for those days, either by ignoring the missing days or interpolating them.
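A minimal sketch of that approach (a hypothetical helper, not GHCN code; values in tenths of a degree C with -9999 for missing, as in the daily files, and an arbitrary three-day tolerance):

def monthly_mean(daily_tenths, max_missing=3):
    """Average one month of daily values, or None if too many days are missing."""
    present = [v / 10.0 for v in daily_tenths if v != -9999]
    if len(daily_tenths) - len(present) > max_missing:
        return None                        # too sparse: reject, as GHCN effectively does
    return sum(present) / len(present)     # otherwise simply ignore the missing days

august = [152 if d % 2 else 148 for d in range(31)]  # synthetic month, tenths of a degree
august[4] = august[20] = -9999                       # two missing daily readings
print(monthly_mean(august))                          # still yields a mean near 15.0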

Javelin
December 8, 2009 8:15 am

Excellent analysis – it looks like you did more work on this than they did in 10 years. Once they publish all the raw data…
So in summary: there are three global datasets (CRU, GISS, and GHCN) used to justify warming temperatures; however, they are all based on the same underlying data (GHCN), and this data requires adjusting because of changes to the weather stations and positions of thermometers. When you look at the underlying GHCN data in detail, every time they make these adjustments they adjust the temperature upwards – without justification. Simple as that.
When the raw data is published, every fool on the planet will be left standing in the altogether with their hands over their nuts. Now, everybody who wanted carbon taxes, raise your hands.

Henry chance
December 8, 2009 8:15 am

The only thing all this hard work proves is that they have a motive to change raw data.
I see they have a motive to fight the release of raw data.

JP
December 8, 2009 8:17 am

“This thing about not having raw data anymore. I am confused about that. There is raw unadjusted station data that apparently can still be had by any Susie Q or Tommy T out there. Isn’t that the raw data?”
This has bothered me for some time. Supposedly GISS, NOAA, and Hadley use the same stations, but they all come up with different temp reconstructions. GISS applies different adjustments to the same data than NOAA or Hadley do, and vice versa. And I am not so sure they all use the same reporting stations. All perform very questionable and often unpublished adjustments to different stations. To make sense of it all is impossible.

Tim Clark
December 8, 2009 8:17 am

Good analysis, Willis. One minor point – if it has already been addressed above, just ignore me. Where you have the phrase “and also a fine way to throw away all of the inconveniently colder data prior to 1941”, shouldn’t that be “inconveniently warmer data”?

Douglas Hoyt
December 8, 2009 8:27 am

According to Torok et al (2001), the UHI in small Australian towns can be expressed as
dT = 1.42 log(pop) -2.09
For Darwin with a population of 2000, the UHI is 2.60 C.
For Darwin with a population of 120,000, the UHI is 5.12 C.
The net warming then is 2.52 C, which explains all the warming that Eschenbach shows in Figure 7. Presumably the rapid growth in Darwin population began in 1942 and was relatively constant before then.
It appears that no UHI correction has been made. If they implemented it, then the warming would totally disappear.
See http://noconsensus.wordpress.com/2009/11/05/invisible-elephants/
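For anyone checking the arithmetic, a minimal Python sketch (the log must be base 10 for the quoted coefficients to reproduce these values):

from math import log10

def uhi(pop):
    # Torok et al. (2001) small-town relation as quoted above
    return 1.42 * log10(pop) - 2.09

early, late = uhi(2000), uhi(120000)
print(f"UHI at pop 2,000:     {early:.2f} C")            # ~2.60
print(f"UHI at pop 120,000:   {late:.2f} C")             # ~5.12
print(f"net spurious warming: {late - early:.2f} C")     # ~2.52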

TJA
December 8, 2009 8:27 am

http://news.bbc.co.uk/2/hi/science/nature/8400905.stm

In a separate move, the Met Office has released data from more than 1,000 weather stations that make up the global land surface temperature records.
The decision to make the information available is the latest consequence of the hacked e-mails affair.
“This subset release will continue the policy of putting as much of the station temperature record as possible into the public domain,” said the agency’s statement.
“As soon as we have all permissions in place we will release the remaining station records – around 5,000 in total – that make up the full land temperature record.

Love this quote:

“We’ve seen above average temperatures in most continents, and only in North America[where the numbers have been open to scrutiny and adjustment – TJA] were there conditions that were cooler than average,” said WMO secretary-general Michel Jarraud.
“We are in a warming trend – we have no doubt about it.”

Mike Haseler
December 8, 2009 8:28 am

It’s a sad day for British Science!

Roger
December 8, 2009 8:28 am

I think a major story will be the leaker, more so than the leak. You will find the IPCC etc. will soon not be at all interested in investigating it; neither will UEA. The investigators may be under intense pressure not to release information about the leak. This explains the intense desire of the AGW crowd to make sure the public thinks that the KGB/goblins etc. were responsible. I think that if it is found that it was an internal leak, the IPCC and all associated with it might as well disband and go home.

PeterS
December 8, 2009 8:31 am

I have just asked the Met Office if the data they have made available is raw or processed (e.g. to correct non-warming). They referred me to this statement:
“The data that we are providing is the database used to produce the global temperature series. Some of these data are the original underlying observations and some are observations adjusted to account for non climatic influences, for example changes in observations methods.”
So now I know. It’s the non-climatic influences I’m worried about…
I hope you guys can find examples where clear comparisons can be made along the lines of the current post.

Robert Wood
December 8, 2009 8:31 am

A few months after Pearl Harbour, the Japanese bombed Darwin flat. Some good wrecks in the harbour 🙂

Methow Ken
December 8, 2009 8:36 am

Never was a single smoking gun more clearly and devastatingly exposed.
Major kudos to Mr. Eschenbach for his outstanding and meticulous effort.
But: Now what ?? . . .
Given the magnitude and extent of this demonstrated ”artificial adjustment”, as already mentioned the ”false in one, false in all” assumption is a fair starting point as far as reasonable suspicion goes. But to have a shot at convincing the general public, let alone the MSM, I expect a significant number of other stations around the world will have to be shown to have undergone similar large and unjustified ”extended warping” of the data.
This is what computers are good at (IF the software is professionally done):
Seems like it should be possible to write a program that would take as input:
[1] The totally unadjusted raw data for individual stations; and:
[2] The end-result data after all tweaking by CRU, GISS, & GHCN.
Minor SIDEBAR detail:
Of course 1st you have to actually GET both sets of data. . .
Ideal program output would then be graphical comparison along the lines of the excellent presentation in this thread start. . . . . Having said that, as someone who spent 25 years on a complex software engineering project, I immediately add:
Yes: I am aware that done right, this would be a non-trivial software project.
And: Also recognize as was pointed out in prior comment by TheSkyIsFalling that metadata giving reasons for real-world adjustments for individual stations would need to be reviewed.
OTOH:
Surely would not need to do a huge number of stations world wide to reasonably demonstrate and prove a pervasive smoking gun if similar results were common; i.e.:
Seems like a few dozen or so similar examples would start to be pretty overwhelming hard evidence.
Interesting times, indeed. . . .
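As a rough sketch of the core computation (plain Python with hypothetical data structures; the real work – parsing the two datasets, metadata review and plotting – is omitted):

def trend_per_century(years, temps):
    """Ordinary least-squares slope, scaled to degrees C per century."""
    n = len(years)
    my, mt = sum(years) / n, sum(temps) / n
    num = sum((y - my) * (t - mt) for y, t in zip(years, temps))
    den = sum((y - my) ** 2 for y in years)
    return 100.0 * num / den

def compare(raw, adjusted):
    """raw, adjusted: dicts mapping station id -> (years, temps). Yields trend deltas."""
    for station in sorted(set(raw) & set(adjusted)):
        t_raw = trend_per_century(*raw[station])
        t_adj = trend_per_century(*adjusted[station])
        yield station, t_raw, t_adj, t_adj - t_raw

# Toy example: a flat raw record whose adjusted version warms sharply.
raw = {"DARWIN": ([1900, 1950, 2000], [29.0, 28.9, 29.0])}
adj = {"DARWIN": ([1900, 1950, 2000], [27.5, 28.5, 29.5])}
for s, tr, ta, d in compare(raw, adj):
    print(f"{s}: raw {tr:.2f}, adjusted {ta:.2f}, delta {d:+.2f} C/century")

Stations whose delta is large in one direction would be the ones worth pulling metadata for.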

kwik
December 8, 2009 8:37 am

On the CRU curve, I really don’t understand what I see. Well, I think the black line is the raw data, since you mention that under the graph. But the red and blue shaded areas? Model predictions? Or?

Richard Sharpe
December 8, 2009 8:40 am

Willis,
Where was station zero located?
I am reasonably familiar with Darwin and might know where the station was located.
However, I cannot think of anything that would cause them to be 0.6C high for such a long period of time and with what looks like a gradually declining trend from 1880 to 1940.
Are you just trying to hide the decline? 🙂
While that large jump in 1941 looks suspicious, I think it was later in the war that the Japanese bombed Darwin (I would have to look it up, but as I recall Pearl Harbor was late in ’41, wasn’t it? Something like December 7, 1941. While the RAAF was in the region, probably largely operating from Tyndall, the US military presence in Darwin did not commence, I believe, until the US entered the war). So I suspect that the bombing did not cause the problems.

bsharp
December 8, 2009 8:47 am

Mr. Eschenbach:
I appreciate your effort in this matter. Your post has been shared with a person who gives great authority to the existing ‘academic community’. That person has dismissed your findings as opinion and unsupported personal conjecture that the process is broken.
Part of our discussion has hinged on your statement “So I looked at the GHCN dataset.” While acknowledging that the blog venue doesn’t require the same level of source citation as a peer-reviewed journal, your sources have been questioned.
Could you provide a more detailed reference/link to the GHCN data in question (both raw and adjusted)? Thanks and regards.

JJ
December 8, 2009 8:48 am

Good grief. Enough with the unmitigated speculation and hyperbole.
Willis – excellent analysis, but you go a bridge too far with your conclusions.
“Yikes again, double yikes! What on earth justifies that adjustment? How can they do that? … Why adjust them at all?”
Those are very good questions. Making claims requires answers.
“They’ve just added a huge artificial totally imaginary trend to the last half of the raw data!”
You don’t know that. You should not claim to know that which you do not. That is Teamspeak; leave it to the Team.
You have turned up what we already knew – that the alleged ‘global warming’ trend is a function of the adjustments applied to the raw data (or, as in the case of UHI and similar effects, not applied) as much as or more than of the raw data itself. That is worrying, but not necessarily illegitimate.
The example that you have done an excellent job of laying out here demonstrates that those who are making these adjustments need to have very good explanations for why they did so. Having stuck your neck out and called them dishonest, you had better pray that they don’t have good explanations for those adjustments. If they do, the best that is going to happen is that you – and by the broad brush we – are going to be made to look like a bunch of biased, ranting fools.
Perhaps you should stick to pointing out worrying potential issues (again, good job of that) and save the claims of gross incompetence and malfeasance for after the questions you raise have been answered.
JJ

December 8, 2009 8:48 am

Willis,
Very very nice. Not much else to say ‘cept wow.

pwl
December 8, 2009 8:48 am

Breathtaking! Clear and precise. Yikes and Double Yikes indeed!

Billy
December 8, 2009 8:49 am

Could somebody help me out here? In The Times (http://www.timesonline.co.uk/tol/news/environment/article6936328.ece), we have the claim that CRU’s data was destroyed:
‘In a statement on its website, the CRU said: “We do not hold the original raw data but only the value-added (quality controlled and homogenised) data.”
The CRU is the world’s leading centre for reconstructing past climate and temperatures. Climate change sceptics have long been keen to examine exactly how its data were compiled. That is now impossible. ‘
However, over on RealClimate you’ve got Gavin Schmidt making claims like this:
[Response: Lots of things get lost over time, but no data has been deleted or destroyed in any real sense. All of the raw data is curated by the relevant Met. Services. – gavin]
[Response: No. If that was done it would be heinous, but it wasn’t. The original data rests with the met services that provided it. – gavin]
So which is it? Does the data exist somewhere or not? If it does then I don’t even understand why CRU even made such a dramatic announcement. Why didn’t they just say what Gavin Schmidt says above?
But on the other hand, reading this article it almost seems like the author DOES have access to raw data (and normalized data) which he used to calculate the Darwin adjustments. Is this the same data that CRU started with? If so, why not use this data to start with to reproduce CRU results? If the data does exist in some format I don’t see why there is any controversy at all.
Seems like somebody is being disingenuous but not sure who. This is something I find incredibly frustrating about this issue. The scientific issues are understandably opaque and subject to debate. That’s hard enough to get to the bottom of. But even simple things like, ‘Was the data destroyed or not?’ are subject to so much spin it’s nearly impossible for somebody who’s trying to be objective to sort it all out.

stevemcintyre
December 8, 2009 8:52 am

Here’s a somewhat related post that I did on an Australian station a couple of years ago: http://www.climateaudit.org/?p=1489

Neo
December 8, 2009 8:54 am

Given the amount of money they are talking about just for “Cap-n-Tax” (not to mention the IPCC money), this distortion is criminal.

john
December 8, 2009 9:00 am

Given the thoroughness of Willis Eschenbach’s methodology in tracing the Darwin temperature record, and what looks like a successful survey of US surface temperature stations (www.surfacestations.org), maybe a similar effort could be developed to audit these three main temperature databases. Establish a single method/process for review of the record, with required data formats, etc., publish a manual and let the globe have at it.

December 8, 2009 9:01 am

Interesting:
From: Phil Jones
To: Kevin Trenberth
Subject: One small thing
Date: Mon Jul 11 13:36:14 2005
Kevin,
In the caption to Fig 3.6.2, can you change 1882-2004 to 1866-2004 and
add a reference to Konnen (with umlaut over the o) et al. (1998). Reference
is in the list. Dennis must have picked up the MSLP file from our web site,
that has the early pre-1882 data in. These are fine as from 1869 they are Darwin,
with the few missing months (and 1866-68) infilled by regression with Jakarta.
This regression is very good (r>0.8). Much better than the infilling of Tahiti, which
is said in the text to be less reliable before 1935, which I agree with.
Cheers
Phil
Prof. Phil Jones
Climatic Research Unit Telephone +44 (0) 1603 592090
School of Environmental Sciences Fax +44 (0) 1603 507784
University of East Anglia
Norwich Email p.jones@xxxxxxxxx.xxx
NR4 7TJ
UK
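The infilling described in the email is straightforward to illustrate. A minimal sketch (synthetic numbers, not the CRU method or data): fit one station against its neighbour over their overlap, then predict the missing months from the neighbour.

def fit(x, y):
    """Least-squares a, b for y ≈ a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

# Overlap period: months where both series have data (synthetic values).
jakarta = [26.1, 26.4, 26.9, 27.2, 27.0, 26.5]
darwin  = [27.0, 27.5, 28.1, 28.6, 28.3, 27.6]
a, b = fit(jakarta, darwin)

# Infill a missing Darwin month from the Jakarta value for that month.
print(f"infilled Darwin value: {a + b * 26.7:.2f}")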

Ben
December 8, 2009 9:05 am

Wonderful text.
Minor typos:
– figure 4 legend: bad copy-and-paste of the legend of figure 3 (remove the reference to the year 2000);
– the right Latin saying is “Falsus in uno, falsus in omnibus”.

boballab
December 8, 2009 9:05 am

kwik (08:37:09) :
“On the CRU curve, I really don’t understand what I see. Well, I think the black line is the raw data, since you mention that under the graph. But the red and blue shaded areas? Model predictions? Or?”
First, the black line in figure 1 is labeled “observations”; however, that is not the raw observation. That is the observation after adjustment.
Second, from what I understand, the red area is what the model says the temp should be with CO2 forcing. The blue area is what the model says the temp should be without CO2 forcing.
Now, when I looked at fig. 1 and fig. 2, to my Mark 1 eyeball the raw data looks like it correlates closely with what the models say the temp would be WITHOUT CO2 forcing (the blue shaded area).
Earlier I asked if Willis had laid the raw data over the graph in fig. 1 to see if it corresponds with the blue area. The reason being: if the IPCC’s own models without CO2 forcing match the raw data and Willis’s reconstruction, that in turn gives credence to the idea that people are adjusting the raw data to match the models’ red area – or in other words, why you get that huge adjustment.

Ben
December 8, 2009 9:11 am

(Also : 1850 instead of 1650 – second line after the photograph of the Darwin airport.)

December 8, 2009 9:11 am

Billy (08:49:21),
The obvious answer: Produce the raw data. The hand-written, signed and dated B-91 forms recording the daily temps at each surface station would be a good start.
JJ (08:48:13):
“…the alleged ‘global warming’ trend is a function of the adjustments applied to the raw data (or, as in the case of UHI and similar effects, not applied) as much or moreso than the raw data. That is worrying, but not necessarily illegitimate.”
What smacks of illegitimacy is the fact that when the data is massaged, it almost always shows warming: click1, click2, click3.
For the true global temperature, a record of temperatures from rural sites uncontaminated by UHI would show little if any global warming: click1, click2 [blink gif – takes a few seconds to load].
The CRU, the IPCC, the NOAA and the rest of the government funded sciences offices are trying to show an alarming increase in global temperatures. They almost always show a y-axis in tenths of a degree to exaggerate any minor fluctuations. But by using a chart with a less scary y-axis, we can see that nothing unusual is occurring: click

Jeff
December 8, 2009 9:15 am

I think we (the internet community) can end this debate once and for all… Using the stations cherry-picked by the IPCC, we could set up station teams via internet volunteers to review the raw vs. “value added” GHCN data and validate those adjustments… where an adjustment appears to have been applied without good reason, the team should attempt its own adjustment based on logical and justifiable reasoning…
This should allow the world to have a verified record set of actual temp measurement for at least the last 100 years … we don’t have that now …
step one would be to classify station location for appropriateness … bad sites would be marked for adjustment or exclusion … adjustments should never be averages they should be delta adjustments based on nearby reliable (i.e. non bad) sites …
An Army of Davids so to speak …
No reason this can’t happen within a year or two if someone can coordinate it …
Set up clearly defined rules on site validation …
Set up clearly defined adjustment methods to measure the warmists “valued added” against …
Use those adjustment methods to re-adjust the egregious site adjustments …
Create a peer review process to allow a second, third and fourth set of eyes to validate the work done by the team …
Allow anyone to join a team … anyone … Warmists are Welcome 🙂
Team decisions should be a super majority i.e. >66%

Phil A
December 8, 2009 9:16 am

“Some of these data are the original underlying observations and some are observations adjusted to account for non climatic influences, for example changes in observations methods.” – PeterS quoting the Met Office
And with no visibility whatsoever of what those “adjustments” were, despite knowing full well (or because they know?) that therein lie the principal suspicions of dodgy data manipulation.
Hm. I wonder if the Met Office even *have* records of how those adjustments were done? Or whether some of them were done with long-lost bits of code on long-dead computers and what they actually have is an archive of raw data A, an archive of adjusted data B and an embarrassing lack of repeatability of how they got from A to B in the first place. Maybe they’re looking at the likes of Darwin and going [snip] too – except they can’t and won’t admit they have a problem.
Or was that what ‘Harry’ was trying to do? Trying to replicate past adjustments in putting together the adjusted database? And did he ever “succeed”?

December 8, 2009 9:21 am

At what point do we declare that these temperature records and proxies have too little confidence to be of any use in determining if AGW exists?
Would a sensible way forward be to use a well-understood set of measurements and monitor changes for the next X years, to see which models (hypotheses) are working? It seems Copenhagen may fail and the “catastrophe” hypothesis is dying under the weight of Climategate – so why not?

rickM
December 8, 2009 9:32 am

I think what is missing from the current so-called “coverage” is this kind of analysis, to beat back the claims that the science underpinning (I would love to use the word undermining) the CRUgate emails is sound.
What I typically see is a talking head who interviews a warmist, and the talking points are driven by the warmist – again, no debate.
1) Emails were hacked
2) The emails and specific comments have been taken out of context
3) The science is sound

beng
December 8, 2009 9:35 am

Fine work as usual, Willis.
******
8 12 2009
JP (04:46:12) :
This subject was covered in a CA thread some years ago. I believe it came up when someone discovered the TOBS adjustment that NOAA began using. The TOBS adjusted the 1930s down, but the 1990s up. Someone calculated that the TOBS accounted for 25-30% of the rise in global temps.
******
The TOBS issue was the first thing I thought, too, to explain the massive adjustments. They seem to use this as a catch-all adjustment because by nature the correction can be quite large in some specific instances (in both directions). The metadata to confirm this perhaps isn’t available — I don’t know.

Martin
December 8, 2009 9:36 am

Willis,
I don’t know where the NOAA URL is that gives individual station data (as opposed to data sets) so I went to the NASA/GISS site with the individual stations data.
http://data.giss.nasa.gov/gistemp/station_data/
The Darwin raw data there (http://data.giss.nasa.gov/cgi-bin/gistemp/findstation.py?datatype=gistemp&data_set=0&name=Darwin) seems to correspond with the raw data you showed in your graphs. But the homogenized data (at http://data.giss.nasa.gov/cgi-bin/gistemp/findstation.py?datatype=gistemp&data_set=2&name=Darwin) does not look like what you show as homogenized data.
Could you give a URL for the site where you got the data (and indicate which file if it is an ftp or otherwise multiple file listing)?
Thanks.

Bill Webster
December 8, 2009 9:36 am

It gets worse…
The latest adjustment for Darwin airport is +2.4 deg C
It would appear that as new data for that station is added each month going forward, it will be immediately adjusted upwards by that large amount by the “scientists”. How can this be right, given that current temperature data from modern instruments can be expected to be more accurate than historical data?
Also if the UHI effect is taken into account any adjustment should be negative, not positive….
If this kind of fiddle is happening with current readings from many other stations around the world, then it is little wonder that the Met Office is claiming that the current decade is the hottest ever!

December 8, 2009 9:38 am

I am an engineer but thought it was obvious to all what a scam the GW program is. Of course the data are “poop”. See Icecap, Dec 5, 2009, “Alaska Trends” by Dr Richard Keen for more “poop” in Alaska records.

Paul Vaughan
December 8, 2009 9:50 am

Insightful comment:
vdb (01:05:25) “An adjustment trend slope of 6 degrees C per century? Maybe the thermometer was put in a fridge, and they cranked the fridge up by a degree in 1930, 1950, 1962 and 1980?”

I agree with ForestGirl (01:11:06) about the need for site profiles. When I used to do forest & soils research, we used to take 2 or 3 pages worth of notes on the physical setting of each study plot. The log notes were diligently incorporated into electronic databases with full awareness of their value to future investigators.
Boris Gimbarzevsky (01:07:15) “[…] the only time one sees homogeneity is in the winter where every place in my yard not close to the heated house is uniformly cold […]”
You raise some interesting observations Boris. I hope many will consider your suggestion. I’ve been studying coastal BC stations. Serious questions arise.

Those who seem to immediately conclude ~1941 issues at Darwin must relate to war activities need to review:
a) their Stat 101 notes on confounding.
b) climate oscillation indices (e.g. SOI, NAO, PDO, AMO, etc.)
c) Earth orientation parameters.
Don’t make the mistakes Mr. Jones has made!

Great article – only the 2nd really interesting article on WUWT since CRUgate broke. This thread has inspired me to dust off a cross-wavelet analysis of TMax vs. TMin that highlights …. (may be continued…)

Chris Schoneveld
December 8, 2009 9:51 am

In Burketown the data between 1907 and 1953 have been adjusted upwards by approx. one degree C, turning a clear unadjusted warming trend into an adjusted flat/cooling trend. So it is not a nationwide bias in one direction.
http://www.appinsys.com/GlobalWarming/climgraph.aspx?pltparms=GHCNT100XJanDecI188020080900111AS50194259000x

Jeff
December 8, 2009 9:52 am

Is it just me, or does it seem that every time someone tries to reconcile the raw data with the “value added” data, unexplainable upward adjustments are almost always found?
And why would Australia, which is 79% the size of the USA, use only 3 stations?
If they told you they used 5 stations for the entire USA, would anyone ever listen to them again?
Garbage In –> Garbage Code –> Garbage2 Out
I’m coining the term G2O …

Michael
December 8, 2009 9:55 am

Entertain me for a moment in history.
I believe in the actual proven conspiracy of Climategate; all the ingredients are there. Whether you believe in conspiracies or not is of no consequence. Hypothetically speaking, if there is a greater conspiracy of the Illuminati using man-made climate change to their ends, Climategate may be payback to the NWO conspirators, to commemorate the assassination of JFK. The e-mails were leaked on the same day of the week Kennedy died, 2 days before his actual assassination date. Perhaps they had to be leaked to the Internet on the day they were because the offices would be closed over the weekend. Close enough, eh?

December 8, 2009 9:58 am

Just did a few calculations for raw v homogenised GHCN data for my home town of Brisbane using the Eagle Farm airport station and the trend flips from -0.6/100yrs to +0.6/100yrs looking at the data from 1950 – 2008. Not as dramatic but still interesting. The data pre 1978 is adjusted down and post 78 adjusted up. I have not looked into any reasons so I am just throwing it out there. I’ll get a few plots up tomorrow. I have no station history to see if there are reasons for this adjustment. Agree that a few dozen carefully examined stations by Willis and others who know what they are doing would be a good step.

JJ
December 8, 2009 9:59 am

Smokey,
My point being that smacking of this and that is not sufficient to make damning accusations. It is not only immoral to do so, it is very bad strategy. Do that enough, and at some point you are going to end up having your ass handed to you on a silver platter.
That the adjustments applied to date almost always show warming is not necessarily illegitimate in the aggregate, much less is it any proof that the adjustments applied to any particular station are wrong, as Willis is claiming here.
It is entirely possible that the adjustments applied to the Darwin station are complete and justifiable, in which case Willis is a blathering idiot, not a very nice person for having made baseless accusations, and he, Jones and Mann can share a suite at the next American Sphincters Association convention.
It is also entirely possible that the adjustments that have been applied are legitimate, but that there are compensating adjustments (such as UHI or other siting issues) that have innocently or intentionally been left out. That leaves the current accusations just as off the mark. GHCN would be judged incompetant and/or culpable, but not for any of the reasons listed here.
It is also entriely possible that the adjustments that have been applied are incorrect, but that they were the result of a mistake or unconscious bias. GHCN is therefore error prone, but not criminal.
It is also entirely possible that illegitimate adjustments that have been applied and other legitimate adjustments left out, in a concerted effort to cook the books.
The information we have right now is consistent with all of these hypothesis. Stick to reporting the facts, and leave the fanciful storytelling to Team members describing their proxies.
Note that Anthony’s work here has been to catalogue station siting issues that potentially demand adjustment or other response (such as outright discarding) that are every bit as intensive as the adjustments seen here. Railing against adjustments per se is to slit one’s own throat, as is unsupported accusation of criminality …
JJ

David Ashton
December 8, 2009 9:59 am

This is a long thread, so sorry if my question has been asked previously, but when were these adjustments to the raw data first made? Before 1988 or after?

pat
December 8, 2009 10:02 am

Exactly the same prevarication as NIWA. Lies disguised as data.

Reed Coray
December 8, 2009 10:03 am

Shouldn’t that be “Crikey” not “Yikes”?

Duncan
December 8, 2009 10:05 am

Anyone made a joke about those Southern Hemisphere thermometers being upside down, yet?
That’s obviously why the trend changed direction, once they were normalized.

Dyspeptic Curmudgeon
December 8, 2009 10:11 am

I haven’t had the time to read all the comments, but following a link in the article and further, I found a comment which noted that the original Darwin station was at the Post Office. A station was installed at the Airport (at some point), and the Post Office was destroyed when Darwin was bombed in February 1942. The last monthly average for the Post Office station was January 1942. It is therefore possible that the discontinuity in the record is a switch to the Airport station for 1941 et seq.

December 8, 2009 10:12 am

JJ (09:59:35),
All of your “entirely possible” comments can be resolved by the full and complete disclosure of all of the raw and massaged data, and the hidden methodologies and “adjustments”, that the Team uses to come up with their scary AGW conclusions.
As they say, just open the books. Then everyone can see if they’ve been cooked. The fact that they’re still stonewalling tells me all I need to know about their veracity and accuracy. The leaked emails and code only reinforce my suspicion, and any contrary and defensive red-faced arm-waving does nothing to convince me otherwise.

rbateman
December 8, 2009 10:18 am

Smokey (10:12:52) :
But Smokey, if they open the books, you know what will be found.
Someone will have to go in and open the books.

Zeke the Sneak
December 8, 2009 10:20 am

Riveting reading! Thank you.
A little black humor
“Now it looks like the IPCC diagram in Figure 1, all right … but a six degree per century trend? And in the shape of a regular stepped pyramid climbing to heaven? What’s up with that?”
Interestingly, a study has found (through the “use of eclectic proxies for temperature and other variables where empirical data is lacking”) a link between climate modification and Aztec sacrificial rituals:
http://geoplasma.spaces.live.com/blog/cns!C00F2616F39D0B2B!736.entry

December 8, 2009 10:24 am

Here is the same analysis done on the Grand Canyon:
http://i45.tinypic.com/bguywn.jpg
The hottest city around is nearby (Tucson) and has likely been “homogenized” with this pristine site.
Sources:
ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/v2/monthly/
http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=425723760080&data_set=0&num_neighbors=1

George E. Smith
December 8, 2009 10:26 am

So I didn’t see any mention, in the list of “inhomogeneity” factors, of the Weber grill effect.
Of course I believe at Darwin airport; when they say; “just toss another shrimp on the barbie!” they simply lay it on the ground; which is hot enough to frazzle even the best kangaroo steaks.
This whole “data adjustment” business seems like a giant fraud to me.
If I place 12 thermometers in various places around my house, and read them every day for a hundred years, I can then perform ANY mathematical AlGorythm to that raw data, to obtain some output data I can publish; I’ll call it GEST; the George E. Smith Temperature. Now if I move one of those thermometers; or maybe I just replace the old valve B&W TV set, with one of those modern gas hogging flat screen Plasma TVs, I would expect the output of GEST to change, because the TV station thermometer is now getting goosed by my nifty new TV set.
Well so what; I simply put an asterisk in the report for the day I install my new TV, to tell all my interested neighbors that I have a new TV and from here on out, GEST is going to be different.
The fraud part comes in when I brazenly rename my GEST; and pompously call it GESHT for the George E. Smith House Temperature.
Well the AlGorythm that I perform on my 12 thermometer raw data to get GEST, in no way represents the temperature of MY HOUSE which is what GESHT fraudulently purports to be. And the reason it isn’t the true GESHT is that 12 thermometers is not sufficient to truly sample the temperature of my house; where the temperatures can range from +400-450 F in the oven, when I am cooking that kangaroo roast, down to maybe 10-20 F in the freezer, where I have the rest of those kangaroos I hit on the road stashed; and according to an argument by Galileo Galilei in “Dialog on the Two World Systems”, every temperature between 10 F and 450 F must exist somewhere in my house, sometime, while I am doing the Thanksgiving kangaroo.
Well you see GISStemp and HadCRUT have the same problem as GESHT; they are NOT representative of the mean global temperature, or even the mean global surface; or lower troposphere temperature. They are GISStemp, and HadCRUT respectively; the result of applying some quite arbitrary AlGorythm to a completely non-representative sample of raw temperatures from various places around the world that do not together form a proper sampling of the continuous global function of temperature over space and time.
So GISStemp and HadCRUT are about the equivalent of the average telephone number in the phone directories of say Manhattan and East Anglia respectively; they are not true measures of the mean global temperature (surface or lower tropo) of planet earth. If they were properly sampled, it would not matter if one of the stations gets a new Weber grill.
By the way; A really nice bunch of work there Willis; I’m going to have to print it out and try to digest it to see what you’ve uncovered

percysunshine
December 8, 2009 10:26 am

Don’t know if this has been pointed out yet, but it is possible that stupidity is still the culprit. The station at the pub 500 km away could have been used to calculate the correction factor. If the process was automated, it would be one of the ‘nearest 5’. Of course, that is a methodology flaw, and all of the corrections are suspect for any sparse data.

Mike Haseler
December 8, 2009 10:27 am

Jean Parisot: “Would a going forward position be to use a well understood set of measurements and monitor changes for the next X years and see which models (hypotheses) are working.”
The key to any analysis is good data. That data simply doesn’t exist, because the local temperature readings were never taken with the intention of being used to estimate global temperature to anything like the present requirement, and the proxy record is far, far worse.
Like the Irish say: “If I were going there, I wouldn’t start from here!” But we are here.
The first instinct may be to rip up all the suspect temperature sites, throw out all the tainted scientists and start again. But if we did that it would be many decades before we settled this issue once and for all. But we sure need to start spending serious money on temperature measurement (I have to express potential self-interest as I used to design precision temperature control equipment). We can’t shirk, we have to get the best possible coverage of temperatures globally, in purpose-made sites free from the taint of urban heating – and certainly free from the taint of political meddling! At least that way we will know for certain what the climate is doing from that time on.
Then we have to go back to manually measuring the temperature!!! Yes, we’ve got to start measuring the temperature without automatic equipment, because the only way we will know the bias caused by automation is to get really accurate results for the discrepancy between manual and automated measurements. Similarly, we have to get some very accurate measurements and models so that we can back-calculate the urban heating effect and take this out without some hare-brained enviro-political-scientist just picking a figure at random. (Grrrr .. makes my blood boil!)
Then we have to get a hell of a lot more proxy measurements, and we have to find ways to calibrate the proxy relationships so that we have a scientifically rigorous method based on extensive laboratory testing (i.e. for tree proxies, we might have to grow a few trees in labs at historic CO2 levels – even that is difficult). I’m sure there are good dendrochronologists out there; take out the “hide the decline” labs, give the rest some decent money without the political bias, and … hopefully we’ll bring back the science and find out what really has been happening to the climate.
Basically, if we spent a fraction of the money the politicians wanted to tax us all on getting decent data then perhaps we wouldn’t be in this mess. It really hurts me to say this, but we have to learn all the lessons of Iraq:
1. Science shouldn’t be “sexed up” to fit the political needs.
&
2. Whilst it might seem sensible to oust all the Ba’athists (aka Climategate scientists), the only workable solution is to find a way to include these people in the new “regime” – minus the worst offenders; but unless we include the foot soldiers of the current regime, we’ll get the chaos we have right now in Iraq.
And I have to say it: the next time someone says “we are in real imminent danger of WMD”, “the evidence is unequivocal”, can someone please remind the press that the last lot of WMD was just a figment of the imagination of a few oil-grabbing neocons.

December 8, 2009 10:27 am

A reminder when hearing “news” analyses: the emails are a bright, shiny butterfly, a magician’s diversion. The indisputable scientific misconduct is in the software and data management.

George E. Smith
December 8, 2009 10:31 am

I did just leave a comment, and it totally vanished immediately, when I posted it.

doug
December 8, 2009 10:31 am

Steve McIntyre showed a similar adjustment for the entire USHCN data set nearly three years ago. Original raw data removed, “value added” data replaces it, and nearly every correction is biased towards a warming trend.
http://www.climateaudit.org/?p=1142

JJ
December 8, 2009 10:32 am

Smokey,
“All of your “entirely possible” comments can be resolved by the full and complete disclosure of all of the raw and massaged data, and the hidden methodologies and “adjustments”, that the Team uses to come up with their scary AGW conclusions.”
Exactly.
So blog entries like this one should not conclude with unsupported accusations; they should conclude with renewed demands for the data and methods. That is legitimate and powerful: you aren’t giving us the data, and here are some very suspicious results that make it look like you might be hiding something more than the decline.
Going further afield into unsupported claims of ‘blatantly bogus’ and ‘false warming’ is, as you put it, red-faced arm waving. And claims of ‘indisputable evidence of preconceived notions’ yada yada yada is simply a lie itself. That’s Teamwork. Leave that to the experts.
JJ

percysunshine
December 8, 2009 10:34 am

I keep remembering that Enron’s fallback position was that they had Arthur Andersen doing their external audits. Meanwhile, AA was rapidly shredding files and deleting emails.

doug in colorado
December 8, 2009 10:39 am

JJ, if a site selected at random shows this kind of manipulation, it’s not unreasonable to postulate there is improper conduct in the handling of other data, particularly since the crew trying to use this to cripple the world’s economies have repeatedly chosen to hide the raw data and the manipulations, up until the Blessed Saint Whistleblower of East Anglia put it all on the Web. Yes it’s possible this station was handled appropriately…it’s also possible that Jesse James had bank accounts at a lot of midwestern banks where he made “urgent withdrawals”…it’s possible the core of the moon is really green cheese. But the burden of proof rests with the Warmenistas, who by their conduct, have already shown themselves untrustworthy, and Willis Eschenbach has done a great service by screening down to show what was done to the numbers at Darwin.
Check a few more randomly chosen sites using the same methodology…that’s the scientific way. You tell us what you find.

George E. Smith
December 8, 2009 10:40 am

Well I guess it finally reappeared. Note that the extreme surface temperature range on earth, other than on volcanic lava flows or in boiling mud pools, goes from about +60C on the hottest tropical desert surfaces (maybe higher) down to about -90C at Vostok Station (close to the lowest official temperature on record).
Why don’t they start plotting GISStemp on that scale, from -90 to +60, instead of -1 to +1 deg C? Then we can all see how insignificant it is.

December 8, 2009 10:46 am

Geoff wrote: “If you can tell me how to display graphs on this blog I’ll put up a spaghetti graph of 5 different versions of annual Taverage at Darwin, 1886 to 2008.”
Upload them to TinyPic.com then click on the uploaded image until you see a View Raw Image link to click on and post that URL.

Deb
December 8, 2009 11:02 am

It’s good to have this blog to look at after seeing the BBC’s full page of Copenhagen coverage. “Earth headed for 6 C of Warming” and “Our Warm Globe,” beh. I’m all for not doing stupid things to the environment, but come on people, can we at least take a tiny peep at the data before assembling 20K people together to throw money around and ruin our collective economies? Thanks to everyone who puts in the time and effort to keep this site running and to keep posting things like this analysis.

December 8, 2009 11:13 am

It is perhaps the only truly transparent component of the current national administration that the Manifesto Media… continues to ignore these issues while they obfuscate. Is it chilly in Copenhagen?
MM

Dave J
December 8, 2009 11:14 am

I appreciate your efforts Willis.

backscatter
December 8, 2009 11:17 am

Thank you for this.
I’m wondering if this is an example of what actually needs to happen for *all* of the monitoring stations’ datasets — quite literally, we need an audit of the datasets with exactly this kind of public disclosure and debate on how the homogenized or “value added” data is arrived at.
Only when there is widespread agreement on the data itself can reasonable conclusions or predictions be made.

Costard
December 8, 2009 11:28 am

JJ is absolutely correct. It’s not pleasant to hold yourself back from advancing ahead of the facts, but as per strategy, it’s a bad idea to abandon your supply train. Leave inference to Mann, Jones and the rest. One day they will hang for it. If we are reasonable, measured and patient, we will not be ignored.

Viktor
December 8, 2009 11:31 am

Thank you, Willis. I greatly appreciate the clarity of your writing.

Richard M
December 8, 2009 11:37 am

I think Gary Pearse (07:20:39) and john (09:00:05) have the right idea. I’d like to see a parallel effort to the surface stations project. Maybe something like stationAdjust.org where a procedure could be outlined and others could use it to investigate other locations. Maybe it could be seeded by Willis laying out the steps he used to create this article.
I think a lot of people would be more than happy to help if a common approach was documented.

Kitefreak
December 8, 2009 11:49 am

Mr. Willis Eschenbach:
Brilliant article – clear, understandable, incisive. Lots of hard work and dogged persistence. Many thanks and great respect to you. Sure helps explain the origins of the ‘global temperature record’. And it also highlights the genuine extreme difficulty and complexity of compiling such a thing.
I agree with previous posters who say we should work with the ‘rawest’ data possible – down to the level of daily min/max temperatures, however and wherever measured. If we could put together a global database of that raw data, by means of a global collaborative effort, then that would be a real foundation upon which ‘citizen researchers’ could then build. Much like AW’s surfacestations.org project, in terms of the ‘citizen data gathering’ aspect. We are legion, after all.
I think in such a database a re-sited weather station should be given a new weather_station_id – new site, new ID. Obviously a lot of work making paper records digital, but hey, many hands make light work. There was a weather station here in Tentsmuir Forest near Tayport in Scotland until recently. I should start here. Maybe I’ll do that.
We need to use the Internet for important things, while it is available in its present form. In the future, as in the past, we may not be able to do that sort of thing quite so easily. And they do seem to be intent on rewinding history, and sending us back to the dark ages with these fictions they have brainwashed our children with. It makes my blood boil, it really does.

Editor
December 8, 2009 11:50 am

I had found a few horrors and posted a GIStemp “Hall of Shame” a while ago:
http://diggingintheclay.blogspot.com/2009/11/how-would-you-like-your-climate-trends.html
According to Steve McIntyre (whose page on the GISS adjustments I have linked to), of 7364 sites, 2236 are positively (correct direction) adjusted for UHI, but a whopping 1848 (25% of the total) have a negative adjustment that increases the warming trend. IMHO there’s a lot of your global warming!

December 8, 2009 11:53 am

Willis’ analysis is good.
There is another issue with the ‘raw’ data which Joe D’Aleo and Ross McKitrick have been stating for years, and that is the apparent link to station numbers. Note that there is, of course, a close tie between the number of stations and global coverage.
Judging by the quality of the code from CRU, if similar or the same was used to create the data from sparse stations, it would seem that D&M could well be right.
Some while back I took the data from McKitrick’s website and used the correlation between the station numbers and raw temperature to back out the effect of the station number variation. The result was surprising. It may just be a coincidence; that cannot be ruled out yet. My corrected values show a trend midway between the trends of satellite data over the period of overlap.
http://homepage.ntlworld.com/jdrake/Questioning_Climate/userfiles/Influence_of_Station_Numbers_on_Temperature_v2.pdf
Trends for overlap period:
Surface Stations (Raw) 1.03°C per decade
Surface Stations (Corrected) 0.112°C per decade
RSS Satellite 0.157°C per decade
UAH Satellite 0.093°C per decade
NB: I wrote the piece in two sections, so don’t stop half way.
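For what it’s worth, the “backing out” step described above is just an ordinary regression and removal of the fitted component. A rough sketch of the idea (my own illustration, not the exact method in the linked PDF; the input files are hypothetical):

import numpy as np

years = np.arange(1900, 2009)                   # hypothetical period
n_stations = np.loadtxt("station_counts.txt")   # hypothetical: stations reporting each year
raw_temp = np.loadtxt("raw_global_mean.txt")    # hypothetical: raw global mean each year

# Regress raw temperature on station count, then subtract the fitted
# station-count component, leaving a "corrected" series.
slope = np.polyfit(n_stations, raw_temp, 1)[0]
corrected = raw_temp - slope * (n_stations - n_stations.mean())

print("trend raw:       %+.3f C/decade" % (np.polyfit(years, raw_temp, 1)[0] * 10))
print("trend corrected: %+.3f C/decade" % (np.polyfit(years, corrected, 1)[0] * 10))

Whether the station-count signal is causal or coincidental is exactly the open question, but the mechanics of the correction are that simple.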

Basil
Editor
December 8, 2009 12:02 pm

Martin (09:36:00) :
Willis,
I don’t know where the NOAA URL is that gives individual station data (as opposed to data sets) so I went to the NASA/GISS site with the individual stations data.
http://data.giss.nasa.gov/gistemp/station_data/
The Darwin raw data there (http://data.giss.nasa.gov/cgi-bin/gistemp/findstation.py?datatype=gistemp&data_set=0&name=Darwin) seems to correspond with the raw data you showed in your graphs. But the homogenized data (at http://data.giss.nasa.gov/cgi-bin/gistemp/findstation.py?datatype=gistemp&data_set=2&name=Darwin) does not look like what you show as homogenized data.
Could you give a URL for the site where you got the data (and indicate which file if it is an ftp or otherwise multiple file listing)?
Thanks.

I’m curious about this myself. I also went to the GISS site, and noticed that the “homogenized” data for Darwin basically leaves out all the older, cooler stuff. Now, this is GISS, not GHCN, so I don’t know if that matters. But with GISS, it looks like they may have taken the upward adjustments for Darwin from GHCN, but left out the early years.

JJ
December 8, 2009 12:04 pm

Doug,
“JJ, if a site selected at random shows this kind of manipulation, it’s not unreasonable to postulate there is improper conduct in the handling of other data,”
That is not correct. It is not only unreasonable to ‘postulate improper handling of other data’, it is absolutely unreasonable for you to assume that there is improper handling of these data, based on the info we have on hand. When you do that, you are acting like the men in the emails.
Once again, there is nothing necessarily wrong with these adjustments. If you want to claim that there is, then the burden of proof is on you. That they haven’t yet supported their claims in sufficient detail does not relieve you of the responsibility to prove your own claims. To the contrary, it is wrong for you to make unsupported accusations, and in the long run supremely stupid.
Allow yourself to rise to worms, and you will eventually find a hook.
JJ

APE
December 8, 2009 12:06 pm

Willis fantastic post!
I haven’t read through all the comments, so please excuse me if someone has already brought this up.
The adjustment line shown by Willis looks strangely (and disturbingly) similar to a scaled valadj array! Which, BTW, also looks like the adjustments put on USHCN: http://www.ncdc.noaa.gov/img/climate/research/ushcn/ts.ushcn_anom25_diffs_urb-raw_pg.gif
Triple Yikes! Talk about deja vu!
Anthony perhaps a separate post on this strange similarity in adjustments?
APE

steven mosher
December 8, 2009 12:07 pm

Willis,
I think you are seeing the results that occur when data is thrown into the meat grinder. That is, an analyst constructs what he thinks is a viable adjustment procedure. He applies it to a few cases. He views the results and concludes that the “adjustment” makes sense. Then he throws all the data at that adjustment code. When the end result confirms his bias, that the world is warming, he takes this as evidence that his adjustment “works”. There are two distinct approaches to this problem. The first approach is what I would call a top-down approach. Understand the problem of adjustment. Construct an adjustment approach and apply it to all data. A few spot checks and it’s good to go. Theory dominates the data. The other approach, your approach, Anthony’s approach, is to tackle the problem from the bottom up. A station at a time. Question: which station in GHCN shows the largest trend over the past century after adjustment? That might be an interesting approach to a systematic station-by-station investigation of the adjustment procedure.
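That ranking would be easy to automate once the per-station series are parsed out of the GHCN v2.mean and v2.mean_adj files. A rough sketch, assuming the parsing into dictionaries has already been done elsewhere (the names here are my own, hypothetical):

import numpy as np

def trend(series):
    # series: dict of {year: annual mean}; returns deg C per century.
    years = np.array(sorted(series))
    temps = np.array([series[y] for y in years])
    return np.polyfit(years, temps, 1)[0] * 100.0

# raw and adj: hypothetical dicts of {station_id: {year: annual mean}},
# built elsewhere from v2.mean and v2.mean_adj respectively.
def rank_by_adjustment(raw, adj, min_years=50):
    ranked = []
    for sid in raw:
        if sid in adj and len(raw[sid]) >= min_years:
            ranked.append((trend(adj[sid]) - trend(raw[sid]), sid))
    return sorted(ranked, reverse=True)  # most warming added by adjustment first

The head of that list is exactly the queue for the “adopt a station” idea mentioned elsewhere in this thread.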

December 8, 2009 12:09 pm

Great work, Willis. As with E.M.Smith’s sterling work on GISS, it’s only when the data is dissected thermometer by thermometer that the real story emerges. Real science….
I’ve suggested (having an IT/database/accounting background) that the re-done temperature records should have a transactional basis. Every data point (station/date/time/transaction # as the index, temperature as value) should be stamped with who/what/when/type etc., to provide a full audit trail. And most importantly, this leaves the raw data visible (as the first transaction). If we mock up a single measurement of this sort, and assume that the 2.4 adjustment is applied to that data point, it looks like this:
station/date/time/transaction #/temperature/who/what/when/type
Darwin/20091208/0930/30.4/WayneF/Raw data/200912090100/TEMP
Darwin/20091208/0930/2.4/GHCN/Kludge Factor/200912090101/ADJUST
This sort of transactional record is the heart and soul of accounting systems, and if stored in a SQL database, renders searches trivially easy: get the 2009 raw temp records for Darwin:
Select temperature from temperaturestable where type = ‘TEMP’ and station = ‘Darwin’ and date between 20090101 and 20091231
That would give real accountability, identify at a glance what adjustments were being applied, by what process, and to what data points, and facilitate research and audit.
Perhaps we should be thinking about an ‘Audit the Temp’ movement, because Willis’ analysis certainly points to some dark and dirty data adjustments.
And I can’t help thinking that this is a Piltdown man moment ….. a reconstructed and apparently plausible temperature record which, when looked into, is simply a shambles.
Audit the Temp!
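If anyone wants to play with the idea, here is a bare-bones sketch using SQLite from Python (table and column names are just my mock-up of the transactional scheme above, nothing official):

import sqlite3

con = sqlite3.connect("audit_temps.db")
con.execute("""CREATE TABLE IF NOT EXISTS readings (
    station TEXT, date INTEGER, time INTEGER, txn INTEGER,
    value REAL, who TEXT, what TEXT, stamped INTEGER, type TEXT,
    PRIMARY KEY (station, date, time, txn))""")

# The first transaction is always the raw reading; any adjustment is a
# separate row, so the raw data stays visible forever.
con.execute("INSERT OR REPLACE INTO readings VALUES (?,?,?,?,?,?,?,?,?)",
            ("Darwin", 20091208, 930, 1, 30.4, "WayneF", "Raw data", 200912090100, "TEMP"))
con.execute("INSERT OR REPLACE INTO readings VALUES (?,?,?,?,?,?,?,?,?)",
            ("Darwin", 20091208, 930, 2, 2.4, "GHCN", "Kludge Factor", 200912090101, "ADJUST"))
con.commit()

# The 2009 raw temps for Darwin -- the audit trail makes this trivial.
rows = con.execute("""SELECT date, value FROM readings
    WHERE type = 'TEMP' AND station = 'Darwin'
    AND date BETWEEN 20090101 AND 20091231""").fetchall()

Every adjustment is then just another row with its own who/what/when, and reconstructing “raw only” or “raw plus approved adjustments” is a one-line query.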

hillbilly76
December 8, 2009 12:12 pm

To those many other novices like me searching for the truth I recommend these sites. “What the Stations Say” at http://www.john-daly.com/station/stations.htm which has a global map showing locations, details and records of many ground stations. There is also a “Stations of the Week” series at http://www.john-daly.com/stations.htm . Like everything John did it is all explained in simple easy to follow terms and layout.
The following postscript by John L. Daly highlights many AGW sceptics’ concerns about GISS and CRU reliance on, and manipulation of, such data.
“The whole point of this investigation of just one station is that although Darwin shows an overall cooling due to that site change, this faulty record is the one used by GISS and CRU in their compilation of global mean temperature. More importantly, most of the stations used by them will have similar local faults and anomalies, rendering any averaging of them problematical at best. The best statistical number-crunching cannot eliminate these errors.
And why pick on a cooling station – on a site skeptical of `global warming’?
Simply this – the vast majority of stations are affected by urbanisation with inadequate urban adjustment by GISS, as demonstrated elsewhere on this site. Even where urbanisation is not at issue, rural stations also have serious local problems, discussed in more detail in “What’s Wrong with the Surface Record?”. (One such station here in Tasmania is also featured on this site – “Hot Air at Low Head”)
The net effect is that since most such local errors result in warming (and not cooling as in Darwin’s case), the result will be an apparent global warming in the surface record where none may actually exist. This is why the satellite record is a much more reliable guide to recent temperature trends.
Darwin was picked out here simply because its record was visibly faulty even from the data itself. But if Darwin can `slip through the net’, what happens to those thousands of faulty station records where the faults are less visible and obvious? As Ken Parish pointed out above,
“All the above historical information emphasises just how much you need to know about the history and surrounding circumstances of local surface weather readings in order to draw any meaningful long-term trend conclusions from them, especially when you are dealing with an apparent global warming trend of only around +0.25°C since 1976”.

Keith
December 8, 2009 12:13 pm

Temperature at Copenhagen at 1800 UTC was 6 Celsius. Not sure if that’s homogeneous, pasteurised or plain raw – it came from the British Met Office web site…

December 8, 2009 12:13 pm

Oh, fergot the transaction# in my little mock-up. Mea culpa, mea maxima culpa.

MIke
December 8, 2009 12:14 pm

Why wouldn’t acceleration be used as the measuring tool instead of measured value? I am no statistician, just a lowly applied math grad, but hear me out. All stations around the world are going to have numerous adjustments made for various reasons not associated with actual temperature change. I believe that each station would, in general, have a log that details the adjustments and why they were made. However, we already know that these adjustments are highly subjective and are often done with offsets that exceed the supposed global increase. Therefore, these adjustments need to be removed from the data set entirely. Since the logs detail the date that each adjustment was made, we are able to accurately remove those points of the data set that would introduce an invalid temperature acceleration. Having removed such data, we are now unable to measure the actual temperature. But we are still left with data sets that show the acceleration of the temperature in either a positive or negative way, and we have removed the inaccurate acceleration data. Given that we can consider most thermometers to have been relatively accurate across their range, and given that we must accept the accuracy of the dates in the logs, we have an accurate representation of the temperature acceleration or deceleration over time. This then would allow us to measure the positive or negative gradient over time. If such analysis was applied to the entire data set of all stations, I think it would be possible to produce a largely accurate representation of the temperature record that completely excluded all man-introduced adjustments. This would require the logs and the original temperature readings. Do we have those yet?
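If I follow the idea, the mechanics would be something like the sketch below: difference the series, throw away any difference that spans a logged adjustment date, and re-integrate what is left. (My reading of the proposal, with hypothetical inputs; zeroing the flagged differences crudely assumes the true year-on-year change at each break is zero.)

import numpy as np

def deadjusted_series(years, temps, adjustment_years):
    # First differences = year-to-year change ("acceleration" in the
    # sense above) of the record.
    diffs = np.diff(temps)
    # Drop any difference that spans a documented station adjustment,
    # since that jump is man-made, not climate.
    keep = np.array([y not in adjustment_years for y in years[1:]])
    diffs = np.where(keep, diffs, 0.0)
    # Re-integrate to recover a series free of the logged step changes.
    return temps[0] + np.concatenate(([0.0], np.cumsum(diffs)))

years = np.arange(1900, 2001)            # hypothetical
temps = np.loadtxt("station_raw.txt")    # hypothetical annual means
cleaned = deadjusted_series(years, temps, adjustment_years={1941, 1963})

As the comment says, this stands or falls on having the station logs and the original readings.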

Johnny Climate
December 8, 2009 12:17 pm

You’d think if AGW people were as dedicated to science as they say, they’d be picking apart all this stuff like Willis Eschenbach just did. But they’re not.
Science be damned.

December 8, 2009 12:23 pm

Holy cow. I’m starting to doubt that there’s any global warming at all, man-made or otherwise. Especially now that they’re telling us that 2009 is one of the five warmest years when I know for a FACT that it’s the coldest winter and summer I’ve ever experienced in La.

Britannic no-see-um
December 8, 2009 12:26 pm

Oh to be a no-see-um on the wall as the rebuttal is feverishly discussed, air thick with expletives.
Data war has commenced.

Kitefreak
December 8, 2009 12:26 pm

vjones (11:50:15) :
I had found a few horrors and posted a GIStemp “Hall of Shame” a while ago:
http://diggingintheclay.blogspot.com/2009/11/how-would-you-like-your-climate-trends.html
************************************
Nice link.
Splicing, adjustment, homogenisation. Should not be allowed any longer. We need to see the data ‘in the raw’, then WE can analyse it and WE can use the medium of the internet to ‘peer review’ it. Then WE can tell Science and Nature and National Geographic to get tae Falkirk. And I did use to respect those publications….

David Brewer
December 8, 2009 12:26 pm

Willis – you may get a few more clues from an old exchange about the Darwin record on John Daly’s site here http://www.john-daly.com/darwin.htm

December 8, 2009 12:28 pm

JJ (12:04:45),
Promoters of the AGW hypothesis make their claims based on original methods and data that they either refuse to release, or that they claim has been thrown out.
This directly violates the Scientific Method, which requires the opportunity for others to falsify the AGW hypothesis by replicating the exact methodologies and data that were used to construct the hypothesis.
If the original raw data is either missing or is not provided, then the AGW conclusion is nothing more than an opinion. It is not a scientific hypothesis, it is a conjecture.
The burden is always on those proposing a new hypothesis, not on those questioning it.

Mickle
December 8, 2009 12:29 pm

Darwin airport’s history section says that the airport is now 311 hectares; surely that’d need a large -ve adjustment for UHI?? Though I’m sure it’s not all tarmac, just lots…

Lincolntf
December 8, 2009 12:32 pm

Thank you for this and so many other articles about Climategate. You may very well be writing the only honest contemporary account of the scandal.
Keep it up and you might need to write a book by the time it’s through.

mustapha mond
December 8, 2009 12:34 pm

I have long suspected this, just looking at the GISS website. It made no sense. Good work Willis, and I have wondered why S. McI has moved away from this since looking at the ROW. Can you create something similar to figure 2 in your post using all global station data that cover 1900-2000 (of which there are few, I know)? They definitely don’t look like the IPCC, but if anything they should be weighted heavily – i.e. they are our best data/least need for adjustment, apples vs apples, etc.
I also noticed in the e-mails that Gil Compo @ NOAA was having troubles with the 1910-1940 data reconstruction they are attempting, where it wasn’t matching accepted data. It’s good to know they are (were, at least) double checking this.

Chris
December 8, 2009 12:43 pm

Great work. Hopefully a climate scientist will pick up this ball and run with it to produce a “peer reviewed” paper that documents this, and hopefully other “homogenizations” of the raw data.
(In reading through some of the EPA’s responses to comments on their endangerment finding, they deflected many comments on the [lack of] data quality with the excuse that “that blog post is not peer-reviewed”)

December 8, 2009 12:45 pm

Excellent piece of work Willis!
In addition to the stations/locations you considered, there are a lot of Aboriginal missions and mines (both abandoned and current) scattered around the Northern Territory – including many within 500 – 750 km of Darwin e.g. Rum Jungle, Oenpelli, Nabarlek, Ranger/Jabiru etc., etc.
From personal knowledge of these locations I know some had/have been collecting daily temperature and rainfall data for quite long periods. Not all reported their data to BOM, and even for those that most likely did (the missions, in my experience) it seems not all sites’ data can be accessed via BOM (I’ve tried). However, in most cases it would not be too hard to get the data from the missions themselves or from the mining companies or other bodies like NT Dept. of Mines, Ansto etc.
By gathering more data from such ‘unofficial’ sources it would be possible to thoroughly ‘interrogate’ the IPCC temperature record for the Northern Australia region shown in Fig. 9.12 of the UN IPCC Fourth Assessment Report, as shown in the article above.
Perhaps using all ‘official’ sources plus unconventional sources (such as mine sites etc.) it would also be a very useful exercise to ‘interrogate’ a suite (say 5 – 10) of such IPCC test regions globally, as a very powerful test of the integrity of the IPCC process.

Editor
December 8, 2009 12:45 pm

Kitefreak (12:26:22) :
Thanks! And I agree with you.

Indiana Bones
December 8, 2009 12:46 pm

Rhys Jaggar (01:02:48) :
Just damned well said, sir. 100% support.

Kitefreak
December 8, 2009 12:49 pm

Wayne Findley (12:09:13) :
I’ve suggested (having an IT/database/accounting background) that the re-done temperatuture records should have a transactional basis. Every data point (station/date/time/transaction # as the index, temperature as value)
***********************************
I suggest
date
minimum temp
max temp
longitude
latitude
as the key (unique identifiers) to the central database table.
Then, who made the claim, where they got the data from, personal notes from the claimant, station history, etc., linking to further database tables which
MAKE THE WHOLE PROCESS TRANSPARENT
AND LEAVE A COMPLETE AUDIT TRAIL.
Sorry for shouting.

steven mosher
December 8, 2009 12:56 pm

Willis,
I like the adopt a station idea. For maximum effect I would suggest that the stations be ordered according to the trend they show after adjustment.

JJ
December 8, 2009 1:02 pm

Smokey,
“The burden is always on those proposing a new hypothesis, not on those questioning it.”
Exactly.
So long as you are questioning you have nothing to prove.
On the other hand, the moment that you start making claims such as: ‘These results are blatantly bogus.’ ‘These adjusted temperatures are false warming’, ‘These researchers made illegitimate adjustments to make the data match their preconceived notions’ … at that point the burden of proving those claims rests on you.
Willis cannot prove these claims. He should not have made them. Better he should have stopped at the questioning – ‘Hey, what’s up with these huge adjustments?’ ‘How do you justify that change, and not this one?’ – and demanded answers, rather than immediately rushing on to make unsupported claims of his own.
He was wrong to do that, particularly given that the claims are in fact accusations of impropriety. And beyond that it was wrong, it was not smart. We all agree that there are legitimate adjustments that may be made to raw data. If these adjustments are legitimate, crow is on Willis’ dinner menu.
Whenever you find yourself saying ‘I don’t understand why …’ you need to keep in mind that you are ripe to be schooled.
Stick to what you can prove – this is our message to the Team. Goose.Gander.Sauce.
JJ

David Snyder
December 8, 2009 1:21 pm

This must be what they mean by “Value Added” Data.

Nick Stokes
December 8, 2009 1:22 pm

Willis,
I had the same experience as Martin (09:36:00). The homogenised data plotted from the GISS site doesn’t look like your Darwin 0 data. You’ve referenced everything else very well – could you please give a reference for this data, as shown in your final graph (red in Fig 7)?
[REPLY – He’s not showing homogenized NASA/GISS data, he’s showing homogenized NOAA/GHCN data. ~ Evan]

Editor
December 8, 2009 1:28 pm

@ weschenbach (12:38:53) and John Goetz (08:13:02)
Poking around in the GISS data and GHCN data, you see plenty of missing months – they seem happy to toss out data when it suits and yet hold on to other data for convenient reasons – like no data for years, then suddenly the station starts reporting again and you get a few much warmer points on a graph.
I have also found a case where the station officially stopped reporting according to the NCDC station locator, but GISS has data clearly filled in for up to 10 years after, very much warming the location. I’m gathering data for a post on that one.

tom
December 8, 2009 1:37 pm

Why am I not surprised to see data “adjustments” and all kind of fudging of data being the hallmark of all “story lines” supporting AGW?
Back in the 90’s when computer models were first used to predict “catastrophic global warming”, the models could not be used for “hindcasting”, i.e. to simulate past known temperatures. The results were way out of line with known past temperatures. Mount Pinatubo erupted in the early 90’s, and the subsequent year or two was significantly cooler because the volcanic eruption injected a significant amount of aerosols into the upper part of the atmosphere. This gave the modelers an idea. Pollution was the reason why models could not do “hindcasting”. Unfortunately there was no worldwide pollution data available. No problem for the modelers. They just introduced enough “pollution” into their models to make them simulate past temperatures accurately! While the underlying idea that aerosols reduce global warming was correct, introducing it as an adjustable variable to make their models simulate reality accurately is not science, it is data fudging. I would say that data fudging is endemic to global warming “science”.

December 8, 2009 1:49 pm

I don’t understand why certain people are so biased into believing the global warming hoax. The reality is that there are a lot of people that think a one world government and currency would be a good thing.

Nick Stokes
December 8, 2009 1:50 pm

Willis,
Following my request for a reference (13:22:23), I now see that you have given a source here weschenbach (12:57:46). But do you have an explanation for why the plot as shown on the GISS interactive site for homogeneity adjusted data for Darwin Airport does not appear to be rising (as your Fig 7 is)? In fact, if you compare their plots of raw and homogeneity-adjusted for Darwin Airport 1941-2009, there’s virtually no change at all.

December 8, 2009 2:00 pm

Willis, I managed to confuse myself with the two scales that you have on Darwin zero. The right hand scale seems to be the “amount of adjustment” scale. And the left seems to be the temperature anomaly scale. But the “amount of adjustment” scale is half the change of the anomaly scale. This makes it appear that the amount of adjustment is twice as large as what is seen in the resulting anomaly. I’ve unconfused myself, but it might be worthwhile to use the same relative scale in the future.
Still, fine job on your part. Thanks.

Martin
December 8, 2009 2:03 pm

Willis (and parenthetically Evan)
I understand that the homogenized data shown is NOT the GISS homogenized data. And you did label it GHCN in the text. But what I — and I think Nick — don’t know is the site where you got it. I found the raw data in the GHCN v2.mean.Z file. (I checked the two long series — 0 and 1 — and they have the same values as the GISS raw data.) But where is the GHCN homogenized series data?
I got the raw data at the ftp site:
ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v2/
I just couldn’t find anything around there for the homogenized data. Help?
Thanks.

SABR Matt
December 8, 2009 2:07 pm

I don’t think this “proves” that someone went in and adjusted everything by hand to force a warming trend on the data… it does, however, prove that – at a minimum – the statistical techniques they used are very poorly conceived.
I saw a lecture less than a month ago that talked about GISS’s homogenization techniques. There are time-series driven statistical techniques in common use all around the world (not just in climate studies) that try to keep jumps in the data from influencing analysis. There’s a +/- system that looks at the residuals above a trend line to see if there are unusual runs of above- or below-average points and makes stepwise adjustments to stop that from happening. The problem may be that the weather/climate has significant auto-correlation (it may not be uncommon for 20 years to be colder and 20 years to be warmer based on oscillations in solar activity, the PDO, the AMO and volcanism).
I think what we have here is a group of people starting with a few base assumptions and choosing the wrong methods for their data analysis.
I think the way to adjust the temperature record to account for inhomogeneities is to do historical research on each data site and only make an adjustment if you can find a physical explanation. That’s how a scientist should proceed…you need to begin with first principles and not let statistical methods run away with your data unchecked.
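For illustration, the kind of step test being described can be boiled down to a few lines: scan candidate break years and flag the one with the biggest mean shift relative to the noise. This is a crude cousin of tests like SNHT, my own simplification rather than anyone’s production code:

import numpy as np

def biggest_step(anoms, margin=5):
    # anoms: annual anomalies relative to a reference series. The first
    # and last `margin` years are skipped, as the CRU routine quoted
    # later in this thread also skips first5/last5.
    best_year, best_stat = None, 0.0
    for k in range(margin, len(anoms) - margin):
        left, right = anoms[:k], anoms[k:]
        pooled = np.sqrt((left.var(ddof=1) + right.var(ddof=1)) / 2.0)
        if pooled == 0:
            continue
        stat = abs(right.mean() - left.mean()) / pooled
        if stat > best_stat:
            best_year, best_stat = k, stat
    return best_year, best_stat  # index of the candidate break, and its size

On an autocorrelated series driven by the PDO, AMO or solar oscillations, a test like this will happily “find” steps that are really natural multidecadal swings, which is precisely the concern above.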

David L. Hagen
December 8, 2009 2:22 pm

Amazing discovery Willis. Keep up the good work.
PS See NIST for Subsidiary SI Units

Celsius temperature: degree Celsius (°C), expressed in terms of the kelvin (K)

(not “Celcius”).

December 8, 2009 2:22 pm

Martin:
“I just couldn’t find anything around there for the homogenized data. Help?”
Wouldn’t that be the file v2.mean_adj.Z (two lines down at /pub/data/ghcn/v2/)?

jaypan
December 8, 2009 2:31 pm

This is exactly the way to get them.
Request access to as much raw data as possible and show how it has been tweaked.
Nobody has a right to “own” and hide such raw data.
Least of all must it be allowed to change the data at will, then present the uneducated public with an “optimized” version and suggest what to do.
Such processed data and results are nothing but garbage.
To do it this way is a crime, not even close to any kind of science.

Roger Knights
December 8, 2009 2:32 pm

Willis: “They’ve just added a huge artificial totally imaginary trend to the last half of the raw data!”
JJ: “You dont know that. You should not claim to know that which you do not. That is Teamspeak, leave it to the Team.”

I agree. McIntyre’s caution in this regard is the model to follow. We must not open ourselves to counterpunching by making roundhouse swings.

Doug
December 8, 2009 2:40 pm

What really makes me laugh are news articles that say:
“The past decade is the warmest decade in 40 years” or “the average temperature of this decade is warmer than the previous decade”
While both are correct, it hides the fact that no warming has taken place this decade. Heck, it could even have cooled significantly in this decade and the statement would still be correct given the rapid warming in the 90’s. They are very misleading statements, but technically correct.

Gary Hladik
December 8, 2009 2:51 pm

Good things about a Willis Eschenbach article on WUWT:
1. Topical
2. Well-organized
3. Educational
4. Errors admitted and corrected
5. Willis is incredibly good-humored and patient
6. He tries to answer all questions
Bad things about a Willis Eschenbach article:
1. Some of the longest d….d comment threads on WUWT!

tallbloke
December 8, 2009 2:57 pm

Willis, I posted a link to your piece on a generally warmist site I’ve been arguing on and it got this criticism. I don’t agree with it, the guy can’t spell your name for a start… but thought it might help you tighten your argumentation to make such dismissals more difficult. If you feel like responding to it here, I’d like to post it there if that’s ok with you.
1. Eisenbach graphs the unadjusted data, and shows that it doesn’t match the adjusted data.
2. Eisenbach then asks why adjustments were made to this data, and then proceeds to completely fail to answer his own question by taking a sort of vague, vanilla explanation of why adjustments are made and stating that that one paragraph does not seem to apply to the record for this particular airport.
3. The train has already gone off the tracks at this point. . . a more meticulous person might have explored the possibility that the adjustments were possibly made to account for other things. Eisenbach seems to just jump to the conclusion that they must have been pulled out of a hat.
4. Eisenbach then makes a motion toward throwing them a bone by making one adjustment of his own. However, he does not give any clear explanation for this adjustment – anyone who was following his line of thought up to this point should have to conclude that he just pulled it out of a hat.
5. He follows that up with an explanation for why he wouldn’t do any further adjustments that shouldn’t have anything to do with the way climatologists normally decide to do adjustments. The decision on how to handle data like this should be made for consistent, quantitative reasons, never because someone’s just eyeballing a graph and tweaking it until they think it looks right.
5a. In the process of doing the above, he does an interesting thing: He quoted a GHCN paragraph that indicated that adjustments are made for multiple factors, including but not limited to station location. He agreed that that made sense, so presumably he thought all of it made sense. However, he followed that up with a detailed analysis that is based on the presumption that station location is the only valid reason to make an adjustment.
So now we’ve hit the second place where Mr. Eisenbach can’t even seem to agree with himself on how things should be done, and we’re still only halfway through the post. . .

Kitefreak
December 8, 2009 3:03 pm

MIke (12:14:18) :
“This would require the logs and the original temperature readings. Do we have those yet?”
We all need to do this.
“We all need to get it together”.
The data, that is.
Get that together, and we’ve got something we can work with.

Jakers
December 8, 2009 3:07 pm

Maybe it would be best to go with the Japanese JMA temperature dataset instead of the “western” nations’ sets.

JJ
December 8, 2009 3:09 pm

Willis,
“Well, yes, I do know that it is huge, it is artificial, and that it is totally imaginary.”
No you don’t. Granted, it is large. And all adjustments are ‘artificial’. But totally imaginary? You don’t know that. Not yet.
“We have no less than five different thermometers at Darwin, all of which agree that the temperature did not climb at an unheard of rate (6 C per century) after 1941. Not sure how much clearer that could be.”
If it is legitimate to adjust one thermometer, and it may very well be, then it may also be legitimate to adjust all thermometers in the area similarly. If that is the case here, then what you may have discovered is not that they illegitimately applied an enormous false adjustment to two of the thermometers in the average, but that they failed to apply a huge legitimate adjustment to the third. Up goes the AGW.
You don’t know. You have asked some very good questions here. Time to pose those same questions directly to the people who damn well should have the answers … then draw conclusions as to effect and motive.
Until then, be content that you have laid out some important facts in a fashion that the layman can readily grasp:
1) The instrumental record is not merely the wholly objective exercise of reading a thermometer and writing down the number accurately.
2) The instrumental record is subjected to various adjustments, the magnitude of which easily dwarfs the alleged ‘global warming’ signal.
3) These adjustments are not well documented outside of the small clique of researchers who compile these datasets – and perhaps not even within those circles.
4) The propriety of these adjustments cannot be ascertained without access to both the raw data, and very detailed method descriptions.
You’re doing good work. Don’t ruin it by overreaching.
JJ

Brian Dodge
December 8, 2009 3:11 pm

You say
“And CRU? Who knows what they use? We’re still waiting on that one, no data yet …”
The CRU said(before the server was taken offline because of the hack – http://www.cru.uea.ac.uk/cru/data/)
” The various datasets on the CRU website are provided for all to use, provided the sources are acknowledged. Acknowledgement should preferably be by citing one or more of the papers referenced on the appropriate page. The website can also be acknowledged if deemed necessary. CRU will endeavour to update the majority of the data pages at timely intervals although this cannot be guaranteed by specific dates.”
Who is lying?

Doug
December 8, 2009 3:23 pm

It would be pretty hard to justify all those warming adjustments and few, if any cooling ones. Most everything I can think of would actually warm the area, such as:
-addition of parking lots, runways, etc
-clearing surrounding vegetation for new structures/runways
-urbanization of surroundings
What could possibly account for the massive warming adjustments made? There are no noticeable spikes in the temperature plot. Things I could think of would be resurfacing with high albedo material or moving the station to a bushier location. While those might explain one such adjustment, there are 3-4 of those, each one having to build on the next. I find that highly unlikely. Again, the vast majority of any adjustment I can think of would be cooling adjustments. It sure smells like data manipulation to me.
It would be interesting to do this analysis on many other stations.

December 8, 2009 3:45 pm

Hi Willis
Can I just confirm that I am right with respect to the GHCN file you got the adjusted (homogenized) data from please (as Martin and Nick have previously queried)?
If GISS have not done a closely similar adjustment then you have opened a real can of worms.
Regards

E.M.Smith
Editor
December 8, 2009 3:53 pm

A marvelous job. Well done.
The base data, GHCN, have also been incredibly biased via selective thermometer deletions.
A similar analysis on some of those effects would be “a beautiful thing”…
http://chiefio.wordpress.com/2009/11/03/ghcn-the-global-analysis/

Paul
December 8, 2009 3:56 pm

I understand that the scientists have taken the raw data and manipulated it to reflect what they think is the ‘most accurate’. It makes sense to do just that.
But if you have all of the raw data, and all of the adjusted data, could it not be demonstrated statistically if there was a bias one way or another?
For example, if 55% of the value-added data is shifted up vs. 45% down, would that not prove that there was a bias in the resulting conclusions?
Of course that would also have to be reviewed, because it is easy to see that some numbers should be shifted down, as data coming from urban sites would reflect nearby land-use changes.
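The up-versus-down question is just a sign test, which needs nothing beyond the standard library. A minimal sketch (the counts are made up for illustration):

import math

def sign_test_p(up, down):
    # Two-sided binomial test of H0: an adjustment is equally likely
    # to shift the record up or down.
    n, k = up + down, max(up, down)
    tail = sum(math.comb(n, i) for i in range(k, n + 1)) / 2.0 ** n
    return min(1.0, 2.0 * tail)

# Made-up example: 55 of 100 adjustments shifted the record upward.
print(sign_test_p(55, 45))    # ~0.37: not significant at this sample size
print(sign_test_p(550, 450))  # same split over ten times the stations: ~0.002

So a 55/45 split proves nothing with 100 adjustments but is hard to dismiss with 1,000; and, as noted, even a significant result would still need the physical reasons (urbanization etc.) examined before calling it bias.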

D. King
December 8, 2009 3:57 pm

Those, dear friends, are the clumsy fingerprints of someone messing with the data Egyptian style …
What a hoot.

Nick Stokes
December 8, 2009 3:58 pm

Willis (14:41:37),
OK, GISS and GHCN seem to get very different homogeneity adjustments for the case of Darwin. This is curious, because their algorithms seem similar, and the global results track fairly well. But you raised the specific issue of the IPCC plot, and its dependence on CRU. Well, this isn’t GHCN either. As far as I can tell, the relevant CRU data is obtainable on this MetOffice site. Darwin is station 941200.
FrancisT has plotted that data. And, as he says, it seems that the CRU is also very little different from GHCN unadjusted. And there is little upslope.
So a further query might be why N Australia has an uptrend that isn’t reflected in the CRU Darwin figures. Or maybe the posted CRU figures aren’t the ones that the IPCC relied on after all. However, on the face of it there doesn’t seem to be any link between the adjustments you have shown and the IPCC plots.

December 8, 2009 3:59 pm

Even a drunk MONKEY would not have “homogenized” those numbers like that. What reason for the “adjustments” would excuse the massive changes, a non-stop forest fire for the last 50 years and a localized ice age for the first 50 years? LMAO
“H-Yuck! Jest trust us scientists, we’d nay-verr miss-ah-lead ya! H-Yuck!”

Eric Anderson
December 8, 2009 4:03 pm

Brian Dodge, are you taking the position that all the data, including raw data, was freely available from CRU?

December 8, 2009 4:06 pm

Perhaps this code has something to do with the Darwin problem (man, how ironic is that!)
Found in the documents\cru-code\linux\mod directory:

! homoegeneity.f90
! written by Tim Mitchell
! module contains routines to carry out homoegenity testing
! based upon Int J Clim 17(1):25-34 (1997) Appendix 4
[snip]
!*************************************** calc correlation coefficients
if (QNoTest.EQ.0) then
! write (99,*), "calc correlation coefficients" ! @@@@@@@@@@@@@@@@@@@@@@@@
do XYear = 2, NYear ! calc C difference series
if (DataC(XYear).NE.MissVal.AND.DataC((XYear-1)).NE.MissVal) &
DifferencesC(XYear) = DataC(XYear)-DataC((XYear-1))
end do
CorrSum = 0
do XRStn = 1, NRStn ! iterate by R stn
if (BaselineR(XRStn).NE.MissVal) then
DifferencesR = MissVal
do XYear = 2, NYear ! calc R difference series
if (DataR(XRStn,XYear).NE.MissVal.AND.DataR(XRStn,(XYear-1)).NE.MissVal) &
DifferencesR(XYear) = DataR(XRStn,XYear)-DataR(XRStn,(XYear-1))
end do
call LinearLSRVec (DifferencesC,DifferencesR,Aye,Bee,Correlation(XRStn)) ! calc correlation
if (Correlation(XRStn).GT.0) CorrSum = CorrSum + Correlation(XRStn)
end if
end do
if (CorrSum.LT.2) then
QNoTest = 1 ! require decent regional series to test homogeneity
end if
end if
!*************************************** calc weighted anomalies
if (QNoTest.EQ.0) then
! write (99,*), "calc weighted anomalies" ! @@@@@@@@@@@@@@@@@@@@@@@@
do XYear = 1, NYear
OpNumer=0 ; OpDenom=0 ; CorrSum=0
if (DataC(XYear).NE.MissVal) then
do XRStn = 1, NRStn
if (DataR(XRStn,XYear).NE.MissVal.AND.BaselineR(XRStn).NE.MissVal.AND. &
Correlation(XRStn).GT.0) then
if (AnomType.EQ.0) then
OpNumer = OpNumer + ((Correlation(XRStn) ** 2) * (DataR(XRStn,XYear) - &
BaselineR(XRStn)))
else if (AnomType.EQ.1.AND.BaselineR(XRStn).NE.0) then
OpNumer = OpNumer + (((Correlation(XRStn) ** 2) * DataR(XRStn,XYear)) / &
BaselineR(XRStn))
end if
OpDenom = OpDenom + (Correlation(XRStn) ** 2)
CorrSum = CorrSum + Correlation(XRStn)
end if
end do
if (OpDenom.NE.0.AND.CorrSum.GT.2) then
if (AnomType.EQ.0) then
Anomalies(XYear) = DataC(XYear) - BaselineC - (OpNumer/OpDenom)
else if (AnomType.EQ.1.AND.BaselineC.NE.0.AND.OpNumer.NE.0) then
Anomalies(XYear) = (DataC(XYear)/BaselineC) / (OpNumer/OpDenom)
end if
end if
end if
end do
end if
!*************************************** decide which years to test
if (QNoTest.EQ.0) then
! write (99,*), "decide which years to test" ! @@@@@@@@@@@@@@@@@@@@@@@@
if (present(Break)) then
TestYear(Break) = .TRUE.
else if (present(XBreakYear)) then
TestYear(XBreakYear) = .TRUE.
else if (present(BreakVec)) then
do XYear = 1, NYear
if (BreakVec(XYear).EQ..TRUE.) TestYear(XYear) = .TRUE.
end do
else
QFirstA = MissVal ; QLastA = MissVal ! don't consider first5 / last5
XYear = 0
do
XYear = XYear + 1
if (Anomalies(XYear).NE.MissVal) QFirstA = XYear
if (QFirstA.NE.MissVal.OR.XYear.EQ.NYear) exit
end do
XYear = NYear + 1
do
XYear = XYear - 1
if (Anomalies(XYear).NE.MissVal) QLastA = XYear
if (QLastA.NE.MissVal.OR.XYear.EQ.1) exit
end do
if ((QLastA-5).GE.(QFirstA+5)) then
do XYear = (QFirstA+5), (QLastA-5)
TestYear(XYear) = .TRUE.
end do
end if
end if
end if
!*************************************** test for single shift
if (QNoTest.EQ.0) then
! write (99,*), "test for single shift" ! @@@@@@@@@@@@@@@@@@@@@@@@
QPassFail = MissVal ; MaxRatio = 0.0
do XYear = 2, NYear-1
if (TestYear(XYear).EQ..TRUE.) then
call SingleShift (Anomalies,XYear,TestRatio,Simple=1)
if (present(TestVec)) TestVec(XYear) = TestRatio
if (TestRatio.NE.MissVal) then
if (QPassFail.EQ.MissVal) QPassFail = 0
if (abs(TestRatio).GE.MaxRatio) then
MaxRatio = abs(TestRatio) ; QPassFail = XYear
end if
end if
end if
end do
if (MaxRatio.LT.2.AND.QPassFail.NE.MissVal) QPassFail = 0
end if

Could also be from another homogenization program, homogiter.f90, found in the documents\cru-code\linux\cruts directory. I can’t locate the MakeContinuous subroutine/module. A portion:

if (NGotRef.GT.0) then ! have got more stns with ref ts
! for these stns, correct original Data
call MakeContinuous (Data,RefTS,GotRef,Disc,YearAD,Differ,Verbose,Adj=Data)
! … and make checked part trusted
call Trustworthy (Data,RefTS,GotRef,Believe)
do XStn=1,NStn
if (GotRef(XStn)) then ! stn has ref ts
Sought(XStn)=.FALSE. ! so no longer needs checking
do XYear=1,NYear
if (Believe(XYear,XStn)) then ! for the years with a valid ref ts
Trust(NYear,NStn)=.TRUE. ! the series may enter other ref ts
if (DiscCrude(XYear,XStn)) then
DiscCrude(XYear,XStn)=.FALSE. ! any discon have been healed
NDisc=NDisc-1
end if
end if
end do
end if
end do

Looking through this code, I’m struck that if anyone working for my employer wrote anything remotely similar to this and it was put into production, the federal government would SHUT US DOWN.
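For readers who don’t speak Fortran, here is a rough Python paraphrase of what the homoegeneity.f90 excerpt above appears to do, as I read it (this is my interpretation of the leaked code, not any official description, and it simplifies away the per-year missing-value handling):

import numpy as np

def weighted_anomalies(data_c, baseline_c, data_r, baseline_r):
    # My reading of the quoted CRU routine: correlate the candidate's
    # year-to-year differences with each reference station's, keep the
    # positively correlated ones, and build each year's anomaly as the
    # candidate's departure from baseline minus the correlation^2-weighted
    # mean departure of the references.
    d_c = np.diff(data_c)
    weights, departures, corr_sum = [], [], 0.0
    for series, base in zip(data_r, baseline_r):
        r = np.corrcoef(d_c, np.diff(series))[0, 1]
        if r > 0:
            weights.append(r ** 2)
            departures.append(np.asarray(series) - base)
            corr_sum += r
    if corr_sum < 2:      # mirrors the "require decent regional series" bail-out
        return None
    ref = np.average(departures, axis=0, weights=weights)
    return np.asarray(data_c) - baseline_c - ref

The SingleShift call at the end of the excerpt then scans those anomalies for the year with the largest step, which is presumably where an adjustment would get applied.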

December 8, 2009 4:22 pm

This Darwin data is your smoking gun.
The computer code from the CRU files is just a quick and dirty hack to match these types of adjustments for raw data files. Unless it can be shown that the leaked CRU code was used for anything other than that, it’s probably counter-productive to claim the CRU code is a smoking gun.

Martin
December 8, 2009 4:22 pm

Steve Short (14:22:38),
Much thanks. “adj”: “adjusted.” Yep. Sorry to be a dummy.
And now that I have the adjusted data (averaged), I can see how someone would discard the pre 1942-43 data, but the particular inhomogeneity can’t be adjusted for unless one knows the early-period mean, which begs the question … at least for 500 km.
I have to now re-read your (Willis’s) post to try to understand how the adjustment produced an increased trend for the post 1950 period.

December 8, 2009 4:31 pm

Just to be clear, I agree that the CRU code is bad code. I agree that studying it is worthwhile.
But unless it can be shown that the code was used to produce data or reports that were made available to scientists outside CRU or to the public, it’s not a smoking gun.

Gail Combs
December 8, 2009 4:38 pm

P Gosselin (03:46:10) :
Dave UK:
– Podesta:
Leadership role my fanny!
This is what we call authoritarianism based on Stalinist Science.
Next it’s:
1. water (a water crisis is currently being manufactured)
2. food (meat, sugar and fat)
3. information
Is there a Paul Revere left in USA?
REPLY:
Well I have the equines….
Actually my equines and I spent the weekends for the last couple of years alerting people to the various frauds taking place (Federal Reserve Act, Food Safety bill & Cap and Trade).
Does that count?
(The equines are a pair of really cute ponies people want to pet – snags them every time, better than candy.)

Hector Pascal
December 8, 2009 4:42 pm

Hi Willis. I am a long term Darwin resident (hi to Richard Sharpe!) and have been following Darwin temps for a number of years as I work in environmental science. If you wish, I can provide you (by email) Google snaps of previous and current observation sites plus my versions of temp data (some very interesting insights).
My opinion is the same as yours – the data have been fiddled. Most telling is the difference between raw data and adjusted data.

December 8, 2009 4:45 pm

I agree that until the code is proven to have been used it’s just an insight into what and how they’re manipulating this data. I don’t know why anyone would have spent the time and energy writing all the code and then not use it, however.
I like the Darwin analysis and am considering using it as a template to examine the stations in my area.

Nick Stokes
December 8, 2009 4:48 pm

Following Nick Stokes (15:58:59) I did my own plot of the CRU data as posted, vs the GISS data, raw. In the overlap interval, they superimpose almost exactly. No evidence yet of big CRU adjustments that might have gone into IPCC. No smoking gun.
Maybe the CRU data was further adjusted later. But that is how it is posted.

Martin
December 8, 2009 4:52 pm

emelks (16:06:13),
Thanks for the code. Wow, FORTRAN has changed since I last wrote any code (in the 70s, with FORTRAN 66). But it does not appear to use the standard test for breaks in time series: the Chow test. Formally it is a test of the stability of regression coefficients. The Wikipedia reference looks OK for a summary: http://en.wikipedia.org/wiki/Chow_test
A Chow test on the series 0 raw data with a 1941 breakpoint and a linear trend regression with AR1 and AR3 terms (to remove the major serial correlation) gives an F statistic, F(4,99), of 5.4, which is significant at the 0.001 level. Of course that doesn’t say anything the eye can’t see from the graph.
If one had to use the earlier data, a defensible adjustment would be a drop in the mean so as to produce an ARMA forecast that best fits the post-break data for 4 or 5 periods. A bit of a pain to program, but it wouldn’t be much longer than the FORTRAN code above, although it would call both the ARMA estimation and forecasting routines.
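For anyone who wants to replicate Martin’s test, here is a minimal Chow-test sketch in Python (a simple linear-trend model only, not the AR-augmented regression he describes, and the toy data at the bottom is made up purely for illustration):

import numpy as np
from scipy import stats

def chow_test(y, break_idx, k=2):
    # Chow test for a structural break in a linear trend at break_idx.
    # Fits y = a + b*t by least squares on the pooled series and on each
    # segment, then forms the usual F statistic; k = parameters per fit.
    t = np.arange(len(y), dtype=float)
    def ssr(tt, yy):
        X = np.column_stack([np.ones_like(tt), tt])
        beta, *_ = np.linalg.lstsq(X, yy, rcond=None)
        resid = yy - X @ beta
        return resid @ resid
    ssr_pool = ssr(t, y)
    ssr_split = ssr(t[:break_idx], y[:break_idx]) + ssr(t[break_idx:], y[break_idx:])
    df2 = len(y) - 2 * k
    f = ((ssr_pool - ssr_split) / k) / (ssr_split / df2)
    return f, stats.f.sf(f, k, df2)

# toy series with a step in the mean at index 41 (cf. a 1941 break in a 1900-start record)
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0.5, 0.3, 41), rng.normal(-0.5, 0.3, 60)])
print(chow_test(y, 41))

Because annual temperatures are serially correlated, this plain version will overstate significance somewhat; that is exactly why Martin added the AR terms.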

December 8, 2009 4:54 pm

In creating each year’s first difference reference series, we used the five most highly correlated neighboring stations that had enough data to accurately model the candidate station.

Has anybody tested the validity of this, or are they just making a WAG?
That is, is it possible to take five stations distributed around a central station, interpolate their readings (assuming we know what averaging/weighting function was used), and compare the result to the known readings from the central station?
I can certainly see how this might completely fail — assume a tropical circular island with weather stations all around the periphery at sea level and one station with missing data on a mountaintop in the center. How could missing data from the central station be reasonably reconstructed from the coastal ones?
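Bryan’s hold-out test is easy to sketch: take a station with complete data, pretend it is the candidate, build a first-difference reference series from its best-correlated neighbours, and see how far the reconstruction drifts. A rough Python sketch (the correlation-squared weighting follows the GHCN description quoted above, but the details here are my guesses, not GHCN’s actual code):

import numpy as np

def first_difference_reference(target, neighbours, n_best=5):
    # Build a reference series for `target` from the n_best neighbours whose
    # year-to-year differences correlate best with the target's, weighting
    # the averaged differences by squared correlation, then re-integrating.
    d_t = np.diff(target)
    d_n = [np.diff(n) for n in neighbours]
    corrs = np.array([np.corrcoef(d_t, d)[0, 1] for d in d_n])
    best = np.argsort(corrs)[::-1][:n_best]
    w = corrs[best] ** 2
    d_ref = np.average([d_n[i] for i in best], axis=0, weights=w)
    return np.concatenate([[target[0]], target[0] + np.cumsum(d_ref)])

# held-out test with synthetic stations sharing a regional signal
rng = np.random.default_rng(1)
regional = rng.normal(0, 0.5, 100).cumsum() * 0.05
stations = [regional + rng.normal(0, 0.2, 100) for _ in range(8)]
target = regional + rng.normal(0, 0.2, 100)
recon = first_difference_reference(target, stations)
print("RMS error:", np.sqrt(np.mean((recon - target) ** 2)))

On Bryan’s mountaintop-island example the neighbours would share little local signal with the centre, the correlations would collapse, and the RMS error would blow up, which is his point.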

Scott B
December 8, 2009 4:56 pm

Anyone understand why the last chart is cut off around 1995? Seems strange when the other charts go up to recent times.

El Buggo
December 8, 2009 4:58 pm

I agree with the speaker that John Daly is much missed.

Dr A Burns
December 8, 2009 5:04 pm

Deborah Smith ( debsmith@smh.com.au ) , wrote in a front page article in the Sydney Morning Herald on the weekend, discussing the comment in HARRY_READ_ME.txt about Australia’s climate data as being “a bloody mess” and that “the rest of the databases seem to be in nearly as poor a state as Australia.”
From this mess, dear Debs concludes that there is “no question to the validity of temperature records”!
At least ClimateGate has reached the front page. Hopefully the masses will soon start to realise that Labor’s ETS is based on fraud.
Dear Debs,
Based on the ClimateGate fraud, you concluded in your frontpage weekend article that there is “no question to the validity of temperature records”!
I am repeatedly staggered by the magnitude and extent of the fraud associated with the temperature record, as more is revealed. Here is a new detailed analysis of temperatures close to home, which shows how 130 years of cooling in Australia was turning into warming by these fraudsters.

Alvin
December 8, 2009 5:10 pm

magicjava (16:31:09) :
Maybe if you post it three times it will magically become truth.

December 8, 2009 5:10 pm

I’ve just had a quick squiz at some data from the Oz weather Station Data site:
http://www.bom.gov.au/climate/data/weather-data.shtml
The first few sites I chose look pretty darn flat as a trend. Could it be that it has all been entirely made up? When I’ve looked at sea level data from GLOSS (sp?) it seems pretty up & down, but averaging out to no trend. These may ALL be showing no discernible trend if not ‘adjusted’.
I will examine all I can tonight. To do it is easy:
1. download the site list, opens in Excel.
2. Go from this link entering each site in turn,
3. select from the top left of the grid to the bottom right,
4. copy, paste into notepad (to preserve tabs)
5. paste into Excel
6. highlight the annual average column and press the graph button (easier in orifice 2007).
7. observe the flat trend
8. write to your political representative of choice / media of choice.
I am only speculating on step 7, but so far that’s what I’ve been seeing. Also, if I had more time, or millions (or even billions) of $’s in research grants, I’d look at the monthly data too.
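If you would rather script steps 1–7 than click through Excel, a rough Python version might look like this (the file layout and column name are assumptions; they must be checked against whatever the BoM page actually exports):

import numpy as np
import pandas as pd

def annual_trend(csv_path, temp_col="Annual"):
    # Least-squares linear trend, in degrees C per decade, of the
    # annual-average column of a saved station table. csv_path and
    # temp_col are placeholders for the real BoM export layout.
    df = pd.read_csv(csv_path)
    y = pd.to_numeric(df[temp_col], errors="coerce").dropna().to_numpy()
    slope = np.polyfit(np.arange(len(y)), y, 1)[0]
    return slope * 10.0

# for path in station_files:              # loop over saved station tables
#     print(path, round(annual_trend(path), 3), "C/decade")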

boballab
December 8, 2009 5:15 pm

@ Nick Stokes
On the Met Office site, when you got the record, did they specify whether it was the raw numbers or not? I’m asking because not all the records the Met Office released are raw; some (I don’t know how many) are adjusted. This is from the FAQ on the Met’s website:
“The data that we are providing is the database used to produce the global temperature series. Some of these data are the original underlying observations and some are observations adjusted to account for non climatic influences, for example changes in observations methods.”

nevket240
December 8, 2009 5:15 pm

Just another link in the chain towards a Royal Commission into the state of climate science.
Have you dudes signed the E-petition at Steve Fielding’s site yet??
why not??
regards

3x2
December 8, 2009 5:18 pm

JJ (15:09:36) :
If it is legitimate to adjust one thermometer, and it may very well be, then it may also be legitimate to adjust all thermometers in the area similarly. If that is the case here, then what you may have discovered is not that they illegitimately applied an enormous false adjustment to two of the thermometers in the average, but that they failed to apply a huge legitimate adjustment to the third. Up goes the AGW.

Sounds good … in a “hey was that a Rainbow Bee-eater that just went by? ” kind of way.
Simple … get v2.mean, extract Darwin, and edit the duplicate years any way you desire to arrive at a one-record-per-year series (using only the available data – this is important).
Explain here how exactly you get any significant trend from your series (without resorting to making up new data – again important).
If this were your money we were discussing, there would be absolutely no question of me adjusting the reality of your bank account using the first five accounts that share similar account numbers to yours, let alone using my secret “real value” formula.
It took me but a couple of minutes to edit up Darwin from v2.mean and replicate the graph (Figure 5) on my own PC, could you at least do the same? Of course you can always continue fanning the smoke screen.
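In case anyone wants to repeat that couple-of-minutes exercise, here is a rough Python sketch of pulling Darwin out of v2.mean and averaging the duplicates into one series per year (the fixed-width layout is my reading of the GHCN v2 format description, and the station id is taken from the GISS URL cited elsewhere in this thread; verify both before trusting the output):

from collections import defaultdict

DARWIN = "50194120000"  # GHCN v2 station id for Darwin, as I read it

def darwin_annual(path="v2.mean"):
    # Assumed layout: cols 1-11 station id, col 12 duplicate number,
    # cols 13-16 year, then twelve 5-char monthly values in tenths of
    # a degree C, with -9999 meaning missing.
    months = defaultdict(list)   # (year, month) -> values across duplicates
    with open(path) as f:
        for line in f:
            if not line.startswith(DARWIN):
                continue
            year = int(line[12:16])
            for m in range(12):
                v = int(line[16 + 5 * m : 21 + 5 * m])
                if v != -9999:
                    months[(year, m)].append(v / 10.0)
    annual = {}
    for y in sorted({y for y, _ in months}):
        vals = [sum(months[(y, m)]) / len(months[(y, m)])
                for m in range(12) if (y, m) in months]
        if len(vals) == 12:      # keep complete years only
            annual[y] = sum(vals) / 12.0
    return annual

Averaging only what is actually there, with no infilling, is exactly the “using only the available data” constraint 3x2 insists on above.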

December 8, 2009 5:18 pm

This:
“I got to thinking about Professor Wibjorn Karlen’s statement about Australia that I quoted here:
Another example is Australia. NASA [GHCN] only presents 3 stations covering the period 1897-1992. What kind of data is the IPCC Australia diagram based on?
If any trend it is a slight cooling. However, if a shorter period (1949-2005) is used, the temperature has increased substantially. The Australians have many stations and have published more detailed maps of changes and trends.
The folks at CRU told Wibjorn that he was just plain wrong. Here’s what they said is right, the record that Wibjorn was talking about, Fig. 9.12 in the UN IPCC Fourth Assessment Report, showing Northern Australia:”
and this:
“One of the things that was revealed in the released CRU emails is that the CRU basically uses the Global Historical Climate Network (GHCN) dataset for its raw data. So I looked at the GHCN dataset. There, I find three stations in North Australia as Wibjorn had said, and nine stations in all of Australia, that cover the period 1900-2000.”
are maybe the crux of the matter and raise the following questions.
(1) Is the GHCN (NOAA) database the same as the NASA (GISS) database, as Professor Wibjorn Karlen seems to assume? NOAA does not equal NASA for these purposes, surely?
(2) Can we rely on the accuracy (?) of an interpretation (?) that ‘the emails’ (remarkable, isn’t it, that we at least all know exactly what these are 😉) suggest (?) that the CRU database relies (?) on GHCN (whew)?
(3) Just how many independent global surface temperature databases are in physical existence at a raw data level?
(4) Just how many discrete and well defined methods of adjustment are there?
(5) Just how many such independent adjusted (‘homogenized’) databases are there out there?
(6) Which are (a) the database and (b) method of adjustment which IPCC bases its official position on?
I think I’m getting too old for this.

Gail Combs
December 8, 2009 5:27 pm

imapopulist (05:37:32) :
…I always suspected the most manipulation would take place in the remote corners of the Earth where unscrupulous scientists thought they could get away with it.
Reply
Looks like we need some volunteers from Russia. Perhaps the “hackers” from Tomsk? /sarc
Actually it would be nice if some volunteers from Alaska, Russia, and/or Northern Canada could do the same type of analysis, with some visual checks if possible.

Nick Stokes
December 8, 2009 5:29 pm

boballab (17:15:24)
They said the data was a subset of Hadcrut3, which I take to mean that it has been adjusted.

Gerard
December 8, 2009 5:37 pm

>>”Is it possible that these adjustments were all for some legit reason? Well, nothing is impossible … but when it gets that improbable, I say it is evidence that the system is not working and that the numbers have no scientific basis. I certainly may be wrong … but I have seen no evidence to date that says that I am wrong. If you have such evidence, bring it on, I’ve been proven wrong before. But I think I’m right here.”
You aren’t wrong. The data has been deliberately manipulated. It is fraud and criminal prosecutions should occur.

JustADumbEngineer
December 8, 2009 5:37 pm

Thank you for the extensive and easy to understand analysis of one piece of the data.
I’ve long been concerned with the lack of disclosure and “we are so smart, anyone else should be ignored” attitude by the scientists (perhaps I should put that in scare quotes now!) and politicians (“settled science”, Million Degree Earth Al etc…) who insist that we are suffering from GW and, specifically, AGW (and, that we “must” do something about it regardless of cost). I’ve also been casually following Steve McIntyre’s fine work and ClimateAudit and more recently the work over at SurfaceStations.org.
The CRU leak was definitely a high point of the year for any objective observer, and it was the groundwork laid and published by the likes of ClimateAudit, WattsUpWithThat, and SurfaceStations.org that caused it to resonate simultaneously with so many people at such a critical time. Thank God for the Internet.
The CRU’s actions in the past have delayed the resolution of questions about “AGW: if, what, how much, what’s the social impact?” substantially. Hopefully now we can redo the analysis in a traditional scientific way, with open disclosure, using accepted statistical methods. Sadly, if the CRU et al just _happen_ to be right, their arrogant secrecy and (sorry to say) fraud may have doomed Mankind by delaying action (but I suspect the reason for the fraud is that they really don’t have solid data to support their bias, so I’m betting on Mankind to Win and CRU DNF).
Thank You All and keep up the good work.

patrioticduo
December 8, 2009 5:38 pm

Bryan,
“I can certainly see how this might completely fail — assume a tropical circular island with weather stations all around the periphery at sea level and one station with missing data on a mountaintop in the center. How could missing data from the central station be reasonably reconstructed from the coastal ones?”
You couldn’t (since weather and climate are chaotic systems), but it would be possible to flag potentially incorrect “adjustments” by determining average temperature differentials and comparing them to the target station. This would assist in identifying stations for closer scrutiny. IMHO
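A minimal sketch of that flagging idea in Python (the z-score screen and threshold are my own choices for illustration, not anyone’s published method):

import numpy as np

def flag_years(target, neighbours, z_thresh=3.0):
    # Flag years where the target's offset from the neighbour mean departs
    # from its usual value by more than z_thresh standard deviations.
    # target: 1-D array of annual means; neighbours: 2-D (station x year).
    # A crude screen for closer scrutiny, not a correction.
    diff = target - np.nanmean(neighbours, axis=0)
    z = (diff - np.nanmean(diff)) / np.nanstd(diff)
    return np.where(np.abs(z) > z_thresh)[0]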

Bart Nielsen
December 8, 2009 5:41 pm

This study really “puts the cookies on the bottom shelf where the kiddies can easily reach them.” Homogenizing milk liberates lipolytic enzymes from the membranes of the fat globules, which in time leads to the milk or cheese made from it going rancid. It appears that homogenizing temperature data also makes it go rancid.

scott
December 8, 2009 5:43 pm

Not sure whether anybody has pointed this out (there are a lot of comments!). Australia has over 50 “long” stations (>100 years), of which about 40 are held to have 95%-plus complete records. A lot of these are at airports, but some of those are not airports as they are known in the NH!!!
Why on earth would anybody choose to use a tiny handful to represent temperatures over an area roughly the size of the contiguous states of the US or China (twice the size of Europe)?

andy
December 8, 2009 5:44 pm

Could you send the story to the ABC news service in Australia please. They need a loud wake up call. http://www.abc.net.au

hillbilly76
December 8, 2009 5:50 pm

For Richard 3-16-38. Re your query about satellite (MSU), surface temperatures, radiosonde etc. Your memory was right! John Daly outlines specifically how the UNIPCC scientists “dealt with” the troublesome divergence question from 1979 onward, i.e. surface showing warming (after adjustments), MSU showing little or none.
Site is http://www.john-daly.com/ges/surftmp/surftemp.htm. It has been out there a long time, but is still true and very relevant, as their decision profoundly influenced all future UNIPCC modelling.

December 8, 2009 5:52 pm

You mean CRU hides theirs, don’t you?

Sean
December 8, 2009 5:52 pm

The last few years’ Oct/Nov records for Darwin have been interesting.
Many temp records, and number-of-days-above-a-given-temp records, have been smashed.

J. Peden
December 8, 2009 5:56 pm

Martin Brumby (01:24:43) :
The ‘adjustments’ in Fig. 7 wouldn’t be based on the atmospheric CO2 levels at Mauna Loa, by any chance?
That would be a neat way of ‘adjusting’ the data. (Data Rape, I’d call it).

Judging from the satellite temp. curves, CO2 seems to be becoming a rather poor proxy for temps. Something about a “divergence”….

itsova
December 8, 2009 6:01 pm

Manfred: (40:53)
Interesting point about Peterson… As the Dean of the College of Earth and Mineral Sciences, Easterling is Mann’s superior at PSU… he might be involved in the review of Mann… PSU is not being the least bit transparent with the inquiry…

Pamela Gray
December 8, 2009 6:05 pm

So who has the originals? Handwritten by the tireless souls who march out to the little chicken coop out behind the house, open the little door, shine a flashlight into the dark reaches of its belly, and write down the temp on a form?

Richard Sharpe
December 8, 2009 6:08 pm

Hector Pascal said:

Hi Willis. I am a long term Darwin resident (hi to Richard Sharpe!) and have been following Darwin temps for a number of years as I work in environmental science.

Was there from ’56 to ’74 with a few years missing …
Do you have an email address for Ken Parish? I am interested in asking him about the temperatures in the early years … (realrichardsharpe (at) gmail.com)

December 8, 2009 6:19 pm

Let’s do some basic inductive reasoning. How hard would it be to reproduce the CRU grid record from the known reporting stations in the raw and adjusted GHCN databases?
Not hard for land-based data. That would at least identify whether CRU are massaging the data any more (than, say, a standard kriging exercise).

Richard
December 8, 2009 6:22 pm

NIWA has yet to disclose how they did their “adjustments” / “homogenisation”. This may well reveal a bias, which may not be intentional. The bias may be due to bad logic, or it may be due to a belief in their minds that the world is warming and an involuntary transfer of that belief to the adjustments… Who cares; we want to know, and we want to see the adjustments – exactly how they were done.
So far the position stands – the raw data shows no warming.
NIWA’s position – NZ has warmed 1.9 C in the last 100 years or so, on the basis that 7 stations in NZ (after adjustments) show this to be so. Example: Wellington had to be adjusted up by 0.79 C (their adjustment in this case may or may not be correct; certainly their reasoning has holes in it).

Bill Illis
December 8, 2009 6:24 pm

Great work Willis. Keep it coming.
Here’s some more to add to the adjustment debate. How about the NCDC adjustments done to the US temperature record.
If you remember Anthony’s visit to the NCDC, they provided a presentation on the (fine) US station siting (and the adjustments they needed to do to fix the fine stations’ siting). This was also presented in a paper which I read but which is no longer available.
http://wattsupwiththat.com/2008/05/13/ushcn-version-2-prelims-expectations-and-tests/
Here is the current USHCN V2 TOBs adjustment – adding 0.225C to the trend since 1920.
http://img69.imageshack.us/img69/6590/ustobs.png
Here is the Homogenization adjustment – adding another 0.225C to the trend since 1915.
http://img109.imageshack.us/img109/7312/ushomogenizationovertob.png
There is one other adjustment (not shown in the presentation but in the paper which doesn’t have any real impact on the trend).
So how much have US temperatures increased since 1900 or 1920 (after the adjustments)? A little more than the 0.45C of adjustments that have been made.
http://img44.imageshack.us/img44/3491/usmonthlyanom.png

Richard
December 8, 2009 6:36 pm

hillbilly76 (17:50:30) :
For Richard 3-16-38. Re your query about satellite (MSU), surface temperatures, radiosonde etc. Your memory was right! John Daly outlines specifically how the UNIPCC scientists “dealt with” the troublesome divergence question from 1979 onward, i.e. surface showing warming (after adjustments), MSU showing little or none.
Site is http://www.john-daly.com/ges/surftmp/surftemp.htm. It has been out there a long time, but is still true and very relevant, as their decision profoundly influenced all future UNIPCC modelling.

Here is what I read about Darwin from that site: “Tropical stations in Malaysia and Indonesia show warming, while Darwin and Willis Island in Australia, both tropical stations in the same region, do not.” Is it saying that the satellite data is not showing Darwin warming, or the ground data? It’s not clear.
That really is an eye opener. It should be widely read. The ground data does indeed seem faulty.

December 8, 2009 6:41 pm

Hats off for this important and informative work and for sharing it.

Doug in Seattle
December 8, 2009 6:50 pm

Willis, good to see your work again. I have followed this saga for the last six years, since I first heard of Steve McIntyre’s work.
I hope three things come out of what has occurred lately:
1. Open data,
2. Open Code, and
3. No more BS until 1 and 2 are done and fully vetted.
Seeing the madness unfold in Copenhagen I have to wonder whether my hopes are realistic, but there I am.
In the meantime, keep the pressure on these clowns.

Gail Combs
December 8, 2009 6:51 pm

Jeremy (07:00:22) :
More BBC Propaganda. Husky dogs may not have employment and face a bleak future in a warmer world. This is really pathetic. Teams of husky dogs (which pull a sled) were replaced by motorized machines called snowmobiles, or in Canada a Skidoo, over 50 years ago….
I have to reply to this one. Husky teams were used in dog sled racing up until recently. The dogs live outdoors so they grow the correct fur coat and underlying layer of fat. PETA threw a hissy fit and insisted the dogs must be kept in heated kennels. THAT was the end of dog sled racing, because dogs kept in heated kennels come down with pneumonia when raced.
You are correct this is pure BS. For what it is worth there are more horses in the USA now than there were a century ago.

Jim (A resident of Darwin)
December 8, 2009 6:55 pm

There is something I would like to slip into the mix. There is a document, “Territory 2030”, just released by the govt – a futures strategy document.
In the DRAFT strategy, it is stated that in the next 20 years “Temperatures will rise an average of 2°C to 3°C”!!!!!!
If one splices a 2009->2030 2.5°C temp rise onto the Darwin airport record, or Hadley CRU, or almost any other temperature record, the statement is clearly garbage.
This is something that could be put into play here by simply getting the airport record, even one with an exaggerated temperature rise, and sending it to the local newspaper, the NT News. The NT News would be quite likely to print a reasonable quality graphic and an accompanying short letter.

Hilary Ostrov (aka hro001)
December 8, 2009 6:56 pm

Anthony, both you and Steve McIntyre are mentioned in an editorial in today’s WSJ (hope this is a good place to mention this, if it’s in the wrong spot, I apologize!):
http://online.wsj.com/article/SB10001424052748704342404574576683216723794.html#articleTabs%3Darticle
—-begin excerpts—
The Tip of the Climategate Iceberg
The opening days of the Copenhagen climate-change conference have been rife with denials and—dare we say it?—deniers. American delegate Jonathan Pershing said the emails and files leaked from East Anglia have helped make clear “the robustness of the science.” Talk about brazening it out. And Rajendra Pachauri, the head of the U.N.’s Intergovernmental Panel on Climate Change and so ex-officio guardian of the integrity of the science, said the leak proved only that his opponents would stop at nothing to avoid facing the truth of climate change. Uh-huh.
[…]
In 2004, retired businessman Stephen McIntyre asked the National Science Foundation for information on various climate research that it funds. Affirming “the importance of public access to scientific research supported by U.S. federal funds,” the Foundation nonetheless declined, saying “in general, we allow researchers the freedom to convey their scientific results in a manner consistent with their professional judgment.”
Which leaves researchers free to withhold information selectively from critics,
[…]
When it comes to questionable accounting, independent researchers cite the National Oceanic and Atmospheric Administration (NOAA) and its National Climate Data Center (NCDC) as the most egregious offenders. The NCDC is the world’s largest repository of weather data, responsible for maintaining global historical climate information. But researchers, led by meteorology expert Anthony Watts, grew so frustrated with what they describe as the organization’s failure to quality-control the data, that they created Surfacestations.org to provide an up-to-date, standardized database for the continental U.S.
Mr. McIntyre also notes unsuccessful attempts to get information from NOAA.
[….]
—end excerpts—

Optimizer
December 8, 2009 7:18 pm

If nobody else has pointed this out, before you go circulating that last graphic around, you might want to fix that title. “Dawin” vs “Darwin”.
Oh, and great post. I’ve seen similar sorts of analyses (probably at ClimateAudit), so I’m not the least bit surprised. This analysis is particularly straightforward, though, and makes it very obvious as to what is going on.
IMHO, adjusting the raw data is, for lack of a better term, corrupt – even if done in an unbiased way. Even when done innocently, what the analyst is trying to do is to create a single dataset for a single place where none really exists. To be rigorous, each dataset must be considered independently. If the thermometer is moved from the pub to the airport, you are no longer measuring the temperature at the pub – you’re measuring the temperature at the airport. It’s as simple as that. This post shows what kind of monkey business you invite when you try to pretend that there is still only one dataset, when there are really two.
It also shows how questionable the use of these measurements is in the first place. If the temperature data at the pub is significantly different from that for the airport, for example, that only shows that you need a much higher density of measurements than you actually have to describe the temperature of the area. It’s called “aliasing”, and there is simply no substitute for an adequate number of samples. If you don’t have enough samples, you simply cannot tell what the average temperature of the area is.
If you change the thermometer, and that changes the measurement significantly, that tells you that your thermometer stinks, or is uncalibrated. The only way to improve your data is to then calibrate at least one of the thermometers involved – simply making “adjustments” does not increase the absolute accuracy.
Perhaps it is only human nature to try to come up with a number, where there really isn’t one to be had. But that’s not science.

Ian
December 8, 2009 7:26 pm

It’s interesting to me how well the adjustment graph for the temperature series aligns with the briffa_sep98_e.pro “valadj” artificial adjustments. It’s not one for one … but it’s certainly close.

December 8, 2009 7:42 pm

I suppose the new freezing point of Australian water is +2°C. That’s an even neater trick than Mike’s!

David Whelboun
December 8, 2009 7:56 pm

Thank you for all your effort to track down the truth. You deserve to be recognised around the world for your painstaking analysis.

Christian Bultmann
December 8, 2009 8:18 pm

Impressive work.
Another way to establish a bias is perhaps to look at the release dates of the CRU or GISS monthly temperature reports.
I noticed over the years that whenever the temperature trend was falling, it took longer for GISS to publish their findings.
When the temperature trend was increasing, it was expected and published without further checks, only to get caught with their pants down, like last fall when GISS used the September temperatures for the October temperatures in Siberia.

JJ
December 8, 2009 8:28 pm

Willis,
“Now, I don’t see how that could be legit.”
That you cannot see it does not mean it isn’t so. Typical of the Team’s arrogance is the notion that they know it all. This is why you should ask, before drawing unsupported conclusions. Admit your limits. You are not God.
“If the record for Darwin Zero needs adjusting by some amount, then as you point out you’d need to adjust them all by the same amount, ”
No, I pointed out that it might be legitimate to adjust them all by the same amount, not that it would be necessary to. There could be more than one adjustment being applied. One adjustment might be applied to all stations, another only to one. The point is you don’t know. Ask.
“Nor did they “fail to apply” an adjustment to one of them as you say.”
You said they did. You said “but they left Station 2 (which covers much of the same period, and as per Fig. 5 is in excellent agreement with Station Zero and Station 1) totally untouched.” Totally untouched implies that they didn’t apply an adjustment to that one, though they did to the others. That they may have failed to apply an adjustment to the untouched one is consistent with that. It may have happened, it may not have. You don’t know. Ask.
“We know that because they made a very different adjustment to Darwin One than to Darwin Zero.”
Or, a different pair of adjustments. Or five different adjustments. Three to one and two to the other. You don’t know. Ask.
“Is it possible that these adjustments were all for some legit reason?”
Yes. Which is why you should ask.
“Well, nothing is impossible … but when it gets that improbable, …”
It’s like listening to Mann. How on earth do you quantify the probability of that event? Quit making stuff up, back up to what you can prove, and ask about the rest.
“I say it is evidence that the system is not working and that the numbers have no scientific basis. I certainly may be wrong … but I have seen no evidence to date that says that I am wrong.”
Really, are you sure there aren’t hacked emails with your name on them? This is very shoddy reasoning you are displaying. Once again, you do not know everything. There may be perfectly legitimate adjustment(s) applied here that you are simply unfamiliar with. Before you jump to the conclusion that someone else is a criminal, ask.
“If you have such evidence, bring it on, I’ve been proven wrong before. But I think I’m right here.”
People who think they are right and who are unwilling to take the steps necessary to find out if they are not are at the heart of this problem. Don’t continue to be one of them. Ask.
If you are going to make claims, it is up to you to prove them correct. It is not sufficient for you to make unsupported claims and demand that others prove you wrong. That’s Teamwork. Ask.
Honestly, I don’t see what the issue is. I have bent over backwards letting you know that I support what you are doing, that you have done valuable work so far, and that I think you are on to something. The only problem is that you don’t want to complete the work before you make very nasty conclusions about other people. It is not proper for you to do that without first asking them the questions that you raise but cannot answer yourself.
Ask!
JJ

Walt
December 8, 2009 8:51 pm

Keep fighting the good fight, Anthony. We’re behind you all the way.

Dr A Burns
December 8, 2009 9:01 pm

It would be interesting to see the effect of UHI on av. global temps using:
T(uhi) - T = 1.2 log(population) - 2.09
… as described here:
http://www.warwickhughes.com/climate/seoz_uhi.pdf
I have no doubt that it would reinforce Briffa’s 1998 data showing cooling after 1940.
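As a quick worked example of that formula (assuming the log is base 10, which the linked paper would need to confirm): a city of a million people gives T(uhi) - T = 1.2 × 6 - 2.09 ≈ 5.1 C, while a town of 10,000 gives 1.2 × 4 - 2.09 ≈ 2.7 C. In Python:

import numpy as np

def uhi_offset(population):
    # UHI offset in degrees C from the regression quoted above
    # (log base 10 assumed; only meaningful where the fit applies).
    return 1.2 * np.log10(population) - 2.09

for pop in (10_000, 100_000, 1_000_000):
    print(pop, round(uhi_offset(pop), 2), "C")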

intrepid_wanders
December 8, 2009 9:13 pm

I suggest normalizing this analysis with the same parameters that NOAA uses. NOAA’s US Historical Climate Network uses a system to handle the very bias that we are observing.
http://www.ncdc.noaa.gov/oa/climate/research/ushcn/
The interesting coincidence is that the beginning of the ‘bias’, or intercept adjustment, is very evident from 1960 onward, as opposed to the Darwin 1940-41 bias. I speculate that there was an instrument change that occurred with a radar installation that was later bombed by the Japanese in ’42. The most interesting part is that the tree ring correlation goes to crap in the 60’s, the same time these silly corrections come into play. Coincidence … I say not. I have worked with systems with multiple correlation factors (measurement equipment) and it is difficult in the very best of situations.

bill
December 8, 2009 9:16 pm

I’m sorry Willis, but I must question your plots.
I have plotted raw GHCN, raw GISS, and homogenised GISS, and they do not compare with yours at all. Will you please show the source of your (faulty?) data.
Here are my sources:
Giss: http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=501941200004&data_set=1&num_neighbors=1
ghcn: ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v2/v2.mean.Z
Here are my plots:
http://img37.imageshack.us/img37/9677/darwingissghcn.png
I suppose GISS or GHCN may have adjusted their figures, but this seems unlikely.
Note that my plot shows 2 discontinuities, 1940 and 1995. If these are removed then a warming will be shown!!!
Comments please.

Grahame(Aus)
December 8, 2009 9:29 pm

Excellent article, Mr Eschenbach. I assume we can trust your integrity; it’s getting very hard to know who to trust these days. Your findings don’t surprise me.
I have recently, out of curiosity, had a brief look at the Australian temperature data that is available on the BOM site for my own local area, Newcastle, and was very intrigued by what I saw.
I chose Newcastle’s Nobby’s weather station (61055) and compared it with Sydney’s Observatory Hill weather station (66062).
I chose these two sites because they probably have the longest continuous records of anywhere in Australia, dating from the mid to late 19th century, and I would guess that the measuring point would not have changed by any more than a few metres over that time.
The most significant difference is that the Nobby’s station is isolated from urban development by water and sand for at least a kilometre all around, so any heat island effect would be minimal, whereas Sydney city centre has grown around the Observatory site, as did the Sydney Harbour Bridge off-ramp in the 1930’s and a major roadway in the 1950’s.
The Newcastle site shows a generally flat temperature trend while the Sydney one shows a steady rise more in line with the accepted trends.
It would be very interesting for someone to do a careful analysis of these two sites to confirm my observations as there are not many temperature records this long in Australia.
I have no idea whether the data available on the BOM site is unadulterated or not.

December 8, 2009 9:30 pm

Thanks for the great article.
I pulled up a station at West Point that has been in use since the late 1800s.
The averages based on the raw data show little to no warming.
Yet the homogenized data seems to depress the temps prior to 1980 and it inflates temps thereafter.
I created a crappy, superimposed graph to show it, here: http://thevirtuousrepublic.com/?p=4813
Two questions come to mind. One, why doesn’t the raw data show the “hockey stick”? Two, why does the manipulation of the data depress the figures pre-1980 and inflate them thereafter?
I think we all know the reason why and science isn’t involved.

December 8, 2009 9:31 pm

Darwin Zero Dirge
Darwin Zero from the Land Down Under,
Ground Zero for deceptive plunder,
Lots of weather from a lot of years
Ground to powder in the Warmist Gears!
They digested all the data – devouring every bit and byte –
And what came out the other end? A stinking thing as dark as night,
Obedient to Gore and Jones, oblivious to all that’s known,
Looking like the doomsday clock had struck its final hour
And the world was
Out of luck.
Darwin Zero from the Land Down Under,
Ground Zero for deceptive plunder,
Lots of numbers from lots of years
Ground to powder in the Warmist Gears!
But then Eschenbach said, “Hey – full stop!”
“These charts are wrong – they don’t match up!”
He promptly checked the data and,
Checked and checked and checked again,
Until the numbers showed him true
What “homogenized” could do…
Darwin Zero from the Land Down Under,
Ground Zero for deceptive plunder,
Lots of lies from lots of years,
All made to feed their Doomsday fears!
.
©2009 Dave Stephens
http://www.caricaturesbydave.com

December 8, 2009 9:47 pm

Anthony
Excellent presentation, as usual. Your sober analyses help to keep me on an even keel.
MW

Alan Sutherland
December 8, 2009 9:56 pm

For JJ
I understand your desire for purity to counter possible dishonesty in the science. But I also know that if the Team does not provide the answers requested, or even acknowledge the question, this puts the whole issue into limbo. In the meantime Copenhagen continues and some stupid deal is done, in which case Willis’s work becomes irrelevant.
So I support the more aggressive approach. Maybe some points will be lost, but from what I see, more are likely to be won. The JJ system allows the Team to win. How much progress did Steve M make with his approach, similar to your suggestion? The CRU leaks show that the Team were playing Steve – in other words, science had nothing to do with it.
Alan

Christopher Hanley
December 8, 2009 10:34 pm

For those interested, this page gives access to a wide range of historic weather observations around Australia:
http://www.bom.gov.au/climate/data/index.shtml
e.g.
http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_display_type=dataGraph&p_stn_num=014015&p_nccObsCode=36&p_month=13

JJ
December 8, 2009 10:45 pm

Alan,
“I understand your desire for purity to counter possible dishonesty in the science.”
It isn’t purity. It is a) common decency and b) good strategy.
It is not moral to accuse people of committing a crime absent proof. It is not moral to accuse people of committing a crime based on your own admitted misunderstanding of their methods, especially without first asking them if you have their method right.
And, it is very bad strategy to act like a crank, when the opposition’s (very successful) strategy to date has been to paint you as a crank.
“But I also know that if the Team …”
It’s NOAA, not Hansen or CRU. They publish their data and methods. It’s worth asking for a methods clarification.
“… does not provide the answers requested, or even acknowledge the question, this puts the whole issue into limbo.”
No it doesn’t. You’re still free to run with it. And if you’re stonewalled, you run with that, too. But you don’t claim more than you can prove, unless you want to be played.
“In the meantime Copenhagen continues and some stupid deal is done, in which case Willis’s work becomes irrelevant.”
Nonsense.
First, it is not necessary to make unsupported accusations of crime in order to make full use of the Darwin example. Stick with what you can prove.
Second, there isn’t going to be anything substantive coming out of Carbonhagen, and this issue doesn’t end there. Paint yourself as a bunch of cranks (every climate scientist that produces data we don’t understand is a criminal!!) while burning the issue out in two weeks, and you have given up the long game.
Climategate has given us tons of sensational material; there is no need to squander any of that, much less to waste something so important as an audit of the instrument record.
Third, shame.
JJ

Billy Sanders
December 8, 2009 10:49 pm

What a relief! I no longer have to worry about global warming. Or pollution. Or carbon emissions, or limited energy resources. I am so happy now. I can go back to being an idiot.

December 8, 2009 10:58 pm

As noted by others, the 1941 issue is most likely the result of record destruction in February 1942 after the major Japanese raid on Darwin (there were actually others, ongoing until 1943). There was no sealed road from Darwin to the rest of Australia until mid 1942, with transport primarily being by sea; it wouldn’t be surprising to find that any central record keeping relied on dispatch of the data once a year or once every 6 months, for example (and hence the record for parts of ’41 was destroyed in Feb ’42).
One thing to note is that you’re ignoring comparing temperature records (if they still exist) to the north. Darwin is 720 km from Dili, East Timor, and closer again to Maluku in Indonesia. It’s not perfect, but it might give you another data set to compare to (presuming the records exist somewhere).

J. Peden
December 8, 2009 11:08 pm

JJ
Which is why you should ask.
No, they should have “told”, right from the start. Otherwise they have no Science. No one even needs to ask. If they don’t tell, they have nothing scientific to “ask” about.
The “nasty” conclusions result from exactly what the “Team”, which has a large supporting gang, has done and is still doing: Perpetrating Scientific Fraud. Conspiring to pass off what is not Science as Science. There may be other frauds.

Brendan H
December 8, 2009 11:12 pm

Smokey: “The leaked emails and code only reinforces my suspicion, and any contrary and defensive red faced arm-waving does nothing to convince me otherwise.”
Correct me if I’m wrong, Smokey, but I sense that you don’t trust climate scientists very much.
In these situations I find it useful to repair to my rule-of-thumb measure of trustworthiness. Using the “Beard Index” I find the AGW scientists are highly trustworthy, and that there is empirical evidence to show this.
Compare the carefully cultivated beards of Schmidt and Mann with that of McIntyre. Clearly, Schmidt and Mann take great care to present a manicured and precise front, much like their climate work.
In contrast, McIntyre looks like he was just dragged out of bed. Can we expect precision and good work from a man who neglects his appearance so? I think not.
I have found my index a great comfort during difficult times such as we are currently undergoing. I think you would also find great benefit from this easily applied and highly reliable measure.

Billy Sanders
December 8, 2009 11:22 pm

>Not sure what your point is, Billy. I worry about pollution and limited energy resources. I don’t worry about carbon emissions and global warming.
Oh. Cherry-picked environmentalism. Nice.

SimpleGuy
December 8, 2009 11:26 pm

Wow. WOW.
A simple request: bugger all the “adjustments”. Over a large-enough data series, they should all average out back to zero anyway, absent some systemic effect like UHI.
What does the average temperature do if you just take raw, unadjusted data and average it all up? All stations, all sites, all years. I’d *start* with that data and analysis.
Enough manipulation. Publish the raw data, write the code to analyze it in just a few minutes, and publish the raw statistics and averages.
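A minimal sketch of that brute-force average in Python, reusing the v2.mean layout assumed earlier in the thread (note that a straight all-station average is dominated by the changing station mix over time, which is precisely the argument the homogenizers make against it):

from collections import defaultdict

def global_raw_average(path="v2.mean"):
    # Average every non-missing monthly value in GHCN v2.mean by year.
    # Deliberately naive: no gridding, no weighting, no adjustments.
    totals = defaultdict(float)
    counts = defaultdict(int)
    with open(path) as f:
        for line in f:
            year = int(line[12:16])
            for m in range(12):
                v = int(line[16 + 5 * m : 21 + 5 * m])
                if v != -9999:
                    totals[year] += v / 10.0
                    counts[year] += 1
    return {y: totals[y] / counts[y] for y in sorted(totals)}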

Richard Sharpe
December 8, 2009 11:35 pm

>Not sure what your point is, Billy. I worry about pollution and limited energy resources. I don’t worry about carbon emissions and global warming.
Oh. Cherry-picked environmentalism. Nice.

Yeah, Willis. Billy is the man who defines what environmentalism is, and if you don’t meet his requirements, then you are a Cherry Pickin’ Denier!

JJ
December 9, 2009 12:09 am

Willis,
In your analysis, you quoted from the NOAA GHCN methods document, regarding the homogeneity adjustments. This passage from that document demands your immediate attention:
A great deal of effort went into the homogeneity adjustments. Yet the effects of the homogeneity adjustments on global average temperature trends are minor (Easterling and Peterson 1995b). However, on scales of half a continent or smaller, the homogeneity adjustments can have an impact. On an individual time series, the effects of the adjustments can be enormous. These adjustments are the best we could do given the paucity of historical station history metadata on a global scale. But using an approach based on a reference series created from surrounding stations means that the adjusted station’s data is more indicative of regional climate change and less representative of local microclimatic change than an individual station not needing adjustments. Therefore, the best use for homogeneity-adjusted data is regional analyses of long-term climate trends (Easterling et al. 1996b). Though the homogeneity-adjusted data are more reliable for long-term trend analysis, the original data are also available in GHCN and may be preferred for most other uses given the higher density of the network.

Emphasis mine.
This would seem to be capable of containing the ultimate answer to many of your questions.
It is documented that the described homogeneity adjustment methods may result in ‘enormous’ adjustments at the single station level. It is asserted that these adjustments have minor effects on global average trends, and a reference is given in support of that assertion. It is documented that these adjustments produce datasets that are potentially less valid at the single station level than the unadjusted data, which is why they look goofy to you.
This provides an explanation for what you are seeing that does not involve outright criminal fiddling with the data at the individual station level to match ‘preconceived notions’.
The homogeneity adjusted series are held to be not only valid, but more representative than unadjusted series, for the purpose of regional and larger scale long term trend analysis.
If you have a legitimate beef with what you are seeing at Darwin, it would appear that it likely is not with adjustments that have been applied surreptitiously, but rather with the assertion that the adjusted data are superior for large scale, long term trend analysis.
If you are going to address that issue, you are going to need to abandon clock analogies and the like. The maths involved do not always operate intuitively, and cannot be refuted by such analyses. You are going to need to refute the theoretical basis, much as Steve M did with Mannian PCA. I’ll wager that is going to take some reading on your part. It would appear that papers by Easterling and Peterson would be a good place to start.
It remains a possibility that criminal fiddling has occurred with the Darwin station data, but an alternate explanation for large adjustments lies before you now, one previously outside your knowledge.
This is why you should ask.
JJ

December 9, 2009 12:11 am

Ah yes… the Beard Index. I know it very well.
I’m 61 and many of my friends and acquaintances have beards. Mine comes and goes like the winter snows. I’ve long noticed bearded men (sadly I’ve not yet interviewed a bearded woman) fall into two clear categories, as follows:
(1) The hang-loose types who hate shaving (because it basically sucks). These guys require obligingly mellow lovers who don’t mind smooching their way through that mess of hair ‘come what may’. These guys basically don’t give a damn. They don’t mind growing old either gracefully or disgracefully or grumpily, if that is to their taste. To these guys getting old usually means more laughs, deeper friendships, better wines, worse beards and increased refinement of their BS detector.
(2) Next we have the control freaks, those often anally retentive types who carefully construct that well-manicured beard, carefully sculpted around the facial contours and features to highlight their ‘good points’. These guys are usually stuck with whiny lovers who don’t like facial hair and demand a well-manicured flightpath to a very occasional peck on the lips or nose or whatever. These guys are generally secretly obsessed with wherever it is they perceive themselves to be (well) positioned in that all-important human pecking order. After all, it’s all about presentation, n’est-ce pas? Naturally, these guys are not ageing with anything other than resentment – indeed ageing too is simply another … Catastrophe.
Quite frankly, Type 1 are my kinda guys.
Type 2 have a beard essentially for one reason. It’s called Hide the Decline.

steven mosher
December 9, 2009 12:43 am

heres something odd.
whats weird about this
http://www.metoffice.gov.uk/climatechange/science/monitoring/locations.GIF

Rick Jelliffe
December 9, 2009 12:55 am

I am intrigued that you include YAMBA in your readings in the second diagram, and call it a Northern station. It would be surprising if it fits the bill for a neighbouring station. Is that what the CRU people did, or just what you did?
YAMBA, lat -29.43, long 153.35, is about 3,000 km from Darwin. It sits on a different ocean (the Pacific), next to my home town, and has a very equable climate. It is the YAMBA you get when dialing it up on the AIS site.
3,000 km, to give some context, is a little more than the distance between London, England and Ankara, Turkey. More than the distance between Detroit, Michigan and Kingston, Jamaica.

Mike Bird
December 9, 2009 1:13 am

One of the UEA CRU emails (0839635440.txt) is relevant to this. It is from John Daly to n.nicholls@BoM.Gov.Au and includes data from Station 66062 which shows Sydney Observatory’s annual mean temperature.
16.8 16.5 16.8 17 17 16.7 17.1 17.4 17.9 17.4 17.2 17.1 16.9 17 17.2 17.2 17.4
17.6 17.6 17.6 16.7 17.1 16.8 17.4 16.8 17.3 17.8 17.5 17.1 17.2 17.6 17.3 17.1
16.9 16.9 17.3 17.3 17.3 17.6 17.5 17.4 17.2 17.1 17.3 17.2 17.2 16.9 17.5 17.4
17.2 17 17.5 17.4 17.5 17.7 18.3 17.8 17.4 17.2 17.4 18.3 17.3 18 18.1 18 17.5
17.3 18 17 18.2 17.4 17.6 17.5 17.4 17.1 17.4 17.3 17.5 17.7 18 17.8 18 17.4
17.8 16.8 17.5 17.4 17.6 17.6 17.2 17.4 17.9 17.9 17.6 17.7 17.8 17.7 17.6 17.8
18.3 18 17.6 17.8 17.8 17.8 18.1 17.9 17.5 17.8 18.3 18 17.7 17.3 17.5 18.5 17.4
17.8 17.7 17.8 17.7 18 18.5 18.2 17.8 18.1 17.5 17.8 17.8 18 18.6 18.1 18.1
18.6
These aren’t labelled, but the start year is 1859. If you download the current 66062 data from BoM there is complete agreement until 1970, after which there is no agreement. A partial comparison is given here:
Sydney 66062
Year 2009 data 1992 data Difference
1961 17.8 17.8 0.0
1962 17.8 17.8 0.0
1963 17.8 17.8 0.0
1964 18.1 18.1 0.0
1965 17.9 17.9 0.0
1966 17.5 17.5 0.0
1967 17.8 17.8 0.0
1968 18.3 18.3 0.0
1969 18.0 18.0 0.0
1970 17.7 17.7 0.0
1971 17.9 17.3 0.6
1972 18.0 17.5 0.5
1973 18.7 18.5 0.2
1974 17.8 17.4 0.4
1975 18.4 17.8 0.6
1976 18.0 17.7 0.3
1977 18.4 17.8 0.6
1978 18.0 17.7 0.3
1979 18.4 18.0 0.4
1980 18.8 18.5 0.3
Once again, a step-type adjustment followed by a somewhat random walk.
Kind of intriguing that it is so similar.
Mike Bird

December 9, 2009 1:20 am

The law of large numbers (as the number of samples increases, the average and the median come ever closer to the true value) suggests that with many stations, the raw and the homogenized station data should start to look similar. Alas, they don’t. A persistent false cooling trend in the raw data could still justify the adjustments, but I would sooner guess that there is a false warming trend due to urban heat islands than a false cooling trend.

Frederick
December 9, 2009 2:29 am

I live in France. On Monday I was driving on the motorway here and was listening to “Radio Traffic” for news about any snarl ups when they broadcast a short interview with a person (can’t remember his name but I think he was Senegalese) who has been appointed by the environment minister here – Borloo – to work on the climate justice campaign at Copenhagen and after.
He described how he had been invited by Borloo into his office, and that Borloo had shown him a photo of the earth by night on which he pointed out that, while there were lots of lights in Europe and South Africa, the whole strip of central Africa was dark. According to this person, Borloo asked him to work on the electrification of Africa. This, he said, would be paid for by 250 billion euros raised from a Tobin tax on financial transactions.
This was news to me. Also, it makes no sense. First, electrifying Africa is not going to cut CO2 emissions. Second, the Tobin tax has not been passed into law. It is still talk. And to assume that you can raise 250 billion euros from it is crazy, since as soon as a tax like this is imposed the number of transactions will fall and you will collect less than you might think. Also, who asked us if we want this? But it seems to have been decided. Crazy!

Frederick
December 9, 2009 2:39 am

I found this. OK, it’s not decided, but it is being pushed by France at Copenhagen.
http://www.ambafrance-us.org/climate/interview-jdd-jean-louis-borloo/

Aussie Gal
December 9, 2009 3:37 am

From a web-site in Australia that struck me as very poignant when seeing Stuffushagen in the news:
My parents used to recall the times pre-WWII and the rise of fascism and the Nazis. They often referred to the Nuremberg Rallies… the whipping up of fanatical belief and ideology… the shrill propaganda spewed forth by those sated with power, hate and bigotry. They also spoke of the punishment inflicted on those who dared to disagree.
My parents have now passed on, yet I was reminded of their stories while I watched the reports on the events unfolding in Copenhagen. For the first time in my life, I felt a sickening sense of fear for the future.
It might be for a different cause and acted out by different players, yet the message is the same…the cult is powerful, ruthless, intolerant…and God help those who disbelieve.
Hmm, makes one think hey?

Richard S Courtney
December 9, 2009 3:53 am

Willis:
A very fine analysis.
I have one question.
Do you have an explanation for the homogeneity adjustment made for a single year in 1900 (as indicated in your Figure 7)?
It seems that the purpose was to force agreement at 1900 between the raw and adjusted data sets.
Other adjustments are much larger than the adjustment I am querying, but my query is important.
If the 1900 adjustment was only made to force agreement at the start of the century then this adjustment is de facto evidence that adjustments are made for purely arbitrary reasons.
Richard

Basil
Editor
December 9, 2009 4:04 am

steven mosher (00:43:07) :
heres something odd.
whats weird about this
http://www.metoffice.gov.uk/climatechange/science/monitoring/locations.GIF

What are you getting at? The uneven coverage of the US? (Lots of red dots in the western US, not so many in the midwest — where it is cooler). But the red dots are just a subset of the total.
Or is it the dots sprinkled throughout the oceans, in places where there might not seem to be islands, normally?
What do you see?

Basil
Editor
December 9, 2009 4:09 am

JJ (00:09:31) :
I quoted from the same section earlier, but Willis apparently missed it (easy enough, in a thread this huge). I’d be curious about his take on it. I also asked if the bit about “Yet the effects of the homogeneity adjustments on global average temperature trends are minor” is quantified anywhere.

Gail Combs
December 9, 2009 4:45 am

It seems to me the best place to start is with the raw daily data and find out how many “missing” months have a small handful of days missing, and estimate the monthly average for those days, either by ignoring the missing days or interpolating them.
John, thanks for your interesting comment. I agree that the daily data needs to be looked at … so many clowns, so few circuses. Also, we need an answer about dealing with the missing data. Interpolate? If so, how? Another issue for surfacetemps.org to publicly wrestle with.
Reply:
Just a suggestion. I would think using the mean of the available data, even for a month with days missing, would be the best way to go. The effect of using incomplete data sets would be added to the error analysis and not to the data itself. The data would have footnotes detailing what information is missing. The statisticians should be able to address this. Given the number of individual stations used to estimate the global temperature, the deviations from the true means of the months calculated from incomplete data should nullify each other. The amount of error introduced would be far less than the error introduced by "artificial interpolation". Interpolation could only be done by determining which stations best parallel each other and the offset between them. However, I would think parallel and offset are better used in determining the probable error introduced by using a particular incomplete data set than in "adjusting" the information with a "best-guess fudge".
I would also agree with Mike (12:14:18):
Once the data is "cleaned up" and monthly averages determined, I see the advantage of using "acceleration as the measuring tool instead of measured value", because there would be no need to try to splice data sets from different locations together into one long data set.
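To sketch that suggestion in code – my own toy example with invented readings, not any official procedure:

import math
import statistics

daily_means = [31.2, 30.8, None, 31.5, None, 30.9, 31.1]  # None marks a missing day
present = [t for t in daily_means if t is not None]

month_mean = statistics.mean(present)
# Standard error from the days we actually have: fewer days means a wider
# error bar, but no "best-guess fudge" ever enters the data itself.
std_err = statistics.stdev(present) / math.sqrt(len(present))
print(f"mean = {month_mean:.2f} +/- {std_err:.2f} ({len(present)} of {len(daily_means)} days)")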

Ryan Stephenson
December 9, 2009 4:46 am

I took a look at the data for Ross-on-Wye from the GHCN database. I took this one because Ross is an old town with long records, untroubled by very heavy traffic, and therefore any Stevenson screen there is unlikely to have been tainted by "adjustments". This was the first station I looked at – NO GLOBAL WARMING!
I would give you a link, but the GHCN database is down. What a curious coincidence…
Anyway, I'm looking in detail at the statistics for Stornoway now. A small island in the far north. I had hoped that the data would be untainted, but it seems the Stevenson screen is next to an airport – bet that wasn't there in the '30s! Oh, and it now has electronic thermometers for remote reading – bet they didn't have those in the '30s either!
Anyway, the means for the annual distribution show a 0.35 Celsius increase in the last 10 years compared to the first 10 years. Problem is, the monthly averages show changes ranging from -0.03 to +1.0 Celsius over the same period. That tells you that you can't rely on the annual means, because they don't necessarily reflect the means of the underlying distributions. Furthermore, even in the monthly averages you can see a variation of 6 Celsius across the Januaries from 1931 to 2008. This is due to what is commonly referred to as weather. So the climate change signal is 0.35 Celsius in a weather signal of 6 Celsius. Is that statistically significant? I don't think so.
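That significance check can be sketched in a few lines – invented numbers of roughly these magnitudes, not the actual Stornoway data:

import math
import random
import statistics

random.seed(0)
jan_early = [random.gauss(5.0, 1.5) for _ in range(10)]   # first-decade Januaries
jan_late = [random.gauss(5.35, 1.5) for _ in range(10)]   # last decade, shifted +0.35 C

diff = statistics.mean(jan_late) - statistics.mean(jan_early)
std_err = math.sqrt(statistics.variance(jan_early) / 10 + statistics.variance(jan_late) / 10)
print(f"difference = {diff:+.2f} C, standard error = {std_err:.2f} C")

With year-to-year weather noise of this size, the standard error of the decadal difference comes out well above 0.35 C, which is exactly the point about statistical significance.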

Richard M
December 9, 2009 5:30 am

I'm a little confused as to how temperature trends are computed, with or without raw data. It was mentioned that CRU started their use of Darwin in 1963. Exactly what do they use for Darwin temps prior to 1963? Is this where they use an average of other stations? If not, then throwing a tropical station into the middle of a sequence would clearly bias the results. Is this what E.M. Smith often refers to as "the march of the thermometers"?

Jim
December 9, 2009 5:51 am

JJ (00:09:31) :
If one uses temp station A to adjust temp station B, then station A gets more weight. The more it is used to adjust other stations, the more weight it gets. This procedure does not remedy the "paucity" of stations; it just spreads the influence of a given station around. These guys have not proved that their method makes the data more suitable for a generalized climate-change quantification. Just saying it does not make it so.

Steve
December 9, 2009 6:38 am

An interesting post – hope you get the coverage you deserve!

Richard S Courtney
December 9, 2009 6:39 am

Willis:
I add another question.
In your analysis the adjustments seem to end at or near 1979. This was when the satellite data series started.
There is no calibration standard against which to compare the station records and the HadCRUT, GISS and GHCN data sets. Homogenisation adjustment of station records alters the HadCRUT, GISS and GHCN data sets; the station records and those data sets have each been subjected to alterations.
The series of alterations to the HadCRUT, GISS and GHCN data sets seems to have had the effect (intention?) of bringing them into closer agreement with each other. But the satellite data (i.e. RSS and UAH) provide an alternative (and independent) basis for comparison.
The RSS and UAH data sets each show little positive trend since the start of their compilation in 1979.
So, my question is:
Is the Darwin data typical of other station data in GHCN, in that it has large positive trend adjustments applied by homogenisation prior to ~1979 but not after 1979, when the 'satellite era' began?
If so, then there is clear reason to distrust all the homogenised station records and the HadCRUT, GISS, GHCN data sets for time prior to the ‘satellite era’.
Richard

December 9, 2009 6:41 am

Hi –
First of all, kudos to Mr. Willis Eschenbach: a right scrum job there.
Of all the critiques – and I skimmed the comments – it seems that no one has noticed the fundamental error that the adjusters made.
If I am constructing a proxy, I need to find a baseline, one “best fit” or best practice estimation point or series of data points where I can then leverage incomplete knowledge to form a greater whole.
As someone who handles a huge amount of industrial data, the closer you are to the current date, the more confidence you generally have in the data, as reporting is better, samples are more complete, turnaround is faster (resulting in faster revisions) and if you have a question about the validity of a data point, you can find an answer with recent data (simply ask and someone may remember), but asking about a monthly data point of 23 years ago results in shrugs and “dunno”.
Now, that said, I do understand that adjustments are applied to data, especially if the stations involved have changing environments. Given the difficulties in maintaining a constant operating environment, I can understand the rationale behind making corrections. But why did they do this from a baseline in the 1940s?
Data must be understandable and pristine: hence if I have a daily temperature reading (and the poster who points out that this is an average between daily highs and lows is correct: without those, these numbers are compromised unless you are only looking at the vector of the average rather than its level… but I digress), I want that to remain unchanged, since I can then simply add new data points without any adjustments (until, of course, something happens that requires an adjustment: in that case, I rebase the series so that my current readings always – always! – enter the database unadjusted).
To repeat: the most current data is inviolable and is not to be adjusted!
This is exactly, it seems, to be what Mr. Eschenbach did in his Figure #6.
The egregious error of the original data processing was to apply the corrections from left to right: they should be applied right to left, i.e. from the current data point backwards.
By applying them from the first data point forwards, they are making the assumption that the oldest data is the most accurate and hence forms the baseline for all subsequent data corrections: from what I've understood of the data collection methods, this is, shall we say, a somewhat… heroic assumption.
This is, really, basic statistical analysis. I’d love to be snarky about this and ask if climatologists have ever had statistics, but know too little about what they learn during their training to do that with any degree of accuracy.
Pun intended.
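The "right to left" rule is easy to sketch – a toy series with invented breakpoints, purely illustrative:

# Toy station record with two artificial steps (a site move and a screen
# change, say); all values and breakpoints are invented for illustration.
years = list(range(1930, 1960))
raw = [29.0 if y < 1941 else (30.5 if y < 1950 else 30.0) for y in years]

# Documented discontinuities: (first year of the new regime, new-minus-old jump).
steps = [(1941, +1.5), (1950, -0.5)]

# Work backwards from the present: shift everything before each breakpoint so
# the segments splice together, leaving the newest segment untouched.
adjusted = raw[:]
for year, jump in sorted(steps, reverse=True):
    adjusted = [t + jump if y < year else t for y, t in zip(years, adjusted)]

assert adjusted[-1] == raw[-1]  # the most recent reading enters unadjusted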

December 9, 2009 6:45 am

It appears you quote the explanation for the homogenisation results in your own entry. Have a look at http://www.bom.gov.au/climate/change/datasets/datasets.shtml which details how and why the data were homogenised (all the research is public). Their opening paragraph:
“A change in the type of thermometer shelter used at many Australian observation sites in the early 20th century resulted in a sudden drop in recorded temperatures which is entirely spurious. It is for this reason that these early data are currently not used for monitoring climate change. Other common changes at Australian sites over time include location moves, construction of buildings or growth of vegetation around the observation site and, more recently, the introduction of Automatic Weather Stations.”
No scandal at all, just a change in thermometer housing in Aus. The resulting data set for Darwin is at: http://www.bom.gov.au/cgi-bin/climate/hqsites/site_data.cgi?variable=meanT&area=nt&station=014015&dtype=anom&period=annual&ave_yr=3

JJ
December 9, 2009 6:51 am

Basil (04:09:16) :
“I quoted from the same section earlier, but Willis apparently missed it (easy enough, in a thread this huge).”
So you did! I do not doubt that Willis missed your post. I had missed it, too. I now feel a bit like Scott, showing up at the South Pole only to find Amundsen’s flag flapping in the breeze 🙂
“I also asked if the bit about “Yet the effects of the homogeneity adjustments on global average temperature trends are minor” is quantified anywhere.”
Yes, and you also provided this link:
http://www.ncdc.noaa.gov/img/climate/research/ushcn/ts.ushcn_anom25_diffs_urb-raw_pg.gif
which does appear to quantify the 'minor' effect of the homogenization routine. Huh, 0.6C. Fully half of the supposed 'global warming' effect over the period. Yeah, that's minor all right. And that shape. It seems so familiar …
Why would a homogenization routine introduce or amplify that shape?

michaelfury
December 9, 2009 7:04 am
Bill Schulte
December 9, 2009 7:22 am

Need a PDF of this one.
REPLY: See further up in the comments; there is a link. – A

Jim
December 9, 2009 7:35 am

***************
Dominic White (06:45:19) :
“A change in the type of thermometer shelter used at many Australian observation sites in the early 20th century resulted in a sudden drop in recorded temperatures which is entirely spurious. It is for this reason that these early data are currently not used for monitoring climate change. Other common changes at Australian sites over time include location moves, construction of buildings or growth of vegetation around the observation site and, more recently, the introduction of Automatic Weather Stations.”
No scandal at all, just a change in thermometer housing in Aus. The resulting data set for Darwin is at: http://www.bom.gov.au/cgi-bin/climate/hqsites/site_data.cgi?variable=meanT&area=nt&station=014015&dtype=anom&period=annual&ave_yr=3
*******************
So why do they assume the drop is wrong? Maybe all the previous readings were too high and need to be adjusted downwards. They always seem to have an explanation for what they did, but never proof that it yields an accurate temperature history. It looks like a bunch of hand-waving to me.

MattB
December 9, 2009 7:37 am

Does the site have any comment re: Dominic White’s post at 06:45:19? Sounds very sensible to me.


MattB
December 9, 2009 7:43 am

Hmm, actually I can answer it myself: I think the BOM refers to the introduction of Stevenson screens circa 1910, so not in the 1940s era.

MattB
December 9, 2009 7:51 am

darwin temperature record: http://www.john-daly.com/darwin.htm
“Dear John,
Further to my emails of earlier today, I have now heard back from Darwin Bureau of Meteorology. The facts are as follows.
As previously advised, the main temperature station moved to the radar station at the newly built Darwin airport in January 1941. The temperature station had previously been at the Darwin Post Office in the middle of the CBD, on the cliff above the port. Thus, there is a likely factor of removal of a slight urban heat island effect from 1941 onwards. However, the main factor appears to be a change in screening. The new station located at Darwin airport from January 1941 used a standard Stevenson screen. However, the previous station at Darwin PO did not have a Stevenson screen. Instead, the instrument was mounted on a horizontal enclosure without a back or sides. The postmaster had to move it during the day so that the direct tropical sun didn’t strike it! Obviously, if he forgot or was too busy, the temperature readings were a hell of a lot hotter than it really was! I am sure that this factor accounts for almost the whole of the observed sudden cooling in 1939-41.
The record after 1941 is accurate, but the record before then has a significant warming bias. The Bureau’s senior meteorologist Ian Butterworth has written an internally published paper on all the factors affecting the historical Darwin temperature record, and they are going to fax it to me. I could send a copy to you if you are interested.
Regards Ken Parish”

Slartibartfast
December 9, 2009 7:59 am

The “smoking gun” at Ground Zero

Oh, great. We all needed another Truther pet conspiracy theory.

Gail Combs
December 9, 2009 8:02 am

Willis Eschenbach
“…..Because I assure you, with these good folks, I have a host of experience that “Just ask” doesn’t work.
JJ, I appreciate both the tone and the content of your comments. In a regular scientific situation, this would never come up, and I would just ask, you would be totally correct. But these days, much of climate science is not science at all. It is people fiercely fighting to withhold information from the public. When they are fighting to keep information secret, “just ask” is just inadequate.”

Willis,
I would like to add: when "just ask" is repeatedly blown off in a scientific discussion – a discussion where the data and methods are withheld – then the discussion is no longer in the realm of science at all. It has devolved into a playground pi$$ing contest, and the scientists withholding data should no longer be considered scientists but political hacks, until they return to the realm of science – OPEN DEBATE, unbiased peer review and gentlemanly conduct towards others.
I have nothing but contempt for these people who sully the name and methods of science.

Richard Sharpe
December 9, 2009 8:07 am

Jim (07:35:41) says:

***************
Dominic White (06:45:19) :
“A change in the type of thermometer shelter used at many Australian observation sites in the early 20th century resulted in a sudden drop in recorded temperatures which is entirely spurious. It is for this reason that these early data are currently not used for monitoring climate change. Other common changes at Australian sites over time include location moves, construction of buildings or growth of vegetation around the observation site and, more recently, the introduction of Automatic Weather Stations.”
No scandal at all, just a change in thermometer housing in Aus. The resulting data set for Darwin is at: http://www.bom.gov.au/cgi-bin/climate/hqsites/site_data.cgi?variable=meanT&area=nt&station=014015&dtype=anom&period=annual&ave_yr=3
*******************
So why do they assume the drop is wrong? Maybe all the previous readings were too high and need to be adjusted downwards. They always seem to have an explanation for what they did, but never proof that it yields an accurate temperature history. It looks like a bunch of hand-waving to me.

Indeed. If you look at the Darwin data there is a gentle decline in temperatures from 1882 to about 1939, and then there is a sharp drop.
Even if we assume that the sharp drop is due to a change in measurement method, the gentle decline from 1882 to 1939 still needs to be explained. We should not throw the baby out with the bath water.

Richard Sharpe
December 9, 2009 8:13 am

MattB (07:51:04) quotes Ken Parish:

As previously advised, the main temperature station moved to the radar station at the newly built Darwin airport in January 1941. The temperature station had previously been at the Darwin Post Office in the middle of the CBD, on the cliff above the port. Thus, there is a likely factor of removal of a slight urban heat island effect from 1941 onwards

However, there is a problem here. The temperature data from 1882 to 1939 shows a gentle decline over that period followed by a sharp decline.
I do not think that Darwin was a bustling, dynamic city in 1882 that then went into decline, reducing the UHI.

Gail Combs
December 9, 2009 8:18 am

Brendan H (23:12:57) :
….In these situations I find it useful to repair to my rule-of-thumb measure of trustworthiness. Using the “Beard Index” I find the AGW scientists are highly trustworthy, and that there is empirical evidence to show this…..
I have found my index a great comfort during difficult times such as we are currently undergoing. I think you would also find great benefit from this easily applied and highly reliable measure.
Gee, Brendan H, now you have me confused. I thought that was the pompous donkey index! By doing a scientific comparative analysis of a picture of Steve against a picture of Einstein, Steve is obviously higher on the intelligence scale than Schmidt and Mann – as, for that matter, is "Harry" Harris.

December 9, 2009 8:20 am

The most valuable service of the whistleblower is the "how they did it".

December 9, 2009 8:28 am

From over here in Calee forneee yaah, a heartfelt thank you for deciphering this "man-made disaster" – or is it the new global game of climate extortion!

MattB
December 9, 2009 8:32 am

Richard, why is one station showing a very, very marginal decline in the late 1800s of any concern in the slightest?

Gail Combs
December 9, 2009 8:34 am

Richard Sharpe (23:35:44) :
“….Yeah, Willis. Billy is the man who defines what environmentalism is, and if you don’t meet his requirements, then you are a Cherry Pickin’ Denier!”
Interesting how Billy and his friends' arguments come not from the realm of science but from the realm of law – and most politicians, at least in the USA, are lawyers.
“When the law is against you, argue the facts. When the facts are against you, argue the law. When both are against you, attack the plaintiff.” – R. Rinkle
“The legal system’s often a mystery, and we, its priests, preside over rituals baffling to everyday citizens.” – Henry Miller

Steve S.
December 9, 2009 8:37 am

michaelfury (07:04:50) :
Hey Michael, your 9/11 conspiracy link would be best posted at RealClimate, MediaMatters or HuffingtonPost, etc., where AGW warmers are also 9/11 conspiracy loons.
They are as certain of exploded buildings as they are of CO2 emissions flooding and boiling the planet.

bill
December 9, 2009 8:44 am

Ryan Stephenson (04:46:20) :
A UK spaghetti graph for you.
Figures: unadjusted Met Office
http://img410.imageshack.us/img410/8996/ukspaghetti.jpg
UK averaged
http://img266.imageshack.us/img266/5720/ukaveraged.jpg
Some stuff from the islands
http://img689.imageshack.us/img689/188/ukislandspaghetti.jpg

Neo
December 9, 2009 9:00 am

Why do you think they call it "Climate Change"?

tallbloke
December 9, 2009 9:02 am

magicjava (16:31:09) :
Just to be clear, I agree that the CRU code is bad code. I agree that studying it is worthwhile.
But unless it can be shown that the code was used to produce data or reports that were made available to scientists outside CRU or to the public, it’s not a smoking gun.

Point taken. Though I’d say that whatever the code was used for, internally or externally, it’s evidence of a severe lack of quality control and programming ability!

tallbloke
December 9, 2009 9:05 am

weschenbach (16:05:36) :
tallbloke (14:57:12) : Thanks, tallbloke, quote as you wish. Regarding his questions/statements:

Thanks Willis. His first reaction was that 2.5C adjustments are not unheard of, and since Darwin varies 10C a day, a change in time of obs could account for it.
I've asked him to provide examples of 2.5C adjustments. If he comes up with any, we'll get some more smoking guns to add to the collection. 😉

Richard Sharpe
December 9, 2009 9:18 am

MattB (08:32:59) asks:

Richard why is one station showing a very very marginal decline in the late 1800s of any concern in the slightest?

You are seemingly very incurious.
UHI was advanced as one explanation for the sharp decline in the 1939/1941 timeframe. Yet, that explanation is not compatible with the record for the prior 60 years. I can think of no measurement or mechanical reason for the prior decline.

George E. Smith
December 9, 2009 9:35 am

“”” Ryan Stephenson (02:46:32) :
Can I please correct you. You keep using the phrase “raw data”. Averaged figures are not “raw data”. Stevenson screens record maximum and minimum DAILY temperatures. This is the real RAW data.
When you do an analysis of temperature data over one year then you should always show it as a distribution. It will have a mean and a standard deviation. Take the UK. It may have a mean annual temperature of 15Celsius with a standard deviation of 20Celsius. “””
So Ryan, if a Stevenson screen records the maximum and minimum daily temperature (the RAW data), just what is the purpose of showing that as a distribution?
The RAW data is a record of DIFFERENT observations. Suppose I go to the London Zoo and record what creature I find in each cage or enclosure or display area. Should I then show this RAW data as a distribution and calculate its mean and standard deviation? Would I perhaps find that the mean animal in the London Zoo is a Wolverine, and the standard deviation is a Lady Amherst pheasant?
Why all this rush to try and convert real RAW data into something totally false and manufactured?
The temperature is different every place on earth and changes with time in a way that is different at every place; so why try to replace all of that real RAW information with two numbers that at best are total misinformation?
It seems to me that statisticians, having run out of useful things to calculate, gravitate towards climatology and start applying their methodologies to disguising what the instruments tell us the weather and climate is, or has been.
GISStemp and HadCRUT are simply that; mathematical creations of arbitrary AlGorythms applied to recorded information of historic weather data; the results of which have no real scientific significance, as far as planet earth is concerned. They certainly don’t tell us whether living conditions on earth are getting better or worse; or even how good they might have been at some past epoch.

Fred Lightfoot
December 9, 2009 9:44 am

Thank you, Sir!
Let's hope the idiots don't put the world into reverse gear. I have posted my Representative the primer (for 5-year-olds) with the WUWT link.

J.Hansford
December 9, 2009 9:53 am

When the Stevenson screens came in, it would be interesting to see what they did to "adjust" for that change…. That is why there is a step change for Darwin around 1940-41 in record zero, I'd say.

Sjoerd Schreuder
December 9, 2009 9:56 am

bill (08:44:07) :
Ryan Stephenson (04:46:20) :
A uk spaghetti graph for you
Figures unadjusted met office
http://img410.imageshack.us/img410/8996/ukspaghetti.jpg

Did I notice "De Bilt" in there? That's a village in The Netherlands, and it's clearly not in the UK. The Dutch met office, KNMI, is sited there.

Slartibartfast
December 9, 2009 10:11 am

It happens that we (the US) have a meteorological site at Darwin proper. Coordinates are 12.4246°S 130.891597°E. I have no idea how that site relates to the sixty-seven-year-old thermometer at Darwin in terms of position, but it's got enough instrumentation to corroborate or invalidate the adjustments to Darwin's thermal record, I'd guess.
Website is http://www.arm.gov/sites/twp/c3
I just don't know how to access the data. It's only about seven years old, though, so it could only serve to validate recent data.
I’d guess the AGW community has already done some work there, if I were in the habit of guessing.

Slartibartfast
December 9, 2009 10:16 am

I’d also guess that it’s within a few kilometers of the Darwin weather station, which should be CEFGW.

JJ
December 9, 2009 10:33 am

“When the stevenson screens came in, it would be interesting to see what they did to “adjust” for that change….”
From my read of the homogenization methods, they did not do anything specific to adjust for any specific issue (such as Stevenson screens, TOB, etc.) for stations outside the US.
For USHCN stations, they did apply a metadata-based homogenization: if the station metadata documented a site change or an instrument change, etc., then they applied a specific correction to the data in an attempt to account for it.
For stations outside the US, they did not do that. Instead, they homogenized each station to a reference station, using a first difference series. This does not apply an explicit, defined correction to a discontinuity of known source…
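A minimal sketch of what a first-difference comparison does – my own toy data, not the actual GHCN procedure, which has considerably more machinery:

# Year-to-year changes should agree between a station and its reference even
# when their absolute temperatures differ; an artificial step shows up as a
# single outlying difference.
station = [29.0, 29.1, 28.9, 30.4, 30.5, 30.3]    # a ~1.5 step in year 3 (invented)
reference = [25.0, 25.1, 24.9, 25.0, 25.1, 24.9]  # composite of neighbours (invented)

d_station = [b - a for a, b in zip(station, station[1:])]
d_reference = [b - a for a, b in zip(reference, reference[1:])]
residual = [s - r for s, r in zip(d_station, d_reference)]

# A fixed toy threshold stands in for a proper statistical test.
suspects = [i + 1 for i, r in enumerate(residual) if abs(r) > 0.5]
print(suspects)  # -> [3], the year of the artificial step

Note that this flags a discontinuity without knowing anything about its physical source, which is exactly the point above about non-US stations.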

T. Magee
December 9, 2009 10:43 am

Perhaps it is a mere coincidence, but there is a tremendous amount of similarity between the homogeneity adjustments shown in Figure 8 and the plot of the valadj array at http://cubeantics.com/2009/12/climategate-code-analysis-part-2/comment-page-4/?rcommentid=989&rerror=incorrect-captcha-sol&rchash=6b6ceabae8a0407d386b125f133a74ff.
I mean, the general shape is about the same, the magnitude is almost the same, and the time scale is close.

T. Magee
December 9, 2009 10:57 am

Figure 7, more so than Figure 8.

Kate
December 9, 2009 11:09 am

We’ve got to stop preaching to the choir and get on over to http://www.washingtonpost.com/wp-dyn/content/article/2009/12/08/AR2009120803402.html
The warmers are out in force, calling Palin a bimbo for speaking up and excoriating the WashPo for running an op-ed they don't like.

Street
December 9, 2009 11:25 am

A couple of things occurred to me:
1) The reasonable adjustments to stations should have a natural distribution centered on zero. Adjustments for station location or instrumentation changes should be equally positive and negative. The only decent reason for a positive bias among many stations would be moving stations from poor sites (beside an AC unit) to good sites (in a park), but even then, an adjustment would 'lock in' the bias of the poor site to all future measurements from the new site.
2) If #1 is true, and the net result of all adjustments is therefore zero, then global temperatures do not need adjustment, only gridding. Adjustment is only required for continuity when examining individual stations.
3) The exception in this process would be any form of econometric adjustment for heat island effect. However, this would almost always be a negative adjustment.
Bottom Line: If the sum of all global adjustments is positive, it’s a biased adjustment.
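That bottom-line check is trivial to run once someone has a table of adjustments – toy numbers here; the interesting exercise would be the real GHCN values:

import statistics

# Hypothetical per-station trend adjustments, adjusted minus raw, deg C/century.
adjustments = [0.6, -0.1, 0.3, 0.0, 0.4, -0.2, 0.5, 0.1, 0.2, 0.3]

net = statistics.mean(adjustments)
positive = sum(a > 0 for a in adjustments)
negative = sum(a < 0 for a in adjustments)
print(f"net = {net:+.2f} C/century; {positive} positive vs {negative} negative")

If station changes really were random, the net should hover near zero; a clearly positive net, or far more positive than negative adjustments, is the bias described above.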

Jeff L
December 9, 2009 11:53 am

My first thought on this is that what is needed is a project similar to the surfacestations.org project: a grass-roots volunteer project. Volunteers do the same type of analysis as Willis did for Darwin for as many of the GHCN stations as possible (the limiting factor will certainly be raw data) and have them compiled for the world to see on a website. It could be eye-opening. I am guessing, with the number of scientifically trained readers on WUWT, that this is potentially an achievable community science project.
Any takers? I’ll pitch in with some station analysis.

mbabbitt
December 9, 2009 12:16 pm

Can there be anything more upsetting and unsettling to a scientific evidence-based argument than to show that the evidence as presented is not only untrustworthy but perhaps intentionally so? And can we agree that if the temperature record has been massaged, twisted, encouraged, and otherwise manipulated to minimize inconvenient facts, that what we have is scientific fraud even if done in sincerity, with the best of intentions? Finally, why should the peoples of developing countries have to suffer more and longer and be denied the wonders and comforts of modern life just because of the faulty work of well-intentioned zealots who lost their ability to claim objectivity many years back?

December 9, 2009 12:49 pm

And I'm racking my brains trying to think what could possibly justify the amazing adjustments made to the record at Brisbane (Eagle Farm) International Airport over the latter part of the period 1950 – 2008. Any ideas?
http://thedogatemydata.blogspot.com/2009/12/raw-v-adjusted-ghcn-data.html

Basil
Editor
December 9, 2009 12:58 pm

JJ (06:51:00) :
Be careful on the 0.6 figure for the cumulative adjustments — they are using F, not C.

December 9, 2009 1:26 pm

Willis;
I went to surfacetemps.org – it seems to be selling park equipment. WUWT?
I have been analyzing South Texas temperature data, attempting to determine San Antonio UHI using 5 surrounding rural sites (using raw data). Since the SA site had a major move in June 1942 (from a rapidly growing downtown to a (then) rural airport), I looked to see how NOAA had handled the move:
In the 5 years before the move, NOAA subtracted 2°F (1.97 to 2.07) annually from the raw data. In the 5 years after the move, they subtracted 1°F (1.01 to 1.11) from the raw data.
I’d be willing to repeat your exercise using the whole NOAA dataset for South Texas once surfacetemps is up and running.
BTW, how do you decompress a UNIX .Z file using Windows? I want to check that what NOAA posts as raw data is truly raw.
Ryan Stephenson (04:46:20): …Anyway, the means for the annual distribution show a 0.35 Celsius increase in the last 10 years compared to the first 10 years. Problem is, the monthly averages show changes ranging from -0.03 to +1.0 Celsius over the same period.
If you want to see a bafflingly wide variation of trends, check out the averaged monthly Tmax & Tmin instead of Tmean.

December 9, 2009 1:29 pm

I think Willis is not correct to assume CRU have used the GHCN Darwin Zero. I also think it is wrong to assume Jones / CRU have simply used the GHCN station data. See my take over at:
http://www.warwickhughes.com/blog/?p=357

patrick healy
December 9, 2009 1:30 pm

Willis,
Really spectacular article. I nominate you for the Nobel Prize!
This has set me thinking. As a pure novice in all this, I know that in Scotland virtually all the recording stations are at or near airports.
Our local recordings are from Leuchars military airport, just across the water from Carnoustie here. That is a very busy military airport. For security reasons, presumably, we cannot gain access to verify the position of the measuring point.
I wonder if anyone has investigated the observatory in Armagh (pronounced arm – ahh) in Ireland.
This is sited in a small city which is very rural and has, to my knowledge, been recording for over 150 years. It should be a good source of unadulterated data.
Perhaps you or Steve could have a look at this.

Jon
December 9, 2009 1:53 pm

Makes me wonder. I can’t honestly judge this in anyway, I’m not qualified. I can’t buy AGW because I would have to buy it on faith. I can’t deny it either because I would have to deny it on faith. Bottom line, I have no opinion.
If they're going to devastate economies to "correct" the climate, they had better "fix" things in such a way that it will help us whether they're right or not. For example, nuclear power, solar thermal, and so on, done right, could help us to fight GW and to secure our domestic energy future (both providing jobs and security). It would help us either way. Those are the kinds of answers we need. What I hope we don't get are answers too specific to mitigating climate change.

CodeTech
December 9, 2009 1:55 pm

Tom in Texas (13:26:56) :
BTW, how do you decompress a UNIX .Z file using Windows? I want to check that what NOAA posts as raw data is truly raw.

WinRar does it…

denis
December 9, 2009 2:07 pm

Seems to me it would be best to use some process that reconciles temperature readings from the various stations by associating each with all other "nearby" readings (i.e., those not varying from some fixed time by more than a couple of minutes). The average of such a closely associated group would provide a global temperature reading at a specific time of day. …
There would seem to be numerous ways to aggregate such groupings to rule out real outliers without having to make arbitrary changes to individual readings.

JETSOLVER
December 9, 2009 2:23 pm

THANK YOU!!! This breakdown puts some meat on the bones of the manipulation, and gives us direct questions which can either be answered directly or obfuscated. I cannot adequately express my thanks for your and others' efforts to dig down through the available data on behalf of the rest of us just getting up to speed on the raw datasets (such as remain).
More please, it is appreciated by all seeking truth.

December 9, 2009 2:23 pm

CodeTech (13:55:17) :
Tom in Texas (13:26:56) :
BTW, how do you decompress a UNIX .Z file using Windows? I want to check that what NOAA posts as raw data is truly raw.
WinRar does it…
Thanks. Will try it.

geo
December 9, 2009 2:39 pm

What I want to know is: does GHCN make those kinds of adjustments in a… ahh… "artisanal" (to borrow a wonderful application of the word from Steve McI) manner – individually, one human studying the individual records for a station and making the changes – or does this happen as some impersonal computer runs an algorithm wholesale across its dataset, with rarely a human looking at outliers and going "oh, gee, that can't be an acceptable result"? And even if they do the latter, do they have any ability to change that one result and make the change "stick" for the future?
The two models of how this happens (individual vs. mass computer) do make a difference, it seems to me – both as to intent in the face of an obviously wrong result, and in finding out "who did this?" and how to fix it going forward.
If this is the work of a mass of interns, one station at a time over years… that'd really be one heckuva mess to untangle now. If it is the work of a computer program, it can probably be nudged somehow to kick out outliers on some criteria or other, to be adjusted individually if necessary by another file identifying them by station id – something along the lines of "don't do your usual algorithm here, because it sucks – do this instead".

Slartibartfast
December 9, 2009 2:40 pm

Oh, I just have Cygwin installed. Presto! You have a lot of *nix capabilities at your fingertips.

geo
December 9, 2009 2:44 pm

I guess what I’m wondering is did a human do that on purpose, or is this yet another application of the 80/20 rule of computer programming? (i.e. 80 percent of the work to do 20 percent of the cases)

Joel D
December 9, 2009 2:46 pm

http://scienceblogs.com/deltoid/2009/12/willis_eschenbach_caught_lying.php?utm_source=sbhomepage&utm_medium=link&utm_content=channellink
Willis,
You have got to love this post. You published your findings yesterday and already they have "proven" them false. Now they make some claim that you doctored data. You were trying to find how much of an adjustment was necessary to go from the raw data to the "corrected" data; how would you do this without making modifications to the raw data? Everything in their article is pure crap and they have quite obviously not even taken the time to read your article. What is wrong with these people?

John Smelt
December 9, 2009 2:57 pm

I'm sitting here watching the BBC following the warmist party line – no mention of opposing views, of course.
I cannot help but think that all this, coupled with the events in the UK of the last 12 years, is a precursor to Orwell's 1984: a series of excuses to govern and dominate the proles.

geo
December 9, 2009 3:00 pm

Oh, the 80/20 rule could apply to paper application of the algorithm as well – manually, by someone who doesn't have the authority, respect, and/or knowledge to challenge it.
I just wonder about the *actual* mechanics of how these adjustments get made. Is it Phil Jones and Gavin Schmidt poring over each one and coming to an agreement, with Michael Mann called in to break ties? Is it at least a climate PhD working from a script, who has the ability to go back to the script writer and say "look, your algo needs work – this is a ridiculous result when I follow it for this case"? Or is it interns working from a 20-page script of what to do, with not a chance in the world they'd tell anyone "this is rubbish!", or that anyone in authority would listen to them if they did? Or is a computer program doing it?
How does that *actually get done*? I want to know that before I start making decisions on incompetence vs. bias.

zosima
December 9, 2009 3:06 pm

@OP
You make basic mistakes that undermine your results. Some very basic background:
http://data.giss.nasa.gov/gistemp/
For example, your complaint about 500 km separation is simply facile:
“The analysis method was documented in Hansen and Lebedeff (1987), showing that the correlation of temperature change was reasonably strong for stations separated by up to 1200 km, especially at middle and high latitudes.”
Avoiding the mistakes of previous generations is exactly why scientific research is left to people with specializations in these fields. If you insist on doing amateur analysis, I would suggest you start with the most recent publications and follow the citations backward. That way you can understand exactly how and why the corrections are applied, rather than just guessing based on one input and their output.
@Street
A normal distribution will only apply if the measurements are samples of a random variable. You cannot assume this, and in this assumption's absence your analysis fails.

David
December 9, 2009 4:08 pm

Joel D, did you actually read that article? It’s pretty clear on the reasons why using the raw data without adjustments is going to give you bad results.

Methow Ken
December 9, 2009 4:12 pm

O.K.: Think we can now reasonably submit that WUWT has gone mainstream; at least in some quarters:
For those who may not have noticed, James Delingpole at the UK Telegraph now has a major piece up dated 8 Dec, where he quotes extensively from this thread start (including a graph) AND links directly to it. See:
http://blogs.telegraph.co.uk/news/jamesdelingpole/100019301/climategate-another-smoking-gun/
Not only that: The above piece was prominently featured in the ”Climate Change Debate” headline section on RealClearPolitics. Regardless of desperate attempts by AGW partisans to subvert and suppress it, I think the message is starting to get out to the wider world. . . .

Mike
December 9, 2009 4:29 pm

I’m waiting for RealClimate’s response. Perhaps you can prod them.
At the moment, as a casual observer, I notice this: plots 2–5 look the same as the IPCC plot 1. All you have to do is look at the time period they have in common. It seems a bit shady that the IPCC chose not to show pre-1920 data, which indicates a cooling trend, but presumably their full quantitative analyses take this into account.

M. Johnson
December 9, 2009 4:39 pm

As someone who works with analysis of scientific data for a living, the thing that most strikes me about the removal of “inhomogeneities” is that it is a technique that is generally thought necessary only for small data sets. When dealing with a sufficiently large data set, random errors cancel themselves out. For example, for every station moved to a warmer location, one would expect there would be one moved to a cooler location – so why correct at all?
Surely the surface temperature record of the entire world is a sufficiently large data set to analyze, at least once, without “correction” for random errors.

Ken
December 9, 2009 4:52 pm

Do realize what you have found here. The "homogenized" data matches exactly the "fudge factor" code in the CRU source code. Here is the code:
;
; Apply a VERY ARTIFICAL correction for decline!!
;
yrloc=[1400,findgen(19)*5.+1904]
valadj=[0.,0.,0.,0.,0.,-0.1,-0.25,-0.3,0.,-0.1,0.3,0.8,1.2,1.7,2.5,2.6,2.6,$
2.6,2.6,2.6]*0.75 ; fudge factor
(…)
;
; APPLY ARTIFICIAL CORRECTION
;
yearlyadj=interpol(valadj,yrloc,x)
densall=densall+yearlyadj
What this does is establish an array that artificially injects increases in temperature, automatically turning the data into a hockey stick. The hockey stick it creates matches exactly the one at Darwin Station Zero, comparing the raw and the homogenized versions.
Many people trying to debunk the source code say it is common practice in modeling to include ad hoc code for test purposes, not to be used to publish actual data. This proves a real-life application of the "fudge factor" code.
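For anyone without IDL handy, here is a rough Python equivalent of what the quoted snippet computes – my own translation sketch, where densall stands for whatever series was being "corrected":

import numpy as np

yrloc = np.array([1400] + [1904 + 5 * i for i in range(19)])  # 1904..1994, step 5
valadj = np.array([0., 0., 0., 0., 0., -0.1, -0.25, -0.3, 0., -0.1,
                   0.3, 0.8, 1.2, 1.7, 2.5, 2.6, 2.6, 2.6, 2.6, 2.6]) * 0.75

x = np.arange(1900, 1995)                 # years of the series being adjusted
yearlyadj = np.interp(x, yrloc, valadj)   # linear interpolation, as interpol() does
# densall = densall + yearlyadj           # the "artificial correction", year by year

The interpolated curve is flat early on, dips slightly mid-century, then climbs steeply to about +1.95 after the 0.75 scaling – the hockey-stick shape.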

December 9, 2009 5:46 pm

“Warwick Hughes (13:29:22) :
I think Willis is not correct to assume CRU have used the GHCN Darwin Zero. I also think it is wrong to assume Jones / CRU have simply use the GHCN station data. See my take over at;
http://www.warwickhughes.com/blog/?p=357
Well, thanks Warwick, much appreciate that.
Refer Steve Short (17:18:27) :
“(2) Can we rely on the accuracy (?) of an interpretation (?) that ‘the emails’ (remarkable isn’t it we at least all know exactly what these are 😉 suggest (?) that the CRU database relies (?) on GHCN (whew)?
Willis :
“That’s what Phil Jones said, and until they release their data, there’s no way to verify it.”
Exactly. Yet more evidence this Phil Jones is one ultra-slippery character. Glad I never had him as a ‘peer reviewer’. He doesn’t even have a manicured beard either – shame on him!

Slartibartfast
December 9, 2009 6:09 pm

Yes, verily. But if you’re Tim Lambert, it’s more about demonizing people you disagree with.

Street
December 9, 2009 6:09 pm

M. Johnson – Exactly my reasoning as well. I'd love to hear a plausible reason why it's necessary to apply positive adjustments to more stations than negative ones. Has anyone ever run a calculation of what the net adjustment is globally, and what the distribution looks like?
Another thing pointed out in the article:
“The final technique we used to minimize inhomogeneities in the reference series used the mean of the central three values (of the five neighboring station values) to create the first difference reference series.”
That seems to be a sensible approach to automated adjustment. However, I think problems may crop up if the areas covered by the individual stations are not equal. You're more likely to get multiple stations in metro areas. Let's say you have a cluster of 5 stations around a city. All of them should exhibit similar heat island effects, right? If the algorithm references the 5 nearest stations, this cluster would only reference itself and confirm its common heat island effect as 'real'. No real problem there.
Problem is, stations that are not clustered (likely when they are rural) may use reference stations in these metro clusters for adjustment. This may result in the positive trend due to heat island effect being 'transmitted' into the rural station via the adjustment process.
@zosima – The random variable is the difference in temp bias between the old and new locations and the old and new instrumentation. You can probably argue that newer instruments might be designed to prevent solar heating and run cooler, but location changes should exhibit a random bias on temps, shouldn't they?
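The quoted "central three of five" rule is just a trimmed mean, easy to sketch – invented first differences for one year across five neighbours:

# Drop the highest and lowest of the five neighbour values, average the rest.
neighbour_diffs = [0.10, 0.12, 0.11, 0.45, -0.30]  # one year, five stations (toy)

trimmed = sorted(neighbour_diffs)[1:-1]
reference_diff = sum(trimmed) / len(trimmed)
print(round(reference_diff, 2))  # 0.11: the two outliers are discarded

Note that trimming protects against one rogue neighbour but not against a bias all five share – which is exactly the UHI transmission worry above.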

carrot eater
December 9, 2009 6:12 pm

Willis,
Some questions, then.
How is anomaly defined in Fig 6 and 7? What is your baseline period? Is it a coincidence that Fig 7 starts at an anomaly of zero, or did you adjust the whole plot downwards to make it so?
You seem to recognise there was a station move in 1941. I find your ‘judgment call’ to not make an adjustment there to be quite odd. I’d suggest repeating Fig 6 and 7 with an adjustment for the station move, to see what it looks like. Further, did you ask what sorts of site adjustments might have been made after that time?

Ken
December 9, 2009 6:30 pm

Willis
Please look at the CRU source code I put in my last comment. The "homogenization" and the "fudge factor" curves match perfectly. It is a connection that proves beyond doubt the raw data was manipulated.

Ken
December 9, 2009 6:32 pm

Willis
OK so the “fudge factor” and “homogenization” are from two different sources but their manipulations are almost identical.

larryfromlinolakes
December 9, 2009 6:43 pm

Willis,
Here’s a bit of back and forth I had with a gent over at the Discovery Magazine blog…can you make any sense of his comment, and is there anything to it? Did they just adjust station zero to match 1 and 2?
61. Larry Johnson Says:
December 9th, 2009 at 9:27 pm
“50. MartinM Says:
December 9th, 2009 at 6:47 pm
What, this isn’t an explanation?
If you actually look at the raw data, it’s pretty bloody obvious why the station 0 record has been adjusted in the way it has. The step change around 1940 is obviously due to the shift of the station and the addition of a Stevenson screen. And the addition of an upwards trend from that point on is to bring it into line with stations 1 and 2, which track each other almost exactly, and both show a strong warming trend from 1940 (1950 in the case of station 1, since that’s when it starts) to 1990.”
Appreciate any valid explanation, but I'm not followin' ya. Willis says they all pretty much agree (all three), so why adjust any of them? Then he says they adjusted 0 and 1 but left 2 untouched. They all look pretty close to me. So I still think his concerns are valid. Maybe I'm missing something.
http://blogs.discovermagazine.com/intersection/2009/12/09/how-the-global-warming-story-changed-disastrously-due-to-climategate/comment-page-2/#comment-41786

Ken Bingham
December 9, 2009 6:51 pm

Plot the CRU fudge factor against the century and the hockey stick appears directly. Pairing each valadj value with its yrloc year from the code (values shown before the 0.75 scaling; a leading zero is anchored back at year 1400):

1904-1919: 0
1924: -0.1
1929: -0.25
1934: -0.3
1939: 0
1944: -0.1
1949: 0.3
1954: 0.8
1959: 1.2
1964: 1.7
1969: 2.5
1974-1994: 2.6

[0.,0.,0.,0.,0.,-0.1,-0.25,-0.3,0.,-0.1,0.3,0.8,1.2,1.7,2.5,2.6,2.6,$ <- CRU fudge factor
Plug the GHCN "homogenization" numbers into the CRU "fudge factor" algorithm. Or should we say ALGORE-RITHM?

Cre8tv
December 9, 2009 7:03 pm

Thank you. Unmask the charlatans and save truth!

Mike
December 9, 2009 7:09 pm

Yamba is a few hundred kilometers south of Brisbane on the eastern coast of Australia. There is absolutely NO WAY that any adjustment of Darwin could be made based on any data from Yamba. Is that what you're saying was done? Pure bunkum if it was.

WestHighlander
December 9, 2009 7:11 pm

Time to start from scratch! At this point our ground truth is just not there. So I hereby humbly offer, gratis, my action plan (it's also a bit cheaper than the Copenhagen Treaty, and will as a bonus produce lots of good jobs at good wages):
1) Spend the next two years designing a new state-of-the-art family of fully automated, reliable instruments, specified to produce a reliable, century-long, web-accessible local climate record.
2) Spend the next ten years surveying sites (good, better and best) and equipping them with the respective good, better and best equipment sets (the best being a fully redundant 3D profile of latent and sensible heat fluxes and H2O and CO2 concentrations, installed on a 100 m tower), in a ratio of 10^6, 10^4 and 1000 sites.
3) Spend the same 12-year period developing the best possible open analysis tools (Linux GNU-like), and testing, verifying and testing again (as we've done with the satellite MSU data).
4) Do the same with a new fleet of redundant satellite equipment packages which can piggyback on various satellites to monitor the sun, the earth's full-disk radiation balance, cloudiness, and volcanoes (including undersea, under ice, under sea ice, etc.).
5) Accumulate data from 2020 until 2060 (that gives us a couple of double-length solar cycles to compare against), and then see whether the best of the 2060-version GCMs (presumably much improved, with exaflop-caliber computing available in your pocket) can match our high-precision, high-accuracy, optimally sited ground- and space-truth data sets.
6) Then we can make a more credible judgment than we can at this time. The other BIG benefit: by making ALL of the data and all of the analysis tools freely accessible to any and all via a web browser, we can crush once and for all the "alchemist-like" AGW Gaia priesthood.

December 9, 2009 7:17 pm

J. Peden (23:08:27) nails it:
While I completely understand where JJ is coming from and admire the sentiment, I agree with J. Peden: "No, they should have 'told', right from the start. Otherwise they have no Science. No one even needs to ask. If they don't tell, they have nothing scientific to 'ask' about."
This is precisely the point. The "climate scientists" who pretend they are ginning out "scientific" conclusions that are utterly unverifiable and unreproducible – because they hide (or lose, or destroy) their data and methods – are not scientists, by definition. To claim the scientific mantle without sharing data and methods – remember, "no one needs to ask" – is, prima facie, fraud.
In contrast, Willis – like a bona fide scientist – has put his data and methods front and center for all to see and critique. He is entirely within his rights to call out the CRU for failing to do the same.
I think there is a good shot here, for someone, at putting together a massive False Claims Act case in the United States. Anyone have any idea how many millions of U.S. tax dollars have been squandered on this hokey "climate science" stuff?

Mike
December 9, 2009 7:18 pm

Westhighlander, I concur. Let’s get going with this. Yamba adjusting for Darwin in a chaotic system is pseudo science. We need an open source, volunteer funded and driven solution for this mess.

Mike
December 9, 2009 7:22 pm

If you want to know how mad it would be to adjust Darwin using Yamba data check out google maps http://maps.google.com/maps?hl=en&source=hp&q=yamba+australia&um=1&ie=UTF-8&hq=&hnear=Yamba+NSW,+Australia&gl=us&ei=0GUgS8eSC4q9ngfZtKzWDQ&sa=X&oi=geocode_result&ct=title&resnum=1&ved=0CAsQ8gEwAA Please tell me that Yamba was not used to adjust Darwin! please, please!

Mike
December 9, 2009 7:31 pm

I am going to patiently wait for Willis to confirm that Yamba was NOT used to correct Darwin. If Yamba was in fact used to correct Darwin, then I can honestly say that the Darwin "corrected" figures do not in any way reflect the historical temperature at Darwin. There are simply too many local climate variables between Yamba and Darwin – to such an extent that their climates are mutually exclusive. To begin with, Darwin is tropical, Yamba is temperate. How on earth can an adjustment be made between such environments? Can we please start a unified project to analyze and collate all ground station data?

Melinda Romanoff
December 9, 2009 7:38 pm

Sir-
Thank you for your work. I wish I had that kind of time, not owned by others, for that kind of effort.
Bravo!

MrData
December 9, 2009 7:57 pm
Neo
December 9, 2009 7:58 pm

I once attended a seminar given by a guy who was an expert in radio communications. During a break, he told us a story about how he had developed a burst transmitter design for an agency within the "intelligence community". In the process, he described how not only did this intelligence agency have guys designing radio transmitters that could be hidden, they had another set of guys, a "counter group", whose job it was to detect hidden radio transmitters. These two groups would go after each other in an attempt to come up with the best possible transmitters and the best possible methods of detection.
In climate science, we have a bunch of seemingly half-drunken academics who live off the government dole while they concoct amateurish schemes to prove something that, it seems, has been predetermined to be true, no matter the actual empirical data. The only guys trying to test their schemes are underfunded or doing the work on their own time, pro bono.
This process is obviously corrupt. It was never meant to provide the truth. If it were, the government research community would also have a fully funded "counter group" trying to prove that "Anthropogenic Global Warming" doesn't exist, has little impact, or at least can be easily mitigated – and thereby save billions, if not trillions, of dollars/euros/pounds on trying to prevent a non-problem.
The fact that there is no "counter group" immediately brings into question the purpose of the activity, and whether it is meant to be part of that "waste, fraud and abuse" that so often infiltrates all vestiges of government. The fact that this is an international activity makes one wonder if the UN has any real function except to give heads of state a chance to go shopping in New York City from time to time and travel to useless conferences where they can dine well and come up with new ideas on how to fleece their citizens at home.

December 9, 2009 9:51 pm

What information is contained in the “failed quality assurance” files? I presume these are the left-overs after the “value-adding” (or fact-removal) process?
Just probing without much knowledge.
My gut read is that you’ve done another fair-minded analysis. Maybe that’s what is disturbing to so many people.
Also, out of curiosity, Willis: in picking through the meta-data during your quest, have you ever seen anything approaching the chaotic “Harry-Read-Me” file?

Brendan H
December 9, 2009 10:44 pm

Steve Short: “Next we have the control freaks…”
I disagree. A disciplined beard is the sign of a disciplined mind. They adorn men who are in a hurry to get things done, to shape the world in their own dynamic image. They don’t leave any loose ends lying around and are fully in control of their domain.
Gail Combs: “By doing a scientific comparative analysis of a picture of Steve to a picture of Einstein…”
Not so fast. A common mistake among sceptics is failing to take into account that the Beard Index is multi-factorial. Adjustments must be made for cultural, geographical and other relevant factors.
The relevant factors for Einstein are that 1) he was the 20th century’s reigning scientific genius, and is thus entitled to a major adjustment upwards for the genius factor; 2) he wore a moustache, thus the downward adjustment for scruffiness is half that of fully bearded men.
Interestingly, Lord Christopher Monckton makes no appearance on my index. This is a puzzle, since, despite being beardless, this lack is fully compensated for by his eyebrows. I suspect he inhabits an index all of his own.

Paul R
December 9, 2009 10:45 pm

Mike (19:31:13)
I am going to patiently wait for Willis to confirm that Yamba was NOT used to correct Darwin. If Yamba was in fact used to correct Darwin, then I can honestly say that the Darwin "corrected" figures do not in any way reflect the historical temperature at Darwin. There are simply too many local climate variables between Yamba and Darwin – to such an extent that their climates are mutually exclusive. To begin with, Darwin is tropical, Yamba is temperate. How on earth can an adjustment be made between such environments?
So where is the actual thermometer at Yamba? I'll wait patiently for someone to tell me it's not in the car park of the Moby Dick Motel. : )
You couldn’t make this stuff up.

Rick Jelliffe
December 9, 2009 10:58 pm

(Follow up on Yamba)
So let me get this right. The HadCRUT3 paper shows hundreds of stations that it says are used, which correspond to the Aust BOM stations. See figure 1 http://hadobs.metoffice.com/hadcrut3/HadCRUT3_accepted.pdf
But you quote Professor Karlen that NASA only has three stations. You pick three stations, one monsoonal (Darwin), one desert (Alice Springs), one temperate coastal (Yamba), and add them, and then you get surprised that the result does not look like anything the the IPCC graphic? What is the point of that?
Then you do all sorts of elaborate reverse engineering, to discover that there has been some kind of a data adjustment, when the owners of the data (the Aust. BOM — I don’t think NASA had any stations in Australia in the early 1900s!) warn in their page on the Australian station figures that the early numbers are unreliable without an adjustment.
It seems to me that your figure 4 is the only one of much interest. Where is this smoking gun?

JJ
December 9, 2009 11:13 pm

Willis,
“First, yes, I read the text you quoted.”
Perhaps, but you don’t seem to understand the implications.
Your post centers on evident large adjustments to a station that do not appear to make sense with respect to the temperatures local to that station. You claim this is ironclad evidence of wrongdoing.
The methods document that you quoted answers that charge. It recognizes that the homogenization methods may apply large adjustments to single stations, that do not make sense with respect to temperatures local to that station. The methods document in fact recommends that unadjusted data be used for analyzing single stations for this reason.
The assertion is that the homogenized data are more reliable when used to analyze long term trends at the region scale or larger. Support is given for that assertion, in the form of a cited paper. The further assertion is that the homogenization methods have small effect on globally averaged results. Support is given for that assertion, in the form of a cited paper.
Your claims regarding the Darwin adjustments are responded to, in the paper you read prior to making the claims.
If you have legitimate issues with Darwin or any other site in GHCN, the fact that you have found a site with large adjustments that do not track well with local temps is not among them. That circumstance is predicted in the methods. A well-supported response to you would be ‘Duh. We told you that.’
Moving forward, potentially legitimate lines of attack would include:
1) Refuting the assertion that the homogenized data are valid for long term, region scale or larger trend analyses.
2) Refuting the assertion that the homogenization method has only minor effect on globally averaged temperature trends.
Above, Basil posted a link to a NOAA chart that plots Adjusted – Raw, and the trend of the adjustments is 0.33C (vs a total ‘global warming’ land temp trend of 1.2C over the same period). Not knowing which version of GHCN is used for the graph, I don’t know which homogenization methods the graph applies to, but 0.33C seems significant even if it isn’t earth-shattering.
More importantly, the real metric of interest would not be Adjusted – Raw, but Properly adjusted – Improperly adjusted, if such obtains. If you can prove that the homogenization methods are illegitimate for long term global temperature trends (see #1), or if the methods are OK but have not been applied per spec, and if the resulting error is of significant magnitude, you’ve got something. You don’t have any of that yet.
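[To make the arithmetic concrete: the “trend of the adjustments” referred to above is just the least-squares slope of (adjusted – raw) over time. A minimal Python sketch, using made-up stand-in series rather than the actual NOAA data:

import numpy as np

years = np.arange(1900, 2001)
rng = np.random.default_rng(0)
raw = rng.normal(0.0, 0.2, years.size)        # stand-in raw anomalies
adjusted = raw + 0.0033 * (years - years[0])  # stand-in adjustment drift

slope = np.polyfit(years, adjusted - raw, 1)[0]
print("adjustment trend: %.2f C per century" % (slope * 100))

Run against the real GHCN raw and adjusted series, the same two lines of arithmetic would reproduce – or refute – the 0.33C figure.]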
“I know that huge adjustments are sometimes made to individual stations.”
Do you also understand that even if those huge adjustments don’t track local temperatures at some stations, the homogenized data are held to be valid for the purpose to which they are being put? Find all the large, weird adjustments you want. You don’t have anything unless you prove wrong the research that says they don’t matter in the aggregate.
“I’ve looked at them. I’ve looked at a lot of stations. The adjustments to Darwin Zero are in a class all their own.”
Claiming to have found a rare outlier does not strengthen your position.
“And yes, that is possible, it may all just be innocent fun and perfectly scientifically valid. ”
As of this time, you do not have reason to believe otherwise, let alone point fingers and claim criminality.
“And if someone steps up to the plate and lists why those adjustments were made, and the scientific reasons for each one, I’ll look like a huge fool. Still waiting …”
One wonders what exactly you are waiting on. You have the raw data. The homogenization methods have been provided to you, along with a bibliography of documents that provide great detail. You quote from them.
You need to read them. If you do, one of the first things that you are likely to pick up on is that (outside of the US) GHCN2 does not apply adjustments of the sort that your ‘show me the scientific reason for each one’ question assumes.
Stop ‘waiting’. Get to work.

Roger Knights
December 9, 2009 11:52 pm

Neo wrote:
“In climate science, we have a bunch of seemingly half drunken academics who live off the government dole while they concoct ridiculous schemes to prove something that it seems has been predetermined to be true, no matter the actual empirical data. The only group of guys trying to test their schemes are underfunded or doing work on their own time pro bono.
This process is obviously corrupt. It was never meant to provide the truth. If it was, the government research community would also have a fully funded “counter group” to try to prove that “Anthropogenic Global Warming” doesn’t exist, has little impact or at least can be easily mitigated and therefore save billions, if not trillions, of dollars/Euros/pounds on trying to prevent a non sequitur.
The fact that there is no “counter group” immediately brings into question the purpose of the activity and whether it is meant to be part of that “waste, fraud and abuse” that so often infiltrates all vestiges of government.”

Henry Bauer, who believes that the currently embedded practices of modern, bureaucratic science have corrupted it (the CAGW consensus being a prime example IMO), has suggested that 10% or so of funding needs to go to contrarian viewpoints, that there should be a place at the table for contrarians (in every field), and that there should be “science courts” where both sides can argue their case in matters where established science has shut out or shouted down outsiders. You can find more here:
“Science in the 21st Century: Knowledge Monopolies and Research Cartels”
By HENRY H. BAUER
Professor Emeritus of Chemistry & Science Studies
Dean Emeritus of Arts & Sciences
Virginia Polytechnic Institute & State University
Journal of Scientific Exploration, Vol. 18, No. 4, pp. 643–660, 2004
http://henryhbauer.homestead.com/21stCenturyScience.pdf

mitchell porter
December 10, 2009 12:21 am

We have to get to the bottom of this “Yambagate” affair.

Alan P
December 10, 2009 1:34 am

So, you guys all know better than thousands of actual scientists? Global warming is actually not happening?
So…
Why are all those glaciers melting?
Why have the sea levels risen?
What has happened to the billions of tons of CO2 that HAVE been pumped into the atmosphere by human activities?
Would you dispute the science that shows CO2 has a warming effect in the atmosphere?
Would you dispute that there is now significantly more CO2 in the atmosphere than prior to the industrial revolution?
So… according to you [snip] types, somehow, maybe by magic, the CO2 that humans put into the atmosphere doesn’t add any additional warming.
Well, that’s okay then. Let’s all just assume everything is ok, and not take any action.
Brilliant. I feel much better with you geniuses around.

Morgan T
December 10, 2009 3:16 am

Alan P
Glaciers: Some are retreating, some are growing, some are not moving at all, but in general we know that glaciers have been retreating since around 1870 (before that they were growing due to the huge impact from Krakatau).
Sea levels have been rising for a very long time, there is nothing new in that, but since 2006 there is no change.
The CO2 will stay for a while in the atmosphere; for how long depends on who you ask.
There is more CO2 in the atmosphere now than in pre-industrial times, no one denies that, but at this level (380 ppm) an increase of CO2 makes a very limited impact on the greenhouse effect (do not even try to dispute that, it has been verified the correct scientific way)

carrot eater
December 10, 2009 5:17 am

Willis,
Further, in Fig 5, you plot the various records that come up for Darwin Airport in the GISS page, looking at raw data.
Simply looking at them, it looks to me as though in the periods of overlap those aren’t independent measurements at all, but duplicates. Did you think to ask somebody to clarify that first, before drawing conclusions like: “Why should we adjust those, they all show exactly the same thing.”
Meanwhile, in looking for neighboring stations, I don’t think we need to limit ourselves to ones that go back to 1941; we can still learn something from ones that start after that.
I find it interesting that the GISS homogenised set doesn’t pick up until 1963. If nothing else, we might learn something here about the difference between GISS and GHCN’s methods for homogenisation.
Finally, I’d still like to see you repeat the analysis with an adjustment for the station move in 1941; not doing so seems really questionable.

David
December 10, 2009 5:46 am

I can’t believe this. I can’t believe that the science has been so corrupted.
I wish the ABC would finally do their job and begin reporting on serious issues like the global warming alarmist fraud which is present and persistent in the scientific community at the moment.
Anyone want to start a petition?

Neo
December 10, 2009 6:12 am

Alan P
So, you guys all know better than thousands of actual scientists?
Some of us are actual scientists and engineers with decades of experience, so don’t give us the PR line about the “””SCIENTISTS”””
The questions raised by the “skeptics” rarely are … does “climate change”, “climate warming” or “climate cooling” exist ? … they all do, from time to time.
The real questions to be asked are: how great are the changes ? … are they within normal variability ? … does man’s contribution really change things that much ? … why isn’t the IPCC including the effects of clouds in their models ? (it’s hard) … are clouds affected by cosmic rays ? (possibly yes) … why is solar variability, including the solar effects on cosmic rays, downplayed by the IPCC ? … can we mitigate it cheaper ? … is it worth the price ? … can we live with it ?
Some of these and other questions have difficult answers, so let’s start now to answer them. The science is not “settled.” Anyone who tells you it is “settled” is a fool, idiot or a politician looking to perform a “wallet-ectomy” on you.
Every problem does not demand a solution. For instance, there is a chance that the Earth could be hit by a massive meteor that could end civilization in minutes, do you see trillions of dollars being spent on a solution to that problem ? No, because it will cost too much given the likelihood. A large solar flare could fry the Earth, is there a solution to that ? No, we will all fry if that happens.
Is it worth $20 or $30 trillion to possibly reduce the global temperature by 0.1C when it might naturally go up by 1.0C or more ? … or might not go up at all ? Could that money be better spent mitigating the result of the difference, or the whole thing ? Maybe an alternative would leave some money that could go to end suffering and reduce world hunger. Now there’s a happy thought.

Slartibartfast
December 10, 2009 6:34 am

Global warming is actually not happening?

Who’s saying that, besides you?

Brian D Finch
December 10, 2009 6:39 am

Willis, here is what seems to be another smoking gun. There used to be two stations at Halls Creek. The first, at Old Halls Creek [Latitude: 18.25° S; Longitude: 127.78° E; Elevation: 360 m], operated until the 20th May 1999. Yet the BOM only graphs the data up until 1951 – thereby jettisoning nearly half a century of data. Nevertheless, the data they use shows a mean maximum temperature flatlining at approx 33.4 degrees C. This may be seen at: http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_display_type=dataGraph&p_stn_num=002011&p_nccObsCode=36&p_month=13
The second station, at (New) Halls Creek Airport [Latitude: 18.23° S; Longitude: 127.66° E; Elevation: 422 m] opened in 1944 and is presumably running today. The data produced shows a mean maximum temperature flatlining at approx 33.7 degrees C. This may be seen at: http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_display_type=dataGraph&p_stn_num=002012&p_nccObsCode=36&p_month=13
Now, here we have data from two stations separated by about 15 kilometres, with a 62-metre difference in elevation. That the higher of the two shows the higher temperature may have something to do with its site (airports tend to have lots of lovely heat-absorbing tarmac/asphalt). Nevertheless, they both show flatlining annual maximum mean temperatures. One would therefore expect a homogenised graph of the data from the two to reflect this – ie: to flatline at about 33.5-33.6 degrees C.
However, this is precisely what we do not find. Instead, the data from the two have been crudely combined to produce an apparent century long annual mean maximum temperature rise from about 32.2 degrees C in 1900 to about 33.8 degrees C in 2009. This may be seen at: http://reg.bom.gov.au/cgi-bin/climate/hqsites/site_data.cgi?variable=maxT&area=aus&station=002012&dtype=raw&period=annual&ave_yr=T
As the late Bernard Levin might have said: ‘Not only is this nonsense, it is nonsense on stilts.’

Basil
Editor
December 10, 2009 6:42 am

JJ (23:13:30) :
I’m generally in agreement with your take on this. I do not know if Darwin is indicative of a larger problem or not. I do think it indicates the need to check further. With the release of a “subset” of the HadCRUT3 data, I downloaded it, and looked for a station near me. That ended up being Nashville, TN. I then pulled the GISS data, to compare it. In recent years, the data are exactly the same. But at some point in the past, the GISS becomes consistently 1 degree cooler. I’ve been too busy to nail it down — hopefully this weekend — but it has me suspicious of how GISS adjusted its data so that max US temps were no longer in the 1930’s, so I’m wondering if there is a pattern here.
One thing that has occurred to me, and which I may write up as a post for Anthony’s consideration, is that all these “step function” adjustments we see, which have the potential to seriously distort estimated trends, are much less of an issue when the data are analyzed as first (or seasonal) differences. And the documentation for the homogeneity adjustments says — if I’ve read it correctly — that they are based on correlating differences with “nearby” stations, which strikes me as the correct way to do it — if we’re going to do it at all, which is another matter entirely. But whatever is gained from using differences seems to be lost in translating back to undifferenced data.
“Above, Basil posted a link to a NOAA chart that plots Adjusted – Raw, and the trend of the adjustments is 0.33C (vs a total ‘global warming’ land temp trend of 1.2C over the same period). Not knowing which version of GHCN is used for the graph, I don’t know which homogenization methods the graph applies to, but 0.33C seems significant even if it isn’t earth-shattering.”
Actually, most of the adjustment comes after 1950, so it is a relatively more significant part of the “warming” claimed for the second half of the 20th century.

Ed
December 10, 2009 6:55 am

The adjustments that seem to have been applied do not look like they were made by a person. They look like they were made by an algorithm running through the data. It appears that this is what they did – they took the raw data and “homogenized” it. Assuming that they never went back and checked on their homogenization software, is it possible that that is where the fault lies? Could some long-ago programmer have created a routine to work with the data, and could that program have wound up having a runaway warming effect because of an error? That doesn’t seem unreasonable to me, and would actually be sort of an exculpatory explanation for the scientists involved. If a scientist assumes that the homogenized data is the gold standard, and that process itself introduces the warming, then unless that scientist checks his/her premises the data would seem to conclusively show a trend.
Really, the only way to confirm this theory would be to do a lot of plots of data like this and see if this kind of adjustment is consistent. I’d propose an effort, similar to the “How (not) to measure temperature” series, for people to go through and do this exercise with other temperature sets. I, for one, would be interested in doing this, and I suspect you could create a whole “Army of Davids” to plot a ton of these quickly. My only problem is I don’t know where the data is or how to get it into a format that could be analyzed (if Excel isn’t the software of choice for it). If you could do a post on how to get this data and how to do the analysis like what you’ve done here, I think it could be huge. If people checked 100 miscellaneous stations and they came up with both up and down adjustments that overall balanced out, then we could maybe rule out the idea that it was the adjustment scheme that has created this problem. But, if all or most of them show a significant warming adjustment, then we’d know something was up. If you can do a post on how to do this kind of work and then set people to it, I’ll bet you could have a bunch of them done within a day or two.
This post has already been very influential. It could be much more so if it turns out that this kind of adjustment is common.
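[A minimal sketch of the batch tally Ed proposes above – illustrative only. It assumes, hypothetically, that the raw and adjusted records have already been parsed into dictionaries mapping station id to a pair of (sorted years, temps) arrays; the GHCN file parsing itself is omitted:

import numpy as np

def adjustment_slope(years_raw, raw, years_adj, adj):
    # least-squares slope (C/yr) of adjusted-minus-raw over the common years
    common = np.intersect1d(years_raw, years_adj)
    diff = adj[np.isin(years_adj, common)] - raw[np.isin(years_raw, common)]
    return np.polyfit(common, diff, 1)[0]

def tally(raw_db, adj_db):
    slopes = [adjustment_slope(*raw_db[sid], *adj_db[sid])
              for sid in raw_db if sid in adj_db]
    up = sum(s > 0 for s in slopes)
    print("%d of %d stations adjusted toward warming" % (up, len(slopes)))
    print("mean adjustment trend: %.2f C/century" % (100 * np.mean(slopes)))

If the up and down counts roughly balance, the adjustment scheme is off the hook; a strong skew would be exactly the pattern Ed wants checked.]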

hillbilly
December 10, 2009 7:31 am

I believe the Earth’s surface is approximately 29% land mass and 71% covered by water. In the Southern hemisphere, water coverage rises to about 85%. Given the small area that can therefore be covered by ground stations, plus the many factors which can and do lead to errors, as a lay person I wonder why UNIPCC scientists have chosen to use the data so gathered as the main basis for their climate modelling and predictions. Sophisticated satellite technology used for temperature measuring has been rigorously tested and would appear to be a far more accurate and less error-prone way to give a truer picture of global mean temperature. Perhaps the following comments of John Daly are just as relevant today as they were when first published. Quote:-
“A further problem with the statistical processing is that neither GISS nor CRU can inspect the siting or maintenance of the thousands of stations they include in their data set. Thus even stations which might reasonably be assumed to be `clean’ like Low Head Lighthouse, may actually conceal site-specific errors which are known only to people who are local to the station.
As to station selection, not all stations in the world are included in the global data sets. This raises the question as to what criteria are used when selecting stations when the researchers at GISS and CRU have little local knowledge about the stations themselves. Indeed, they have shown no hesitation in accepting big-city stations into their data sets, knowing full well that urbanisation will be a wild card in the data.
The Australian Bureau of Meteorology (BoM) did attempt a station-by-station analysis of error-producing factors for Australian stations, including urbanisation, and offered an adjusted dataset of all Australian stations. The corrections included Low Head Lighthouse, whose record was `cooled’ and brought more into line with nearby Launceston Airport. However, GISS and CRU are still using the original dataset, not the modified one offered by the BoM.
Station degradation and closures
Since about 1980, there have been numerous closures of stations across the world as governments have sought to cut expenditure in public services. The loss of stations has been particularly significant in the southern hemisphere where the station density was already thin to begin with. The adoption of the 100-station `climate reference network’ to cover the vast Australian continent suggests a further downgrading of stations not included in that hundred.
This has an unintended consequence for the statistical calculation of `global mean temperature’. In each 5°x5° grid-box, any thinning out of the number of stations over time will result in a smaller mix of stations in the 1980s-1990s than was the case in previous decades, with a consequent shift in the mean temperature for each grid-box. This shift could theoretically result in a warming creep in some sectors and a cooling creep in others, not caused by climate, but caused by a shrinking station integration base. Another response to these closures is to accept stations into the database which had previously been excluded. This could result in sub-standard stations contaminating the aggregate record even further.
The end result cannot be statistically neutral because the majority of the stations being closed are precisely those stations which GISS and CRU designate as `rural’. These are the very stations which have the least warming, or no warming at all, and their closure during recent decades leaves that entire period to the tender mercies of the urban stations – i.e. accelerated warming from urbanisation. It is little wonder that the 1980s and 1990s are perceived to be warmer than previous decades – the collected data is warmer. But was the climate itself warmer? The surviving rural stations would suggest not.
We return to the central question. Is the surface record wrong in respect of both the amount of warming reported during the 1920s and in respect of the disputed warming trend it reports since 1979? In the latter case, the surface record is contradicted by both the satellite MSU record and the radio sonde record.
This is where individual station records can prove useful. Such records represent real temperatures recorded at real places one can find on a map. As such they are not the product of esoteric statistical processing or computer manipulation, and each can be assessed individually.
Some critics will dismiss individual station records as merely `anomalous’ (in which case most of the non-urban stations would have to be dismissed on those grounds), but when one station acquires an importance far beyond its own little record, no effort is spared to discredit it. This was the fate of Cloncurry, Queensland, Australia, which holds the honour of having recorded the hottest temperature ever measured in Australia, a continent known for its hot temperatures. The record was 53.1°C set, not in the `warm’ 1990s, but in 1889. It was a clear target for revisionism, for how can a skeptical public be convinced of `global warming’ when Cloncurry holds such a century-old record? The attack was made by Blair Trewin of the School of Earth Sciences at the University of Melbourne, with ample assistance from the whole meteorological establishment. And all this effort and expense was deployed to discredit one temperature reading on one hot day at one outback station 111 years ago. Stations do matter.
In the Appendix, there are numerous station records from all over the world, most of them from completely rural sites, some of them scientifically supervised. One telltale sign of any good record is when the data extends back many decades with no breaks. Where the record is unbroken, it indicates better than anything else that the people collecting the data take their task seriously, a good reason to also have confidence in their maintenance and adherence to proper procedures. Where the record is persistently broken, such as Mys Smidta and many other Russian and former Soviet stations, there is no reason to have any confidence in the fragmentary data they do return.
However, this is not the way GISS, CRU, or the IPCC view them. In spite of the station closures, and the fragmentation of so much Russian and former Soviet republic data, the surface record continues to be accepted uncritically in preference to the well validated satellite and sonde data. Indeed, in the latest drafts of the IPCC `Third Assessment Report’ , the surface record is taken as a foundation assumption to underpin all the predictions about future climate change. To admit the surface record as being seriously flawed would unravel the integrity of the entire report, and indeed unravel the foundations of the `global warming’ theory itself. ” end quote
A suggestion: Could one of the many knowledgeable people that have posted on this site access the raw satellite data from 1979 to the present and see how it stacks up with what UNIPCC has been putting out? Perhaps it’s a simple thought from a simple man, but wouldn’t that give us a good idea of the true situation?

JJ
December 10, 2009 7:33 am

Alan,
“So, you guys all know better than thousands of actual scientists?”
There are not thousands of scientists who have published evidence that confirms Catastrophic Anthropogenic Global Warming. There are only a handful that have even claimed that. And yes, we have demonstrated in some instances that our work is in fact better than theirs, i.e. that their work was in fact wrong. And a significant number of that handful have been shown to be doing things that are decidedly unscientific, i.e. they are not ‘actual scientists’.
And not that credentials matter – despite what you may have heard, science is about who has the right answer, not who has the right diploma – but some of us are actual scientists.
Thousands of us, in fact. Not that numbers matter … despite what you may have heard, science is not a popularity contest.
“Global warming is actually not happening?”
Depends on what you mean by that term. One of the tricks that the alarmists use, and they have used it very successfully on you, is to switch back and forth between two or more definitions of that term. One of those definitions, there may be some evidence for. The scary definition of that term, the one that people wave around when they want to make politics go their way, is not sufficiently supported.
The scary Catastrophic Anthropogenic Global Warming theory has several components:
1) There is a recent (last 100 yrs) rise in globally averaged temperatures.
2) The temperature to which the globe has risen is unprecedented in human history.
3) The temperature rise is caused by CO2 emitted by human activity.
4) The temperature rise can be reversed by changes in human activity.
5) The temperature rise will bring net catastrophic effects to humans, such that we must do anything we can to stop it, provided that we can (see #4).
That is ‘global warming’. All of those points are necessary to ‘global warming’. None of that has been proved.
Only point #1 even comes close. I would argue that the temperature measuring networks have insufficient quality, spatial distribution, and temporal coverage to prove that claim. (Incidentally, if you read the CRU emails, you will see I am not the only one who thinks that 🙂
That said, is the earth experiencing a recent (100 yr) warming trend? I wouldn’t be surprised. The longer term trend is the recovery from the Little Ice Age, and the much longer term trend is the climb out of the Big Honkin Ice Age. Given those trends, and the fact that climate is a dynamic system, I wouldn’t be surprised if we have seen some recent warming. So what?
Note that the examples that you regurgitate from the alarmist talking points are all anecdotal (not rigorous proof of a global effect) and all reference #1. Whatever evidence there is for the global warming defined by #1, it is bait and switch to pretend that evidence proves the ‘global warming’ that depends on #1, #2, #3, #4, and #5.
Point number two requires some very specific research – demonstrating that globally averaged temperatures are now higher than at any point in the history of civilization. That requires sufficient spatial and temporal coverage of temperature measurements that are of sufficient accuracy and reliability to make a determination of global temperature to within several tenths of a degree.
This research does not exist. Only a handful of people are even trying to do it, and the most prominent among them are, bluntly, crooks.
The third point has not been proven. It has only been demonstrated by shaky inference applied to computer models that are known to not accurately represent some physical processes important to climate (the modelers freely admit this), models that did not predict the current ‘lack of warming’ trend. This may be due in part to the fact that those models are parameterized with and calibrated to current temperature datasets (#1) and past temperature reconstructions (#2) that have not been proven, and that have had the fingers of some very non-scientific people mucking about in them.
Point four is based on those same models.
So is point five.
There are not ‘thousands of scientists’ working on each of those points, who have independently proven those points to be true. In many cases, there are only a handful of scientists working on any one of those points, they are not independent, and they have not yet proved the point.
And some of those handful of scientists are not ‘actual scientists’. Some of them are doing things that are absolutely counter to science. Hiding adverse results, corrupting peer review, bullying other scientists, refusing to provide methods and data for replication, evading the laws that require them to do so, conspiring to destroy evidence .. this is not science, and the people doing it are not scientists.

WAG
December 10, 2009 7:51 am

Actually, Lambert never made any ad hominems against Willis. Respond to the substance of Lambert’s critique:
http://scienceblogs.com/deltoid/2009/12/willis_eschenbach_caught_lying.php
Also, I thought that according to SurfaceStations.org, siting problems were what rendered the RAW data unreliable. But now you’re claiming that the RAW data is what we should rely on, and that adjusting for those errors is fraud?

BCK
December 10, 2009 7:56 am

I was skeptical of this skepticism! So I thought I would cross-check it (always healthy in scientific debate), so I went to the GISS data you referenced above.
I chose to look at Norwich for several reasons. Firstly, it is a favourite town of mine; secondly, it has 100 years of reliable data and, as far as I know, there is no need to adjust the data.
I had to slightly alter the data’s format so that I could plot it. The resulting graph is shown in the above link. I have fitted it with a sine function (the Earth orbits the sun fairly periodically, so it is fair to assume a sinusoidal fluctuation in temperature, though of course this isn’t accurate over long periods) plus a linear increase or decrease. This increase is 0.000659169 +/- 0.0001593 C/month in my model.
Now I am not a climate scientist and this is only one data point, but I thought I’d share it with everyone. Maybe we could each do this with our own favourite town and amass the results!
[I used the following on an iMac (or Cygwin for Windows, or any *NIX):
first, use the link in the original article to download the raw data.
next, use a text editor (Vi, Excel, Notepad) to remove the header line of month names and change the spaces to tabs (in Vim you can use :%s/ \+/\t/g ).
Then run:
cut -f2-13 station.txt | perl -pe 's/\t/\n/g' > station-temp.dat
(once the spaces have been converted, tab is cut's default delimiter, so no -d option is needed; the perl step puts one monthly value per line)
Then number the lines so there is an x column to plot against:
nl -ba station-temp.dat > station-temp-numbered.dat
Then use gnuplot to plot and fit the data.
first issue the command:
set datafile missing '999.9'
as 999.9 signifies missing data
then set the fit function
y(x) = A*sin(x*f + C) + E + D*x
There may be a lot of reasons why this is a bad function to use: cue climate scientists to shoot me down. The first part is just seasonal variation as we orbit the sun. A is an amplitude (probably based on where you are), f is the frequency of our orbit; using units of rad/month this is 2*pi/12 or 0.523598776. C just exists to offset the start point, which probably won’t be the spring equinox. E is the station’s mean temperature, which, like A, depends on where you are. D is the increase (or decrease) in temperature per month, if there is any at all.
You need to then plot this graph to see how close the fit resembles the data. You then need to adjust the values until you get a reasonable fit. Once you’ve got something that looks reasonable, gnuplot can adjust it to finer detail. However, if you don’t start with something reasonable, gnuplot goes mad and fails. e.g. for Norwich I used:
A = 16
C = 1.6
f = 0.53
D = 0.001
E = 5.02003
You can now issue these commands:
plot "station-temp-numbered.dat" using 1:2, y(x)
fit y(x) "station-temp-numbered.dat" using 1:2 via A, C, f, D, E
We are interested in D and the error on D. Plot again to check gnuplot did something reasonable. If it didn’t, you need better starting points. ]
p.s. I have no doubt that industrialisation is slowly destroying the planet and needs to be halted.
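[For anyone who would rather avoid gnuplot, here is a rough Python equivalent of BCK’s recipe above – an illustrative sketch only, not BCK’s actual script. It assumes a whitespace-separated "station.txt" with the year in the first column and twelve monthly means following, with 999.9 marking missing data:

import numpy as np
from scipy.optimize import curve_fit

def model(x, A, f, C, D, E):
    # seasonal sine plus a linear trend D (degrees C per month)
    return A * np.sin(f * x + C) + D * x + E

rows = np.loadtxt("station.txt")              # header row already removed
temps = rows[:, 1:13].ravel()                 # one value per month, in order
months = np.arange(temps.size, dtype=float)
ok = temps != 999.9                           # drop the missing-data flags

p0 = [16.0, 2 * np.pi / 12, 1.6, 0.001, 5.0]  # starting guesses, as above
popt, pcov = curve_fit(model, months[ok], temps[ok], p0=p0)
print("trend:", popt[3], "+/-", pcov[3, 3] ** 0.5, "C/month")

As with gnuplot, a poor p0 will send the optimiser off the rails; these values mirror BCK’s Norwich starting points.]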

wobble
December 10, 2009 7:58 am

JJ (23:13:30) :
””One wonders what exactly you are waiting on. You have the raw data. The homogenization methods have been provided to you, along with a bibliography of documents that provide great detail.
You need to read them. If you do, one of the first things that you are likely to pick up on is that (outside of the US) GHCN2 does not apply adjustments of the sort that your ’show me the scientific reason for each one’ question assumes.
Stop ‘waiting’. Get to work.””
JJ, can you explain what you mean by this?
Are you telling Willis that he needs to figure out the reasons for each of the adjustments to Darwin on his own by relying on homogenization procedures described in the bibliographical documents?

wobble
December 10, 2009 8:02 am

WAG (07:51:47) :
“”But now you’re claiming that the RAW data is what we should rely on, and that adjusting for those errors is fraud?””
I think Willis is comfortable with adjustments for errors. It seems as if he wants those who made the adjustments to describe exactly what errors they were adjusting for. There were half a dozen or so adjustments made, so they should be able to point to half a dozen or so errors.

wobble
December 10, 2009 8:09 am

WAG (07:51:47) :
“”Also, I thought that according to SurfaceStations.org, siting problems were what rendered the RAW data unreliable. But now you’re claiming that the RAW data is what we should rely on, and that adjusting for those errors is fraud?””
(This is my second try. I’m sorry if this ends up as a duplicate comment.)
WAG, I think Willis is willing to accept adjustments that make sense. He wants to know the reason that each of the half a dozen adjustments were made. If there were half a dozen adjustments, then the adjusters should be able to document half a dozen specific errors that required adjustments.

December 10, 2009 8:12 am

WAG (07:51:47)…
…once again displays his abysmal lack of critical thinking.
The siting issue is entirely different from the issue of CRU and Michael Mann hiding the raw data. Sites on airport runways, or next to air conditioner exhausts should not be “adjusted” by the Elmer Gantry CRU scientists, who refuse to disclose how they do their data massaging. Sites contaminated by manmade heat should be discarded completely.
WAG claims that “Lambert never made any ad hominems against Willis.” But Lambert says that Willis is “lying.” To any normal person, that is an ad hominem attack.
WAG and Lambert are typical examples of the ethics-challenged people who migrate to the alarmist camp. Since they fail to make a credible case, they always fall back on ad hominem attacks. Lacking the science, that’s all they’ve got.

Slartibartfast
December 10, 2009 8:24 am

“Actually, Lambert never made any ad hominems against Willis.”
For real? The title of his frakking post is

Willis Eschenbach caught lying about temperature trends

That’s not ad hominem, though? That bald, unsupported assertion that Willis is knowingly engaging in deception, that’s not ad hominem?
You guys just can’t recognize it when you see it. Because…well, you’re soaking in it. Really…”Climate Fraudit”? This is middle-school ad hom.

Mutton Dressed as Lamb
December 10, 2009 8:27 am

And all that Guff about Meat and Carbon is also a load of “Bull”.
http://muttondressedaslamb.wordpress.com/2009/07/16/the-carbon-meat-myth/

Brian D Finch
December 10, 2009 8:32 am

Re my previous post (06:39:05) : Why homogenise anything at all? Do the unused 48 years of records from Old Halls Creek, from 1952 to 1999, continue to flatline at 33.4 degrees C? If so, whence comes the justification for a rise of 1.6 degrees C over the past century?

December 10, 2009 8:35 am

My complaint would be that a 100-year sampling of Earth’s monitored weather is a moronically idealistic basis for declaring or predicting any warming or cooling trend of this planet!
The earth’s billions of years of turmoil and evolution and weather are far too extreme to evaluate by using only a 100-year window! And I find not one analysis or speculation that gives our Sun any consideration.

jrshipley
December 10, 2009 8:36 am

“A change in the type of thermometer shelter used at many Australian observation sites in the early 20th century resulted in a sudden drop in recorded temperatures which is entirely spurious. It is for this reason that these early data are currently not used for monitoring climate change. Other common changes at Australian sites over time include location moves, construction of buildings or growth of vegetation around the observation site and, more recently, the introduction of Automatic Weather Stations.”
http://www.bom.gov.au/climate/change/datasets/datasets.shtml
A number of commentators have pointed to this quote, which is absent from the top post. I’d like to make the meta point about what this tells us about the denier methodology. There are published papers on the homogenization methods that might have been addressed, possibly critically, in scientific argument by a skeptic. A denier, on the other hand, just puts the words “adjustment” and “homogenization” in scare quotes and places them in a speculative and conspiratorial context.
Witness this subtle rhetorical use of scare quotes in place of actual arguments. From the top post: “The answer is, these graphs all use the raw GHCN data. But the IPCC uses the “adjusted” data.” This sort of thing shows clearly the line between skeptic and denier. The skeptic would look in the literature for the explanation and meaning of homogenization. The denier leaps to conspiratorial conclusions.
[REPLY – When the NCDC, et al., actually releases said methods when asked repeatedly (rather than stonewalling and obfuscating), we could actually look into said “meaning”. (At least) until then, scare quotes apply. ~ Evan]

Roger Knights
December 10, 2009 8:46 am

“One of the tricks that the alarmists use, and they have used it very successfully on you, is to switch back and forth between two or more definitions of that term.”
There’s a technical term for that trick: equivocation.

Mike
December 10, 2009 8:53 am

Willis, I’ve been reading some of the criticisms of your article. And as a result, I am circling back to one of my original assertions regarding the assessment of the validity of the ground surface records which is that the ground station data should be limited to raw data only and that “adjustments” are not permitted because they are arbitrary and prone to error (errors that would exceed the global signal). Therefore, in order to analyze the ground record it would be necessary to break up a station’s record into independent data sets where each data set is bounded by the previous and next known event that occurred in the station’s history (destruction, relocation, installation of screens, introduction of nearby structures etc). Then, each individual set would be tested for increase or decrease and the entire differential set would have averaging applied to determine a global up or down tick. However, this process is a monumental task that BOM did not carry out. They, and all other groups (NOAA etc), simply applied their “adjustments” in order to provide a continuous report for each station. These “adjustments” are OK for the original purpose of the stations, which was to produce a temperature record that meteorologists could then use to predict weather in regional or even local micro climate areas surrounding groups of stations. But such a technique is hopelessly inadequate for predicting a global temperature trend. We (skeptics and supporters of AGW alike) cannot produce a reliable answer to the question until we have access to ALL of the original and completely unadjusted data and access to ALL of the station historical records. When do we get organized to get this project underway?

WAG
December 10, 2009 8:56 am

Wobble – the Australian scientists did detail their reasons for adjusting the data, as Lambert pointed out. Here’s what the scientists say:
“A change in the type of thermometer shelter used at many Australian observation sites in the early 20th century resulted in a sudden drop in recorded temperatures which is entirely spurious. It is for this reason that these early data are currently not used for monitoring climate change. Other common changes at Australian sites over time include location moves, construction of buildings or growth of vegetation around the observation site and, more recently, the introduction of Automatic Weather Stations.
“The impacts of these changes on the data are often comparable in size to real climate variations, so they need to be removed before long-term trends are investigated. Procedures to identify and adjust for non-climatic changes in historical climate data generally involve a combination of:
* investigating historical information (metadata) about the observation site,
* using statistical tests to compare records from nearby locations, and
* using comparison data recorded simultaneously at old and new locations, or with old and new instrument types.”
Smokey – no, accusations of “lying” do not constitute an ad hominem. If someone says something that is not true, that is a lie. If you claim that adjusting temperature data is baseless, “blatantly bogus,” or “indisputable evidence that the “homogenized” data has been changed to fit someone’s preconceptions about whether the earth is warming,” that is factually incorrect, and therefore a lie. Willis didn’t even make an effort to examine the reasons for adjusting the data.

wobble
December 10, 2009 8:57 am

jrshipley (08:36:36) :
““A change in the type of thermometer shelter used at many Australian observation sites in the early 20th century resulted in a sudden drop in recorded temperatures which is entirely spurious…””
And if the change in the thermometer shelter at Darwin occurred many years prior to 1941, then what was the reason for the SERIES of adjustments in the 1940s and beyond?

Sock Puppet of the Great Satan
December 10, 2009 9:04 am

‘When the NCDC, et al., actually releases said methods when asked repeatedly (rather than stonewalling and obfuscating)’
Evan, you’re the one obfuscating. It’s been pointed out to you that the 1940s jump coincides with a station move. You were aware that there were five stations in the temperature series.
If you’d wanted to do a competent analysis, you’d have, oh, maybe colored the series from each station differently. Then, you could see the differences in each series. Then, you could have seen the changes in the 1940s as being due to *one of the old weather stations being bombed by the Japanese* leading to a new station being built in a different location, rather than a grand conspiracy of Evul Climate Scientists. Unless the Evul Climate Scientists were helping the Japanese during WW2. How do the temperature trends look before and after 1941, given that the step change in the early 1940s is from a weather station relocation?
However, you’ve pleased your fanboys. How nice for you.
[REPLY Let’s keep it simple. How about just a nice cup of tea and the code? ~ Evan]

Mutton Dressed as Lamb
December 10, 2009 9:11 am

I can email a peer-reviewed research report on meat and greenhouse gases to those interested, published in the journal of the Royal Agricultural Society of England.
http://muttondressedaslamb.wordpress.com/2009/12/10/peer-reviewed-report-on-livestock-and-green-house-gases/

robotech master
December 10, 2009 9:15 am

I thought I’d cross post this for a laugh.
“David, Willis is lying by presenting the raw temperature graph, which mixes together several records, as the true temperature record of Darwin. He doesn’t get the benefit of the doubt because of his previous dishonesty about temperatures.
Posted by: Tim Lambert | December 9, 2009 11:33 PM”
Now I could be wrong, but aren’t the IPCC reports, and much of Mann’s, CRU’s, and other global warming data, based on “mixing together several records” and then “adjusting them”…
Tree rings, proxy data, adjusted data… all mixed and matched at different time scales and such… so is Tim Lambert now claiming the IPCC reports are lies?

carrot eater
December 10, 2009 9:39 am

[REPLY – When the NCDC, et al., actually releases said methods when asked repeatedly (rather than stonewalling and obfuscating), we could actually look into said “meaning”. (At least) until then, scare quotes apply. ~ Evan]
The methods are described in some detail in papers published by the NCDC. Willis could try to follow those procedures. Perhaps not all relevant information about that particular site and the data is available online – time of observation changes, and the like? Then he could politely ask either the Australian BoM or the NCDC where he could find more, instead of going to public accusations of fraud. Not feeling like writing your own code from scratch? I don’t know about NOAA, but GISS has all of theirs available for download; one could look at that.

December 10, 2009 9:40 am

WAG (08:56:36):
“Smokey – no, accusations of ‘lying’ do not constitute an ad hominem. If someone says something that is not true, that is a lie.”
False conclusion. WAG demonstrates his abysmal lack of understanding and ethics by imputing motive to what could just as well be a mistake [or not], or simply a difference of opinion.
If WAG can prove a deliberately dishonest motive for anything Willis said, then I will concede that Willis was lying. But given the fact that Willis repeatedly concedes whenever someone points out an error in his analysis, it is apparent that unlike WAG, he is interested in finding the truth.
That makes clear that both WAG and Lambert are deliberately making their reprehensible ad hominem attacks. There is a reason that alarmists like WAG and Lambert make ad hominem accusations against those they disagree with: because they lack the facts to support their arguments. Their name calling doesn’t make them right, only despicable.

JJ
December 10, 2009 9:44 am

Wobble,
“Are you telling Willis that he needs to figure out the reasons for each of the adjustments to Darwin on his own by relying on homogenization procedures described in the bibliographical documents?”
More to the point, I am telling Willis that if he reads the homogenization methods, he is likely to find that the sort of adjustments his criticisms assume (discrete adjustments for documented issues such as station moves, instrument changes, etc) are not the type of adjustments made in GHCN2 for stations outside the US. Rather, GHCN2 homogenizes by first difference comparison to a reference series.
The homogenization methods are documented, and Willis has them. He also has raw data. This provides him several options for pursuing the argument legitimately.
He could attempt to exactly duplicate the homogenization for the Darwin site. If it works out, then he would have complete knowledge of the nature of the adjustments. If it doesn’t, then he may have proved that the published adjustment methodology wasn’t followed – that would be a smoking gun of sorts.
He could also perform his own homogenization adjustment of the raw data, using GHCN methods and a reference series of his choice that fits the GHCN methods and that he personally finds to be suitable. If choosing a different but otherwise suitable reference series results in dramatically different results, that would indict the adjustment methods.
He could also perform his own homogenization adjustment of the raw data, using his own methods. If the results are significantly different from GHCN’s with respect to a globally averaged long term trend, he would be on to something. He’d still have to demonstrate that his method was sufficient (or preferably, superior), but if he could, that would be huge.
He could also investigate the principle claims about the homogenization method: that it is sufficient for large area long term trend analysis, and that it results in minor effects on the global average long term trend. This would require plenty of reading and understanding of the methods documentation.
Instead, he seems to be content to wave his arms wildly and point to the fact that the homogenization method appears to produce enormous adjustments in some stations, and that those adjustments sometimes do not track well with the temps local to that station – both of which are effects documented and accounted for in the published methods.
This leaves him wrestling in the mud with the likes of Tim Lambert.

December 10, 2009 9:53 am

Well done. Very impressive work. However, by 1/2012 this will all be meaningless. Our system IS binary. Wormwood will emerge from Libra space to the south of our planetary plane. Governments will deny all former knowledge. A very close pass to Urth will leave the sky orange, the seas red and deposit many rocks. A turnaround path seven years later will destroy much. Yashua/Y’hovah will res-Q. Look up. The times of tribulation draw near…

jrshipley
December 10, 2009 10:02 am

“[REPLY – When the NCDC, et al., actually releases said methods when asked repeatedly (rather than stonewalling and obfuscating), we could actually look into said “meaning”. (At least) until then, scare quotes apply. ~ Evan]”
I never understand these calls for releasing things that are widely discussed in the literature. Lambert cites several papers. In any case, I’m still not seeing how anyone with an ounce of intellectual dignity can leave the explanation of homogenization out of a post on homogenized vs raw data. There’s no argument in favor of using inhomogeneous data and no substantive criticism of the published work on all of this, just conspiracy-mongering over adjustment of data at one station.
Again, this is a clear case of denialism as opposed to skepticism. The voraciousness of the effort by deniers to undermine, through conspiracy-mongering, the credibility of the evidence for anthropogenic change indicates to me the strength of that evidence and the desperation at the exhaustion of skeptical lines of argument.
[REPLY – It’s really quite simple. Until the algorithms are released for independent review it is impossible to replicate the findings or diagnose the procedure. “Widely discussing it in the literature” does NOT feed the scientific bulldog. Until methods are released, scare quotes are quite appropriate. As Col. Mandrake put it, “The code, Jim. That’s it. A nice cup of tea, and the code . . .” ~ Evan]

wobble
December 10, 2009 10:02 am

WAG (08:56:36) :
“”Wobble – the Australian scientists did detail their reasons for adjusting the data, as Lambert pointed out. Here’s what the scientists’ say:””
Yes. I know what they generically say.
Are you, here and now, claiming that each of their adjustments was definitively made as a result of an actual, and documented, event that they mention?
This is all Willis is asking. It shouldn’t be that difficult for you to understand this.

wobble
December 10, 2009 10:09 am

JJ (09:44:25) :
“”He could attempt to exactly duplicate the homogenization for the Darwin site. If it works out, then he would have complete knowledge of the nature of the adjustments. If it doesnt, then he may have proved that the published adjustment methodology wasnt followed – that would be a smoking gun of sorts.””
JJ, you’re wrong.
It’s not enough to have the raw data and the methodology which is supposed to be used. Willis would need to know the recorded dynamic events which resulted in the adjustment methodology being applied – this is what he’s asking for.
You seem to believe that a referee, who knows the rules of a game, is capable of determining whether or not penalties were properly applied during a game he did not see. The referee would also need to know the supposed reasons for the penalties. Would he not?

wobble
December 10, 2009 10:15 am

jrshipley (10:02:11) :
“”I never understand these calls for releasing things that are widely discussed in the literature.””
jrshipley, are you claiming that the specific reason for each adjustment at Darwin is discussed in the literature? If so, would you mind telling me where it’s discussed?

wobble
December 10, 2009 10:18 am

Sock Puppet of the Great Satan (09:04:40) :
“”Then, you could have seen the changes in the 1940s as being due to *one of old weather stations being bombed by the Japanese* leading to a new station being built in a different location rather than a grand conspiracy of Evul Climate Scientists.””
Are you claiming that ALL the changes in the 1940s are due to one weather station being moved?

jrshipley
December 10, 2009 10:29 am

“jrshipley, are you claiming that the specific reason for each adjustment at Darwin is discussed in the literature. If so, would you mind tell me where it’s discussed?”
I’m not the one making paranoid conspiratorial claims. I always find this sort of “gotchya” from deniers amusing. Why are you asking some random person in a blog thread to do your research for you? And how can you justify making the conspiratorial allegations in the top post before finding out the specific reasons? You’ve got the cart before the horse here, but I appreciate you helping to show how this entire post is based on paranoid speculation. Until you know the specific reasons, which you acknowledge not knowing, then you haven’t got much of a smoking gun here do you? Skeptics ask questions. Deniers presume conspiracies before even looking for answers. Which are you?

carrot eater
December 10, 2009 10:29 am

“It’s really quite simple. Until the algorithms are released for independent review it is impossible to replicate the findings or diagnose the procedure. “Widely discussing it in the literature” does NOT feed the scientific bulldog. Until methods are released, scare quotes are quite appropriate. As Col. Mandrake put it, “The code, Jim. That’s it. A nice cup of tea, and the code . . .””
I can only speak to my own experience, but if I want to reproduce or build upon an analysis described in the literature, I write my own code; I don’t go bugging other people for theirs. If the description in the literature is half-way decent, you’ll be able to do it. In this case, the publications by Peterson look pretty detailed to me; you should be able to have a try at it.
In any case, all GISS code is public; you can go see how their homogenisation code works, no?

December 10, 2009 10:35 am

Anyone can google your name and see that you have quite a reputation for fudging data to fit your theory, interesting given your accusations. Just what are your credentials and how are you more qualified to make the adjustments you see fit than the international scientific community? You’ve got quite a swanky blog page, you’re pandering well. I trust this is profitable for you, isn’t capitalism great?

Slartibartfast
December 10, 2009 10:42 am

Also, see: Well, Poisoning The

JJ
December 10, 2009 10:47 am

Wobble,
“JJ, you’re wrong.”
Nope.
Please pay attention to what I write. This is the third time this has been explained.
“It’s not enough to have the raw data and the methodology which is supposed to be used. Willis would need to know the recorded dynamic events which resulted in the adjustment methodology being applied – this is what he’s asking for.”
Wrong on both counts.
Once again, GHCN2 does not apply metadata-based adjustments to non-US stations. Outside the US, homogenization is by first-difference-based comparison to a reference series. The ‘records of dynamic events’ that you and he think you need likely do not exist, and at any rate do not apply. Further, it doesn’t appear that Willis has actually asked for anything, and he seems very resistant to the notion.
Willis has the raw data. He has the methods. He needs to understand the methods and apply them. I gave a few suggestions above as to potentially productive ways that could be done.
“You seem to believe that a referee, …”
Your analogy is inapplicable. Homogenization is not a ballgame. Stick to the facts.

Evan Jones
Editor
December 10, 2009 10:49 am

Strange that an appeal for actual methods (as opposed to a “description” of methods) should raise such objection.

wobble
December 10, 2009 10:53 am

jrshipley,
Let me ask you again.
Are you claiming that the specific reason for each adjustment at Darwin is discussed in the literature?
If you don’t know, then you can’t claim that the information Willis is looking for is publicly available. Therefore, you should stop claiming that it is.
If you are claiming that it’s publicly available, then please tell me where.

wobble
December 10, 2009 11:03 am

JJ,
Are you claiming that the GHCN2 homogenization can alter data at stations at which nothing has occurred to warrant such alterations – other than observed divergence from a reference series?

bill
December 10, 2009 11:12 am

Sjoerd Schreuder (09:56:33) :
bill (08:44:07) :
Figures unadjusted met office
http://img410.imageshack.us/img410/8996/ukspaghetti.jpg
Did I notice “De Bilt” in there? That’s a village in The Netherlands, and it’s clearly not in the UK. The Dutch met office KNMI is sited there.

Apologies, I didn’t deselect it for the plot (you are lucky Salekhard – Siberia – was not included!). But it seems to fit in with the rest.

alkali
December 10, 2009 11:35 am

@wobble: “Are you claiming that the GHCN2 homogenization can alter data at stations at which nothing has occurred to warrant such alterations – other than observed divergence from a reference series?”
I think this is mostly correct, except insofar as it could be read to suggest that there is just one reference series against which all weather stations are measured, which is _not_ the case. My understanding of what NOAA is doing is comparing the data for each station to the other nearby weather stations with which that station’s data is otherwise most highly correlated.
To provide a very bad example using completely made up numbers, assume the following weather stations show the following temperature readings:
A: 1,2,3,4,3,2,1
B: 3,4,5,6,5,4,3
C: 1,2,3,4,7,6,5
If you calculate the period-over-period differences in temperature readings:
A: +1,+1,+1,-1,-1,-1
B: +1,+1,+1,-1,-1,-1
C: +1,+1,+1,+3,-1,-1
C is highly correlated with A and B except for one outlier (the +3 rather than -1). If that difference is statistically significant, an adjustment will be applied to C.
Now, to get to what I take your central point to be: could this adjustment be applied even if there is no historical information available that might explain what might have happened to temperature readings at C between year 4 (when the reading was 4) and year 5 (when the reading was 7)? YES.
Why do that? Here’s my best understanding:
1) This statistical method can be mechanically applied to the data set without having researchers make subjective assessments of what impact particular historical changes at particular stations might have on data collection at those stations.
2) Outside of certain stations in the US, there is not good historical information about station conditions (“metadata”) that would enable such subjective assessments to be made.
3) In some cases, historical information wouldn’t be helpful in reconstructing the reasons for oddities in the data (e.g., historical operator and transcription errors).
4) The adjusted data is recommended by NOAA only for use in regional analysis. (If you are doing a global calculation, errors in the unadjusted data will hopefully mostly cancel each other out. If you are looking at one individual station’s data for some reason, you’re better off doing what you can with the metadata you have.)
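To make the toy example above concrete, here is a minimal Python sketch of the first-difference check. The cutoff and the neighbourhood averaging are illustrative stand-ins, not GHCN’s actual test, which uses a proper significance test as described in Peterson’s papers.

```python
# Minimal sketch of the first-difference check described above.
# The fixed cutoff is illustrative; the real GHCN test uses a
# significance test rather than a hard threshold.

def first_differences(series):
    """Period-over-period differences: FD[i] = T[i+1] - T[i]."""
    return [b - a for a, b in zip(series, series[1:])]

a = [1, 2, 3, 4, 3, 2, 1]
b = [3, 4, 5, 6, 5, 4, 3]
c = [1, 2, 3, 4, 7, 6, 5]

fd_a, fd_b, fd_c = map(first_differences, (a, b, c))

CUTOFF = 2.0  # illustrative threshold, not GHCN's actual criterion
for step, (da, db, dc) in enumerate(zip(fd_a, fd_b, fd_c), start=1):
    reference = (da + db) / 2.0          # neighbourhood first difference
    if abs(dc - reference) > CUTOFF:     # C departs from its neighbours
        print(f"possible discontinuity at C, step {step}: "
              f"C moved {dc:+.1f} vs reference {reference:+.1f}")
```

Run on the made-up numbers above, this flags only the step where C moves +3 while its neighbours move −1, which is the outlier alkali identifies.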

JJ
December 10, 2009 11:38 am

Wobble,
“Are you claiming that the GHCN2 homogenization can alter data at stations at which nothing has occurred to warrant such alterations – other than observed divergence from a reference series?”
Yes.
Now that you have grasped that part, please go back and read the rest of my posts on the subject, and endeavor to understand the other parts as well. Please do this before replying again …
The short version is that the GHCN2 methods document that Willis quotes from says:
You may see enormous adjustments applied to some stations. These adjustments may not track well with the temperatures local to that station. That is OK. While the sometimes goofy looking adjusted data should probably not be used for local analyses, we think it actually gives more reliable results when applied to long term global trend analysis, and here’s why (cites paper).
To which Willis artfully replies:
AHA!!! I have found enormous adjustments to this one station!!!!! And these adjustments do not track well with the temperatures local to that station!!!!!! You guys are all a bunch of crooks and liars!!!
To which Anthony adds:
Smoking Gun!!!!!!
Calm down folks.
There may be something up here, but you certainly haven’t found it yet. In fact, all you’ve done so far is confirm what they already told you.

December 10, 2009 11:41 am

Carbon dioxide cause or effect?
We should take climate change seriously, because such changes have created and destroyed huge empires in the history of mankind. However, it is a shame that billion-dollar decisions in Copenhagen may be based on tuned temperature data and wrong conclusions.
Climate activists believe that CO2 has been the main driver of climate change over the last million years. However, chemical calculations show that the driver is the temperature change of the oceans: warm seawater dissolves much less CO2 than cold seawater. See details at:
http://www.antti-roine.com/viewtopic.php?f=10&t=73
This means that the CO2 content of the atmosphere will automatically increase if the sea surface temperature increases for ANY reason. Most likely, carbon dioxide contributes to global warming, but it is hardly the primary reason for climate change.
CO2 and water vapor have emissivities and absorptivities of similar magnitude; however, the concentration of CO2 in the atmosphere (0.04%) is much lower than that of water vapor (1%). On this basis, the first assumption should be that the effect of water vapor on climate change is much larger than that of CO2.
If it is true that:
1. CRU adjusted and selected the data according to their mission,
2. The town heating effect has largely been neglected,
3. Original data has been deleted,
4. The climate warming cannot be seen in countryside weather station data, then real scientists should be a little bit worried 🙂
Climate models which correlate with the CRU data cannot validate the methods of CRU, because all these models have been calibrated and fitted to the CRU data. So it is no miracle that they fit the CRU results. Another problem is that they do not take into account the effect of the oceans.
Kyoto-type agreements transfer emissions and jobs to countries which do not care about environmental issues. They also channel emissions-trading funds into population growth and increased welfare, both of which increase CO2 emissions. New post-Kyoto agreements should channel the funds directly into the development of our own sustainable technology, especially solar technology.

gg
December 10, 2009 11:43 am

I don’t quite get the point of all this: you start the post correctly saying that weather station data need to be adjusted, then you take one station out of 7000 and show that it was, indeed, adjusted. How this is a smoking gun, I don’t understand.
I understand you are not willing or able to reproduce the exact same procedure that GHCN used, but what is the point? I mean, I can also cook pasta here at home, claim that I am cooking pasta the way you do, and then say “YIKES! You see! This pasta tastes like crap, ERGO JJ cannot cook”. This is beyond human logic, come on.

tallbloke
December 10, 2009 12:00 pm

More on Time of Observation from the warmist site I do battle on. Any comment here Willis?
“I had time to read a little bit about the Time of Observation Bias, and it’s fascinating. As I understand it, before November 1994 Australia was calculating its daily mean temperature as the average of the daily maximum and minimum, as USA presently does. I don’t know what studies were done on TOB effects in Australia, but the study by Karl et al (1986) for the USA showed that bias can be as high as 2C, and that the difference in bias, say between late afternoon and around sunrise, can be even greater. The average of 7 years of data showed significant areas of northern Texas, Oklahoma and southern Kansas have a change in TOB of 2.0C or more in March if time of observation changes from 1700 to 0700.
Before 1940 in the US, 75% of the cooperative network observers were making the observation in the evening, but by 1986 only 45% were doing this, and one of the most frequent observation times had become 0700. But apparently the need for adjustments of such magnitude in cases like this is “nonsense”.”

Ian Holton
December 10, 2009 12:12 pm

After viewing many Australian sites in the GISS records (and not knowing what is adjusted or not), I cannot see how anyone of sane mind can make any sense out of it at all…
The records are in the main disjointed, with gaps and short periods shown; the trends are different for adjoining stations; about 1/3 of the sites go up, 1/3 go down and 1/3 show random ups and downs with no clear trend at all.
In the main, the only sites showing any real significant upward trends are the large-city urban-warming BOM sites, and some of their nearby BOM airport stations do not show this upward trend at all (even though there are tarmac heat issues there as well).
This leaves one to believe strongly in urban warming distortions. And I note also that these sites are the only ones that have been updated to 2009.
IMO the surface record should be scrapped after viewing this mess; how anyone can make adjustments, fill in gaps, fit it all together and say anything meaningful about long-term temperature trends is beyond me! “An unbelievable mess!” (paraphrased, as the leaked-email programmer stated in his notes on Australian stations!)

carrot eater
December 10, 2009 12:13 pm

wobble (11:03:08) and also @JJ (11:38:10) :
“Are you claiming that the GHCN2 homogenization can alter data at stations at which nothing has occurred to warrant such alterations – other than observed divergence from a reference series?”
At this point, I must rather disagree with JJ. The GHCN uses statistical methods to produce corrections. These are described in the literature in great detail, and the basic method should be reproducible. But the corrections are not imaginary nor unrelated to reality; they should correspond to something that actually happened at that station. Willis quotes the NCDC above on what sorts of changes might have happened. If you had access to the historical data about the station (the metadata), you would likely see the reason for those corrections.
In fact, the Australian BoM does have access to that historical metadata; it’s their station, after all. They perform their own homogenisation, independent of GHCN. They use both statistical methods and the metadata (knowledge of site changes, etc).
There is a nice example of this in Torok (1996), which is cited by the Australian BoM. They use the stats to sniff out changes at some station, and as it turns out in that example, all of the corrections but one actually correspond to some known change at the site: a station move, a new screen, some problem with the screen, and so on. One correction in 1890 was not explained by the historical notes.

December 10, 2009 12:15 pm

tallbloke
I haven’t accessed the ‘warmist site’ you refer to but have these people never ever heard of maximum and minimum thermometers (much less how to reset them)?
While most determinations of maximum and minimum daily temperatures are now done electronically, the maximum-minimum thermometer, also known as Six’s thermometer, was invented by James Six in 1782.
This TOB stuff sounds like another amazing warmist ‘cover up’ to me.

Ken
December 10, 2009 12:44 pm

Here’s a nice data set for viewing.
http://www.bom.gov.au/climate/change/amtemp.shtml

wobble
December 10, 2009 12:47 pm

JJ,
I just honored your request and read through all your comments on this thread.
I certainly see your point, but the details of your comments don’t square with Willis’ claims.
He seems to claim that homogenization adjustments of the raw data are done on a case-by-case basis.
However, you seem to be claiming one of two things:
1) that these homogenization adjustments were automatically applied according to quantitative rules put in place without any analysis of Darwin’s actual temperature data; or
2) that these homogenization adjustments were made specifically to Darwin, independent of any pre-established quantitative rules, and that the qualitative reasons for the specific adjustments don’t necessarily rely on documented events.
If you are claiming #1, then I’d like to see the apparent disagreement with Willis hashed out.
If you are claiming #2, then we’re back to disputing the justifications for adjustments which may or may not be provided. However, I do agree that it’s better to merely question and leave any definitive accusations out of the post. The definitive accusations are a liability without serving much of a “simple message” purpose.

Mike
December 10, 2009 12:51 pm

It’s not the “need” for the adjustments that is “nonsense”; it is the WAY in which the adjustments are made that is unknown. By your own admission, it has been known since 1986 that up to 2C can be attributed to TOB, yet we are expected to believe that the “adjustments” are accurate? Meanwhile, outfits such as CRU refuse to release their data and refuse to proffer a rationale for their adjustments. And when we do get a glimpse of the information, via FOIA material released a bit earlier than they expected, we see enormous fudge factors and no explanation of how they arrived at these “adjusted” data sets, other than what amounts to “take our word for it, we’re the scientists and we know what we’re doing”. Is that science? And would you get on an airplane if that was the paradigm used in its design?

Lucy Hancock
December 10, 2009 12:58 pm

This is good work, terrific IMHO, and many of the comments too.

E.M.Smith
Editor
December 10, 2009 1:00 pm
wobble
December 10, 2009 1:01 pm

alkali (11:35:33) :
“”I think this is mostly correct, except insofar as it could be read to suggest that there is just one reference series against which all weather stations are measured, which is _not_ the case. My understanding of what NOAA is doing is comparing the data for each station to the other nearby weather stations with which that station’s data is otherwise most highly correlated.””
Yes, it seems as if this is the case. Have you taken a look at the documents which supposedly outline the algorithm? I’ll take a look, but I’ll be very surprised if subjective inputs (open to the questions that Willis is asking) are a requirement for replicating the adjustments to the raw data (making replication impossible).
So what we may be left with are claims that “the temperatures were all properly adjusted according to our published and universal algorithm”, despite the fact that the published algorithm allows significant subjective input.
We will see, and I am very willing to back away from any accusations until we answer these questions definitively.

wobble
December 10, 2009 1:04 pm

Sorry, above I meant, “I’ll be very surprised if subjective inputs … are NOT a requirement for replicating the adjustments…”

JJ
December 10, 2009 1:21 pm

Wobble,
“I just honored your request and read through all your comments on this thread.”
Thank you.
“I certainly see your point, but the details of your comments don’t square with Willis’ claims.”
They do, however, appear to square with the methods document that Willis quotes from 🙂
“However, you seem to be claiming one of two things:”
To be clear, my claims are only as to my understanding of the content of the methods document. Basil and alkali both give decent expositions of those methods above.
“1) that these homogenization adjustments were automatically applied according to quantitative rules put in place without any analysis of Darwin’s actual temperature data; ”
If I understand it right, that is probably the closest.
It appears that the GHCN2 homogenization method for non-US stations adjusts a station’s data based on a first-difference comparison of those data to a reference series. That is an analysis of the Darwin station data – it is a search for, and correction of, apparent discontinuities within the Darwin data, using a reference series created from a nearby station both to locate the discontinuity as well as to provide the correction. It does not rely on documented events for either.
“If you are claiming #1, then I’d like to see the apparent disagreement with Willis hashed out.”
Me too!
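Until that gets hashed out, a rough illustration of the mechanics JJ describes may help. This is a hedged sketch of the general first-difference idea, with invented numbers; it is not GHCN’s actual code, and the offset estimate is simplified.

```python
# Hedged sketch of the correction step: once a discontinuity in
# (station - reference) is located, shift the earlier segment so the
# two sides line up. Illustrative only; not GHCN's actual implementation.

def apply_step_correction(station, reference, break_idx):
    """Shift station values before break_idx by the mean offset between
    station and reference on either side of the break."""
    diff = [s - r for s, r in zip(station, reference)]
    before = sum(diff[:break_idx]) / break_idx
    after = sum(diff[break_idx:]) / (len(diff) - break_idx)
    offset = after - before  # how far the earlier segment sits off
    return [t + offset for t in station[:break_idx]] + station[break_idx:]

station = [29.0, 29.1, 28.9, 27.8, 27.9, 28.0]    # apparent drop at index 3
reference = [29.0, 29.1, 29.0, 28.9, 29.0, 29.1]  # neighbours show no drop

# The first three values are shifted down by roughly 1 C to match the
# later segment, which is taken to reflect the current station siting.
print(apply_step_correction(station, reference, break_idx=3))
```

Note that nothing in this procedure consults station history: the break is inferred purely from divergence against the reference, which is the point JJ has been making.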

December 10, 2009 1:23 pm

Town of Oenpelli in West Arnhem Land founded around 1900
“The Oenpelli Mission began in 1925, when the Church of England Missionary Society accepted an offer from the Northern Territory Administration to take over the area, which had been operated as a dairy farm. The Oenpelli Mission operated for 50 years.”
Oenpelli weather station records:
http://weather.ninemsn.com.au/climate/station.jsp?lt=site&lc=14042
Station commenced 1910
Situated 230 km due east of Darwin

carrot eater
December 10, 2009 1:26 pm

Willis: “Even if one of them were for a Stevenson Screen (it’s not, but we can play “what if”), what are the other four for?”
Has it occurred to you to ask the Australian BoM in private correspondence? They’re the ones who have the historical metadata for the station.
Taking your graph at face value, most of the adjustments are after the site move in 1941 (which I still don’t understand why you don’t want to adjust for). So for recent adjustments, the reference network comes into play. If the statistical method used by the GHCN is working well here, then those adjustments should line up with something in the metadata (the GHCN says they don’t have or use metadata for non-US stations).
You should also note that GISS, CRU and the Australians all do their own homogenisation, and the IPCC used the CRU, not the GHCN homog. Why not show all four sets of adjusted results, so we can see how they compare?

E.M.Smith
Editor
December 10, 2009 1:31 pm

Don’t know what happened to this the first time:
weschenbach (12:25:33) :
From what I’ve seen we have E.M. Smith’s work, A.J. Strata, a site called CCC, Steve McIntyre and Jean S over at CA, and I’m sure various others, and of course your good self, all working in various ways on the various datasets.
Can a way be found to get you all together, plus interested parties willing to do some work (like me), to really work on this and produce a single temperature record, but rather than rehash CRU’s, GHCN’s or GISS’s code in something like R, actually come up with a new set of rules for adjusting temperatures.

I’m available. I’ve been planning to ‘re-hack’ GIStemp to make “Smith Temp” by de-buggering it; but I’d be just as happy to point out what they do that is reasonable, and what they do that is borken 😉 and have the thing re-done right, from scratch.
The biggest problem I see is to get ahead of NCDC and GHCN into the full planetary data series and avoid their “cooking by deletion” in the last 20 years.

So last night I went and registered surfacetemps.org. I envision a site with a single page for each station. Each page would have all available raw data for that station. These would be displayed in graphic form, and be available for download. In addition there would be a historical timeline of station moves, copies of historical documents, and most importantly, a discussion page.

Works for me. I’ve bookmarked it.

I do not think our purpose should be to create a new dataset, although that may be a by-product. I think our purpose should be to serve as a model for data transparency and availability. For each page, we should end up with a recommended set of adjustments, clearly marked, shown, and discussed. But if someone wants to use one of the adjustments but feels the others are not justified, fine.

Given the selective thermometer deletions in GHCN, if you just do a “don’t hack it like GIStemp or CRU do” you are missing the third leg of this stool…
The first and most pernicious “fudge” is the removal of recent cold records from the Global Historical Climate Network data set. If all you do is ratify that as “OK”, then you will get very similar results to all the other folks once you start splicing, homogenizing, and trend plotting. You can get around that by leaving OUT all the recently deleted records from the entire past, but then you have a very small and very biased set of records to work with (hot stations mostly).
So you have to get in front of NCDC and GHCN, and that means in some way making your own dataset. Be it a ‘by product’ or goal…

On the main page, we should have a graphing/mapping interface that will allow anyone to grab a subset of the data (e.g. rural stations with a population under 100,000 in a particular area that cover 1940 to 1960).

How will you get “rural stations” when GHCN has 90+% of surviving stations at airports? (And many of the rest in cities). That is why you have to get ahead of GHCN and make a data set of raw(er) data.
For the USA, you can get USHCN, but even that has been “value added” (YUCK! I’m coming to hate those words…) by NCDC in unknown ways…

I would suggest that we do our work in R.

Yet Another Language to learn?… SIGH…. OK, I’ll look at it. But the lag time of learning YAL, and the time consumed to do it, is not attractive.

Oh, yeah. There’s more to come on the GHCN question, I’ve uncovered a very curious and pervasive error. Stay tuned for my next post.

And where will this post appear? Here one hopes…

December 10, 2009 1:33 pm

Oops, my apologies.
I have just discovered that all temperature data for Oenpelli (230 km east of Darwin) prior to December 1963 appears to have been lost, according to BOM records.
The full rainfall record back to 1910 was not lost.

Vincent
December 10, 2009 1:34 pm

A very interesting blog. I commend the comments of wobble, alkali and JJ from whom I am learning a lot. Could do with less name calling though (jrshipley) – doesn’t help, just pisses people off.
It seems to me though, that these temperature records are such a tangled mess, I can’t see how even the sharpest minds can extract any useful knowledge from them. They should all be binned and replaced with proxy records.

Michael Tobis
December 10, 2009 1:50 pm

This is quite striking.
I would like to replicate your results. Please provide URLs for the data and the source code.
Thanks in advance.

JJ
December 10, 2009 2:13 pm

Willis,
“I’m sorry, but that explanation doesn’t hold water.”
Yeah, it still does.
“There were adjustments back as far as 1930. In the thirties Darwin was the only station within 500 km. So the changes could not have been based on a five station “reference series” as you claim. There were no stations near to Darwin then, much less five stations”
I have been unable to find a reference to a 500 km limit in the methods document. Given the method’s focus on ‘regional and larger’ scales, they may have used five stations from farther away. If that is the case, that might be a productive line of investigation for you to pursue, in the context of critiquing the claims about the method’s applicability to global average long term trends. Keep in mind if you go that route that some aggregate maths can be very robust to such extreme examples, in ways that are not intuitive.
It remains that there is no proof that the results you are seeing are anything other than artifacts as predicted by the documented methodology.
Speculating some: in addition to using five sites, some or all of which may be farther than 500km from Darwin, it is also possible that a modification of the methods was applied – using fewer than five sites, for example. In my experience, that sort of thing wouldn’t be that unusual. If such departures from the methods occurred, that would be important.
You have the data and the methods. Thus, you have the means to perform a Steve M style audit, attempting to replicate results. Use the five stations nearest to Darwin that meet the method’s criteria. See what you get. That would be an excellent follow-up to your current work. You can do that while you wait for NOAA to respond to the questions you asked of them.
You did ask NOAA those important questions, didn’t you?
🙂

December 10, 2009 2:14 pm

Willis
Following up from this may help you:
http://www.bom.gov.au/climate/averages/tables/cw_014042_All.shtml
AFAIK Oenpelli is the oldest station within 500 km of Darwin with the longest temperature records. For personal reasons I know the C of E mission kept fairly reliable (max/min) temperature records from 1925 until relinquished around 1975. A Stevenson Screen was certainly in place in WWII. Why all pre-1963 temperature data should now be missing from BOM records is a mystery to me.
IMHO, logically, if such a correction (“…an analysis of the Darwin station data … and correction of apparent discontinuities within the Darwin data, using a reference series created from a nearby station both to locate the discontinuity as well as to provide the correction”) occurred, it would most likely have used data from this weather station (which was not subject to coastline effects etc.).
Therefore, presumably, all corrections to the Darwin Station data occurred from 1964?

CC
December 10, 2009 2:30 pm

JJ,
Seeing as how you seem to know what you’re talking about, and are good at explaining stuff for laymen, I have a question. What’s the purpose of the GHCN2 homogenization? I am struggling to see what use it has.
Presumably the reference series must be known to be more reliable than the series being adjusted, otherwise the adjustment would have no justification. But if we have a reliable reference, why use the less reliable series at all? I assume the reference must be some distance away, and we are just trying to get broader coverage.
If there is a non-negligible geographic distance between the reference and the other series which are adjusted to it, why is it justified to adjust the other series to the reference? Might the differences not be due to local climate variations?
Also, if a series such as the Darwin series is adjusted so much that it no longer bears any resemblance to the actual temperature at Darwin, which the authors seem to admit might potentially happen, then how is it useful? Isn’t the whole point to work out what the average temperature of an area is? How is this helped by moving some of the raw data away from its correct temperature?
I’m not a skeptic trying to pick holes in everything, I’m just genuinely interested in the methodology used here.

tallbloke
December 10, 2009 2:50 pm

Vincent (13:34:52) :
It seems to me though, that these temperature records are such a tangled mess, I can’t see how even the sharpest minds can extract any useful knowledge from them. They should all be binned and replaced with proxy records.

Lol. Which sort of proxy did you have in mind?

December 10, 2009 2:56 pm

This was written in a 2000 Aust. Federal Govt. agency report (Office of the Supervising Scientist based in Jabiru).
“Climate records in the Alligator Rivers Region and neighbouring regions are relatively limited. Following common practice, weather stations have typically been associated with human habitations and consequently the monitoring sites have records limited by the relatively short history of townships in the region. For example, Jabiru has a climate record extending back to 1982. Within the region, the longest climate record is approximately 80 years, from Gunbalunya (Oenpelli). Longer-term climate records are available for Darwin.
Climate conditions have been described by a range of authors, including the Australian Bureau of Meteorology (1961), McAlpine (1969), Christian and Aldrick (1977), Hegerl et al (1979), Woodroffe et al (1986), Nanson et al (1990), Riley (1991), Wasson (1992), Butterworth (1995), Russell-Smith et al (1995), McQuade et al (1996) and Bureau of Meteorology (1999). On the basis of these authors’ works a general description of the climate can be made.”
Quite apart from the other adjustment-related issues discussed in the most recent posts above, how are we to explain that, of the ~80 years of temperature records for Oenpelli (Gunbalunya) to 2000, i.e. since ~1920 (most probably ~1925), which BOM saw fit to review in 1961 and academic authors in 1969, 1977 etc., the raw (or even summary) records prior to calendar 1964 are now not available from BOM?
IMHO that station is the primary (perhaps only suitable?) candidate for an intra-regional basis for adjustment of the Darwin temperature record prior to ~1980 back to 1941 and earlier!

wobble
December 10, 2009 3:12 pm

carrot eater,
This is a direct quote from the paper that Lambert claims documents the methodology ABM used to do their adjustments.
According to the paper, an objective method to detect a discontinuity required suitable reference stations with five years of overlap prior to the discontinuity, five years of overlap following the discontinuity, within eight degrees of latitude, within eight degrees of longitude, and within 500 meters of elevation. I’ll have to check the lat/longs and temperature records to see whether this requirement was met (a sketch of this screen appears below).
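That screening step, at least, is mechanical enough to sketch in code. In the following illustration the data structure, field names and station values (years, coordinates, elevations) are invented approximations for the example, not values taken from the paper:

```python
# Sketch of the candidate screen described above: a reference station
# qualifies if it overlaps the discontinuity by at least 5 years on each
# side and sits within 8 degrees of latitude, 8 degrees of longitude,
# and 500 m of elevation. Station values below are rough illustrations.

from dataclasses import dataclass

@dataclass
class Station:
    name: str
    lat: float        # degrees
    lon: float        # degrees
    elev: float       # metres
    first_year: int
    last_year: int

def qualifies(candidate: Station, target: Station, break_year: int) -> bool:
    return (abs(candidate.lat - target.lat) <= 8.0
            and abs(candidate.lon - target.lon) <= 8.0
            and abs(candidate.elev - target.elev) <= 500.0
            and candidate.first_year <= break_year - 5
            and candidate.last_year >= break_year + 5)

darwin = Station("Darwin", -12.4, 130.9, 30.0, 1882, 2000)
oenpelli = Station("Oenpelli", -12.3, 133.1, 15.0, 1964, 2000)

# False: Oenpelli's surviving record starts too late to bracket 1941.
print(qualifies(oenpelli, darwin, break_year=1941))
```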
Then we learn:
“”Where a discontinuity was evident from the reference series or DTR test, but not detected by the objective method, the magnitude of adjustment was estimated.””
So the subjective (visual) reference series test and the subjective (visual) DTR test were used to determine the existence of a discontinuity if the requirements of the objective method weren’t met. And if the requirements for the objective method weren’t met, then we now know that the magnitude of the “adjustment was estimated.”
Got that? It was estimated, and the paper doesn’t disclose an estimation methodology.
The paper also admits:
“The decision of whether or not to correct for a potential inhomogeneity is often a subjective one.”
And:
“The subjectivity inherently involved in the homogeneity process means that two different adjustment schemes will not necessarily result in the same homogeneity adjustments being calculated for individual records.”
In fact, this paper (Marta and Collins – 2003) couldn’t even replicate Torok and Nicholls (1996) despite their best efforts.
I’m thinking that it’s a bit disingenuous for anyone to be assuming that the level of subjectivity used by ABM can be replicated.
JJ, do you have a link to the GHCN2 methodology that you just reviewed?

JJ
December 10, 2009 4:06 pm

Wobble,
I am unclear as to the relevance of ABM methods to the GHCN dataset.
GHCN methods are here:
http://www.ncdc.noaa.gov/oa/climate/ghcn-monthly/images/ghcn_temp_overview.pdf
There does not appear to be a lot of room for subjectivity in this method, as described. Decisions are based on numeric cut-offs, significance test values, etc.

alkali
December 10, 2009 4:17 pm

@wobble: “Have you taken a look at the documents which supposedly outline the algorithm? I’ll take a look, but I’ll be very surprised if subjective inputs (open to the questions that Willis is asking) are NOT a requirement for replicating the adjustments to the raw data (making replication impossible).”
I think the motivation for using that procedure is to avoid that kind of subjectivity. The GHCN methodology is described here.
Note that the GHCN (US; NOAA) adjusted set is different from the ABM (Australian) adjusted set. The GHCN set does not try to give weight to historical information about station conditions (“metadata”); the ABM set does.
@CC: “Also, if a series such as the Darwin series is adjusted so much that it no longer bears any resemblance to the actual temperature at Darwin, which the authors seem to admit might potentially happen, then how is it useful?”
The same document linked above states at p. 11:
“A great deal of effort went into the homogeneity adjustments. Yet the effects of the homogeneity adjustments on global average temperature trends are minor (Easterling and Peterson 1995b). However, on scales of half a continent or smaller, the homogeneity adjustments can have an impact. On an individual time series, the effects of the adjustments can be enormous. These adjustments are the best we could do given the paucity of historical station history metadata on a global scale. But using an approach based on a reference series created from surrounding stations means that the adjusted station’s data is more indicative of regional climate change and less representative of local microclimatic change than an individual station not needing adjustments. Therefore, the best use for homogeneity-adjusted data is regional analyses of long-term climate trends (Easterling et al. 1996b). Though the homogeneity-adjusted data are more reliable for long-term trend analysis, the original data are also available in GHCN and may be preferred for most other uses given the higher density of the network.”
One could cynically argue that no historical temperature data could ever be useful: if you use raw data, you can’t be sure that some adjustment isn’t warranted for changes in station conditions over time; if you adjust using historical information, you introduce subjectivity and potential bias; if you adjust using some kind of algorithm, hey, you aren’t taking into account historical information. I don’t think that Catch-22 quite works: the challenge is to make a fair evaluation of the considerable amount of very good data that we do have.

alkali
December 10, 2009 4:24 pm

A small correction: I should have said that the GHCN set does not try to give weight to historical information about station conditions (“metadata”) for non-US sites. The GHCN site _does_ take metadata into account in adjusting for a substantial number of US sites (I suppose because NOAA trusts its own metadata).

wobble
December 10, 2009 4:37 pm

JJ (16:06:35) :
“”GHCN methods are here:””
What? Nothing can be replicated using that “Overview.” It doesn’t provide nearly enough detail to generate code or even perform simple adjustments.
In fact, one can’t even get a sense of the objectivity versus subjectivity of that methodology from that.
“”I am unclear as to the relevance of ABM methods to the GHCN dataset.””
Concur. It’s not relevant. Maybe Lambert and carrot eater will figure that out, too.

JJ
December 10, 2009 4:40 pm

Willis,
“While there is no reference to 500 km in the documents, they keep talking about “nearby” stations, and they say they use five stations.”
In point of fact, they don’t say ‘nearby’. They say ‘neighboring’.
“In 1930 there were no stations within 750 km of Darwin, which is hardly “nearby” in anyone’s books.”
They may be in the ‘neighborhood’, especially considering (once again) the scale that their intended use assumes: regional and larger. The assertion is that these methods are valid for large-area, long-term trend analysis. Supporting research is provided for that assertion.
“It is possible that as you say these are “artifacts” of the homogenization process. If so, it is the responsibility of those running the database to remove such artifacts, and they have not done so.”
Nonsense. There is no specific ‘responsibility’ to remove artifacts from any dataset, only to account for their effect on the intended use of the adjusted data. They have done so.
The two primary claims about the validity of the homogenization method are the rocks against which your current criticism crashes. You cannot succeed without first refuting those claims, and they cannot be refuted with handwaving.

carrot eater
December 10, 2009 4:51 pm

wobble: GHCN and ABoM are producing independent products here, using similar but different methods. I’ve already pointed out (if not here, elsewhere, I forget) there is some subjectivity in the BoM method; I read the same things you did. But you should still get similar results, if you do what they said. Then again, they have and use the historical meta-data; so far as I can tell, that information is not online.
The GHCN does not use historical metadata for non-US stations. I’m just saying that the corrections that come up using the GHCN method should correspond to actual physical events in the historical metadata, IF you obtained that data from the ABoM. Time of observation changes, changes in equipment, etc.
I think the algorithm Peterson describes for the GHCN in multiple publications is laid out in more than enough detail to use. If anybody cares all that much, they should go ahead and try it. Reproducing somebody’s results isn’t about taking their code, hitting ‘run’, and getting the same number. It’s far better to implement the procedure for yourself. See how neighboring stations correlate with each other; see how far away they can be and still correlate. Get some idea for that, then build your own reference network, and so on.
I agree with jj that it’s incumbent upon Willis to go this step further. All Willis has done is shown that there are adjustments. But we knew that. The question is whether those adjustments are based on a defensible use of statistics, or did the algorithm choke at Darwin and send up spurious results?
Again, we’ve got BoM, GHCN, CRU and GISS. All take the same raw data, and homogenise them in their way. Willis has only put up one of these. It’d be useful if he or somebody else put up all of them. At a glance, it looks like GISS only starts at 1963 – perhaps using their algorithm, it wasn’t possible to homogenise before that point, due to lack of neighbors? I don’t know.

carrot eater
December 10, 2009 4:57 pm

And I’m still curious what the baseline period is for the anomalies above. It seems pretty clear that afterwards, they’ve been shifted up or down so they start at zero; I don’t understand why that’s necessary. Just let them start wherever they start. There’s no need for them to start at zero.

bill
December 10, 2009 5:11 pm

Posted on the stick thread:
Willis: Looking at the unadjusted plots leads me to suspect that there are 2 major changes in measurement methods/location.
These occur in January 1941 and June 1994. The 1941 shift is well known (the post office to airport move); I can find no explanation for the 1994 shift.
These plots show the 2 periods, each giving a shift of 0.8C:
http://img33.imageshack.us/img33/2505/darwincorrectionpoints.png
The red line shows the effect of a suggested correction.
This plot compares the GHCN corrected curve (green) to that suggested by me (red).
The difference between the 2 is approx 1C, compared to the 2.5C you quote as the “cheat”:
http://img37.imageshack.us/img37/4617/ghcnsuggestedcorrection.png
Any comments?

JJ
December 10, 2009 5:11 pm

“What? Nothing can be replicated using that “Overview.” It doesn’t provide nearly enough detail to generate code or even perform simple adjustments.”
You cannot possibly have worked your way through that paper and its antecedents with sufficient attention to detail to make that statement.
The methodology looks very straightforward to me. Further details are in the half-dozen specific methodology papers cited. If there is anything missing, you would need to scour the paper tree to know that, and be very specific as to what you need.
“If fact, one can’t even get a sense of the objectivity versus subjectivity of that methodology from that.”
Where’s the subjective process step? You provided an example from ABM. Where’s the window here?

December 10, 2009 5:15 pm

Willis
I strongly support what you are doing but your statement:
“While there is no reference to 500 km in the documents, they keep talking about “nearby” stations, and they say they use five stations. In 1930 there were no stations within 750 km of Darwin, which is hardly “nearby” in anyone’s books.”
is simply untrue, as I’ve just discussed here in several posts reporting on my efforts to locate the data for Oenpelli (Gunbalunya), only 230 km due east of Darwin. There is plenty of evidence (personal contacts in the region, plus academic references, plus a BOM report) that temperature (and rainfall etc) records were taken at Oenpelli since at least 1925, when the CofE mission was established. However, as I’ve pointed out, full temperature records prior to calendar 1964 (some back to January 1957) cannot be sourced from BOM (although rainfall records go back to 1910).
FYI, not counting Oenpelli, there was at least one other Northern Territory mission-based weather station within 750 km of Darwin, probably also in existence in 1930 (Warruwi). The temperature records for Warruwi before January 1957 also appear to have been lost by BOM. The funny thing is that BOM actually refers to those records in a report in 1961!
Therefore we must conclude that the adjustments at Darwin from 1940 through 1960 (say) were made on a wider regional basis. This brings in other, much more widely separated locations such as Daly Waters, Alice Springs, Port Moresby, Jackson Airfield, Broome etc.

wobble
December 10, 2009 5:17 pm

alkali (16:17:02) :
“”I think the motivation for using that procedure is to avoid that kind of subjectivity. The GHCN methodology is described here.””
and JJ
“”You cannot possibly have worked you way thru that paper and its antecedents with sufficent attention to detail to make that statement.””
Please. Here’s a quick example.
Unless “neighboring” is defined, this methodology is excessively vague.
“”The first of these sought the most highly correlated neighboring station, from which a correlation analysis was performed on the first difference series: FD1 = (T2 – T1).””
It’s impossible to determine which neighboring station is the most highly correlated without knowing which stations are candidates and which are not.
Without knowing this, it’s impossible to complete Step 1.
Here’s another quick example:
“”the discontinuity was evaluated using Student’s t-test. If the discontinuity was determined to be significant, the time series was subdivided into two at that year.””
Significance is not defined. Given that the Darwin data contain many small adjustments in later years, you can bet your a$$ that the determination of the significance of a discontinuity is germane.
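For what it’s worth, the t-test step quoted above is easy to illustrate. A sketch, with the significance level assumed at 95% (the paper leaves “significant” undefined) and with invented annual means:

```python
# Sketch of the Student's t-test step quoted above: test whether the
# means on either side of a candidate break year differ significantly.
# The 95% level is an assumption; the paper does not define "significant".

from scipy import stats

before = [29.1, 28.9, 29.0, 29.2, 28.8]  # invented annual means, pre-break
after = [29.9, 30.1, 29.8, 30.0, 30.2]   # invented annual means, post-break

t_stat, p_value = stats.ttest_ind(before, after)
if p_value < 0.05:
    print(f"discontinuity significant (p = {p_value:.4f}); split the series")
else:
    print("no significant discontinuity at this year")
```

The open question wobble raises remains: the test itself is objective, but the choice of cutoff, the window lengths, and the fallback to visual estimation are where the subjectivity enters.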

December 10, 2009 5:22 pm

Nice work Bill.

Leslie Parsons Harman
December 10, 2009 5:28 pm

Been doubting the warmists for years. Also have been following Lomborg’s lead when I teach children. It’s more important to use our gifts to improve the present status of the less fortunate than to pursue questionable and overzealous policies. Perhaps the next step in your data “forensics” is: What CO2-producing activity were they trying to mimic with their “adjustments”?
Keep up the good work.

wobble
December 10, 2009 5:31 pm

carrot eater (16:51:36) :
“”I’ve already pointed out (if not here, elsewhere, I forget) there is some subjectivity in the BoM method; I read the same things you did. But you should still get similar results, if you do what they said. “”
That’s completely unrealistic. How can I “do what they said” when they merely tell me to estimate the “magnitude of adjustment” based on visual findings of discontinuity?
“”It’s far better to implement the procedure for yourself. See how neighboring stations correlate with each other; see how far away they can be and still correlate. Get some idea for that, then build your own reference network, and so on.””
I agree that this must be done prior to making any accusations. However, I doubt the effort will yield any results. I have a feeling that such a replication exercise will be a complete time sink as one attempts to continuously tweak and turn dozens of assumption knobs. In the event of a replication failure, no accusations can be made, and the failure would simply be greeted with “well, you didn’t make the proper assumptions because you’re not a climate professional.”
I’m fine dropping the accusations of wrongdoing. But let’s be realistic about the chances of a replication exercise yielding any fruit at all.

CC
December 10, 2009 5:52 pm

alkali,
“One could cynically argue that no historical temperature data could ever be useful: if you use raw data, you can’t be sure that some adjustment isn’t warranted for changes in station conditions over time; if you adjust using historical information, you introduce subjectivity and potential bias; if you adjust using some kind of algorithm, hey, you aren’t taking into account historical information. I don’t think that Catch-22 quite works: the challenge is to make a fair evaluation of the considerable amount of very good data that we do have.”
Yes I suppose you’re right… and given all the hoo-ha about the in-group mentality exposed in the CRU emails and its potential to increase subconscious bias, I think in this case I prefer the blind algorithm, even if it does occasionally spit out an odd result!

December 10, 2009 6:08 pm

Proposing (?) the wrong word.
That certainly is the (BOM-sourced) dataset from GISS for Oenpelli for the period from 1964. Naturally it tells us nothing about the missing 1925–1963 Oenpelli record, and hence casts no light on Willis’s concerns over the 1941 adjustment.
As you can see, there is some sort of ‘step shift’ in the Oenpelli record around 1977–1978. Is this a reasonable step shift (i.e. due to a position change etc) or yet another obscure ‘step up’ from a BOM or GISS adjustment? Did you spot any related metadata?
In any event, it certainly provides us with a very long-standing inland site (quite rare in the Northern Territory) only 230 km east of Darwin to compare with Darwin.
However, Bill seems to have identified a step adjustment to the Darwin record having occurred in June 1994.
It certainly is a worry that these step shifts always seem to be up, and seem mostly to occur from about the late 1970s…
I must confess I inhaled (way back then 😉

bill
December 10, 2009 6:36 pm

Willis Eschenbach (17:19:16) :
Please see my post:
bill (17:11:19) :
For “reasoned” changes suggested for the raw data
1941 +0.8C
1994 +0.8C
Note that both these steps are across a missing month’s data – time to set up and calibrate?
The June 1994 step I can find no reason for on the web (digital sensors perhaps?)

wobble
December 10, 2009 7:12 pm

CC (17:52:13) :
“”I think in this case I prefer the blind algorithm, even if it does occasionally spit out an odd result!””
I’d still like to find out exactly how blind the algorithm really is.

December 10, 2009 7:28 pm

GISS 1977 step @ Oenpelli (Gunbalunya) (Station 14042; 230 km east of Darwin) +0.5 C ± ~0.3 C.

December 10, 2009 7:31 pm

Not blind if it was an ‘AlGorhythm’.

WAG
December 10, 2009 8:49 pm

Wow, no one has responded to Lambert’s criticism. Still. The issue is not whether the codes are released – that’s just stonewalling/ad hominem. The real issue is that no one has ever given a reason why the adjustments were not justified.
Before making accusations of “bogus” adjustments, the burden of proof is to show why the adjustments should not have been made.

Ed
December 10, 2009 8:57 pm

JJ et al:
I agree that one data series like this is not conclusive. I think most people would agree with that. Why can’t folks do exactly this kind of analysis, showing the adjustment, to see if this happens a lot? If a random sample of, say, 50 equivalent stations were run and all or nearly all of them had significant warming adjustments like Darwin Zero (which I think everyone agrees looks fishy, even though it might not be), then I think we’d all agree that maybe there is some sort of bias built into the adjustment method. If they all pretty much average out (and I think we could pretty easily determine what a statistically significant result would be), then that tells us that while Darwin Zero may be adjusted wrongly, it’s not indicative of a trend.
One way a bias could be introduced is that (if I understand correctly) this kind of adjustment isn’t run in the United States, because we have the metadata there. By treating the United States differently, if the US data is used as “neighboring” data by the adjustment software, it could introduce a trend that could then propagate. It’s also not out of the question that the adjustment code could use adjusted data at each location instead of raw data, so you could also wind up with a runaway trend there as well.
If some of the stats-whiz folks here could come up with the number of stations that would be statistically significant, and someone then came up with a procedure for picking that number of stations at random so we could see the graphs, wouldn’t that tell us something?

Evan Jones
Editor
December 10, 2009 9:06 pm

Well, for the US, there are 1221 NCDC/USHCN1 stations.
Weighting them all equally (quick and dirty, but nonetheless informative):
Raw Data: +0.14C per century
Adjusted Data (FILNET): +0.59C per century
That’s a pretty big adjustment. Roughly twice as many trends are adjusted warmer as cooler.
(I suspect this is much the same for the rest of the world. We’ll find out, soon enough!)
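For readers who want to try a calculation of this shape themselves, here is a sketch of the “quick and dirty” equal weighting Evan describes: fit a least-squares trend to each station, scale to degrees per century, and average with equal weights. The station series below are invented placeholders; the +0.14C and +0.59C figures above are Evan’s, not outputs of this code.

```python
# Sketch of an equal-weighted trend calculation: per-station least-squares
# slope, scaled to C per century, then averaged. Station data invented.

import numpy as np

def trend_per_century(years, temps):
    """Least-squares slope in C per year, scaled to a century."""
    return np.polyfit(years, temps, 1)[0] * 100.0

rng = np.random.default_rng(42)
years = np.arange(1900, 2000)
stations = [
    20.0 + 0.001 * (years - 1900) + rng.normal(0, 0.3, years.size),
    15.0 + 0.002 * (years - 1900) + rng.normal(0, 0.3, years.size),
]

trends = [trend_per_century(years, t) for t in stations]
print(f"equal-weighted mean trend: {np.mean(trends):+.2f} C per century")
```

Running the same computation on raw versus adjusted series, station by station, is exactly the comparison Evan reports.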

JWM
December 10, 2009 9:34 pm

Hello Willis,
I commend the effort you have expended here; it truly takes someone dedicated to go through all of the trouble. However, I think you have misled your audience by not quoting some of the more important parts of the article you cite (the quote below Figure 7). From the article (pp. 2844–2845), they: (1) state the error on their “second minimizing technique” of p=0.01 from a Multivariate Randomized Block Permutation test (thus allowing the reader to determine whether they think that is reasonable); (2) go on to explain their discontinuity adjustments as being based on tests at every annual data point (95% confidence being the cutoff for adjustment, using a nonparametric technique not subject to distributional biases); and (3) state that their goal is to create historic data homogenized to current record keeping.
A couple of comments (and here I am assuming that what they did is what they said they did; a replication could determine whether that was the case).
First, as an effort to remove bias from all periods, their statistical method seems sound. One can always argue with whether they should have picked a 99% confidence level for the annual discontinuity adjustments, and one could re-estimate their results accordingly. But 95% is the usual, baseline standard, and the analysis can always be repeated.
Second, as your black line shows, the adjustments are sometimes in the negative direction, but the great majority are positive. Each of the jumps, of course, marks a point at which the discontinuity analysis found, with 95% confidence, that there was a positive *or* negative jump. This is the source of their adjustment: statistical estimation. Any one of the adjustments has a 5% chance of being an error, but as you know, the chances of just 5 being wrong simultaneously are very, very small. Still, of course, you could re-estimate the models and apply a stricter confidence interval.
Third, there seems to be some assumption in the above discussion that what is being measured is the temperature in Darwin. I’m not a climate scientist (or a geographer), but I don’t believe that is correct. What they are trying to estimate is a temperature for the geographic area that they *call* Darwin, which is why the distance to and from each point doesn’t matter much in terms of trying to get a historic baseline (assuming, of course, they aren’t ignoring other valid data points within that region).
Finally, a purely graphical approach may not be able to get to the bottom of their statistical techniques. Unless you can see in dimensions > 3 (I certainly can’t), projecting multiple station moves, etc., onto a 2D surface may be misleading.
Best,
~JWM
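To put a rough number on JWM’s “very, very small”: if each adjustment independently carries a 5% error probability (independence being an assumption JWM does not state, and one the data may not satisfy, since nearby tests share data), then the chance of all five being wrong at once is 0.05^5 = 3.125 × 10^-7, about one in three million.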

Hessu A.
December 10, 2009 11:41 pm

This is exciting. Thanks.
I hope this leads to some action. I mean, I would like to see more answers at a high level to all these “Inconvenient Truths”.

JJ
December 10, 2009 11:48 pm

CC (14:30:52) :
“Seeing as how you seem to know what you’re talking about, and are good at explaining stuff for laymen, I have a question. What’s the purpose of the GHCN2 homogenization? I am struggling to see what use it has.”
Thank you. I will preface by caveating myself. I do quite a lot of environmental monitoring and a fair bit of modeling, so I am familiar with the territory that these homogenization schemes operate in, and I am comfortable with much of the math. I am not an expert in homogenization theory, however. I can probably pull off a decent explanation for the layman, but my own understanding doesn’t necessarily extend much deeper.
The general idea behind homogenization is to remove non-climate signals from the data. These include various observational errors as well as instrument changes, siting changes, etc. The two methods that GHCN uses to accomplish that are:
1) a metadata approach. You look at detailed station records for indications of non-climate effects, and you apply a specific correction for each effect.
2) a relative approach. This begins with the assumption that climate is fundamentally regional, and that all stations within a particular region should show the same climate pattern. Individual sites within a region may vary from one another in scale (one may be warmer or cooler than another on average), and there will of course be random sampling effects. Other than that, sites within the same region are assumed to have the same pattern and the same variability about the pattern, and thus should be highly correlated with one another. So if you compare sites in the same region and you see a relatively minor difference in pattern between otherwise highly correlated sites, that minor difference is assumed to be a non-climate signal that can be adjusted out.
GHCN uses 1) for US sites, and 2) outside the country. There are benefits and drawbacks to both. Good-quality metadata isn’t available everywhere. Even where it is, coming up with an adjustment to apply for each non-climate effect can be challenging. It’s often an exercise in modeling itself, and carries with it all of the error inherent in that process. Often, the adjustment applied to an effect discovered by the metadata approach is derived using the relative approach.
With the relative approach, you don’t need metadata. Even where there is good metadata, the relative approach can pick up things that aren’t recorded. On the other hand, the relative approach is founded on some broad assumptions. The adjustments that you choose to make and the magnitude of those adjustments are not based on site-specific knowledge about what happened there, but on what you infer should have happened there, based on what you see happening elsewhere in the ‘neighborhood’. You are relying on your assumptions holding frequently enough that the results are close enough to the truth often enough to improve the aggregate results vs what you would have got using the raw data.
There are statistical methods that can be used to make those assumptions, and sometimes to check those assumptions, but they aren’t perfect arbiters even in theory. And as we have seen with the statistics bound up in the Tree Ring Circus, sometimes the theory and/or the proper application of complex statistics is above the heads of many researchers for whom statistics is not their primary field. The maths often don’t operate intuitively, and spurious results are often difficult or effectively impossible to detect. I don’t know that there is anything amiss in the area of homogenization schemes, but it is a concern.
“Presumably the reference series must be known to be more reliable than the series being adjusted, otherwise the adjustment would have no justification.”
Rather the opposite. The reliability of the reference series is not known, it is assumed. That is part of the fundamental assumption that climate is regional in pattern. That assumption is the weakness of the relative approach. Is climate truly regional in pattern? Everywhere and always? Good questions.
“But if we have a reliable reference, why use the less reliable series at all?”
The series being adjusted is not necessarily less reliable than any of the individual series that are aggregated to form the reference series. In fact, it might be more reliable overall than any of them. The assumption is that it is unlikely that several stations will all show exactly the same non-climate effect at the same time, and that the group can correct the individual.
Say station #1 moves to the airport in 1963. It is unlikely that stations #2 and #3 will also move the same year to a place that engenders the same average temp change as the move by station #1, so stations #2 and #3 will expose the change at station #1 in 1963. Meanwhile, station #2 might get a Stevenson screen in 1930, move to the park in 1950, and change over to MMT sensor in 1980. It can contribute to making station #1 more reliable, though it is itself generally less reliable.
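That group-corrects-the-individual mechanism can be sketched in code. The following is a rough illustration of the relative approach only, under the assumptions above; it is not Peterson’s actual algorithm, and the synthetic data are invented.

```python
# Rough sketch of the relative approach: build a reference series from
# the first differences of the most correlated neighbours, then cumulate.
# Illustrative only; the published GHCN algorithm differs in detail.

import numpy as np

def reference_series(target, neighbours, n_best=5):
    """Average the first differences of the neighbours most correlated
    with the target, then cumulate from the target's starting value."""
    fd_target = np.diff(target)
    fds = [np.diff(nb) for nb in neighbours]
    corrs = [np.corrcoef(fd_target, fd)[0, 1] for fd in fds]
    best = np.argsort(corrs)[-n_best:]   # indices of the best matches
    mean_fd = np.mean([fds[i] for i in best], axis=0)
    return np.concatenate(([target[0]], target[0] + np.cumsum(mean_fd)))

rng = np.random.default_rng(0)
base = 20 + 0.01 * np.arange(50) + rng.normal(0, 0.2, 50)
neighbours = [base + rng.normal(0, 0.2, 50) for _ in range(6)]
target = base.copy()
target[30:] += 1.0  # an artificial 1 C step: say, a station move at year 30

ref = reference_series(target, neighbours)
# The step shows up as a persistent offset of roughly 1 C between target
# and reference after year 30; that offset is what homogenisation removes.
print(np.round(target - ref, 1))
```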
“I assume the reference must be some distance away, and we are just trying to get broader coverage.”
For me, it is an interesting question as to exactly where the balance lies between the influence of greater spatial coverage and the influence of the regional homogeneity assumption on the overall results.
“If there is a non-negligible geographic distance between the reference and the other series which are adjusted to it, why is it justified to adjust the other series to the reference? Might the differences not be due to local climate variations?”
Under the fundamental assumption of the relative approach, no. At least, there won’t be enough of that to affect the overall result significantly. So they say.
“Also, if a series such as the Darwin series is adjusted so much that it no longer bears any resemblance to the actual temperature at Darwin, which the authors seem to admit might potentially happen, then how is it useful? Isn’t the whole point to work out what the average temperature of an area is? How is this helped by moving some of the raw data away from its correct temperature?”
If that happens, it is evidence that the fundamental assumption has been broken, for some time, at that site.
That is why Willis’ work here is interesting. If correct, it exposes a fairly significant bust at this site. It isn’t a ‘smoking gun’ all by itself, however, as the methods are understood to produce these errors. And allegedly, these sorts of errors happen infrequently enough that the overall result is still improved versus not applying the method.
That assertion, along with all of the method assumptions, certainly deserves a thorough going-over, IMO.
Also, note that even if they are operating perfectly according to plan, these methods don’t necessarily pick up all non-climate signals in the data. The relative method keys in on discontinuities (jumps) in the data. A non-climate signal that presents as a continuous trend change will likely not be identified by such methods. UHI is suspected to

Roger Knights
December 11, 2009 1:02 am

I like the idea of focusing on a single fishy-looking case and asking climate scientists and data-adjusters to explain it in full detail. Even if the result is inconclusive after this length of time due to various uncertainties, it would still establish a range for the unknowns and shed light on the issues involved (amount of subjectivity in making adjustments, amount of automation in the adjustments, etc.) in the larger controversy.
Even if it’s not much progress, it’s some — and we all should take what we can get and be happy for that.

December 11, 2009 1:19 am

I posted this over at the Accuweather site:
Mark,
You should check out the actual sites, before you state, “Global warming contrarians often don’t understand why it might be necessary to adjust raw temperature data, so they create in their minds theories involving misconduct and fabrication.”
It is obvious it is you who are creating things in your mind, for it is obvious you have failed to do the hard work of double-checking the adjustments made by these frauds.
The GHCN data for Darwin, Australia shows four 0.5-degree upward adjustments. A single 0.5-degree adjustment would raise eyebrows, but four (for a total of two degrees) is just plain absurd.
You are quite correct when you state adjustments have been made for urban heating, but have you ever gone to see what those adjustments actually are? In some cases they are an upward adjustment!!!
More and more people are double-checking the three main data sets graphed above. They just go look at the raw data from a local weather station and compare it with the “adjusted” data. Over and over these people (foot soldiers in the battle for Truth) report that the raw data shows little rise, or a slow recovery from the Little Ice Age, and nothing like the sharp rise shown in the three graphs above.
I am afraid, Mark, that you have been hoodwinked. Your trust has been abused. You sit there and parrot the talking points you read at Alarmist sites, but never get down to the dirty details of actually checking the individual weather stations.
The three major data sets are compiled by three closely associated groups of quasi-scientists who are constantly cross-checking with each other. They remind me of three school boys sitting in the waiting room of the headmaster, desperately whispering to each other, attempting to get their alibis straight.
However, when the headmaster is Truth, (as is the case with strict science,) all alibis unravel.
I don’t know if you’ve ever been in the position of questioning fibbing school boys, but they resemble Climate Scientists. When asked to produce proof, they often resort to stating, “I lost it,” (much like UEA, concerning raw data.) Also they often point at the other guy, and state “he did it too,” which is what is being said when you stress “the three data sets agree with each other.”
If one data set is shown to be rotten, it does not make the other two look better. Quite the opposite. It makes them look worse.
What is needed is full exposure. Show the raw data. Show each adjustment. Explain each adjustment. Only then can this “new and improved” data be trusted.
If you trust without verification, you are likely to find yourself in the uncomfortable position of waking up one day and finding you have been a sucker and a chump.
If you are too busy (or lazy) to do the verifying yourself, at least you should appreciate the efforts of those who are doing the hard work, station by station, all over the world.

December 11, 2009 2:48 am

JJ (23:48:48) :
That’s all very interesting. I am a long-time (PhD) consulting environmental scientist myself, with a fair bit of modeling experience, e.g. catchment hydrology, mine site hydrology, etc. So I understand reasonably well this ‘territory’ you have so succinctly summarized. But it still doesn’t help me understand the logic behind such as this:
http://thedogatemydata.blogspot.com/2009/12/raw-v-adjusted-ghcn-data.html
For example, how can a relative method be expected to satisfactorily ‘key in’, as you put it, on discontinuities (jumps) in the data associated with (in particular) widely separated coastal sites, when each coastal site sits at the edge of an expanse of open ocean intent on creating natural discontinuities through decadal or longer-term warm/cold current or prevailing wind shifts?
Similar difficulties can be envisioned for sites separated by mountain chains, major rivers, etc., etc.
I suspect there are plenty of ‘busts’ out there.

carrot eater
December 11, 2009 3:50 am

@wobble
“I’m fine dropping the accusations of wrongdoing. But let’s be realistic about the chances of a replication exercise yielding any fruit at all.”
That is no attitude to take. The little objections you are bringing up are easily surmounted. You don’t know what ‘neighboring’ means? Then do a spatial correlation test for yourself, to see how correlation drops off with distance. You’ll find out. Or, as JJ recommends, keep reading through the paper tree to get a full idea. These papers look pretty detailed to me; don’t ask to be spoon-fed.
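If you want a starting point, such a test might look like this rough sketch (placeholder station tuples; feed in real anomalies and coordinates yourself):

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in km."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 6371.0 * 2 * np.arcsin(np.sqrt(a))

def correlation_vs_distance(stations):
    """stations: list of (lat, lon, annual_anomaly_array) tuples, all on the
    same year grid. Returns (distance_km, r) pairs, sorted by distance."""
    out = []
    for i in range(len(stations)):
        for j in range(i + 1, len(stations)):
            lat1, lon1, a = stations[i]
            lat2, lon2, b = stations[j]
            r = np.corrcoef(a, b)[0, 1]
            out.append((haversine_km(lat1, lon1, lat2, lon2), r))
    return sorted(out)
```

Plot the pairs and you can see for yourself where the correlation falls off.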

carrot eater
December 11, 2009 4:31 am

Willis:
“Thanks for the question. Sure, it has occurred to me. But I must confess, I’ve given up asking. I’ve asked for so much data, and filed FOI requests for so much data, and been blown off every time, that I’ve grown tired of being ignored.”
Willis, I’m saying this earnestly. You’ve made rather strong accusations in this post, without doing nearly enough work to back them up. Your credibility is at stake when you do such things. What will you do if the Australians dig out the site history, and show a bunch of things about this site that require adjustments? Where does that leave you?
The least you could have done was to politely ask the Australians first, with a good faith question. If they don’t respond, then so be it.
“And those adjustments are the GHCN adjustments, not the Australian adjustments. I was writing about the GHCN. I haven’t a clue what the Aussies do, there’s only so many hours in a day.”
Yes, but again, the GHCN adjustments should correspond to some physical thing. So if you don’t have the time to repeat the GHCN method for yourself, asking the Aussies for the site history would help.
I can’t emphasize this enough: if you think you have the time to make accusations of fraud, then you have to take the time to do the required legwork. If you don’t have that time, then all you can do is make a post saying “I don’t understand these adjustments; does anybody know what’s going on here?”
“So what adjustment are you proposing we make? A tenth of a degree? Half a degree? A degree and a half? Show your work.”
For a rough start, just do whatever you showed in Fig 6; it’d be better than doing nothing. I agree that a rigorous adjustment is difficult using only the data you have above; perhaps the sites overlapped by a bit – it’d be worth checking the Aussie site.
“Your statement of nearby stations in the sixties doesn’t help at all with the adjustments that were done in 1920, 1930, and 1941.”
Then expand your definition of nearby. See how far out you can go and still have a correlation.
“And while it is true that GISS, CRU, and the Aussies all do their own homogenization and they all roughly agree, that does not comfort my soul in the slightest … if you don’t understand why, read the CRU emails.”
I don’t know that they do all agree in all ways. The picture I see on GISS leaves out the mess at 1941. It’d be useful to put them all up, especially the CRU set, since that’s what’s used to make the IPCC diagrams at the top of your post. And finally, you can’t just use ‘oh, read the emails’ as an excuse not to do the work to back up your claims. Not everybody thinks the emails are as dastardly as the regulars at this site do.

Nick Stokes
December 11, 2009 4:34 am

Steve Short
Yes, there are lots of other nearby stations, mostly with fairly short records, but between them covering most of the period. I posted some analysis on the other thread which may be relevant, and I’ll repeat it here:
I did a test on a block of stations in the v2.temperature.inv listing, which are in north NT. I noted that wherever there was an adjustment, the most recent reading was unchanged. So I listed the adjustment (down) that was made to the first (oldest) reading in the sequence. Many stations, with shorter records, did not appear in the _adj file – no adjustment had been calculated. That is indicated by “None” in the list – as opposed to a calculated 0.0. Darwin’s 2.9 is certainly the exception.
In this listing, the station number is followed by the name, and the adjustment.
50194117000 MANGO FARM None
50194119000 GARDEN POINT None
50194120000 DARWIN AIRPOR 2.9
50194124000 MIDDLE POINT 0.0
50194132000 KATHERINE AER 0.0
50194137000 JABIRU AIRPOR None
50194138000 GUNBALUNYA None
50194139000 WARRUWI None
50194140000 MILINGIMBI AW None
50194142000 MANINGRIDA None
50194144000 ROPER BAR STO None
50194146000 ELCHO ISLAND 0.7
50194150000 GOVE AIRPORT None
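For anyone who wants to repeat the test, it amounts to something like the sketch below. I am assuming the GHCN v2 fixed-width layout (12-character station/duplicate id, 4-character year, twelve 5-character monthly values in tenths of a degree C, -9999 for missing); check the v2 README before trusting my column positions.

```python
def read_v2(path):
    """Parse a GHCN v2-style file into {station_id: {year: annual_mean_C}}."""
    series = {}
    with open(path) as f:
        for line in f:
            stid, year = line[:12], int(line[12:16])
            months = [int(line[16 + 5*k : 21 + 5*k]) for k in range(12)]
            vals = [m / 10.0 for m in months if m != -9999]
            if len(vals) == 12:                      # complete years only
                series.setdefault(stid, {})[year] = sum(vals) / 12.0
    return series

raw, adj = read_v2("v2.mean"), read_v2("v2.mean_adj")
for stid in sorted(set(raw) & set(adj)):
    common = sorted(set(raw[stid]) & set(adj[stid]))
    if common:
        y = common[0]                                # oldest shared year
        print(stid, y, round(adj[stid][y] - raw[stid][y], 1))
```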

Nick Stokes
December 11, 2009 5:11 am

Here is a comment from a scientist who has been involved in adjustments to the Australian data sets. Here’s what he says about Darwin in particular:

In the specific case of Darwin, while we haven’t done the updated analysis yet, I am expecting to find that the required adjustment between the PO and the airport is greater in the dry season than the wet season, and greater on cool nights than on warm ones. The reason for this is fairly simple – in the dry season Darwin is in more or less permanent southeasterlies, and because the old PO was on the end of a peninsula southeasterlies pass over water (which on dry-season nights is warmer than land) before reaching the site. This is fairly obvious from even a cursory glance at the data – the record low at the airport is 10.4, at the PO 13.4.
Darwin is quite a difficult record to work with. There were 12 months of overlap between the PO and the airport, but the observing site at the PO deteriorated quite badly in what turned out to be its last few years because of tree growth overhanging the instruments. Fortunately, we recently uncovered some previously undiscovered data from 1935-42 from the old Darwin Airport (at Parap) and should be able to use this to bypass the problematic last few years at the PO.
The post-1941 adjustments (all small) at Darwin Airport relate to a number of site moves within the airport boundary. These days it’s on the opposite side to the terminal, not too far from the Stuart Highway.

Slartibartfast
December 11, 2009 5:15 am

How can I “do what they said” when they merely tell me to estimate the “magnitude of adjustment” based on visual findings of discontinuity?

No, it’s an algorithm. I think they describe what they do, even if they’re not showing you the algorithm in detail. IIRC, they’re taking the first difference of each of the stations’ annual records, and then using “nearby” stations to resolve step changes (a unidirectional spike in the first difference data) in a single station. If all the stations show that step change, then there’s nothing to resolve, or at least no way to resolve it.
I haven’t actually attempted this, but I’ve done some of this kind of data analysis WRT other, unrelated applications.
It’s probably not all that simple, because if for some reason there are holes in the station data record, you’re going to have to toss out some data, so you have to code for those eventualities.
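Something like the sketch below, perhaps. To be clear, this is my reading of the description, not their actual code, and the threshold is my own invention:

```python
import numpy as np

def flag_steps(station, neighbours, threshold=3.0):
    """station: annual series; neighbours: list of series on the same year grid.
    Flags indices where the station's first difference departs from the
    neighbour median by more than `threshold` robust standard deviations."""
    fd = np.diff(station)
    ref = np.median(np.diff(np.asarray(neighbours), axis=1), axis=0)
    resid = fd - ref                       # a station move shows up as one big spike here
    scale = 1.4826 * np.median(np.abs(resid - np.median(resid)))  # robust sigma
    return np.where(np.abs(resid) > threshold * scale)[0] + 1     # candidate step years
```

If every station shows the same step, resid stays flat and nothing gets flagged, which matches the “nothing to resolve” case.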

djellers
December 11, 2009 5:37 am

Interesting that the just published EPA “Endangerment Finding” responds to surface station criticisms by referencing a 2006 study by Peterson which concluded that the adjustments result in no bias to long term trends. This looks like another case of the fox guarding the hen house.
http://www.epa.gov/climatechange/endangerment/downloads/RTC%20Volume%202.pdf
I wonder how the subset of sites used in Peterson’s study was selected.

psi
December 11, 2009 6:00 am

“And CRU? Who knows what they use? We’re still waiting on that one, no data yet …”
It might appear that the only way to get the CRU data is for someone to “steal” it. Wait, isn’t that what Robin Hood did?

Peter Christensen
December 11, 2009 6:17 am

I’m trying to understand JWM’s (21:34:47) explanation.
So, if for instance ‘large’ drops in temperature tend to occur suddenly (within a year), but increases in temperature occur gradually, then only the drops will be cancelled out automatically, because they meet the 95% discontinuity test level?
And if the confidence level were fixed at 99.9% instead, would the global average adjusted temperature then likely match the unadjusted?

JJ
December 11, 2009 6:38 am

… I accidentally hit the ‘Submit’ button while typing last night, and cut myself off. Probably wasn’t a bad thing, given how long that post was 🙂
I’ll finish the truncated thought by stating that UHI effects are expected to often present as continuous trends, so these homogenization schemes that do search-and-destroy on discontinuities will miss such UHI effects.
I’ll round out the general discussion of homogenization schemes by noting that Willis’ analysis of Darwin appears to establish that an incorrect ‘correction’ of something more than a degree C can be applied by the GHCN homogenization scheme. If there are busts in the assumptions of that scheme, particularly in the assumption that there is no bias in the errors, that 0.6F trend in the effect of the adjustments on the global result could be largely error.
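A self-contained toy run of that point (all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1900, 2000)
shared = rng.normal(0, 0.3, years.size)              # regional signal
urban = shared + 0.01 * (years - 1900)               # a full 1.0 C of slow UHI drift
rural = np.array([shared + rng.normal(0, 0.1, years.size) for _ in range(3)])

# same first-difference idea as sketched upthread: look for a lone spike
resid = np.diff(urban) - np.median(np.diff(rural, axis=1), axis=0)
sigma = 1.4826 * np.median(np.abs(resid - np.median(resid)))
print("flagged step years:", years[1:][np.abs(resid) > 3 * sigma])   # typically none
```

A degree of spurious warming sails straight through, because there is no jump to detect.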

JJ
December 11, 2009 6:49 am

Wobble,
You cannot possibly have worked your way through that paper and its antecedents with sufficient attention to detail to make that statement.
“Please. Here’s a quick example.”
Please? That’s not a response. Did you in fact work your way through that paper and its antecedents before posting? I will bet a six-pack not.
The two ‘missing facts’ you identify (the definition of ‘neighboring’ and the t-test significance cutoff for calling a discontinuity) are almost certainly given in the papers cited, with (Peterson and Easterling 1994) being the most likely candidate. I’d wager a pizza on that one.
Please respond promptly. It’s Friday and I’ve got a cheap date planned 🙂

Ryan Stephenson
December 11, 2009 7:06 am

@carrot eater.
I think you are going overboard. From what we have now been told by the Aussies, the temperature monitoring was re-invented three times and then moved so often they might as well have strapped the damn thing to a 4×4 and driven it all over the outback.
To claim that any scientifically useful data could be drawn from such a piece of equipment is itself fraudulent, and begs the question “Why?”.

wobble
December 11, 2009 7:07 am

JWM (21:34:47) :
“”From the article (pp. 2844–2845), they: (1) state the error on their “second minimizing technique” of p=0.01 from a Multivariate Randomized Block Permutation (thus allowing the reader to determine whether they think that is reasonable); (2) they go on to explain their discontinuity adjustments as being based on tests at every annual data point (95% confidence being the cutoff for adjustment, using a nonparametric technique not subject to distributional biases)””
I just want to make sure that everyone understands this properly.
These aren’t confidence levels associated with the actual discontinuity adjustments. These confidence levels are associated with the establishment of the reference series and the IDENTIFICATION of actual discontinuities.
There is no definitive statement regarding the confidence level associated with the actual adjustment applied to the discontinuity.
“”Finally, a purely graphical approach may not be able get to the bottom of their statistical techniques. Unless you can see in dimensions > 3 (I certainly can’t), then projecting multiple station moves, etc., onto a 2D surface may be misleading.””
I think this is a fair point. However, it’s now obvious that discontinuities – as defined by GHCN (in other words – strictly as compared to reference series) – exist in data sets which present a smooth 2D graph and have no associated metadata.
So we should be looking for these counter-intuitive, GHCN-defined discontinuities for which no algorithmic adjustments were applied.

wobble
December 11, 2009 7:14 am

carrot eater (03:50:07) :
“”You don’t know what ‘neighboring’ means? Then do a spatial correlation test for yourself, to see how correlation drops off with distance. You’ll find out.””
OK. So tell me this. If a station 350 miles away correlates well, a station 500 miles away correlates better, a station 800 miles away correlates best, and a station 1,000 miles away correlates perfectly then which station is the most highly correlated neighboring station?

wobble
December 11, 2009 7:21 am

carrot eater (04:31:18) :
“”Yes, but again, the GHCN adjustments should correspond to some physical thing.””
I think many of us have concluded that this isn’t the case. GHCN adjustments (outside the US) are supposedly done algorithmically, based purely on discontinuities with reference data. These adjustments supposedly occur whether or not there was any physical change with the station sensor or any other explanation for the defined discontinuity.

wobble
December 11, 2009 7:27 am

Nick Stokes (05:11:33) :
“”In the specific case of Darwin, while we haven’t done the updated analysis yet, I am expecting to find that the required adjustment between the PO and the airport is greater in the dry season than the wet season, and greater on cool nights than on warm ones. The reason for this is fairly simple – in the dry season Darwin is in more or less permanent southeasterlies, and because the old PO was on the end of a peninsula southeasterlies pass over water (which on dry-season nights is warmer than land) before reaching the site.””
If the southeasterlies are more or less permanent, then shouldn’t there also be pre-1941 adjustments?

wobble
December 11, 2009 7:30 am

Slartibartfast (05:15:40) :
“”No, it’s an algorithm. I think they describe what they do, even if they’re not showing you the algorithm in detail.””
“”I haven’t actually attempted this, but I’ve done some of this kind of data analysis WRT other, unrelated applications.””
OK. Well, I’ll leave the ABM replication to you. I think it is more important to focus on GHCN.

alkali
December 11, 2009 7:50 am

A few thoughts:
1) The purpose of collecting and adjusting this data is not solely to use it for climate change study (although that’s important). There are all kinds of commercial, agricultural, and scientific uses for historical climate data. You might want to have good local and regional data even if the adjustments come out in the wash when you’re using it to look at global temperature.
2) For climate change study purposes, I’m not sure how much it matters whether the adjustments are overall positive, negative, or zero, if the important thing is the historical trend in the data. If _old_ temperature data are adjusted upward, that will tend to reduce any warming trend (or reinforce a cooling trend); see the quick numeric check after this list.
3) Suppose the adjusted data show a greater overall warming trend (or cooling trend). On the one hand, that could be an indicator of bias. On the other hand, maybe that’s just what happens when you take the noise out of data. (Hypothetical: Suppose I ask a bad typist to transcribe a George Carlin monologue. Then I give it to a copy editor. After copy editing, the ratio of dirty words in the text goes up. Does that mean the copy editor’s mind is in the gutter, or does it mean that the corrected text is a better reflection of the actual performance?) I’d want to think about this a bit, because it seems like a hard question.
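Here is the quick numeric check promised in point 2 (invented numbers):

```python
import numpy as np

years = np.arange(1900, 2000)
temps = 14.0 + 0.007 * (years - 1900)       # 0.7 C/century raw trend
adjusted = temps.copy()
adjusted[years < 1940] += 0.3               # old data adjusted upward

for label, t in (("raw", temps), ("adjusted", adjusted)):
    slope = np.polyfit(years, t, 1)[0] * 100
    print(f"{label}: {slope:+.2f} C/century")   # the adjusted trend comes out smaller
```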

wobble
December 11, 2009 7:50 am

JJ (06:49:46) :
“”The two ‘missing facts’ you identify (definition of ‘neigboring’ and the t-Test significance cut off for calling a discontinuity) are almost certainly given in the papers cited, with (Peterson and Easterling 1994) being the most likely candidate.””
I still haven’t found Peterson. I’ll search more thoroughly this weekend. Don’t keep your cheap date waiting.

alkali
December 11, 2009 7:57 am

@carrot eater: “[T]he GHCN adjustments should correspond to some physical thing.”
@wobble: “I think many of us have concluded that this isn’t the case. GHCN adjustments (outside the US) are supposedly algorithmically done based purely on discontinuities with reference data. These adjustments supposedly occur whether or not there was any physical change with the station sensor or any other explanation for the defined discontinuity.”
Ideally, if the algorithm were very cleverly designed, it would make adjustments at the “right” places — that is to say, the adjustments made by the algorithm would correspond to the adjustments an unbiased researcher with access to perfect historical information would make based on changes in station conditions. This algorithm is certainly not that good, but it would not surprise me if in many cases the GHCN algorithmic adjustments do roughly correspond with the historical events that are the basis for the ABM’s metadata-based adjustments.

carrot eater
December 11, 2009 8:05 am

wobble (07:21:33) :
“These adjustments supposedly occur whether or not there was any physical change with the station sensor or any other explanation for the defined discontinuity.”
I think I’ve been clear about this. Yes, the GHCN statistical method detects the adjustments to be made without using the historical metadata (knowledge of specific things that happened at that site).
However, if you were able to obtain a list of site changes (the Aussies would have that info), you might be able to match up the GHCN statistically detected adjustments with actual events on the ground. You might not be able to find a match for all of them, but at least some of them.
Hence, getting that data from the Australian BoM would be of interest, as well as looking at their own adjustments, which apparently use both the statistical methods and knowledge of the historical metadata.

carrot eater
December 11, 2009 8:13 am

alkali (07:57:21) :
You have expressed better than I have the point I am trying to make. If you had access to the historical metadata, you should see some correspondence between the GHCN adjustments and the metadata.
You wouldn’t be able to put a label on every single adjustment as Willis is demanding, but you’d have some idea as to what’s going on, physically.
There won’t be perfect matches either way, as there are limits to both the GHCN statistical method and the record-keeping of the metadata, but it’d be a useful exercise, I think.

tallbloke
December 11, 2009 8:28 am

Nick Stokes (05:11:33) :
Here ia a comment from a scientist who has been involved in adjustments to the Australian data sets. Here’s what he says about Darwin in particular:
The post-1941 adjustments (all small) at Darwin Airport relate to a number of site moves within the airport boundary. These days it’s on the opposite side to the terminal, not too far from the Stuart Highway.

All small?
totalling 2C??
Is he a BOM scientist?

wobble
December 11, 2009 8:32 am

carrot eater and alkali,
I agree with both your points. Non-US GHCN adjustments probably match up often to actual physical events on the ground.
But how many instances does it really take to bias the entire average towards warming?
Even if the Peterson and Easterling methodology is sound, how difficult would it be to program a slight bias into the code which performs the adjustments? And how often was human intervention required to address a discontinuity because the code wasn’t robust enough to handle a minority of the situations?
Having studied this issue more carefully (with the help of many of you), it’s now clear how bad the world’s temperature records are, that the worst data is adjusted using data merely considered to be less bad, and that the whole process is vulnerable to personal biases.

JJ
December 11, 2009 8:38 am

alkalai,
“2) For climate change study purposes, I’m not sure how much it matters whether the adjustments are overall positive, negative, or zero …”
&
“3) Suppose the adjusted data shows a greater overall warming trend (or cooling trend). On the one hand, that could mean an indicator of bias.”
I do have concern over the net effect of the adjustments as reported by NOAA. I would of course not expect an overall effect of zero – no point in an adjustment if it doesn’t change anything. It is not at all clear to me why the effect would be almost universally positive, let alone why it would be increasing in magnitude, leading to a trend line with … that particular shape.
Why would the non climate effects have that distribution?
Hmmmmmm ….

December 11, 2009 8:52 am

Excellent Job. You do some fantastically thorough reporting. This is the second time I have read one of your posts, and was more impressed than the first. I have added this post to my ‘Recent Reading’.
Truth, I’d like another helping…
Nathan R. Jessup
http://www.the-raw-deal.com

carrot eater
December 11, 2009 8:58 am

wobble (08:32:14) :
I think you’re reaching unfounded conclusions, at least on the basis of what you’ve seen so far. I’d suggest spending much more time reading the literature, studying the statistics and looking at several examples before reaching any broad conclusions over how robust the methods are, or how poor the quality control is, or whether there are any systematic biases.
I haven’t done that, so I don’t have any personal opinion on the matter. For all I know, there could be a handful of stations where one of the analyses (GHCN, GISS, CRU, the national met bureau) might send up a weird result that hasn’t been noticed by their quality control methods. I don’t know. Maybe Darwin is such a spot, though Willis hasn’t done the work necessary to actually demonstrate that. But I find it unlikely that they’re all so incompetent at statistics that the overall results are way off.

CC
December 11, 2009 9:04 am

JJ (23:48:48) :
Thanks for answering my questions. These blog comment threads can be a really useful source of information once all the cheerleading has died down!
The method they use seems to be a reasonable approach, but I agree that if one dodgy homogenization is found, there’s no obvious reason why it can be simply written off as unrepresentative without further investigation of the other data sets.

C Pierce
December 11, 2009 9:13 am

Nice detective work.

wobble
December 11, 2009 10:02 am

carrot eater (08:58:46) :
“”I’d suggest spending much more time reading the literature, studying the statistics and looking at several examples before reaching any broad conclusions over how robust the methods are, or how poor the quality control is, or whether there are any systematic biases.””
I didn’t conclude that the quality control was bad or that there are systematic biases.
I stated that the raw data is bad and that the quality control efforts are vulnerable to biases.

December 11, 2009 10:40 am

WAG (20:49:23):
“…the burden of proof is to show why the adjustments should not have been made.”
Fixed.

carrot eater
December 11, 2009 11:16 am

Smokey: If you’re going to explicitly accuse somebody of fraud, as Willis is doing here, then the burden is on him to actually show that.
If he would have phrased it as “here is a station with some large adjustments, can somebody help me understand why these adjustments are made?”, that’s another matter.
But Willis didn’t say that. He went straight to “indisputable evidence” that somebody was fiddling the numbers. If he’s going to say that, the burden is on him to actually demonstrate that. He didn’t. He just showed that some adjustments were made. Well, we know that adjustments are made, and at a few stations here and there, those adjustments can be pretty big. To support his statement, Willis has to show that the adjustments are not only unjustified, but intentionally so. I think he has rather more work to do.

December 11, 2009 11:57 am

Nick Stokes (05:11:33) :
All very interesting, but here are a few facts for you to consider:
(1) Apparently (from Bill’s analysis of GHCN) there were two major adjustments at Darwin – one in 1941 (seems reasonable to me), but only one major shift, in June 1994. That doesn’t look to me like the many minor adjustments your BoM(?) informant suggests.
(2) Oenpelli (NT). BoM ‘loses’ all records prior to 1963 (or is it 1957 – even their own modern metadata is contradictory on that point) going back to 1920-1925, despite apparently having the earlier records in 1961 (and researchers accessing them over the following 1-2 decades).
(3) GISS Oenpelli record (back to 1964). One major shift only (1977), of the order of 0.5±0.3 C (the record finishes in the 1990s, so I can’t be more precise).
(4) Brisbane GHCN record. Major upward shift(s) by comparison with the raw BoM record. We therefore have two Australian state capital sites, both major, longstanding airports, neither likely to be subject to UHI, both subject to (at least cumulatively) major upward shifts in the post-AGW ‘realisation’ era.

December 11, 2009 12:53 pm

Good on Ya.
If you go to the UK Met Office, they have several data sets comprising the annual ‘anomalies’, sample errors and bias data. What hit me was the insertion of a value of -1.000 for ‘missing data’ throughout. It occurs widely in the early years, when negative values of anomaly are common, but much less so in recent years, where the number of readings has increased. These later readings tend to be positive, but the effect of the -1.000 seems to amplify the time series differences, especially as the actual values fall between -0.787 and +0.581. Is there any justification for this? I only ask as a half-smart schmuck who feels that the wool is being pulled over our eyes. Your site is like an oasis in a desert. Many thanks.
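To illustrate my worry with invented numbers (whether the Met Office series are actually computed this way is exactly my question):

```python
import numpy as np

readings = np.array([0.12, -1.000, 0.08, -0.35, -1.000, 0.20])   # two sentinel values
naive = readings.mean()                          # sentinel treated as real anomalies
masked = readings[readings != -1.000].mean()     # sentinel excluded
print(f"naive {naive:+.3f}  vs  masked {masked:+.3f}")
```

If the sentinels are ever averaged in, sparse early years get dragged downward relative to well-sampled recent ones.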

John McManus
December 11, 2009 12:59 pm

As we know, more climate info is openly available than most unpaid people can read. FOI requests for public domain stuff are disingenuous at best and sabotage at worst.
Knowing this, I checked your allegations about the Darwin weather station. It is not the lonely site you claim. Australia shows 88 weather stations, 17 of them within your 500 km radius. Your post is meaningless.
Sorry. You may be smarter than me, but I’m not as stupid as you think.

December 11, 2009 1:12 pm

carrot eater (11:16:49),
You are turning the scientific method on its head. Those putting forth a new hypothesis are obligated to fully cooperate with skeptical scientists [the only honest kind of scientist] by providing all of the methods and data that have any bearing on their hypothetical conclusions. Otherwise, the result is an unsupported opinion; a conjecture, as opposed to a legitimate hypothesis.
As we can see, the purveyors of the CO2=CAGW hypothesis continue to stonewall those requesting cooperation. See Steve Short’s example above.
To bring you up to speed on the difference between a scientific Law, Theory, Hypothesis and Conjecture, see here: click.
For the umpteenth time: the burden of the AGW hypothesis is on those promoting it — not on those questioning it.
Why does the alarmist crowd run like scared girls from a garter snake when they are asked for their raw and massaged data and their methodologies? Here’s Richard Feynman on peer review and the scientific method:

“It’s a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty – a kind of leaning over backwards. For example, if you’re doing an experiment, you should report everything that you think might make it invalid – not only what you think is right about it but other causes that could possibly explain your results; and things you thought of that you’ve eliminated by some other experiment; and how they worked – to make sure the other fellow can tell they have been eliminated.
“Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can – if you know anything at all wrong, or possibly wrong – to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it.
“In summary, the idea is to try to give all of the information to help others to judge the value of your contribution; not just the information that leads to judgment in one particular direction or another.” [emphasis added]

Until the relatively small clique of stonewalling CRU scientists and their U.S. and UN counterparts start following the scientific method, they are only making unfalsifiable, untestable conjectures. Deliberately hiding their data and methods from those who pay for their work product doesn’t pass the smell test. And neither do their flimsy retroactive excuses.

wobble
December 11, 2009 1:33 pm

Has anyone already tried summing all the GHCN adjustments to see if the result is close to zero?
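Something like this, leaning on the read_v2() parser sketched upthread (with all its fixed-width assumptions), would be a start:

```python
import numpy as np

def trend(series):
    """Least-squares slope of a {year: temp} dict, in C/decade."""
    yrs = np.array(sorted(series))
    return np.polyfit(yrs, [series[y] for y in yrs], 1)[0] * 10

raw, adj = read_v2("v2.mean"), read_v2("v2.mean_adj")
deltas = [trend(adj[s]) - trend(raw[s])
          for s in set(raw) & set(adj)
          if len(raw[s]) > 20 and len(adj[s]) > 20]   # skip very short records
print(f"n={len(deltas)}  mean={np.mean(deltas):+.4f}  "
      f"median={np.median(deltas):+.4f} C/decade")
```

A mean near zero would say the adjustments cancel; a positive mean would say they warm the record on net.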

JJ
December 11, 2009 1:44 pm

Given the nearly universal positive values for GHCN (US) final-minus-raw, and the strong, increasing, positive trend (hockey stick, anyone?), that hardly seems possible.
http://www.ncdc.noaa.gov/img/climate/research/ushcn/ts.ushcn_anom25_diffs_urb-raw_pg.gif
I would like to see a similar graph for the non US stations, and for the globe.
Anybody know how to find those?

carrot eater
December 11, 2009 1:59 pm

Smokey: You say about the same thing in every thread. Yes, if somebody puts forth a hypothesis, it won’t be accepted until some convincing and consistent evidence is given for it. Clearly, you don’t think this has happened yet, while many people think it has happened.
But that order of events is entirely unrelated to what I’m talking about here.
Here, Willis has quite explicitly accused somebody of fraud. This is entirely different from Willis saying somebody’s conclusion isn’t well supported, or somebody’s analysis is flawed, or anything like that. No, he is explicitly saying that somebody intentionally fudged some numbers. That is not a claim to be made lightly, and it is absolutely a claim you do not make unless you can provide evidence to support it. If you make this claim, the burden is on you to actually back up your accusation with something. The credibility of the accuser is on the line if he does not support the claim. Willis has not done that. All that he’s shown is that there is a single station with a sizable homogenisation adjustment. That’s all he’s done. But it’s already known that at some individual stations, some large adjustments are made. That shouldn’t have been surprising to anybody.

Larry Scalf
December 11, 2009 2:20 pm

With all of the people from Australia commenting, I think we should put the politicians on the barbie. Nice work, Willis and Anthony. More evidence that something in temperature measurement is amiss.

carrot eater
December 11, 2009 2:26 pm

JJ: I think I found what you want.
There is such a comparison for max/min temp data, if not the mean, in Peterson and Easterling, “The effect of artificial discontinuities on recent trends in minimum and maximum temperatures”, Atmospheric Research 37 (1995) 19-26.
For the entire Northern Hemisphere, for max temps, the raw data give a trend of +0.73 C/century; the adjusted data give +0.92 C/century. So there is some difference in the max temps. But for the temperature minima, raw and adjusted are essentially the same, at +2 C/century.
So the temperature minimum data are unaffected by homogenisation; the maximum data are somewhat affected. This makes some sense to me – adding or removing a Stevenson screen won’t affect the nighttime measurement.
They go on to show that for smaller regions, you might see bigger differences between raw and adjusted, and they mention that individual stations can see dramatic shifts on adjustment. They toss a station out if the adjustment is greater than 3 C. So that’s a statistical property of the data: taken as a whole, the adjustments make little difference. But as you take smaller subsets of data, you might find some bigger effects.
For what it is worth, I will quote the following from the paper. If the quote is too long to be fair use, the moderator can remove it.
“However, non-random changes in location (e.g. movements from urban to rural airport locations), or in instrument types (e.g. liquid in glass thermometers to thermistors) may cause biases to be consistently in the same direction (e.g. all warm or all cool) at many or even all stations in a network.”
So this is perhaps a reason why the adjustments are not random in sign within a given subset of the data (like the US).

CO-Two Guy
December 11, 2009 2:55 pm

This is a long and roundabout way of saying “We don’t have the facts.” To ‘suppose’ that the central premise may still be true is not how science works. There are facts; then you know it’s true. That’s the only way it works. In the case of temperature records, and their bearing on climate change, we don’t have any facts. Furthermore, if this data constitutes a significant portion of the climate reconstruction FOR AN ENTIRE CONTINENT, then you can boldly say that the whole CRU record is a farce.
Like atheists who pray on their deathbed, anyone presenting this data and then leaving the door open to the idea that the central premise may still have merit is doing nothing but openly undermining their own credibility. Is there a scientist left in the world besides me who is still willing to stand behind facts and call a spade a spade? If someone can present some hard and verifiable FACTS to PROVE the Earth is warming at all, I’ll be their most vocal spokesman. But playing wishy-washy with statistics and saying “well, it’s not totally true or untrue” is not science, nor enlightening, nor does it even require one shred of scientific intellect.
We simply don’t have the data or knowledge at this point to determine the answer either way. Even the raw temperature data cannot be verified to be calibrated or validated across stations in addition to having significant gaps. That is the one and only FACT you can extract from the entire global circus around manmade global warming. I defy anyone to present a fact-based, verifiable argument to the contrary.
I thought you people were on the side of facts.

slothrop
December 11, 2009 3:04 pm

ouch, brutal takedown of your article in the economist…
http://www.economist.com/blogs/democracyinamerica/2009/12/trust_scientists

JJ
December 11, 2009 3:53 pm

From the department of “I really hate to say I told you so, but …”

wobble
December 11, 2009 4:14 pm

JJ, the economist blogger would have written the same post with or without the accusations.
Let them keep hiding behind the algorithm – the almighty algorithm.

wobble
December 11, 2009 4:25 pm

carrot eater (14:26:43) :
“”For the entire Northern Hemisphere, for max temps, the raw data give a trend of +0.73 C/century; the adjusted data give +0.92 C/century. So there is some difference in the max temps. But for the temperature minima, raw and adjusted are essentially the same, at +2 C/century.””
What about the SH and avg temp? Do you know why the paper provided this without the entire world avg?

Roger Knights
December 11, 2009 4:27 pm

John McManus (12:59:32) :
“I checked your allegations about the Darwin weather station. It is not the lonely site you claim. Australia shows 88 weather stations, 17 of them within your 500 km radius.”

Now, yes. Then, no. (Read back through the comments to find the spot where this was clarified.)
JJ (15:53:01) :
From the department of “I really hate to say I told you so, but …”

I also watch these “gotcha” claims with dread, and wish that claimants would remember my constant urgings not to make roundhouse swings aiming for a knockout, which leave our side open to a counterpunch. Our credibility is very important at this moment. And we don’t need to do anything now, because time is in the process of taking down the other side’s house of cards. Just let it happen.

Kestral
December 11, 2009 4:44 pm

Just to check my understanding, the debate here is about homogenization (trying to eliminate errors in reporting by looking at nearby stations and seeing what they show, and then adjusting the station if it shows significant differences from its “neighbors”), correct? But an earlier poster notes that this is done only for regional analysis, not for the overall global temp, where it is assumed errors average out. Does anyone have a source for that I can check?

carrot eater
December 11, 2009 4:46 pm

wobble:
Without the accusations, this isn’t a topic with 900 some comments, between the two threads. Without the accusations, this isn’t a topic that people forward to each other.
Without the accusations, this is just the musings of a guy who doesn’t understand why the homogenisation process produces certain results for this one station, but who hasn’t put the work in to see what the process is doing. It’d generate some discussion here about the homogenisation process; people would look up the papers and learn something about how it’s done; people might be impressed or unimpressed by the method; maybe somebody from the ABoM would eventually come up and describe the site changes – but it wouldn’t get the attention, nor would it attract the disdain once people realise what’s shown here, and what isn’t.
Like it or not, there is a credibility issue here. Don’t make accusations of fraud, don’t talk about smoking guns, unless you’ve actually done the work to back it up. People might not pay any attention to you, next time.

December 11, 2009 4:47 pm

What is the point of a “global average”?
In respect of the melting ice cap (for example), the only temperature that matters is the local temperature (specifically, is it >0 deg C?).
Kilimanjaro – the ice cap is, as I understand it, above the freezing altitude – so again, what relevance is a global average?

John
December 11, 2009 4:52 pm

As someone who is trying to make up their own mind about climate change, I was pleased to see this. However, friends have referred me to http://scienceblogs.com/deltoid/2009/12/willis_eschenbach_caught_lying.php
I am confused about the truth. Please can you explain?

carrot eater
December 11, 2009 4:53 pm

wobble (16:25:27) :
Sorry, I didn’t note if they gave a reason for only doing a NH analysis, and not global. Don’t have the paper in front of me anymore, but I can check later if you want.
The mean would be doing something between the max and min, I’d imagine.
I didn’t actually do a search for this, so there might be other papers or online material of this sort. I just came across this one because it was cited by another paper I was reading.
I agree with you that it’s an interesting question.

December 11, 2009 5:17 pm

carrot eater (13:59:58):

Willis has quite explicitly accused somebody of fraud… That is not a claim to be made lightly, and it is absolutely a claim you do not make unless you can provide evidence to support it. If you make this claim, the burden is on you to actually back up your accusation with something. The credibility of the accuser is on the line, if he does not support the claim…

But WAG, who is on your side, says @(08:56:36):

If someone says something that is not true, that is a lie. If you claim that adjusting temperature data is baseless, “blatantly bogus,” or “indisputable evidence that the “homogenized” data has been changed to fit someone’s preconceptions about whether the earth is warming,” that is factually incorrect, [is] therefore a lie.

I replied that WAG’s…

“…lack of understanding and ethics by imputing motive to what could just as well be a mistake [or not], or simply a difference of opinion. If WAG can prove a deliberately dishonest motive for anything Willis said, then I will concede that Willis was lying. But given the fact that Willis repeatedly concedes whenever someone points out an error in his analysis, it is apparent that unlike WAG, he is interested in finding the truth.
That makes clear that both WAG and Lambert are deliberately making their reprehensible ad hominem attacks. There is a reason that alarmists like WAG and Lambert make ad hominem accusations against those they disagree with: because they lack the facts to support their arguments. Their name calling doesn’t make them right, only despicable.”

So you see, what you’re trying to accuse Willis of is routinely done in spades by the alarmist crowd. They do it, of course, because they lack credible facts to support their assertions.
But enough of this. You’re trying to make a case and I don’t agree with it. Unless, of course, you want to get it on with WAG and Lambert for doing what you’re trying to accuse Willis of.
So let’s cut to the chase: government entities like GISS, for example, fudge the numbers. And they stonewall requests to disclose how they did it. Since we’re talking about the climate, and not national defense secrets, I think they’re refusing to open the books simply because they are being dishonest, and they know it.
For instance, Mike McMillan produced a page of graphs showing the shenanigans that GISS used to show warming by diddling the figures. The original raw data, going back to 1900 and before, was taken directly from surface stations and hand-written onto B-91 forms, signed, and dated. Here’s an example: click.
But GISS has changed those raw numbers — and they refuse to explain exactly how. It’s a secret, see? It’s none of the public’s business.
Here’s McMillan’s chart page: click
Notice that those are some really, really BIG “adjustments” of readings that were taken directly from mercury thermometers [which are even today more accurate than most thermocouple and even RTD based thermometers].
Getting all red faced and arm-waving over an interpretation of motives is just a distraction from the rampant corruption endemic to GISS and the rest of the mainstream climate scientists, who have become rich and famous by using fudged scare tactics. The deliberate alteration of the record by GISS exposes what they’re up to. And hiding their methodology is exactly what people would do who want to artificially show that the planet is warming far beyond what it really is. That also is a claim not to be made lightly, and I am making it — and providing evidence to support it.

bill
December 11, 2009 5:38 pm

Willis, you seem to have ignored my analysis, so I will repost it. Please comment.
bill (17:11:19) :
Posted on the stick thread:
Willis: Looking at the unadjusted plots leads me to suspect that there are 2 major changes in measurement methods/location.
These occur in January 1941 and June 1994. The 1941 shift is well known (the PO to airport move); I can find no explanation for the 1994 shift.
These plots show the 2 periods each giving a shift of 0.8C
http://img33.imageshack.us/img33/2505/darwincorrectionpoints.png
The red line shows the effect of a suggested correction
This plot compares the GHCN corrected curve (green) to that suggested by me (red).
The difference between the 2 is approx 1 C, compared to the 2.5 you quote as the “cheat”.
http://img37.imageshack.us/img37/4617/ghcnsuggestedcorrection.png
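The correction I am suggesting amounts to something like this sketch (breakpoint years as above; the 10-year window is an arbitrary choice of mine):

```python
import numpy as np

def step_size(years, temps, break_year, window=10):
    """Mean level change across break_year, using `window` years each side."""
    before = temps[(years >= break_year - window) & (years < break_year)]
    after = temps[(years >= break_year) & (years < break_year + window)]
    return after.mean() - before.mean()

def correct(years, temps, break_years):
    """Splice each older segment onto the newer one across every breakpoint."""
    out = temps.astype(float)
    for by in break_years:
        out[years < by] += step_size(years, temps, by)
    return out

# usage, with annual series as numpy arrays:
# corrected = correct(years, darwin_raw, [1941, 1994])
```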

carrot eater
December 11, 2009 5:40 pm

Smokey: I disagree with Tim Lambert on that, 100%. I don’t think Willis is lying; I do think he’s throwing around wild and baseless accusations. Lambert is given to using stronger attacking language than I would, as is Joe Romm. My respect for both is diminished because of it. Neither is an actual climate scientist; perhaps there’s a trend there?
But an accusation of scientific fraud is a much more serious charge, and that’s what Willis has levied here. I repeat, if you want to maintain your credibility, you don’t say such things lightly.
As for the rest of your missive, I haven’t a clue who Mike McMillan is, but right off the top, I’ve got my doubts of your claims. GISS doesn’t maintain raw data; they get the raw data from NOAA, in the form of GHCN/USHCN. So it isn’t even possible for GISS to secretly mess with the raw data. Just because somebody somewhere claims a conspiracy doesn’t mean you should just take their claims at face value.
Funny that you bring up GISS in this respect. All their code is freely available. You want to see a code that does homogenisation adjustments? You’ll probably find one on the GISS site.

Derek D
December 11, 2009 6:57 pm

john (16:52), I’ll explain. All statistics are junk science. They are laden with human assumptions and never represent facts. Statistics are not needed to calculate the speed of an object in motion, determine whether a building will stand, or tell you what you see in a microscope. That is science: observation, measurement, mechanisms and reproducibility. What you’re seeing is the opposite, and that is precisely why you question it.
What constitutes “the data” here is a choppy dataset from a hodge podge of weather stations, arranged in disproportionate number around the coasts of a continent the size of the US. A dozen different people have come along and applied their own unique set of assumptions to massage the data in a manner that they are sure is the ‘right’ way. And be they alarmists or skeptics each will claim the high ground, and a scientific ‘truth’ thus hinges on who can massage better or scream louder. Many have already pointed out that Eschenbach’s assertions about the proximity of other weather stations are embarrassingly and verifiably wrong, but the whole exercise ceased to be scientific long before that.
The earth has major geologic events every 50-100 million years. It has a 100,000-year orbital precession in which both the radius from the sun and the axial tilt go from a maximum to a minimum. This causes ice ages and interglacials on a 20,000/80,000-year cycle. Within this time there are 800-1000 year warming and cooling cycles where temperature and CO2 levels rise and fall in oscillating intervals. The deep Atlantic ocean currents oscillate in multidecadal cycles, while the Pacific oscillates on decadal cycles. Typical sunspot cycles run ~11 years, El Ninos and La Ninas happen in ~4-year cycles, and cloud formation changes randomly by the minute. This is in addition to the fact that the sun, our main energy input, goes through internal energy cycles that we cannot fully predict. To think that 100 years of sketchy temperature station data is going to lead to an accurate composite prediction of future climate and temperatures within fractions of a degree is pure foolishness. The complex and differential forces acting on the earth on different timescales deserve more than just an extrapolated straight line. It’s an insult to our intelligence no matter what the conclusion.
Much ado about nothing.

Dr A Burns
December 11, 2009 7:28 pm

This paper is interesting “A historical Annual Temperature Dataset for Australia”
http://www.bom.gov.au/amm/docs/1996/torok.pdf
It includes a detailed description of “adjustments” made to one example of a station, at Mildura. How on earth can there be any confidence in temperature records with this sort of thing taking place, and with who knows what sort of corrections being added? “Move to airport” is interesting in that it scores a negative adjustment!
year adjustment reason
<1989 -0.6 move to higher ground
<1946 -0.9 move to airport
<1939 +0.4 new screen
<1930 +0.3 move from park
1943 +1.0 pile of dirt near screen
1903 +1.5 temporary site
1902 -1.0 problems with shelter
1901 -0.5 problems with shelter
1900 -0.5 problems with shelter
1892 +1.0 temporary site
1890 -1.0 detect

Phillip Somerville
December 11, 2009 7:28 pm

It’s Darwinian data selection… survival of the warmest?

Nick Stokes
December 11, 2009 8:28 pm

Dr A Burns (19:28:34) :
You’re not telling the full story about that Torok paper. Firstly, they are not describing official BoM data – these are adjustments they did for their paper. It’s possible the BoM followed it later, but you haven’t shown that.
Second, you did not explain the notation on that list. All but the top four adjustments were for one year only, and would have had very little effect on the trend. And those four were two up, two down.
You focussed on the move to airport. Mildura Airport in 1946 would have been just a field somewhere, with maybe a few light planes and maybe a weekly DC3. There would have been very little paving – possibly not even the runway.

Ripper
December 11, 2009 9:00 pm

Nick, here are all the BoM adjustments.
The “0” indicates all years backwards.
The “1” is one year only.
1021 are the minimums.
1001 are the maximums.
14015 1021 1991 0 -0.3 -0.3 dm
14015 1021 1987 0 -0.3 -0.6 dm*
14015 1021 1964 0 -0.6 -1.2 orm*
14015 1021 1942 0 -1.0 -2.2 oda
14015 1021 1894 0 +0.3 -1.9 fds
14015 1001 1982 0 -0.5 -0.5 or
14015 1001 1967 0 +0.5 +0.0 or
14015 1001 1942 0 -0.6 -0.6 da
14015 1001 1941 1 +0.9 +0.3 rp
14015 1001 1940 1 +0.9 +0.3 rp
14015 1001 1939 1 +0.9 +0.3 rp
14015 1001 1938 1 +0.9 +0.3 rp
14015 1001 1937 1 +0.9 +0.3 rp
14015 1001 1907 0 -0.3 -0.9 rd
14015 1001 1894 0 -1.0 -1.9 rds
ftp://ftp2.bom.gov.au/anon/home/bmrc/perm/climate/temperature/annual/

wobble
December 11, 2009 10:20 pm

carrot eater (16:46:53) :
“”Without the accusations, this isn’t a topic with 900 some comments, between the two threads. Without the accusations, this isn’t a topic that people forward to each other.””
Your preachings are getting quite annoying. I’ve already acknowledged many times that Willis shouldn’t have included the accusations in his post.
My last point was that the Economist blogger would have written his post even without the accusations (if it had been posted by skeptic bloggers – and I think it would have). You’re wrong if you disagree.

gg
December 12, 2009 12:09 am

I have calculated the bias of adjustment for the *entire* CRU dataset. You can find the result here. In short: there is no bias and no smoking gun.

VG
December 12, 2009 1:30 am

Willis, are you prepared to deconstruct this?
http://www.economist.com/blogs/democracyinamerica/2009/12/trust_scientists it would be helpful (apologies if you have already)

Nick Stokes
December 12, 2009 3:38 am

Ripper (21:00:43) :
Thanks, that’s really helpful. Just noting that these are for Darwin, not Mildura. I’ll note too that, while it looks like a lot, generally each adjustment is entered twice, once for min, once for max. There are thus five ongoing adjustment years. I’ve added the code expansion to these adjustments to the min:
14015 1021 1991 0 -0.3 -0.3 dm detect move
14015 1021 1987 0 -0.3 -0.6 dm* detect move documentation unclear
14015 1021 1964 0 -0.6 -1.2 orm* objective test time change move documentation unclear
14015 1021 1942 0 -1.0 -2.2 oda objective test detect composite move
14015 1021 1894 0 +0.3 -1.9 fds median detect stevenson screen supplied
Here are the max adjustments, eliminating isolated years. It’s odd that in three cases the years are different from the min changes, and the reasons are different. Note that the nett change back to 1942 was zero.
14015 1001 1982 0 -0.5 -0.5 or
14015 1001 1967 0 +0.5 +0.0 or
14015 1001 1942 0 -0.6 -0.6 da
14015 1001 1907 0 -0.3 -0.9 rd
14015 1001 1894 0 -1.0 -1.9 rds
So responding to tallbloke (08:28:56): yes, Blair was right – the post-1942 adjustments are small.
It would be nice if these changes lined up well with Willis’ Fig 8, but they don’t.

Nick Stokes
December 12, 2009 3:53 am

gg (00:09:11) :
Congratulations. You seem to have done the calculation here that should have been done long ago. Not just one station, but all the stations plotted as a distribution of trend change induced by the adjustment. And if I’m reading it right, it’s symmetric. Adjustments are just as likely to be up as down, if you look at the whole set.
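
For the curious, the calculation being discussed can be sketched in a few lines of Python: fit a least-squares trend to each station’s raw series and to its adjusted series, then collect the differences over all stations into a distribution. The file reader is omitted because the GHCN layout is not shown in this thread; the toy data below is synthetic.

import numpy as np

def linear_trend(years, temps):
    # Least-squares slope, converted from C/year to C/decade; gaps (NaNs) are skipped.
    years = np.asarray(years, dtype=float)
    temps = np.asarray(temps, dtype=float)
    keep = ~np.isnan(temps)
    return np.polyfit(years[keep], temps[keep], 1)[0] * 10.0

def trend_change(years, raw, adjusted):
    # The quantity in the distribution: adjusted trend minus raw trend.
    return linear_trend(years, adjusted) - linear_trend(years, raw)

# Toy illustration: a step adjustment applied to the early part of one record.
years = np.arange(1900, 2000)
raw = np.random.default_rng(0).normal(29.0, 0.5, size=years.size)
adjusted = raw.copy()
adjusted[years < 1941] -= 1.0    # cool the early years by a 1.0 C step
print(round(trend_change(years, raw, adjusted), 3), "C/decade added by adjustment")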

Brian D Finch
December 12, 2009 3:54 am

Derek D (18:57:49)
Emperor Joseph: ‘Too many notes, Herr Mozart.’
Mozart: ‘Yes, your Majesty. Which ones would you like me to leave out?’

carrot eater
December 12, 2009 3:59 am

wobble (22:20:46) :
“I’ve already acknowledged many times that Willis shouldn’t have included accusations in his post.”
Yes, you have, and I applaud you for it.
“My last point was that the Economist blogger would have written his post even without the accusations ”
I’m wrong if I disagree. That’s great. It’s a combination of the email release and the title ‘smoking gun’/the accusations. The email thing has put a lot more attention on sites like WUWT; it’s the flavor of the moment. Combine that with a promise of a smoking gun, and this post got attention in much wider circles than it normally would have. With no climategate and no claim of smoking gun, the Economist doesn’t bother with this. I can’t think of the last time anybody at the Economist took the time to specifically look at a WUWT post like this; it just doesn’t happen.

TheSkyIsFalling
December 12, 2009 4:42 am

gg,
A good effort and much appreciated; however, I don’t know that it goes far enough to draw the conclusion you did. Would you not need to look at the time distribution of the adjustments over, say, ten-year intervals, to look for bias in when the adjustments were made? For example, if you had a data set of just one station and made an adjustment in the first year of -10 C and an adjustment in the last year of +10 C (forget the scale of the adjustments – just to illustrate), then this would have a significant impact, as it would bias the early years to cold and the later years to hot and produce a warming trend (or at least reduce an existing cooling trend). Overall, of course, there would be no bias, as the adjustments cancel out. I would be interested in seeing your code and method tweaked a little to look for bias in the early years versus the later years, in, say, ten-year periods.
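
The point is easy to check numerically. A sketch with purely synthetic data: a perfectly flat record plus adjustments whose mean is exactly zero, negative before 1950 and positive after, still acquires a warming trend.

import numpy as np

years = np.arange(1900, 2000, dtype=float)
flat = np.full(years.size, 25.0)                   # a perfectly flat raw record

adjustment = np.where(years < 1950, -0.5, +0.5)    # mean is exactly zero
adjusted = flat + adjustment

print("mean adjustment: %.2f C" % adjustment.mean())    # 0.00
slope = np.polyfit(years, adjusted, 1)[0] * 100.0
print("induced trend: %.2f C/century" % slope)          # roughly +1.5

This is why a distribution of adjustment-induced trend changes is a more telling diagnostic than a distribution of the adjustments themselves.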

S Patterson
December 12, 2009 5:18 am

It always amazes me how the “skeptics” like Eschenbach will allow for the possibility of warming or the possibility that data sets or their hockey stick homogenization might be correct. Don’t they realize that real climate “scientists” do not tolerate dissent? Clearly Mr. Eschenbach has not been formally trained in the new scientific method. ; )

bill
December 12, 2009 5:36 am

Willis, apologies if you have urgent problems, but comments would be appreciated on the two step functions in the data I noted here:
bill (17:38:01) : 11/12
bill (17:11:19)
These are only steps that produce a visible discontinuity in the data; any slow changes or small steps would not have been seen and so have not been included (obviously!).
Also, comments on
http://www.gilestro.tk/2009/lots-of-smoke-hardly-any-gun-do-climatologists-falsify-data/ would seem to be necessary, otherwise your whole premise of fraud is debunked.
REPLY: Willis is in the south Pacific right now with his job, his Internet connections are spotty, repeating your demands for personal attention to your issues won’t help – Anthony

B. Williams
December 12, 2009 6:12 am

whoever turns out to be right, the whole global warming debate is just a distraction from the fact that we’re going to run out of fossil fuel anyway, so why not find something better for that reason alone, and let the globe do what it will?

gg
December 12, 2009 6:50 am

@TheSkyIsFalling
I don’t think there is any need to do what you propose. By dividing the dataset into decades, you may be able to find whether the adjustment was biased in time, but it still would not have an effect on the final result, because as long as the distribution is normal and peaks at zero there is no way you can change the overall output.
In fact, if I were to do what you suggest, the only informative thing I would expect to see is that most of the zero adjustments are concentrated in the most recent period, after we gradually switched to digital readings. I suppose readings in earlier times were much more prone to errors, and that must be when most of the adjustments needed to be made.

carrot eater
December 12, 2009 7:16 am

TheSkyIsFalling (04:42:27) : and gg:
If I understand what gg did, there’s no need to do what you suggest. He didn’t give a distribution of adjustments, but a distribution of trend changes. So if there were an adjustment that lowered the temp in early years and an adjustment that raised the temp in later years, that would *not* cancel out in his analysis – it would show up as having an increased trend after adjustment, just as you want it.
This analysis on the CRU adjustments seems to be consistent with the analysis of GHCN adjustments I cited above – that if you look at a big enough set of stations, the overall change due to adjustments is minor. If you look at individual stations, you’ll be able to find a few with huge adjustments. But this was already known, so it continues to appear that Willis has not found anything particularly noteworthy here.
gg: I’ll again note the Peterson article I cited which stated that adjustments in a given area and time won’t necessarily be random. If everybody in a given country switches to a Stevenson screen in a given period, or if everybody changes from old thermometers to thermocouples, then you’ll have non-random adjustments.

Bruce Cobb
December 12, 2009 7:45 am

B. Williams (06:12:16) :
whoever turns out to be right, the whole global warming debate is just a distraction from the fact that we’re going to run out of fossil fuel anyway, so why not find something better for that reason alone, and let the globe do what it will?
That is what’s called a red herring, a favorite ploy of Alarmists. But, yes, if you want, it is the Alarmists – you know, the ones making all the claims about how we’re destroying the planet with our evil CO2 – who are distracting mankind from the real problems or challenges. Finding new, better (and cost-effective) ways of providing energy is just one of those challenges. Forcing man to switch to other, far more expensive and less reliable forms of energy at this point, based on nothing but baseless fears, is just plain stupid, and will only set mankind back. And that, in a nutshell, is what the debate is all about.

December 12, 2009 8:36 am

JUNK SCIENCE’s self-chosen disability

Deterministic systems, ideological symbols of abdication by man from his natural role as earth’s Choicemaker, inevitably degenerate into collectivism; the negation of singularity, they become a conglomerate plural-based system of measuring human value. Blunting an awareness of diversity, blurring alternatives, and limiting the selective creative process, they are self-relegated to a passive and circular regression.

Tampering with man’s selective nature endangers his survival, for it would render him impotent and obsolete by denying the tools of variety, individuality, perception, criteria, selectivity, and progress. Coercive attempts produce revulsion, for such acts are contrary to an indeterminate nature and nature’s indeterminate offspring, man the Choicemaker.

Until the oppressors discover that wisdom only just begins with a respectful acknowledgment of The Creator, The Creation, and The Choicemaker, they will be ever learning but never coming to a knowledge of the truth. The rejection of Creator-initiated standards relegates the mind of man to its own primitive, empirical, and delimited devices. It is thus that the human intellect cannot ascend and function at any level higher than the criteria by which it perceives and measures values. Additionally, such rejection of transcendent criteria self-denies man the vision and foresight essential to decision-making for survival and progression. He is left, instead, with the redundant wreckage of expensive hindsight, including human institutions characterized by averages, mediocrity, and regression.

Humanism, mired in the circular and mundane egocentric predicament, is ill-equipped to produce transcendent criteria. Evidenced by those who do not perceive superiority and thus find themselves beset by the shifting winds of the carnal-ego; i.e., moods, feelings, desires, appetites, etc., the mind becomes subordinate: a mere device for excuse-making and rationalizing self-justification.

The carnal-ego rejects criteria and self-discipline, for such instruments are tools of the mind and the attitude. The appetites of the flesh have no need of standards, for at the point of contention standards are perceived as alien, restrictive, and inhibiting. Yet, the very survival of our physical nature itself depends upon a maintained sovereignty of the mind and of the spirit.

It remained, therefore, to the initiative of a personal and living Creator to traverse the human horizon and fill the vast void of human ignorance with an intelligent and definitive faith. Man is thus afforded the prime tool of the intellect – a Transcendent Standard by which he may measure values in experience, anticipate results, and make enlightened and visionary choices. Only the unique and superior God-man Person can deservedly displace the ego-person from his predicament and free the individual to measure values and choose in a more excellent way. That sublime Person was indicated in the words of the prophet Amos, “…said the Lord, Behold, I will set a plumbline in the midst of my people Israel.” Y’shua Mashiyach Jesus said, “If I be lifted up I will draw all men unto myself.”

As long as some choose to abdicate their personal reality and submit to the delusions of humanism, determinism, and collectivism, just so long will they be subject and reacting only, to be tossed by every impulse emanating from others. Those who abdicate such reality may, in perfect justice, find themselves weighed in the balances of their own choosing.

“No one is smarter than their criteria.”
Jim Baxter
semper fidelis
– from “2010 AD: The Season of Generation-Choicemaker”

DEDICATION
Sir Isaac Newton
The greatest scientist in human history, a Bible-believing Christian, an authority on the Bible’s Book of Daniel, committed to individual value and individual liberty.
Daniel 9:25-26 Habakkuk 2:2-3 selah
“What is man…?” Earth’s Choicemaker Psalm 25:12

wobble
December 12, 2009 9:29 am

gg (00:09:11) :
“”I have calculated the bias of adjustment for the *entire* CRU dataset. You find the result here. In short: there is no bias and no smoking gun.””
That’s great work, but many of us have now realized that we need to look for a pivoting of the biases – not just the overall bias.
For example, it’s possible to adjust pre-1960 data down and post-1960 data up by the same amount. Doing so shows a warming trend yet yields a zero bias.

wobble
December 12, 2009 9:31 am

carrot eater (03:59:51) :
OK, good point. The smoking gun promise was also counter-productive.

wobble
December 12, 2009 9:34 am

B. Williams (06:12:16) :
“”we’re going to run out of fossil fuel anyway, so why not find something better for that reason alone””
Concur that we need to start shifting our sources of energy. However, let’s not be constrained by assuming that CO2 is a pollutant.

JohnV
December 12, 2009 12:25 pm

wobble:
That’s a good point about the “pivoting of the biases”. You should read Giorgio’s study — he looked at the impact of the biases on the *trend*. To use your words, he looked at the “pivoting of the biases”.

December 12, 2009 12:43 pm

gg/carrot eater,
Thanks, that seems clear to me now, and I agree with the conclusion.

MikeF
December 12, 2009 12:45 pm

gg:
I have calculated the bias of adjustment for the *entire* CRU dataset. You find the result here. In short: there is no bias and no smoking gun.
Nick Stokes (03:53:25) :
gg (00:09:11) :
Congratulations. You seem to have done the calculation here that should have been done long ago. Not just one station, but all the stations plotted as a distribution of trend change induced by the adjustment. And if I’m reading it right, it’s symmetric. Adjustments are just as likely to be up as down, if you look at the whole set.

I am an engineer. In my line of work, if you have a data set that behaves strangely you need to explain very carefully why, and prove that this behavior is correct.
This is what I see as the problem that Willis has found in your dataset:

Here is good-quality temperature data from Darwin, Australia that shows no warming trend. In fact, there are lots of good stations in Australia that show no warming trend at all.
Despite that, after processing, those stations show a significant warming trend.

Here is what I see as your proof that this is not a problem:

There is nothing wrong with that, because warming adjustments to this data are counterbalanced by symmetrical adjustments in the opposite direction elsewhere.

This is not an explanation at all. You are essentially telling us that there is nothing wrong with the algorithm that introduces bias to known good data, because it also introduces the opposite bias to (presumably) bad data. Trust us. It all comes out good at the end.
What you just did was show that even if your processing assumes that 2=3, it’s OK because somewhere else it assumes that 3=2.
You might have got away with this sort of thing a few months ago. Not anymore, though. What I personally want to see is a processing algorithm that does NOT change good data at all. This will be my first “sanity check” of your output. If it passes, then we can look deeper into the issues.
I hope that enough people see it the same way.
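
The “sanity check” being asked for is straightforward to express in code. A sketch, with an assumed input format (a dict mapping station id to its raw and adjusted series) rather than any official one:

import numpy as np

def max_departure(raw, adjusted):
    # Largest absolute difference between the raw and adjusted series.
    raw = np.asarray(raw, dtype=float)
    adjusted = np.asarray(adjusted, dtype=float)
    keep = ~(np.isnan(raw) | np.isnan(adjusted))
    return float(np.max(np.abs(adjusted[keep] - raw[keep])))

def stations_touched(stations, tolerance=0.05):
    # Return the ids of stations whose data the algorithm altered at all.
    return [sid for sid, (raw, adj) in stations.items()
            if max_departure(raw, adj) > tolerance]

stations = {
    "14015": ([29.1, 29.3, 29.0], [27.1, 27.3, 29.0]),   # early years altered
    "94120": ([18.2, 18.4, 18.1], [18.2, 18.4, 18.1]),   # untouched
}
print(stations_touched(stations))   # ['14015']

Whether a touched station was “good” to begin with is, of course, the contested judgement; the check only shows which data the algorithm changed.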

Todd
December 12, 2009 3:07 pm

The homogenization method is incorrectly and incompletely explained by the author. I am still waiting for someone to come forward with accurate descriptions of all the steps. This is a science blog? ***sigh***
Blind lead the blind
There’s a teacher in a school room
Somewhere on the edge of town
Telling innocent little children what we used to be
They look and listen without a question
They see the pictures passed around
Making facts out of a theory and they all believe
As the lost lead the way
Another heart is led astray
These are the days when the blind lead the blind
And there’s one narrow way out of here
So pray that the light of the world will keep your eyes clear
‘Cause it’s a dangerous place here where the blind lead the blind

Geoff Sherrington
December 12, 2009 3:40 pm

For those who might be interested, I’m about half way through a compilation of all available data sources on Darwin temperatures. It is impartial, non-judgemental and qualified as needed. If you would like to see how the figures speak for themselves, please feel free to email me. I am deeply appreciative of the work of colleagues before me like Willis Eschenbach, Warwick Hughes, David Stockwell, Steve McIntyre, to name a few, and of the hard yards put in by the Bureau of Meteorology in Australia. It just happens that I have visited Darwin many, many times since 1960 and have some local knowledge. If you have any “early” data from Darwin, collected (say) pre-1993, I’d be delighted to receive a file. sherro1 at optusnet dot com dot au.

Manny
December 12, 2009 4:14 pm

Willis Eschenbach caught lying about temperature trends
http://scienceblogs.com/deltoid/2009/12/willis_eschenbach_caught_lying.php
I wonder if Mr. Eschenbach would mind responding to this charge. It appears to my layman’s eye that the alleged deception perpetrated by Eschenbach was that he did not adjust the raw data at the Darwin Airport station – which seems like a spurious claim of course.
REPLY: see the main page of WUWT Top and Center – Anthony

Graham
December 12, 2009 4:40 pm

From my experience in South Australia, the Darwin case above is not unique.
Our local bureau has been spinning us ever-increasing numbers of “hottest day”, “hottest month”, etc. for years now, when I and others who have experienced past heat waves know these proclamations to be totally untrue.
I have made enquiries to their office seeking the original raw data, particularly from the old location, so that I could do my own number-crunching, but have been repeatedly refused.

Dale Emde, Pacifica,Calif
December 12, 2009 7:50 pm

It’s interesting that with all this brainpower submitted by readers, there is never a reference as to why Gore and his backers are insisting on this lie. My take, which most of you must know, is that the “global elite” – the billionaires that infest the world, intent on increasing their control and power – are really pushing now to cement their control by bringing the US to its knees as quickly as possible: destroy the dollar, destroy the middle class, destroy our borders, and have one big happy family under the control of the UN and all of its inventions. 1984 is fast approaching!

RoHa
December 12, 2009 8:53 pm

Good letter to the Eco. They’ll probably cut it to pieces before they publish it.
Now all this stuff needs to be put together into a new post for the site, rather than allowed to fester in the comments.

Steve Short
December 12, 2009 9:10 pm

Top marks to MikeF (12:45:43) for pointing out that the following logic:
gg:
I have calculated the bias of adjustment for the *entire* CRU dataset. You find the result here. In short: there is no bias and no smoking gun.
Nick Stokes (03:53:25) :
gg (00:09:11) :
Congratulations. You seem to have done the calculation here that should have been done long ago. Not just one station, but all the stations plotted as a distribution of trend change induced by the adjustment. And if I’m reading it right, it’s symmetric. Adjustments are just as likely to be up as down, if you look at the whole set.
is fairly laughable.
Firstly, this condition becomes valid only when testing whether there was a trend FOR THE ENTIRE PLANET.
Even so, there still remains the question of whether all the adjustments (all over the world) were uniformly rigorous and of adequate quality. Evidence is now emerging that the probability of that being so is not very high.
This probabilistic aspect (by definition also contributing to confidence in any perceived trend) would no doubt get left right out of estimations of the statistical degree of confidence in the trend. I can recall no discussion of this issue (as a contributor to confidence) in IPCC supporting reports.
Furthermore, this condition (of symmetry in the set of all adjustments) cannot, by definition, be made to apply to subsets of the whole database. One only needs to ask what happens when certain important subsets of the entire CRU database, e.g. regions, are considered.
Is the test for bias of adjustments applied there as well (as it should rightly be)?
A good example would be some of the statistical work on regional temperature trends which went into the 2008 CSIRO/BOM ‘Drought Exceptional Circumstances Report’. Again, I can recall no discussion of this aspect of the Australian data in that report.
In other words, where are the assessments of level of confidence in the adjustments metadata?

wobble
December 12, 2009 10:28 pm

JohnV (12:25:46) :
“” You should read Giorgio’s study — he looked at the impact of the biases on the *trend*. To use your words, he looked at the “pivoting of the biases”.””
I know that’s what he claims, but I don’t think his analysis was thorough enough to have truly considered “trends” over time.

December 12, 2009 11:08 pm

Still trying to get something.
In 1941, they moved the temperature gauge from a post office to an airport, where it remains today. (T / F)
They did no “adjustment” for this move. (T / F)

DaveK
December 13, 2009 12:03 am

>”But I digress”
You certainly do, and it is a really bad strategy to engage in this kind of silliness. Trying to draw some lesson from the abbreviation of a URL? It’s a bit rich of you to fling accusations around that scientists are somehow cheating, when your “method” consists of attempting to read someone’s mind based on the URL of an article. That’s just divination or augury, crossed with ad-hominem; it’s certainly not anything based on observation or experiment.
Now to the actual substance of your rebuttal:
>”This might make sense if there were any “dramatic change in 1941″. But as I clearly stated in my article, there is no such dramatic change.”
>”LOOK AT THE DATA. There is no big change in January of 1941″
Ok, then what precisely did you mean by presenting a graph featuring this?
http://img152.imageshack.us/img152/5830/hereitiswhatdidyouthink.png
If you don’t understand that this sharp edge is what the guy was talking about, you simply haven’t understood nor rebutted what he was actually talking about, you’ve attacked some other line of argument that he didn’t present. Natty move, rhetorically speaking, but not actually relevant.
>”And you give your article the URL “trust_scientists””
No, he didn’t, and I can hardly believe you’re making such an issue over something so ridiculous. The URL was “given” by the requirements of HTTP: you can’t have a question mark in a URL. If it had been “trust_scientists%3f” how would that have changed your argument? Would you still have had one? You can’t prove facts about the historical truth of temperature records by use of sarcasm, that’s merely a populist appeal for support on emotive rather than rational grounds.
So: you missed the main point of the rebuttal, admitted it was correct and you were wrong about the other parts, and engaged in some childish wordplay and sarcasm. Do you have an actual defence, though? The main point: that you have cherry-picked one single outlying statistical fluctuation from a huge body of data, and present it as if it were the exemplar of all of the data, while simultaneously ignoring the fact that the statistical mechanisms are designed precisely to cope with and smooth out error variances over the entire corpus.
That, sir, is a pointless, meaningless, and in fact deceitful enterprise.

December 13, 2009 12:08 am

Something of interest among the e-mails. Don’t know if it pertains.

From: Phil Jones
To: Kevin Trenberth
Subject: One small thing
Date: Mon Jul 11 13:36:14 2005
Kevin,
In the caption to Fig 3.6.2, can you change 1882-2004 to 1866-2004 and
add a reference to Konnen (with umlaut over the o) et al. (1998). Reference
is in the list. Dennis must have picked up the MSLP file from our web site, that has the early pre-1882 data in. These are fine as from 1869 they are Darwin, with the few missing months (and 1866-68) infilled by regression with Jakarta. This regression is very good (r>0.8). Much better than the infilling of Tahiti, which is said in the text to be less reliable before 1935, which I agree with.
Cheers
Phil

Hey, Jakarta is about the same latitude.

December 13, 2009 4:51 am

But hey, according to Aust. BOM metadata lists right now (December 2009) there simply is no Darwin data pre-January 1885.
So, sometime between July 2005 and December 2009 BOM have discarded 16 years of 19th century Darwin data. Curiouser and curiouser.
Just like, sometime after 1961, BOM ‘discarded’ at least 30-odd years of pre-Jan 1957 Oenpelli data.
Darwin: 12.4 deg. S ; 130.8 deg. E
Jakarta: 6.8 deg. S; 106.8 deg. E
Oh hey, Oenpelli: 12.3 deg. S ; 133.1 deg. E
Anyway, not to worry.
As gg and Nick Stokes have assured us – these adjustments all balance out. It’s all kosher. Honest Injun. Why would they lie to us?
Pssst, wanna buy a used car?

John Perry
December 13, 2009 5:20 am

So Mr Eschenbach is not a scientist, but rather an amateur. No problem with being an amateur; however, you need to make an effort to understand how science works. This is possible, and some people have managed to reach the level of being regarded as a scientist.
How does Mr Eschenbach reply to
http://www.economist.com/blogs/democracyinamerica/2009/12/trust_scientists ?
Mr Eschenbach, if you have any new research, could you please try to publish it first in a reputable journal (such as Nature) and then make it public?
The ground that the denialists are standing on gets smaller, as it is flooded progressively.
It makes one think that the whole denialist fracas has more to do with incompetence than malice.

wesley bruce
December 13, 2009 5:27 am

As many have pointed out, Darwin was bombed heavily in WW2. It should be noted that as a consequence there was a very large upgrade to the airport, which became an airbase. This was reversed in the post-war period; this could have produced a rise after ’42 and a fall after ’46 due to urban heat island effects of the airport/airbase itself. When I was there on an army exercise in the early 90’s there were grassed areas with what looked like old foundations.
Also, WW2 defences included many structures, aircraft and sandbag walls that can both protect the Stevenson screen from wind and afternoon sun; these can change the temperature, but they are not good reasons for permanent changes. Your argument stands.
PS On war: has anyone looked at whether weathermen operating weather stations under occupation keep accurate records, or mess with them hoping to mess up the enemy? I believe some French under German occupation sent one set of weather data to the Germans and one to the resistance. I can’t prove it – it was only one line in a documentary about Free French resistance collecting vital weather data for the British – but it should be asked. The 40’s were quite warm in some places, but could those numbers have been understated or overstated in some cases? The same goes for many colonies in the 50’s and 60’s. Does the GHCN assess this? How would we check that they’re not using this as an excuse to move data up and down? Hopefully I’m wrong, but data from war zones sounds a little doubtful to me.

steve bunn
December 13, 2009 9:08 am

Is it possible to detect a signal of 1 degree per century from data which has had adjustments 7 times greater added to it?
I’m no scientist but my logic says no.
steve
year adjustment reason (Mildura)
<1989 -0.6 move to higher ground
<1946 -0.9 move to airport
<1939 +0.4 new screen
<1930 +0.3 move from park
1943 +1.0 pile of dirt near screen
1903 +1.5 temporary site
1902 -1.0 problems with shelter
1901 -0.5 problems with shelter
1900 -0.5 problems with shelter
1892 +1.0 temporary site
1890 -1.0 detect

steve bunn
December 13, 2009 9:20 am

As the following are rounded to the nearest 0.5 of a degree, I interpret the margin of error as ±0.25 degrees for each adjustment.
Compounded over the adjustments, that makes the data totally useless for detecting global warming at the current consensus rate of 1 degree per century.
1943 +1.0 pile of dirt near screen
1903 +1.5 temporary site
1902 -1.0 problems with shelter
1901 -0.5 problems with shelter
1900 -0.5 problems with shelter
1892 +1.0 temporary site
1890 -1.0 detect
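
For what it’s worth, if the individual adjustment errors are independent, the usual statistical treatment combines them in quadrature (square root of the sum of squares) rather than by straight addition, which gives a smaller, though still substantial, compounded uncertainty. A quick sketch in Python, taking the ±0.25 C half-width above as the assumed per-adjustment error (an assumption from the rounding, not a published BoM figure):

import math

per_adjustment = 0.25    # assumed half-width from rounding to the nearest 0.5 C
n_adjustments = 7

worst_case = n_adjustments * per_adjustment
independent = math.sqrt(n_adjustments) * per_adjustment
print("worst case (errors add):         +/- %.2f C" % worst_case)    # 1.75
print("independent errors (quadrature): +/- %.2f C" % independent)   # ~0.66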

bobrgeologist
December 13, 2009 11:58 am

It is suggestive that the Darwin temperature has suffered the fate of the rest of the world, to conform with the political opinion that the world is warming up from our present glacial one. In their perverted reasoning that is a bad thing, but it is indicative that they are ignorant of past climates and the fossil record of past life extinction events. Climates have seldom been benign for extended periods of time, as evidenced by the fact that 98+% of all species that have ever lived are now extinct. No mass extinctions have ever been attributed to our planet becoming too warm. All this vilification of Greenhouse Gases, of which carbon dioxide is an insignificant member, is the equivalent of “biting the hand that feeds you.” These stupid AGW zealots do not realize we need a warmer world. As long as we have ice in our polar regions we are at risk of slipping back into another major glaciation event. Let’s face reality: we are still in a glacial climate, the Pleistocene. We are in the 5th interglacial period of a glaciation cycle that began only 1.75 million years ago, and we do not know whether we have reached the end of the cycle or not. Weather is a geologic process that man has not the power to control. He can micromanage it for short terms at a tremendous cost, but ineffectively in the longer term. Are you sure now that maintaining our ice-capped poles is worth expending our last dime on?

Roger Knights
December 13, 2009 1:00 pm

“we’re going to run out of fossil fuel anyway, so why not find something better for that reason alone, …?”
Here’s a three-part solution I endorse, spelled out in a book called “Prescription for the Planet: The Painless Remedy for Our Energy & Environmental Crises,” whose details are outlined in the first reader-review, by G. Meyerson:
This book is a must read for people who want to be informed about our worsening energy and ecology crisis. Before I read this book, I was opposed to nuclear power for the usual reasons: weapons proliferation and the waste problem. But also because I had read that in fact nuclear power was not as clean as advertised nor as cost competitive as advertised and was, moreover, not a renewable form of energy, as it depends upon depleting stocks of uranium, which would become an especially acute problem in the event of “a nuclear renaissance.” Before I read this book, I was also of the opinion that growth economies (meaning for now global capitalism) were in the process of becoming unsustainable, that, as a consequence, our global economy would itself unravel due to increasing energy costs and the inability of renewable technologies genuinely and humanely to solve the global transport problem of finding real replacements for the billions of gallons of gasoline consumed by the global economy, and the billions more gallons required to fuel the growth imperative. I was thus attracted to the most egalitarian versions of Richard Heinberg’s power down/relocalization thesis.
Blees’ book has turned many of my assumptions upside down and so anyone who shares these assumptions needs to read this book and come to terms with the implications of Blees’ excellent arguments. To wit: the nuclear power provided by Integral Fast Reactors (IFR) can provide clean, safe and for all practical purposes renewable power for a growing economy provided this power is properly regulated (I’ll return to this issue below). The transportation problems can be solved by burning boron as fuel (a 100% recyclable resource) and the waste problem inevitably caused by exponential growth can be at least partially solved by fully recycling all waste in plasma converters, which themselves can provide both significant power (the heat from these converters can turn a turbine to generate electricity) and important products: non toxic vitrified slag (which Blees notes can be used to refurbish ocean reefs), rock wool (to be used to insulate our houses–it is superior to fiber glass or cellulose) and clean syngas, which can assume the role played by petroleum in the production of products beyond fuel itself. Blees’s discussion of how these three elements of a new energy economy can be introduced and integrated is detailed and convincing. Other forms of renewable energy can play a significant role also, though it is his argument that only IFRs can deal with the awesome scale problems of powering a global economy which would still need to grow. Tom’s critique of biofuels is devastating and in line with the excellent critiques proferred by both the powerdown people and the red greens (John Bellamy Foster, Fred Magdoff); his critique of the “hydrogen economy” is also devastating (similar to critiques by Joseph Romm or David Strahan); his critique of a solar grand plan must be paid heed by solar enthusiasts of various political stripes.
The heart of this book, though, really resides with the plausibility of the IFR. His central argument is that these reactors can solve the principal problems plaguing other forms of nuclear power. It handles the nuclear waste problem by eating it to produce power: The nuclear waste would fire up the IFRs and our stocks of depleted uranium alone would keep the reactors going for a couple hundred years (factoring in substantial economic growth) due to the stunning efficiency of these reactors, an efficiency enabled by the fact that “a fast reactor can burn up virtually all of the uranium in the ore,” not just one percent of the ore as in thermal reactors. This means no uranium mining and milling for hundreds of years.
The plutonium bred by the reactor will be fed back into it to produce more energy and cannot be weaponized due to the different pyroprocessing that occurs in the IFR reactor. In this process, plutonium is not isolated, a prerequisite to its weaponization. The IFR breeders can produce enough nonweaponizable plutonium to start up another IFR in seven years. Moreover, these reactors can be produced quickly (100 per year starting in 2015, with the goal of building 3500 by 2050), according to Blees, with improvements in modular design, which would facilitate standardization, thus bringing down cost and construction lead time.
Importantly, nuclear accidents would be made virtually impossible due to the integration of “passive” safety features in the reactors, which rely on “the inherent physical properties of the reactor’s components to shut it down.” (129)
Blees is no shill for the nuclear industry and is in fact quite hostile to corporate power. He thinks that these IFRs must be both run and regulated by a globally accountable, international and public body of experts. Blees has in mind a global energy democracy in which profit would play minimal if any role. Blees realizes that democratizing energy in this way, including technology sharing, will be fought by vested interests. But he thinks that the severity of the climate crisis will persuade people of the necessity of global public ownership over energy resources. My greatest disagreements with this book focus on the scale of conflict that would emerge around such proposals. Blees’ energy democracy is a great idea, but I doubt the ruling elites would go for it no matter how much sense it makes. Blees is banking on the unique character of the climate crisis to convert a significant sector of our elites to humanity’s cause and not their class interests. Let’s hope he’s right, but I’m less optimistic that this revolution will be as “painless” as Blees suggests.
That said, Blees’s solutions make possible the kind of relatively clean growth I did not think was possible under current global regimes. Still, if such a new energy regime as Blees proposes can solve the climate crisis, this is not to say, in my opinion, that a growth regime is fully compatible with a healthy planet and thus a healthy humanity. There are other resources crucial to us–the world’s soils, forests and oceans come to mind–that a constantly expanding global economy can destroy even if we recycle all the world’s garbage and stop global warming.
Before I read this book, I did not think contemporary global capitalism could sustain itself for long, due to its pathological inequity and its seeming inability to solve the energy and ecological challenge. Blees’ book seems to offer immediate solutions to our energy and ecology problems while breathing new life into some kind of growth economy–whether that economy can rightly be called capitalist given its commitment to energy democracy and democratic planning is a question, perhaps, for Blees’s next book.

Here’s the Amazon link:
http://www.amazon.com/Prescription-Planet-Painless-Remedy-Environmental/dp/1419655825/ref=sr_1_1?ie=UTF8&s=books&qid=1236568501&sr=1-1

Steve Short
December 13, 2009 2:04 pm

This document published by CSIRO in 2008 on Processed Climate Data for Timber Service Life Prediction Modelling has many graphs of interest – in particular mean annual dry AND wet bulb temperature data for 132 Australian sites. Incidentally, it also identifies a significant number of Northern Territory sites where data was supposedly collected back into the early decades of the 20th century but for which records can no longer be accessed via BOM.
It is interesting to note that, at the resolution scale of the graphs of annual dry and wet bulb temperature for all these sites in this document, there are hardly any sites where any upward trend over the 32-year period 1965–1997 can be discerned (at, say, the 1 C level) in BOTH dry and wet bulb temperatures.
The lack of a national trend in wet bulb temperatures over this 30-year-plus late 20th century period is particularly interesting. Perhaps the wet bulb temperature database has largely escaped ‘homogenization’ (;-).
http://www.timber.org.au/resources/ManualNo1-ClimateData.pdf

Nick Stokes
December 13, 2009 2:09 pm

steve bunn (09:20:56)
You’re missing what the notation means. All in your list are adjustments for one year only – no cumulative effect, and very little effect on trend. Only the top four in your first list affected more than one year.

Nick Stokes
December 13, 2009 2:23 pm

Steve Short (21:10:53) :
In other words, where are the assessments of level of confidence in the adjustments metadata?

GHCN does not use metadata. It’s a change detection algorithm. That’s why, for example, it did not put a break exactly at 1941. The calc put it somewhere else.
The point of Giorgio’s analysis is that it generalises what Willis has done. He reported that Darwin’s adjustment added 1.9 C/century to the trend. GG has shown what happened to the other 6735 stations. Some went up, some down. 575 stations went up more than Darwin. That puts it in the upper tail. But in fact the tail contains mostly short records, where just one adjustment changes the trend a lot. Darwin is a real outlier among long data series.

MikeF
December 13, 2009 2:38 pm

Hey, DaveK (00:03:55) ,
You say:

>”This might make sense if there were any “dramatic change in 1941″. But as I clearly stated in my article, there is no such dramatic change.”
>”LOOK AT THE DATA. There is no big change in January of 1941″
Ok, then what precisely did you mean by presenting a graph featuring this?
http://img152.imageshack.us/img152/5830/hereitiswhatdidyouthink.png
If you don’t understand that this sharp edge is what the guy was talking about, you simply haven’t understood nor rebutted what he was actually talking about, you’ve attacked some other line of argument that he didn’t present. Natty move, rhetorically speaking, but not actually relevant.

It is you who have difficulties with comprehension here. The step change in your graph is in the corrected data, as well as in the correction that has been applied to the data. The raw data has no such change. Why would someone apply a step change to data that doesn’t show any need for it? I guess you have to be a “certified climate scientist” ™ or an “official climate scientist helper” ™ to understand that (which one are you?).

Steve Short
December 13, 2009 3:40 pm

Nick Stokes (14:23:11) :
Steve Short (21:10:53) :
In other words, where are the assessments of level of confidence in the adjustments metadata?
GHCN does not use metadata. It’s a change detection algorithm. That’s why, for example, it did not put a break exactly at 1941. The calc put it somewhere else.
There are essentially two types of adjustments as I see it:
(1) the adjustments made by the host country wherein the data originates. This may include a mix of judgement-based adjustments arising from site changes etc.
(2) the change detection algorithmic adjustments done by GHCN etc.
Both of these types of (retrospective) adjustments reduce the probability (below 100%) that inferred trends – global, meridional, regional, etc. – are ‘real’.
My question is:
Where are the studies which have attempted to evaluate the ‘efficacy’ (= level of confidence) of such adjustments (both types) against (say):
* Datasets which were not originally included – being ‘other less conventional sources’ e.g. mines, mission stations, large ranches, etc.
* Datasets which have been discarded for obscure reasons by national agencies. This seems to have been quite common in Australia for example – particularly in the post-1980 period – I’m starting to rack up a whole list now.
* More recent/alternative/improved algorithmic treatments e.g. humidity, pan evaporation…
* Modern satellite and sonde records.
Why can’t the ‘science’ of temperature trends over the last century or so, which we are being asked to implicitly accept, produce its own body of (yes, peer reviewed) QC/QA studies?
Everywhere I look one can easily detect signs of sloppy or dodgy retrospective corrections (in the homogenized datasets of GHCN and CRU), and even, with BOM, whole chunks of now retrospectively missing data (yet referenced previously!!!) which would presumably have been ideal ‘fodder’ for regressions and algorithmic correction schemes.
IPCC should have established a whole section devoted just to the science of data handling and the maintenance of proper standards of core/key data manipulation from the outset. A sort of Quality Control Department.
IMO, given the implications of this madness-of-crowds (?) ‘paradigm’, it’s not good enough, and as a scientist I suspect even you know it, Nick. I’m tired of the weasel-word diversions and apologetics for this flawed process now flooding the blogs.
You haven’t made any comment on the ‘Brisbane problem’. Another outlier?

fhell
December 13, 2009 3:50 pm

Ah, comments are moderated – so that’s why you seem to have so much support.
The paranoia shown here would be funny if it was not so serious.
If you had any balls at all you would allow unmoderated comments.

Street
December 13, 2009 5:19 pm

I’ve done a cursory analysis of the adjustment trends in GHCNv2 using different methods. It generally agrees with gg’s results.
That being said, it’s still possible for the adjustment process to introduce bias. Why? Because both gg and I analyzed the cumulative effect of the adjustments on the mean. However, the global average does not weight all stations equally. They are gridded. Therefore, the most remote stations have a greater effect than stations in densely measured areas. These remote stations are also going to be the most difficult to adjust because of the lack of nearby reference stations. So it is possible that the adjustment process could cause biases in this specific subset that would affect the gridded global average.
Personally, I don’t think a problem will be found there, but I lack the tools to analyze it at that level, so I can’t confirm it. If someone else wants to try, we can move on to more likely sources of error.
My current line of inquiry is to look into the records that were dropped in the adjustment process. By my count, it’s about 28% of records after 1850, but that doesn’t tell the whole story. Between 1988 and 1995, the number of adjusted stations dropped 76% due to a general loss of stations; however, the loss of dropped stations was ‘only’ 52%. Because of that, every year thereafter there are more dropped stations than adjusted ones.
The thing that bugs me is that the dropped stations exhibit different average trends than the adjusted stations (raw or otherwise). Now, I don’t discount that the data from these stations may appear unreliable by someone’s measure, but in the quest for data quality we may be underrepresenting entire regions of the globe, and those instruments may be saying something different than the networks in the richer areas. Honestly, I don’t know. It’s going to take me a while to sort this out.
Does anyone know if there’s ever been a sensitivity analysis regarding which stations are dropped and which are adjusted?
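
For readers unfamiliar with gridding, here is a toy sketch of the weighting effect Street describes: stations are averaged within grid cells first, then the cells are combined with an area weight, so a lone station in a sparse cell can outweigh several clustered ones. The 5-degree cells and cosine-latitude weights are illustrative; the real products differ in detail.

import math
from collections import defaultdict

def grid_average(stations, cell_deg=5.0):
    # stations: list of (lat, lon, anomaly) tuples.
    cells = defaultdict(list)
    for lat, lon, anom in stations:
        key = (math.floor(lat / cell_deg), math.floor(lon / cell_deg))
        cells[key].append((lat, anom))
    num = den = 0.0
    for members in cells.values():
        mean_lat = sum(lat for lat, _ in members) / len(members)
        cell_mean = sum(anom for _, anom in members) / len(members)
        weight = math.cos(math.radians(mean_lat))   # crude area weight
        num += weight * cell_mean
        den += weight
    return num / den

dense = [(40.1, 10.2, 0.2), (40.3, 10.4, 0.3), (40.5, 10.6, 0.1)]   # share one cell
remote = [(-12.4, 130.8, 1.5)]                                      # a cell of its own
print(round(grid_average(dense + remote), 2))   # pulled well above the dense-cell mean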

carrot eater
December 13, 2009 8:13 pm

Geoff Sherrington has found the historical metadata for this site. Turns out, it was online, after all.
I don’t see that he’s posted it here yet, so if I may, it’s in the alladj file here
ftp://ftp2.bom.gov.au/anon/home/bmrc/perm/climate/temperature/annual/
In case anybody is still confused, the GHCN does not use this metadata when it does homogenisation, but the ABoM does its own independent homogenisation, and it does consider the metadata.
So now, not only can we compare the results of the two different homogenisations, but we can see what sorts of actual physical things the GHCN statistical method is trying to sniff out.

Steve Short
December 13, 2009 8:38 pm

Street (17:19:05) :
“My current line of inquiry is to look into the records that were dropped in the adjustment process. By my count, it’s about 28% of records after 1850, but that doesn’t tell the whole story. Between 1988 and 1995, the number of adjusted stations dropped 76% due to a general loss of stations; however, the loss of dropped stations was ‘only’ 52%. Because of that, every year thereafter there are more dropped stations than adjusted ones.
The thing that bugs me is that the dropped stations exhibit different average trends than the adjusted stations (raw or otherwise). Now, I don’t discount that the data from these stations may appear unreliable by someone’s measure, but in the quest for data quality we may be underrepresenting entire regions of the globe, and those instruments may be saying something different than the networks in the richer areas. Honestly, I don’t know. It’s going to take me a while to sort this out.
Does anyone know if there’s ever been a sensitivity analysis regarding which stations are dropped and which are adjusted?”
Exactly. As the IPCC never bothered to put a data Quality Control system in place, how would they (or we) know the probabilistic effects (on confidence in any trend, up or down) of dropping all that data?
Yet, they still have the front to say they know there was 0.6 C rise over the 20th century ‘with a high degree of confidence”.
I also wonder about the rate of ‘data dropping’ after 1998.

Geoff Sherrington
December 13, 2009 11:24 pm

Just for interest, spaghetti graph of Darwin at
http://i260.photobucket.com/albums/ii14/sherro_2008/DARWIN_SPAGHETTI_2.jpg?t=1260775310
I have about 3 more series that might be added. The more spag, the merrier.
Needs to be bigger to appreciate the subtleties.
GISS unadjusted is yellow diamonds. KNMI is adopted from GISS adjusted. Need I say more? Like 3 degrees C of separation?
All from the one original set of obs.

Geoff Sherrington
December 13, 2009 11:33 pm

carrot eater (20:13:35) :
You might add that I have questioned how you know that USA bodies receive raw data from the BOM Australia, as you asserted elsewhere. Until you can prove that, the jury has to stay out.
For Nick Stokes, here we go again. I’ll show you my marbles if you show me yours first, because you stated
“Darwin is a real outlier among long data series.”
You tell me what you base this vague statement on and I’ll post some graphs that differ from your comment. Over to you.

neil0mac
December 13, 2009 11:44 pm

It strikes me as being a bit strange that they use Darwin figures for their stats, and then rave on about the Antarctic Ice Cap melting.
Requires a great leap of imagination?

neil0mac
December 13, 2009 11:49 pm

Steve Short …
Yet, they still have the front to say they know there was 0.6 C rise over the 20th century ‘with a high degree of confidence” … … ….
… That no-one has the gall to question it?

sibeen
December 14, 2009 12:51 am

Perhaps relevant.
“Subject: Darwin temperature record
Date: Wed, 29 Mar 2000 12:51:29 +0930
From: “legalnet”
To: “John Daly”
Dear John,
Further to my emails of earlier today, I have now heard back from Darwin Bureau of Meteorology. The facts are as follows.
As previously advised, the main temperature station moved to the radar station at the newly built Darwin airport in January 1941. The temperature station had previously been at the Darwin Post Office in the middle of the CBD, on the cliff above the port. Thus, there is a likely factor of removal of a slight urban heat island effect from 1941 onwards. However, the main factor appears to be a change in screening. The new station located at Darwin airport from January 1941 used a standard Stevenson screen. However, the previous station at Darwin PO did not have a Stevenson screen. Instead, the instrument was mounted on a horizontal enclosure without a back or sides. The postmaster had to move it during the day so that the direct tropical sun didn’t strike it! Obviously, if he forgot or was too busy, the temperature readings were a hell of a lot hotter than it really was! I am sure that this factor accounts for almost the whole of the observed sudden cooling in 1939-41.
The record after 1941 is accurate, but the record before then has a significant warming bias. The Bureau’s senior meterologist Ian Butterworth has written an internally published paper on all the factors affecting the historical Darwin temperature record, and they are going to fax it to me. I could send a copy to you if you are interested.
Regards Ken Parish

bradley13
December 14, 2009 1:22 am

The main defense of the Darwin data seems to be the sensor replacement in 1941. Fine, let’s make this a non-issue: look only at the post-1941 data. The adjustments are no less egregious.
You know what bugs me the most about this? It’s the FOI requests.
In a different field I’ve written computer models, run statistical analyses on the results, and published peer-reviewed papers on the results. My raw data and code were online from day one, and even years later I sent out code to the occasional inquiry.
Sure, the researchers should not have fought the FOI requests. More to the point: FOI requests should never have been necessary. Any reputable journal in any field of endeavor should require complete data and source code to be placed online. If you will not enable others to verify your work, you cannot claim to be doing science.

Nick Stokes
December 14, 2009 1:59 am

Geoff Sherrington (23:33:14) :
You tell me what you base this vague statement on and I’ll post some graphs that differ from your comment.

Not vague, Geoff, or it needn’t be. As I’ve been saying, I’ve reproduced GG’s calc, which gives the distribution of trends calculated for each station. There are 6736 of them, and Darwin’s trend diff of 0.23 C/decade came in at number 576. A bit over 1 sd at the high end.

Nick Stokes
December 14, 2009 2:11 am

Geoff Sherrington (23:33:14) :
“Darwin is a real outlier among long data series.”

Geoff, I didn’t notice that you’d given the quote specifying long data series. I gave the figures for all series, where a lot of short series pack the tails. For series with over 40 years data, of which there are 4387, Darwin came in at #243, nearly at the 95% level.
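
Those ranks are easy to turn into percentiles as a sanity check (the counts are Nick’s; only the arithmetic is added here):

def upper_tail_percentile(rank, n):
    # Percentage of stations whose adjustment-induced trend change is smaller.
    return 100.0 * (1.0 - rank / n)

print("all series:  %.1f" % upper_tail_percentile(576, 6736))    # about 91.4
print("long series: %.1f" % upper_tail_percentile(243, 4387))    # about 94.5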

djekume
December 14, 2009 6:56 am

I am attempting to reconstruct the graph you used showing a mounting slope of step-function adjustments, but I can’t do it. Can you explain where you are getting your data? If you use the appinsys website to plot GHCN Darwin 0 raw data against GHCN Darwin 0 adjusted data, you find that the temperature in 1942 was about the same as the temperature in 2008, on both the adjusted and unadjusted data.
http://www.appinsys.com/GlobalWarming/climgraph.aspx?pltparms=GHCNT100XJanDecI188020080900111AR50194120000x
Consequently, the correction in 1942 was the same as in 2008. The correction rises in the 60s-80s but then drops back again. On your chart, however, you show a much higher adjusted temperature in 2008 than in 1942 and the correction rising steadily until 2008.
What Darwin 0 data set are you using? Where can I find it?

carrot eater
December 14, 2009 8:45 am

Geoff Sherrington (23:33:14) :
Even if you don’t want to believe all the literature, you can see it for yourself, as I’ve noted elsewhere.
Go to the BoM’s ‘high quality’ page, and see the data for Darwin. What you see there is the current result of the ABoM homogenisation procedure, as described by Torok and Della-Marta, etc.
That data is clearly not the same as the raw data in the GHCN.
So you can see for yourself that the homogenisation done by the ABoM does not make it to the raw records used by GHCN.
The GHCN then goes on to do its own homogenisation using fairly similar procedures, except that they don’t have the benefit of having the historical metadata.

carrot eater
December 14, 2009 9:19 am

One thing getting lost in the mix is whether anybody even uses the GHCN homogenisation for the types of plots Willis was discussing. GISS Temp does not, and it looks like CRU doesn’t, either. So on top of everything, we’re discussing a homogenisation that doesn’t even go into the global temp anomaly data series that we usually see. Does it?
@djekume (06:56:08) :
Eschenbach moved the anomalies up and down so that they match at the beginning of the record, as opposed to the end. I don’t know if that viewer lets you do such a thing, so you might want to just download the data.
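
The effect of that choice is easy to demonstrate. A toy sketch: offsetting an anomaly series so it matches a reference over the first decade, rather than the last, moves the whole curve up or down but leaves its trend untouched.

import numpy as np

def align(series, reference, window):
    # Shift `series` so its mean over `window` matches the reference's mean there.
    return series - series[window].mean() + reference[window].mean()

years = np.arange(1900, 2000)
raw = np.zeros(years.size)                      # flat raw anomalies
adjusted = np.linspace(-1.0, 1.0, years.size)   # adjusted record with a trend

print(align(adjusted, raw, slice(0, 10))[-1])     # matched at start: ends ~ +1.9
print(align(adjusted, raw, slice(-10, None))[-1]) # matched at end:   ends ~ +0.1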

KevinUK
December 14, 2009 12:06 pm

Moderators
What’s this blatant piece of advertising for someone’s book doing on this thread?
“Roger Knights (13:00:13)”
“Here’s a three-part solution I endorse, spelled out in a book called “Prescription for the Planet: The Painless Remedy for Our Energy & Environmental Crises””
Roger, this is not the place for trying to promote someone’s book. Not one word of what you have posted bears any relevance to this popular thread. Hoping to get a higher Google ranking off the back of this thread, are we? Do us all a favour and go and astroturf somewhere else.
For those of us who have actually worked in the UK nuclear industry, including periods at Dounreay on the ‘slow breeder reactor’ (a Tom Marsham phrase), I can only say that Blees needs to come back and live on this planet, where we’ve never built, nor are ever likely to design and build, a fast reactor with a positive breeding coefficient – and certainly not 3500 of them by 2050.
Oh, and by the way, does this marvellous design of IFR ‘burn’ all the low-level waste and intermediate-level waste that arises from the reprocessing of its spent nuclear fuel and its ‘breeder blanket’? If we want to get rid of weapons-grade plutonium, it’s much easier to just recycle it into the thermal reactors as ‘mixed oxide’ fuel. And the solution to non-proliferation is easy too: just don’t bloody reprocess the spent fuel. End of!!
Now back to talking about the Darwin adjustments.
KevinUK

AMac
December 14, 2009 12:57 pm

Far upthread, Nick Stokes made an important contribution that has been largely bypassed.
Willis Eschenbach, could you address the remarks on Darwin temperature adjustments made by Aussie climate person “Blair”?
Nick Stokes’ comment is on 11 Dec 2009 at (05:11:33).
He quotes “Blair,” who originally made his remarks on Darwin as Comment #64 (11 Dec 2009 at 4:15pm) at Brian’s Larvatus Prodeo post How low can you go? (A Nov. 24th 2009 comment of general relevance by “Blair” on temperature data adjustment in Australia is here.)
The Dec. 11th comment by “Blair” begins,

Roger posted a link to my post from a couple of weeks ago so I won’t repeat the information there. As it happens, we’re currently reworking the Australian historical temperature data set, using the more complex adjustment scheme outlined in that post rather than the single annual adjustment used at present, and also incorporating a fair bit of pre-1960 data that was effectively unavailable for use last time round because it was only available on paper and hadn’t been entered into the computer database. Hopefully that will be done earlyish next year.
In the specific case of Darwin, while we haven’t done the updated analysis yet, I am expecting to find that the required adjustment between the PO and the airport is greater in the dry season than the wet season, and greater on cool nights than on warm ones. The reason for this is fairly simple – in the dry season Darwin is in more or less permanent southeasterlies, and because the old PO was on the end of a peninsula southeasterlies pass over water (which on dry-season nights is warmer than land) before reaching the site. This is fairly obvious from even a cursory glance at the data – the record low at the airport is 10.4, at the PO 13.4.
Darwin is quite a difficult record to work with…
[Continues]

December 14, 2009 1:28 pm

“…As it happens, we’re currently reworking the Australian historical temperature data set, using the more complex adjustment scheme outlined in that post rather than the single annual adjustment used at present, and also incorporating a fair bit of pre-1960 data that was effectively unavailable for use last time round because it was only available on paper and hadn’t been entered into the computer database. Hopefully that will be done earlyish next year.”
The game’s afoot, Watson.

carrot eater
December 14, 2009 2:25 pm

AMac (12:57:01) :
That speaks to the historical metadata and the homogenisation done by the ABoM. Those aren’t used by the GHCN discussed in this topic, but they can be compared to the GHCN results.
Of course, if the ABoM finds more raw data, or enters more hand-written data into the computers, then the GHCN can use that. Another comment by Blair (I assume that is Blair Trewin) said they found some old data from another site in Darwin, so maybe that will help with the weirdness around 1941.

Geoff Sherrington
December 14, 2009 3:00 pm

Re Nick Stokes 05:11:33 on 11 Dec
Nick,
You quote a person working with Australian adjustments at Darwin:
” in the dry season Darwin is in more or less permanent southeasterlies, and because the old PO was on the end of a peninsula southeasterlies pass over water (which on dry-season nights is warmer than land) before reaching the site. This is fairly obvious from even a cursory glance at the data – the record low at the airport is 10.4, at the PO 13.4″
Nick, what you forget to mention is that although southeasterly winds do indeed pass over water before reaching the old Darwin P O site, they pass over water for about 8 km (5 miles).
Before that, they have passed over some 3,000 km of dry hot country (geometrically speaking).
Testing: Where is your comment wrong?
Hint: Look at a wind rose, if you know what it is.
Yours was not a quality post.

Ripper
December 14, 2009 3:07 pm

“Steve Short (14:04:50) :
This document published by CSIRO in 2008 on Processed Climate Data for Timber Service Life Prediction Modelling has many graphs of interest”
Crikey! Lots of graphs of interest!
They appear to match the decline in Briffa’s tree rings.

Steve Short
December 14, 2009 3:59 pm

“…and also incorporating a fair bit of pre-1960 data that was effectively unavailable for use last time round because it was only available on paper and hadn’t been entered into the computer database. Hopefully that will be done earlyish next year.”
This also speaks (strongly) to the issue of whether data was dropped or not. I am particularly interested in data which is:
(a) known to exist from independent historical and/or academic sources and/or acknowledgment by BOM itself in some way e.g. past summary reports; and
(b) potentially has application for regressions and algorithmically based corrections of sites such as Darwin and Brisbane (airports),
and hence can be used in due diligence examination of the quality of modern trends revealed post-adjustment by BOM (and hence by GHCN, CRU etc).
Soooo interesting that Blair (Trewin?) is clearly stating that, here in 2009, BOM is still in possession of ‘a fair bit’ of pre-1961 records which have somehow never been entered into their electronic records over (errrr) the last 48 years. If memory serves me correctly I first started putting data onto punch cards at uni in about 1970 during my masters.
So now we know from the tough nuts over at the Larvatus Prodeo blog that ‘data never entered’ (still!) could simply have been misinterpreted by sceptics as ‘dropped data’. How very silly of us (yet again).
In respect of Darwin, maybe BOM might post the reputedly mythical Oenpelli 1920 – 1963 record for a site just 230 km from Darwin. Hope springs eternal (I suppose)….

Geoff Sherrington
December 14, 2009 4:27 pm

Here is some detail attributed to Torok and Nicholls of the Australian Bureau of Meteorology, for Darwin (including both the old PO site and the post-1940 airport site), dated mid-1990s.
14015 1021 1991 0 -0.3 -0.3 dm
14015 1021 1987 0 -0.3 -0.6 dm*
14015 1021 1964 0 -0.6 -1.2 orm*
14015 1021 1942 0 -1 -2.2 oda
14015 1021 1894 0 0.3 -1.9 fds
14015 1001 1982 0 -0.5 -0.5 or
14015 1001 1967 0 0.5 0 or
14015 1001 1942 0 -0.6 -0.6 da
14015 1001 1941 1 0.9 0.3 rp
14015 1001 1940 1 0.9 0.3 rp
14015 1001 1939 1 0.9 0.3 rp
14015 1001 1938 1 0.9 0.3 rp
14015 1001 1937 1 0.9 0.3 rp
14015 1001 1907 0 -0.3 -0.9 rd
14015 1001 1894 0 -1 -1.9 rds
014015 is the Australian station number.
1021 means minimum temperature, 1001 means maximum.
The next column is the year of a change.
Next column, a 1 is for a change in a single year. A 0 is for all previous years.
The first column with temperature changes in deg C is the magnitude of an adjustment, though I am uncertain about the sense of the sign.
The next column is the cumulative effect of the changes, presumably preceding the corresponding year, but uncertainly extending back (to the start of the data or to the previous change?)
Then lastly there is a code for the reason for the adjustment. The common “rp” means “poor site, site cleared”. Now if you are familiar with Darwin, you will know that unchecked grass can grow taller than a Stevenson screen, so what effect that has on the temperature is rather hard to envisage – if it was grass. There was also an episode reported above of a pile of adjacent dirt being levelled.
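For anyone who wants to tabulate these rows, here is a minimal parsing sketch in Python (assuming the whitespace-separated layout just described; the field names are my own labels, not Torok and Nicholls’):

# Minimal sketch: parse the adjustment rows quoted above.
# Assumes whitespace-separated fields; names are illustrative labels only.
rows = """14015 1021 1991 0 -0.3 -0.3 dm
14015 1001 1894 0 -1 -1.9 rds"""

for line in rows.splitlines():
    stn, element, year, single, adj, cum, reason = line.split()
    print(f"station {stn}, element {element}, year {year}, "
          f"single-year flag {single}, step {adj} C, "
          f"cumulative {cum} C, reason code {reason}")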
Someone above has said that the most recent site is next to the Stuart highway, which is the main link from Darwin to the outside world. If I have identified it correctly on Google earth, it is about 190 meters from the Highway, but I’m not confident. The airport fence is the boundary on the Northern side of the Highway for 5.5 km. Given that the dominant winds are from the east and SE, this would blow the UHI effect from suburbs like Winnellie and Berrimah more or less for 15 km until past Palmerston. This is the direction of the wind for more than 40% of the year. There has been a lot of urbanisation to the SE of this area since about 1980. Have a look on Google Earth.
The big problem arises when you try to discover what the Torok and Nicholls adjustments reported above were applied to, because there had been earlier adjustments (eg Simon Torok’s Ph.D. thesis). Do these adjustments apply to raw data or to adjusted data?
The next big question is whether these adjustments are still pertinent. There have been later adjustments. Did they replace or add to these adjustments? The BOM seems reluctant to make a statement.
Indeed, the silence of the BOM on this whole Darwin episode is deafening. Could not someone like Dr David Jones, Head of Climate Change at the BOM, make a definitive statement to clear the air? When we correspond, he tells me that certain “products” are sold by BOM to the public, but that fees (which are not small) will be charged for further inquiries. Is this a variant of “don’t release the data”? If so, it’s directed against the taxpayers who funded it. As I understand it, all Australian data supplied to NOAA or whomever is the global source, is supplied for free.
Not a healthy, democratic arrangement, is it?
I think it would clear the air for the BOM to post (a) the Australian raw data, warts and all, and (b) the methods used to adjust the data to arrive at the BOM online values.
That would be a bit scientific, at least.

Geoff Sherrington
December 14, 2009 4:59 pm

Nick Stokes,
I quoted you above as writing
Nick Stokes (02:11:43) :
“Darwin is a real outlier among long data series.”
An ordinary person would read this as meaning that Darwin is an odd man out among long data series. (Maybe because of its high trend when adjusted).
I am asking if you have examined other long term data series to support this insinuation.
Have you?

Geoff Sherrington
December 14, 2009 5:15 pm

carrot eater (09:19:57) : on 14/12
So where does this leave an eager new researcher who wishes to do a proxy study? Unless he has been deeply immersed in the history of adjustments, how in hell can he choose the right temperature series, let alone know that others exist and that some are quite different?
So we come back to gg – giorgio gilestro’s webpage – where a distribution of corrections to slope is given to purportedly show that the plusses balance the minuses. More or less, since the time factor was left out. Eric Steig approves of it.
Would a good scientist use inputs from two different adjustment methods (or probably more, including other countries) to produce such a global graph? Not this one. Why, there is not even public documentation on how the changes to date have been made, which have been dropped, which have been modified. How can you produce such a graph when a change this year can change an estimate 100 years ago and when several hundred changes are being made each year in the USA alone?
As we say here, “dreamin’ “

Roger Owens
December 14, 2009 5:18 pm

A simple question. I seem to remember that we’re not allowed to average averages. Is that wrong? Using these “adjusted” figures seems even worse. I agree completely with Mr. Ryan Stephenson, Gail Combs, et al., but who is going to do all this work? For many places all over the world? Maybe we should just go back to common sense (?) or find an equivalent way to measure temperatures, similar to reading tree rings.
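A toy illustration of why averaging averages can go wrong (numbers invented): the average of two monthly means only equals the mean of the pooled daily data when the months contain the same number of readings.

# Toy example: averaging averages vs averaging the pooled data.
feb = [30.0] * 28   # 28 daily values
mar = [20.0] * 31   # 31 daily values

mean_of_means = (sum(feb) / len(feb) + sum(mar) / len(mar)) / 2
pooled_mean = sum(feb + mar) / (len(feb) + len(mar))

print(mean_of_means)   # 25.0
print(pooled_mean)     # ~24.75: the longer month carries more weight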

December 14, 2009 6:05 pm

Geoff Sherrington (17:15:39) :
carrot eater (09:19:57) : on 14/12
“Why, there is not even public documentation on how the changes to date have been made, which have been dropped, which have been modified.”
As we say here, “dreamin’ “
Absolutely!
Quite apart from any suggestion that NOAA, NASA, ABoM have been remiss in this area, this is clearly where, IMHO, the people who set up the IPCC, and the ‘leading scientists’ who advise it and are the ‘leading authors’ of the report, fell down on their responsibilities very badly.
They should have set up a whole unit from the outset whose sole responsibility was to carefully record, document and oversee the QUALITY CONTROL/QUALITY ASSURANCE aspect of all key/core datasets presented by the UN to the world in respect of global warming, despite those datasets arising within a national (US, UK principally) context.
I note that the IPCC was founded very many years after the modern principles of QC/QA were developed in the late 19th/early 20th century by intelligent people (Whitney, Taylor, Shewhart etc.) working firmly within a context of (shock horror) American manufacturing i.e. BUSINESS.
Maybe that is the answer. Those ‘leading scientists’ and ‘leading authors’ had possibly never been much exposed to a deep culture of quality control and quality assurance. We may well rue the fact that there were very few engineers amongst them. Not the sort that could put men on the Moon and bring ’em back alive.

carrot eater
December 14, 2009 8:15 pm

Geoff Sherrington (16:27:47) :
“The first column with temperature changes in deg C is the magnitude of an adjustment, though I am uncertain about the sense of the sign.”
I think the sign is the direction of the adjustment for the previous years. The idea is to get everything on the same scale as the most recent measurement, so discontinuities are reconciled by adjusting the older data, not the newer data.
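A toy illustration of that convention (all numbers invented): if a documented move in 1942 made the site read 0.6 C cooler, the earlier data are shifted down to match, and the recent data are left alone.

# Toy example: reconcile a step by shifting the OLDER data onto the
# scale of the most recent measurements. Numbers are invented.
years = [1940, 1941, 1942, 1943]
temps = [29.0, 29.1, 28.5, 28.6]    # a -0.6 C step at the 1942 move
step_year, step = 1942, -0.6

adjusted = [t + step if y < step_year else t for y, t in zip(years, temps)]
print(adjusted)    # [28.4, 28.5, 28.5, 28.6] -- discontinuity removed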
“There have been later adjustments. Did they replace or add to these adjustments?”
Sounds like Della-Marta et al mostly left Torok’s adjustments in place through 1993 (except as noted), and then did their own homogenisation from scratch on the ensuing data. I’m not sure what choice you pose between ‘replace’ and ‘add’.
“The BOM seems reluctant to make a statement.”
What, have you been asking them things?
“Could not someone like Dr David Jones, Head of Climate Change at the BOM, make a definitive statement to clear the air? ”
Nowhere in Willis’s post does the BoM come directly into play. The entire post is about the GHCN, so it’s the NOAA that’s in the line of fire here. I don’t think a response is absolutely necessary, since I don’t think Willis actually made any discernible point, but maybe somebody is working one up; it’s been what, a whole week? That said, the ABoM could provide some clarification on the historical metadata (which you’ve already found). As I’ve repeated, the GHCN does not use this metadata, but having it would give an idea of the physical underpinnings for why adjustments are made.
“Is this a variant of “don’t release the data”? ”
As we’ve been hearing, governments want their weather services to make some money. This is of course unfortunate for people in the general public who want to play with the data, and you should encourage them to reconsider. Though the pricing is not prohibitive, if I’m looking at the right thing.
http://reg.bom.gov.au/silo/Subscriptions.shtml
But in any case, they give raw data (well, monthly averages) to the GHCN (I haven’t checked if it’s only mean, or also max and min), and they don’t seem to mind the GHCN having that available on their website.
“I think it would clear air for the BOM to post (a) the Australian raw data, warts and all and (b) the methods used to adjust the data to arrive at the BOM online values.”
In principle it’d be nice to get the unadjusted data straight from the ABoM page, but getting it from the GHCN is good enough. If the GHCN has max/min data for Australia, then that’s good enough. As for the method, it looks like Torok to 1993, then Della-Marta to maybe 2000; I’m unsure if the most recent stuff is homogenised yet. A comparison of the ABoM homogenised set with the GHCN raw would tell you that, I suppose.

Nick Stokes
December 14, 2009 8:28 pm

Geoff Sherrington (16:59:39) :
Nick Stokes,
I quoted you above as writing
Nick Stokes (02:11:43) :
“Darwin is a real outlier among long data series.”
An ordinary person would read this as meaning that Darwin is an odd man out among long data series. (Maybe because of its high trend when adjusted).
I am asking if you have examined other long term data series to support this insinuation.
Have you?

Yes. Did you read my reply (02:11:43)? You quoted it. I said:
For series with over 40 years data, of which there are 4387, Darwin came in at #243, nearly at the 95% level.

So yes, I examined 4387 other series. All of them.

carrot eater
December 14, 2009 8:39 pm

Geoff Sherrington (16:59:39) :
(to Nick) “I am asking if you have examined other long term data series to support this insinuation.”
Doesn’t his reply at (02:11:43) tell you? He looked at >40 year records.
Geoff Sherrington (17:15:39) :
“So where does this leave an eager new researcher who wishes to do a proxy study.”
I don’t see the difficulty. You’ve got CRU, GISS, NOAA and now JMA, as well as two satellite records. Any eager new researcher would read the literature and get an idea of how each is constructed, if he were competent. To the limited extent that these don’t all agree, I think even laymen have a feel for the relevant differences – GISS’s Arctic infilling, for example.
“More or less, since the time factor was left out. Eric Steig approves of it.”
The initial analysis was very elegant and simple, yet informative. Nick Stokes took it a step further, by limiting it to post-1940. I bet many people here would not have guessed the shape of the distribution – hence, it’s informative. We see that homogenisation has subtle net effects on GHCN, not huge ones; we see that Darwin is not representative; and we see that you’ll also find the opposite of Darwin. This does not tell us that the homogenisation method of GHCN is good, but it is a good indication that it isn’t some crude intentional fraud, as claimed by Willis.
“Would a good scientist use inputs from two different adjustment methods (or probably more, including other countries) to produce such a global graph?”
I have no idea what you’re talking about. GHCN and GISS both start with raw, except in the case of US data. They do their own adjustments.
“Why, there is not even public documentation on how the changes to date have been made, which have been dropped, which have been modified. How can you produce such a graph when a change this year can change an estimate 100 years ago and when several hundred changes are being made each year in the USA alone?”
Have you read every single paper from NOAA, GISS and CRU on the topic?

Kevin Aldcroft
December 14, 2009 8:53 pm

It just goes to show that with the right incentives some so-called scientists can be persuaded to adjust any results or findings to suit a hidden agenda. The pay-off for these so-called climate scientists must be worth millions in grants, sponsorships and government funding, but it is no excuse or reason to taint the scientific community by using fake data just to build their case; they should all be charged with fraud.

Steve Short
December 14, 2009 10:28 pm

carrot eater (20:39:09) :
“GHCN and GISS both start with raw, except in the case of US data.”
I basically agree with most of what you have said with the exception of the above statement.
How do we know this data is raw? It seems like while it is one small step for you to assert that, it is one giant leap for GHCN and GISS to be certain it is so.
For example, we only have the claim by ABoM that it supplies only raw data to NCDC and NASA. As a patriotic Aussie I’d like to believe (and do) that this is true. I can’t speak for all the other countries in the world and I don’t think you can either. I can imagine many have carried out good, bad and truly ugly adjustments of their own. I think it is quite probable NOAA and NASA have received and do receive ostensibly raw data which was not truly raw. NCDC and NASA have no means of knowing otherwise.
Beyond this, every country has the right to not upload, lose, or drop data, and often does, as even the case of ABoM shows. As far as NOAA and NASA are concerned, all the data they get is what the supplying countries choose to supply. So as we have seen, this obviously constrains what GHCN and GISS can do, e.g. in terms of subsequent gridding and data adjustments.
The value of Willis’ work lies in the fact that he has highlighted a highly complex situation in the treatment of JUST ONE station, Darwin, by JUST ONE agency, GHCN. But I respectfully suggest this at least shows just how convoluted and complex the situation can get.
At the end of the day NOAA, NASA and CRU are just organisations within two (developed) countries. They are not the whole world or the UN.
The adjusted and homogenized data is being implicitly used by the UN (IPCC) on behalf of all the human race, rich, middle class and poor alike. Most are blithely unaware of the complexity of this (data gathering) situation, and clearly the IPCC (aka the Phil Joneses of this world) doesn’t want to tell them.
Yet in the ultimate irony for the first time in human history we are being lobbied by the UN to act both very radically and fully in concert as a sentient species on an issue facing every human and Earth itself, in significant part on the basis of this data!
In this unique context, endless blatherings of ‘don’t see the difficulty’ and ‘have you read every single paper’ references to authority just don’t resolve my ethical and scientific issues with this ‘mere exercise in data gathering’ – very sorry.

Geoff Sherrington
December 15, 2009 1:45 am

From January 1967 to December 1973 inclusive, there were 2 sets of readings taken daily in Darwin and reported by the Bureau of Meteorology. I have converted them into monthly averages for both maximum and minimum temperatures.
One station was the BOM regional office, with lats and longs being shown as -12.4667, 130.8333, number 014161. The other was the airport, lats and longs being (now) -12.4239, 130.8925, number 014015. The second station is about 7 km NE of the first.
Here is a graph of the monthly readings:
http://i260.photobucket.com/albums/ii14/sherro_2008/Darwinoverlap.jpg?t=1260869317
Despite their proximity, some monthly averages differ by up to 2 degrees C. The correlation coefficients between similar pairs are typically around 0.98 or better, and the means of both over the period are equal.
However, the presence of monthly differences of up to 2 deg C goes with a systematic difference. The minima averages are 23.2 and 23.8 and the maxima averages are 32.1 and 32.6. Thus there is a systematic 0.5 to 0.6 degree difference between these 2 stations, just 7 km apart, in both maxima and minima over these 7 years. Remember, the world is making much of a 0.7 degree change in the 20th century global mean, however that is constituted.
This leads one to question if the correlation coefficient between the average temperature of 2 stations is an adequate criterion for levelling data or replacing missing data. The technique is widely used. Should it be?
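To see why, a synthetic pair of series will do (numbers invented, just to show the arithmetic): correlation works on departures from each station’s own mean, so it is completely blind to a constant offset.

# Synthetic example: two 'stations' correlating near 0.98 yet holding a
# constant 0.4 C offset that the correlation cannot see. Invented data.
import random
random.seed(1)

a = [25 + 5 * (m % 12) / 11 + random.gauss(0, 0.3) for m in range(84)]
b = [t + 0.4 + random.gauss(0, 0.3) for t in a]   # same signal + 0.4 C

n = len(a)
ma, mb = sum(a) / n, sum(b) / n
num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
print(round(num / den, 2), round(mb - ma, 2))   # roughly 0.98 and 0.4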
Caveat: The available data do not allow confirmation that these actual sites were used. It is said that the actual position of the Stevenson screen at the airport was moved several times. The positions given above are those issued by the BOM for the present locations.
Comment on armwaving: I have asked a number of times if other readers know for certain that some adjustments were made to Darwin or World data and if certain data were truly raw. It is not good enough to receive an opinion in return. In my working career, if I had asked a colleague such questions and if I had received such answers, that employee would not be in the office for much longer. We are at the stage of questioning fundamental assumptions, not at the stage of saying “I think that is right”. We want to know if it is right.

Geoff Sherrington
December 15, 2009 1:49 am

Re: carrot eater (20:39:09) :
Geoff Sherrington (16:59:39) :
(to Nick) “I am asking if you have examined other long term data series to support this insinuation.”
Doesn’t his reply at (02:11:43) tell you? He looked at >40 year records.
………………………
Now, be a good chap and do the job properly. Look at >100 year records. Blind Freddie knows that there was a major break point in the data of many stations, about 50 years ago.

carrot eater
December 15, 2009 3:52 am

Just saw this.
Ryan Stephenson (07:06:07) :
“To claim that any scientifically useful data could be drawn from such a piece of equipment is itself fraudulent, and begs the question “Why?”.”
You use the word fraudulent way too easily. That isn’t ‘fraudulent’, at least, not that anybody has demonstrated.
But in any case, I agree that at some point, a data set just can’t be used anymore, if it has too many discontinuities. But one would have to come up with a statistically-justified objective measure of when to not use a data set. Perhaps this is why the GISS set leaves out the Darwin data before 1963; I’ve not looked into that. The GHCN has criteria for not using a station, as well.
The answer to ‘why’ is simply to have more spatial coverage at all times. Almost any weather station will have some discontinuity, whether it be an equipment change or time of observation change. You can’t go back in time and tell the people in 1880 where to put the weather stations, how to set them up and maintain them, and give them modern equipment as well; you have to do the best you can with what you have.

carrot eater
December 15, 2009 4:15 am

Steve Short (22:28:27) :
If the country wants to lie about what it’s giving to the GHCN, and give them somehow processed data while calling it raw, I suppose they could, if that’s what we’re coming to.
I’ll note that the GHCN’s methods seem able to sniff out some weirdnesses though; if you read the papers they give examples where the NOAA figured out mistakes in the data – people recording numbers in the wrong units, data misassigned to the wrong station, etc. Even the little bit of processing the countries are supposed to do – calculating the monthly means – can go wrong, and is one of the things the NOAA tries to sort out.
So if the countries give the NOAA some weirdly processed data, that itself could be detected by the NOAA methods.
“The only value of Willis’ work lies only in the fact that he has highlighted a highly complex situation in the treatment of JUST ONE station, Darwin by JUST ONE agency GHCN. But I respectfully suggests this at least shows just how convoluted and complex the situation can get.”
It shows you the difficulties that can arise at some stations. But that’s a far, far cry from what Willis was claiming it showed.
“In this unique context, endless blatherings of ‘don’t see the difficulty’ and ‘have you read every single paper’ references to authority just don’t resolve my ethical and scientific issues with this ‘mere exercise in data gathering’ – very sorry.”
I was answering specific questions – questions that can be addressed by actually reading the available literature, like ‘how do they do x’, or ‘why do they do x’, or ‘how do I know what GISS does, as opposed to what NCDC does’. If your personal point is simply that the whole thing can be difficult, then fine. But one has to move beyond that observation, and assess the uncertainty in the exercise.

carrot eater
December 15, 2009 4:27 am

Geoff Sherrington (01:49:39) :
Look at what they’ve reported so far. gg used essentially all stations for the entire lengths of their histories; Nick Stokes repeated that for only the longer stations (>40 years of adjusted data); Nick Stokes also did all data since 1940 for stations with at least 9 years in that period.
So if you’re interested only in what happened since 1950, does the last one answer your question?
gg and Nick Stokes have posted the code they’re using for this analysis. If you want to see every conceivable variation, at some point you can do it yourself.

carrot eater
December 15, 2009 8:21 am

One correction to something I said earlier: I suggested looking at GISS code for homogenisation. While that would help you with GISS’s methods, it wouldn’t help at all for GHCN because they use very different methods. GISS looks for UHI; GHCN uses the methods discussed here to try to detect all sorts of other changes.
Sorry for any misunderstanding. It’s unclear to me if any NCDC source code is available, though I still maintain that the method is described well enough that somebody could give it a try.
I do think it worth noting that the results are broadly similar either way. Different methods giving consistent results is a good thing; it shows you things aren’t overly sensitive to how you go about it.

December 15, 2009 1:10 pm

carrot eater (04:27:24) :
“gg and Nick Stokes have posted the code they’re using for this analysis. ”
Could you direct me to these (code + analysis) please? Thanks.

December 15, 2009 9:02 pm

Willis
This is why I have been expending efforts to find the missing data from Oenpelli, i.e. starting 1920-1925, possibly as early as 1910, through 1963. Oenpelli is 230 km east of Darwin and (considering the prevailing winds at Darwin) if any site is going to give you a +0.80 or better correlation it might well be Oenpelli. I did my PhD (hydrogeology/geochemistry) in the Alligator Rivers region 1983 – 1988, including long annual visits to Oenpelli and the Nabarlek mine nearby. The Oenpelli site had a Stevenson Screen when I first saw it in 1983. I feel sure I sighted the long term temperature record (on paper) during that time – either at Oenpelli or more probably at the Nabarlek mine site. ABoM’s online record is only from December 1963 although their metadata states from Jan 1957 (?). It is possible ABoM have the earlier Oenpelli data on paper in their archives, if comments by ABoM’s Blair Trewin at Larvatus Prodeo are anything to go by, although why it wasn’t uploaded long ago is anyone’s guess.

Ripper
December 15, 2009 11:28 pm

Can any of you experts out there explain to me how the BOM did this to Halls Creek?
http://members.westnet.com.au/rippersc/hchomog.jpg

carrot eater
December 16, 2009 12:22 am

Willis Eschenbach (19:27:35) :
The Aussies have this to do with you: You were wondering what caused the GHCN to make the adjustments they made. To do that, you’d have to look at the neighboring stations and follow their method. But their method is meant to infer actual changes on the ground, so as a test of its ability to do that, you can ask the Aussies for the historical metadata. Turns out, it’s online, after all.
Willis Eschenbach (19:31:39) :
Re quality control vs homogenization: At that point, we weren’t talking about your post anymore; Steve Short was onto his own personal peeves. In any case, I’d say that the homogenization step will also root out major errors, in addition to the quality control. When they go to build the reference network, and the nearby stations don’t correlate at all, that would throw up a flag.
Willis Eschenbach (20:34:06) :
“The hard part is to find those stations whose first difference has an 0.80 correlation with the Darwin station first difference.”
You absolutely should have tried doing that before your initial post. I can’t imagine why you didn’t. It would have added some substance.
“So I hold to my claim, that this station was not adjusted by the listed GHCN procedure.”
Oh, come now. In some academic sense, I’m also curious to see how those (small) early-year adjustments were made. I’ve more or less got your Fig 8 reproduced now; I just need to confirm how GHCN computes annual means.
But let’s be honest.
Nobody who read this post cared one bit about the tiny adjustments in the 1920s. It’s the “stepped pyramid climbing to heaven” that got everybody excited, and we all know it. So the first priority would be to look at neighboring stations in the more recent times; the distant past adjustments are secondary in importance.
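For what it’s worth, that first-difference correlation screen is straightforward to sketch (both series invented here; this is not the NOAA code):

# Sketch of the first-difference correlation test described in the GHCN
# papers: correlate year-to-year CHANGES, not raw temperatures.
def first_diff(s):
    return [b - a for a, b in zip(s, s[1:])]

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

darwin = [28.1, 28.4, 27.9, 28.6, 28.2, 28.8]        # invented
neighbour = [26.0, 26.2, 25.8, 26.5, 26.2, 26.6]     # invented

r = pearson_r(first_diff(darwin), first_diff(neighbour))
print(round(r, 2), r > 0.80)   # only candidates above 0.80 qualify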

December 16, 2009 12:28 am

Ripper (23:28:09) :
“Can any of you experts out there explain to me how the BOM did this to Halls Creek?
http://members.westnet.com.au/rippersc/hchomog.jpg
Fairly blown away by that:
“Station 2011 min 1998 – 2998 = 0.451 deg/100yrs
Station 2011 max 1998 – 1951 = 0.0466 deg/100yrs”
stuff….

Steve Short
December 16, 2009 12:32 am

Again
carrot eater (04:27:24) :
“gg and Nick Stokes have posted the code they’re using for this analysis. ”
Could you direct me to these (code + analysis) please? Thanks.

Geoff Sherrington
December 16, 2009 12:40 am

Ripper,
I’ll see your chart and raise you $2.
Is this the same data as yours?
http://s260.photobucket.com/albums/ii14/sherro_2008/?action=view&current=HallsCreek.jpg
This was created from BOM data available in 2006, with the 2007 and 2008 years added from an online service.
I would like to be able to comment, but I do not know with confidence whether you have raw data to start with, or whether it has been adjusted before your earliest stage.
It is possible to comment that the minima are more often rising or falling than the maxima when one does this type of plotting. The maxima are steady in many places, especially near to the coast. What were your data sources?

Nick Stokes
December 16, 2009 2:14 am

Steve Short
You’ll find analysis and both codes on GG’s site.

Nick Stokes
December 16, 2009 2:34 am

Geoff Sherrington (01:49:39) :
OK, if you go to 80-year records after adjustment, there are 2074 such stations, and Darwin is now #31, ranked by change to slope caused by GHCN adjustment.
This histogram shows what an outlier it is.
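For anyone wanting to reproduce that ranking, the bare bones look like this (toy data; the real code is what gg and I have posted):

# Bare-bones sketch of ranking stations by how much adjustment changed
# the linear trend. Toy data; see the posted code for the real analysis.
def slope(years, temps):    # least-squares slope, deg C per year
    n = len(years)
    my, mt = sum(years) / n, sum(temps) / n
    num = sum((y - my) * (t - mt) for y, t in zip(years, temps))
    den = sum((y - my) ** 2 for y in years)
    return num / den

stations = {   # name -> (years, raw, adjusted); numbers invented
    "A": ([1900, 1950, 2000], [25.0, 25.1, 25.2], [24.0, 24.7, 25.2]),
    "B": ([1900, 1950, 2000], [18.0, 18.2, 18.3], [18.1, 18.2, 18.3]),
}
delta = {s: slope(y, adj) - slope(y, raw)
         for s, (y, raw, adj) in stations.items()}
for s in sorted(delta, key=delta.get, reverse=True):
    print(s, round(100 * delta[s], 2), "deg C/century added by adjustment")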

Steve Short
December 16, 2009 3:13 am

Nick Stokes (02:14:28) :
“Steve Short
You’ll find analysis and both codes on GG’s site.”
Thanks Nick. I hope to get into this over the Xmas/New year break.
Just one quick question jumps out at me as follows:
GG has calculated a summary stat by year (trend) over all stations and gets an average of +0.0175 C/decade. This is ‘the trend of the adjustment’ (in GGs own words) or in effect the aggregate bias of all adjustments.
Romanm has calculated a trend over time and weighted by the number of stations in each year (as you noted), this is +0.0170 C/decade.
The agreement is good between these two different approaches. Call the average aggregate bias +0.0172 C/decade?
However, correct me if I’m wrong, but isn’t the 20th century total global warming (from all sources) supposed to be ~0.65 C or 0.065 C/decade?
Isn’t a +0.0172 C/decade bias then a significant 26% of the supposed warming – reducing the unbiased warming to 0.048 C/decade?
And doesn’t this have a significant implication for an inferred CO2 sensitivity?
What have I missed (or misunderstood) here?
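Checking my own arithmetic (taking the quoted figures at face value):

# Quick check of the figures quoted above, taken at face value.
adjustment_bias = 0.0172   # C/decade, mean of gg's and Romanm's numbers
century_trend = 0.065      # C/decade, i.e. ~0.65 C over the century

print(round(100 * adjustment_bias / century_trend, 1))   # ~26.5 percent
print(round(century_trend - adjustment_bias, 4))          # 0.0478 C/decade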

Geoff Sherrington
December 16, 2009 3:13 am

Steve Short (21:02:49) :
One of the first constructions when my colleagues-to-be discovered Ranger One was to erect a proper weather station. There are records from 1970-71. I’m trying to get paper copies. It’s 256 km by road to Darwin PO. (2 hours drive before the speed was restricted).
We share a problem in not being able to trace which version of which station was adjusted by whom, by how much, in which direction and why. That is why my patience is wearing thin with other people who assert that such-and-such data are the same as such-and-such version 2, but not the same as version 3. There does not seem to be an adequate fixed frame of reference in time, for either the form of the first records in time or the date stamp on many of the later versions.

Geoff Sherrington
December 16, 2009 3:18 am

Nick Stokes (02:14:28) :
“Steve Short
You’ll find analysis and both codes on GG’s site.”
Yes, we know that there has been an analysis there for some days now, but we do not know that you could stand in a Court with a Bible and swear that the versions had the origins you imagine. We are past the stage of analysis and saying “this version looks like that version”. It’s time for rigid documentation of the sources, is it not? And how does one do this when some USA records are adjusted many, many times?

carrot eater
December 16, 2009 5:03 am

Steve Short (03:13:07) :
“However, correct me if I’m wrong, but isn’t the 20th century total global warming (from all sources) supposed to be ~0.65 C or 0.065 C/decade?”
Taken over the whole century, you’ll get something like that. Taken since 1970, you’ll get something rather higher, ~ 0.15 to 0.2 C/dec or so. The 20th century wasn’t exactly linear. Although we should be a bit careful; the data being analysed here are land-only, and we’re comparing it to land+ocean. Without looking, I’m not sure how big a deal that is, but the oceans do slow things down.
“Isn’t a +0.0172 C/decade bias then a significant 26% of the supposed warming – reducing the unbiased warming to 0.048 C/decade?”
You’re a priori assuming that any net effect due to homogenisation is bad?
“And doesn’t this have a significant implication for an inferred CO2 sensitivity?”
Zero implication.

carrot eater
December 16, 2009 5:09 am

Geoff Sherrington (03:18:23) :
I don’t know what you’re going on about. There is absolutely no doubt that the GHCN does not use the Australian ‘high quality’ homogenised reference data that you’ve been talking about.
Other adjustments, like going back and finding new data, finding misrecorded or mislabeled data, and so on, are perfectly fine, and will happen continuously.

3x2
December 16, 2009 7:52 am

Probably a dead thread by now but …
Concerning the adjustments made for moves and equipment changes at Darwin, is the maximum adjustment (2°C (+/- doesn’t matter)) even possible?
(from v2.mean) If we take the January mean (over the whole record, no adjustments) as around 28 and the July mean as 25, we have a difference of 3°C over the whole year. An adjustment (v2.mean_adj) of 2°C within that band seems a little excessive without some kind of major problem in the equipment or the site.

carrot eater
December 16, 2009 10:42 am

3×2 (07:52:00) :
I still check the thread about once a day, so I’ll see what you say.
We have seen that Darwin is atypical in how large the adjustment is, but yes, it is still a curiosity as to why that came about. This will take some time, gathering the neighboring stations, doing correlation checks, and so on. I don’t have the time for this now, but hopefully somebody else does.
The time of most interest here (due to the big stepladder) is 1940 to 2000.

carrot eater
December 16, 2009 10:50 am

Oh, and another remaining point of some interest is the treatment of duplicate records (series 0 to 4, where they overlap, are largely duplicates), how each one is homogenised by GHCN, and how they’re finally pasted together (in Willis’s Fig 7, you can see the overall result has a much more moderate adjustment).
I’ve finally found what I think is the raw data on the ABoM page, and it is a better way to get it than GHCN or GISS because it explicitly separates the station moves – there are separate files for Airport and PO.

Steve Short
December 16, 2009 12:46 pm

carrot eater (05:03:28) :
“Although we should be a bit careful; the data being analysed here are land-only, and we’re comparing it to land+ocean. Without looking, I’m not sure how big a deal that is, but the oceans do slow things down.
“Isn’t a +0.0172 C/decade bias then a significant 26% of the supposed warming – reducing the unbiased warming to 0.048 C/decade?”
You’re a priori assuming that any net effect due to homogenisation is bad?
“And doesn’t this have a significant implication for an inferred CO2 sensitivity?”
Zero implication.”
No, I am not a priori assuming that any net effect due to homogenisation is bad. What I am saying is this:
(1) So now it is out in the open. We now objectively know homogenisation introduces a positive bias. It has a positive sign. We can see it is NOT a trivial number. We saw plenty of lead-up DENIAL by the warmists about that!
(2) At the end of the day that ADDED +0.017 C/decade bias is a scientific judgement call. Its peer review should have been a transparent part of the IPCC process. It was not. Ergo: it should have been subject to a transparent international QC/QA process by the IPCC. It was not.
(3) You appear to be oblivious to these well-known aspects of ‘consensual’ reality:
http://www.globalwarmingart.com/wiki/File:Climate_Change_Attribution_png
Zero implication for CO2 sensitivity my ass.
You’ve been eating far too many carrots, Bugs.

Geoff Sherrington
December 16, 2009 5:50 pm

carrot eater (10:42:31) :
Darwin is not atypical in Australia in having a large GISS adjustment to a long set. There are other 100-year records of similar shape. Here is Broome, which is 1,100 km SW of Darwin, showing the anomaly graphs of GISS unadjusted and adjusted, taken from KNMI in Dec 09. (The BOM online data, which I have been working on but have not yet finished, show an essentially horizontal line for both Tmax and Tmin over the period from 1940.)
http://i260.photobucket.com/albums/ii14/sherro_2008/BroomeGISSunadjusted.jpg?t=1261014137
http://i260.photobucket.com/albums/ii14/sherro_2008/BroomeGiss-1.jpg?t=1261014172
Quite a difference generated from a flat response, eh?
…………………………………………..
We seem to be misunderstanding each other a little. I have not mentioned the Reference Climate Station set in this discussion. My interests are broader and in short sentence form can be expressed as:
1. Australia’s BOM sends data to USA & British bodies for incorporation in global sets. Which authorities are primary recipients?
2. How do you know that these are the necessary and sufficient bodies for the purposes of this thread? Can you confirm it independently? e.g. do NOAA and GHCN both get data, or does it go to the WMO?
3. Which geographic set of Australian information is currently sent? Is it the RCS set, or a different one?
4. Are the data aggregated into monthly before sending?
5. Are the data as currently used by the USA folks available in a file that can be accessed today, or have there been a number of files, some of which might have been updated in Australia – or not updated?
6. Is there a site for all Australian data so far digitised, in primordial, raw form?
I do not know where you work so I do not know if you can answer these. Maybe Nick Stokes (ditto).

December 16, 2009 9:51 pm

Sigh, in short….a quality control/quality assurance problem.

Geoff Sherrington
December 17, 2009 12:19 am

If you can work out the formal meaning of “correlation %” then here are the curves for Australia by Della-Marta et al. If the Y-axis is correlation coefficient x 100, then you will see very few correlations above 0.8. What are the implications of this for Australia? Easy. Just lower the hurdle to 0.5 or something that gives you more data points to work with. The science is settled.
http://i260.photobucket.com/albums/ii14/sherro_2008/d-m05.jpg?t=1261037758

Gavin Andrews
December 17, 2009 6:58 am

Willis,
That’s an interesting article, and you’ve obviously put a fair amount of work into it; however, I believe you’ve missed a fairly vital factor that I’d expect to be largely responsible for the amplification of the rise in temperatures post-1941 in the homogenised GHCN temperature records.
Darwin station is surrounded on 3 sides by the ocean, and is around 1-2 km from the ocean. The ocean acts as a major moderating factor on climate, so any stations located close to the ocean will show much more moderate increases in temperature than landlocked stations. If the Darwin data were being used solely to demonstrate temperature change in Darwin itself, then this would not be a factor, and there would be no justification for adjusting for it.
However, the GHCN dataset is not used for this purpose, it is used to extrapolate temperature change across the entire region, and must therefore take the proximity to the coast into account and adjust the figures accordingly. In order for the Darwin dataset to best represent the entire region any change would need to be amplified to represent the average change across the region, most of which is landlocked, and only a small percentage of which is located close to the coast.
It will be this adjustment to the actual station data to allow a coastal stations readings to better represent the average for the area that is causing the amplification of the warming trend post 1941 that you’re finding hard to explain.
I can see why it would look odd at first glance, but if you think about it, it would be much worse to attempt to represent the average temperature of an entire mostly landlocked region based on uncorrected data from a coastal station that’s not at all representative of the average geography of the region.
If you’re wondering how they’d come up with the correction factors to use, essentially they’d take data from several areas of the world where there were stations at relatively regular intervals moving inland from the coast, produce an average graph of the difference in temperature change related to distance from the coast, and use this to calculate a correction factor to apply in areas where coastal stations are the only available stations, based on the distance a station was from the coast compared to the average distance from the coast of the surrounding area. They’d also do a similar thing to adjust for height above sea level, topography, vegetation and population density.
I’ll have a dig around and see if I can find any papers to support what I’m saying, but this is how I remember the process being taught to me when I was studying it shortly after the GHCN dataset was first released.
Granted it’s not a perfect method of estimating climate change across Australia, but until someone invents a time machine that allows scientists to establish extra long-term inland climate monitoring stations across areas where the network is sparse, it’s the best estimate that can be produced, and we really have to live with that rather than demanding perfection when this is impossible to achieve.

Gavin Andrews
December 17, 2009 9:24 pm

Willis,
It looks like I was wrong about the methodology used, but at least partially right about the overall reason for the amplification of the rise over the period from the 1940’s onwards. In working this out, though, I’ve also sussed out a major flaw in your article…
Basically, as far as I can work out, as additional air temperature monitoring stations come on line around Darwin, they use the data from these stations to create a reference series, which they then use to homogenise the Darwin Airport data. Via the BOM website, I’ve checked the data for most of the surrounding sites, and found 6 that seem to meet the criteria quoted in your article. 5 out of the 6 surrounding sites are located inland away from the coast, and as I predicted, all 5 show a significantly bigger warming signature in parts of the 1940’s-1990’s than either Darwin or the other coastal site.
Presumably, as outlined in the GHCN documents you link to, the homogenised data from these sites (and possibly others I’ve missed) would then have been used to produce a reference series which would have been used to adjust the Darwin series, to produce a homogenised series with a significantly higher rate of increase from the 1950s-1990’s.
Pre-1950ish, I don’t think there are enough other local stations to be able to homogenise the data using this method, but there is enough metadata for the location to enable the data analyst to manually adjust the data, apparently due to trees shading the Post Office site in the 1930’s, and the move to the Airport in 1942. (Butterworth)
Now I’ll come to the major flaw in your article…
You state that there are 5 individual station records that combine to form the Darwin record, and then at the end of the article use a graph from the Darwin Zero dataset as the conclusion to the article. The GHCN dataset derives its data from the BOM monitoring stations, yet on the full list of all monitoring stations ever operating in Australia, there are only 4 stations listed for Darwin, these being
Darwin Airport
Darwin Airport Comparison
Darwin Post Office
Darwin Regional Office
Darwin Zero is not an actual monitoring site then, and I’m 99% certain that Darwin Zero is actually just the name given to the file for the reference dataset of the average temperatures of the surrounding sites, running alongside the unadjusted average data for Darwin. This also explains why this dataset ends around 1993, which ties in roughly with the end date on the original GHCN datasets being worked up.
After this point (1993) on the full graph as well, it’s notable that the correction factor remains constant, which is consistent with GHCN having stopped actively doing the comparisons beyond this point, and simply using the last correction factor produced from the original data analysis. This would corroborate the idea that the Darwin Zero data is the reference dataset.
Bottom line, as far as I can see the homogenisation has been done in the way they describe where the data is available, and they’ve reverted to using the metadata to make some justified adjustments prior to that point, which is exactly the way it should be done.
The main point that really needs to be understood in all this, though, is that this dataset is not aimed at producing the most accurate data for temperature change in Darwin city; it’s aimed at using Darwin’s temperature data and the temperature data from surrounding stations to produce the most accurate estimate for average air temperature change across the Darwin region, as part of a global dataset that’s used to estimate global temperature change.
here’s the full list of australian monitoring stations ftp://ftp.bom.gov.au/anon2/home/ncc/metadata/lists_by_element/alpha/alphaAUS_3.txt
and here are the 6 stations that I’ve found that I believe meet the criteria set for being part of the reference series
http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_display_type=dataGraph&p_stn_num=014633&p_nccObsCode=36&p_month=13
http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_display_type=dataGraph&p_stn_num=015131&p_nccObsCode=36&p_month=13
http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_display_type=dataGraph&p_stn_num=014840&p_nccObsCode=36&p_month=13
http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_display_type=dataGraph&p_stn_num=014612&p_nccObsCode=36&p_month=13
http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_display_type=dataGraph&p_stn_num=014902&p_nccObsCode=36&p_month=13
http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_display_type=dataGraph&p_stn_num=014401&p_nccObsCode=36&p_month=13
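To make the reference-series idea concrete, here is a rough sketch of the sort of first-difference averaging the GHCN papers describe (all data invented; this is a simplification, not the NOAA code):

# Rough sketch of a reference series built from neighbouring stations'
# first differences (simplified; data invented, not the NOAA code).
def first_diff(s):
    return [b - a for a, b in zip(s, s[1:])]

neighbours = [                        # invented annual means, 5 years
    [26.0, 26.1, 26.3, 26.2, 26.5],
    [24.5, 24.7, 24.8, 24.8, 25.0],
    [27.1, 27.1, 27.4, 27.3, 27.6],
]
diffs = [first_diff(s) for s in neighbours]
mean_diffs = [sum(col) / len(col) for col in zip(*diffs)]

reference = [0.0]                     # integrate back; arbitrary zero
for d in mean_diffs:
    reference.append(reference[-1] + d)
print([round(x, 2) for x in reference])   # series to compare Darwin against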

Don Dixon
December 17, 2009 9:55 pm

Here is what seems a thoughtful analysis of the Darwin issue from the other side. I’m not qualified to judge its merits:
http://www.economist.com/blogs/democracyinamerica/2009/12/trust_scientists

Geoff Sherrington
December 17, 2009 10:50 pm

Gavin,
It would help if you indicated where you are guessing, where you have documentation and where you know you are right.
Darwin is not surrounded on 3 sides by sea. There is a small tidal creek to the S-E, a few km long, but one can hardly count that in. On a scale of say 500 km, there is marginally more sea off Darwin than there is land to the interior.
There are many, many places where temperatures have been taken within 300 km of Darwin. Oenpelli, Port Keats, Pine Creek, Daly Waters, Katherine are some, with establishment dates going back to the 1870s. The question is, which of these, if any, are used for “adjustment” of Darwin, and by whom, and for how long, and for what purpose?
Darwin Zero I’m not sure of, but the 1993 date seems to be when observations ceased at the original Post Office site 014016. The move to the Airport 014015 was in 1940. There is an overlap period of 7 years documented between these 2, ending in 1993.
It might also be relevant that about June 1990, there was a change from daily to half-hourly recording. It is likely that this resulted in a step change, but it might have been spliced and tapered over several years.
Willis has shown above that there are really poor correlation coefficients between Darwin and other major bases. This would make them non-starters for adjustment.
Now, Gavin, you write “The ocean acts as a major moderating factor of climate, so any stations located close to the ocean will show much more moderate increases in temperature than landlocked stations.” If this statement were correct – and it is not – what conditions prevailed 100 years ago, 200 years ago, 1000 years ago? The sea, if it moderates, can do so only in a period of long change. If you take the last 100 years and compare the slope of Darwin’s temperature (about 1.2 deg C cooling) with that of, say, Alice Springs (0.8 deg C warming), you will realise that this differencing cannot go on forever, or it would have snowed in Darwin a few centuries ago. All you have done is to point to the need for long records, which we know anyhow.
Darwin Regional Office is now in northerly suburbia at 13 Scaturchio St Casuarina. It is used more for other purposes than taking temperatures at ground level.
Willis was basically correct. I could argue a few minor points with him, but it’s best to read again and learn from him.
The “adjusted” data that used to be put out by Giss and was adopted by KNMI is just dreamin’ . It has no basis in physics, mathematics or reality.
Finally, the primary purpose of the weather station 014015 is for aviation. It can be quite tricky landing at Darwin, sometimes with a severe inversion a few m above ground. That is one reason for the unusually long airstrip 29 of 3.35 km. Pilots want to know the temperature as they are about to land, not some complicated adjustment of it.
So have a think about what Willis and Steve and I have written and then come back with info that you are certain about.

Willis Eschenbach
December 17, 2009 11:55 pm

Don Dixon (21:55:21) :

Here is what seems a thoughtful analysis of the Darwin issue from the other side. I’m not qualified to judge its merits:

Don, that is not a “thoughtful analysis”. It is a farrago of lies, ad hominem attacks, and misunderstandings. See my response at “Willis: Reply to the Economist”.

Willis Eschenbach
December 18, 2009 12:13 am

Gavin Andrews (21:24:15) :

Willis,
It looks like I was wrong about the methodology used, but at least partially right about the overall reason for the amplification of the rise over the period from the 1940’s onwards. In working this out, though, I’ve also sussed out a major flaw in your article…
Basically, as far as I can work out, as additional air temperature monitoring stations come on line around Darwin, they use the data from these stations to create a reference series, which they then use to homogenise the Darwin Airport data. Via the BOM website, I’ve checked the data for most of the surrounding sites, and found 6 that seem to meet the criteria quoted in your article. 5 out of the 6 surrounding sites are located inland away from the coast, and as I predicted, all 5 show a significantly bigger warming signature in parts of the 1940’s-1990’s than either Darwin or the other coastal site.
Presumably, as outlined in the GHCN documents you link to, the homogenised data from these sites (and possibly others I’ve missed) would then have been used to produce a reference series which would have been used to adjust the Darwin series, to produce a homogenised series with a significantly higher rate of increase from the 1950s-1990’s.

You can’t just claim that the sites “seem to meet the criteria quoted in [my] article.” The fact is that they don’t. The earliest of these starts in 1941, the second earliest in 1965. Thus they are useless for the 1920, 1930, and 1950 adjustments.

Pre 1950-ish, I don’t think there are enough other local stations to homogenise the data using this method, but there is enough metadata for the location to enable the data analyst to manually adjust the data, apparently due to trees shading the Post Office site in the 1930s, and the move to the Airport in 1942. (Butterworth)

Say what? The GHCN specifically says that if there are not enough other local stations to homogenize a given station, they don’t use the station. In other words … you’re just making it up as you go along.

Now I’ll come to the major flaw in your article…
You state that there are 5 individual station records that combine to form the Darwin record, and then at the end of the article use a graph from the Darwin Zero dataset as the conclusion to the article. The GHCN dataset derives its data from the BOM monitoring stations, yet on the full list of all monitoring stations ever operating in Australia, there are only 4 stations listed for Darwin, these being
Darwin Airport
Darwin Airport Comparison
Darwin Post Office
Darwin Regional Office
Darwin Zero is not an actual monitoring site, then, and I’m 99% certain that Darwin Zero is actually just the name given to the file for the reference dataset of the average temperatures of the surrounding sites, running alongside the unadjusted average data for Darwin. This also explains why this dataset ends around 1993, which ties in roughly with the end date of the original GHCN datasets being worked up.

If this were true, if Darwin Zero were just an average of the other four files, then why does it end in 1991? (NB, it does not end in 1993 as you claim.) The other four records end in 1980, 1990, 1994, and 2009. Why would an average of those four records end in 1993?
In addition, look at the data. For a number of the years, Darwin Zero is NOT the average of the other data. Investigate 1952, for one of many examples.
Gavin, I appreciate your enthusiasm, but you need to read things more carefully. You also need to think your ideas out to the end, and actually test them. If you had averaged the other data, you would have seen at once that Darwin Zero simply is not the average of the others.
Best regards,
w.
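For anyone who wants to run the averaging check Willis describes, here is a minimal sketch in Python. The data layout (each Darwin record as a dict mapping year to annual mean) and the tolerance are assumptions for illustration, not the actual GHCN file format:

```python
# Hypothetical check: is Darwin Zero just the average of the other Darwin
# records? Each series below is assumed to be {year: annual mean temp}.

def mean_of_available(series_list, year):
    """Average whichever series actually report a value for this year."""
    vals = [s[year] for s in series_list if year in s]
    return sum(vals) / len(vals) if vals else None

def mismatches(darwin_zero, others, tolerance=0.05):
    """Years where Darwin Zero differs from the average of the others."""
    out = []
    for year in sorted(darwin_zero):
        avg = mean_of_available(others, year)
        if avg is not None and abs(darwin_zero[year] - avg) > tolerance:
            out.append((year, darwin_zero[year], avg))
    return out  # a non-empty list falsifies the "it's just an average" idea
```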

Geoff Sherrington
December 18, 2009 3:16 am

Gavin,
You mention “and the move to the Airport in 1942. (Butterworth)”
Actually, Butterworth is in Malaysia and has essentially nought to do with the argument.
How much are you making up as you go along?
We are not setting out to show that these raw data are invalid unless they are adjusted. We don’t think that flat data need to be given a warming trend.
To the contrary, we are asking why plausible data need to be adjusted, especially to the synthetic degree noted by Willis.

Geoff Sherrington
December 18, 2009 3:58 am

Don Dixon (21:55:21) :
If I were a schoolmaster marking the article you quote, I’d give it about 2/10. It is absolutely riddled with wrong statements, unjustified inferences, misquotes, etc. It is NOT science at work. Its author admits to not understanding correlation coefficients.
Blair Trewin would need to answer the question of why station shifts around Darwin Airport grounds resulted in an upward adjustment. In the comparison with the old site from 1967 to 1973, the respective means are near enough to identical at the 2 sites and they are more than 4 km apart. So why does it get hotter each time you move the site around within the confines of the airport? Why is there a need to make a step adjustment to the mean temp at the time of the shift to the airport in 1940, if the stations have identical means over a later 7 year daily test period?
Repeating again: Darwin is NOT surrounded on 3 sides by water to a degree that the physics of latent heat of evaporation and the like would be likely to make a detectable T difference (caveat: I have not done the calculations, as they do not warrant the time, so that statement is intuitive and not science based). Also, there is land 9 km WEST of the old met station. The prevailing winds blow very little of the time exactly along the path of the mouth of Elizabeth River, so that is a bit of a red herring – especially when the tide is out, for there is a tidal range in Darwin Harbour in the vicinity of the old Met Office of some 3-5 m or so. Yes, there is a limited amount of water to the south, but the south is a short line on the wind rose.
Also, the present site of the weather station at the airport is just over the road from one of our former company offices. I used to sit there watching the B52s and KC-135 tankers blasting heat as they taxied for takeoff just 350 m away from where the station is now. These days there are jumbo jets, bad scenario, pilots need to push up the power on taxi to turn sharp left heavily loaded, with wash heading towards the station.
Why do people write about subjects about which they know so little?

Street
December 18, 2009 5:16 am

Gavin Wrote:
“Darwin Zero is not an actual monitoring site then, and I’m 99% certain that Darwin Zero is actually just the name given to the file for the reference dataset of the average temperatures of the surrounding sites”
This part of your theory is interesting. We’ve seen occasions where the infilling process has kept creating measurements for years after a station has closed down. If Darwin Zero was created in the database as a new station in 1993, might the GHCN have infilled the entire measurement history by accident?
It would have to be an accident though. The process you describe goes against everything I’ve read on the adjustment process. What you are describing, creating a temperature series that reflects the region, is the gridding process.

Geoff Sherrington
December 18, 2009 5:21 am

Geoff Sherrington (22:50:46) :
To self – correction – the Darwin between-site comparison ended 1973, not 1993. If 1993 was also the wrong date for the end of Darwin 0 and 1990 was correct, then that is the year when half-hourly readings commenced, as per the brief public metadata sheet.

carrot eater
December 18, 2009 6:21 am

Steve Short (12:46:24) :
I think you’re mistaken on a number of points. Nothing is out in the open now that wasn’t before. The analysis of gg was absolutely simple, and not news to anybody who follows the issue. But I still maintain that the effect is minor on a large scale (more below).
From your comment, one would think that nobody has tested and refined the adjustment methods before; that nobody has looked to see if they are giving reasonable results. You say it hasn’t been a ‘transparent part’ of the review process – I don’t know how you can just say that. Just because *you* don’t know of the work, doesn’t mean it hasn’t happened, or been published.
In order to finally assess what effect homogenisation actually has, you need to go a step further and grid the data and find spatial averages for whichever region. Like that plot people love to show for the US data. Do you think that plot would be on the NOAA page if they were somehow trying to hide it? No, but you can see there the effect of time of observation bias and automated weather station adjustments.
Or like the paper I keep citing from Peterson from way back in… 1995 (“The effect of artificial discontinuities on recent trends in minimum and maximum temperatures”), where he showed the difference between raw and homogenised GHCN for the Northern Hemisphere, and then parts of China and the US. I don’t see how you can say that these things aren’t being looked at.
As for computing climate sensitivity: it isn’t done based on curve fits to the last century, though hindcasting is something of a test of the model that you’ve got. But even the graph that you show there would not change much if you arbitrarily decided that all GHCN adjustments are bad and that all data should just be used as is, regardless of how obviously bad it is, unadjusted. I’m basing that on the figures in Peterson’s paper, which show the effect of GHCN homogenisation (a homogenisation which, by the way, isn’t used by most others).
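For readers wondering what “grid the data and find spatial averages” involves in practice, here is a rough sketch, assuming anomalies are already computed per station. The 5-degree cell size and the input layout are assumptions; this is not the code any of the groups actually use:

```python
import math
from collections import defaultdict

def gridded_mean(stations, cell=5.0):
    """stations: list of (lat, lon, anomaly) tuples for one year.
    Average stations within each cell, then area-weight the cells."""
    cells = defaultdict(list)
    for lat, lon, anom in stations:
        key = (math.floor(lat / cell), math.floor(lon / cell))
        cells[key].append(anom)
    num = den = 0.0
    for (ilat, _), anoms in cells.items():
        centre_lat = (ilat + 0.5) * cell             # latitude of cell centre
        weight = math.cos(math.radians(centre_lat))  # cells shrink poleward
        num += weight * sum(anoms) / len(anoms)
        den += weight
    return num / den if den else None
```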

carrot eater
December 18, 2009 6:36 am

Geoff Sherrington (17:50:21) :
So gg has shown that Darwin is atypical, in general. You then say that it’s typical for Australia, by picking two examples. Now that’s a much less interesting question to me; but in any case, you’re still going about it the wrong way. Don’t pick me two stations. Use gg’s code to show me all of Australia. The station numbers have a country code in them, so the modification wouldn’t be hard. Of course, the spatial average would be better, but that would take rather more work.
“I have not mentioned the Reference Climate Station set in this discussion. ”
I’m pretty sure you have. You quoted directly from a file that listed the adjustments that were made for the purpose of that reference set – ‘high quality’, or whatever they call it.
“My interest are broader and in short sentence form can be expressed as:”
You basically want to know who mails exactly what to whom, when and how. Not something that interests me. The monthly max/min data are on the ABoM web page. Why not just compare it to whatever is on the GHCN, marked as raw? If there’s a difference, we can then try to figure it out. Further, the NOAA has a list of data sources, so you can see where they’re getting things from. As it happens, they get the same data from multiple sources sometimes, resulting in duplicates.
Finally, it’s my understanding that NOAA receives the monthly data. They mention that there are sometimes issues when different countries compute the mean in different ways. I don’t know if some country goes above and beyond and sends daily or hourly data, as well, or if the NOAA would even have the capacity to keep all that, if they wanted it.

carrot eater
December 18, 2009 6:56 am

Willis Eschenbach (21:32:14) :
The GHCN starts with raw data (inasmuch as monthly means are raw). It does not accept data unless the raw form is available. Now, if the sending country messes up or lies, and sends something somewhat adjusted instead, that’s a different matter.
There might be some confusion here on what we all mean by ‘adjustment’. If there are obvious errors in the daily data (like a day where the temp was recorded as -5843 C), I think it’s reasonable for the nation’s weather service to leave that out before computing the monthly means. That’s quality control, not homogenisation.
“There’s an adjustment in 1930, and 1950, but none around 1940. The Aussies, on the other hand, make huge adjustments around 1940. Make what you will of that.”
I noticed that pretty much right away. It isn’t that surprising. I wouldn’t expect the GHCN homogenisations to be timed exactly the same as those from somebody working with the metadata. The question is in the overall effect. That said, the GHCN adjustments for the record in “station 0” are stronger in overall effect, as well, as compared to the ABoM. Then again, the composite for Darwin looks reasonable again; just station 0 taken in isolation looks a bit weird.
So it may well be that the GHCN algorithm spit out somewhat unreasonable results for record 0.
“While the homogenization step may root out errors, it is also true that nearby stations may inherently not be homogeneous.”
If you read the literature, everybody is painfully aware of that. There’s no such thing as a perfect reference network, because there’s no such thing as a station known to be perfect. I’m just saying that a total lack of correlation with anything could be an indication of messed-up data, in case the QC step missed it.
“Why didn’t I do it your way? Lack of time. ”
I don’t think that’s valid. If you don’t have time to add some substance, then you don’t have time to make accusations of fraud or smoking gun. I’d suggest you could have just made a post saying, “this looks odd to me, but I haven’t put the work in yet.”
“All of the adjustments are of equal importance when we are trying to decide if GHCN did what they claimed to do. They didn’t, and it happens to be easiest to prove that using the earlier adjustments.”
When I have time, I’ll have to look at your findings for the early times. Or rather, just do it myself from scratch. I won’t have time for such a thing anytime soon, though. But I still maintain that your post would have received no particular interest if it were only the 1920s/1930s we were talking about.

carrot eater
December 18, 2009 7:01 am

I don’t agree with Gavin’s interpretation, but he comes across a point that keeps getting lost –
The multiple records that Willis took to be independent measurements simply aren’t. Where they overlap, they are largely duplicates.

December 18, 2009 7:41 am

carrot eater (06:56:49):
You certainly have ample time to post here pretty much non-stop, and on other sites too. So I have a proposal for you.
With plenty of time to criticize Willis Eschenbach for his unpaid, amateur scientist’s efforts [keeping in mind that Willis follows in the footsteps of Pascal, Einstein, Pasteur, Semmelweis and numerous other unpaid amateur scientists], why don’t you use the time you spend endlessly trying to find fault with Willis to write your own article instead? I think Anthony would be happy to post it.
See, it’s easy to take constant potshots from the sidelines at someone like Willis, who wrote about his findings, posted them, and answers questions and criticisms in a straightforward manner [unlike those pushing the current AGW malarkey that pretends to follow the scientific method].
When Willis makes a mistake, he corrects it. Everyone makes mistakes. But when the devious CRU, Michael Mann and the IPCC make major errors that negate their conclusions, and intentionally fabricate data, unlike honest amateur scientists they go running for cover and never correct their errors or answer critics. Because, of course, their errors were intentional.
So instead of endlessly trying to find fault with Mr Eschenbach, why don’t you write your own article for WUWT, and see what it’s like for your belief system to be deconstructed?
You certainly have plenty of time to comment here and on other blogs [where you label scientific skeptics “deniers”], so your protestations of not having the time are questionable at best:
“When I have time, I’ll have to look at your findings for the early times. Or rather, just do it myself from scratch. I won’t have time for such a thing anytime soon, though.”
I look forward to poking holes in any AGW conjecture you can come up with. If you’ve got the cojones to write your own article.

carrot eater
December 18, 2009 8:59 am

Steve Short:
In terms of what difference the GHCN homogenisation makes (keeping in mind that GISS does its own thing altogether): why don’t we both, for ourselves, make plots of a simple average of all data, as raw and adjusted anomaly vs time? It is less good than doing a proper spatial average, but using v2.mean and v2.mean_adj, it should be doable with a half day’s effort. If you want a starting point, the page at gg has some code from gg, Nick and Steven van Heuven, in three different languages; the first two do something a bit different, but at least have the file input in there.
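A sketch of the comparison proposed above, assuming the GHCN v2 fixed-width layout (a 12-character station id, a 4-digit year, then twelve 5-character monthly values in tenths of a degree C, with -9999 for missing); anyone trying this should verify the layout against the v2 README first:

```python
def annual_means(path):
    """Simple average over all stations and months, per year."""
    sums, counts = {}, {}
    with open(path) as f:
        for line in f:
            year = int(line[12:16])
            for m in range(12):
                v = int(line[16 + 5 * m : 21 + 5 * m])
                if v != -9999:                       # skip missing months
                    sums[year] = sums.get(year, 0.0) + v / 10.0
                    counts[year] = counts.get(year, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

raw = annual_means("v2.mean")
adj = annual_means("v2.mean_adj")
for y in sorted(set(raw) & set(adj)):
    print(y, round(adj[y] - raw[y], 3))  # net effect of adjustment, per year
```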
Smokey: There is nothing in particular that I’d like to write an article on. It just bothers me when somebody makes an accusation of fraud based on so little. If you do that, you’re sticking your neck out, and you shouldn’t be surprised if people are critical.

Willis Eschenbach
December 18, 2009 1:53 pm

carrot eater (07:01:59) :

I don’t agree with Gavin’s interpretation, but he comes across a point that keeps getting lost –
The multiple records that Willis took to be independent measurements simply aren’t. Where they overlap, they are largely duplicates.

carrot eater, you’re talking to the wrong guy. The GHCN took them to be independent measurements. The GHCN adjusted each of them separately, and adjusted them differently.
I’m just trying to understand what the GHCN have done, and they treat them as independent measurements. So if you have a problem with that, I suggest that you take it up with GHCN, and not with me.

Geoff Sherrington
December 18, 2009 2:44 pm

carrot eater (06:36:44) :
Geoff Sherrington (17:50:21) :
Look, I’m used to more precise expression than you are. When I say I did NOT refer to the Reference Climate Stations set, I did not. There’s no point in you rebutting by saying, as you did, “I’m pretty sure you have. You quoted directly from a file that listed the adjustments that were made for the purpose of that reference set – ‘high quality’, or whatever they call it.” I simply did not. I quoted a bit from Torok & Nicholls in my post Geoff Sherrington (16:27:47) of 14.12. T & N published some corrections for a particular exercise, but my statement did not rest on whether it was the RCS or not. I did not even use the term RCS. I quoted T&N in the sense that I do not know, and I suspect that you do not know, if those corrections were used, for how long, on which stations, or whether they were later replaced by others. So any putative reference I might have made to the RCS was not to make a point about its properties, but as an example of residual uncertainty. Indeed, we hear little about the RCS these days, as if it was abandoned because it was a wrong idea.
Then you admit that Darwin was atypical, and I gave you Broome, with similar suspect patterns. You say that one exception does not prove a point, nor do two. But I could continue on, giving you more and more stations where the earliest data I can find (pre 1993) are essentially flat and horizontal, while the GISS adjusted data show a strong warming trend. How many counter-examples does it take to shoot down a proposition? In some cases, merely one. How many Australian stations were removed from GISS calculations since 1993?
You are still horribly confused in what you write about assuming this and assuming that. So am I. Why don’t you start with a clean sheet and answer these questions one by one. You have seen them before, above:
1. Australia’s BOM sends data to USA & British bodies for incorporation in global sets. Which authorities are primary recipients?
2. How do you know that these are the necessary and sufficient bodies for the purposes of this thread? Can you confirm it independently? e.g. does NOAA and GHCN both get data or does it go to WMO?
3. Which geographic set of Australian information is currently sent? Is it the RCS set, or a different one?
4. Are the data aggregated into monthly before sending?
5. Are the data as currently used by the USA folks available in a file that can be accessed today, or have there been a number of files, some of which might have been updated in Australia – or not updated?
6. Where is a site giving all Australian data so far digitised, in primordial, raw form?
I do not know where you work so I do not know if you can answer these. Maybe Nick Stokes (ditto).
Re gg’s quasi-symmetrical distribution graph, the key is in the choice of stations. I am starting to show you, one by one, stations that have a strong artificial adjustment. I don’t recall seeing any Australian stations that have a strong negative correction. So I presume that the symmetry arises from omitting stations – stations I can show you – that are positively corrected but not in the set making the graph.
It would be good science to keep going after constructing that graph, picking a number of stations that are not on the list used, to see if the conclusions hold for excluded stations as well as included stations. This is called verification, and it is almost a mandatory step. But you can’t do that until you clarify the several questions above, because without answers to those questions you are flying in the dark. Like Gavin not knowing that Butterworth is in Malaysia?

Gavin Andrews
December 18, 2009 6:11 pm

Geoff
1 – Ian Butterworth is the name of the BOM scientist whose report on Darwin covers the adjustments needed due to changes in location, geography or other such factors contained in the metadata I am referencing in that paragraph.
2 – The whole of Darwin Airport is on a peninsula that is surrounded on 3 sides by the sea; on these 3 sides, within the airport boundaries, the furthest point from the sea is around 4 km, and the closest is around 1-2 km on 2 sides and 5-6 km on the 3rd side. Unless Google Maps is lying.
3 – Darwin Post Office is listed by BOM as having closed in 1942, and I think it likely they’d have noticed at some point in the 50 years between that date and the date the Darwin Zero plot ends if someone was still operating a weather station at the Post Office site. Unless you’re suggesting the Darwin Zero data was being compiled for 50 years by some rogue scientist not connected with BOM, who only submitted their data direct to GHCN, and then managed to get GHCN to accept their data and not reference the source in their comprehensive list of where their data originates? Is this your contention?
4 – Darwin Regional Office is listed as only supplying temperature data for 6 years, from 1967-73, so it’s unlikely to have been used at all in relation to Darwin Airport’s homogenisation, and its location is therefore irrelevant.
5 – I’d suggest you firstly look up the definition of ‘moderating factor’, and secondly produce any evidence you might have to back up the idea that the ocean doesn’t act as a moderating factor on climate, and then consider emailing it to every meteorologist on the planet because you’ll have just disproved one of the most widely accepted scientific paradigms in history. This is schoolboy geography stuff.
6 – Any pilot who decided to use the GHCN data to tell them the air temperature when they’re about to land, rather than the actual raw data feed from the monitoring station, would deserve to be sacked on the spot. At best the GHCN data would be at least a month out of date, and it’s also clearly not designed for the purpose you’re attempting to give it. The GHCN data’s purpose is to allow climate modellers to represent the average temperature and temperature change over time for a region, as part of a global dataset that gives a best estimate for global average temperature change over time. That is all it is for; it is not designed to produce an accurate picture of what is actually happening at any one individual location, and it’s definitely not designed to be used by pilots.
7 – What am I making up as I go along?
Where I am hypothesising is over whether Darwin Zero was the dataset originally used by either GHCN or possibly the Australians as their reference series for Darwin. It could be something else, but as it’s clearly not an actual station in its own right, I’m struggling to think what else it could be, and as I mention, there are other indicators that support this hypothesis.
I’m also hypothesising about which of the surrounding stations might have been used to produce a reference dataset, but the point I was trying to make was that there actually were datasets in the surrounding area of over 20 years’ length that could be used to produce a reference series, and that these reference series are likely to have produced the increased warming, being mostly landlocked.
In my original and second posts I hypothesised 2 potentially justifiable methods that could have been used to homogenise the Darwin data. The first method would only be justifiable for areas where datapoints were much scarcer than this, so it probably wouldn’t be used here; the second method is pretty much the method outlined in the various papers, so I’d stick by that as being the most likely reason for the adjustments post 1950-ish.
8 – I am certain that Darwin Zero is not a monitoring station, as evidenced by it not being on any BOM list of monitoring stations that have ever existed in Australia.
I am certain that the adjustments to the data pre 1942 are justified by the information contained in the metadata, as described in the BOM report by Ian Butterworth. Notably, in addition to the move from the Post Office to the Airport in 1942, the metadata describes trees shading the Post Office site in the 1930s, which would be an excellent explanation for the drop in temperatures preceding the move to the airport. Butterworth also examines a year’s worth of concurrent data from both sites, concluding that the Post Office produces significantly higher readings than the airport site.
I’m also certain that expecting a report described as ‘An Overview’ to perfectly describe the methodology used to produce the homogenised data for every single climate station in the global records is neither sensible nor realistic.
(hypothesis)
If anything Darwin’s likely to be covered by the phrase in the overview…
‘…and those stations we could reliably adjust to make them homogeneous.’
which is basically shorthand for using the most accurate method possible for producing the most reliable dataset possible in areas where there’s not enough data to use the homogenisation techniques outlined in the overview – e.g. adjusting the data based on the metadata for the early period, and possibly using the less exacting Australian method of homogenising the data for later periods using surrounding stations with at least 10 years of records and datasets with a correlation of at least 0.7, as outlined by Torok and Nicholls in their 1995 paper on the subject, or the (Plummer et al., 1995) method described in the Australian section of the 1998 review paper (both linked below), or some updated version of these.
http://134.178.63.141/amm/docs/1996/torok.pdf
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.122.1131&rep=rep1&type=pdf
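For what it is worth, the Torok and Nicholls selection rule cited above (neighbours with at least 10 years of overlap and a correlation of at least 0.7) is simple enough to sketch; the data layout here is an assumption for illustration, not the actual published code:

```python
def correlation(xs, ys):
    """Plain Pearson correlation of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5 if vx and vy else 0.0

def reference_candidates(target, neighbours, min_years=10, min_r=0.7):
    """target and each neighbour: {year: annual mean temp}."""
    accepted = []
    for name, series in neighbours.items():
        common = sorted(set(target) & set(series))
        if len(common) < min_years:
            continue                     # too little overlap
        r = correlation([target[y] for y in common],
                        [series[y] for y in common])
        if r >= min_r:
            accepted.append((name, r))   # usable as a reference station
    return accepted
```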

Ripper
December 19, 2009 7:22 am

Well, I am a bit puzzled by the big brown spot in Fig 6 in the 1995 paper.
I believe that most of that is achieved by the homogenisation of Halls Creek. The town was moved in the 1940s, 12 km west and 62 m uphill, from Old Halls Creek (which you can see is in a depression) to New Halls Creek. That’s how come the homogenised version ended up the way it did.
Figure 8 in this paper should have set off alarm bells.
http://reg.bom.gov.au/amm/docs/2004/dellamarta.pdf
Yet Jones compares it to Broome? In fact he compares Kalgoorlie to Perth as well.
Crikey! Anyone who lives out here knows that the climate changes rapidly once you leave the coast. Indeed, Mullewa has more in common climate-wise with Meekatharra (450 km away) than with Geraldton (100 km away).

Gavin Andrews
December 19, 2009 8:15 am

Willis,
I’ve just checked the raw list of all 7280 monitoring stations that make up the GHCN network, available directly from the GHCN website at the link below, as well as the full list of stations actually used by GISS, and there is only one station at Darwin listed as being used by either within their actual global dataset – Darwin Airport.
As far as I can see the data you’re highlighting in this article is only found in the section marked ‘Raw GHCN data + USHCN corrections’, and is just that, the raw data files used by GHCN to compile their original global temperature record.
So, let’s be clear about this: it’s not GHCN or GISS who’ve cocked up here, it is you who has misunderstood and misrepresented what these records actually represent, and how GHCN and GISS have used them.
There isn’t and never has been an actual station called Darwin Zero, and it’s clear that all 5 of the Darwin records listed in the raw data files represent different versions of the original records and amalgamated records produced during the original GHCN process prior to the initial 1992 release of GHCN version 1.
GHCN make this data available specifically to be open about where the data has come from, and to enable additional checks to be made in future about the assumptions made when the original amalgamated records were created.
I don’t and probably never will know exactly what is represented by the homogenised graph of Darwin Zero records you highlight in fig 7, but given that it is not an actual station record, and that the non-adjusted data for it is obviously a working version of the amalgamation process, it would be reasonable to assume, as I have, that the homogenised data represented some version of the working process of developing the initial homogenised data for the final Darwin Airport record. As such, it potentially provides an interesting insight into some of the inner workings of the process, but seeing as nobody yet has come along with any actual factual information about what this data represents, it’s a wee bit early to be calling it a smoking gun, or using it to imply that the scientists involved are guilty of scientific dishonesty.
If there’s any dishonesty going on in this article, it would be from yourself, but I’ll give you the benefit of the doubt and call it an honest mistake if you’re prepared to add a disclaimer to this article recognising that Darwin Zero is not and never was an actual station, and that you’ve made an honest mistake with this article.

Gavin Andrews
December 19, 2009 8:17 am

Apologies, I forgot to add the links to the full lists of stations used by GISS and GHCN in their networks, showing only one station record for Darwin being used…
GISS : http://data.giss.nasa.gov/gistemp/station_data/v2.temperature.inv.txt
GHCN : ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v2/v2.temperature.inv

Geoff Sherrington
December 19, 2009 10:02 pm

It is not often that I go to this site:
http://data.giss.nasa.gov/gistemp/station_data/
Therefore, I was surprised today to read there
“Note to prior users: We no longer include data adjusted by GHCN and have renamed the middle option (old name: prior to homogeneity adjustment).”
The options are now –
raw GHCN data+USHCN corrections
after combining sources at the same location
after homogeneity adjustment
Is there a recent story behind this? It has taken all the fun from those extreme examples we seem to be discussing. KNMI is also cutting them out, as far as I can tell.
Returning to the central theme, Gavin asks above “7 – What am I making up as I go along?”. Answer – about 90 percent. You have a pathological urge to take part statements out of context and to guess instead of proving. That’s why I put in an occasional trap like Butterworth. To see if you have read the paper. I see it’s now “Ian Butterworth” so maybe you’ve been googling as well. Worthwhile effort, commendable.
The best way that I can see for you to get out of the quicksand is to answer these earlier questions, a bit enlarged now, with references:
1. Australia’s BOM sends data to USA & British bodies for incorporation in global sets. Which authorities are primary recipients?
2. How do you know that these are the necessary and sufficient bodies for the purposes of this thread? Can you confirm it independently? e.g. does NOAA and GHCN both get data or does it go to WMO?
3. Which geographic set of Australian information is currently sent? Is it the RCS set, or a different one?
4. Are the data aggregated into monthly form before sending? Is it Tmax and Tmin, or just Tmean, or is it a longer record now that there is almost continuous sampling at many stations?
5. Are the data as currently used by the USA folks available in a file that can be accessed today, or have there been a number of files, some of which might have been updated in Australia – or not updated?
6. Name a web site or other public source for all Australian data so far digitised, in primordial, raw form.

Willis Eschenbach
December 19, 2009 10:33 pm

Gavin Andrews (08:15:31) :

Willis,
I’ve just checked the raw list of all 7280 monitoring stations that make up the GHCN network, available directly from the GHCN website at the link below, as well as the full list of stations actually used by GISS, and there is only one station at Darwin listed as being used by either within their actual global dataset – Darwin Airport.
As far as I can see the data you’re highlighting in this article is only found in the section marked ‘Raw GHCN data + USHCN corrections’, and is just that, the raw data files used by GHCN to compile their original global temperature record.
So, let’s be clear about this: it’s not GHCN or GISS who’ve cocked up here, it is you who has misunderstood and misrepresented what these records actually represent, and how GHCN and GISS have used them.
There isn’t and never has been an actual station called Darwin Zero, and it’s clear that all 5 of the Darwin records listed in the raw data files represent different versions of the original records and amalgamated records produced during the original GHCN process prior to the initial 1992 release of GHCN version 1.

Obviously, you haven’t looked at the data. If what you claim is true, perhaps you can explain why GHCN did not use a single unified Darwin Airport record in making their adjustments? Instead, they adjusted the various records (Darwin Zero, Darwin 1, etc) separately, and applied separate and different adjustments, adjustments made at different times, to each record. If they are just “different versions of the original records”, why would they be adjusted differently? And if they are all the same record, why would they not be combined before being adjusted? And if they are all the same record, why would they not be combined after they were adjusted?
Next, if these records were “produced during the original GHCN process prior to the initial 1992 release of GHCN version 1”, perhaps you can explain why two of these records extend past 1992?
Next, if these records are all just copies of each other, why are their monthly averages different by up to three tenths of a degree? And why do Darwin Zero, Darwin 1, and Darwin 2 disagree ninety percent of the time?
Next, since you claim the only station involved is “Darwin Airport”, how do you explain the start of Darwin Zero in 1882? Are you claiming there was an airport at Darwin in 1882?
Finally, you say these records appear in “the section marked ‘Raw GHCN data + USHCN corrections’”. The GHCN has no section with that title, so what does that have to do with GHCN?
You close with:

If there’s any dishonesty going on in this article, it would be from yourself, but I’ll give you the benefit of the doubt and call it an honest mistake if you’re prepared to add a disclaimer to this article recognising that Darwin Zero is not and never was an actual station, and that you’ve made an honest mistake with this article.

You say I’m being dishonest (simply because you happen to disagree with me) but you’re prepared to give me “the benefit of the doubt” about my honesty if I recant and agree that the sun moves around the earth??
Gavin, that’s just too precious, it appears you really think that the “benefit of your doubt” makes the slightest difference to my honesty … lay off the personal attacks and you’ll get more traction. Concentrate on the science, calling someone a liar is just a cheap debating tactic and people see right through it.
I may well be wrong, Gavin, I’ve been wrong many times before, but I’m an honest man. Your baseless insinuations that I am lying don’t touch my honesty, but they reflect very poorly on your character. Unfortunately, that kind of vile personal attack against the bearer of bad news is becoming all too common from AGW supporters, and it reeks of desperation … understandable desperation, I suppose, but unpleasant nonetheless.

engineer
December 20, 2009 8:23 am

[snip – policy – invalid email address]

KevinUK
December 22, 2009 10:56 am

Willis,
With vjones’s help and with the aid of EMSmith’s excellent documentation, I’ve been carrying out my own analysis of the NOAA GHCN data. My first step was to reproduce your excellent analysis for Darwin (unlike the Team, who think that ‘there’s nothing to see here, move on’). I’ve therefore been applying the scientific method and have attempted to falsify your analysis. I’m sorry (actually I’m glad) to say that I failed! I’ve reproduced your charts and results almost 100% and have documented my efforts on vjones’s blog ‘diggingintheclay‘. You can read the thread in which I reproduce your analysis by clicking on the link below.
Reproducing Willis Eschenbach’s WUWT Darwin analysis
As I’m sure you already know and appreciate, science progresses by ‘standing on the shoulders of giants’, so I’ve taken the liberty of further extending your excellent analysis for Darwin to all the WMO stations in the NOAA GHCN dataset.
Specifically, I’ve attempted to answer the question posed by others on your original Darwin thread as to whether or not Darwin is a special case.
Well judge for yourself by clicking on the link below which documents my extension of your analysis to include the whole NOAA GHCN dataset.
Physically unjustifiable NOAA GHCN adjustments
The following is an excerpt from the thread
“In total, I have found 194 instances of WMO stations where “cooling” has been turned into “warming” by virtue of the adjustments made by NOAA to the raw data. As can be seen from the following “Cooling turned into warming” table (Table 1) below, which lists the Top 30 WMO stations on the “cooling to warming” list, Darwin is ranked in only 26th place! The list is sorted by the absolute difference in the magnitude of the raw to adjusted slopes, i.e. the list is ranked so that the worst case of “cooling” converted to significant “warming” comes first, followed by the next worst, etc.
It’s clear from looking at the list that Darwin is certainly not “just a special case” and that in fact there are many other cases of WMO stations where (as with Darwin) NOAA have performed physically unjustifiable adjustments to the raw data. As can be seen from Table 1, many of these adjustments result in trend slopes which are greater than the IPCC’s claimed 0.6 deg C/century warming during the 20th century, said by the IPCC to be caused by man’s emissions of CO2 through the burning of fossil fuels.”

KevinUK
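KevinUK’s screen is straightforward to reproduce in outline: fit a least-squares trend to each station’s raw and adjusted annual series, keep the stations where a negative raw trend becomes a positive adjusted trend, and rank by the size of the change. A sketch, with an assumed data layout (this is not KevinUK’s actual code):

```python
def ols_slope(years, temps):
    """Least-squares trend, in degrees per year."""
    n = len(years)
    mx, my = sum(years) / n, sum(temps) / n
    num = sum((x - mx) * (y - my) for x, y in zip(years, temps))
    den = sum((x - mx) ** 2 for x in years)
    return num / den

def cooling_to_warming(stations):
    """stations: {wmo_id: (raw, adj)}, each series a {year: temp} dict."""
    flips = []
    for wmo, (raw, adj) in stations.items():
        ry, ay = sorted(raw), sorted(adj)
        raw_slope = ols_slope(ry, [raw[y] for y in ry])
        adj_slope = ols_slope(ay, [adj[y] for y in ay])
        if raw_slope < 0 < adj_slope:    # cooling turned into warming
            flips.append((abs(adj_slope - raw_slope), wmo,
                          raw_slope, adj_slope))
    return sorted(flips, reverse=True)   # biggest slope change first
```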

Willis Eschenbach
December 22, 2009 12:14 pm

KevinUK, well played and many thanks.
w.

December 22, 2009 2:33 pm

Another possibility for the spiking data could be two or more ‘cooks’, each thinking a bit more salt is necessary and unaware that someone else was adding salt too.
You could have more than one ‘true believer’ cooking the data, unaware of a compatriot.
As a teacher you know something is wrong when your students can see it without your help.

Murf
December 26, 2009 3:23 pm

I’m just your average Joe Layman, having never looked at this climate debate before. So I’m a bit out of my element here, but I have a question.
I stumbled upon this website and read the initial piece above by Eschenbach and through some of the posts here, but not all (I just don’t have enough time or understanding to get it all – so my question may have already been addressed).
The analysis given by Eschenbach appears pretty impressive. But a little more Googling led me to “Are the CRU data ‘suspect’? An objective Assessment” at http://www.realclimate.org. It appears to quash the argument here by plotting random series from World Monthly Surface Station Climatology against adjusted CRU data. The result is no substantial difference in ‘raw’ and ‘adjusted.’
All these different datasets I see mentioned on various climate sites are confusing to me–I’m not real sure what’s really ‘raw’ and what’s not.
But, anyway, what is the response to the RealClimate apparent ‘objective’ refutation of ‘bad’ data?
My thanks to anyone who answers.
Mike

Willis Eschenbach
December 26, 2009 4:11 pm

Murf (15:23:20) :

I’m just your average Joe Layman, having never looked at this climate debate before. So I’m a bit out of my element here, but I have a question.
I stumbled upon this website and read the initial piece above by Eschenbach and through some of the posts here, but not all (I just don’t have enough time or understanding to get it all – so my question may have already been addressed).
The analysis given by Eschenbach appears pretty impressive. But a little more Googling led me to “Are the CRU data ’suspect’? An objective Assessment” at http://www.realclimate.org. It appears to quash the argument here by plotting random series from World Monthly Surface Station Climatology against adjusted CRU data. The result is no substantial difference in ‘raw’ and ‘adjusted.’

Mike, you’re gonna have to ask someone else about that. I’ve been censored so many times on RealClimate for asking simple scientific questions that I refuse to up their visitor count. There’s a peer-reviewed account of one of these at this link. My experience is that if RC says it is so … it ain’t. In the meantime, if they were on fire I wouldn’t piss on them to put it out. They flat-out lie about their censorship policy, which in my book is despicable.
For a view of what Gavin the Unreliable calls “no substantial difference” take a look at WUWT here. There’s plenty of other examples. If Gavin claims to have selected “random” examples, you can be sure that they aren’t.
However, the most telling question is this. If, as you say RealClimate claims, there is “no substantial difference in ‘raw’ and ‘adjusted’ data” … then why on earth would they spend thousands of man-hours to adjust it?
Get back to us when you have considered that for a while.

All these different datasets I see mentioned on various climate sites are confusing to me–I’m not real sure what’s really ‘raw’ and what’s not.
But, anyway, what is the response to the RealClimate apparent ‘objective’ refutation of ‘bad’ data?
My thanks to anyone who answers.
Mike

The problem is that nobody knows whether the data is bad or not. Nobody is sure what is raw and what is not. The data is in a huge muddle, of which Darwin is only one example among many. What the Aussies use for “raw” data differs from what GHCN uses, which differs from what CRU uses, which differs from what GISS uses. Heck, the Aussies themselves have three versions of “raw” Darwin data. Go figure …
Gavin Schmidt of RealClimate is a computer modeler for the GISS dataset. As such, he is hardly an unbiased commenter. There are a host of problems with the “adjustments” done by GISS. See Chiefio’s site for an introduction to some of the issues. Gavin thinks GISS is quality science … however, they have been very unwilling to reveal what they are actually doing to the data. In addition, they make (IIRC) five separate adjustments to the data, and then ignore what that does to the error estimates. Coincidence? You be the judge.
Thanks for your questions,
w.

Murf
December 26, 2009 8:05 pm

Willis,
Thanks for the response. I’ll take a look at the stuff you suggest and study on your points.
I guess the only question I still have at the moment is: what exactly is your take on the World Monthly Surface Station Climatology data? Do you have any particular reason to think it’s not good data overall, other than that there just generally seems to be a big data muddle? I’m wondering since, apparently, one can download the data to see for oneself what it shows (I’m a bit puzzled why RealClimate takes a random sample rather than just all the data, but they seem to give some sort of rationale for it).
I guess, though, if I’m getting your point, it is that your examination of the Aussie (and other) particular data series calls into question all the climate data. Again, I’m confused about datasets. How is the Aussie data related to the World Monthly Surface Station Climatology data, or is it?
I think I’m a little confused as to your general point: is it that there is in fact no reliable raw data to analyze, or is it that if you take what raw data there is and don’t adjust it, then any warming trend is much reduced or eliminated?
I hope these questions aren’t too dumb. You’ve probably already answered them, but it’s taking me a while to get there, there’s so much stuff here.
Oh yes, I think the rationale the RealClimate guys give for the need for data adjustment is so that individual stations can be properly compared over time. But, they argue, those adjustments over many stations don’t create any bias in the overall trends. That seems like a reasonable argument in principle.
Thanks,
M

Willis Eschenbach
December 26, 2009 8:42 pm

Murf (20:05:19) :

Willis,
Thanks for the response. I’ll take a look at the stuff you suggest and study on your points.
I guess the only question I still have at the moment is: what exactly is your take on the World Monthly Surface Station Climatology data? Do you have any particular reason to think it’s not good data overall, other than that there just generally seems to be a big data muddle? I’m wondering since, apparently, one can download the data to see for oneself what it shows (I’m a bit puzzled why RealClimate takes a random sample rather than just all the data, but they seem to give some sort of rationale for it).

I’ve never been able to find enough metadata in the WMSSC archives to come to any kind of conclusion. Might be there, but I haven’t found it.
You can download a host of data. WMSSC data, GISS data, GHCN data, CRU data, plus data from the Aussies and the individual met services. Trouble is, they’re all different.

I guess, though, if I’m getting your point, it is that your examination of the Aussie (and other) particular data series calls into question all the climate data. Again, I’m confused about datasets. How is the Aussie data related to the World Monthly Surface Station Climatology data, or is it?

Can’t help you there. The WMSSC data is listed as:
941200 1882-1993 DARWIN AIRPORT
Unfortunately, the Aussie data does not show any dataset that ends in 1993 … so what is the relationship? Anyone’s guess.
And sadly, this is all too typical. There is no authoritative, agreed upon data anywhere. Every group has its own, and they are all different.
I think I’m a little confused as to your general point: is it that there is in fact no reliable raw data to analyze, or is it that if you take what raw data there is and don’t adjust it, then any warming trend is much reduced or eliminated?

I hope these questions aren’t too dumb. You’ve probably already answered them, but it’s taking me a while to get there, there’s so much stuff here.

The only dumb questions are the ones you don’t ask, because then you don’t get any smarter …

Oh yes, I think the rationale the RealClimate guys give for the need for data adjustment is so that individual stations can be properly compared over time. But, they argue, those adjustments over many stations don’t create any bias in the overall trends. That seems like a reasonable argument in principle.
Thanks,
M

Nonsense, at least for the GISS/USHCN adjustments. Overall, they add a distinct and quite large warming trend. Might be valid and justified … or it might not. See WUWT here for more details.
But it’s just like RC to wave their hands and say “nothing to see here, folks, move along now” …
w.

Geoff Sherrington
December 27, 2009 12:58 am

Re Murf,
If I can chip in here, I have been sending more Darwin material to Willis than he can handle, because it is so mixed up. One Australian climatologist wrote a while back that maximum and minimum temperatures are commonly taken each day, but that about 100 different ways were used to arrive at an average temperature from them.
So, when you write “Oh yes, I think the rationale the RealClimate guys give for the need for data adjustment is so that individual stations can be properly compared over time,” I have to ask you in return, “compared with what?” AFAICS, a series of temperatures taken way back to the 1860s is likely to be mostly correct. They might have larger error bars than recent equipment allows, but that’s no reason to infer a bias and try to correct it.
You can only really compare a temperature record with itself and note events like dropping and breaking a thermometer, or a recalibration that gives a slightly different answer. It is BIAS that is the root of the global warming problem, or rather, imaginary bias and its imaginary correction.
The adjustments to the negative that “balance” the adjustments to the positive do not stand close scrutiny. One example offered to me recently involved a cooling correction from about 1900, but the responsible collection agency had already voluntarily deleted the pre-1950 data as unacceptable. The post-1950 data showed a gentle warming. I suspect that a station-by-station analysis of the “balanced” claim would result in destruction of the hypothesis. I’ve seen too many warming adjustments and seldom a cooling adjustment. Besides, for many stations, nobody seems to know which initial data were used for the performance of adjustments, because the metadata are too sparse.

Murf
December 27, 2009 11:30 am

Willis and Geoff,
Thanks again for the replies.
I don’t think I’m sophisticated enough in this to ask any more questions or make any replies myself to your questions to me.
I’ll continue to read over your comments and suggested materials, but here’s my working plan to see if I can just establish one or two reasonable facts on my own:
(1) see if I can actually download the World Monthly Surface Station Climatology ‘raw’ data as the RealClimate guys claim is possible and, if it is,
(2) see if I can plot it myself (I don’t know why I couldn’t, but maybe there’s some special knowledge involved in how to properly use the data–I’ll have to see), and, if I can, take a look at what it shows.
Does that sound reasonable? It seems to me that narrows things down to a more manageable size. If that data shows little warming, RC’s foundational case for warming goes out the window, it seems to me.
If it does show serious warming, then the question is the quality of that data (possibly vis-a-vis the other datasets mentioned above).
Since RC claims this WMSSC data is very highly correlated with the adjusted data, I don’t see any reason to mess with the adjusted data at all. But maybe I’m missing something here, so correct me if I’m off base.
I’ll let you know what happens.
M

Geoff Sherrington
December 28, 2009 2:38 am

Murf,
It’s not so hard to download the data. I can do it, so can you.
One quick caution – if you are going to use an early Excel, the file has more than 65,000 rows when unzipped and overflows, so it’s best to open it in another program. I can get there via Word; I have not tried Access, but it would most likely work. But maybe you are more modern.
The NOAA data for Darwin start with WMO station #94120, then go to slightly different numbers about 5 times. Some of the years overlap, some are similar, some are different.
So here is a deeply philosophical question. If it is necessary to make an adjustment to the data supplied from another country, and that string of data has missing values, what is the proper procedure for infilling “adjusted missing values”? To me, there is only one answer: guesswork. And there is one right procedure: leave the series separate and treat them as entities, without combining them.
Here is an amusing little “difference” graph, where I have taken Darwin’s BOM online monthly data and calculated annual averages. Then I have taken the efforts of the BOM at other times, plus the efforts of various adjusters, and subtracted their values from the BOM online figures. Keep in mind that I used only one of the several GISS options, merely the first one that came along, so no cherry picking. They are all supposed to be identical in an ideal world.
(Note that I have tailed down when the data stop before 2009 or start after 1885)
If you can make sense of this, you are a better man than I.
http://i260.photobucket.com/albums/ii14/sherro_2008/Darwindifferencespaghetti.jpg?t=1261996491
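In outline, a difference plot like the one above can be built as sketched below: reduce the BOM online monthly record to annual means, do the same for each alternative version, and subtract. Variable names and layouts are illustrative only, not Geoff’s actual workings:

```python
def annual_from_monthly(monthly):
    """monthly: {(year, month): temp} -> {year: mean of available months}."""
    buckets = {}
    for (year, _month), temp in monthly.items():
        buckets.setdefault(year, []).append(temp)
    return {y: sum(v) / len(v) for y, v in buckets.items()}

def difference_series(baseline_annual, versions):
    """versions: {label: {year: annual mean}}; returns baseline minus each."""
    return {label: {y: baseline_annual[y] - series[y]
                    for y in series if y in baseline_annual}
            for label, series in versions.items()}
```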

Murf
December 28, 2009 10:44 am

Geoff,
I’ll have to come back to your graph a little later–I’m not following it right off. (I think I missed the BOM acronym somewhere–what is that?)
Right now I’m looking at the WMSSC temp data (ds570.0) I’ve downloaded.
I see there’s lots of missing data in any given station series, at least outside the U.S. This, I guess, relates to your comments above. Clearly there’s no ‘right’ answer for filling in the data. I guess I’d maybe want to take an average from a couple of years on either side where that’s possible, as an approximation, in order not to throw so many records out.
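Murf’s neighbouring-years idea would look something like the toy function below; whether infilling of this kind is wise at all is, of course, exactly what is being debated on this thread:

```python
def infill(series, year, month, window=2):
    """series: {(year, month): temp}. Estimate a missing month from the
    same calendar month in nearby years, if any exist."""
    neighbours = [series[(y, month)]
                  for y in range(year - window, year + window + 1)
                  if y != year and (y, month) in series]
    return sum(neighbours) / len(neighbours) if neighbours else None
```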
The most significant thing I notice right off: It looks like the stations are pretty much in large urban areas. Isn’t it well known and accepted that urban areas are warming due to the inherent characteristics of their being developed areas (and not due to greenhouse gases)?
If so, I don’t know that this data is worth doing anything with; it would just probably show warming that virtually nobody disagrees with. Do you think that’s right?
If it is, it would seem RC is misleading when using it to support global warming in their article: “Are the CRU data ‘suspect’? An objective Assessment.” Seems like they would note the urban characteristics of the records. Maybe the stations are not actually in the cities?
Perplexing! I must be missing something.
M

Murf
December 28, 2009 1:58 pm

I just ran across this site that has a graph of satellite temp data:
http://www.drroyspencer.com/latest-global-temperatures/
I read somewhere along the way that satellite data doesn’t have the urban heat problem. Does anybody know if that’s right?
What do people here think of the graph? Do you think the data is good?
It seems to show warming in the latest decade but no reason to believe it’s a long term upward trend of a magnitude leading to disaster. That’s how I would view it, anyway.
Roy Spencer and John Christy seem to be pretty reasonable and reputable people. Does anybody know what either side thinks of them?
M

Geoff Sherrington
December 28, 2009 4:13 pm

Murf,
You ask “Perplexing! I must be missing something.”
To the contrary. You are catching on fast. You should read earlier posts by Anthony and those noted in the margin of Climate Audit, for background.
“BOM” is Bureau of Meteorology, Australia, who have collected and compiled the one record from which all the others on the graph are derived. My fault, I made it up for you in a hurry late at night and it lacks labels. Willis has better examples at the top of his articles. They say the same thing. Geoff.

Murf
December 28, 2009 6:14 pm

I’m now at the Goddard Institute site for Surface Temperature Analysis. I thought I might see what I could do with that data for just rural areas.
It seems odd; I only see data for stations on a one-at-a-time basis. Surely you don’t have to get the data one little chunk at a time.
Am I missing it somewhere? Does anybody here have the complete Station data?
M

hillbilly
December 28, 2009 11:31 pm

A little off thread, but of great interest: Australian John P Costella has done an excellent (ongoing) analysis of the Climategate emails. He’s cleared away the technical jargon and lists the pertinent emails in chronological order, plus links to the emails in full. http://www.assassinationscience.com/climategate/

Murf
December 29, 2009 7:36 am

I’m reading through the emails in the link in Hillbilly post. Here’s an excerpt I just got to:
“Phil Jones to Ray Bradley, Mike Mann, Malcolm Hughes, Keith Briffa, and Tim Osborn, regarding a diagram for a World Meteorological Organization Statement:
(Jones statement) ‘I’ve just completed Mike’s Nature trick of adding in the real temperatures to each series for the last 20 years (i.e. from 1981 onwards) and from 1961 for Keith’s to hide the decline. ‘
Those thirty-three words summarize the hoax so magnificently succinctly that the Nobel Committee should consider retrieving their Peace Prize from the Intergovernmental Panel on Climate Change and Al Gore, and re-issuing it as a Literature Prize to Phil Jones.
This email was sent less than two months after the one analyzed above. Clearly, Mike Mann’s problems with Keith Briffa’s data—that it didn’t agree with the real temperature measurements from 1961 onwards—had by this time spread to the data for the other “temperature proxies”, albeit only from 1981 onwards. Jones reveals that Mann did not address this problem by making honest note of it in the paper that he and his co-authors published in Nature, but rather by fraudulently substituting the real temperature data into the graphs, for the past twenty or forty years as required. ”
*****
I’m not sure I fully understand the ‘bad’ here. Seems like you’d want to use the ‘real’ data. If they didn’t make the splicing known, that’s not good, of course, but I guess the big problem must be that only a later portion of the ‘real’ data are used while, if you used all the ‘real’ data, you wouldn’t get the warming they show.
Am I getting that right? (Also, which real data is being discussed here? Do we know what you’d get if you used all the real data?)
M

Geoff Sherrington
December 29, 2009 4:04 pm

Murf,
The next battle will be to get world coverage of genuinely raw data. The gatekeepers are busy building more gates.
You see, once the really raw data are known, the adjustments can be calculated and audited. That thought might terrify some adjusters.

Murf
December 31, 2009 4:29 pm

I think I’m starting to get the hang of some of this confusing climate data. I finally found the full global GISS Surface Temp Station Data. At the moment I’m ignoring any possible data-quality issues, and I’m just doing some graphs using the [GHCN raw + USHCN corrections] dataset as it is.
This may be old hat for some here, which I might know if I read through everything, but I decided to use my time to do some data analysis myself rather than study the work of others who may be considerably ahead of me.
Here’s what I’ve done. I’ve calculated monthly averages using as many values for each month as are available, just tossing out missing values (a minimal sketch of this averaging appears after this comment). I suppose this method could somehow introduce some bias, but, if so, I’m ignoring that for now.
My first results using all the data did actually show something of a hockey stick kind of curve of annual averages, with perhaps a 4 or 5C trend change over the 20th century (I’m just eyeballing–I haven’t actually calculated the trend yet).
It does seem a little odd that the upward bend in the monthly curves is much more pronounced in the colder months than in the warmer ones, but, maybe, there’s some scientifically known reason for that.
All in all, I was starting to think the alarmists have a point. Overall it looks a bit scary.
I then decided to do just the rural stations to test the proposition that there is/isn’t an urban heating effect. Maybe I’m off by thinking this is an appropriate test, but it seems reasonable to me at the moment.
The result? The curves are dramatically different for the rural set of data. Looking at annual averages, there are 3 unusually high years, 1990-1992, but really not much of a trend over the century. Maybe a little something to cause some concern, but certainly nothing alarming (to me) in the 100-year record.
Oddly, the variation for the warm vs cold seasons for the rural data looks, if anything, somewhat reversed from the all-data situation.
Seems to me that casts some doubt on the carbon-as-primary-cause of warming proposition, but I’m open to argument.
I should add I’ve not QA’d anything, so I certainly could have made errors in this first cut, and I claim some future grace to change what I’ve said here if so. I’m wondering, though, has anybody here done this same analysis and found anything similar?
M
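
A minimal sketch in Python of the averaging Murf describes – monthly means computed over whatever values are present, missing values simply dropped. The column names and the synthetic numbers here are assumptions for illustration, not the actual GISS/GHCN file layout:

    import numpy as np
    import pandas as pd

    # Hypothetical station records: year, month, station id, monthly mean temp (C).
    # Real GHCN/GISS files use a different layout; this only illustrates the method.
    rng = np.random.default_rng(0)
    n = 5000
    records = pd.DataFrame({
        "year": rng.integers(1900, 2001, size=n),
        "month": rng.integers(1, 13, size=n),
        "station": rng.integers(0, 50, size=n),
        "temp_c": rng.normal(15, 8, size=n),
    })
    # Knock out ~10% of the values to mimic missing months.
    records.loc[rng.random(n) < 0.10, "temp_c"] = np.nan

    # Murf's method: average whatever is present; NaNs are silently dropped.
    monthly = records.groupby(["year", "month"])["temp_c"].mean()
    annual = monthly.groupby("year").mean()
    print(annual.head())

If the missing values are non-random – for instance, if whole stations drop in and out of the record – this silent dropping is exactly where a bias can creep in, as the following comments discuss.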

Editor
December 31, 2009 8:08 pm

Murf – You raised the possibility of bias being introduced by just ignoring missing months. Yes, it’s possible, if the missing months are in some way non-random.
I downloaded temperature data for all the Australian stations that had been going for 100 years or more. The first ones started in the 1850s. When I just took the monthly averages of all available data, I got a huge temperature increase from 1850 to 1900, and very little trend from then on. Luckily, I was suspicious of the result and did some checking before I showed it to anyone else – turns out that the first stations were in cooler places than the stations that started later.
So any time you get an interesting result it’s not a bad idea to try to prove to yourself that it is wrong (a toy demonstration of this kind of composition bias follows this comment).
Re urban heat effect :
http://wattsupwiththat.com/2009/12/09/picking-out-the-uhi-in-global-temperature-records-so-easy-a-6th-grader-can-do-it/
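
A toy demonstration of the composition bias the Editor describes, with two invented stations that each have zero trend (the values and dates are made up):

    import numpy as np
    import pandas as pd

    # Two hypothetical stations with NO real trend: a cool site reporting from
    # 1850, a warm site only from 1900. Averaging whatever is available
    # manufactures a spurious warming step when the warm site comes online.
    years = np.arange(1850, 2001)
    cool = pd.Series(10.0, index=years)                 # full record
    warm = pd.Series(25.0, index=years[years >= 1900])  # starts later

    naive = pd.concat([cool, warm], axis=1).mean(axis=1)
    print(naive.loc[1899], "->", naive.loc[1900])  # 10.0 -> 17.5: a fake 7.5 C jump

Neither station warmed at all, yet the naive average jumps 7.5 C the year the warm site enters the record.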

Murf
December 31, 2009 8:34 pm

As it turns out, it gets even more interesting. I plotted avg annuals for the 18 best country series–i.e., series with no missing values (a few had one or two) for 1900-2008. (A sketch of this selection appears after this comment.)
Eyeballing it once again, it looks like there is very little, if any, evidence for warming in that set–certainly not dramatic warming.
Most, if not all, the rise in the total global data is apparently coming from countries with more discontinuities in the data–at least by the method of calculating annual averages (AAs) I’m using.
Is this news to anyone?
Not conclusive, I guess, for the global situation, but it does make me wonder. I would think, if my numbers are correct, it would give anyone pause.
M
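
A sketch of the selection Murf describes – keeping only the series with at most a couple of missing years and comparing trends. The numbers below are synthetic, so this shows the mechanics of the selection rather than his result:

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(1)
    years = np.arange(1900, 2009)
    # Hypothetical annual averages, one column per "country" (NaN = missing).
    data = pd.DataFrame(rng.normal(15, 1, size=(len(years), 40)), index=years)
    holes = rng.random(data.shape) < 0.15
    holes[:, :18] = False                     # leave 18 series complete
    data = data.mask(holes)

    # Keep series with no more than one or two missing values, as Murf does.
    complete = data.loc[:, data.isna().sum() <= 2]

    def slope_per_century(s: pd.Series) -> float:
        s = s.dropna()
        return np.polyfit(s.index, s.values, 1)[0] * 100  # deg C per century

    print(complete.shape[1], "near-complete series")
    print("near-complete mean slope:", slope_per_century(complete.mean(axis=1)))
    print("all-series mean slope:   ", slope_per_century(data.mean(axis=1)))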

Geoff Sherrington
December 31, 2009 11:26 pm

Murf, you have to take a lot of care that the numbers you are using have not already been adjusted. I do not possess a data string of which I can say “This is the raw data as collected by the observer and unchanged since”. Geoff.

Murf
January 1, 2010 8:25 am

Geoff,
I understand the data itself is in question, but I’m not dealing with that right now. I’m just interested at the moment in what the data the alarmists use actually shows. I’m using, as I mentioned, the [GHCN raw + USHCN corrections] series, so there are obviously changes to the raw data (but as I understand it, it’s the rawest available). I just want to see what the warmists’ own (presumably least adulterated) data show.
It appears to me that even using their data as it stands, the warming alarm is in serious question.
For me, from what I’m seeing so far, I don’t know that it’s really necessary to get into all the detailed, messy data quality details and the endless arguments of data problems and counter-arguments of cherry-picking etc. to make a case of reasonable doubt on carbon-caused cataclysm.
My argument is based on the rural and country-specific results as I outlined in previous posts. Of course, my argument could be wrong, in which case data quality would take on more significance.
Am I missing something?
M

Geoff Sherrington
January 1, 2010 5:08 pm

Murf,
Many people have been down this road before. I have not played with the USA data much, but one has to watch the definition of “rural”. You are using population 15,000 IIRC, but it also depends on factors like whether the temp sensor is 20 miles out of town or slap in the middle. My feeling is, if it’s in the middle, you will pick up a UHI effect at a population below 5,000, even as low as 1,000 if there are a few air-conditioner outlets nearby. So you might be able to sort by lower population and see where that leads (a small sketch of such a binning follows). G.
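
Geoff’s suggestion – re-binning stations by population and looking at the trend in each bin – is easy to sketch. The population cut-offs come from his comment; the station table and trend numbers below are invented for illustration:

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(2)
    # Hypothetical station metadata; real GISS metadata is formatted differently.
    stations = pd.DataFrame({
        "station": range(200),
        "population": rng.integers(100, 500_000, size=200),
        "trend_c_per_century": rng.normal(0.6, 0.5, size=200),
    })

    # Bin by the population cut-offs Geoff mentions (1,000 / 5,000 / 15,000).
    bins = [0, 1_000, 5_000, 15_000, np.inf]
    labels = ["<1k", "1k-5k", "5k-15k", ">15k"]
    stations["pop_class"] = pd.cut(stations["population"], bins=bins, labels=labels)
    print(stations.groupby("pop_class", observed=True)["trend_c_per_century"].mean())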

Murf
January 1, 2010 8:59 pm

Geoff,
I’ve already done the graphs. My point is that using their (GISS) definition of rural, I get a very significant rural effect, virtually wiping warming out. That, it seems to me, makes a strong case against CO2 emissions as the cause of any apparent global temp rise.
I’m just trying to meet the AGW-CO2 advocates on their own terms and see where it leads, and my analysis looks like it argues against them, at least as far as the carbon dioxide as the primary cause of GW goes.
You (or anyone) can look at my charts by going here:
http://murf-thisandthat.blogspot.com/
and clicking the link (I hope it works–this is new for me).
Let me know what you think of them.
Can you give me some references where others have “been down this road before” so I can look at what they’ve done? Is there anything on this site where someone else has charted the GISS data and made it available? There’s tons of stuff out there I know, but in my limited searching (there’s only so much time) I have yet to run across anyone who actually provides the charted data.
M

Kevin Kilty
January 1, 2010 9:47 pm

Murf (20:34:18) :
As it turns out, it gets even more interesting. I plotted avg annuals for the 18 best country series–i.e., series with no missing values (a few had one or two) for 1900-2008.
Eyeballing it once again, it looks like there is very little, if any, evidence for warming in that set–certainly not dramatic warming.
Most, if not all, the rise in the total global data is apparently coming from countries with more discontinuities in the data–at least by the method of calculating AAs I’m using.
Is this news to anyone?
Not conclusive, I guess, for the global situation, but it does make me wonder. I would think, if my numbers are correct, it would give anyone pause.

I’m not sure what you mean by “18 countries”. Some countries contribute quite a lot of data, some only a little. The data aren’t weighted uniformly – someone please correct me if I am wrong, but I think the data are area weighted, which makes some data more important than others. You are not following that same process. However, I’m not all that surprised that you could take a subset of the data, even a best-quality subset, and find no trend at all. I also have no doubt that if you follow the prescription of GISS, providing this is possible for an independent to do, you will get the GISS result. The important question is whether or not the processing of data by GISS makes good sense. Which brings me to
Dr. Eschenbach
Over the Christmas break I read most of the research papers (from the 1980s) that describe and justify the adjustments that NCDC make to the USHCN data. I produced a summary of the steps, and then added my own commentary, which is available at this link. I have no idea if this adds to the debate or not, but my reading of these documents shows that NCDC is doing their adjustments out of order, which could lead to very wrong results even if we assume the adjustments themselves are justifiable. I had other concerns with adjustments as well. I am wondering if the GHCN data are adjusted similarly or if there is only homogenization going on that is expected to find and correct all station troubles?
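
Kevin Kilty’s point about adjustment order is easy to illustrate in the abstract: clean-up steps generally do not commute. The two “adjustments” below (a step correction and an outlier clip) are invented for the demonstration – they are not NCDC’s actual procedures – but they show how running individually justifiable steps in the wrong order changes the answer:

    import numpy as np

    rng = np.random.default_rng(4)
    t = rng.normal(20.0, 0.3, 100)
    t[50:] += 1.0          # a documented station move: +1.0 C step at index 50
    t[60] = 35.0           # one rogue reading

    def fix_step(x):
        # Remove the step by aligning the post-move mean with the pre-move mean.
        y = x.copy()
        y[50:] -= y[50:].mean() - y[:50].mean()
        return y

    def clip_outliers(x):
        # Replace values more than 3 sigma from the mean with the mean.
        y = x.copy()
        m, s = y.mean(), y.std()
        y[np.abs(y - m) > 3 * s] = m
        return y

    a = clip_outliers(fix_step(t))
    b = fix_step(clip_outliers(t))
    print(np.max(np.abs(a - b)))   # non-zero: the two orders disagree

Here the rogue reading contaminates the step estimate when the step is fixed first, so the two orders leave different series behind.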

January 2, 2010 12:21 pm

“The second question, the integrity of the data, is different. People say “Yes, they destroyed emails, and hid from Freedom of information Acts, and messed with proxies, and fought to keep other scientists’ papers out of the journals … but that doesn’t affect the data, the data is still good.” Which sounds reasonable.”
That doesn’t sound the least bit reasonable. If they do any of these things at all, then the data has to be considered dirty until proven otherwise. We want to be looking also at these CO2-measuring stations. The main one is bizarrely located over a volcano. Those figures ought to be ruled out as hopelessly corrupted right there. We see the pattern of collusion. So we ought to assume collusion is part of this CO2 measuring as well as everything else.

Murf
January 2, 2010 6:38 pm

Maybe it’s just me and an inability to communicate clearly, or maybe what I’m saying really doesn’t make sense, but it seems to me my point is not being picked up on at this site. I think I’m going to fold up here and go to other sites. Maybe I’ll try an alarmist site and see if I can mount a challenge with anybody there.
Thanks for responding, even though it didn’t seem like my posts were being well read.
Happy New Year and good luck to all with your endeavors.
M

Geoff Sherrington
January 2, 2010 7:24 pm

Murf,
Don’t throw a hissy fit and leave.
There are 2 barriers.
1. It is not so clear that what you are doing is an advance on what has been done before. It pays to advertise your incremental improvements; it is unsafe to assume that busy readers will detect them.
2. It’s fairly easy to find problems and smoking guns; but the destination we seek is the fundamental “truth”, inasmuch as there are few truths in science. Even since Willis started this thread, other readers have brought to light some important early info on Darwin that materially affects the picture. If you can do that too, for the region you have selected, then that is a gain for which you will be appreciated.

January 3, 2010 2:52 am

Murf, I think your arguments are fine. But they are a little old hat. And there is really no point in meeting these frauds halfway, or trying to reason with them. If their data is dirty, or suspected of being so, we cannot be making judgements on it. Next thing you will forget that it’s suspect. We always have to go back to first assumptions and to the veracity of the data. If you jump in halfway, with data you know may be bad, you are running the risk of giving them an “out”.
Really it’s pointless running off to a religious site with your argument. You will be scorned, blocked, belittled or co-opted. The first step is always to make sure of the rightness of the data. The veracity of one’s statistical methodology comes second. Or if you were to jump in halfway, then what you make of that becomes a minor point, since what we really are after is honest data.

Pappadave
January 6, 2010 5:19 am

Well, for MY part, I’ve STILL got about 2 feet of “global warming” in my driveway here in Oklahoma City left over from Christmas and we’re expecting record low wind chill temps by this time tomorrow (1/7/2010)!

January 6, 2010 5:43 am

This link provides some really good on-the-ground info about the Darwin sites:
http://www.john-daly.com/darwin.htm

Cliff B
January 8, 2010 5:55 pm

I’m afraid I’m coming to this discussion a little late. I have some comments and questions – I would be grateful for any answers to the questions, which I would assume are easily answered by anyone familiar with Australian Bureau of Meteorology (BOM) statistics.
Firstly, it seems to me that the ‘folks at CRU’ went to some trouble to address the questions that Karlen raised. Regardless of whether one thinks the answers are complete or conclusive, this suggests openness, rather than the opposite. The easiest course for them would have been simply to ignore him.
Secondly, I had a look at the charts of average annual maximum and minimum temperatures for Darwin Airport, which are available in the ‘climate data online’ section of the BOM website, and go back to 1941. I assume, and perhaps someone can correct me if I am wrong, that these averages are calculated directly from the raw data, and that either records were not kept for this site prior to 1941, or, if there were records kept, the BOM did not include them because it considered them to be inconsistent with the later data. Just eyeballing these charts, there seems to be a slight upward trend in the average annual maximum figures, and no trend at all in the average annual minimum temperatures.
Finally, the BOM website also has what it calls a high quality data series for average annual maximum and minimum temperatures for this site (Darwin Airport). The charts go back to about 1910. From 1941 on, these charts show the reverse of what is shown in the charts mentioned above – that is, there is a very very slight upward trend in the maximum averages, and a pronounced upward trend in the minimum averages. The ‘high quality’ data (I’m not using the quotation marks in a pejorative sense, but simply to distinguish this data from that mentioned above) for Darwin Airport are the only ‘high quality’ data provided for any Northern Territory site by the BOM. I am assuming that these ‘high quality’ data have been produced by processing the original data in some way – again I would appreciate any comments on this, and on why the two sets of data are so different and cover different time periods.
Of course I could ask these questions of the BOM but looking at the previous commentary I get the impression that some commentators are quite au fait with the BOM statistics and could probably give me a quicker answer.

Geoff Sherrington
January 8, 2010 11:10 pm

Cliff B (17:55:28) :
There is a great deal of information on blogs about Darwin’s temperature records. Much of it is supposition and guesswork.
In answer to your questions, might I respectfully suggest that you re-read the introduction by Willis and then that you search for the further entries by him and by me (an Australian).
It is extremely difficult to obtain original records of Darwin’s climate, from the BOM or anywhere. I am in the course of seeking access to library records, for a USA Dept of Energy document which covers some of the earlier years.
The information published by the BOM goes through reviews from time to time, and a review can mean that adjustments are made. Some of the official papers regarding adjustments are not internally consistent. I have not been successful in determining whether the data sent from the BOM to NOAA – or to whomever initially receives the data processed by GISS – is raw, homogenised or adjusted. I have not been able to eliminate the possibility that multiple adjustments are made to the contemporary data, some done in Australia and some done in the USA. Likewise, there is a shortage of material explaining what the adjustments are and what their magnitude is.
Therefore, it would not be accurate to assume that the Karlen and Eschenbach questions have been answered.
As one who has visited Darwin many, many times since 1960, if “feel” and “observation” have any value, I have not noticed any anomalous event attributable to global warming. Darwin is but one of many places about which I could make the same comment.
The whole uncertainty could be resolved by an official statement from the BOM and another from GISS, but each body seems reluctant to act, for reasons beyond my comprehension.

Martin Bennett
January 11, 2010 10:54 pm

Europe is currently experiencing its coldest winter for many decades. Since the global warming “science” is so “settled”, we must be sensible and attribute this to global warming!

Pappadave
January 12, 2010 2:06 am

When I landed in Darwin in 1968 it was swelteringly hot. Admittedly, this was in February–summer in the southern hemisphere–and admittedly, this part of Australia is closest to the equator and as tropical as New Guinea. Seems to me that warm temperature records in the tropics won’t really tell us much about GLOBAL warming, but why quibble? Records from all over are required to obtain a global average, one supposes.

January 12, 2010 5:47 pm

Pappadave. The point is that their data is dirty. We have to be disciplined about it. We cannot use this data. Nor can we accept global warmers’ arguments based on corrupted data. It might be nice to accept this data, given the lack of choices in the matter. But we cannot. We can’t put up with these guys making sweeping statements about how the 90s are warmer than the 30s or the noughties are warmer than the 90s. They cannot possibly know any of this since their data is no good.

January 18, 2010 5:43 am

Fahrenheit first proposed a standardized scale in 1724. Building something that is reliable, manufactured in a standardized way, and calibrated so as to be comparable from one unit to another, does not happen immediately.
The impulse to keep precise records did not occur right away.
Wasn’t it about the Civil War before we started seeing actual measurements on a standardized, precise thermometer recorded in a systematic way?
And then the SAMPLE set consisted only of a few large European cities. A large number of measurements distributed throughout the Earth did not exist until around World War II, during the big push for aviation.
I am not saying that people did not have thermometers.
That is a very different question from someone keeping regular, consistent, temperature measurements in a reliable record.
And that pretty much needs to be daily. For example, if you have a temperature measurement for Amsterdam on May 20, 1910, but in South Africa you have only a temperature measurement on May 31, 1910, how useful is that for comparison?
Measurements really would need to be on the same day in different cities throughout the world to have much value as a set of RECORDS.
People playing with thermometers is very different from keeping reliable, precise, and usable records.

KevinUK
January 18, 2010 2:52 pm

For any Australians (particularly Western Australians) following this thread who are thinking of applying to Kevin Rudd for a rebate on their carbon taxes, have a look at this new thread that I’ve just put up on ‘diggingintheclay’.
Mapping global warming
The main conclusion reached in the thread is that, based on the evidence shown in the colour-coded trend maps presented in the thread, ‘global warming’ is hardly global at all – it is in fact largely NH winter warming. I’ve stated that given what the maps show, it’s hard to see how CO2 could be the cause of this warming unless the demon CO2 is happy to allow notable exceptions, choosily warming parts of the planet while allowing other parts to cool at the same time.
I’ve suggested that Western Australians apply for a rebate on their carbon taxes and have also recommended where us ‘pommies’ should all go if we want a good tan this summer.
Regards
KevinUK

Samboc
January 25, 2010 3:38 am

Quote “I’ve suggested that Western Australians apply for a rebate on their carbon taxes and have also recommended where us ‘pommies’ should all go if we want a good tan this summer.”
Try Florida – had their coldest winter ever, I believe. Bring your skis.
Poor old Oz – very hot summer, but it snowed down to 1000 m in mid-January.
Very odd – a very HOT summer with “snow”. Maybe someone can explain.
I can’t.

Samboc
January 25, 2010 4:10 am

I wonder if we ( the Skeptics) have got it all wrong.
The world is heading into a full-blown Ice Age, and Al Gore and his cohorts are trying to mislead the population to save them from the fear of a slow death.
Tell everyone that the world is warming to hide them from the truth.
When you freeze peacefully in your bed at night you won’t know you have been protected from the “Perils of Global Warming”.

Jon Brooke
February 1, 2010 1:41 am

First time I’ve looked at this: a climate sceptic (I’m not one) threw “Darwin Zero” into a discussion and I googled it.
The first thing that strikes me is that you put up a graph from the IPCC from about 1920 and then the GHCN graph from about 1890 and say “they look nothing like each other” without pointing out the different date ranges.
But if you compare the GHCN graph from 1920 onwards, actually, it does look remarkably like the other graph.
Of course I can see that there does look to have been a significant drop in temperature before this date (and I haven’t looked for an explanation for that – maybe another discussion), but your point seems to be that the data have been (mal)adjusted, whereas I find it hard to see what you claim is blatantly obvious.

Jon Brooke
February 1, 2010 2:00 am

Me again.
Not sure how much time you have for this, but maybe you could do a similar analysis on a RANDOM sample of other stations worldwide.
I see loads of people saying, “wow, that just shows the whole thing is corrupt”, but the raw data is there for anyone to look at. Let’s see some more wide-scale alternative analysis, not just a cherry-picked one to make a point.

Geoff Sherrington
February 1, 2010 3:06 pm

Jon Brooke (02:00:24)
Why don’t you select a few stations and do some analysis for yourself and show it here? If you lack the skills to do that, you probably lack the understanding to comment as you did.

Pappadave
February 1, 2010 3:44 pm

Sorry to burst your bubble, but the data ARE bad and were designed to serve a specific purpose: to convince everyone that we were in “imminent peril” from global warming and that global warming is caused by human CO2 emissions. WE knew it was BS from the outset, since CO2 is such a minute part of our atmosphere and an even smaller part of the totality of so-called “greenhouse gases” present IN the atmosphere, and since CO2 is absolutely NECESSARY for life to exist on the planet. Some of you folks so wedded to the religion of anthropogenic global warming need to get a life and see what’s been before your eyes all along.

Jon Brooke
February 2, 2010 4:27 pm

Geoff,
I’m the one who essentially trusts the existing analysis, remember?
It’s you who is trying to convince the world that the data is bad. So why don’t *you* select a few stations? My suspicion is that, far from this station being one chosen at random, it is actually the “worst” that Mr Watts could find; and even in this case his analysis is far from convincing – he derides the modifications made to the data by others but has applied his own modifications instead and asks that people accept them at face value.
Personally I’m unconvinced, but if you think that this analysis could be applied to other stations, let’s see them.

Geoff Sherrington
February 2, 2010 7:07 pm

Jon Brooke,
There are several dozen stations where I have looked into the matter graphically, using data from a number of sources and stages of adjustment.
Because I live in Oz, I have used Oz data, where there is less chance that it has gone through the overseas sausage machines.
There was an earlier post where a distribution was given, to show that as many adjustments were downwards as upwards. The place to go looking is on the limbs of the bell curve, and Darwin is one of these. Not the worst, just one. Another is Coonabarabran, but its early data was used by the adjusters even though our own BOM has deleted the temperatures pre-1950 on grounds of poor quality. Because I could not identify many Australian stations in the outliers, I did not proceed further.
There is a problem with supporting data. It is very hard to find out, for a particular station from another country, whether it had home-country adjustments before it went into GISS etc. for more. The Oz data are being refined all the time, but GISS probably cannot tell you if they are using BOM refined version 1, 2, 5, 10 or 20; or when they started using a version, or when they stopped and changed to another, or whether they deleted the first try and substituted the next or latest …
Let’s call a spade a spade. The GISS and HadCRU and NOAA data sets are an untidy mess of record keeping, including some adjustments that any prudent scientist would have to regard as prima facie evidence of unjustified fiddling.
Darwin is but one. It takes only one to establish a principle.
Einstein: “No amount of experimentation can ever prove me right; a single experiment can prove me wrong.”

Pappadave
February 3, 2010 2:06 am

Let me write here about what all this means to the average non-scientists “out there” who see this fooling around with warming data. First, they (the so-called climate “scientists”) take about 100 years or so of temperature data and invent a computer program to “adjust” the data in some way. “Well, it was adjusted by computer, which takes the human biases out of the process.” Nonsense! All one has to do is build human biases into the computer PROGRAMMING. Secondly, they present the “adjusted” data as somehow being “proof” that warming is occurring. It is NOT “proof” of any sort…other than proof that the “scientists” involved are capable of manipulating the data to fit a theory.

KevinUK
February 3, 2010 3:55 am

Jon Brooke,
Jon Brooke (16:27:40) :
“My suspicion is that far from this station being one chosen at random, it is actually the “worst” that Mr Watts could find,”
Have you actually taken the trouble to read back through this whole thread? It doesn’t look like you have, otherwise you’d know that it is Willis Eschenbach who has done the analysis for Darwin and not Anthony Watts. You would also have noticed
KevinUK (10:56:57) :
“Willis,
With vjones’s help and with the aid of EMSmith’s excellent documentation, I’ve been carrying out my own analysis of the NOAA GHCN data. My first step was to reproduce your excellent analysis for Darwin (unlike the Team, who think that ‘there’s nothing to see here, move on’). I’ve therefore been applying the scientific method and have attempted to falsify your analysis. I’m sorry (actually I’m glad) to say that I failed! I’ve reproduced your charts and results almost 100% and have documented my efforts on vjones’s blog ‘diggingintheclay’. You can read the thread in which I reproduce your analysis by clicking on the link below.
Reproducing Willis Eschenbach’s WUWT Darwin analysis
As I’m sure you already know and appreciate, science progresses by ’standing on the shoulders of giants’, so I’ve taken the liberty of further extending your excellent analysis for Darwin to all the WMO stations in the NOAA GHCN dataset.
Specifically, I’ve attempted to answer the question posed by others on your original Darwin thread as to whether or not Darwin is a special case.
Well judge for yourself by clicking on the link below which documents my extension of your analysis to include the whole NOAA GHCN dataset.
Physically unjustifiable NOAA GHCN adjustments
The following is an excerpt from the thread
“In total, I have found 194 instances of WMO stations where “cooling” has been turned into “warming” by virtue of the adjustments made by NOAA to the raw data. As can be seen from the following “Cooling turned into warming” table (Table 1) below, which lists the Top 30 WMO stations on the “cooling to warming” list, Darwin is ranked in only 26th place! The list is sorted by the absolute difference in the magnitude of the raw to adjusted slopes, i.e. the list is ranked so that the worst case of “cooling” converted to significant “warming” comes first, followed by the next worst, etc.
It’s clear from looking at the list that Darwin is certainly not “just a special case” and that in fact there are many other cases of WMO stations where (as with Darwin) NOAA have performed physically unjustifiable adjustments to the raw data. As can be seen from Table 1, many of these adjustments result in trend slopes which are greater than the IPCC’s claimed 0.6 deg C/century warming during the 20th century, said by the IPCC to be caused by man’s emissions of CO2 through the burning of fossil fuels.”

“Personally I’m unconvinced, but if you think that this analysis could be applied to other stations, let’s see them”
So click on the links above and you’ll be able to ‘see the analysis applied to other stations’, and hopefully then you will be convinced that the adjustments NOAA make to the raw data for many stations – whether they result in ‘cooling turned into warming’ or ‘warming turned into cooling’ – are just not physically justifiable. Even NOAA have now admitted that this is the case and are in the process of correcting their adjustment algorithms. (A minimal sketch of this raw-vs-adjusted slope comparison follows.)
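
KevinUK’s ranking is straightforward to reproduce in outline: fit a least-squares slope to the raw and the adjusted series for each station, flag the sign flips, and sort by the absolute slope difference. The sketch below uses synthetic series – the real NOAA GHCN raw and adjusted files would be substituted in – so the specific output is illustrative only:

    import numpy as np
    import pandas as pd

    def slope(series: pd.Series) -> float:
        """Least-squares trend (deg C per year) over the non-missing values."""
        s = series.dropna()
        return np.polyfit(s.index, s.values, 1)[0]

    # Hypothetical raw and adjusted annual series, one column per station.
    rng = np.random.default_rng(3)
    years = np.arange(1900, 2001)
    raw = pd.DataFrame(rng.normal(25, 0.5, (len(years), 100)), index=years)
    adjusted = raw + np.linspace(0, 1, len(years))[:, None] * rng.random(100)

    rows = []
    for stn in raw.columns:
        r, a = slope(raw[stn]), slope(adjusted[stn])
        rows.append({"station": stn, "raw_slope": r, "adj_slope": a,
                     "flipped": bool(r < 0 <= a),  # cooling turned into warming
                     "abs_diff": abs(a - r)})
    table = pd.DataFrame(rows).sort_values("abs_diff", ascending=False)
    print(table[table["flipped"]].head(30))        # a "Top 30"-style listing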