The Smoking Gun At Darwin Zero

by Willis Eschenbach

People keep saying “Yes, the Climategate scientists behaved badly. But that doesn’t mean the data is bad. That doesn’t mean the earth is not warming.”

Darwin Airport - by Dominic Perrin via Panoramio

Let me start with the second objection first. The earth has generally been warming since the Little Ice Age, around 1650. There is general agreement that the earth has warmed since then. See e.g. Akasofu (http://www.iarc.uaf.edu/highlights/2007/akasofu_3_07/Earth_recovering_from_LIA.pdf). Climategate doesn’t affect that.

The first question, the integrity of the data, is different. People say “Yes, they destroyed emails, and hid from Freedom of Information Acts, and messed with proxies, and fought to keep other scientists’ papers out of the journals … but that doesn’t affect the data, the data is still good.” Which sounds reasonable.

There are three main global temperature datasets. One is at CRU, the Climatic Research Unit of the University of East Anglia, where we’ve been trying to get access to the raw numbers. One is at NOAA/GHCN, the Global Historical Climatology Network. The final one is at NASA/GISS, the Goddard Institute for Space Studies. The three groups take raw data and “homogenize” it to remove things like the 2C jump that appears when a station is moved to a warmer location. The three global temperature records are usually called CRU, GISS, and GHCN. Both GISS and CRU, however, get almost all of their raw data from GHCN. All three produce very similar global historical temperature records from the raw data.

So I’m still on my multi-year quest to understand the climate data. You never know where this data chase will lead. This time, it has landed me in Australia. I got to thinking about Professor Wibjorn Karlen’s statement about Australia that I quoted here (http://wattsupwiththat.com/2009/11/29/when-results-go-bad/):

Another example is Australia. NASA [GHCN] only presents 3 stations covering the period 1897-1992. What kind of data is the IPCC Australia diagram based on?

If any trend it is a slight cooling. However, if a shorter period (1949-2005) is used, the temperature has increased substantially. The Australians have many stations and have published more detailed maps of changes and trends.

The folks at CRU told Wibjorn that he was just plain wrong. Here is what they said is right: the record Wibjorn was asking about, Fig. 9.12 in the UN IPCC Fourth Assessment Report, showing Northern Australia:

Figure 1. Temperature trends and model results in Northern Australia. Black line is observations (from Fig. 9.12 of the UN IPCC Fourth Assessment Report). Covers the area from 110E to 155E and from 30S to 11S. Based on the CRU land temperature data.

One of the things revealed in the released CRU emails is that CRU basically uses the Global Historical Climatology Network (GHCN) dataset for its raw data. So I looked at the GHCN dataset. There I find, as Wibjorn had said, three stations in North Australia, and nine stations in all of Australia, that cover the period 1900-2000. Here is the average of the GHCN unadjusted data for those three Northern stations, from AIS (http://www.appinsys.com/GlobalWarming/climate.aspx):

Figure 2. GHCN Raw Data, All 100-yr stations in IPCC area above.
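(For anyone who wants to reproduce an average like Figure 2, the arithmetic is nothing exotic. Below is a minimal sketch in Python; the {station_id: {year: temperature}} structure is a hypothetical stand-in for data parsed out of the GHCN unadjusted file, and the parsing itself is omitted.)

    # Minimal sketch: a plain average of raw annual station data.
    # `stations` maps station_id -> {year: annual mean temperature},
    # a hypothetical structure standing in for parsed GHCN raw data.

    def regional_average(stations, first_year=1900, last_year=2000):
        """Year-by-year mean across stations (no gridding, no weighting)."""
        averages = {}
        for year in range(first_year, last_year + 1):
            values = [rec[year] for rec in stations.values() if year in rec]
            if values:  # skip years in which no station reported
                averages[year] = sum(values) / len(values)
        return averages

This is a plain station mean; gridded products anomalize and area-weight the data first, which matters when coverage changes over time, but for a handful of stations in one region the simple mean tells the same story.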

So once again Wibjorn is correct: this looks nothing like the corresponding IPCC temperature record for Australia. But it’s too soon to tell; Professor Karlen was looking at only three stations. Three is not a lot of stations, but that’s all of the century-long Australian records we have in the IPCC-specified region. OK, we’ve seen the longest station records, so let’s throw more records into the mix. Here’s every station in the UN IPCC-specified region with temperature records that extend up to the year 2000, no matter when they started: 30 stations in all.

Figure 3. GHCN Raw Data, All stations extending to 2000 in IPCC area above.

Still no similarity with the IPCC record. So I looked at every station in the area. That’s 222 stations. Here’s that result:

Figure 4. GHCN Raw Data, all 222 stations in the IPCC area above.

So you can see why Wibjorn was concerned. This looks nothing like the UN IPCC data, which came from the CRU, which was based on the GHCN data. Why the difference?

The answer is that these graphs all use the raw GHCN data, while the IPCC uses the “adjusted” data. GHCN adjusts the data to remove what it calls “inhomogeneities”. So on a whim I thought I’d take a look at the first station on the list, Darwin Airport, to see what an inhomogeneity might look like when it was at home, and to find out how large the GHCN adjustment for the Darwin inhomogeneities was.

First, what is an “inhomogeneity”? I can do no better than quote from GHCN:

Most long-term climate stations have undergone changes that make a time series of their observations inhomogeneous. There are many causes for the discontinuities, including changes in instruments, shelters, the environment around the shelter, the location of the station, the time of observation, and the method used to calculate mean temperature. Often several of these occur at the same time, as is often the case with the introduction of automatic weather stations that is occurring in many parts of the world. Before one can reliably use such climate data for analysis of longterm climate change, adjustments are needed to compensate for the nonclimatic discontinuities.

That makes sense. The raw data will have jumps from station moves and the like. We don’t want to think it’s warming just because the thermometer was moved to a warmer location. Unpleasant as it may seem, we have to adjust for those as best we can.

I always like to start with the rawest data, so I can understand the adjustments. At Darwin there are five separate individual station records that are combined to make up the final Darwin record. These are the individual records of stations in the area, which are numbered from zero to four:

DATA SOURCE: http://data.giss.nasa.gov/cgi-bin/gistemp/findstation.py?datatype=gistemp&data_set=0&name=darwin

Figure 5. Five individual temperature records for Darwin, plus station count (green line). This raw data is downloaded from GISS, but GISS uses the GHCN raw data as the starting point for its analysis.

Darwin does have a few advantages over other stations with multiple records. There is a continuous record from 1941 to the present (Station 1). There is also a continuous record covering a full century. Finally, the stations are in very close agreement over the entire period of the record. In fact, where there are multiple stations in operation they are so close that you can’t see the records behind Station Zero.

This is an ideal station to examine, because it also illustrates many of the problems with raw temperature station data.

  • There is no one record that covers the whole period.
  • The shortest record is only nine years long.
  • There are gaps of a month and more in almost all of the records (see the sketch after this list).
  • It looks like there are problems with the data at around 1941.
  • Most of the datasets are missing months.
  • For most of the period there are few nearby stations.
  • There is no one year covered by all five records.
  • The temperature dropped over a six-year period, from a high in 1936 to a low in 1941. The station did move in 1941 … but what happened in the previous six years?
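Since several of those problems come down to missing months, here is the promised sketch of how you might flag them. It is a minimal illustration only: the {(year, month): temperature} dict is a hypothetical stand-in for a parsed monthly record (the real GHCN files mark missing values with a flag such as -9999 rather than omitting them).

    # Minimal sketch: list the missing months in a monthly record.
    # `record` is a hypothetical dict {(year, month): temperature}.

    def missing_months(record, first_year, last_year):
        gaps = []
        for year in range(first_year, last_year + 1):
            for month in range(1, 13):
                if (year, month) not in record:
                    gaps.append((year, month))
        return gaps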

Resolving station records is a judgment call. First off, you have to decide whether what you are looking at needs any changes at all. In Darwin’s case, it’s a close call. The record seems to be screwed up around 1941, but not in the year of the move.

Also, although the 1941 temperature shift seems large, I see a similar-sized shift from 1992 to 1999. Looking at the whole picture, I think I’d vote to leave it as it is; that’s always the best option when you don’t have other evidence. First, do no harm.

However, there’s a case to be made for adjusting it, particularly given the 1941 station move. If I decided to adjust Darwin, I’d do it like this:

Figure 6. A possible adjustment for Darwin. Black line shows the total amount of the adjustment, on the right scale, and shows the timing of the change.

I shifted the pre-1941 data down by about 0.6C. We end up with little change end to end in my “adjusted” data (shown in red); it’s neither warming nor cooling. However, it reduces the apparent cooling in the raw data. Post-1941, where the other records overlap, they are very close, so I wouldn’t adjust them in any way. Why should we adjust those? They all show exactly the same thing.
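In code, an adjustment like that is a one-liner. A minimal sketch, with the roughly 0.6C offset and the 1941 breakpoint taken from the discussion above, and the input series a hypothetical {year: temperature} dict:

    # Minimal sketch: shift everything before a documented break year
    # down by a fixed offset. The offset and break year are the ones
    # discussed above; the series itself is a hypothetical placeholder.

    def step_adjust(series, break_year=1941, offset=-0.6):
        """Return a copy of {year: temp} with pre-break years shifted by offset."""
        return {year: temp + offset if year < break_year else temp
                for year, temp in series.items()}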

OK, so that’s how I’d homogenize the data if I had to, but I vote against adjusting it at all. My version changes only one station record (Darwin Zero) and leaves the rest untouched.

Then I went to look at what happens when the GHCN removes the “inhomogeneities” to “adjust” the data. Of the five raw datasets, the GHCN discards two, likely because they are short and duplicate existing longer records. The three remaining records are first “homogenized” and then averaged to give the “GHCN Adjusted” temperature record for Darwin.

To my great surprise, here’s what I found. To show the full effect, I am plotting both datasets starting at the same point (rather than ending at the same point, as they are often shown).

Figure 7. GHCN homogeneity adjustments to the Darwin Airport combined record.

YIKES! Before getting homogenized, temperatures in Darwin were falling at 0.7 Celsius per century … but after the homogenization, they were warming at 1.2 Celsius per century. And the adjustment that they made was over two degrees per century … when those guys “adjust”, they don’t mess around. And the adjustment is an odd shape: it first moves stepwise, then climbs to level off at 2.4C.
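For anyone who wants to check trend figures like these, the number is just an ordinary least-squares slope over the annual series, scaled from degrees per year to degrees per century. A minimal sketch in pure Python, with a hypothetical {year: temperature} input:

    # Minimal sketch: least-squares trend of an annual series,
    # reported in degrees C per century. Missing years are fine;
    # the fit uses whatever years are present.

    def trend_per_century(series):
        years = sorted(series)
        n = len(years)
        mean_x = sum(years) / n
        mean_y = sum(series[y] for y in years) / n
        num = sum((y - mean_x) * (series[y] - mean_y) for y in years)
        den = sum((y - mean_x) ** 2 for y in years)
        return 100.0 * num / den  # slope is per year; x100 gives per century

Running it on the raw series and then on the adjusted one gives the before-and-after comparison; the alignment used in Figure 7 is just subtracting each series’ starting value so both begin at zero.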

Of course, that led me to look at exactly how the GHCN “adjusts” the temperature data. Here’s what they say in An Overview of the GHCN Database (http://www.ncdc.noaa.gov/oa/climate/ghcn-monthly/images/ghcn_temp_overview.pdf):

GHCN temperature data include two different datasets: the original data and a homogeneity-adjusted dataset. All homogeneity testing was done on annual time series. The homogeneity-adjustment technique used two steps.

The first step was creating a homogeneous reference series for each station (Peterson and Easterling 1994). Building a completely homogeneous reference series using data with unknown inhomogeneities may be impossible, but we used several techniques to minimize any potential inhomogeneities in the reference series.

In creating each year’s first difference reference series, we used the five most highly correlated neighboring stations that had enough data to accurately model the candidate station.

The final technique we used to minimize inhomogeneities in the reference series used the mean of the central three values (of the five neighboring station values) to create the first difference reference series.

Fair enough, that all sounds good. They pick five neighboring stations, and average them. Then they compare the average to the station in question. If it looks wonky compared to the average of the reference five, they check any historical records for changes, and if necessary, they homogenize the poor data mercilessly. I have some problems with what they do to homogenize it, but that’s how they identify the inhomogeneous stations.
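As best I can read that description, the reference-series step looks something like the sketch below. To be clear, this is my hedged reconstruction of the quoted prose, not GHCN’s actual code: first-difference the annual series, rank the neighbors by correlation with the candidate, keep the top five, and in each year take the mean of the central three of their five values.

    # Hedged sketch of the reference-series construction described in
    # the GHCN overview quoted above; NOT their actual code. Series
    # are hypothetical {year: temperature} dicts.

    def first_diff(series):
        years = sorted(series)
        return {b: series[b] - series[a] for a, b in zip(years, years[1:])}

    def correlation(x, y):
        keys = sorted(set(x) & set(y))
        n = len(keys)
        mx = sum(x[k] for k in keys) / n
        my = sum(y[k] for k in keys) / n
        cov = sum((x[k] - mx) * (y[k] - my) for k in keys)
        vx = sum((x[k] - mx) ** 2 for k in keys)
        vy = sum((y[k] - my) ** 2 for k in keys)
        return cov / (vx * vy) ** 0.5

    def reference_series(candidate, neighbors):
        """Mean of the central 3 of the 5 best-correlated neighbors, per year."""
        dc = first_diff(candidate)
        dn = [first_diff(s) for s in neighbors]
        top5 = sorted(dn, key=lambda d: correlation(dc, d), reverse=True)[:5]
        ref = {}
        for year in sorted(dc):
            values = sorted(d[year] for d in top5 if year in d)
            if len(values) == 5:
                ref[year] = sum(values[1:4]) / 3  # drop highest and lowest
        return ref

Writing it out makes the built-in assumption obvious: every step presumes five usable neighbors.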

OK … but given the scarcity of stations in Australia, I wondered how they would find five “neighboring stations” in 1941 …

So I looked it up. The nearest station that covers the year 1941 is 500 km away from Darwin. Not only is it 500 km away, it is the only station within 750 km of Darwin that covers the 1941 time period. (It’s also a pub, Daly Waters Pub to be exact, but hey, it’s Australia, good on ya.) So there simply aren’t five stations to make a “reference series” out of to check the 1936-1941 drop at Darwin.
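Checking neighbor availability like that takes nothing more than a great-circle distance filter over the station inventory. A minimal sketch; the Darwin coordinates are approximate, and the station list is a hypothetical placeholder for the GHCN inventory file:

    import math

    # Minimal sketch: find stations within a given radius of Darwin.
    # Coordinates are approximate; real ones come from the GHCN
    # station inventory file.

    DARWIN = (-12.42, 130.89)  # approximate Darwin Airport lat/lon

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance in kilometres."""
        r = 6371.0  # mean Earth radius, km
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def neighbors_within(stations, radius_km=750):
        """stations: list of (name, lat, lon); returns names within the radius."""
        return [name for name, lat, lon in stations
                if haversine_km(DARWIN[0], DARWIN[1], lat, lon) <= radius_km]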

Intrigued by the curious shape of the average of the homogenized Darwin records, I then went to see how they had homogenized each of the individual station records. What made up that strange average shown in Fig. 7? I started at zero with the earliest record. Here is Station Zero at Darwin, showing the raw and the homogenized versions.

Figure 8. Darwin Zero homogeneity adjustments. Black line shows the amount and timing of the adjustments.

Yikes again, double yikes! What on earth justifies that adjustment? How can they do that? We have five different records covering Darwin from 1941 on. They all agree almost exactly. Why adjust them at all? They’ve just added a huge, artificial, totally imaginary trend to the last half of the raw data! Now it looks like the IPCC diagram in Figure 1, all right … but a six-degree-per-century trend? And in the shape of a regular stepped pyramid climbing to heaven? What’s up with that?

Those, dear friends, are the clumsy fingerprints of someone messing with the data Egyptian style … they are indisputable evidence that the “homogenized” data has been changed to fit someone’s preconceptions about whether the earth is warming.

One thing is clear from this. People who say that “Climategate was only about scientists behaving badly, but the data is OK” are wrong. At least one part of the data is bad, too. The Smoking Gun for that statement is at Darwin Zero.

So once again, I’m left with an unsolved mystery. How and why did the GHCN “adjust” Darwin’s historical temperature to show radical warming? Why did they adjust it stepwise? Do Phil Jones and the CRU folks use the “adjusted” or the raw GHCN dataset? My guess is the adjusted one since it shows warming, but of course we still don’t know … because despite all of this, the CRU still hasn’t released the list of data that they actually use, just the station list.

Another odd fact: the GHCN adjusted Station 1 to match Darwin Zero’s strange adjustment, but they left Station 2 (which covers much of the same period and, as per Fig. 5, is in excellent agreement with Station Zero and Station 1) totally untouched. They only homogenized two of the three. Then they averaged them.

That way, you get an average that looks kinda real, I guess. It “hides the decline”.

Oh, and for what it’s worth, care to know the way that GISS deals with this problem? Well, they only use the Darwin data after 1963, a fine way of neatly avoiding the question … and also a fine way to throw away all of the inconveniently colder data prior to 1941. It’s likely a better choice than the GHCN monstrosity, but it’s a hard one to justify.

Now, I want to be clear here. The blatantly bogus GHCN adjustment for this one station does NOT mean that the earth is not warming. It also does NOT mean that the three records (CRU, GISS, and GHCN) are generally wrong. This may be an isolated incident; we don’t know. But every time the data gets revised and homogenized, the trends keep increasing. Now, GISS does its own adjustments. However, as they keep telling us, they get the same answer as GHCN gets … which makes their numbers suspect as well.

And CRU? Who knows what they use? We’re still waiting on that one, no data yet …

What this does show is that there is at least one temperature station where the trend has been artificially increased to give a false warming where the raw data shows cooling. In addition, the average raw data for Northern Australia is quite different from the adjusted data, so there must be a number of … mmm … let me say “interesting” adjustments in Northern Australia other than just Darwin.

And with the Latin maxim “Falsus in uno, falsus in omnibus” (false in one thing, false in all) as our guide, until all of the station “adjustments” are examined, adjustments of CRU, GHCN, and GISS alike, we can’t trust anyone using homogenized numbers.

Regards to all, keep fighting the good fight,

w.

FURTHER READING:

My previous post on this subject: http://wattsupwiththat.com/2009/11/29/when-results-go-bad/

The late and much missed John Daly, irrepressible as always: http://www.john-daly.com/darwin.htm

More on Darwin history; it wasn’t Stevenson Screens: http://www.warwickhughes.com/blog/?p=302#comment-23412

NOTE: Figures 7 and 8 updated to fix a typo in the titles. 8:30PM PST 12/8 – Anthony



909 thoughts on “The Smoking Gun At Darwin Zero”

  1. John F. Kennedy was assassinated Friday, November 22, 1963. I think that’s when all the shenanigans began.

  2. Anyone can tell you Darwin and Northern Australia are the tropical part of the country, where it’s always warm! ;-)

  3. Wow, just amazing work and effort. This has just got to get publicity in the MSM somehow to spark an investigation and rework of the temperature record.

  4. This is exactly the kind of public, open examination of the raw and adjusted data that needs to be done for ALL stations globally to establish, once and for all, IF the entire earth is warming unnaturally (and I have yet to see any definitive proof that the current warming is unnatural).

    IF after that process has been done, out in the open for ALL interested parties to see, examine and pick holes in, and IF that shows unnatural warming, THEN I would be more than happy for John Travolta, Leonardo DiCaprio and Al Gore to give up their private jets to save the earth!

  5. Impressive detective work, congratulations!
    I wish that journalists today would work this way, instead of simply copy-pasting some press releases and showing a few pictures of polar bears looking lost in the water…

  6. Sure, that UFO sighting was faked. But look at the other 2500 UFO sightings; they obviously couldn’t all be faked (very tired of hearing that argument regarding both the data and the ‘scientists’).

    Not sure why I still find these things so shocking; I think it is mostly how they didn’t even use some made-up excuse or hide things in complex math. They literally just move the line by hand and then submit it. Usually cheating is ‘going against the spirit of the game’, but I guess sometimes it is just cheating.

    Excellent work figuring all this out, kudos.

  7. Michael, we do not need wild conspiracy theories to distract us. If you want wild conspiracy theories, go to a warmist site. As a heads up, I believe that their latest, baseless, conspiracy theory involves Russians hacking the servers at CRU!

  8. Peterson’s adjustments … climate science is really a small world.

    Peterson is also the person with the deliberately untrue statements in his NOAA internal ‘talking points’ about Anthony Watts’ study, and, of course, he is well represented in the CRU emails.

  9. Michael. Or could it have been in December 1942 when the British radio station in Hong Kong picked up radio traffic about the forthcoming attack on Pearl Harbour and deciphered it, and it was made known to Roosevelt, who kept it to himself in order to allow a way into the war? Ooohh!

  10. Darwin was bombed in February 1942. There was a build-up of military presence prior to that date (and certainly afterwards), so perhaps that has something to do with this anomalous 1941 data.

  11. It’s interesting that at least parts of the Southern Hemisphere appear to show a slight cooling during the recent period of an active sun once artificial ‘adjustments’ are stripped out.

    I have seen recent empirical evidence that counterintuitively indicates that a more turbulent flow of energy from the sun cools the stratosphere rather than warming it and that the cooling effect in the stratosphere exceeds the value of any warming effect on the seas and troposphere from any small increase in solar power.

    http://www.nasa.gov/topics/earth/features/AGU-SABER.html

    Thus, overall during the 20th Century, for rural, unadjusted, uncorrupted sites we might see a cooling trend, especially in the southern hemisphere, where land heating from the small increase in solar power is less significant.

    However it will still depend on the precise balance between energy release from oceans to troposphere and energy transfer from troposphere to stratosphere and thence to space.

    Nevertheless I think we urgently need to get a precise grip on the temperature trend from suitably ‘pure’ sites as soon as possible.

    I think there may be surprises in store in the light of that observed effect of a more turbulent solar flow actually increasing the rate of energy loss to space from the stratosphere.

  12. It can’t be a one-off. The link below is to a graph from NOAA’s website, and it shows that over 0.5 degrees F of warming is all down to the adjustments. I find this hard to reconcile with common sense; surely they would have to adjust down for UHI effects?

  13. Willis, A fascinating article. Many thanks for doing the work…

    One question: how does the data from ground stations in Australia compare to the satellite data? I can’t find the data to compare.

    It certainly looks like there is some pretty clumsy homogenising going on. How many ground stations are used in the IPCC figures? I wonder how long it would take to re-assess all of them as you have done?

    Thanks again

    regards

    Adrian

  14. For the record, Darwin was bombed by the Japanese (using more bombs than were used on Pearl Harbor) in Feb 1942. The town’s population was only 2000 at that time and would hardly warrant adjustments for UHI. The town was razed by Cyclone Tracy on Christmas Day in 1974, by which time the population was 42000. Currently the town has a population of over 120,000 (see Wikipedia records).

    It seems neither catastrophe resulted in station moves — something I expected to read about when I first started to follow this contribution.

    I congratulate the author on his careful analysis and comparison of the raw and “value added” records. This sort of commentary cannot be ignored and needs explanation by the “powers that be”.

    Also remember that Darwin is well within the tropics (12 deg S). Would such large shifts in temperature (alleged) be expected at that latitude?

  15. Willis – thank you for the time and effort you have put into your research on the above work – this sort of work is where the real fraudulent nature of the shonky scientists will be revealed – the code holds the answers – it’s people like you, Willis, that will out this lot – once again, thank you.

  16. I think that this paper should be emailed to every single delegate in Copenhagen, every single Cabinet Minister in every country in the world which uses Representative Government and every editor of every national newspaper and media station with one simple question:

    ‘This paper judiciously, ruthlessly, relentlessly and scientifically demonstrates the true nature of the difference between raw data and adjusted data where temperature records are concerned. Given that the raw data shows cooling whilst adjusted temperature shows rapid warming, would you agree that until ALL raw data for ALL sites used in the three global temperature record organisations GISS, CRU and GHCN are made public, examined critically by INDEPENDENT parties and the nature of the adjustments understood, that NO DECISIONS CONCERNING GLOBAL ACTION ON SUPPOSED GLOBAL WARMING SHOULD TAKE PLACE?’

    And if they say no, I think we know what the attitude to all those folks is to rigorous science.

    Not in my back yard, buster.

    Well done, keep it up and produce 50 to 100 similar analyses.

    Whatever that shows up. Criminal fraud or the occasional mistake. Well-meaning mistakes or the greatest scam in history.

    And make all those politicians out there face the consequences. Resignations, imprisonments, whatever.

    It’s time for the gloves to come off…..

  17. The whole thing clearly needs to be gone through with a fine-tooth comb, station by station, with the histories and the adjustments taken into account, and it needs to be done with double-blind methodology. What is critical is that the person doing the adjustment to the readings must not know what the effect of those adjustments will be on temperature. I don’t know exactly how you do that, but it’s the only way to get an unbiased set of adjustments based solely on the merits of the station histories.

    The ones who adjust must do it by objective criteria, which have to be tested in advance to make sure that they are robust between different operators, to verify the methodology is sound, and they must adjust without knowing the effect. It is like medical double blind studies, those rating the patient symptoms do so without knowing whether the patient is part of the drug or part of the control group. Then they are applied. The results might then be superior to raw unadjusted data.

    Or maybe you just have to decide that the raw data is all we have, and that we cannot improve on it, uncertain as it is, and so you have to accept a larger measure of uncertainty in the conclusions drawn than any of us would like.

  18. This is appalling. Speaking as a signal processor, this is pseudoscience. The GHCN should release all their data so that the nature of the adjustments can be seen. They should have to validate the adjustments.

    Why? My guess is that the idea of global warming is so ingrained that anything that doesn’t conform to it is regarded as an error.

    It is astonishing that the MSM and the wider scientific community haven’t really understood what is going on, and our economies are going to be reshaped on the basis of evidence such as this. The political mainstream in the UK will not engage in any argument about AGW; “the science is settled”. Maybe in 5 years’ time, when the Arctic ice remains normal and the Earth hasn’t warmed up, common sense may prevail.

    We live in an age of enlightenment.

  19. You will find the problem in 1941 was due to the first Japanese air attack on Darwin, when most of the population headed south as fast as possible.

  20. I hope we are going to see Anthony’s work on surface data published not too long after Copenhagen.

  21. Send it to every delegate at Copenhagen, every President, Prime Minister or Cabinet Minister, every TV station, every school and every media mogul.

    Tell ‘em that the data in Darwin stinks. Stinks of shit.

    And as sewage recycling is high on the Copenhagen agenda, you’d like to stick your nosey ass into another 100 stations in the GHCN record.

  22. An adjustment trend slope of 6 degrees C per century? Maybe the thermometer was put in a fridge, and they cranked the fridge up by a degree in 1930, 1950, 1962 and 1980?

  23. Willis, I’ve just looked at the BOM site here in Australia, and 2 stations cover the period 1882 to 2009.
    The first is the Darwin Post Office, 1882 to 1941, no. 014016, and the second the Darwin Airport, 1941 to 2009, no. 014015.
    The average mean temp (high) is 32.7C for the PO and 32.0C for the airport.

  24. I am not surprised with your findings but I am very impressed with the work you have done and shared with us. Whenever someone withholds scientific data, it is natural to ask a lot of questions. In this case I have so many questions for the three climate information holders I am not sure where to start. Thank you!

  25. This behavior keeps popping up over and over, and I can’t believe that they are brazen enough to hide it in plain sight. I’ve been looking over data from historical weather stations where I live (Kamloops BC, Canada) and I find variations between stations just in a small area, which fits with the siting of stations; i.e. airport temperatures appear higher. Since I’ve started looking into climatology, I’ve been comparing “official” temperatures to my house thermometer, and there are differences of a few degrees. Using the USB temperature monitors this summer, I found it difficult to compute the average temperature of even my yard, as each physical location appears to have its own unique temporal heat signature. This is amplified by adjacent plants in the summer, and the only time one sees homogeneity is in the winter, when every place in my yard not close to the heated house is uniformly cold (about -10 C today).

    I suggest that we do some distributed processing by choosing weather stations where we live and performing the same type of data analysis. I’m in the process of writing a scraping program for Environment Canada weather sites, and then it’s just a matter of averaging daily temperatures to get monthly values and comparing them to “adjusted” values. This would be similar to Anthony’s surface stations project. If anyone has data analysis software already written to deal with averaging/displaying the data, I’d be interested in getting it, as while I like programming as a hobby, I don’t want to reinvent the wheel unless I absolutely have to.

    I’ll choose Kamloops BC as my part of the project.

  26. According to breakfast TV in the UK, the Met have just released data that proves the last decade is the warmest on record.

  27. Thanks for the brilliant and painstaking work. Just one question: is there any information about the actual locations of the stations? That is, does the homogenization account for *actual knowledge* about the possibly changing environment around the stations? If, for example, Station 2, added around 1950, was in a field surrounded by trees whereas Station 1 was on the runway…

  28. Hey, I’ve seen some adjustments similar to that somewhere else:

    [0.,0.,0.,0.,0.,-0.1,-0.25,-0.3,0.,-0.1,0.3,0.8,1.2,1.7,2.5,2.6,2.6,2.6,2.6,2.6]*0.75 ; fudge factor

  29. Willis, greetings from the lucky country and – awesome post, thanks.

    As you say “So I’m still on my multi-year quest to understand the climate data. You never know where this data chase will lead.”

    I have the same fascination; I’m just glad you have done all the hard work and turned it into something clear and digestible.

    I’m sure most informed skeptics believe the world has been warming for 150 years. But when you see a post like this you do start to wonder. Maybe the modern warming is a “regional phenomenon”.. UAH does show much less warming in the southern hemisphere..

    Are you going to do more posts on this subject? Please!

  30. I think the entire temperature record needs to be overhauled before politicians commit us to a path of economic meltdown.

  31. Your excellent analysis of this manipulation of raw data at Darwin shows what all programmers and business analysts know: a computer model will give the results it is programmed to give. In this case the artificial increase in weather station temperature readings appears fraudulent.

    An obvious question is: “Where is the scientific research that can justify or legitimise the level of GHCN adjustments shown in your Figure 8?”

    In any scientific study you inevitably have to sample datasets and sometimes remove bias from results. But such changes would normally be carefully documented and tested to demonstrate that they were not biasing the results. You would also expect error bars or confidence levels to be clearly identified so that the “accuracy” of the results can be assessed. Given the level of manipulation you have revealed on this site, the uncertainty equals or exceeds the alleged temperature rise.

    The behaviour of the climate scientists involved in this work does not inspire confidence – the apparent lack of transparency and their refusal to share data or clarify their sources and methods points to chicanery and deception rather than open scientific enquiry and debate.

  32. I must say that this is methodologically impressive. Really nothing to say about the analysis.
    But why isn’t there a program that would do a systematic screening?
    It would take all the unadjusted data in a given region, then the adjusted data, and compute the adjustment.
    That seems a rather easy way to check whether the Darwin craziness appears in many other cases or not.
    I know that I probably underestimate the amount of coding, but from the functional analysis point of view it really doesn’t seem a very hard task.
    At least not as compared to the problems of atmospheric non-equilibrium dynamics.
    If such a program doesn’t exist, it would certainly be worth writing.

  33. The ‘adjustments’ in Fig. 7 wouldn’t be based on the atmospheric CO2 levels at Mauna Loa, by any chance?

    That would be a neat way of ‘adjusting’ the data. (Data Rape, I’d call it).

    After all, we have Pershing’s evidence that the ‘science is incredibly robust’.

    Well, he got the ‘incredibly’ bit right.

  34. Will we ever get a politician brave enough to stand and fight the ever more obvious corruption in the world of climate change? Look how the Saudi minister has been vilified for speaking out about it; yes, he may have oil interests, and that has been jumped on, but the climate change gang have also got big business on their side for the renewable energy products. No-one disagrees that the climate is varying, but it is natural and we have to prepare for whatever it is going to do. Let’s stop talking about ‘greenhouse gas’ being the cause; once it is proven that CO2 is not the problem, are they then going to blame the only ‘real’ greenhouse gas – water vapour – clouds!
    Thank you Anthony for keeping the real world sane!

  35. What they are hiding, apart from just “the decline”:

    “Iceland Temperatures Higher In Both Roman & Medieval Warming Periods Than Present Temps Peer-Research Confirms”

    http://www.c3headlines.com/2009/12/iceland-temperatures-higher-in-both-roman-medieval-warming-periods-than-present-temps-peerresearch-c.html

    “Climategate: Is There Evidence That NASA/GISS Researchers Have Fabricated Global Warming? If There’s Smoke, It’s Usually A Fire “

    http://www.c3headlines.com/2009/12/climategate-is-there-evidence-that-nasagiss-researchers-have-fabricated-global-warming-if-theres-smoke.html

    (Or, context is everything.)

    “The Climate Liars: Obama Administration Claims Fossil Fuels Kills Millions – A 100% Lie, Opposite of All Known Health Facts & Statistics”

    http://www.c3headlines.com/2009/11/the-climate-liars-obama-administration-claims-fossil-fuels-kills-millions-a-100-lie-opposite-of-all-.html

    And, so very much more.

    The data is the data. The only reason to “adjust” it is to hide the fact that it was bad data to begin with. On top of that, the “adjustments,” made by the same people who couldn’t do the measurements properly, are only likely to multiply rather than “correct” any “errors” in the data. There is no reason to trust people who have been lying for decades. They have been doing it so long, it’s second nature to them. They no longer care about or know how to tell the truth.

  36. Thank you so much for this. I’m currently learning R, although it’s slow going due to other commitments, but it’s exactly these type of articles that someone like me needs. It looks like we’re losing the political fight, so the only way to respond is with the science.

    From what I’ve seen we have E.M. Smith’s work, A.J. Strata, a site called CCC, Steve McIntyre and Jean S over at CA, and I’m sure various others, and of course your good self, all working in various ways on the various datasets.

    Can a way be found to get you all together, plus interested parties willing to do some work (like me), to really work on this and produce a single temperature record, but rather than rehash CRU’s, GHCN’s or GISS’s code in something like R, actually come up with a new set of rules for adjusting temperatures.

    I’ve seen yourself, among others, complain about the way they adjust for TOBS and FILNET, and now we have this article, demonstrably showing other shenanigans going on. Whatever we came up with would probably need to be peer-reviewed to get the methodology accepted, but at least there’d be something we could all trust.

    I’d be willing to put in work, I have plenty of spare bandwidth on a fast shared server, and skills in web programming, but that said it would probably be better co-ordinated from here or CA, as you already have the presence and the interested parties coming here.

    Unless something like this is done, we’ll see the Met spending three years using the same code and coming up with almost exactly the same dataset, and we’ll have lost.

  37. I must contend that no homogenisation at all should be used on a global scale!

    The averaging will sort itself out, since what one is showing is averages.

    All homogenisation will lead to distorted data on a global scale.

    Of course if one specifically needs to collect trends for a smaller region, then it could be necessary to homogenise, but never on a global scale.

  38. As a walker in this nation, I have mixed with people who keep rainfall records in Australia. Australia is a harsh, hot place, mostly desert or desert borderline; it’s not temp, it’s rainfall we watch. Hot is hot, cold is cold.

    No one dicks with rainfall measure. Not done.

    No one dicks with station data.

    Australia needs Stevenson screens Aus wide.

  39. Stephen Wilde (00:51:39): NASA’s SABER satellite is on to something. Thanks, Stephen. This program needs to be extended.

  40. Wow. More manufactured fakeness than a million Hollywood blockbusters! Not a smoking gun, not a nuclear explosion, the birth of a new GALAXY! When this hits the fan, Copenhagen will become “broken wagon” – the wheels are falling off! Great job!

  41. RE my yonason (01:26:02) :

    I said, “The only reason to “adjust” it is to hide the fact that it was bad data to begin with.” By that I meant, of course, that if “adjustments” really were needed, then the data was bad. However, when the data is good, that’s even worse, because we are no longer dealing with incompetence, but premeditated deliberate deception.

  42. Greetings from Australia.

    Nobody in the current Australian government cares a damn about whether the temperature has gone up down or side ways. They want to implement a tax so they can collect money from the rich polluters (their words not mine) and give it to the poor working man (that’s my simplistic reading of it).

    Climate Change, Climate Change, Climate Change, Tax will fix it, Tax will fix it, Tax will fix it… Climate Change, Climate Change…..

    Now will you critical thinkers just give up and love Big Brother? 2 + 2 = 5, remember…. 2 + 2 has always equalled 5…..

    As an engineer educated in Australia in the 80’s it both breaks my heart and scares the [snip] out of me…

    (Interestingly, before 1984, Orwell’s 1984 was required reading for all year 11/12(?) students in Victoria… now I can’t find anybody under the age of 40 who has read it.)

  43. “Those, dear friends, are the clumsy fingerprints of someone messing with the data Egyptian style … they are indisputable evidence that the “homogenized” data has been changed to fit someone’s preconceptions about whether the earth is warming.”

    Wow. So GHCN blatantly adjusts the raw data to create a post-war warming trend where none existed. And the GISS data appears to match GHCN’s (once it has also been ‘adjusted’). And CRU won’t release their raw data, which doesn’t inspire much confidence – not that I had much in them after recent developments.

    From this example, and given the implications for the world economy currently being discussed in Copenhagen, I think that the precautionary principle should be adopted, and all adjusted data from GHCN, GISS and CRU should be classed as fraudulent, until proven otherwise. Willis’ essay should be sent to every politician and journalist possible.

  44. “Barry Foster (00:51:02) :

    Michael. Or could it have been in December 1942 when the British radio station in Hong Kong picked up radio traffic about the forthcoming attack on Pearl Harbour and decyphered it – and it was made known to Roosevelt, who kept it to himself in order to allow a way in to the war? Ooohh!”

    Seeing how the Japanese attack on Pearl Harbour took place a year earlier, I think this is unlikely.

    Seriously, though – this is great work and demonstrates exactly why climate science must be conducted openly and with free access not only to the raw data, but the methodology used to analyse it.

    As a layman I look at the raw data and the “homogenized” version and can only assume that “homogenized” actually means massaged to fit a political preconception.

  45. I think NASA knows that their CO2 models are flawed. Note the comment in this paper (thanks, Stephen Wilde) that SABER is directly measuring CO2 ratios where GCM models are being used in climate simulations. Interesting that one hand doesn’t know what the other is doing – or do they?

    Abstract from:

    http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20090004446_2009001269.pdf

    The Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) experiment is one of four instruments on NASA’s Thermosphere-Ionosphere-Energetics and Dynamics (TIMED) satellite. SABER measures broadband infrared limb emission and derives vertical profiles of kinetic temperature (Tk) from the lower stratosphere to approximately 120 km, and vertical profiles of carbon dioxide (CO2) volume mixing ratio (vmr) from approximately 70 km to 120 km. In this paper we report on SABER Tk/CO2 data in the mesosphere and lower thermosphere (MLT) region from the version 1.06 dataset. The continuous SABER measurements provide an excellent dataset to understand the evolution and mechanisms responsible for the global two-level structure of the mesopause altitude. SABER MLT Tk comparisons with ground-based sodium lidar and rocket falling sphere Tk measurements are generally in good agreement. However, SABER CO2 data differs significantly from TIME-GCM model simulations. Indirect CO2 validation through SABER-lidar MLT Tk comparisons and SABER-radiation transfer comparisons of nighttime 4.3 micron limb emission suggest the SABER-derived CO2 data is a better representation of the true atmospheric MLT CO2 abundance compared to model simulations of CO2 vmr.

  46. Dear Willis

    This is a great post and it demonstrates something that I have always intuitively felt about the treatment of data and the way the graphs are deliberately drawn to create alarm.

    I know you looked at the CET and would appreciate a link, which I have lost?

  47. I have just forwarded this link to one of our more science-savvy opposition senators in Australia.

    (you know the ones that revolted against the Carbon tax here in Australia and voted it down)

    Plus I had to include Andrew Bolt, hoping it will get into his column in the MSM tomorrow.

    Nothing like giving them a little Ammo.

    Great article

    Scott

  48. Excellent work Willis,
    When it comes to the agreement between the three main global temperature data sets, the term “self-fulfilling prophecy” comes to mind.
    Someone somewhere started with the notion that there has been a warming of something like 0.7 of a degree over the twentieth century, and the CRU, GISS, and GHCN have manipulated their data, using three different, contrived methodologies, until it agrees with their pre-conceived ideal.
    It appears to me that the whole exercise of reconstructing historic global temperatures owes very little to science and much more to “clever” statistics.

  49. Raw data is just that.

    And, anybody subsequently approaching the raw data must have an a priori motivation, which must be made explicit.

    If there are gaps, and bits of the raw data that are unsuitable for the purposes of the current study, then why not leave them out altogether?

    IF the motivation is to determine whether or not today’s ambient global air temperatures are hotter or colder than they were, then a continuous record is NOT required. Rather, as long as there were matching continuous sequences of a few years, this would be sufficient for the purpose.

    So why do the climate scientists need a ‘continuous record’? And for what purpose are they trying to create an artefactual proxy of the real raw data? And in so doing, aren’t they creating a subjective fiction? An artefact? A man-made simulation?

    Isn’t this similar to producing a marked-up copy of the Dead Sea Scrolls, with the corrections in felt-tipped pen, and missing bits added in in biro, and then calling it ‘the definitive data-set’?

  50. There is an unknown in the equation.

    The Darwin data were collected by the Bureau of Meteorology, who have their own sets of “adjustments”. I am trying to discover if the Darwin data sent to GHCN are unadjusted or Australian-adjusted. By coincidence, I have been working on Darwin records for some months. There was an early station shift from the Post Office to the Regional Office near the town (which would have gradually built up some UHI), then in 1962 a shift to the airport, which in 1940 was way, way out of town but which is now surrounded by suburbs on most sides, so UHI is implicated again.

    There is a BOM record of data overlap in the period 1967 to 1973. Overall, the Tmax was about 0.5 deg C higher at the airport and the Tmin was about 0.5 deg C lower at the airport during these 7 years. The Tmax averaged 31.5 deg C and 32.1 deg C at Office and Airport respectively. The Tmin averaged 23.8 and 23.2 at Office and Airport. Of course, if you take the mean, the Office is the same as the airport.

    However, my problem is that I do not know if the Office and the Airport use raw or Australian adjusted data. I suspect the latter. If you can tell me how to display graphs on this blog I’ll put up a spaghetti graph of 5 different versions of annual Taverage at Darwin, 1886 to 2008. The worst years show a difference between adjusters of about 3.5 deg C, with KNMI GHCN version 2 adjusted being lower than recent BOM official figures.

    I still do not know if any of us has seen truly raw data for Darwin.

    Or from any other Australian station.

  51. An excellent article.

    But it doesn’t stop there. In order to arrive at a “global average”, gridding is used, with backfilling of sparse areas using methods such as averaging or interpolating from “nearby” stations, taking no account of topography, etc. Any such error therefore carries more weight than in areas where records are more prolific.

    And nowhere is the margin of error introduced accounted for. It has always seemed to me that the degree of warming being measured is probably less than the margin of error of the temperature record itself, especially when SSTs from bucket measurements are added. Add in UHI effect (even if you adjust for that too) and the margin for error increases again.

    Ultimately, therefore, IMHO the whole exercise becomes meaningless and making alarmist observations, let alone blaming CO2, preposterous.

  52. It would be interesting to feed this through to Copenhagen, and have some brave soul present it to the assembled zombies.

    A clove of garlic, sunlight, or the sight of a wooden stake could not arouse more panic, or howls of anger.

  53. Let’s all “homogenize” our data
    Into chunks of bits and pieces,
    Let’s forget which way is up or down
    And randomize our thesis,
    So black is white and white is brown
    And purple wears a hat,
    And when our data’s goose is cooked,
    We’ll all say, “How HOT is that?”

    ©2009 Dave Stephens

    http://www.caricaturesbydave.com

  54. This could explain the MSM reaction: Murdoch owns ALL the papers in Australia. This country basically has no freedom of the press anymore.

    copied from another site

    “Phil Kean wonders why Sky gives so much time to the global warming scare. Perhaps it could be because it is owned by News International, which is run by James Murdoch, who is married to a climate change fanatic. Kathryn Hufschmid runs the Clinton Climate Initiative.

    I understand that News International also owns a number of newspapers in this country. I don’t suppose that the fact that the boss’s wife is an AGW nutter has any influence on the editorial policy of those newspapers.

    It almost makes me wish that Daddy Rupert still had personal control of the media in this country.”

  55. Onwards and upwards.

    Great work Willis; much appreciated amidst all the BS surrounding Copenhagen.

  56. The lack of transparency is the problem. The adjustments should be completely disclosed for all stations, including reasons for those adjustments. You have to be careful drawing conclusions without knowing why the adjustments were made. It certainly looks suspicious. In Torok, S. and Nicholls, N., 1996, An historical temperature record for Australia, Aust. Met. Mag. 45, 251-260 (which I think was the first paper developing a “High Quality” dataset, though I am not sure that is how I would personally describe it given the Australian data and station history, but moving along…) one example of adjustments is given for the 224 stations used in that paper, and it is for Mildura. The adjustments and reasons (see p. 257):

    <1989 -0.6 Move to higher, clearer ground
    <1946 -0.9 Move from Post Office to Airport
    <1939 +0.4 New screen
    <1930 +0.3 Move from park to Post Office
    1943 +1.0 Pile of dirt near screen during construction of air-raid shelter
    1903 +1.5 Temporary site one mile east
    1902 -1.0 Problems with shelter
    1901 -0.5 Problems with shelter
    1900 -0.5 Problems with shelter
    1892 +1.0 Temporary site
    1890 -1.0 Detect

    “Detect” refers to use of the Detect program (see paper). The “<” symbol indicates that the adjustment was made to all years prior to the indicated year.

    The above gives an idea of the type of adjustments used in that paper and the number of adjustments made to the data. For the 224 candidate stations, 2,812 adjustments were made in total. A couple of points: the adjustments are subjective by their very nature. Use of overlapping multi-station data can assist. I have concerns about the size of the errors these multiple adjustments introduce, but I am certainly no expert. I wonder what the error bar is on the final plot when we are talking of average warming in the tenths of a degree C over a century. The stations really never were designed to provide the data they are being used for, but that is well known.

    My point is that without the detailed station metadata it might be too early to draw a conclusion. This is why we need to know what adjustments were made to each station, and the reasons. Surely this data exists (if it doesn’t, then the entire adjusted data series is useless, as it can’t be scrutinised by other scientists – maybe they did a CRU with it!?), and if it does, why is it not made public, or at the very least made available to researchers? Have the data keepers been asked for this? I am assuming they have.
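
    To make the bookkeeping concrete, here is a rough sketch (mine, in Python, with an invented raw value; the adjustment list is just the Mildura example above) of how such step adjustments compose. The “<” entries shift every year before the stated year, and the single-year entries shift that year only:

    # Sketch of applying Torok & Nicholls-style step adjustments.
    # The list is the Mildura example quoted above; the raw value is invented.
    adjustments = [
        ("<", 1989, -0.6), ("<", 1946, -0.9), ("<", 1939, +0.4), ("<", 1930, +0.3),
        ("=", 1943, +1.0), ("=", 1903, +1.5), ("=", 1902, -1.0), ("=", 1901, -0.5),
        ("=", 1900, -0.5), ("=", 1892, +1.0), ("=", 1890, -1.0),
    ]

    def adjust(year, raw_temp):
        # Return the adjusted temperature for one year.
        t = raw_temp
        for kind, y, delta in adjustments:
            if (kind == "<" and year < y) or (kind == "=" and year == y):
                t += delta
        return t

    # An invented raw 24.0 C in 1940 picks up the "<1946" and "<1989"
    # shifts: 24.0 - 0.9 - 0.6 = 22.5 C.
    print(adjust(1940, 24.0))

    Note how the “<” entries accumulate: the further back you go, the more of them apply, which is exactly why the early decades end up moved the most.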

  57. From FOIA2009.zip/documents/osborn-tree6/tsdependent/compute_neff.pro

    ; ***Although there are no programming errors, as far as I know, the
    ; ***method would seem to be in error, since neff(raw) is always greater
    ; ***than neff(hi) plus neff(low) – which shouldn’t be true, otherwise
    ; ***some information has somehow been lost. For now, therefore, run
    ; ***compute_neff for unfiltered series, then for low-pass series, and
    ; ***subtract the results to obtain the neff for the high-pass series!
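
    For anyone wondering, “neff” is an effective number of independent samples after allowing for autocorrelation. Here is a minimal sketch (my own, using the common AR(1) estimate neff = n*(1-r1)/(1+r1), which may or may not be what the CRU routine does) for computing it on a series and on its low-pass and high-pass parts, so the inequality flagged in the code comment can be tested:

    import numpy as np

    def neff_ar1(x):
        # Effective sample size from lag-1 autocorrelation: n*(1-r1)/(1+r1).
        x = np.asarray(x, dtype=float)
        r1 = np.corrcoef(x[:-1], x[1:])[0, 1]
        return len(x) * (1 - r1) / (1 + r1)

    rng = np.random.default_rng(0)
    raw = rng.standard_normal(500)
    low = np.convolve(raw, np.ones(11) / 11, mode="same")  # crude low-pass
    high = raw - low                                       # high-pass residual

    print(neff_ar1(raw), neff_ar1(low), neff_ar1(high))

    Whether neff(raw) comes out above or below neff(low) + neff(high) depends on the filter and on the series, which is rather the point: the decomposition is not guaranteed to conserve “information” the way that code comment assumes.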

  58. Can I please correct you? You keep using the phrase “raw data”. Averaged figures are not “raw data”. Stevenson screens record maximum and minimum DAILY temperatures. This is the real RAW data.

    When you do an analysis of temperature data over one year you should always show it as a distribution. It will have a mean and a standard deviation. Take the UK. It may have a mean annual temperature of 15 Celsius with a standard deviation of 20 Celsius.

    Without the distribution the warmists can say “The mean of 2001 was 0.1 Celsius higher than the mean of 2000. This is significant – we are heating the planet”. With the distribution you would say “The mean of 2001 was 0.1 Celsius higher than 2000, but since the standard deviation of the annual distribution is 20 Celsius, we cannot consider this as being statistically significant”.

    If we had the REAL raw data we could almost certainly show that the off-trend averages of the last few decades were of no statistical significance anyway, before we got into the nitty-gritty of fudges to the data. By using slack language to describe the mean annual temperatures as “raw data” we are falling into a trap set by the warmists.
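
    To illustrate the point with numbers, here is a sketch with invented daily data (I have used a 5 C day-to-day spread, more modest than the 20 C above; even then the point holds):

    import numpy as np

    rng = np.random.default_rng(1)
    # Invented stand-in for real RAW data: one daily mean per day,
    # for two years whose true means differ by 0.1 C.
    y2000 = rng.normal(loc=15.0, scale=5.0, size=365)
    y2001 = rng.normal(loc=15.1, scale=5.0, size=365)

    diff = y2001.mean() - y2000.mean()
    # Standard error of the difference of two annual means (ignoring
    # day-to-day autocorrelation, which would make it even larger):
    se = np.sqrt(y2000.var(ddof=1) / 365 + y2001.var(ddof=1) / 365)
    print("difference %.2f C, standard error %.2f C" % (diff, se))

    With a 5 C daily spread, the standard error of the difference of two annual means is about 0.37 C, so a 0.1 C difference between years is nowhere near significant; a 20 C spread would make it hopeless.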

  59. Amazing. Couldn’t sleep (West Coast); when I began reading WUWT there was only one comment. Now 55. When I finish commenting, probably over 200.

    Anthony and Willis Eschenbach, masterful work, a skillful exposé of the purposeful fraud. This becomes a whodunnit escapade, and I am beginning to want to know WHEN it began in earnest. When was the temperature data of Darwin doctored? Who ordered it? Sometime between 1998 and 2001 (change in IPCC reports)?

    I no longer believe “they probably began with good moral purposes from a desire to save humankind”. This deed was foul from the beginning. The 2008 U.S. election cycle had to be part of the “plan”. Too much fraud; too much unexplained; too much money from financial types; too much money from overseas; the ballot boxes stuffed or votes changed at the end in too many areas, not just the “normal” expected areas of voting fraud (such as Chicago) in American history that goes back into the 19th century. This was/is massive.

    harpo (01:48:28) : “Greetings from Australia.

    Nobody in the current Australian government cares a damn about whether the temperature has gone up down or side ways. They want to implement a tax so they can collect money from the rich polluters (their words not mine) and give it to the poor working man (that’s my simplistic reading of it).”

    Harpo seems to have a handle on the matter, or at least on the “rationale”. The “they” who are implementing the tax are also getting large salaries, excellent medical care, fantastic retirement bundles, and jet set perks (Copenhagen anyone, with free prostitute services?). “They” also can direct this tax money any which way “they” want. This “they” also includes corporations that are no longer making a profit on their products so they are turning to trading fees and largesse from the “they” their money helped elect; Enron seemed to begin this kind of trading scam. It is like vultures descending on the savings and retirements of the hard-working developed-world individuals and families — now that manufacturing and its collateral industries have left for China, India and other parts of the world.

    Keep up the good work. Maybe “we” can “save the world” from the “they”.

  60. FOI2009.zip\FOIA\documents\ipcc-tar-master

    Lot of dissent displayed.
    What happened to it all?

  61. The last step up seems to be around 1979. It would be interesting to see if any other upward steps have happened since then, once satellite data went online.

  62. Superb case study! Absolutely superb! It is also a lesson to other scientists on how to articulate issues in simple, layman’s terms. Sir, I salute your communication skills.

    And I am sure the people from the crocodile country would be delighted to hear that a world class blogger is putting Darwin on to the world map.

  63. So let’s sum up then:

    IPCC data = GHCN data = GISS data = CRU data

    The official line (and that of genuine believers): CRU data is reliable. Why? Because it “independently” shows the same profile and trends as the “independent” GHCN data and the GISS data.

    Actually the 3 data sets are very nearly one and the same.

    [While on the subject] But hey, the satellite data also show the same trends and the same profile since 1979. But a little bell rings: somewhere I read that satellite data is “calibrated” with ground data. And for all practical purposes there is only one ground dataset: GHCN. So does the satellite data unwittingly correspond to the ground data because it is calibrated with them?

    But then again, I read that the satellite temperatures correspond with the balloon radiosonde temps. So maybe that cannot be?

    Dr Spencer if you read this could you comment please?

    GHCN adjusted Aussie data shows big warming since 1950. (Funnily enough, this is also the period when AGW started and was identified by the IPCC.) Raw data shows cooling. (Sounds familiar? NIWA?)

    Darwin adjustments – oh oh….

  64. They want to implement a tax so they can collect money from the rich polluters (their words not mine) and give it to the poor working man (that’s my simplistic reading of it).

    I don’t believe this. Even an economic illiterate knows that increasing costs on business are passed on to the consumer. The margins don’t change, do they? Not to mention the fact that the poor working man (in the UK at least) will have to holiday at Bognor Regis rather than in Alicante, and he’ll find it increasingly difficult to have his heating on in the winter, etc. Not really the kind of policy platform designed to improve the life of the poor, unwashed masses, is it?

    No, I think this is more to do with energy security (vital national interest), coupled with a lot of highly stupid activist scientists. I would use the word opportunistic, but I don’t think it strong enough.

  65. I have often wanted to check out the long-term Darwin record but never got around to it. Since I was a boy (which is some time ago) I have noticed Darwin’s temperature on the evening news is always between 32-34 C. A perfect place to test climate change.

    Thanks for the great work….this could grow.

  66. I have the good fortune to have been born in Darwin, not so many years after the Japanese bombs stopped falling.

    Darwin Post Office site temp records (1882-1942) show a mean max of 32.6C and a mean min of 23.6C. The monthly mean max temps ranged from 30.6C in January to 34.2C in November (just before the monsoon arrives). It is 24 metres above sea level.

    Darwin Airport site temp records (1941 – 2009) show a mean max of 32.0C and a mean min of 23.2C. The monthly mean max temps range from 30.5C in January to 33.3C in November. It is 30 metres above sea level.

    Both sites are close to the sea on three sides.

    It can be easily seen that (a) being tropical, temps in Darwin do not vary all that much over the year and (b) that the temps were slightly higher in the earlier years – i.e. there would appear to have been some cooling since the early part of the 20th century.

  67. I’ve never said this to a datahead man (or any other sort) before… but I think I love you. This is about as clear, convincing and robust a paper as anyone could put together given the available data.
    Is that a Copenhagen herring I smell, or just something altogether fishy?

  68. Why homogenise?

    I see no need to homogenise at all. If you have a record starting in 1950 and ending in 1980, then you can fit a line and get the trend for that period. Is it up or down?

    Likewise for the adjacent sites.

    1. There is no need to join up different temperature records to make one.

    2. Picking one site and not others is a cherry pick. Why exclude the other sites from the records if you use them for adjusting another site? There is no reason.

    3. Missing months are relatively easy to fix. Create an average seasonal record. Average all Jan, Feb, Mar, … figures. If you miss Feb’s figure you can interpolate based on this curve (see the sketch after this list).

    4. Accuracy should improve as you get closer to today. Anything else is evidence of fraud. So if the adjustments become larger as we get to today, it’s fraud.

    5. I’m not sure what you do about the UHI. It seems to me that the adjustments are large positive adjustments. At the same time the alarmists say it’s a small effect. It should be a negative effect. It’s not consistent.

    6. Surveys of sites are the only way of deciding if they are fit for purpose.
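
    Here is one way point 3 above might look in practice (my sketch only, not any agency’s actual method). I have added the year’s own mean offset so that an unusually warm year does not get a merely average month dropped into it:

    import numpy as np

    def fill_missing_months(monthly):
        # monthly: array of shape (n_years, 12); np.nan marks a gap.
        # Fill each gap with that calendar month's long-term mean,
        # shifted by how warm or cool the year in question ran overall.
        filled = monthly.copy()
        month_clim = np.nanmean(monthly, axis=0)   # mean Jan, mean Feb, ...
        for y in range(monthly.shape[0]):
            gaps = np.isnan(monthly[y])
            if gaps.any() and not gaps.all():
                year_offset = np.nanmean(monthly[y] - month_clim)
                filled[y, gaps] = month_clim[gaps] + year_offset
        return filled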

    There is a research paper that can be easily done in the US. With Anthony’s data, you can show that the rural/urban decisions NASA bases on lights etc. are wrong.

    Nick

  69. Willis – hats off to you – an amazing post and superb clear analysis of the data.

    Have blogged/linked/twittered and sent to the Opposition Leader here in the UK.

  70. If we don’t have documentation for each and every adjustment made to homogenize the data, such homogenization must be considered suspect. Similarly, that audit trail must be externally audited before the presumption of being suspect is changed! If the UEA/CRU don’t have that information, they simply haven’t been doing their job, and one has to wonder what they’ve been doing to earn their grants. The process of adjusting has the smell of “synthesising data”, as we used to call making up results for high school science pracs.

    Repetitive and unexciting work? Perhaps, but that’s what lies at the heart of much data collection in experimental science.

  71. A clear and well reasoned argument that shows what a proper analysis should be. How did the CRU get so far from what is so clearly outlined here? My thanks.

  72. Dave UK:
    – Podesta:
    Leadership role my fanny!
    This is what we call authoritarianism based on Stalinist Science..
    Next it’s:
    1. water (a water crisis is currently being manufactured)
    2. food (meat, sugar and fat)
    3. information
    Is there a Paul Revere left in USA?

  73. Thank you, Mr Eschenbach, for a superb article – a tribute to genuine investigation and perseverance. It is the sort of thing that the “quality” papers in the UK would once have done themselves in the “Special Investigations” they so love to pursue, but for the subject now being utterly inimical to their current editorial positions. The lack of journalistic investigation of Climategate and the whole AGW movement is rapidly emerging as the 2nd most scandalous aspect (after the suborning of the scientific method itself) of the whole global warming farrago.

    I am not very hopeful that the global warming juggernaut, even after Climategate, can be prevented from causing at least temporary economic damage to the planet, rich and poor nations alike. But resistance is not futile; the true position is gradually being established with work such as yours, and I am quite sure that the future at least will universally recognise the voices of sanity such as yours and McIntyre’s that were raised at this time of global warming hysteria, just as the MSM will look back on it as their time of greatest shame.

  74. I know what we can do with Guantanamo. We can fill it with all the AGW disciples who are terrorizing the Earth with their fraudulent data!

  75. As a 7th generation Tasmanian I’m proud that many scientists have taken up the work of the Tasmania-based late great John L Daly. I recommend his “What’s Wrong With the Surface Record?” at http://www.john-daly.com/ges/surftmp/surftemp.htm as a great resource. Read about the Low Head ground station and how the scientists ignored the changed circumstances there even when told. Also see http://www.john-daly.com/cru-index.htm for his email exchanges (not leaked) with East Anglia CRU head Phil Jones, after John had caught them out in an obvious mistake. It is a great insight into the mindset of those scientists and very relevant to the current Climategate scandal. No wonder that on hearing of John’s death Jones callously told Mike Mann that “in an odd way, that’s cheering news”!

  76. Is it just me, or does the raw data in Fig 2 and Fig 6 look like it corresponds to the GCMs’ non-CO2-forced blue section? You know, what the temps would be if there were no CO2 forcing; they just look correlated to my Mk1 eyeball while scrolling back and forth. Willis, have you tried to superimpose the raw data on the models’ non-CO2-forced temp graph? It would be funny if they did match, because then their own model would show that the only man-made warming is adjustments to the raw data.

  77. I have been a lurker on this website for some time and continue to be impressed with the quality of analyses presented here.
    Keep up the great work!!

  78. Satellite measurements are calibrated against SI standard measures or by in situ temperature measurements.

  79. I’m very interested in following the discussions here. But there’s one thing I don’t understand, and I know that this is going to sound really seriously dumb, but I do want to know. It’s this. When people talk about an “anomaly”, I understand that this means a deviation from some value (probably some average, or a single reference value). But it seems that this value is never mentioned. So, my question is this: is there some standard definition of the value upon which temperature (and other) anomalies are based? If so, what is it? If not, how do people know what the actual temperature for some point is, given the value for the anomaly at that point?

    Many thanks for any pointers to some 101 (or even a kid’s pre-101).

    PS – I’ve tried googling “temperature anomaly definition” etc., with no luck.
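
    PPS – to make my question concrete, here is my guess at what is meant, in code form (a sketch only; the 1961-1990 base period is an assumption on my part, since I have seen that range mentioned):

    import numpy as np

    def anomalies(years, temps, base=(1961, 1990)):
        # Anomaly = observation minus the station's own mean over a
        # fixed base period (my guess at the convention).
        years = np.asarray(years)
        temps = np.asarray(temps, dtype=float)
        in_base = (years >= base[0]) & (years <= base[1])
        return temps - temps[in_base].mean()

    Is that roughly it? And if so, is the base period standardized across the datasets?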

  80. Further to my last post,
    this item was aired on UK TV 9 months ago. Yesterday’s statement by the EPA reminded me of it:
    that Obama and Podesta are using the EPA as a tool to bypass the democratic process in America.
    Judging by what the EPA had to say, things are going to plan for Obama.
    Suppression of debate and oppression of democracy are the tools being used.
    People should fear that more than MGW.

  81. And Darwin is a good location to see what is reeaally happening to the climate. Darwin was and is hundreds of miles from anywhere, and so its temperatures will be unaffected by urban growth (as long as someone did not build a barbie next to the station). Darwin is as ‘pure’ in climate terms as you are going to get.

    Can everyone forward this item onto your local parliamentarians and media. This IS important.


  82. Richard (03:16:38)
    “So does the satellite data unwittingly correspond to the ground data because it is calibrated with them?”

    I believe that satellite temperature data is calibrated by comparison with independent and concurrent thermometer data, but it is interesting that since we now have more confidence in our global temperature metrics (except GISS…?) global warming seems to have stopped; instead, the manipulation is being applied retrospectively in an attempt to reduce early 20th Century temperatures.
    As Churchill once said: “It is all right to rat, but you can’t re-rat.”

  83. David Archibald (02:46:51);
    Geoff Sherrington (02:14:27);
    Geoff Sharp (03:25:34);

    And Willis;

    Please be aware that there is no continuous station in Darwin from 1880 to 1962 (as per Sherro’s post) or 1991 as per GISS (station zero).

    The station of record was Darwin Post Office from 1882 till it suffered a direct hit from a Japanese bomb during the first raid on 19 February 1942. (The postmaster Hurtle Bald, his wife and daughter, and 7 post office staff members were all killed instantly, and the post office itself was utterly destroyed.) The station of record from then was Darwin Airport (which had about a year’s overlap with the Post Office at that time).

    So (as per Willis’ graph above) Station Zero is in itself a splice of at least two stations (The Post Office and presumably the Airport – but I have no explanation of why it ends in 1991…)

    Warwick Hughes did a post up on this about a month ago: http://www.warwickhughes.com/blog/?p=302#comments
    Where there is a photograph of the Stevenson Screen at the PO from the 1880’s…

    And I did one at Weatherzone at the same time:

    http://forum.weatherzone.com.au/ubbthreads.php?ubb=showflat&Number=795794#Post795794

    Where I have links to BoM data for the stations, plus a link to some interesting stuff John Daly did a while back on Darwin.

    cheers

    Arnost

  84. Wow! It’s bad enough to use highly aggressive step function adjustments when “correcting” for station moves. But these continuous adjustments are inexcusable.

    “OK … but given the scarcity of stations in Australia, I wondered how they would find five “neighboring stations” in 1941 …

    So I looked it up. The nearest station that covers the year 1941 is 500 km away from Darwin. Not only is it 500 km away, it is the only station within 750 km of Darwin that covers the 1941 time period. (It’s also a pub, Daly Waters Pub to be exact, but hey, it’s Australia, good on ya.) So there simply aren’t five stations to make a “reference series” out of to check the 1936-1941 drop at Darwin.”

    Apart from “MIDDLE POINT” http://www.bom.gov.au/climate/averages/tables/cw_014090.shtml
    and Darwin Post Office http://www.bom.gov.au/climate/averages/tables/cw_014016.shtml
    and CAPE DON http://www.bom.gov.au/climate/averages/tables/cw_014008.shtml to name but 3.

    Ho hum.
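
    For anyone who wants to check neighbour distances for themselves, here is a quick haversine sketch (the coordinates are approximate, from memory):

    import math

    def km_between(lat1, lon1, lat2, lon2):
        # Great-circle distance in km via the haversine formula.
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = p2 - p1, math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * 6371.0 * math.asin(math.sqrt(a))

    # Darwin is at roughly (-12.4, 130.9), Daly Waters at roughly (-16.3, 133.4):
    print(km_between(-12.4, 130.9, -16.3, 133.4))   # prints roughly 510 km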

  86. “Unless something like this is done, we’ll see the Met spending three years using the same code and coming up with almost exactly the same dataset, and we’ll have lost.”

    Agreed, more importantly, it will also be the science that is lost and we will be plunged into a new dark age.

    No matter how many times that the Met office run the same data through the same code, they will get the same result. That is why the raw data and the code need to be independently analysed.

  87. Mike in London (03:50:58) :

    “just as the MSM will look back on it as their time of greatest shame.”

    I think it’s time to tell the MSM and other scientists that it’s time for them to get on the right side of this now.

    Climategate has given them the excuse to claim that they were duped, but if they don’t switch sides now then they are part of the duping and will be held responsible for their shame.

  88. “This has just got to get publicity in the MSM somehow to spark an investigation and rework of the temperature record.”

    There is slim hope that your average journalista or talking head has the attention span to follow such a beautifully crafted and lucid argument.

    Want to try it on Boxer and Waxman?

  89. So the Australian raw data shows no warming and we already know that the USA was warmer in the 1930s. If this continues we’ll be left with one thermometer, maintained by a peasant farmer somewhere in Siberia, that shows warming, and the whole of the 20th Century temperature reconstructions will be based on this…
    … but haven’t I heard that one already? Remember Yamal?

  90. A very clear, well written article. Congratulations and keep up the good work! But still this whole thing is driving me crazy. When will the MSM wake up?

    Hugh

  91. Gore’s immolation vs Polar bear convicts.
    …-

    “*The cold snap is expected to continue all week with daytime highs ranging from -12 C on Thursday to a bitterly cold weekend that will see the high drop to -25 C. The normal high for this time of year is -6 C.”
    …-

    “UEA asks for local support over ‘Climategate’

    Bosses at the University of East Anglia insisted last night the under- fire institution could ride out the storm of the climategate scandal – but called on the support of people in Norfolk and Norwich to help them through the most damaging row in its history.”

    http://www.freerepublic.com/focus/f-news/2402716/posts

    …-

    “*Alberta deep freeze saps power
    Record high energy usage in province as mercury plummets”

    http://cnews.canoe.ca/CNEWS/Canada/2009/12/08/12077286-sun.html

    …-

    “Hudson Bay jail upgraded for wayward polar bears
    Cnews ^ | December 8, 2009 | Canadian Press

    WINNIPEG — Manitoba is spending more money to upgrade a polar bear jail in Churchill. Conservation Minister Bill Blaikie says the province is spending $105,000 to improve the jail’s walls and main entrance.

    The compound is used to house wayward polar bears that get too close to the town or return to the community after being scared away.”

  92. This subject was covered in a CA thread some years ago. I believe it came up when someone discovered the TOBS adjustment that NOAA began using. The TOBS adjusted the 1930s down, but the 1990s up. Someone calculated that the TOBS accounted for 25-30% of the rise in global temps.

    The question about adjusting local station data to adjacent data also came up, especially concerning grid cells. If San Fran and Palm Beach are in the same grid cell, how does one extrapolate and average? One station is affected by maritime polar air masses; the other continental tropical. If the environment is not homogeneous, how can one extrapolate at all? One would be mixing apples and oranges. In California this wouldn’t be much of a problem (there are plenty of adjacent stations in proximity to San Fran and Palm Beach); however, in places like South America (where Steve McIntyre found GISS only uses 6 reporting stations) or Africa this problem is very real. If one must apply such drastic adjustments to the raw data, why even use raw data at all? Why not just say “this is what is really going on – the weather observers these last 6 decades were either drunk or incompetent.”

    The answer to all of this is simple. Rely on RSS/UAH data. Yes, the records go back to only 1979, and there are geographical limits. But the idea that we can find a global climate signal through thermometer records, and that we can measure that signal to the tenth or hundredth of a degree, is absurd. Thermometers only measure microsite data at a single point in time. Supposedly the thermometers are measuring ambient air temps (which they do not; I don’t think sling psychrometers are used anymore). And supposedly the temperatures are measured over green grass, away from the shade, and away from things like patios, parking lots, and buildings.

    Surface temps traditionally have the single purpose of assisting weather forecasters in making mesoscale and macroscale forecasts. They can provide general trends in tracking broad climate changes. Surface temps do not have the precision that people like Jones, Hansen et al. say they do. If they did, how come the data must be sliced, diced, and obliterated by our climate experts?

  93. When you say “throw away all of the inconveniently colder data prior to 1941″, do you mean “warmer”?

  94. I just have a question about the dates covered in this analysis. How was it that there was a thermometer at the Darwin Airport about 20 years before the Wright brothers’ flight? Were we Australians so prescient that we built one in anticipation?

  95. It amazes me that this is still being debated. The warmist argument has already been disproven by the recent climate behavior of the Earth. CO2 has risen and temperature has not. How much simpler can it get?

    Also, I don’t know why studies like these, that show CO2 levels were 4 to 5 times higher during the Medieval Warm Period than they are now, don’t get more press. It’s obviously not possible to attribute increased CO2 levels in that time period to human activity. And equally obviously, the temperature eventually came down, so no runaway warming caused by CO2.

    The correlation issue is the Achilles heel of the warmist argument. Regardless of the current temperature, the correlation showing CO2 as a driver of warming doesn’t hold up.

    As an aside, I keep hearing the argument that 8 of the last 10 years are the warmest on record, but I also remember an article in which it said that NASA had revised their data to show 1933 as the warmest on record. Considering just the last 100 years, what are the warmest years on record?

  96. A bit OT:
    Climategate is convincing for those who follow it – who read the articles here.
    But when I explain it to friends, they keep coming back to one thought that makes it hard for them to get their mind around it: “How could the scientific community let this happen?”

    They know that many politicians are ignorant and corruptible and that many activists are ignorant and extreme. And they know that most people are ignorant and naive.
    But why didn’t the scientists speak up?
    And I still find that hard to explain.

    I tell my friends that these scientists at CRU and NASA don’t publish the raw measurements, nor how they adjust them. And they are shocked and ask: “How can that be true? The whole scientific community would demand to see them.”

    I tell my friends that the paleoclimatologists who come up with the hockey sticks, on which rests the whole case for the uniqueness of the warming in the last decades of the 20th century, have hijacked the peer review process.
    They ask me why the scientists who were pushed out didn’t protest and if they did, why didn’t the scientific community stand up and put things right?

    I tell them that many scientific organizations are controlled by small groups of activists who claim there is scientific consensus over catastrophic AGW. And again they ask me how that could happen. Why don’t the thousands of scientists who are members get rid of them?

    I think that if we want the public to understand Climategate, we need to be able to answer these questions satisfactorily.

  97. Ken Hall (00:33:54) :

    “This is exactly the kind of public, open examination of the raw and adjusted data that needs to be done for ALL stations globally to establish, once and for all, IF the entire earth is warming un-naturally. (and I have yet to see any definitive proof that the current warming is un-natural).”

    Ken,

    Watch this space!! Someone :-) is very close to doing exactly that!

    Next step after that: what happens if we do some different, far more scientifically justifiable (so realistic) alternative homogeneity adjustments? Does the blade of the ‘hockey stick’ go away?

    If so what on earth are all those poor GCMs going to use when they are ‘spun up’ using the gridded datasets that no longer have a pre-determined warming trend in them? Will the ‘flux adjustments’ have to make a re-appearance in the AOCGCMs?

    What will poor Tom Wigley and Sarah Raper do when MAGICC doesn’t have any ‘unprecedented warming’ model outputs to fit itself to?

    KevinUK

  98. Willis Eschenbach, is this the only site you examined? Or did you examine many before you found one that appears to have been blatantly rigged?

    I just wonder because, of all the thousands of sites available, it would seem unlikely that the first one examined in this detail would be a ‘rigged’ one, IF the record was generally sound. If the record was generally “fixed around the theory” then most, if not all, of them will be dodgy.

    If this is the only one you examined, then you have a 100% record of dodgy data manipulation for every site examined.

  99. I had a look at Alice Springs, lovely dataset, daily records with only a few days blank; flat as a pancake. Projection between 0.3-0.6 degrees per decade.

  100. For those that are interested, here are the temperature graphs for Darwin from 1880 to today.
    As has been mentioned elsewhere, Darwin was bombed in Feb 1942, which destroyed the Post Office, so my guess would be that that was when record keeping moved to the airport, where it would appear it remains today.
    First graph 1880-1942

    http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_display_type=dataGraph&p_stn_num=14016&p_nccObsCode=36&p_month=13

    Second graph 1942-2009

    http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_display_type=dataGraph&p_stn_num=14015&p_nccObsCode=36&p_month=13

    Hope the linking works

  101. The Darwin Zero series shown above has large and relentless upward adjustments starting in 1930. But those are today’s adjustments, as they stand according to current practice. The entire station history could be readjusted some entirely different way if, for example, a historic fact like a previously forgotten station move were discovered tomorrow.

    Anybody who has been awake recently knows that global cooling was the big scare in the 1970s, although that one didn’t catch on with the public nearly so well. Climate scientists then, as now, had to be homogenizing station data for all the reasons listed in this article. It would be fascinating to see if their adjustments then tended to confirm the cooling they expected to see, just as their adjustments now do the opposite. Is the GHCN data stored in such a way that one could see what the homogenized Darwin Zero series looked like in 1975, with the adjustments as they stood at that time?

  102. “But the data is still good.” How do they know since CRU kept the raw data secret all this time? Every scientist who says, “but the data is still good” has to either have seen the data and evaluated it, or they have to have complete faith in the data-keepers. But we know they have NOT seen or evaluated the data that CRU has kept hidden. And as for faith in Jones et al.? The emails show that only gullible people could continue to have faith in them.

    An answer like “But the data is still good” is pure propaganda; it is not even close to being science.

  103. A proposal

    Would it be possible for the knowledgeable people here to publish a “recipe” telling ordinary folk how to do this sort of analysis, comparing the raw GHCN data with the “cooked” version?

    I’m sure there’d be enough volunteers lurking on the blogs who would be willing to carry out this analysis, then we’d be able to provide a definitive answer to the point that Willis makes at the end of this article: “This may be an isolated incident, we don’t know.”
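
    As a starting point, here is a rough sketch of such a recipe in Python. The GHCN v2 layout assumed here (12-character station id, 4-character year, then twelve 5-character monthly means in tenths of a degree C, with -9999 for missing) is my reading of the format documentation, and the Darwin id prefix is a guess; both should be checked against the v2 inventory file before anyone relies on this:

    def annual_means(path, station_prefix):
        # Return {year: mean of the available months, in deg C}.
        out = {}
        with open(path) as f:
            for line in f:
                if not line.startswith(station_prefix):
                    continue
                year = int(line[12:16])
                months = [int(line[16 + 5 * i : 21 + 5 * i]) for i in range(12)]
                vals = [m / 10.0 for m in months if m != -9999]
                if vals:
                    out[year] = sum(vals) / len(vals)
        return out

    # v2.mean and v2.mean_adj are the raw and adjusted files from GHCN v2;
    # "50194120000" is my guess at the Darwin prefix (country 501, WMO 94120).
    raw = annual_means("v2.mean", "50194120000")
    adj = annual_means("v2.mean_adj", "50194120000")
    for year in sorted(set(raw) & set(adj)):
        print(year, round(raw[year], 2), round(adj[year], 2),
              round(adj[year] - raw[year], 2))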

  104. I just looked at the example the Met Office has released. The provenance is the CRU!

    I’m not surprised the Met Office is going to do a 3 year re-investigation of raw temperature records. They’ve been taken for a ride as well as the rest of us.

  105. This has probably been asked…

    Does anyone foresee an academic (or three or four) reviewing the raw data – objectively! – or will this be left to volunteers working pro bono? (Or maybe Willis gets paid?)

    Point is, is there any way to organize activities and crank through the data a bit faster? (I recognize you don’t find people who can do this loitering at the TEXACO.)

    Just a thought.

  106. Knut Witberg (05:01:07) :

    “The scale on the left side is not the same as on the right side. Error?”

    I think the scale on the right refers to the adjustments.

  107. Very interesting, ShowsOn. That graph shows temperatures for times before 1942, when the temperature was being recorded at station 14016, the Post Office.
    So how do you reconcile your graph, which shows a steady rise for station 14015, with the graph I linked above, which shows a reasonably flat record over the same period, when they both come from the BOM?

  108. The shape of the difference function you are getting in figure 8 looks remarkably similar to the artificial fudge factor in the briffa_Sep98_d.pro file from the released documents. The artificial correction has puzzled me because of the swing down before it starts to ramp up. I cannot see why you would apply a function of this shape at all.

  109. The black line as presented in the IPCC report has been deliberately started at a low (1910) in the actual raw data, as opposed to the real start of the data set which, from Figure 2, appears to be 1880. So they’ve deliberately missed out one whole degree centigrade of cooling from 1880 to 1910, just so they (IPCC) can print a graph showing a 0.5 deg C rise from 1910 to 2000 and a ‘shock’ 1 deg C increase from 1950-2000.

    Oh come on chaps.

    And I’m new to this so can somebody explain what are the blue and red overlays on the Fig 9.12 in the IPCC data? Maximum and minimum of all the data sets (3?)(30?) in the input? Maximum and minimum temperature predictions from various models?

    In either case why is the actual black line outside the shaded zone for a few years either side of 1950?

  110. This may be of some interest. It is a temperature graph for a small place called Menindee, which is in the far west of New South Wales (Australia) and which I would think would not suffer from any UHI effect.

    http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_display_type=dataGraph&p_stn_num=47019&p_nccObsCode=36&p_month=13

    Notice a reasonably flat graph. And no maximum temp for 1998; in fact, if you check every capital city airport temperature in Australia for 1998 (the hottest year ever), only Darwin recorded a record maximum.
    So how come Australia missed out? Or is it just because we are down under that we are forgotten about?

  111. 5:29 a.m. PST … The temp right here, right now, in Susanville in northern CA is -10F or -23C, yet the national temp map with isobars and the pretty blue colors that is regularly updated is showing us toasty warm at between +20F and +25F. On the way in to work this morning, my driver’s pickup registered -14F. What the???

  112. KeithGuy (05:16:08) :

    Knut Witberg (05:01:07) :

    “The scale on the left side is not the same as on the right side. Error?”

    I think the scale on the right refers to the adjustments.

    I see what you mean (fig 7 and 8). It does give an exaggerated impression of the adjustments.

    I now see that the BBC are making a big play out of the recently released data that shows that this year is the 5th hottest on record and that the last decade is the hottest ever.
    I don’t doubt it, but what’s important is that the trend now shows cooling.

  113. This is truly a smoking gun. There is no amount of rationalization that can be used to justify the changes made to the raw data sets. Anyone who is objective will be able to see this.

    Now someone needs to take this excellent work and summarize it to a point where the average person can easily discern how temperature data is being manipulated.

    I always suspected the most manipulation would take place in the remote corners of the Earth, where unscrupulous scientists thought they could get away with it.

  114. So … how many temperature stations will people have to do this for before they admit just how large the problem is, and just how badly they have betrayed science? Not to mention betraying all the real scientists who have used this kind of adjusted data in good faith and will soon realise just how many years of their lives have been wasted in analysing meaningless numbers.

  115. Global warming looks like a fraud when data are misused by scientists and politicians want to use it for massive tax and control.

  116. Thanks for the compliments. Also, I greatly appreciate the questions. It is enjoyable research, and I strive to present it clearly and to share the satisfaction it brings me. I will answer these as time permits.

    w.

  117. Just a note – Steve Mc on CA is pointing out that he’s had several old posts along the same lines.

    Perhaps a bit of blog archeology would be useful at this stage to see if the data massaging is the same?

  118. Yep, ShowsOn is quoting the new climate site, which has been adjusted.

    Marble Bar, Kalgoorlie, Meekatharra & Southern Cross have all lost 1.5 degrees C in the 1920-1940 period, resulting in most of Australia’s warming being infilled over 1/3 of the most sparsely populated area of the continent.

    These are supposedly the same

    http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_display_type=dataGraph&p_stn_num=012074&p_nccObsCode=36&p_month=13

    http://reg.bom.gov.au/cgi-bin/climate/hqsites/site_data.cgi?variable=maxT&area=wa&station=012074&period=annual

  119. TheSkyIsFalling (02:44:02), you raise a good issue:

    The lack of transparency is the problem. The adjustments should be completely disclosed for all stations, including reasons for those adjustments. You have to be careful drawing conclusions without knowing why the adjustments were made. It certainly looks suspicious. In Torok, S. and Nicholls, N., 1996, An historical temperature record for Australia, Aust. Met. Mag. 45, 251-260 (which I think was the first paper developing a “High Quality” dataset, though I am not sure that is how I would personally describe it given the Australian data and station history, but moving along…) one example of adjustments is given for the 224 stations used in that paper, and it is for Mildura. The adjustments and reasons (see p. 257):

    <1989 -0.6 Move to higher, clearer ground
    <1946 -0.9 Move from Post Office to Airport
    <1939 +0.4 New screen
    <1930 +0.3 Move from park to Post Office
    1943 +1.0 Pile of dirt near screen during construction of air-raid shelter
    1903 +1.5 Temporary site one mile east
    1902 -1.0 Problems with shelter
    1901 -0.5 Problems with shelter
    1900 -0.5 Problems with shelter
    1892 +1.0 Temporary site
    1890 -1.0 Detect

    My point is that without the detailed station metadata it might be too early to draw a conclusion. This is why we need to know what adjustments were made to each station, and the reasons. Surely this data exists (if it doesn’t, then the entire adjusted data series is useless, as it can’t be scrutinised by other scientists – maybe they did a CRU with it!?), and if it does, why is it not made public, or at the very least made available to researchers? Have the data keepers been asked for this? I am assuming they have.

    While there are valid reasons for adjustments as you point out, adjusting a station from a 0.7C per century cooling to a 6 C per century warming is a complete and utter fabrication. We don’t have the whole story yet, but we do know that Darwin hasn’t warmed at 6 C per century. My only conclusion is that someone had their thumb on the scales. It’s not too early for that conclusion … it’s too late.

    w.

  120. Robin Cool wrote:

    “I tell them that many scientific organizations are controlled by small groups of activists who claim there is scientific consensus over catastrophic AGW. And again they ask me how that could happen. Why don’t the thousands of scientists who are members get rid of them?

    I think that if we want the public to understand Climategate, we need to be able to answer these questions satisfactorily.”

    A small group of activists have always been able to take control of any situation where the majority is apathetic and splintered – study the Bolsheviks in 1917, who took over a country even though they had barely 10% support. And in these organizations you question, even though many scientists are members, they are controlled by only a handful of people at the top. Once an organization is corrupted (as the APC is currently) the only alternative is probably for those who disagree to quit and start their own organization – and this could take years if not decades before it achieves equal recognition.

    Furthermore, the climatologists weren’t alone – they had government and the media on their side, the two most powerful weapons that scientists are afraid of. Annoy government and lose all your funding; annoy the media, get a negative story, and lose your chance at tenure and a career. That trifecta of power was unassailable. Then, add in the fact that most scientists are specialists, not generalists, and thus as long as the controversy was outside their own little spheres of speciality, they felt that they needed to ignore it. How much trouble could it cause for them, after all?

    A lot more than they thought it could, it’s turning out.

  121. Outstanding work, clearly explained and illustrated! I can only imagine the self-delusion and groupthink that went into all these “adjustments” at the GHCN, all earnestly applied in service to science. Those early warming revelations in the 1990s were heady times, when almost any researcher could derive important new observations from the data. I’m sure it all seemed so very right.

    Like phrenology.

    But upon inspection by a disinterested outsider who isn’t caught up in the machine, it doesn’t even pass the sniff test.

    Our culture has only begun to wake from this delusion, with many still falling more tightly into its grip. Not even George Orwell could have anticipated the EPA’s move to regulate carbon dioxide as a pollutant. It will take literally thousands of revelations like this one to reverse the tide.

  122. Went to the Met site and they admit in their FAQ sheet that it isn’t 100% Grade A raw data. Then they try to spin the reason it isn’t the raw data back onto “the dog ate it in the 1980s”, but never fear, we know it’s good because it’s from CRU, GISS and NCDC and “peer reviewed”!

    My god, that’s like hauling out a counterfeit $20 bill to prove that you’re not a counterfeiter.

    Sorry, Met Office, you need to show 100% raw data: no adjustments, no peer review, no more appeals to authority. You also need to publish the code used to make your adjustments. With what you have published you have not shown there is nothing there; matter of fact, you showed the opposite when you admitted the data is gone. No data to back up your claim, then it gets trashed. I give the Met Office a B for effort, a C for propaganda effect, since most people will not look at nor understand what the FAQ says, and a big fat F for proving there is man-made global warming. At best, with good code and the raw data, you could have proved warming, but not causation by man.

  123. David Archibald (02:46:51) :

    Willis,

    Please email me your last graph as I would like to use it in public lectures, with attribution. A low resolution one would look shoddy. david.archibald [at westnet.com.au
    With thanks

    It’s just a screenshot from AIS, as referenced above. Select by Country, Australia, unadjusted, average stations, smoothed, plot anomalies, plot all stations.

    If you still need it let me know. I cleaned up your internet address in my reply, spambots, don’cha know …

    w.

  124. Willis, a HUUUGE hug! Last week I found a BOM page showing 3 graphs of OLD datasets, and they were remarkably level over a long timeframe, the ’20s onwards. Yet when I went back to my history I found it’s not there… well, not the same page, but…
    I did get a page with data and some files my PC cannot read.
    I posted the link elsewhere today to ask for help, so I am going to add the link and let you see what you can make of it all.
    BOM advises they are updating and removing… gee, how very convenient!

    ftp://ftp2.bom.gov.au/anon/home/bmrc/perm/climate/temperature/annual/

    The charts I saw before had Kalgoorlie in WA listed, and it had gotten cooler from a very high time in 1930/1… and Longreach in Qld was another.
    Again, thanks heaps for your effort. I will be sharing this page around Aus and o/s!
    PS: the missing info in the ’30s/’40s would be the Depression years and wartime.

  125. Someone wrote :

    They want to implement a tax so they can collect money from the rich polluters …

    Please, dear friends, can we keep politics and taxes and economics on another thread? Your comments are fascinating, but this is a science blog.

    Thanks for your cooperation,

    w.

  126. Stacey (01:57:20) :

    Dear Willis

    I know you looked at the CET and would appreciate a link, which I have lost?

    Sure, it’s here. I’d forgotten I’d written that.

  127. Now we know.
    I was certain there had to be a good reason for losing the raw data, as has been claimed by the Jones group.
    The good reason for having no raw data to use to either validate or invalidate the various IPCC reports is in the graphs shown in this thread.
    The so-called homogenization is in fact blatant fraud.
    One starts off with the conclusion: AGW must be “scientifically” proved in order to implement worldwide Carbon taxes. Such AGW is not found in the raw data. Darwin Station 0 is an instance of such raw data. How does one solve such an evident problem? One manipulates the raw data in order to arrive at the pre-determined conclusion.
    This is an instance of one picture being worth a thousand words. Alas, my browser does not allow me to superpose the various graphs. Would it be possible to present all of the graphs to a uniform scale, so that superposition would allow a better compare/contrast?
    This could best be done by the author, with the use of the original tables of information. The alternative is for the author to provide us with the tables of data used to generate the graphs, for us to use with our own graphing programs.

  128. I’m no climate scientist, but the ones making the news lately aren’t either. The homogenized data, skimpy tree ring data, and stopping the release of raw data were just a part of their bad science. They are simply criminals who should be prosecuted.
    I’m no climate scientist, but I did spend last night reading WUWT.

  129. Great work!

    Seems we are slowly getting there.

    And your own line sums it up nicely: “when those guys “adjust”, they don’t mess around”!

  130. Give us credit for honesty down here in the Great South Land. When our blokes ginger up the numbers they explain how they do it — peer-reviewed, of course.

    Check this out, for example. The liberties they took make my brain ache.

    http://www.giub.unibe.ch/~dmarta/publications.dir/Della-Marta2004.pdf

    This little beauty of deduction explains all the lurks, while also suggesting that the Australian records contain even more egregious examples of data goosing. In addition to the six degrees of difference (latitude and longitude) and 300 metres (plus or minus) elevation deemed acceptable in the selection of “appropriate” substitute stations, the authors explain that they sometimes go even further. Quote:

    “…these limits were relaxed to include stations within eight degrees of latitude and longitude and 500 metre altitude difference…” (bottom of pg. 77)

    Oy!

    I do wish someone smarter than I could take a good, hard look at the above document.
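
    For what it is worth, the selection rule as I read it is simple enough to write down. A sketch, with invented station tuples; this is not the authors’ actual code:

    def neighbours(target, stations, ddeg=6.0, delev=300.0):
        # stations: list of (name, lat, lon, elev_m) tuples.
        t_name, t_lat, t_lon, t_elev = target
        return [s for s in stations
                if s[0] != t_name
                and abs(s[1] - t_lat) <= ddeg
                and abs(s[2] - t_lon) <= ddeg
                and abs(s[3] - t_elev) <= delev]

    def neighbours_with_fallback(target, stations):
        # Try the stated limits first; relax to 8 degrees / 500 m
        # if nothing qualifies, as the paper describes.
        return (neighbours(target, stations)
                or neighbours(target, stations, ddeg=8.0, delev=500.0))

    The striking thing is how loose even the strict limits are: six degrees of latitude is roughly 660 km.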

  131. ” Your comments are fascinating, but this is a science blog.”

    If the topic is AGW related, it has very little to do with science. If we keep to your terms, we wouldn’t be able to talk about any of these issues. No science here as far as I can tell.

  132. “(Interestingly, before 1984, Orwell’s 1984 was required reading for all year 11/12(?) students in Victoria… now I can’t find anybody under the age of 40 who has read it)”

    Even though I am 40, I was never required to read it, although I did read it for the first time in 2004 and it scared the [snip] out of me too!

    I too have found few people who have read it. In fact I do not know of a single ‘X-factor, celebrity dance, jungle got talent on ice’ reality TV viewer that has read it. I wonder if there is a correlation there? Hmmmmmmmmmmmm.

  133. Thanks Willis, keep up the good work!

    I made a graph of the annual mean temperatures of Sweden, just like Wibjorn Karlen did. The result sure doesn’t look anything like the graph for the Nordic area in the IPCC report!

    Try it yourselves : http://www.smhi.se/hfa_coord/nordklim/index.php?page=dataset

    This needs to be done for all raw data there is, and where there are strange differences we need to demand a reasonable explanation!

  134. Willis,

    “While there are valid reasons for adjustments as you point out, adjusting a station from a 0.7C per century cooling to a 6 C per century warming is a complete and utter fabrication. We don’t have the whole story yet, but we do know that Darwin hasn’t warmed at 6 C per century. My only conclusion is that someone had their thumb on the scales. It’s not too early for that conclusion … it’s too late.”

    Apologies, I misread the per-century trend as +0.6/100 yrs rather than +6/100 yrs! I am in total agreement with you. I also realise you are more than well aware of everything I posted, but I thought I would throw it in anyway for the information of others, in case the actual nature of the changes was of general interest to those who (like myself until recently) were totally unaware of this fiddling with the data. I am actually outraged over this and love the work you did/are doing. Can’t wait to see more!

  135. Perhaps there is nothing dishonest or silly here, but when you won’t release data or method details, what are people expected to think? Until the raw data and the method of “correcting” it are made fully public, as the scientific method requires, the correct thing to do from a methodological standpoint is to treat this data as junk. It can’t be reproduced, so it isn’t science.

  136. My Democratic representatives aren’t smart enough to read this stuff for themselves (along with one or two Repubs, believe it or not). Every time I have sent a letter, I get back a nearly identical talking-points response from the lot of them. I never thought I would ever be reduced to just wanting to throw them all out, or even to not vote at all. This is truly making my left-leaning, registered-Democrat, AND patriotic Irish blood boil!

  137. w,

    Fascinating bit of work. Maybe you could comment, either here or in an update to the post above, on the following, taken from the Easterling and Peterson paper you quote from:

    A great deal of effort went into the homogeneity adjustments. Yet the effects of the homogeneity adjustments on global average temperature trends are minor (Easterling and Peterson 1995b).

    Do they ever put a figure on just how “minor” this effect is on the global average temperature trends? Are they referring to this?

    Or are they referring to something else?

    However, on scales of half a continent or smaller, the homogeneity adjustments can have an impact. On an individual time series, the effects of the adjustments can be enormous.

    Duh. I think you’ve demonstrated that very well.

    These adjustments are the best we could do given the paucity of historical station history metadata on a global scale.

    Well, maybe we need a global average that is without the adjustments.

    But using an approach based on a reference series created from surrounding stations means that the adjusted station’s data is more indicative of regional climate change and less representative of local microclimatic change than an individual station not needing adjustments.

    Important admission, and qualification.

    Therefore, the best use for homogeneity-adjusted data is regional analyses of long-term climate trends (Easterling et al. 1996b). Though the homogeneity-adjusted data are more reliable for long-term trend analysis, the original data are also available in GHCN and may be preferred for most other uses given the higher density of the network.

    I’m not persuaded about the usefulness, even for regional analysis. I think any use must look at before-and-after comparisons, like you’ve done here, before assuming anything about the usefulness of the adjustments.

  138. “Isn’t this similar to producing a marked-up copy of the dead-sea scrolls, with the corrections in felt-tipped pen, and missing bits added-in in biro, and then calling it ‘the definitive data-set’ ?”

    This is a perfect analogy.

    There is NO WAY that one can move a weather station 20 miles from a valley, or a shore location, to the side of a mountain and then continue to validate the temperature record just by making an arbitrary correction. The entire weather patterns at those locations will be entirely different, and the temperature record will not simply follow the same pattern at a different average temperature.

    The fact is that the temperature record is sooooo messed up that there is no way to determine a constant increase in temperature from the raw data, so it appears that they have made the data fit the science and hidden the fraud in the way they use the ‘necessary’ adjustments. All the people involved in the fraud agree with the outcome; they are all insiders in the man-made climate change religion, and so they all peer-review each other’s data and methodology and sign it off as sound. After all, they all get the same amount of warming. It is entirely conclusion-led science, AKA propaganda!

    Another analogy is that the climatologists are saying: OK, the car has a scratch on the door and a dent in the boot, and the tyres may be a little bald, depending on how you define bald, but the car is basically still sound and entirely roadworthy. We are saying: SHOW ME THE ENGINE! We got a glimpse under the bonnet and did not see one.

    This article is someone sneaking under the car to get a peek into the engine compartment and seeing no engine there, yet they still want to force us to buy the car!

  139. Before Climategate, I thought the science was dubious and the aim political but the temperature data correct, at least for the last hundred years or so.

    Right now I realise I was being too naive; the data seems to be flawed, just like the rest of the “science”!

  140. Source file= Jones+Anders
    Jones data to= 1998

    Says it all. More BS than a herd of buffalo.

  141. ozspeakup – that data set on the ftp site at BOM is, I think, the old Torok data set I referred to in an earlier comment. Check out the method.txt file if it is there and do a search for the paper mentioned online and you’ll find what you seek.

  142. Thanks Willis and Anthony.
    It was a very depressing day yesterday, thinking about Pearl and the EPA. I can only hope that truth will win out; it has in the past, as it might in the future. (I even feel that hope and change are tainted words, so I did not use them.)

  143. Campbell Oliver (04:12:34) :

    I’m very interested in following the discussions here. But there’s one thing I don’t understand, and I know that this is going to sound really seriously dumb, but I do want to know. It’s this. When people talk about an “anomaly”, I understand that this means a deviation from some value (probably some average, or a single reference value). But it seems that this value is never mentioned. So, my question is this: is there some standard definition of the value upon which temperature (and other) anomalies are based? If so, what is it? If not, how do people know what the actual temperature for some point is, given the value for the anomaly at that point?

    Many thanks for any pointers to some 101 (or even a kid’s pre-101).

    PS – I’ve tried googling “temperature anomaly definition” etc., with no luck.

    The only dumb questions are the ones you don’t ask. An anomaly can be taken around any point. Usually it is an average over a specified period, like 1961-1990. In this case it is an anomaly around the average of the dataset.

    w.
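
    For readers who want to see the arithmetic, here is a minimal Python sketch of the idea. The temperatures are invented for illustration (they are not Darwin data); only the baseline-and-departure logic matters:

        # Invented annual mean temperatures (deg C), keyed by year.
        temps = {1960: 28.8, 1965: 28.9, 1970: 29.1, 1980: 28.7,
                 1990: 29.0, 2000: 29.3}

        # Baseline: the average over a chosen reference period, here 1961-1990.
        base = [t for y, t in temps.items() if 1961 <= y <= 1990]
        baseline = sum(base) / len(base)

        # The anomaly for each year is simply its departure from that baseline.
        anomalies = {y: round(t - baseline, 2) for y, t in temps.items()}
        print(round(baseline, 2), anomalies)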

  144. This thing about not having raw data anymore. I am confused about that. There is raw, unadjusted station data that apparently can still be had by any Susie Q or Tommy T out there. Isn’t that the raw data? So who needs raw data from the Met? Correct me if I’m wrong, but the station survey is able to capture the raw data from each station and display it here. Isn’t that raw data? And easily obtained? What if we decided that our next challenge was to tabulate and average climate-zone station data (totally unadjusted) and run our own graphs here at WUWT? And I do mean by climate zone, so that we are not averaging together apples and oranges. Then let THEM argue about our methods.

  145. Very nice work Willis.

    How much time did you devote to the efforts described in your post? Just wondering how much effort would be involved to do the same thing for a random selection of a hundred or so stations around the world.

  146. More BBC propaganda. Husky dogs may not have employment and face a bleak future in a warmer world. This is really pathetic. Teams of husky dogs (which pull a sled) were replaced by motorized machines called snowmobiles (in Canada, a Ski-Doo) over 50 years ago.

    http://news.bbc.co.uk/2/hi/programmes/hardtalk/8399823.stm

    What does it matter – keep telling enough lies and keep repeating them and eventually people will believe.

    It is the same with polar bears – they are not in trouble or drowning at all (the population has actually increased five-fold since we restricted hunting).

  147. So the Copenhagen summit has just heard that the last 9 years are the warmest on record by a long way.

    You’d have trouble selling that one in the UK. 2003 experienced a particularly hot Summer due to the collision of two high pressure weather systems over Northern Europe. Even the warmists aren’t suggesting that was anything more than a weather event. But since 2003 Britain has had increasingly severe winters and overcast cool summers. Last winter we had snow so bad it brought the country to a standstill.

    It is like listening to a bunch of ranting lunatics of the kind that carry boards with “The End is Nigh” on them.

  148. Last week while updating my website (http://www.waclimate.net) with temperatures across Western Australia for November, I noticed something peculiar about August 2009 on the BoM website…

    http://www.bom.gov.au/climate/dwo/IDCJDW0600.shtml

    The mean min and max temps for August had all gone up by about half a degree C since first being posted by the BoM on Sep 1.

    Below are the min and max temps for the 32 WA locations I monitor. On the left of each arrow are the BoM website figures as recorded from Sep 1 to Nov 17; on the right are the new figures on the BoM website since Nov 17 …

    August 2009

    Albany: 9.0/16.2 → 9.4/16.6
    Balladonia: 5.0/20.7 → 5.5/21.1
    Bridgetown: 5.7/15.7 → 6.2/16.1
    Broome: 14.6/29.2 → 15.1/29.7
    Bunbury: 8.2/16.7 → 8.7/17.2
    Busselton: 8.7/17.0 → 9.2/17.4
    Cape Leeuwin: 11.8/16.2 → 12.2/16.6
    Cape Naturaliste: 10.5/16.7 → 11.0/17.1
    Carnarvon: 11.4/23.2 → 11.8/23.6
    Derby: 15.0/32.7 → 15.6/33.2
    Donnybrook: 6.7/17.2 → 7.2/17.6
    Esperance: 8.3/17.7 → 8.8/18.1
    Eucla: 7.9/21.5 → 8.4/21.9
    Eyre: 4.3/21.6 → 4.5/22.0
    Geraldton: 9.5/20.0 → 10.0/20.5
    Halls Creek: 16.1/32.6 → 16.6/33.0
    Kalgoorlie: 6.8/20.3 → 7.2/20.7
    Katanning: 6.1/14.7 → 6.5/15.1
    Kellerberrin: 5.3/18.6 → 5.6/18.9
    Laverton: 7.5/22.4 → 7.9/22.9
    Marble Bar: 13.8/31.1 → 14.3/31.5
    Merredin: 6.1/17.7 → 6.5/18.1
    Mt Barker: 6.8/15.6 → 7.0/15.8
    Northam: 6.2/18.4 → 6.6/18.7
    Onslow: 13.8/27.7 → 14.3/28.1
    Perth: 8.8/18.5 → 9.3/18.9
    Rottnest Island: 12.4/17.3 → 12.9/17.7
    Southern Cross: 4.6/18.1 → 5.0/18.6
    Wandering: 5.3/16.1 → 5.6/16.6
    Wiluna: 7.5/24.8 → 7.7/25.2
    Wyndham: 18.3/34.0 → 18.8/34.4
    York: 5.6/17.9 → 5.9/18.3

    I’ve questioned the BoM on what happened and received this reply …

    “Thanks for pointing this problem out to us. Yes, there was a bug in the Daily Weather Observations (DWO) on the web, when the updated version replaced the old one around mid November. The program rounded temperatures to the nearest degree, resulting in mean maximum/minimum temperature being higher. The bug has been fixed since and the means for August 2009 on the web are corrected.”

    I’m still scratching my head, partly because the bug only affected August, not any other month including September or October. There’s been no change to the August data on the BoM website since I pointed out the problem and they’re still the higher temps.

    So if anybody has been monitoring any Western Australia sites at all (or other states?) via the BoM website, be aware that your August 2009 temperature data may be wrong, depending upon whether you recorded it before or since Nov 17, and it’s not yet known what’s right and what’s wrong.

  149. A fine piece of detective work.

    Now all that remains to be done is to translate the terms into language a very simple layman (like a journalist or politician) can understand. Let’s try this – the raw data are the real temperatures while the adjusted data are what some scientists think the recorded temperatures should be.

    It’s no wonder all the public normally sees is the adjusted data. Don’t want to confuse people with too many facts. They might start acting smart, like asking embarrassing questions such as, “How many adjustments are compound adjustments?” {Adjustments on top of adjustments on top of adjustments, and so on.}

  150. Barry Foster (00:51:02) :

    That should read December 1941; December 1942 was a whole year out, and our colonial cousins would not have needed any warning by then. They would have figured it out for themselves from the carnage left by the Japanese carrier fleet!

    That was a great post, and it certainly looks like somebody has been telling porkies! I always understood that “homogenising” was what they did to milk to make it sterile for public consumption! Not far off the mark.

  151. I see that the surfacestations project of Anthony’s has to be expanded to include what is done with the data, because it seems that if the readings aren’t to the AGWers’ liking they “homogenize” them, and if they are concerned that one might find the homogenization beyond reasonable, they are tempted to throw the raw data away! We had better get at this quickly.

  152. “http://www.metoffice.gov.uk/climatechange/science/monitoring/subsets.html. It doesn’t look very user-friendly.”

    But /94/941200 is

    Number= 941200
    Name= DARWIN AIRPORT
    Country= AUSTRALIA

    non-adjusted I guess

  153. Unbelievable. First, they put the thermometer at an airport. Then they skew the readings upwards. Then they merge all the skewed and biased readings, by a highly suspicious algorithm, into one sleazy mess called a global temperature dataset.

    I am convinced that with unfudged SST data and high-quality station data, even far fewer in number than the thousand used for GISTEMP/HadCRUT, the global dataset would look like the Arctic record with less amplitude: the 1940s equal to present times.

    http://www.junkscience.com/MSU_Temps/Arctic75Nan.html

  154. Wow.
    Thank you for all your hard work, Willis – for the clear explanations, the graphs, and the smoking gun.

    I can’t believe these ‘adjustments’ – with steps like those one could prove anything at all.
    And that’s ‘science’?
    I must have been absent when they taught that in college …

  155. In view of the (quoted) current (as of 10:00 AM EST) headline on Climate Depot, may this curmudgeon of a former English teacher remind everyone that ‘breath’ is a noun. (Proposed new motto for the EPA: ‘Every breath you take, I’ll be watching you.’) The verb is ‘breathe’.

    There are, of course, phonemic changes in the pronunciation: in the verb, the dental fricative represented by the spelling /th/ is ‘voiced’, i.e. produced with vibration of the vocal cords. Since in English voiced consonants invite lengthening of preceding vowels, this is ‘long’ /ea/. All of this in contrast with the unvoiced /th/ and ‘short’ /ea/ in the noun ‘breath’.

    This variation in medial vowel sounds is called in linguistics ‘guna’, from the Sanskrit. It is a persisting characteristic of Indo-European languages, and often has grammatical significance, as in irregular verbs in English, e.g. ‘lead’ is present tense, ‘led’ is past. In languages with complex rules for producing verb stems, e.g classical Greek, guna is crucial.

    Of course in the case of classical Greek, as well as of many modern languages, we have to deal with the phenomenon of ‘declension’; whereas in English we ‘hide the declension.’

    Whew! That was a long way to go for a punchline!

  156. Ok, my chin hurts.

    Once from my jaw dropping and hitting the desk after seeing the magnitude of the man-made global warming, then from looking at the severity of the Darwin bombing. Great article, and thanks also for adding the local history.

  157. Darwin was attacked by the Japanese during WW2. So, accurate temperature recording may have been a lower priority during the early 1940’s.

  158. Story in the BBC today.

    “We’ve seen above average temperatures in most continents, and only in North America were there conditions that were cooler than average,” said WMO secretary-general Michel Jarraud.

    It’s interesting how North America, with the most stations and technology, is the only one that shows cooling. It’s warming everywhere else. Naturally.

    These people are shameless in their manipulation.

    http://news.bbc.co.uk/2/hi/science/nature/8400905.stm

  159. Having edited and graphed up a lot of N. European stations from v2.mean, I don’t think that what you have found in Darwin is in any way unique.

    My current theory about the large differences between v2.mean and GISS (who knows with HadCRUT) is that they are not the result of malice, as many believe, but of the kind of bulk processing operations made easy by the computing power available.

    Let me explain. In much the same way that individuals are lost in the bulk processing operations found in everyday activities (the “I am not a number!” type), the same can be said of individual stations when processing so many records. That is to say, what is being processed is lost; only the results are important.

    Just as a quick view of the scale: v2.mean has some 596,000 entries; that is almost 600,000 years’ worth of annual records. v2.mean_adj has some 422,000, so let’s round up a little and say that the difference is 200,000 years. Each year has 12 points; that is 2,400,000 monthly means (let’s not go to daily max/min).

    So in some way 2.4 million points have, by some means, disappeared. My point here is simply that it would take a very determined individual to hand-process 600,000 records down to 400,000, examining 7,200,000 data points along the way.

    I have hand edited about 160 “local” stations and tedious doesn’t even begin to describe the experience.

    So I’m left with my “warming as an artifact of bulk data operations” theory: bulk operations which attempt, very badly, to make sense of individual stations. Nobody wants to go back over the results and check what has happened to individual stations where the end result (the global average) is within “expectations”. The code would only be checked where there was plainly “something wrong” with the results. The processing code would then be changed and the whole job re-run until “expectations” were met.
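
    To make the bookkeeping concrete, here is a rough Python sketch of the kind of bulk comparison described above. It assumes GHCN v2-style files in which each non-blank line carries one station-year of 12 monthly means; the file names and layout are assumptions here, so check the v2 readme before leaning on it:

        # Count station-years in a GHCN v2-style file (one line per station-year).
        def station_years(path):
            with open(path) as f:
                return sum(1 for line in f if line.strip())

        raw = station_years("v2.mean")       # unadjusted
        adj = station_years("v2.mean_adj")   # homogeneity-adjusted
        print(raw - adj, "station-years dropped by the adjustment process;")
        print(12 * (raw - adj), "monthly means gone with them")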

    It is interesting to look at v2.mean Iceland and the same stations via GISS. They are very much the same and I believe that this may be because Iceland neatly escapes many of the adjustment processes that you identify in N. Australia.

  160. Copenhagen climate summit in disarray after ‘Danish text’ leak

    The UN Copenhagen climate talks are in disarray today after developing countries reacted furiously to leaked documents that show world leaders will next week be asked to sign an agreement that hands more power to rich countries and sidelines the UN’s role in all future climate change negotiations.

    http://www.guardian.co.uk/environment/2009/dec/08/copenhagen-climate-summit-disarray-danish-text

    Download here

    http://www.guardian.co.uk/environment/2009/dec/08/copenhagen-climate-change

  161. Much of what we know was built on theory.

    As part of the Scientific process many theories have been proven wrong and new ones adopted.

    Al Gore and many in our government are gaining power, money and influence by supporting these false data. We now have an EPA that believes it has more power and influence than God and Country combined!

    We are in the process of turning over our freedom, liberty, and wealth to a new Religion.

  162. Wow, I have not seen that airport for a long while. Last time was in 1996, but the most memorable time was just after Christmas, 1974, when I was helping get people onto planes.

    It has changed a lot.

  163. “David (07:34:50) :

    Darwin was attacked by the Japanese during WW2. So, accurate temperature recording may have been a lower priority during the early 1940’s.”

    Or higher if the temperature is of any importance to airplanes, tanks and troops…

  164. I’ve seen the light – global warming really is anthropogenic!

    The globe itself is probably not warming, certainly not any more, but the global temperature record is another matter altogether.

  165. geronimo (07:49:33) :

    OT I know, but news just coming in at the Guardian has a leaked document of the proposals which is causing uproar at Copenhagen.

    http://www.guardian.co.uk/

    Are we certain that this is not a hack, we wouldn’t want to get this wrong now, would we :)

  167. RE : HadCRUT and your FOI requests.

    GISS seems to perform the adjustments and stats “on the fly”, mainly using v2.mean as a base. There doesn’t seem to be a bulk list left behind to compare with the original (v2.mean), so bulk comparisons are impossible.

    But from what I have read in the Climategate files, CRU seem to store their adjustments in a database, the adjustments and stats being two separate processes. Perhaps this is why there is no chance of us ever seeing that data as used by CRU: it would allow bulk, station-by-station comparisons with the “raw” data (v2.mean?).

  168. “http://www.metoffice.gov.uk/climatechange/science/monitoring/subsets.html. It doesn’t look very user-friendly.”

    The explanations given by the Met office are illuminating…

    http://www.metoffice.gov.uk/climatechange/science/monitoring/subsets.html

    “Question 4. How can you be sure that the global temperature record is accurate?

    The methodology is peer reviewed. There are three independent sets of global temperature that all clearly show the rise in global temperatures over the last 150 years. Also we can observe today that other aspects of climate are changing, including reductions in Arctic sea ice and glacier volume, and changes in phenological records, for example the dates on which leaves, flowers and migratory birds appear.”

    So there you go, definitive proof of data accuracy via the dates upon which migratory birds appear. If any journalists who own a garden are reading this blog, perhaps the above comment will assist in understanding why so many people are sceptical of the alleged ‘science’ from these institutions.

    Does anyone else think it is curious that being ‘peer reviewed’ is cited alongside migratory bird timing as proof of accuracy? Perhaps the Met Office thinks being ‘peer reviewed’ is no longer enough after the Climategate emails cast doubt on the process?

    It is sad that reputable institutions have fallen so low…

  169. As a layperson it took me some time and study to understand and appreciate Eschenbach’s post. This is also true of similar posts on the subject of “climate change”. Of course it was not written for the layperson but for those who are engaged in the study of the subject. The advocates of anthropogenic climate change have gotten traction in the media by simplifying the subject so the average person can understand it. Any trial lawyer will tell you that you can’t persuade a jury of laypersons using technical language; you must state it in simpler language. I would like to see someone or some group with credible credentials issue public statements on the subject which are understandable.

  170. Willis,

    The truly raw data for the stations are in the daily temperature records (.dly files) on the GHCN FTP site. What is interesting is that when GHCN creates the monthly records that they (and GISS) use, they will throw out an entire month’s worth of data if a single daily reading is missing from that month.

    When a month is missing from the record, GISS turns around and estimates it using a convoluted algorithm that depends heavily on the existing trend in the station’s data, thus reinforcing any underlying trend. GISS can estimate up to six months of missing data for a single year using this method.

    It seems to me the best place to start is with the raw daily data: find out how many “missing” months are missing only a small handful of days, and estimate the monthly average for those months, either by ignoring the missing days or by interpolating them. A sketch of that idea follows.
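
    As a hedged illustration of that suggestion, here is a minimal Python sketch that computes a monthly mean while tolerating a few missing daily readings instead of discarding the whole month. The three-day tolerance and the data layout are assumptions for illustration, not GHCN’s actual QC rules:

        # Monthly mean from daily readings, tolerating up to max_missing gaps.
        def monthly_mean(daily_temps, days_in_month, max_missing=3):
            valid = [t for t in daily_temps if t is not None]
            if days_in_month - len(valid) > max_missing:
                return None  # too many gaps: reject the month
            return sum(valid) / len(valid)  # otherwise, ignore the missing days

        # Example: a 30-day month with two readings lost.
        june = [28.0 + 0.1 * d for d in range(28)] + [None, None]
        print(round(monthly_mean(june, 30), 2))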

  171. Excellent analysis; it looks like you did more work on this than they did in 10 years. Once they publish all the raw data …

    So in summary: there are three global datasets (CRU, GISS, and GHCN) used to justify warming temperatures; however, they are all based on the same underlying data (GHCN), and this data requires adjusting because of changes to the weather stations and the positions of thermometers. When you look at the underlying GHCN data in detail, every time they make these adjustments they adjust the temperature upwards, without justification. Simple as that.

    When the raw data is published, every fool on the planet will be left naked in the altogether with their hands over their nuts. Now, everybody who wanted carbon taxes, raise your hands.

  172. The only thing all this hard work proves is that they have a motive to change raw data.
    I see they have a motive to fight the release of raw data.

  173. “This thing about not having raw data anymore. I am confused about that. There is raw unadjusted station data that apparently can still be had by any Susie Q or Tommy T out there. Isn’t that the raw data?”

    This has bothered me for some time. Supposedly GISS, NOAA, and Hadley use the same stations, but they all come up with different temperature reconstructions. GISS applies different adjustments to the same data than NOAA or Hadley do, and vice versa. And I am not so sure they all use the same reporting stations. All perform very questionable, and many times unpublished, adjustments to different stations. To make sense of it all is impossible.

  174. Good analysis, Willis. One minor point; if it has already been addressed above, just ignore me. Where you have the phrase “and also a fine way to throw away all of the inconveniently colder data prior to 1941”, shouldn’t that be “inconveniently warmer data”?

  175. According to Torok et al (2001), the UHI in small Australian towns can be expressed as

    dT = 1.42 log10(pop) - 2.09

    For Darwin with a population of 2,000, the UHI is 2.60 C.
    For Darwin with a population of 120,000, the UHI is 5.12 C.
    The net warming then is 2.52 C, which explains all the warming that Eschenbach shows in Figure 7. Presumably the rapid growth in Darwin’s population began in 1942, and it was relatively constant before then.

    It appears that no UHI correction has been made. If they implemented it, then the warming would totally disappear.

    See http://noconsensus.wordpress.com/2009/11/05/invisible-elephants/
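
    For anyone who wants to check the arithmetic, the quoted expression reproduces the figures above if the log is taken as base 10 (an assumption here, but the only base that matches the numbers):

        import math

        # Torok et al. (2001) UHI expression as quoted, assuming log10.
        def uhi(pop):
            return 1.42 * math.log10(pop) - 2.09

        early = uhi(2000)     # ~2.60 C at a population of 2,000
        late = uhi(120000)    # ~5.12 C at a population of 120,000
        print(round(early, 2), round(late, 2), round(late - early, 2))  # net ~2.52 C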

  176. http://news.bbc.co.uk/2/hi/science/nature/8400905.stm

    In a separate move, the Met Office has released data from more than 1,000 weather stations that make up the global land surface temperature records.
    The decision to make the information available is the latest consequence of the hacked e-mails affair.
    “This subset release will continue the policy of putting as much of the station temperature record as possible into the public domain,” said the agency’s statement.
    “As soon as we have all permissions in place we will release the remaining station records – around 5,000 in total – that make up the full land temperature record.

    Love this quote:

    “We’ve seen above average temperatures in most continents, and only in North America[where the numbers have been open to scrutiny and adjustment – TJA] were there conditions that were cooler than average,” said WMO secretary-general Michel Jarraud.
    “We are in a warming trend – we have no doubt about it.”

  177. I think a major story will be the leaker, more so than the leak. You will find that the IPCC etc. will soon not be at all interested in investigating it; neither will UEA. The investigators may be under intense pressure not to release information about the leak. This explains the intense desire of the AGW side to make sure the public thinks that the KGB/goblins etc. were responsible. I think that if it is found that it was an internal leak, the IPCC and all associated with it might as well disband and go home.

  178. I have just asked the Met Office if the data they have made available is raw or processed (e.g. to correct non-warming). They referred me to this statement:

    “The data that we are providing is the database used to produce the global temperature series. Some of these data are the original underlying observations and some are observations adjusted to account for non climatic influences, for example changes in observations methods.”

    So now I know. It’s the non-climatic influences I’m worried about…

    I hope you guys can find examples where clear comparisons can be made along the lines of the current post.

  179. A few months after Pearl Harbour, the Japanese bombed Darwin flat. Some good wrecks in the harbour :-)

  180. Never was a single smoking gun more clearly and devastatingly exposed.
    Major kudos to Mr. Eschenbach for his outstanding and meticulous effort.

    But: Now what ?? . . .
    Given the magnitude and extent of this demonstrated “artificial adjustment”, the “false in one, false in all” assumption already mentioned is a fair starting point as far as reasonable suspicion goes. But to have a shot at convincing the general public, let alone the MSM, I expect a significant number of other stations around the world will have to be shown to have undergone similarly large and unjustified “extended warping” of the data.

    This is what computers are good at (IF the software is professionally done):

    Seems like it should be possible to write a program that would take as input:
    [1] The totally unadjusted raw data for individual stations; and:
    [2] The end-result data after all tweaking by CRU, GISS, & GHCN.
    Minor SIDEBAR detail:
    Of course 1st you have to actually GET both sets of data. . .

    Ideal program output would then be graphical comparison along the lines of the excellent presentation in this thread start. . . . . Having said that, as someone who spent 25 years on a complex software engineering project, I immediately add:
    Yes: I am aware that done right, this would be a non-trivial software project.

    And: Also recognize as was pointed out in prior comment by TheSkyIsFalling that metadata giving reasons for real-world adjustments for individual stations would need to be reviewed.
    OTOH:
    Surely one would not need to do a huge number of stations worldwide to reasonably demonstrate and prove a pervasive smoking gun, if similar results were common; i.e.:
    Seems like a few dozen or so similar examples would start to be pretty overwhelming hard evidence.

    Interesting times, indeed. . . .
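
    As a first cut at what such a program might look like, here is a skeletal Python sketch: for each station it subtracts the raw annual means from the adjusted ones and flags stations whose net adjustment drifts by more than some threshold. The data structures and the 0.5 C threshold are illustrative assumptions, not a spec:

        # Per-year difference between adjusted and raw annual means.
        def adjustment_profile(raw, adjusted):
            years = sorted(set(raw) & set(adjusted))
            return {y: round(adjusted[y] - raw[y], 2) for y in years}

        # Flag a station whose adjustments drift over time (a "warp").
        def flag_station(raw, adjusted, threshold=0.5):
            prof = adjustment_profile(raw, adjusted)
            years = sorted(prof)
            drift = prof[years[-1]] - prof[years[0]]
            return abs(drift) > threshold

        raw = {1900: 29.0, 1950: 28.8, 2000: 28.9}   # invented numbers
        adj = {1900: 28.5, 1950: 28.9, 2000: 30.1}
        print(adjustment_profile(raw, adj), flag_station(raw, adj))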

  181. On the CRU curve, I really don’t understand what I see. Well, I think the black line is the raw data, since you mention that under the graph. But the red and blue shaded areas? Model predictions? Or?

  182. Willis,

    Where was station zero located?

    I am reasonably familiar with Darwin and might know where the station was located.

    However, I cannot think of anything that would cause them to be 0.6C high for such a long period of time and with what looks like a gradually declining trend from 1880 to 1940.

    Are you just trying to hide the decline? :-)

    While that large jump in 1941 looks suspicious, I think it was later in the war that the Japanese bombed Darwin. (I would have to look it up, but as I recall Pearl Harbor was late in ’41, wasn’t it? Something like December 7, 1941. While the RAAF was in the region, probably largely operating from Tindal, the US military presence in Darwin did not commence, I believe, until the US entered the war.) So I suspect that the bombing did not cause the problems.

  183. Mr. Eschenbach:

    I appreciate your effort in this matter. Your post has been shared with a person who gives great authority to the existing ‘academic community’. That person has dismissed your findings as opinion and unsupported personal conjecture that the process is broken.

    Part of our discussion has hinged on your statement “So I looked at the GHCN dataset.” While acknowledging that the blog venue doesn’t require the same level of source citation as a peer-reviewed journal, your sources have been questioned.

    Could you provide a more detailed reference/link to the GHCN data in question [both raw and adjusted]? Thanks and regards.

  184. Good grief. Enough with the unmitigated speculation and hyperbole.

    Willis – excellent analysis, but you go a bridge too far with your conclusions.

    “Yikes again, double yikes! What on earth justifies that adjustment? How can they do that? … Why adjust them at all?”

    Those are very good questions. Making claims requires answers.

    “They’ve just added a huge artificial totally imaginary trend to the last half of the raw data!”

    You don’t know that. You should not claim to know that which you do not. That is Teamspeak; leave it to the Team.

    You have turned up what we already knew: that the alleged ‘global warming’ trend is a function of the adjustments applied to the raw data (or, as in the case of UHI and similar effects, not applied) as much or more so than of the raw data itself. That is worrying, but not necessarily illegitimate.

    The example that you have done an excellent job of laying out here demonstrates that those who are making these adjustments need to have very good explanations for why they did so. Having stuck your neck out and called them dishonest, you had better pray that they don’t have good explanations for those adjustments. If they do, the best that is going to happen is that you, and by the broad brush we, are going to be made to look like a bunch of biased, ranting fools.

    Perhaps you should stick to pointing out worrying potential issues (again, good job of that) and save the claims of gross incompetence and malfeasance for after the questions you raise have been answered.

    JJ

  185. Could somebody help me out here? In The Times (http://www.timesonline.co.uk/tol/news/environment/article6936328.ece), we have the claim that CRU’s data was destroyed:

    ‘In a statement on its website, the CRU said: “We do not hold the original raw data but only the value-added (quality controlled and homogenised) data.”

    The CRU is the world’s leading centre for reconstructing past climate and temperatures. Climate change sceptics have long been keen to examine exactly how its data were compiled. That is now impossible. ‘

    However, over on RealClimate you’ve got Gavin Schmidt making claims like this:

    [Response: Lots of things get lost over time, but no data has been deleted or destroyed in any real sense. All of the raw data is curated by the relevant Met. Services. – gavin]

    [Response: No. If that was done it would be heinous, but it wasn’t. The original data rests with the met services that provided it. – gavin]

    So which is it? Does the data exist somewhere or not? If it does then I don’t even understand why CRU even made such a dramatic announcement. Why didn’t they just say what Gavin Schmidt says above?

    But on the other hand, reading this article it almost seems like the author DOES have access to raw data (and normalized data) which he used to calculate the Darwin adjustments. Is this the same data that CRU started with? If so, why not use this data to start with to reproduce CRU results? If the data does exist in some format I don’t see why there is any controversy at all.

    Seems like somebody is being disingenuous but not sure who. This is something I find incredibly frustrating about this issue. The scientific issues are understandably opaque and subject to debate. That’s hard enough to get to the bottom of. But even simple things like, ‘Was the data destroyed or not?’ are subject to so much spin it’s nearly impossible for somebody who’s trying to be objective to sort it all out.

  186. Given the amount of money they are talking about just for “Cap-n-Tax” (not to mention the IPCC money), this distortion is criminal.

  187. Given the thoroughness of Willis Eschenbach’s methodology in tracing the Darwin temperature record, and what looks like a successful survey of US surface temperature stations (www.surfacestations.org), maybe a similar effort could be developed to audit these three main temperature databases. Establish a single method/process for review of the record, with required data formats, etc., publish a manual and let the globe have at it.

  188. Interesting:

    From: Phil Jones
    To: Kevin Trenberth
    Subject: One small thing
    Date: Mon Jul 11 13:36:14 2005

    Kevin,

    In the caption to Fig 3.6.2, can you change 1882-2004 to 1866-2004 and
    add a reference to Konnen (with umlaut over the o) et al. (1998). Reference
    is in the list. Dennis must have picked up the MSLP file from our web site,
    that has the early pre-1882 data in. These are fine as from 1869 they are Darwin,
    with the few missing months (and 1866-68) infilled by regression with Jakarta.
    This regression is very good (r>0.8). Much better than the infilling of Tahiti, which
    is said in the text to be less reliable before 1935, which I agree with.
    Cheers
    Phil

    Prof. Phil Jones
    Climatic Research Unit Telephone +44 (0) 1603 592090
    School of Environmental Sciences Fax +44 (0) 1603 507784
    University of East Anglia
    Norwich Email p.jones@xxxxxxxxx.xxx
    NR4 7TJ
    UK

  189. Wonderful text.
    Minor typos:
    – figure 4, legend: bad copy-and-paste of the legend of figure 3 (remove the reference to the year 2000);
    – the right Latin saying is “Falsus in uno, falsus in omnibus”.

  190. kwik (08:37:09) :

    “On the CRU curve, I really don’t understand what I see. Well, I think the black line is the raw data, since you mention that under the graph. But the red and blue shaded areas? Model predictions? Or?”

    First, the black line in Figure 1 is labelled “observations”; however, that is not the raw observation. That is the observation after adjustment.

    Second, from what I understand, the red area is what the model says the temp should be with CO2 forcing. The blue area is what the model says the temp should be without CO2 forcing.

    Now when I looked at Fig 1 and Fig 2, to my Mark 1 eyeball the raw looks like it correlates closely with what the models say the temp would be WITHOUT CO2 forcing (the blue shaded area).

    Earlier I asked if Willis had laid the raw data over the graph in Fig 1 to see if it does correspond with the blue area. The reason being, if the IPCC’s own models without CO2 forcing match the raw data and Willis’s reconstruction, that in turn gives credence to the idea that people are adjusting the raw data to match the models’ red area; in other words, that is why you get that huge adjustment.

  191. Billy (08:49:21),

    The obvious answer: Produce the raw data. The hand-written, signed and dated B-91 forms recording the daily temps at each surface station would be a good start.

    JJ (08:48:13):

    “…the alleged ‘global warming’ trend is a function of the adjustments applied to the raw data (or, as in the case of UHI and similar effects, not applied) as much or moreso than the raw data. That is worrying, but not necessarily illegitimate.”

    What smacks of illegitimacy is the fact that when the data is massaged, it almost always shows warming: click1, click2, click3.

    For the true global temperature, a record of temperatures from rural sites uncontaminated by UHI would show little if any global warming: click1, click2 [blink gif – takes a few seconds to load].

    The CRU, the IPCC, the NOAA and the rest of the government funded sciences offices are trying to show an alarming increase in global temperatures. They almost always show a y-axis in tenths of a degree to exaggerate any minor fluctuations. But by using a chart with a less scary y-axis, we can see that nothing unusual is occurring: click

  192. I think we (the internet community) can end this debate once and for all … Using the stations cherry-picked by the IPCC, we could set up station teams via internet volunteers to review the raw vs. “value added” GHCN data and validate those adjustments … where an adjustment appears to have been applied without good reason, the team should attempt its own adjustment based on logical and justifiable reasoning …
    This should allow the world to have a verified record set of actual temp measurements for at least the last 100 years … we don’t have that now …
    step one would be to classify station locations for appropriateness … bad sites would be marked for adjustment or exclusion … adjustments should never be averages; they should be delta adjustments based on nearby reliable (i.e. non-bad) sites, as in the sketch after this list …
    An Army of Davids so to speak …
    No reason this can’t happen within a year or two if someone can coordinate it …
    Set up clearly defined rules on site validation …
    Set up clearly defined adjustment methods to measure the warmists “valued added” against …
    Use those adjustment methods to re-adjust the egregious site adjustments …
    Create a peer review process to allow a second, third and fourth set of eyes to validate the work done by the team …
    Allow anyone to join a team … anyone … Warmists are Welcome :)
    Team decisions should be a super majority i.e. >66%
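
    For what it’s worth, here is one minimal Python reading of a “delta adjustment”: shift the suspect station by its mean offset from a trusted neighbour over their overlap years, rather than splicing in the neighbour’s values. The numbers are invented, and the method is only a sketch of the idea in the list above:

        # Offset a suspect station by its mean difference from a trusted neighbour.
        def delta_adjust(suspect, neighbour):
            overlap = sorted(set(suspect) & set(neighbour))
            delta = sum(neighbour[y] - suspect[y] for y in overlap) / len(overlap)
            return {y: round(t + delta, 2) for y, t in suspect.items()}

        suspect = {1990: 21.0, 1991: 21.2, 1992: 21.1}
        neighbour = {1990: 21.5, 1991: 21.6}
        print(delta_adjust(suspect, neighbour))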

  193. “Some of these data are the original underlying observations and some are observations adjusted to account for non climatic influences, for example changes in observations methods.” – PeterS quoting the Met Office

    And with no visibility whatsoever of what those “adjustments” were, despite knowing full well (or because they know?) that therein lie the principal suspicions of dodgy data manipulation.

    Hm. I wonder if the Met Office even *have* records of how those adjustments were done? Or whether some of them were done with long-lost bits of code on long-dead computers and what they actually have is an archive of raw data A, an archive of adjusted data B and an embarrassing lack of repeatability of how they got from A to B in the first place. Maybe they’re looking at the likes of Darwin and going [snip] too – except they can’t and won’t admit they have a problem.

    Or was that what ‘Harry’ was trying to do? Trying to replicate past adjustments in putting together the adjusted database? And did he ever “succeed”?

  194. At what point do we declare that these temperature records and proxies have too little confidence to be of any use in determining if AGW exists?

    Would a going-forward position be to use a well-understood set of measurements and monitor changes for the next X years, to see which models (hypotheses) are working? It seems Copenhagen may fail and that the “catastrophe” hypothesis is dying under the weight of Climategate — so why not?

  195. I think what is missing from the current “coverage”, so-called, is this kind of analysis, to beat back the claims that the science underpinning (I would love to use the word undermining) the CRUgate emails is sound.

    What I typically see is a talking head who interviews a warmist, and the talking points are driven by the warmist – again, no debate.

    1) Emails were hacked
    2) The emails and specific comments have been taken out of context
    3) The science is sound

  196. Fine work as usual, Willis.

    ******
    8 12 2009
    JP (04:46:12) :

    This subject was covered in a CA thread some years ago. I believe it came up when someone discovered the TOBS adjustment that NOAA began using. The TOBS adjusted the 1930s down, but the 1990s up. Someone calculated that the TOBS accounted for 25-30% of the rise in global temps.
    ******

    The TOBS issue was the first thing I thought, too, to explain the massive adjustments. They seem to use this as a catch-all adjustment because by nature the correction can be quite large in some specific instances (in both directions). The metadata to confirm this perhaps isn’t available — I don’t know.

  197. Willis,

    I don’t know where the NOAA URL is that gives individual station data (as opposed to data sets) so I went to the NASA/GISS site with the individual stations data.

    http://data.giss.nasa.gov/gistemp/station_data/

    The Darwin raw data there (http://data.giss.nasa.gov/cgi-bin/gistemp/findstation.py?datatype=gistemp&data_set=0&name=Darwin) seems to correspond with the raw data you showed in your graphs. But the homogenized data (at http://data.giss.nasa.gov/cgi-bin/gistemp/findstation.py?datatype=gistemp&data_set=2&name=Darwin) does not look like what you show as homogenized data.

    Could you give a URL for the site where you got the data (and indicate which file if it is an ftp or otherwise multiple file listing)?

    Thanks.

  198. It gets worse…

    The latest adjustment for Darwin airport is +2.4 deg C

    It would appear that as new data for that station is added each month going forward it will be immediately adjusted upwards by such a large amount by the “scientists”. How can this be right given that current temperature data with modern instruments can be expected to be more accurate than historical data?

    Also if the UHI effect is taken into account any adjustment should be negative, not positive….

    If this kind of fiddle is happening with current readings from many other stations around the world, then it is little wonder that the Met Office is claiming that the current decade is the hottest ever!

  199. I am an engineer, but I thought it was obvious to all what a scam the GW program is. Of course the data are “poop”. See Icecap, Dec 5, 2009, “Alaska Trends” by Dr Richard Keen for more “poop” in Alaska records.

  200. Insightful comment:

    vdb (01:05:25) “An adjustment trend slope of 6 degrees C per century? Maybe the thermometer was put in a fridge, and they cranked the fridge up by a degree in 1930, 1950, 1962 and 1980?”


    I agree with ForestGirl (01:11:06) about the need for site profiles. When I used to do forest & soils research, we used to take 2 or 3 pages worth of notes on the physical setting of each study plot. The log notes were diligently incorporated into electronic databases with full awareness of their value to future investigators.

    Boris Gimbarzevsky (01:07:15) “[…] the only time one sees homogeneity is in the winter where every place in my yard not close to the heated house is uniformly cold […]”

    You raise some interesting observations Boris. I hope many will consider your suggestion. I’ve been studying coastal BC stations. Serious questions arise.


    Those who seem to immediately conclude ~1941 issues at Darwin must relate to war activities need to review:
    a) their Stat 101 notes on confounding.
    b) climate oscillation indices (e.g. SOI, NAO, PDO, AMO, etc.)
    c) Earth orientation parameters.
    Don’t make the mistakes Mr. Jones has made!


    Great article – only the 2nd really interesting article on WUWT since CRUgate broke. This thread has inspired me to dust off a cross-wavelet analysis of TMax vs. TMin that highlights …. (may be continued…)

  201. Is it just me, or does it seem that every time someone tries to reconcile the raw data with the “value added” data, unexplainable upward adjustments are almost always found?
    And why would Australia, which is 79% the size of the USA, use only 3 stations?
    If they told you they used 5 stations for the entire USA, would anyone ever listen to them again?

    Garbage In –> Garbage Code –> Garbage2 Out

    I’m coining the term G2O …

  202. Entertain me for a moment in history.
    I believe in the actual proven conspiracy, Climategate; all the ingredients are there. Whether you believe in conspiracies or not is of no consequence. Hypothetically speaking, if there is a greater conspiracy of the Illuminati using man-made climate change to their ends, Climategate may be payback to the NWO conspirators, to commemorate the assassination of JFK. The e-mails were leaked on the same day of the week Kennedy died, 2 days before his actual assassination date. Perhaps they had to be leaked to the Internet on the day they were because the offices would be closed over the weekend. Close enough, eh?

  203. Just did a few calculations of raw vs. homogenised GHCN data for my home town of Brisbane, using the Eagle Farm airport station, and the trend flips from -0.6/100 yrs to +0.6/100 yrs looking at the data from 1950 to 2008. Not as dramatic, but still interesting. The data pre-1978 is adjusted down and post-78 adjusted up. I have not looked into any reasons, so I am just throwing it out there. I’ll get a few plots up tomorrow. I have no station history to see if there are reasons for this adjustment. I agree that a few dozen stations carefully examined by Willis and others who know what they are doing would be a good step.
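
    For anyone wanting to repeat this kind of check, here is a minimal least-squares trend in deg C per century, in Python. The series below is invented to show the mechanics; it is not Eagle Farm data:

        # Ordinary least-squares slope of annual means, scaled to deg C/century.
        def trend_per_century(series):
            n = len(series)
            xbar = sum(series) / n              # mean year
            ybar = sum(series.values()) / n     # mean temperature
            num = sum((y - xbar) * (t - ybar) for y, t in series.items())
            den = sum((y - xbar) ** 2 for y in series)
            return 100 * num / den

        series = {1950 + i: 20.0 + 0.006 * i for i in range(59)}  # 1950-2008
        print(round(trend_per_century(series), 2))  # ~0.6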

  204. Smokey,

    My point being that “smacking of” this and that is not sufficient grounds for making damning accusations. It is not only immoral to do so, it is very bad strategy. Do that enough, and at some point you are going to end up having your ass handed to you on a silver platter.

    That the adjustments applied to date almost always show warming is not necessarily illegitimate in the aggregate, much less is it any proof that the adjustments applied to any particular station are wrong, as Willis is claiming here.

    It is entirely possible that the adjustments applied to the Darwin station are complete and justifiable, in which case Willis is a blathering idiot, not a very nice person for having made baseless accusations, and he, Jones and Mann can share a suite at the next American Sphincters Association convention.

    It is also entirely possible that the adjustments that have been applied are legitimate, but that there are compensating adjustments (such as UHI or other siting issues) that have innocently or intentionally been left out. That leaves the current accusations just as off the mark. GHCN would be judged incompetent and/or culpable, but not for any of the reasons listed here.

    It is also entirely possible that the adjustments that have been applied are incorrect, but that they were the result of a mistake or unconscious bias. GHCN is then error-prone, but not criminal.

    It is also entirely possible that illegitimate adjustments that have been applied and other legitimate adjustments left out, in a concerted effort to cook the books.

    The information we have right now is consistent with all of these hypotheses. Stick to reporting the facts, and leave the fanciful storytelling to Team members describing their proxies.

    Note that Anthony’s work here has been to catalogue station siting issues that potentially demand adjustment or other response (such as outright discarding) that are every bit as intensive as the adjustments seen here. Railing against adjustments per se is to slit one’s own throat, as is unsupported accusation of criminality …

    JJ

  205. This is a long thread, so sorry if my question has been asked previously, but when were these adjustments to the raw data first made? Before 1988 or after?

  206. Anyone made a joke about those Southern Hemisphere thermometers being upside down, yet?

    That’s obviously why the trend changed direction, once they were normalized.

  207. I haven’t had the time to read all the comments, but following a link in the article and further, I found a comment which noted that the original Darwin station was at the Post Office. A station was installed at the Airport (at some point), and the Post Office was destroyed when Darwin was bombed in February 1942. The last monthly average for the Post Office station was January 1942. It is therefore possible that the discontinuity in the record is a switch to the Airport station for 1941 et seq.

  208. JJ (09:59:35),

    All of your “entirely possible” comments can be resolved by the full and complete disclosure of all of the raw and massaged data, and the hidden methodologies and “adjustments”, that the Team uses to come up with their scary AGW conclusions.

    As they say, just open the books. Then everyone can see if they’ve been cooked. The fact that they’re still stonewalling tells me all I need to know about their veracity and accuracy. The leaked emails and code only reinforce my suspicion, and any contrary and defensive red-faced arm-waving does nothing to convince me otherwise.

  209. Smokey (10:12:52) :

    But Smokey, if they open the books, you know what will be found.
    Someone will have to go in and open the books.

  210. Riveting reading! Thank you.

    A little black humor

    “Now it looks like the IPCC diagram in Figure 1, all right … but a six degree per century trend? And in the shape of a regular stepped pyramid climbing to heaven? What’s up with that?”

    Interestingly, a study has found (through the “use of eclectic proxies for temperature and other variables where empirical data is lacking”) a link between climate modification and Aztec sacrificial rituals:

    http://geoplasma.spaces.live.com/blog/cns!C00F2616F39D0B2B!736.entry

  211. So I didn’t see any mention, in the list of “inhomogeneity” factors, of the Weber grill effect.
    Of course I believe that at Darwin airport, when they say “just toss another shrimp on the barbie!”, they simply lay it on the ground, which is hot enough to frazzle even the best kangaroo steaks.

    This whole “data adjustment” business seems like a giant fraud to me.

    If I place 12 thermometers in various places around my house and read them every day for a hundred years, I can then perform ANY mathematical AlGorythm on that raw data to obtain some output data I can publish; I’ll call it GEST, the George E. Smith Temperature. Now if I move one of those thermometers, or maybe I just replace the old valve B&W TV set with one of those modern gas-hogging flat-screen plasma TVs, I would expect the output of GEST to change, because the TV-station thermometer is now getting goosed by my nifty new TV set.
    Well, so what; I simply put an asterisk in the report for the day I install my new TV, to tell all my interested neighbors that I have a new TV and that from here on out GEST is going to be different.
    The fraud part comes in when I brazenly rename my GEST and pompously call it GESHT, for the George E. Smith House Temperature.

    Well, the AlGorythm that I perform on my 12-thermometer raw data to get GEST in no way represents the temperature of MY HOUSE, which is what GESHT fraudulently purports to be. And the reason it isn’t the true GESHT is that 12 thermometers is not sufficient to truly sample the temperature of my house, where the temperatures can range from 400-450 F in the oven, when I am cooking that kangaroo roast, down to maybe 10-20 F in the freezer, where I have the rest of those kangaroos I hit on the road stashed; and according to an argument by Galileo Galilei in “Dialogue on the Two World Systems”, every temperature between 10 F and 450 F must exist somewhere in my house, sometime, while I am doing the Thanksgiving kangaroo.

    Well, you see, GISStemp and HadCRUT have the same problem as GESHT; they are NOT representative of the mean global temperature, or even the mean global surface or lower-troposphere temperature. They are GISStemp and HadCRUT respectively: the result of applying some quite arbitrary AlGorythm to a completely non-representative sample of raw temperatures from various places around the world that do not together form a proper sampling of the continuous global function of temperature over space and time.

    So GISStemp and HadCRUT are about the equivalent of the average telephone number in the phone directories of, say, Manhattan and East Anglia respectively; they are not true measures of the mean global temperature (surface or lower tropo) of planet earth. If they were properly sampled, it would not matter if one of the stations gets a new Weber grill.

    By the way, a really nice bunch of work there, Willis; I’m going to have to print it out and try to digest it to see what you’ve uncovered.

  212. Don’t know if this has been pointed out yet, but it is possible that stupidity is still the culprit. The station at the pub 500 km away could have been used to calculate the correction factor. If the process was automated, it would be one of the ‘nearest 5′. Of course, that is a methodology flaw, and all of the corrections are suspect for any sparse data.

  213. Jean Parisot: “Would a going forward position be to use a well understood set of measurements and monitor changes for the next X years and see which models (hypotheses) are working.”

    The key to any analysis is good data. That data simply doesn’t exist, because the local temperature readings were never taken with the intention of being used to estimate global temperature to anything like the present requirement, and the proxy record is far, far worse.

    Like the Irish say: “If I were going there, I wouldn’t start from here!” But we are here.

    The first instinct may be to rip up all the suspect temperature sites, throw out all the tainted scientists and start again. But if we did that, it would be many decades before we settled this issue once and for all. We sure need to start spending serious money on temperature measurement (I have to declare a potential self-interest, as I used to design precision temperature control equipment). We can’t shirk; we have to get the best possible coverage of temperatures globally, in purpose-made sites free from the taint of urban heating – and certainly free from the taint of political meddling! At least that way we will know for certain what the climate is doing from that time on.

    Then we have to go back to manually measuring the temperature!!! Yes, we’ve got to start measuring the temperature without automatic equipment, because the only way we will know the bias caused by automation is to get really accurate results for the discrepancy between manual and automated measurements. Similarly, we have to get some very accurate measurements and models so that we can back-calculate the urban heating effect and take this out, without some hare-brained enviro-political-scientist just picking a figure at random. (Grrrr … makes my blood boil!)

    Then we have to get a hell of a lot more proxy measurements, and we have to find ways to calibrate the proxy relationships so that we have a scientifically rigorous method based on extensive laboratory testing (i.e. for tree proxies, we might have to grow a few trees in labs at historic CO2 levels – even that is difficult). I’m sure there are good dendrochronologists out there; take out the “hide the decline” labs, give the rest some decent money without the political bias, and … hopefully we’ll bring back the science and find out what really has been happening to the climate.

    Basically, if we had spent on getting decent data a fraction of the money the politicians wanted to tax us all, then perhaps we wouldn’t be in this mess. It really hurts me to say this, but we have to learn all the lessons of Iraq:

    1. Science shouldn’t be “sexed up” to fit the political needs.
    &
    2. Whilst it might seem sensible to oust all the Ba’athists (aka climategate scientists), the only workable solution is to find a way to include these people in the new “regime” – minus the worst offenders. Unless we include the foot soldiers of the current regime, we’ll get the chaos we have right now in Iraq.

    And I have to say it: the next time someone says “we are in real imminent danger of WMD”, “the evidence is unequivocal”, can someone please remind the press that the last lot of WMD was just a figment of the imagination of a few oil-grabbing neocons.

  214. A reminder when hearing “news” analyses: the emails are a bright, shiny butterfly, a magician’s diversion. The indisputable scientific misconduct is in the software and data management.

  215. Smokey,

    “All of your “entirely possible” comments can be resolved by the full and complete disclosure of all of the raw and massaged data, and the hidden methodologies and “adjustments”, that the Team uses to come up with their scary AGW conclusions.”

    Exactly.

    So blog entries like this one should not conclude with unsupported accusations; they should conclude with renewed demands for the data and methods. That is legitimate and powerful: You aren’t giving us the data, and here are some very suspicious results that make it look like you might be hiding something more than the decline.

    Going further afield into unsupported claims of ‘blatantly bogus’ and ‘false warming’ is, as you put it, red-faced arm-waving. And claims of ‘indisputable evidence of preconceived notions’ yada yada yada is simply a lie itself. That’s Teamwork. Leave that to the experts.

    JJ

  216. I keep remembering that Enron’s fallback position was that they had Arthur Andersen doing their external audits. Meanwhile, AA was rapidly shredding files and deleting emails.

  217. JJ, if a site selected at random shows this kind of manipulation, it’s not unreasonable to postulate there is improper conduct in the handling of other data, particularly since the crew trying to use this to cripple the world’s economies have repeatedly chosen to hide the raw data and the manipulations, up until the Blessed Saint Whistleblower of East Anglia put it all on the Web. Yes, it’s possible this station was handled appropriately … it’s also possible that Jesse James had bank accounts at a lot of midwestern banks where he made “urgent withdrawals” … it’s possible the core of the moon is really green cheese. But the burden of proof rests with the Warmenistas, who, by their conduct, have already shown themselves untrustworthy, and Willis Eschenbach has done a great service by screening down to show what was done to the numbers at Darwin.

    Check a few more randomly chosen sites using the same methodology…that’s the scientific way. You tell us what you find.

  218. Well, I guess it finally reappeared. Note that the extreme surface temperature range on earth, other than on volcanic lava flows or in boiling mud pools, goes from about +60C on the hottest tropical desert surfaces (maybe higher) down to about -90C at Vostok Station (close to the lowest official temperature on record).

    Why don’t they start plotting GISStemp on that scale, from -90 to +60, instead of -1 to +1 deg C? Then we can all see how insignificant it is.

  219. Geoff wrote: “If you can tell me how to display graphs on this blog I’ll put up a spaghetti graph of 5 different versions of annual Taverage at Darwin, 1886 to 2008.”

    Upload them to TinyPic.com then click on the uploaded image until you see a View Raw Image link to click on and post that URL.

  220. It’s good to have this blog to look at after seeing the BBC’s full page of Copenhagen coverage. “Earth headed for 6 C of Warming” and “Our Warm Globe,” beh. I’m all for not doing stupid things to the environment, but come on people, can we at least take a tiny peep at the data before assembling 20K people together to throw money around and ruin our collective economies? Thanks to everyone who puts in the time and effort to keep this site running and to keep posting things like this analysis.

  221. It is perhaps the only truly transparent component of the current national administration that the Manifesto Media … continues to ignore these issues while they obfuscate. Is it chilly in Copenhagen?
    MM

  222. Thank you for this.

    I’m wondering if this is an example of what actually needs to happen for *all* of the monitoring stations’ datasets — quite literally, we need an audit of the datasets, with exactly this kind of public disclosure and debate on how the homogenization or “value added” data is arrived at.

    Only once there is widespread agreement on the data itself can reasonable conclusions or predictions be made.

  223. JJ is absolutely correct. It’s not pleasant to hold yourself back from advancing ahead of the facts, but as a matter of strategy, it’s a bad idea to abandon your supply train. Leave inference to Mann, Jones and the rest. One day they will hang for it. If we are reasonable, measured and patient, we will not be ignored.

  224. I think Gary Pearse (07:20:39) and john (09:00:05) have the right idea. I’d like to see a parallel effort to the surface stations project. Maybe something like stationAdjust.org where a procedure could be outlined and others could use it to investigate other locations. Maybe it could be seeded by Willis laying out the steps he used to create this article.

    I think a lot of people would be more than happy to help if a common approach was documented.

  225. Mr. Willis Eschenbach:

    Brilliant article – clear, understandable, incisive. Lots of hard work and dogged persistence. Many thanks and great respect to you. Sure helps explain the origins of the ‘global temperature record’. And it also highlights the genuine extreme difficulty and complexity of compiling such a thing.

    I agree with previous posters who say we should work with the ‘rawest’ data possible – down to the level of daily min/max temperatures, however and wherever measured. If we could put together a global database of that raw data, by means of a global collaborative effort, then that would be a real foundation upon which ‘citizen researchers’ could then build. Much like AW’s surfacestations.org project, in terms of the ‘citizen data gathering’ aspect. We are legion, after all.

    I think in such a database a re-sited weather station should be given a new weather_station_id – new site, new ID. Obviously a lot of work making paper records digital, but hey, many hands make light work. There was a weather station here in Tentsmuir Forest near Tayport in Scotland until recently. I should start here. Maybe I’ll do that.

    We need to use the Internet for important things, while it is available in its present form. In the future, as in the past, we may not be able to do that sort of thing quite so easily. And they do seem to be intent on rewinding history, and sending us back to the dark ages with these fictions they have brainwashed our children with. It makes my blood boil, it really does.

  226. I had found a few horrors and posted a GIStemp “Hall of Shame” a while ago:

    http://diggingintheclay.blogspot.com/2009/11/how-would-you-like-your-climate-trends.html

    According to Steve McIntyre (whose page on the GISS adjustments I have linked to), of 7364 sites, 2236 are positively (correct direction) adjusted for UHI, but a whopping 1848 (25% of the total) have a negative adjustment that increases the warming trend. IMHO there’s a lot of your global warming!

  227. Willis’ analysis is good.

    There is another issue with the ‘raw’ data which Joe D’Aleo and Ross McKitrick have been pointing out for years, and that is the apparent link to station numbers. Note that there is, of course, a close tie between the number of stations and global coverage.

    Judging by the quality of the code from CRU, if similar or the same code was used to create the data from sparse stations, it would seem that D&M could well be right.

    Some while back I took the data from McKitrick’s website and used the correlation between the station numbers and raw temperature to back out the effect of the station number variation. The result was surprising. It seems unlikely to be a coincidence, but that cannot be ruled out yet. My corrected values show a trend midway between the trends of the satellite data over the period of overlap.

    http://homepage.ntlworld.com/jdrake/Questioning_Climate/userfiles/Influence_of_Station_Numbers_on_Temperature_v2.pdf

    Trends for overlap period:

    Surface Stations (Raw) 1.03°C per decade
    Surface Stations (Corrected) 0.112°C per decade
    RSS Satellite 0.157°C per decade
    UAH Satellite 0.093°C per decade

    NB: I wrote the piece in two sections, so don’t stop half way.
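
    For what it’s worth, the back-out step itself is simple to sketch in R (with hypothetical yearly vectors — this is not the actual code behind the linked piece):

    # Remove the component of the raw global mean that tracks station counts.
    # 'gmean' and 'nstations' are hypothetical vectors, one value per year.
    fit <- lm(gmean ~ nstations)
    corrected <- gmean - coef(fit)[2] * (nstations - mean(nstations))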

  228. Martin (09:36:00) :

    Willis,

    I don’t know where the NOAA URL is that gives individual station data (as opposed to data sets) so I went to the NASA/GISS site with the individual stations data.

    http://data.giss.nasa.gov/gistemp/station_data/

    The Darwin raw data there (http://data.giss.nasa.gov/cgi-bin/gistemp/findstation.py?datatype=gistemp&data_set=0&name=Darwin) seems to correspond with the raw data you showed in your graphs. But the homogenized data (at http://data.giss.nasa.gov/cgi-bin/gistemp/findstation.py?datatype=gistemp&data_set=2&name=Darwin) does not look like what you show as homogenized data.

    Could you give a URL for the site where you got the data (and indicate which file if it is an ftp or otherwise multiple file listing)?

    Thanks.

    I’m curious about this myself. I also went to the GISS site, and noticed that the “homogenized” data for Darwin basically leaves out all the older, cooler, stuff. Now, this is GISS, not GHCN, so I don’t know if that matters. But with GISS, it looks like they may have taken the upward adjustments for Darwin from GHCN, but left out the early years.

  229. Doug,

    “JJ, if a site selected at random shows this kind of manipulation, it’s not unreasonable to postulate there is improper conduct in the handling of other data,”

    That is not correct. It is not only unreasonable to ‘postulate improper handling of other data’, it is absolutely unreasonable for you to assume that there is improper handling of these data, based on the info we have on hand. When you do that, you are acting like the men in the emails.

    Once again, there is nothing necessarily wrong with these adjustments. If you want to claim that there is, then the burden of proof is on you. That they haven’t yet supported their claims in sufficient detail does not relieve you of the responsibility to prove your own claims. On the contrary, it is wrong for you to make unsupported accusations, and in the long run supremely stupid.

    If you allow yourself to rise to worms, you will eventually find a hook.

    JJ

  230. Willis fantastic post!

    I haven’t read through all the comments, so please excuse me if someone has already brought this up.

    The adjustment line shown by Willis looks strangely (and disturbingly) similar to a scaled valadj array! Which, BTW, also looks like the adjustments put on USHCN: http://www.ncdc.noaa.gov/img/climate/research/ushcn/ts.ushcn_anom25_diffs_urb-raw_pg.gif
    Triple Yikes! Talk about deja vu!

    Anthony perhaps a separate post on this strange similarity in adjustments?

    APE

  231. Willis,

    I think you are seeing the results that occur when data is thrown into the meat grinder. That is, an analyst constructs what he thinks is a viable adjustment procedure. He applies it to a few cases. He views the results and concludes that the “adjustment” makes sense. Then he throws all the data at that adjustment code. When the end result confirms his bias, that the world is warming, he takes this as evidence that his adjustment “works”. There are two distinct approaches to this problem. The first is what I would call a top-down approach: understand the problem of adjustment, construct an adjustment approach and apply it to all data. A few spot checks and it’s good to go. Theory dominates the data. The other approach, your approach, Anthony’s approach, is to tackle the problem from the bottom up, a station at a time. Question: which station in GHCN shows the largest trend over the past century after adjustment? That might be an interesting approach to a systematic station-by-station investigation of the adjustment procedure.
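
    In R, that ranking might look something like this (just a sketch, assuming the adjusted data has already been read into a data frame ‘adj_df’ with columns id, year and temp — names invented for illustration):

    # One least-squares trend per station, sorted so the most-warmed
    # records float to the top of the audit queue.
    trend <- function(s) coef(lm(temp ~ year, data = s))[2]
    ranked <- sort(sapply(split(adj_df, adj_df$id), trend), decreasing = TRUE)
    head(ranked)   # candidate stations for a closer look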

  232. Great work, Willis. As with E.M.Smith’s sterling work on GISS, it’s only when the data is dissected thermometer by thermometer that the real story emerges. Real science….

    I’ve suggested (having an IT/database/accounting background) that the re-done temperature records should have a transactional basis. Every data point (station/date/time/transaction # as the index, temperature as value) should be stamped with who/what/when/type etc., to provide a full audit trail. And most importantly, this leaves the raw data visible (as the first transaction). If we mock up a single measurement of this sort, and assume that the 2.4 adjustment is applied to that data point, it looks like this:

    station/date/time/transaction #/temperature/who/what/when/type
    Darwin/20091208/0930/1/30.4/WayneF/Raw data/200912090100/TEMP
    Darwin/20091208/0930/2/2.4/GHCN/Kludge Factor/200912090101/ADJUST

    This sort of transactional record is the heart and soul of accounting systems, and if stored in a SQL database, renders searches trivially easy. To get the 2009 raw temp records for Darwin:

    Select temperature from temperaturestable where type = ‘TEMP’ and station = ‘Darwin’ and date between 20090101 and 20091231

    That would give real accountability, identify at a glance what adjustments were being applied, by what process, and to what data points, and facilitate research and audit.

    Perhaps we should be thinking about an ‘Audit the Temp’ movement, because Willis’ analysis certainly points to some dark and dirty data adjustments.

    And I can’t help thinking that this is a Piltdown man moment ….. a reconstructed and apparently plausible temperature record which, when looked into, is simply a shambles.

    Audit the Temp!

  233. To those many other novices like me searching for the truth, I recommend these sites: “What the Stations Say” at http://www.john-daly.com/station/stations.htm , which has a global map showing locations, details and records of many ground stations; and the “Stations of the Week” series at http://www.john-daly.com/stations.htm . Like everything John did, it is all explained in simple, easy-to-follow terms and layout.

    The following Postscript by John L. Daly highlights many AGW sceptics’ concerns about GISS and CRU reliance on, and manipulation of, such data.

    “The whole point of this investigation of just one station is that although Darwin shows an overall cooling due to that site change, this faulty record is the one used by GISS and CRU in their compilation of global mean temperature. More importantly, most of the stations used by them will have similar local faults and anomalies, rendering any averaging of them problematical at best. The best statistical number-crunching cannot eliminate these errors.
    And why pick on a cooling station – on a site skeptical of `global warming’?
    Simply this – the vast majority of stations are affected by urbanisation with inadequate urban adjustment by GISS, as demonstrated elsewhere on this site. Even where urbanisation is not at issue, rural stations also have serious local problems, discussed in more detail in “What’s Wrong with the Surface Record?”. (One such station here in Tasmania is also featured on this site – “Hot Air at Low Head”)
    The net effect is that since most such local errors result in warming (and not cooling as in Darwin’s case), the result will be an apparent global warming in the surface record where none may actually exist. This is why the satellite record is a much more reliable guide to recent temperature trends.
    Darwin was picked out here simply because its record was visibly faulty even from the data itself. But if Darwin can `slip through the net’, what happens to those thousands of faulty station records where the faults are less visible and obvious? As Ken Parish pointed out above,
    “All the above historical information emphasises just how much you need to know about the history and surrounding circumstances of local surface weather readings in order to draw any meaningful long-term trend conclusions from them, especially when you are dealing with an apparent global warming trend of only around +0.25°C since 1976″.

  234. Temperature at Copenhagen at 1800 UTC was 6 Celsius. Not sure if that’s homogenised, pasteurised or plain raw – it came from the British Met Office web site…

  235. Why wouldn’t acceleration be used as the measuring tool, instead of the measured value? I am no statistician, just a lowly applied math grad, but hear me out. All stations around the world are going to have numerous adjustments made for various reasons not associated with actual temperature change. I believe that each station would, in general, have a log that details the adjustments and why they were made. However, we already know that these adjustments are highly subjective and are often done with offsets that exceed the supposed global increase. Therefore, these adjustments need to be removed from the data set entirely.

    Since the logs detail the date that each adjustment was made, we are able to accurately remove those points of the data set that would introduce an invalid temperature acceleration. Having removed such data, we are no longer able to measure the actual temperature. But we are still left with data sets that show the acceleration of the temperature, positive or negative, with the inaccurate acceleration data removed. Given that we can consider most thermometers to have been relatively accurate across their range, and given that we must accept the accuracy of the dates in the logs, we have an accurate representation of the temperature acceleration or deceleration over time. This then would allow us to measure the positive or negative gradient over time.

    If such an analysis were applied to the entire data set of all stations, I think it would be possible to produce a largely accurate representation of the temperature record that completely excluded all man-introduced adjustments. This would require the logs and the original temperature readings. Do we have those yet?
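
    To make the idea concrete, here is a rough sketch in R — the readings, years and adjustment log are all invented for illustration; this is essentially a first-difference approach:

    # Recover a trend from year-to-year changes, skipping logged adjustment dates.
    temps <- c(29.1, 29.0, 29.2, 31.6, 31.5, 31.7, 31.6, 31.8)  # step at 2004
    years <- 2001:2008
    adj_years <- c(2004)                 # the log says the station moved in 2004

    d <- diff(temps)                     # the "acceleration" data
    keep <- !(years[-1] %in% adj_years)  # drop changes that span an adjustment
    clean <- cumsum(c(temps[1], d * keep))  # re-integrate without the step
    coef(lm(clean ~ years))[2]           # degrees per year, step removed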

  236. You’d think if AGW people were as dedicated to science as they say, they’d be picking apart all this stuff like Willis Eschenbach just did. But they’re not.

    Science be damned.

  237. Holy cow. I’m starting to doubt that there’s any global warming at all, man-made or otherwise. Especially now that they’re telling us that 2009 is one of the five warmest years, when I know for a FACT that it’s the coldest winter and summer I’ve ever experienced in Louisiana.

  238. Gary Pearse (07:20:39)

    I see that the surfacestations project of Anthony’s has to be expanded to include what is done with the data because it seems that if the readings aren’t to the AGWers’ liking they “homogenize it” and if they are concerned that one might find the homogenization a way beyond reasonable, they are tempted to throw the raw data away! We better get at this quickly.

    paulhan (01:32:59) :

    Thank you so much for this. I’m currently learning R, although it’s slow going due to other commitments, but it’s exactly these type of articles that someone like me needs. It looks like we’re losing the political fight, so the only way to respond is with the science.

    From what I’ve seen we have E.M. Smith’s work, A.J. Strata, a site called CCC, Steve McIntyre and Jean S over at CA, and I’m sure various others, and of course your good self, all working in various ways on the various datasets.

    Can a way be found to get you all together, plus interested parties willing to do some work (like me), to really work on this and produce a single temperature record – but rather than rehashing CRU’s, GHCN’s or GISS’s code in something like R, actually come up with a new set of rules for adjusting temperatures?

    I’ve seen yourself, among others, complain about the way they adjust for TOBS and FILNET, and now we have this article, demonstrably showing other shenanigans going on. Whatever was come up with would probably need to be peer-reviewed to get the methodology accepted, but at least there’d be something we could all trust.

    I’d be willing to put in work, I have plenty of spare bandwidth on a fast shared server, and skills in web programming, but that said it would probably be better co-ordinated from here or CA, as you already have the presence and the interested parties coming here.

    Unless something like this is done, we’ll see the Met spending three years using the same code and coming up with almost exactly the same dataset, and we’ll have lost.

    I couldn’t agree more. I’d been discussing this with Anthony before I published this piece. He’s too busy to do anything else at all, but he likes the idea.

    So last night I went and registered surfacetemps.org. I envision a site with a single page for each station. Each page would have all available raw data for that station. These would be displayed in graphic form, and be available for download. In addition there would be a historical timeline of station moves, copies of historical documents, and most importantly, a discussion page.

    I think we’d get the best bang for the buck by running an “Adopt-a-Station” program. One person, one page. Or maybe a couple of people per page. I’ll sign up for the Darwin page …

    Also, of course, we’ll have a main blog page for discussions on the math and the mysteries.

    I do not think our purpose should be to create a new dataset, although that may be a by-product. I think our purpose should be to serve as a model for data transparency and availability. For each page, we should end up with a recommended set of adjustments, clearly marked, shown, and discussed. But if someone wants to use one of the adjustments but feels the others are not justified, fine.

    On the main page, we should have a graphing/mapping interface that will allow anyone to grab a subset of the data (e.g. rural stations with a population under 100,000 in a particular area that cover 1940 to 1960).

    I would suggest that we do our work in R. It is an ideal language for the purpose, and it is free. If you are not using it, you should be. I resisted for years despite Steve McIntyre’s recommendations, then finally broke down and learned it. You can do in one line what takes Fortran a page.
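
    For a taste (a toy example, not any particular dataset’s format): given vectors temp and month, the monthly anomalies are one line:

    anom <- temp - ave(temp, month)   # subtract each month’s long-term mean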

    That’s the skeleton of the scheme. We’ll need some good web design folks, some good mathematicians, some champions for the idea, and some folks who can wire a database up to the web. Plus lots of people willing to adopt a station.

    Soooo … if you are interested, contact me at willis {At} surfacetemps.org. Let me know what you bring to the table in terms of skills and experience, and how much time you have to spend on the project. I’m in the Solomon Islands right now, very slow connection, plus I have work. I’ll first set up a surfacetemps mailing list, and we’ll go on from there. I’ll be back in the US on the 15th for a bit.

    Oh, yeah. There’s more to come on the GHCN question, I’ve uncovered a very curious and pervasive error. Stay tuned for my next post.

    w.

  239. Oh to be a no-see-um on the wall as the rebuttal is feverishly discussed, air thick with expletives.

    Data war has commenced.

  240. vjones (11:50:15) :

    I had found a few horrors and posted a GIStemp “Hall of Shame” a while ago:

    http://diggingintheclay.blogspot.com/2009/11/how-would-you-like-your-climate-trends.html

    ************************************

    Nice link.

    Splicing, adjustment, homogenisation. Should not be allowed any longer. We need to see the data ‘in the raw’, then WE can analyse it and WE can use the medium of the internet to ‘peer review’ it. Then WE can tell Science and Nature and National Geographic to get tae Falkirk. And I did use to respect those publications….

  241. JJ (12:04:45),

    Promoters of the AGW hypothesis make their claims based on original methods and data that they either refuse to release, or that they claim has been thrown out.

    This directly violates the Scientific Method, which requires the opportunity for others to falsify the AGW hypothesis by replicating the exact methodologies and data that were used to construct the hypothesis.

    If the original raw data is either missing or is not provided, then the AGW conclusion is nothing more than an opinion. It is not a scientific hypothesis, it is a conjecture.

    The burden is always on those proposing a new hypothesis, not on those questioning it.

  242. Hugh Roper (07:55:29) :
    Thanks Willis for this fine piece of work. Was that really the first Australian site you looked at in detail?

    Yes, it was. Even a blind hog will find an acorn once in a while …

    w.

  243. Darwin airport’s history section says that the airport is now 311 hectares; surely that’d need a large -ve adjustment for UHI?? Though I’m sure it’s not all tarmac, just lots…

  244. Thank you for this and so many other articles about Climategate. You may very well be writing the only honest contemporary account of the scandal.
    Keep it up and you might need to write a book by the time it’s through.

  245. I have long suspected this, just looking at the GISS website. It made no sense. Good work Willis, and I have wondered why S. McI has moved away from this since looking at the ROW. Can you create something similar to figure 2 in your post using all global station data covering 1900-2000 (of which there are few stations, I know)? They definitely don’t look like the IPCC record, but if anything they should be weighted heavily – i.e. they are our best data, with the least need for adjustment, apples vs apples, etc.

    I also noticed in the e-mails that Gil Compo @ NOAA was having troubles with the 1910-1940 data reconstruction they are attempting, where it wasn’t matching accepted data. It’s good to know they are (were, at least) double checking this.

  246. John Goetz (08:13:02) :

    Willis,

    The truly raw data for the stations are in the daily temperature records (.dly files) on the GHCN FTP site. What is interesting is that when GHCN creates the monthly records that they (and GISS) use, they will throw out an entire month’s worth of data if a single daily reading is missing from that month.

    When a month is missing from the record, GISS turns around and estimates it using a convoluted algorithm that depends heavily on the existing trend in the station’s data, thus reinforcing any underlying trend. GISS can estimate up to six months of missing data for a single year using this method.

    It seems to me the best place to start is with the raw daily data and find out how many “missing” months have a small handful of days missing, and estimate the monthly average for those days, either by ignoring the missing days or interpolating them.

    John, thanks for your interesting comment. I agree that the daily data needs to be looked at … so many clowns, so few circuses. Also, we need an answer about dealing with the missing data. Interpolate? If so, how? Another issue for surfacetemps.org to publicly wrestle with.
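
    For instance, one candidate rule — just a sketch, with the threshold picked out of the air — might be:

    # Accept a monthly mean if at most 'maxmiss' daily readings are missing.
    # 'daily' is one month of readings, NA for missing days.
    monthly_mean <- function(daily, maxmiss = 3) {
      if (sum(is.na(daily)) > maxmiss) return(NA)  # too many gaps: no estimate
      mean(daily, na.rm = TRUE)                    # otherwise ignore the gaps
    }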

    w.

  247. Tim Clark (08:17:05) :

    Good analysis Willis. One minor point; if it has already been addressed above, just ignore me. Where you have the phrase “and also a fine way to throw away all of the inconveniently colder data prior to 1941″, shouldn’t that be “inconveniently warmer data”?

    Good catch! Perhaps I could prevail on the moderator to fix it?

    w.

  248. kwik (08:37:09) :

    On the CRU curve, I really don’t understand what I see. Well, I think the black line is the raw data, since you mention that under the graph. But the red and blue shaded area? Model predictions? Or?

    Sorry for not mentioning that. The blue is what the models hindcast using only natural forcings. The red includes anthropogenic (human caused) forcings.

    w.

  249. Great work. Hopefully a climate scientist will pick up this ball and run with it to produce a “peer reviewed” paper that documents this, and hopefully other “homogenizations” of the raw data.

    (In reading through some of the EPA’s responses to comments on their endangerment finding, they deflected many comments on the [lack of] data quality with the excuse that “that blog post is not peer-reviewed”)

  250. Excellent piece of work Willis!

    In addition to the stations/locations you considered, there are a lot of Aboriginal missions and mines (both abandoned and current) scattered around the Northern Territory – including many within 500 – 750 km of Darwin e.g. Rum Jungle, Oenpelli, Nabarlek, Ranger/Jabiru etc., etc.

    From personal knowledge of these locations I know some had/have been collecting daily temperature and rainfall data for quite long periods. Not all reported their data to BOM and even for those that most likely did (the missions in my experience) it seems not all sites’ data can be accessed via BOM (I’ve tried). However, in most cases it would not be too hard to get the data from the missions themselves or from the mining companies or other bodies like NT Dept. of Mines, Ansto etc.

    By gathering more data from such ‘unofficial’ sources, it would be possible to thoroughly ‘interrogate’ the IPCC temperature record for the Northern Australia region shown in Fig. 9.12 of the UN IPCC Fourth Assessment Report, which you show in the article above.

    Perhaps, using all ‘official’ sources plus unconventional sources (such as mine sites etc.), it would also be a very useful exercise to ‘interrogate’ a suite (say 5–10) of such IPCC test regions globally, as a very powerful test of the integrity of the IPCC process.

  251. Wayne Findley (12:09:13) :

    I’ve suggested (having an IT/database/accounting background) that the re-done temperatuture records should have a transactional basis. Every data point (station/date/time/transaction # as the index, temperature as value)

    ***********************************

    I suggest

    date
    minimum temp
    max temp
    longitude
    latitude

    as the key (unique identifiers) to the central database table.

    Then, who made the claim, where they got the data from, personal notes from the claimant, station history, etc., linking to further database tables which

    MAKE THE WHOLE PROCESS TRANSPARENT

    AND LEAVE A COMPLETE AUDIT TRAIL.

    Sorry for shouting.

  252. Willis,

    I like the adopt a station idea. For maximum effect I would suggest that the stations be ordered according to the trend they show after adjustment.

  253. bsharp (08:47:58) :

    Mr. Eschenbach:

    I appreciate your effort in this matter. Your post has been shared with a person who gives great authority to the existing ‘academic community’. That person has dismissed your findings as opinion and unsupported personal conjecture that the process is broken.

    Part of our discussion has hinged on your statement “So I looked at the GHCN dataset.” While acknowledging that the blog venue doesn’t require the same level of source citation as a peer-reviewed journal, your sources have been questioned.

    Could you provide a more detailed reference/ link to the HCN data in question [both raw and adjusted]? Thanks and regards.

    My goodness, what an oversight. I was rushing to get this out, time is important right now.

    The GHCN data are available from the GHCN here. The two files I used are the unadjusted and adjusted mean temperature datasets, called “v2.mean.Z” and “v2.mean_adj.Z”. The station inventory and metadata are in a file called v2.temperature.inv.

    The main data files are in a fixed-width text format. The column widths are spelled out in the readme file for temperature.
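
    If it helps, something like this should pull the file into R after uncompressing v2.mean.Z — though check the widths against the readme, since I am quoting the v2 layout from memory (12-character station/duplicate ID, 4-digit year, then twelve 5-character monthly means in tenths of a degree, -9999 for missing):

    ghcn <- read.fwf("v2.mean", widths = c(12, 4, rep(5, 12)),
                     col.names = c("id", "year", month.abb))
    m <- ghcn[, month.abb]
    m[m == -9999] <- NA                  # -9999 flags a missing month
    ghcn[, month.abb] <- m / 10          # values are in tenths of a degree C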

    Tell your friend that I put this out on a public blog specifically so that non-believers like him (and all good scientists) can try to find faults with it. Invite him to have at it, I have nothing to hide and I’ve been proven wrong before …

  254. Smokey,

    “The burden is always on those proposing a new hypothesis, not on those questioning it.”

    Exactly.

    So long as you are questioning, you have nothing to prove.

    On the other hand, the moment that you start making claims such as: ‘These results are blatantly bogus.’ ‘These adjusted temperatures are false warming’, ‘These researchers made illegitimate adjustments to make the data match their preconceived notions’ … at that point the burden of proving those claims rests on you.

    Willis cannot prove these claims. He should not have made them. Better he should have stopped at the questioning – ‘Hey, what’s up with these huge adjustments?’ ‘How do you justify that change, and not this one?’ – and demanded answers, rather than immediately rushing on to make unsupported claims of his own.

    He was wrong to do that, particularly given that the claims are in fact accusations of impropriety. And beyond being wrong, it was not smart. We all agree that there are legitimate adjustments that may be made to raw data. If these adjustments are legitimate, crow is on Willis’ dinner menu.

    Whenever you find yourself saying ‘I don’t understand why …’, you need to keep in mind that you are ripe to be schooled.

    Stick to what you can prove – this is our message to the Team. Goose.Gander.Sauce.

    JJ

  255. JJ (08:48:13) :

    Good grief. Enough with the unmitigated speculation and hyperbole.

    Willis – excellent analysis, but you go a bridge too far with your conclusions.

    “Yikes again, double yikes! What on earth justifies that adjustment? How can they do that? … Why adjust them at all?”

    Those are very good questions. Making claims requires answers.

    “They’ve just added a huge artificial totally imaginary trend to the last half of the raw data!”

    You don’t know that. You should not claim to know that which you do not. That is Teamspeak; leave it to the Team.

    Well, yes, I do know that it is huge, it is artificial, and that it is totally imaginary. We have no less than five different thermometers at Darwin, all of which agree that the temperature did not climb at an unheard-of rate (6 C per century) after 1941. Not sure how much clearer that could be.

    And my language reflected my astonishment. You are right, perhaps it is a bit over the top … but then so was their adjustment.

    Thanks for the admonishment, however. Moderation in all things. It’s just that when I am confronted with the raw evidence of their perfidy, it angrifies my blood, and I have even been known to say bad words …

  256. Billy (08:49:21) :

    … So which is it? Does the data exist somewhere or not? If it does, then I don’t understand why CRU even made such a dramatic announcement. Why didn’t they just say what Gavin Schmidt says above?

    But on the other hand, reading this article it almost seems like the author DOES have access to raw data (and normalized data) which he used to calculate the Darwin adjustments. Is this the same data that CRU started with? If so, why not use this data to start with to reproduce CRU results? If the data does exist in some format I don’t see why there is any controversy at all.

    Seems like somebody is being disingenuous but not sure who. This is something I find incredibly frustrating about this issue. The scientific issues are understandably opaque and subject to debate. That’s hard enough to get to the bottom of. But even simple things like, ‘Was the data destroyed or not?’ are subject to so much spin it’s nearly impossible for somebody who’s trying to be objective to sort it all out.

    What we’ve never been able to get from the CRU is a list of the data that they use as a starting point for their global temperature estimate. The emails did reveal, however, that they use GHCN data (although they didn’t say whether they use the raw or adjusted). Those are publicly available.

  257. Willis,
    I had the same experience as Martin (09:36:00). The homogenised data plotted from the GISS site doesn’t look like your Darwin 0 data. You’ve referenced everything else very well – could you please give a reference for this data, as shown in your final graph (red in Fig 7)?

    [REPLY – He’s not showing homogenized NASA/GISS data, he’s showing homogenized NOAA/GHCN data. ~ Evan]

  258. @ weschenbach (12:38:53) and John Goetz (08:13:02)

    Poking around in the GISS data and GHCN data – you see plenty of missing months – they seem happy to toss out data when it suits and yet hold on to other data for convenient reasons – like no data for years, then suddenly the station starts reporting again and you get a few much warmer points on a graph.

    I have also found a case where the station officially stopped reporting according to the NCDC station locator, but GISS has data clearly filled in for up to 10 years after, very much warming the location. I’m gathering data for a post on that one.

  259. Why am I not surprised to see data “adjustments” and all kind of fudging of data being the hallmark of all “story lines” supporting AGW?
    Back in the 90’s, when computer models were first used to predict “catastrophic global warming”, the models could not be used for “hindcasting”, i.e. to simulate past known temperatures. The results were way out of line with known past temperatures. Mount Pinatubo erupted in the early 90’s, and the subsequent year or two were significantly cooler, because the eruption injected a significant amount of aerosols into the upper part of the atmosphere. This gave the modelers an idea: pollution was the reason why the models could not do hindcasting. Unfortunately there was no world-wide pollution data available. No problem for the modelers. They just introduced enough “pollution” into their models to make them simulate past temperatures accurately! While the underlying idea that aerosols reduce global warming was correct, introducing it as an adjustable variable to make their models simulate reality accurately is not science, it is data fudging. I would say that data fudging is endemic to global warming “science”.

  260. Nick Stokes (13:22:23) :
    Willis,
    I had the same experience as Martin (09:36:00). The homogenised data plotted from the GISS site doesn’t look like your Darwin 0 data. You’ve referenced everything else very well – could you please give a reference for this data, as shown in your final graph (red in Fig 7)?

    [REPLY – He’s not showing homogenized NASA/GISS data, he’s showing homogenized NOAA/GHCN data. ~ Evan]

    Thanks, Evan, you are correct.

    Both NOAA (GHCN) and NASA (GISS) start with about the same raw data. But they “homogenize” it in very different ways. GISS just cut off the Darwin data before 1963. Why 1963? Probably there’s a reason, but we don’t know why they didn’t keep the data back to say 1941.

    w.

    PS – if I miss a question anyone would like answered, just ask it again.

  261. I don’t understand why certain people are so biased towards believing the global warming hoax. The reality is that there are a lot of people who think a one-world government and currency would be a good thing.

  262. Willis,
    Following my request for a reference (13:22:23), I now see that you have given a source here weschenbach (12:57:46). But do you have an explanation for why the plot as shown on the GISS interactive site for homogeneity adjusted data for Darwin Airport does not appear to be rising (as your Fig 7 is)? In fact, if you compare their plots of raw and homogeneity-adjusted for Darwin Airport 1941-2009, there’s virtually no change at all.

  263. Willis, I managed to confuse myself with the two scales that you have on Darwin zero. The right hand scale seems to be the “amount of adjustment” scale. And the left seems to be the temperature anomaly scale. But the “amount of adjustment” scale is half the change of the anomaly scale. This makes it appear that the amount of adjustment is twice as large as what is seen in the resulting anomaly. I’ve unconfused myself, but it might be worthwhile to use the same relative scale in the future.

    Still, fine job on your part. Thanks.

  264. Willis (and parenthetically Evan)

    I understand that the homogenized data shown is NOT the GISS homogenized data. And you did label it GHCN in the text. But what I — and I think Nick — don’t know is the site where you got it. I found the raw data in the GHCN v2.mean.Z file. (I checked the two long series — 0 and 1 — and they have the same values as the GISS raw data.) But where is the GHCN homogenized series data?

    I got the raw data at the ftp site:
    ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v2/

    I just couldn’t find anything around there for the homogenized data. Help?

    Thanks.

  265. I don’t think this “proves” that someone went in and adjusted everything by hand to force a warming trend on the data… it does, however, prove that – at a minimum – the statistical techniques they used are very poorly conceived.

    I saw a lecture less than a month ago that talked about GISS’s homogenization techniques. There are time-series-driven statistical techniques in common use all around the world (not just in climate studies) that try to keep jumps in the data from influencing analysis. There’s a +/- system that looks at the residuals above a trend line to see if there are unusual runs of above- or below-average points, and makes stepwise adjustments to stop that from happening… the problem is that weather/climate has significant auto-correlation (it may not be uncommon for 20 years to be colder and 20 years to be warmer, based on oscillations in solar activity, the PDO, the AMO and volcanism).

    I think what we have here is a group of people starting with a few base assumptions and choosing the wrong methods for their data analysis.

    I think the way to adjust the temperature record to account for inhomogeneities is to do historical research on each data site and only make an adjustment if you can find a physical explanation. That’s how a scientist should proceed…you need to begin with first principles and not let statistical methods run away with your data unchecked.
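
    You can demonstrate the runs problem to yourself in a few lines of R (purely synthetic data):

    # Red noise with NO steps and NO trend still produces long one-sided runs
    # that a naive "runs above the mean" test will flag as inhomogeneities.
    set.seed(1)
    x <- as.numeric(arima.sim(list(ar = 0.8), n = 100))  # AR(1) series
    runs <- rle(x > mean(x))
    max(runs$lengths)   # often 10-20 consecutive points on one side of the mean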

  266. Martin:

    “I just couldn’t find anything around there for the homogenized data. Help?”

    Wouldn’t that be the file v2.mean_adj.Z (two lines down at /pub/data/ghcn/v2/)?

  267. This is exactly the way to get them.
    Request access to as much raw data as possible and show how it has been tweaked.

    Nobody has a right to “own” and hide such raw data.
    Nor must anyone be allowed to change the data and then present the uneducated public with an “optimized” version and suggest what to do.

    Such processed data and results are nothing but garbage.
    To do it this way is a crime, not even close to any kind of science.

  268. Willis: “They’ve just added a huge artificial totally imaginary trend to the last half of the raw data!”

    JJ: “You dont know that. You should not claim to know that which you do not. That is Teamspeak, leave it to the Team.”

    I agree. McIntyre’s caution in this regard is the model to follow. We must not open ourselves to counterpunching by making roundhouse swings.

  269. What really makes me laugh are news articles that say:

    “The past decade is the warmest decade in 40 years” or “the average temperature of this decade is warmer than the previous decade”

    While both are correct, they hide the fact that no warming has taken place this decade. Heck, it could even have cooled significantly in this decade and the statements would still be correct, given the rapid warming in the 90’s. They are very misleading statements, but technically correct.

  270. Nick Stokes (13:50:05) :

    Willis,
    Following my request for a reference (13:22:23), I now see that you have given a source here weschenbach (12:57:46). But do you have an explanation for why the plot as shown on the GISS interactive site for homogeneity adjusted data for Darwin Airport does not appear to be rising (as your Fig 7 is)? In fact, if you compare their plots of raw and homogeneity-adjusted for Darwin Airport 1941-2009, there’s virtually no change at all.

    They are using the GISS adjustment, which means cutting off the pre-1963 data and adding a small (0.1C) adjustment to a few years.

    I was looking at the GHCN adjustment.

  271. Tilo Reber (14:00:49) :

    Willis, I managed to confuse myself with the two scales that you have on Darwin zero. The right hand scale seems to be the “amount of adjustment” scale. And the left seems to be the temperature anomaly scale. But the “amount of adjustment” scale is half the change of the anomaly scale. This makes it appear that the amount of adjustment is twice as large as what is seen in the resulting anomaly. I’ve unconfused myself, but it might be worthwhile to use the same relative scale in the future.

    Still, fine job on your part. Thanks.

    Aw, nuts. I had noticed that and made a new graph, but I didn’t get it into the final draft. The correct graph is here. Evan, perhaps you could download it (or just change the link) to fix the head post?

    Many thanks,

    w.

  272. Chris Fountain (04:47:41) :

    I just have a question about the dates covered in this analysis. How was it that there was a thermometer at the Darwin Airport about 20 years before the Wright brothers’ flight? Were we Australians so prescient that we built one in anticipation?

    You Aussies are always ahead of the curve …

    The station was at the Post Office until 1941.

  273. Good things about a Willis Eschenbach article on WUWT:

    1. Topical
    2. Well-organized
    3. Educational
    4. Errors admitted and corrected
    5. Willis is incredibly good-humored and patient
    6. He tries to answer all questions

    Bad things about a Willis Eschenbach article:

    1. Some of the longest d….d comment threads on WUWT!

  274. Willis, I posted a link to your piece on a generally warmist site I’ve been arguing on, and it got this criticism. I don’t agree with it, the guy can’t spell your name for a start… but thought it might help you tighten your argumentation, to make such dismissals more difficult. If you feel like responding to it here, I’d like to post it there, if that’s ok with you.

    1. Eisenbach graphs the unadjusted data, and shows that it doesn’t match the adjusted data.

    2. Eisenbach then asks why adjustments were made to this data, and then proceeds to completely fail to answer his own question by taking a sort of vague, vanilla explanation of why adjustments are made and stating that that one paragraph does not seem to apply to the record for this particular airport.

    3. The train has already gone off the tracks at this point. . . a more meticulous person might have explored the possibility that the adjustments were possibly made to account for other things. Eisenbach seems to just jump to the conclusion that they must have been pulled out of a hat.

    4. Eisenbach then makes a motion toward throwing them a bone by making one adjustment of his own. However, he does not give any clear explanation for this adjustment – anyone who was following his line of thought up to this point should have to conclude that he just pulled it out of a hat.

    5. He follows that up with an explanation for why he wouldn’t do any further adjustments that shouldn’t have anything to do with the way climatologists normally decide to do adjustments. The decision on how to handle data like this should be made for consistent, quantitative reasons, never because someone’s just eyeballing a graph and tweaking it until they think it looks right.

    5a. In the process of doing the above, he does an interesting thing: He quoted a GHCN paragraph that indicated that adjustments are made for multiple factors, including but not limited to station location. He agreed that that made sense, so presumably he thought all of it made sense. However, he followed that up with a detailed analysis that is based on the presumption that station location is the only valid reason to make an adjustment.

    So now we’ve hit the second place where Mr. Eisenbach can’t even seem to agree with himself on how things should be done, and we’re still only halfway through the post. . .

  275. Mike (12:14:18) :

    “This would require the logs and the original temperature readings. Do we have those yet?”

    We all need to do this.

    “We all need to get it together”.

    The data, that is.

    Get that together, and we’ve got something we can work with.

  276. Maybe it would be best to go with the Japanese JMA temperature dataset instead of the “western” nations’ sets.

  277. Willis,

    “Well, yes, I do know that it is huge, it is artificial, and that it is totally imaginary.”

    No you don’t. Granted, it is large. And all adjustments are ‘artificial’. But totally imaginary? You don’t know that. Not yet.

    “We have no less than five different thermometers at Darwin, all of which agree that the temperature did not climb at an unheard of rate (6 C per century) after 1941. Not sure how much clearer that could be.”

    If it is legitimate to adjust one thermometer, and it may very well be, then it may also be legitimate to adjust all thermometers in the area similarly. If that is the case here, then what you may have discovered is not that they illegitimately applied an enormous false adjustment to two of the thermometers in the average, but that they failed to apply a huge legitimate adjustment to the third. Up goes the AGW.

    You don’t know. You have asked some very good questions here. Time to pose those same questions directly to the people who damn well should have the answers … then draw conclusions as to effect and motive.

    Until then, be content that you have layed out some important facts in a fashion that the layman can readily grasp:

    1) The instrumental record is not merely the wholly objective exercise of reading a thermometer and writing down the number accurately.

    2) The instrumental record is subjected to various adjustments, the magnitude of which easily dwarfs the alleged ‘global warming’ signal.

    3) These adjustments are not well documented outside of the small clique of researchers who compile these datasets – and perhaps not even within those circles.

    4) The propriety of these adjustments cannot be ascertained without access to both the raw data, and very detailed method descriptions.

    You’re doing good work. Don’t ruin it by overreaching.

    JJ

  278. You say
    “And CRU? Who knows what they use? We’re still waiting on that one, no data yet …”

    The CRU said (before the server was taken offline because of the hack – http://www.cru.uea.ac.uk/cru/data/):
    ” The various datasets on the CRU website are provided for all to use, provided the sources are acknowledged. Acknowledgement should preferably be by citing one or more of the papers referenced on the appropriate page. The website can also be acknowledged if deemed necessary. CRU will endeavour to update the majority of the data pages at timely intervals although this cannot be guaranteed by specific dates.”

    Who is lying?

  279. Just took a look at the Australian Bureau of Meteorology figures for Darwin. They agree almost exactly (but not quite) with the GHCN unadjusted figures, and are far, far from the GHCN adjusted figures. The BOM figures are here.

  280. It would be pretty hard to justify all those warming adjustments and few, if any, cooling ones. Most everything I can think of would actually warm the area, such as:
    -addition of parking lots, runways, etc
    -clearing surrounding vegetation for new structures/runways
    -urbanization of surroundings

    What could possibly account for the massive warming adjustments made? There are no noticeable spikes in the temperature plot. Things I could think of would be resurfacing with high-albedo material, or moving the station to a bushier location. While those might explain one such adjustment, there are 3-4 of them, each one having to build on the last. I find that highly unlikely. Again, the vast majority of any adjustments I can think of would be cooling adjustments. It sure smells like data manipulation to me.

    It would be interesting to do this analysis on many other stations.

  281. Hi Willis

    Can I just confirm that I am right with respect to the GHCN file you got the adjusted (homogenized) data from please (as Martin and Nick have previously queried)?

    If GISS have not done a closely similar adjustment then you have opened a real can of worms.

    Regards

  282. I understand that the scientists have taken the raw data and manipulated it to reflect what they think is the ‘most accurate’. It makes sense to do just that.

    But if you have all of the raw data, and all of the adjusted data, could it not be demonstrated statistically whether there was a bias one way or another?

    For example, if 55% of the value-added data is shifted up, vs 45% down, would that not prove that there was a bias in the resulting conclusions?

    Of course that would also have to be reviewed, because it is easy to see that numbers should be shifted down as data coming from urban sites would reflect nearby land use changes.
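
    That 55/45 idea is straightforward to test. Below is a minimal sketch in Python of the two-sided binomial sign test this comment describes; the 550/450 split at the end is a made-up illustration, not a real count of adjustments.

    import math

    def sign_test(n_up, n_down):
        # Under the null hypothesis of unbiased adjustment, an adjustment
        # is equally likely to shift a record up or down (p = 0.5).
        n = n_up + n_down
        k = max(n_up, n_down)
        # One-sided tail P(X >= k) for X ~ Binomial(n, 0.5), then doubled
        tail = sum(math.comb(n, i) for i in range(k, n + 1)) / 2.0 ** n
        return min(1.0, 2.0 * tail)

    # Hypothetical example: 55% of 1,000 adjustments shifted upward
    print(sign_test(550, 450))   # about 0.002, so that split would be a real skew

    As the comment itself notes, though, a significant skew would only demonstrate asymmetry, not motive; legitimate corrections for urban contamination should themselves skew downward.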

  283. Those, dear friends, are the clumsy fingerprints of someone messing with the data Egyptian style …

    What a hoot.

  284. Willis (14:41:37),
    OK, GISS and GHCN seem to get very different homogeneity adjustments for the case of Darwin. This is curious, because their algorithms seem similar, and the global results track fairly well. But you raised the specific issue of the IPCC plot, and its dependence on CRU. Well, this isn’t GHCN either. As far as I can tell, the relevant CRU data is obtainable on this MetOffice site. Darwin is station 941200.

    FrancisT has plotted that data. And, as he says, it seems that the CRU is also very little different from GHCN unadjusted. And there is little upslope.

    So a further query might be why N Australia has an uptrend that isn’t reflected in the CRU Darwin figures. Or maybe the posted CRU figures aren’t the ones that the IPCC relied on after all. However, on the face of it there doesn’t seem to be any link between the adjustments you have shown and the IPCC plots.

  285. Even a drunk MONKEY would not have “homogenized” those numbers like that. What reason for the “adjustments” would excuse the massive changes, a non-stop forest fire for the last 50 years and a localized ice age for the first 50 years? LMAO

    “H-Yuck! Jest trust us scientists, we’d nay-verr miss-ah-lead ya! H-Yuck!”

  286. Brian Dodge, are you taking the position that all the data, including raw data, was freely available from CRU?

  287. tallbloke (14:57:12) : Thanks, tallbloke, quote as you wish. Regarding his questions/statements:

    1. Eschenbach graphs the unadjusted data, and shows that it doesn’t match the adjusted data.

    True

    2. Eschenbach then asks why adjustments were made to this data, and then proceeds to completely fail to answer his own question by taking a sort of vague, vanilla explanation of why adjustments are made and stating that that one paragraph does not seem to apply to the record for this particular airport.

    They describe a procedure for selecting stations that need adjustment, by comparing it to nearby stations. I show that their procedure won’t work in Darwin because there are no nearby stations.

    3. The train has already gone off the tracks at this point. . . a more meticulous person might have explored the possibility that the adjustments were made to account for other things. Eschenbach seems to just jump to the conclusion that they must have been pulled out of a hat.

    Having looked at many, many temperature records, most scientifically based adjustments are small, on the order of a tenth of a degree. To alter a record by two and a half degrees is unheard of. I know of no legitimate reason for an adjustment of that size, particularly since we have five different temperature records that all agree very closely. If you know of such a reason, please bring it out. The only reason I have heard is a change to Stevenson Screens, but the link at the end of my post shows that is not the reason.

    4. Eschenbach then makes a motion toward throwing them a bone by making one adjustment of his own. However, he does not give any clear explanation for this adjustment – anyone who was following his line of thought up to this point should have to conclude that he just pulled it out of a hat.

    I have no reason for making an adjustment; I think the record is consistent. I adjusted it in 1941 because that was the year of the big decrease. I did it to show what kind of an adjustment might be done.

    5. He follows that up with an explanation for why he wouldn’t do any further adjustments that shouldn’t have anything to do with the way climatologists normally decide to do adjustments. The decision on how to handle data like this should be made for consistent, quantitative reasons, never because someone’s just eyeballing a graph and tweaking it until they think it looks right.

    I agree entirely. Adjustments should only be done for valid scientific reasons. However, adjustments should also not be done just because someone wants to increase the warming.

    5a. In the process of doing the above, he does an interesting thing: He quoted a GHCN paragraph that indicated that adjustments are made for multiple factors, including but not limited to station location. He agreed that that made sense, so presumably he thought all of it made sense. However, he followed that up with a detailed analysis that is based on the presumption that station location is the only valid reason to make an adjustment.

    Sorry for my lack of clarity. For a one-year adjustment of six tenths of a degree that occurs four times in fifty years, adjustments that total two and a half degrees, we need specific occurrences that happen suddenly. It’s not station moves. It’s not Stevenson Screens. Time of observation changes are typically small, on the order of a couple of tenths. Changing to an MMTS (automated temperature system) is small as well. The classic reference is Quayle, who puts the average change at 0.06 C ± 0.02 C. Changes in the surroundings generally happen slowly.

    So you are right, I have no alternate explanation involving the normal adjustments. But it is not because I presume that “station location is the only valid reason to make an adjustment.” It is because I can’t think of any set of circumstances that would add up to two and a half degrees.

    So now we’ve hit the second place where Mr. Eschenbach can’t even seem to agree with himself on how things should be done, and we’re still only halfway through the post. . .

    Keep going, I’m sure you’ll find more. However, I have researched this, and thought long and hard about it, and I can’t come up with any remotely plausible explanation for adjusting up to five stations at once. Handwaving about “it must have been something” does not advance the discussion. The first big upwards adjustment was in 1930, six tenths of a degree. It wasn’t a station move, or a time-of-observation adjustment, or an MMTS adjustment, or an adjustment for trees growing up around the station … so if you think it was for a real reason, what was the reason?

    Finally, before the adjustment, Darwin Zero, One and Two were all in extremely good agreement, within a few tenths. GHCN adjusted Darwin Zero a large amount, Darwin One by a smaller amount, and Darwin Two not at all … how is that explainable in any universe? They were agreeing perfectly before adjustment, so you adjust one a lot, one a little, and leave one untouched?

    Please …

  288. Perhaps this code has something to do with the Darwin problem (man, how ironic is that!)

    Found in the documents\cru-code\linux\mod directory:

    ! homoegeneity.f90
    ! written by Tim Mitchell
    ! module contains routines to carry out homoegenity testing
    ! based upon Int J Clim 17(1):25-34 (1997) Appendix 4

    [snip]

    !*************************************** calc correlation coefficients

    if (QNoTest.EQ.0) then
    ! write (99,*), "calc correlation coefficients" ! @@@@@@@@@@@@@@@@@@@@@@@@
    do XYear = 2, NYear ! calc C difference series
    if (DataC(XYear).NE.MissVal.AND.DataC((XYear-1)).NE.MissVal) &
    DifferencesC(XYear) = DataC(XYear)-DataC((XYear-1))
    end do

    CorrSum = 0
    do XRStn = 1, NRStn ! iterate by R stn
    if (BaselineR(XRStn).NE.MissVal) then
    DifferencesR = MissVal
    do XYear = 2, NYear ! calc R difference series
    if (DataR(XRStn,XYear).NE.MissVal.AND.DataR(XRStn,(XYear-1)).NE.MissVal) &
    DifferencesR(XYear) = DataR(XRStn,XYear)-DataR(XRStn,(XYear-1))
    end do
    call LinearLSRVec (DifferencesC,DifferencesR,Aye,Bee,Correlation(XRStn)) ! calc correlation
    if (Correlation(XRStn).GT.0) CorrSum = CorrSum + Correlation(XRStn)
    end if
    end do
    if (CorrSum.LT.2) then
    QNoTest = 1 ! require decent regional series to test homogeneity
    end if
    end if

    !*************************************** calc weighted anomalies

    if (QNoTest.EQ.0) then
    ! write (99,*), "calc weighted anomalies" ! @@@@@@@@@@@@@@@@@@@@@@@@
    do XYear = 1, NYear
    OpNumer=0 ; OpDenom=0 ; CorrSum=0

    if (DataC(XYear).NE.MissVal) then
    do XRStn = 1, NRStn
    if (DataR(XRStn,XYear).NE.MissVal.AND.BaselineR(XRStn).NE.MissVal.AND. &
    Correlation(XRStn).GT.0) then
    if (AnomType.EQ.0) then
    OpNumer = OpNumer + ((Correlation(XRStn) ** 2) * (DataR(XRStn,XYear) - &
    BaselineR(XRStn)))
    else if (AnomType.EQ.1.AND.BaselineR(XRStn).NE.0) then
    OpNumer = OpNumer + (((Correlation(XRStn) ** 2) * DataR(XRStn,XYear)) / &
    BaselineR(XRStn))
    end if

    OpDenom = OpDenom + (Correlation(XRStn) ** 2)
    CorrSum = CorrSum + Correlation(XRStn)
    end if
    end do
    if (OpDenom.NE.0.AND.CorrSum.GT.2) then
    if (AnomType.EQ.0) then
    Anomalies(XYear) = DataC(XYear) - BaselineC - (OpNumer/OpDenom)
    else if (AnomType.EQ.1.AND.BaselineC.NE.0.AND.OpNumer.NE.0) then
    Anomalies(XYear) = (DataC(XYear)/BaselineC) / (OpNumer/OpDenom)
    end if
    end if
    end if
    end do
    end if

    !*************************************** decide which years to test

    if (QNoTest.EQ.0) then
    ! write (99,*), "decide which years to test" ! @@@@@@@@@@@@@@@@@@@@@@@@
    if (present(Break)) then
    TestYear(Break) = .TRUE.
    else if (present(XBreakYear)) then
    TestYear(XBreakYear) = .TRUE.
    else if (present(BreakVec)) then
    do XYear = 1, NYear
    if (BreakVec(XYear).EQ..TRUE.) TestYear(XYear) = .TRUE.
    end do
    else
    QFirstA = MissVal ; QLastA = MissVal ! don't consider first5 / last5
    XYear = 0
    do
    XYear = XYear + 1
    if (Anomalies(XYear).NE.MissVal) QFirstA = XYear
    if (QFirstA.NE.MissVal.OR.XYear.EQ.NYear) exit
    end do
    XYear = NYear + 1
    do
    XYear = XYear - 1
    if (Anomalies(XYear).NE.MissVal) QLastA = XYear
    if (QLastA.NE.MissVal.OR.XYear.EQ.1) exit
    end do

    if ((QLastA-5).GE.(QFirstA+5)) then
    do XYear = (QFirstA+5), (QLastA-5)
    TestYear(XYear) = .TRUE.
    end do
    end if
    end if
    end if

    !*************************************** test for single shift

    if (QNoTest.EQ.0) then
    ! write (99,*), "test for single shift" ! @@@@@@@@@@@@@@@@@@@@@@@@
    QPassFail = MissVal ; MaxRatio = 0.0

    do XYear = 2, NYear-1
    if (TestYear(XYear).EQ..TRUE.) then
    call SingleShift (Anomalies,XYear,TestRatio,Simple=1)

    if (present(TestVec)) TestVec(XYear) = TestRatio
    if (TestRatio.NE.MissVal) then
    if (QPassFail.EQ.MissVal) QPassFail = 0
    if (abs(TestRatio).GE.MaxRatio) then
    MaxRatio = abs(TestRatio) ; QPassFail = XYear
    end if
    end if

    end if
    end do

    if (MaxRatio.LT.2.AND.QPassFail.NE.MissVal) QPassFail = 0
    end if

    Could also be from another homogenization program, homogiter.f90, found in the documents\cru-code\linux\cruts directory. I can’t locate the MakeContinuous subroutine/module. A portion:

    if (NGotRef.GT.0) then ! have got more stns with ref ts
    ! for these stns, correct original Data
    call MakeContinuous (Data,RefTS,GotRef,Disc,YearAD,Differ,Verbose,Adj=Data)
    ! … and make checked part trusted
    call Trustworthy (Data,RefTS,GotRef,Believe)

    do XStn=1,NStn
    if (GotRef(XStn)) then ! stn has ref ts
    Sought(XStn)=.FALSE. ! so no longer needs checking

    do XYear=1,NYear
    if (Believe(XYear,XStn)) then ! for the years with a valid ref ts
    Trust(NYear,NStn)=.TRUE. ! the series may enter other ref ts
    if (DiscCrude(XYear,XStn)) then
    DiscCrude(XYear,XStn)=.FALSE. ! any discon have been healed
    NDisc=NDisc-1
    end if
    end if
    end do
    end if
    end do

    Looking through this code, I’m struck that if anyone working for my employer wrote anything remotely similar to this and it was put into production, the federal government would SHUT US DOWN.

  289. This Darwin data is your smoking gun.

    The computer code from the CRU files is just a quick and dirty hack to match these types of adjustments for raw data files. Unless it can be shown that the leaked CRU code was used for anything other than that, it’s probably counter-productive to claim the CRU code is a smoking gun.

  290. Steve Short (14:22:38),

    Much thanks. “adj”: “adjusted.” Yep. Sorry to be a dummy.

    And now that I have the adjusted data (averaged), I can see how someone would discard the pre-1942-43 data, but the particular inhomogeneity can’t be adjusted for unless one knows the early-period mean, which begs the question … at least for 500 km.

    I have to now re-read your (Willis’s) post to try to understand how the adjustment produced an increased trend for the post 1950 period.

  291. Just to be clear, I agree that the CRU code is bad code. I agree that studying it is worthwhile.

    But unless it can be shown that the code was used to produce data or reports that were made available to scientists outside CRU or to the public, it’s not a smoking gun.

  292. P Gosselin (03:46:10) :

    Dave UK:
    – Podesta:
    Leadership role my fanny!
    This is what we call authoritarianism based on Stalinist Science.
    Next it’s:
    1. water (a water crisis is currently being manufactured)
    2. food (meat, sugar and fat)
    3. information
    Is there a Paul Revere left in USA?

    REPLY:
    Well I have the equines….
    Actually my equines and I spent the weekends for the last couple of years alerting people to the various frauds taking place (Federal Reserve Act, Food Safety bill & Cap and Trade)

    Does that count?

    (The equines are a pair of really cute ponies people want to pet, snags them every time better than candy)

  293. Hi Willis. I am a long term Darwin resident (hi to Richard Sharpe!) and have been following Darwin temps for a number of years as I work in environmental science. If you wish, I can provide you (by email) Google snaps of previous and current observation sites plus my versions of temp data (some very interesting insights).

    My opinion is the same as yours – the data have been fiddled. Most telling is the difference between raw data and adjusted data.

  294. I agree that until the code is proven to have been used it’s just an insight into what and how they’re manipulating this data. I don’t know why anyone would have spent the time and energy writing all the code and then not use it, however.

    I like the Darwin analysis and am considering using it as a template to examine the stations in my area.

  295. Following Nick Stokes (15:58:59) I did my own plot of the CRU data as posted, vs the GISS data, raw. In the overlap interval, they superimpose almost exactly. No evidence yet of big CRU adjustments that might have gone into IPCC. No smoking gun.

    Maybe the CRU data was further adjusted later. But that is how it is posted.

  296. emelks (16:06:13),

    Thanks for the code. Wow, FORTRAN has changed since I last wrote any code (in the ’70s, with FORTRAN 66). But it does not appear to use the standard test for breaks in time series: the Chow test. Formally, it is a test of the stability of regression coefficients. The Wikipedia reference looks OK for a summary: http://en.wikipedia.org/wiki/Chow_test

    A Chow test on the series 0 raw data with a 1941 breakpoint and linear trend regression with AR1 and AR3 terms (to remove the major serial correlation) gives an F statistic, F(4,99), of 5.4 which is significant at the 0.001 level. Of course that doesn’t say anything the eye can’t see from the graph.

    If one had to use the earlier data, a defensible adjustment would be a drop in the mean so as to produce an ARMA forecast that best fits the post-break data for 4 or 5 periods. A bit of a pain to program, but it wouldn’t be much longer than the FORTRAN code above, although it would call both the ARMA estimation and forecasting routines.
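
    For anyone who wants to try the Chow test on a station record, here is a minimal sketch in Python (numpy only). It fits a plain linear trend with no AR terms, so it will not reproduce the F(4,99) figure above, and the series at the bottom is synthetic, standing in for the Darwin raw data.

    import numpy as np

    def rss(y, X):
        # Residual sum of squares from an ordinary least squares fit
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return resid @ resid

    def chow_test(t, y, t_break):
        # F statistic for a structural break at t_break in a linear trend model
        X = np.column_stack([np.ones_like(t), t])
        k = X.shape[1]                        # parameters per regime
        pre, post = t < t_break, t >= t_break
        rss_pooled = rss(y, X)
        rss_split = rss(y[pre], X[pre]) + rss(y[post], X[post])
        n = len(y)
        return ((rss_pooled - rss_split) / k) / (rss_split / (n - 2 * k))

    # Synthetic stand-in for Darwin: mild trend with a 1-degree drop at 1941
    t = np.arange(1900.0, 2000.0)
    y = 0.005 * (t - 1900) - 1.0 * (t >= 1941) + np.random.normal(0, 0.3, t.size)
    print(chow_test(t, y, 1941))              # compare against F(2, 96)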

  297. In creating each year’s first difference reference series, we used the five most highly correlated neighboring stations that had enough data to accurately model the candidate station.

    Has anybody tested the validity of this, or are they just making a WAG?

    That is, is it possible to take five stations distributed around a central station, interpolate their readings (assuming we know what averaging/weighting function was used), and compare them to the known readings from the central station?

    I can certainly see how this might completely fail — assume a tropical circular island with weather stations all around the periphery at sea level and one station with missing data on a mountaintop in the center. How could missing data from the central station be reasonably reconstructed from the coastal ones?
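
    One way to test it is exactly as this comment suggests: hide a station you actually have, rebuild it from its neighbors, and compare. Here is a minimal sketch in Python of a first-difference reference series as I read the quoted description; the squared-correlation weighting mirrors the CRU code posted above, but treat the details as assumptions, not GHCN’s actual algorithm.

    import numpy as np

    def reference_series(candidate, neighbors, n_best=5):
        # Correlate year-on-year first differences, keep the n_best most
        # correlated neighbors, and build a weighted reference series.
        # Assumes complete data (no missing values), unlike real stations.
        dc = np.diff(candidate)
        dn = np.diff(neighbors, axis=1)       # one row per neighbor station
        corr = np.array([np.corrcoef(dc, d)[0, 1] for d in dn])
        best = np.argsort(corr)[-n_best:]     # the most correlated neighbors
        w = corr[best] ** 2                   # squared-correlation weights
        ref_diff = (w @ dn[best]) / w.sum()
        return candidate[0] + np.concatenate([[0.0], np.cumsum(ref_diff)])

    The validation is then one line: the residual candidate - reference_series(candidate, neighbors) should be small if, and only if, the neighbors really can stand in for the station. On the mountaintop-island example it presumably would not be.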

  298. Anyone understand why the last chart is cut off around 1995? Seems strange when the other charts go up to recent times.

  299. Deborah Smith (debsmith@smh.com.au) wrote, in a front-page article in the Sydney Morning Herald on the weekend, about the comment in HARRY_READ_ME.txt describing Australia’s climate data as “a bloody mess” and saying that “the rest of the databases seem to be in nearly as poor a state as Australia.”

    From this mess, dear Debs concludes that there is “no question to the validity of temperature records”!

    At least ClimateGate has reached the front page. Hopefully the masses will soon start to realise that Labor’s ETS is based on fraud.

    Dear Debs,

    Based on the ClimateGate fraud, you concluded in your front-page weekend article that there is “no question to the validity of temperature records”!

    I am repeatedly staggered by the magnitude and extent of the fraud associated with the temperature record, as more is revealed. Here is a new detailed analysis of temperatures close to home, which shows how 130 years of cooling in Australia was turned into warming by these fraudsters.

  300. I’ve just had a quick squiz at some data from the Oz weather Station Data site:

    http://www.bom.gov.au/climate/data/weather-data.shtml

    The first few sites I chose look pretty darn flat as a trend. Could it be that it has all been entirely made up? When I’ve looked at sea level data from GLOSS (sp?) they seem pretty up & down but averaging out to no trend. These may ALL be showing no discernible trend if not ‘adjusted’.

    I will examine all I can tonight. To do it is easy:

    1. download the site list, opens in Excel.
    2. Go from this link entering each site in turn,
    3. select from the top left of the grid to the bottom right,
    4. copy, paste into notepad (to preserve tabs)
    5. paste into Excel
    6. highlight the annual average column and press the graph button (easier in orifice 2007).
    7. observe the flat trend
    8. write to your political representative of choice / media of choice.

    I am only speculating on step 7, but so far that’s what I’ve been seeing. Also, if I had more time, or millions (or even billions) of $’s in research grants, I’d look at the monthly data too. (A scripted version of steps 1–7 is sketched below.)
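
    For those who would rather script steps 1–7 than click through Excel, a rough Python equivalent follows. The file name and the “Annual” column header are guesses; check what your own paste from the BOM grid actually contains.

    import pandas as pd
    import matplotlib.pyplot as plt

    # The tab-separated grid copied from the BOM page, saved as site.txt
    df = pd.read_csv("site.txt", sep="\t")

    # Plot the annual average column against the year column
    df.plot(x=df.columns[0], y="Annual")
    plt.title("Annual average temperature")
    plt.show()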

  301. @ Nick Stokes

    On the Met Office site, when you got the record, did they specify whether it was the raw numbers or not? I ask because not all the records the Met Office released are raw; some (I don’t know how many) are adjusted. This is from the FAQ on the Met’s website:

    “The data that we are providing is the database used to produce the global temperature series. Some of these data are the original underlying observations and some are observations adjusted to account for non climatic influences, for example changes in observations methods.”

  302. Just another link in the chain towards a Royal Commission into the state of climate science.
    Have you dudes signed the E-petition at Steve Fielding’s site yet?
    Why not?
    regards

  303. JJ (15:09:36) :

    Willis,

    “Well, yes, I do know that it is huge, it is artificial, and that it is totally imaginary.”

    No, you don’t. Granted, it is large. And all adjustments are ‘artificial’. But totally imaginary? You don’t know that. Not yet.

    “We have no less than five different thermometers at Darwin, all of which agree that the temperature did not climb at an unheard of rate (6 C per century) after 1941. Not sure how much clearer that could be.”

    If it is legitimate to adjust one thermometer, and it may very well be, then it may also be legitimate to adjust all thermometers in the area similarly. If that is the case here, then what you may have discovered is not that they illegitimately applied an enormous false adjustment to two of the thermometers in the average, but that they failed to apply a huge legitimate adjustment to the third. Up goes the AGW.

    You don’t know.

    We have several pieces of evidence here:

    1. They jacked the adjusted temperatures by 2.5 C in 50 years.

    2. They adjusted one record a lot, one record a little, one record not at all, and threw out (or averaged in) two records.

    3. None of these adjustments coincided with station moves, the introduction of new stations, or the change to Stevenson Screens.

    4. TOBS and MMTS adjustments are typically quite small, much less than the four 0.6 C jumps.

    5. Station encroachment (trees, bushes, buildings) typically happens fairly slowly, not at 0.6 C in one year.

    Now, I don’t see how that could be legit. If the record for Darwin Zero needs adjusting by some amount, then as you point out you’d need to adjust them all by the same amount, since they all are reading the same before the adjustment. GHCN didn’t do that. Nor did they “fail to apply” an adjustment to one of them as you say. We know that because they made a very different adjustment to Darwin One than to Darwin Zero.

    Is it possible that these adjustments were all for some legit reason? Well, nothing is impossible … but when it gets that improbable, I say it is evidence that the system is not working and that the numbers have no scientific basis. I certainly may be wrong … but I have seen no evidence to date that says that I am wrong. If you have such evidence, bring it on, I’ve been proven wrong before. But I think I’m right here.

    w.

  304. JJ (15:09:36) :

    If it is legitimate to adjust one thermometer, and it may very well be, then it may also be legitimate to adjust all thermometers in the area similarly. If that is the case here, then what you may have discovered is not that they illegitimately applied an enormous false adjustment to two of the thermometers in the average, but that they failed to apply a huge legitimate adjustment to the third. Up goes the AGW.

    Sounds good … in a “hey was that a Rainbow Bee-eater that just went by? ” kind of way.

    Simple … get v2.mean, extract Darwin, and edit the duplicate years any way you desire to arrive at a one-record-per-year series (using only the available data – this is important).

    Explain here how exactly you get any significant trend from your series (without resorting to making up new data – again important).

    If this were your money we were discussing there would be absolutely no question of me adjusting the reality of your bank account using the first five accounts that share similar account numbers to yours let alone using my secret “real value” formula.

    It took me but a couple of minutes to edit up Darwin from v2.mean and replicate the graph (Figure 5) on my own PC; could you at least do the same? Of course you can always continue fanning the smoke screen.
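
    If it helps anyone repeat the exercise, here is a minimal sketch in Python of the “available data only” averaging described above. It assumes the documented v2.mean layout (an 11-character station ID plus a duplicate digit, a 4-digit year, then twelve 5-character monthly values in tenths of a degree, with -9999 meaning missing); the Darwin ID is taken from the GISS URL quoted later in this thread, so verify both against your own copy of the file.

    def darwin_annual_means(path, station_id="50194120000"):
        # (year, month) -> readings across all duplicate records
        monthly = {}
        with open(path) as f:
            for line in f:
                if not line.startswith(station_id):  # matches every duplicate
                    continue
                year = int(line[12:16])
                for m in range(12):
                    v = int(line[16 + 5 * m : 21 + 5 * m])
                    if v != -9999:                   # keep available data only
                        monthly.setdefault((year, m), []).append(v / 10.0)
        annual = {}
        for year in sorted({y for y, _ in monthly}):
            months = [sum(v) / len(v) for (y, m), v in monthly.items() if y == year]
            if len(months) == 12:                    # complete years only
                annual[year] = sum(months) / 12.0
        return annual

    Plotting the result should come out close to the raw series in Figure 5, if the format assumptions hold.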

  305. This:

    “I got to thinking about Professor Wibjorn Karlen’s statement about Australia that I quoted here:

    Another example is Australia. NASA [GHCN] only presents 3 stations covering the period 1897-1992. What kind of data is the IPCC Australia diagram based on?

    If any trend it is a slight cooling. However, if a shorter period (1949-2005) is used, the temperature has increased substantially. The Australians have many stations and have published more detailed maps of changes and trends.

    The folks at CRU told Wibjorn that he was just plain wrong. Here’s what they said is right, the record that Wibjorn was talking about, Fig. 9.12 in the UN IPCC Fourth Assessment Report, showing Northern Australia:”

    and this:

    “One of the things that was revealed in the released CRU emails is that the CRU basically uses the Global Historical Climate Network (GHCN) dataset for its raw data. So I looked at the GHCN dataset. There, I find three stations in North Australia as Wibjorn had said, and nine stations in all of Australia, that cover the period 1900-2000.”

    are maybe the crux of the matter and raise the following questions.

    (1) Is the GHCN (NOAA) database the same as the NASA (GISS) database, as Professor Wibjorn Karlen assumed? NOAA does not equal NASA for these purposes, surely?

    (2) Can we rely on the accuracy (?) of an interpretation (?) that ‘the emails’ (remarkable isn’t it we at least all know exactly what these are ;-) suggest (?) that the CRU database relies (?) on GHCN (whew)?

    (3) Just how many independent global surface temperature databases are in physical existence at a raw data level?

    (4) Just how many discrete and well defined methods of adjustment are there?

    (5) Just how many such independent adjusted (‘homogenized’) databases are there out there?

    (6) Which are (a) the database and (b) method of adjustment which IPCC bases its official position on?

    I think I’m getting too old for this.

  306. emelks (16:06:13) :

    Perhaps this code has something to do with the Darwin problem (man, how ironic is that!) …

    Please, no huge chunks of code. If the moderator could snip that, I’d be happy. Post your ideas about what the code says, with a link to it. It’s your ideas I’m interested in, not random code.

    Thanks,

    w.

  307. imapopulist (05:37:32) :

    …I always suspected the most manipulation would take place in the remote corners of the Earth, where unscrupulous scientists thought they could get away with it.

    Reply
    Looks like we need some volunteers from Russia. Perhaps the “hackers” from Tomsk? sarc/

    Actually it would be nice if some volunteers from Alaska, Russia, and/or Northern Canada could do the same type of analysis with some visual checks if it is possible.

  308. boballab (17:15:24)
    They said the data was a subset of HadCRUT3, which I take to mean that it has been adjusted.

  309. >>”Is it possible that these adjustments were all for some legit reason? Well, nothing is impossible … but when it gets that improbable, I say it is evidence that the system is not working and that the numbers have no scientific basis. I certainly may be wrong … but I have seen no evidence to date that says that I am wrong. If you have such evidence, bring it on, I’ve been proven wrong before. But I think I’m right here.”

    You aren’t wrong. The data has been deliberately manipulated. It is fraud and criminal prosecutions should occur.

  310. Thank you for the extensive and easy to understand analysis of one piece of the data.

    I’ve long been concerned with the lack of disclosure and “we are so smart, anyone else should be ignored” attitude by the scientists (perhaps I should put that in scare quotes now!) and politicians (“settled science”, Million Degree Earth Al etc…) who insist that we are suffering from GW and, specifically, AGW (and, that we “must” do something about it regardless of cost). I’ve also been casually following Steve McIntyre’s fine work and ClimateAudit and more recently the work over at SurfaceStations.org.

    The CRU leak was definitely a high point of the year for any objective observer, and it was the groundwork laid and published by the likes of ClimateAudit, WattsUpWithThat, and SurfaceStations that caused it to resonate simultaneously with so many people at such a critical time. Thank God for the Internet.

    The CRU’s actions in the past have substantially delayed the resolution of questions about “AGW: if, what, how much, what’s the social impact?” Hopefully now we can redo the analysis in a traditional scientific way, with open disclosure, using accepted statistical methods. Sadly, if the CRU et al just _happen_ to be right, their arrogant secrecy and (sorry to say) fraud may have doomed Mankind by delaying action (but I suspect the reason for the fraud is that they really don’t have solid data to support their bias, so I’m betting on Mankind to Win and CRU DNF).

    Thank You All and keep up the good work.

  311. Paul (15:56:17) :

    I understand that the scientists have taken the raw data and manipulated it to reflect what they think is the ‘most accurate’. It makes sense to do just that.

    But if you have all of the raw data, and all of the adjusted data, could it not be demonstrated statistically if there was a bias one way or another?

    For example If 55% of the value added data is shifted up, vs 45% down, would that not prove that there was a bias in the resulting conclusions?

    Of course that would also have to be reviewed becasue it is easy to see that numbers should be shifted down as data coming from urban sites would reflect nearby land use changes.

    Most of the common changes tend to warm the site. These include trees and brush growing around the area and blocking the wind, paving, building houses, the change to MMTS sensors, heating houses … so legitimate corrections for those would themselves skew in one direction. To answer your question, then: no, a simple up-versus-down count can’t show statistically that the data have been munged.

  312. Bryan,

    “I can certainly see how this might completely fail — assume a tropical circular island with weather stations all around the periphery at sea level and one station with missing data on a mountaintop in the center. How could missing data from the central station be reasonably reconstructed from the coastal ones?”

    You couldn’t (since weather and climate are chaotic systems), but it would be possible to flag potentially incorrect “adjustments” by determining average temperature differentials and comparing them to the target station. This would assist in identifying stations for closer scrutiny. IMHO.
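
    That screening can be sketched in a few lines: track the candidate-minus-neighbor differential and flag years where it jumps. A minimal version in Python; the 0.5-degree threshold is arbitrary, purely for illustration.

    import numpy as np

    def flag_years(years, candidate, neighbor, threshold=0.5):
        # The differential between nearby stations should stay roughly
        # constant; a sudden jump marks a year deserving closer scrutiny.
        differential = candidate - neighbor
        jumps = np.abs(np.diff(differential))
        return [int(y) for y, j in zip(years[1:], jumps) if j > threshold]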

  313. This study really “puts the cookies on the bottom shelf where the kiddies can easily reach them.” Homogenizing milk liberates lipolytic enzymes from the membranes of the fat globules, which in time leads to the milk, or cheese made from it, going rancid. It appears that homogenizing temperature data also makes it go rancid.

  314. Not sure whether anybody has pointed this out (there are a lot of comments!). Australia has over 50 “long” stations (>100 years), of which about 40 are held to have 95%-plus complete records. A lot of these are at airports, but some of those are not airports as they are known in the NH!!!
    Why on earth would anybody choose to use a tiny handful to represent temperatures over an area roughly the size of the contiguous states of the US or China (twice the size of Europe)?

  315. Brian Dodge (15:11:07) :

    You say
    “And CRU? Who knows what they use? We’re still waiting on that one, no data yet …”

    The CRU said (before the server was taken offline because of the hack – http://www.cru.uea.ac.uk/cru/data/):
    ” The various datasets on the CRU website are provided for all to use, provided the sources are acknowledged. Acknowledgement should preferably be by citing one or more of the papers referenced on the appropriate page. The website can also be acknowledged if deemed necessary. CRU will endeavour to update the majority of the data pages at timely intervals although this cannot be guaranteed by specific dates.”

    Who is lying?

    Lying? Nobody is lying here. Some people are confused, some are wrong, some tend to exaggerate, but AFAIK, nobody is lying.

    Yes, there is data on the CRU website.

    No, it is neither the raw data nor the station data, which is what I requested under the Freedom of Information Act.

    What is there is gridded global data. It is what comes out at the end of the great homogenization machine. I was interested in getting a look at what goes into the machine, and that’s what they have never revealed. Both GISS and GHCN keep this raw input data on their websites in a publicly accessible form. CRU hides theirs. Go figure.

  316. For Richard (3:16:38). Re your query about satellite (MSU), surface temperatures, radiosonde etc. Your memory was right! John Daly outlines specifically how the UNIPCC scientists “dealt with” the troublesome divergence question in 1979, i.e., surface showing warming (after adjustments), MSU showing little or none.
    Site is http://www.john-daly.com/ges/surftmp/surftemp.htm. It has been out there a long time, but it is still true and very relevant, as their decision profoundly influenced all future UNIPCC modelling.

  317. The last few years, the Oct/Nov records for Darwin have been interesting.
    Many records (max temp, number of days above a given temp) have been smashed.

  318. Martin Brumby (01:24:43) :

    The ‘adjustments’ in Fig. 7 wouldn’t be based on the atmospheric CO2 levels at Mauna Loa, by any chance?

    That would be a neat way of ‘adjusting’ the data. (Data Rape, I’d call it).

    Judging from the satellite temp. curves, CO2 seems to be becoming a rather poor proxy for temps. Something about a “divergence”….

  319. Manfred: (40:53)
    Interesting point about Peterson… As the Dean of the College of Earth and Mineral Sciences, Easterling is Mann’s superior at PSU… he might be involved in the review of Mann… PSU is not being the least bit transparent with the inquiry…

  320. So who has the originals? Handwritten by the tireless souls who march out to the little chicken coop out behind the house, open the little door, shine a flashlight into the dark reaches of its belly, and write down the temp on a form?

  321. Hector Pascal said:

    Hi Willis. I am a long term Darwin resident (hi to Richard Sharpe!) and have been following Darwin temps for a number of years as I work in environmental science.

    Was there from ’56 to ’74 with a few years missing …

    Do you have an email address for Ken Parish? I am interested in asking him about the temperatures in the early years … (realrichardsharpe (at) gmail.com)

  322. Let’s do some basic inductive reasoning. How hard would it be to reproduce the CRU grid record from the known reporting stations in the raw and adjusted GHCN databases?

    Not hard for land-based data. That would at least identify whether CRU are massaging the data any more (than, say, a standard kriging exercise).
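
    As a sketch of what “not hard” might look like: the usual recipe is to convert each station to anomalies against a common base period and then average the stations falling in each 5×5 degree box. A toy single-cell version in Python, under those assumptions (the commonly described CRU-style method, not their actual code):

    import numpy as np

    def gridbox_series(stations, base=(1961, 1990)):
        # `stations` is a list of (years, temps) numpy-array pairs
        # for the stations that fall inside one grid cell.
        cell = {}
        for years, temps in stations:
            in_base = (years >= base[0]) & (years <= base[1])
            anomalies = temps - temps[in_base].mean()  # vs station's own base
            for y, a in zip(years, anomalies):
                cell.setdefault(int(y), []).append(a)
        # Cell value per year: mean anomaly of whichever stations report
        return {y: float(np.mean(v)) for y, v in sorted(cell.items())}

    Comparing such a reconstruction, cell by cell, against the published CRU grid would show directly how much massaging sits between the two.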

  323. NIWA has yet to disclose how they did their “adjustments” / “homogenisation”. This may well reveal a bias, which may not be intentional. The bias may be due to bad logic, or it may be due to the belief in their minds that the world is warming and an involuntary transfer of that belief to the adjustments… Who cares; we want to know, and we want to see the adjustments – exactly how they were done.

    So far the position stands – the raw data shows no warming.

    NIWA position – NZ has warmed 1.9 C in the last 100 years or so, on the basis that 7 stations in NZ (after adjustments) show this to be so. Example: Wellington had to be adjusted up by 0.79 C (their adjustment in this case may or may not be correct; certainly their reasoning has holes in it).

  324. Great work Willis. Keep it coming.

    Here’s some more to add to the adjustment debate. How about the NCDC adjustments done to the US temperature record.

    If you remember Anthony’s visit to the NCDC, they provided a presentation on the (fine) US station siting (and the adjustments they needed to do to fix the fine stations’ siting). This was also presented in a paper, which I read but which is no longer available.

    http://wattsupwiththat.com/2008/05/13/ushcn-version-2-prelims-expectations-and-tests/

    Here is the current USHCN V2 TOBs adjustment – adding 0.225C to the trend since 1920.

    Here is the Homogenization adjustment – adding another 0.225C to the trend since 1915.

    There is one other adjustment (not shown in the presentation but in the paper which doesn’t have any real impact on the trend).

    So how much have US temperatures increased since 1900 or 1920 (after the adjustments)? A little more than the 0.45 C (0.225 + 0.225) that the adjustments themselves contribute.

  325. Hector Pascal (16:42:35) :

    Hi Willis. I am a long term Darwin resident (hi to Richard Sharpe!) and have been following Darwin temps for a number of years as I work in environmental science. If you wish, I can provide you (by email) Google snaps of previous and current observation sites plus my versions of temp data (some very interesting insights).

    My opinion is the same as yours – the data have been fiddled. Most telling is the difference between raw data and adjusted data.

    That’s great, Hector. Send them to my surfacetemps address I gave above.

    Thanks,

    w.

  326. Steve Short (17:18:27) :

    … raise the following questions.

    (1) Is the GHCN (NOAA) database the same as the NASA (GISS) database, as Professor Wibjorn Karlen assumed? NOAA does not equal NASA for these purposes, surely?

    No, they are completely separate.

    (2) Can we rely on the accuracy (?) of an interpretation (?) that ‘the emails’ (remarkable isn’t it we at least all know exactly what these are ;-) suggest (?) that the CRU database relies (?) on GHCN (whew)?

    That’s what Phil Jones said, and until they release their data, there’s no way to verify it.

    (3) Just how many independent global surface temperature databases are in physical existence at a raw data level?

    Well, every national met service has a local one. Global datasets? CRU, GHCN, and GISS. All three of them use mostly GHCN raw data.

    (4) Just how many discrete and well defined methods of adjustment are there?

    Just one. Look for something odd, figure out if it is actually wrong, and fix it. The devil, of course, is in the details … how do you identify an inhomogeneity? Somebody abused me for it before, but my main tools are my eyes and my experience. I’ve looked at a lot of temperature databases.

    And of course, once you’ve found an incongruity, how do you decide whether to fix it? And once decided, how do you actually fix it?

    There seems to be little agreement on the details, as evidenced by the very different actions of GHCN and GISS in re Darwin Zero.

    (5) Just how many such independent adjusted (‘homogenized’) databases are there out there?

    Basically 3, CRU, GHCN, and GISS.

    (6) Which are (a) the database and (b) method of adjustment which IPCC bases its official position on?

    The CRU keeps its methods secret, the others make handwaving gestures at explaining them, and the IPCC couldn’t care less. They seem to use them somewhat interchangeably.

    I think I’m getting too old for this.

    I’m not, but I may be by the afternoon …

    Thanks for the questions,

    w.

  327. hillbilly76 (17:50:30) :
    For Richard (3:16:38). Re your query about satellite (MSU), surface temperatures, radiosonde etc. Your memory was right! John Daly outlines specifically how the UNIPCC scientists “dealt with” the troublesome divergence question in 1979, i.e., surface showing warming (after adjustments), MSU showing little or none.
    Site is http://www.john-daly.com/ges/surftmp/surftemp.htm. It has been out there a long time, but it is still true and very relevant, as their decision profoundly influenced all future UNIPCC modelling.

    Here is what I read about Darwin from that site: “Tropical stations in Malaysia and Indonesia show warming, while Darwin and Willis Island in Australia, both tropical stations in the same region, do not.” Is it saying that the satellite data is not showing Darwin warming, or the ground data? It’s not clear.

    That really is an eye opener. It should be widely read. The ground data does indeed seem faulty.

  328. Pamela Gray (06:57:41) :

    This thing about not having raw data anymore. I am confused about that. There is raw unadjusted station data that apparently can still be had by any Susie Q or Tommy T out there. Isn’t that the raw data? So who needs raw data from the Met? Correct me if I’m wrong, but the station survey is able to capture the raw data from each station and display it here. Isn’t that raw data? And easily obtained? What if we decided that our next challenge was to tabulate and average climate zone station data (totally unadjusted) and run our own graphs here at WUWT? And I do mean by climate zone so that we are not averaging together apples and oranges. Then let THEM argue about our methods.

    That’s what I proposed above for surfacetemps.org (weschenbach (12:25:33)). We’ll see if it flies.

    Good to hear from you,

    w.

  329. Willis, good to see your work again. I have followed this saga for the last six years, since I first heard of Steve McIntyre’s work.

    I hope three things come out of what has occurred lately:

    1. Open data,
    2. Open Code, and
    3. No more BS until 1 and 2 are done and fully vetted.

    Seeing the madness unfold in Copenhagen I have to wonder whether my hopes are realistic, but there I am.

    In the meantime, keep the pressure on these clowns.

  330. Jeremy (07:00:22) :

    More BBC Propaganda. Husky dogs may not have employment and face a bleak future in a warmer world. This is really pathetic. Teams of Husky dogs (which pull a sled) were replaced by motorized machines, called snowmobiles (or in Canada a Skidoo), over 50 years ago….

    I have to reply to this one. Husky teams were used in dog sled racing up until recently. The dogs live outdoors so they grow the correct fur coat and underlying layer of fat. PETA threw a hissy fit and insisted the dogs must be kept in heated kennels. THAT was the end of dog sled racing, because dogs kept in heated kennels come down with pneumonia when raced.

    You are correct this is pure BS. For what it is worth there are more horses in the USA now than there were a century ago.

  331. There is something I would like to slip into the mix

    There is a document, “Territory 2030”, just released by the govt. A futures strategy document.

    In the DRAFT strategy, it is stated that in the next 20 years

    “Temperatures will rise an average of 2°C to 3°C”!!!!!!

    If one splices a 2009->2030 2.5°C temp rise onto the Darwin airport record, or Hadley CRU, or almost any other temperature record, the statement is clearly garbage.

    This is something that could be put into play here by simply getting the airport record, even one with an exaggerated temperature rise, and sending it to the local newspaper, the NT News. The NT News would be quite likely to print a reasonable quality graphic and an accompanying short letter.

  332. Anthony, both you and Steve McIntyre are mentioned in an editorial in today’s WSJ (hope this is a good place to mention this, if it’s in the wrong spot, I apologize!):

    http://online.wsj.com/article/SB10001424052748704342404574576683216723794.html#articleTabs%3Darticle

    —-begin excerpts—
    The Tip of the Climategate Iceberg

    The opening days of the Copenhagen climate-change conference have been rife with denials and—dare we say it?—deniers. American delegate Jonathan Pershing said the emails and files leaked from East Anglia have helped make clear “the robustness of the science.” Talk about brazening it out. And Rajendra Pachauri, the head of the U.N.’s Intergovernmental Panel on Climate Change and so ex-officio guardian of the integrity of the science, said the leak proved only that his opponents would stop at nothing to avoid facing the truth of climate change. Uh-huh.

    […]

    In 2004, retired businessman Stephen McIntyre asked the National Science Foundation for information on various climate research that it funds. Affirming “the importance of public access to scientific research supported by U.S. federal funds,” the Foundation nonetheless declined, saying “in general, we allow researchers the freedom to convey their scientific results in a manner consistent with their professional judgment.”

    Which leaves researchers free to withhold information selectively from critics,
    […]

    When it comes to questionable accounting, independent researchers cite the National Oceanic and Atmospheric Administration (NOAA) and its National Climate Data Center (NCDC) as the most egregious offenders. The NCDC is the world’s largest repository of weather data, responsible for maintaining global historical climate information. But researchers, led by meteorology expert Anthony Watts, grew so frustrated with what they describe as the organization’s failure to quality-control the data, that they created Surfacestations.org to provide an up-to-date, standardized database for the continental U.S.

    Mr. McIntyre also notes unsuccessful attempts to get information from NOAA.
    [….]

    —end excerpts—

  333. Turboblocke (04:33:51) :

    “K … but given the scarcity of stations in Australia, I wondered how they would find five “neighboring stations” in 1941 …

    So I looked it up. The nearest station that covers the year 1941 is 500 km away from Darwin. Not only is it 500 km away, it is the only station within 750 km of Darwin that covers the 1941 time period. (It’s also a pub, Daly Waters Pub to be exact, but hey, it’s Australia, good on ya.) So there simply aren’t five stations to make a “reference series” out of to check the 1936-1941 drop at Darwin.”

    Apart from “MIDDLE POINT” http://www.bom.gov.au/climate/averages/tables/cw_014090.shtml
    and Darwin Post Office http://www.bom.gov.au/climate/averages/tables/cw_014016.shtml
    and CAPE DON http://www.bom.gov.au/climate/averages/tables/cw_014008.shtml to name but 3.

    Ho hum.

    Sorry, missed this one. In order to get five stations to see if the 1936-41 slide was an “inhomogeneity”, you need to have five stations that were in operation in 1936. That rules out Cape Don and Middle Point, and everyone but the pub in the outback, 500 km away.

    Ho hum indeed. As I have learned to my cost, one must do one’s homework and read carefully before uncapping the electronic pen …

  334. If nobody else has pointed this out, before you go circulating that last graphic around, you might want to fix that title. “Dawin” vs “Darwin”.

    Oh, and great post. I’ve seen similar sorts of analyses (probably at ClimateAudit), so I’m not the least bit surprised. This analysis is particularly straightforward, though, and makes it very obvious as to what is going on.

    IMHO, adjusting the raw data is, for lack of a better term, corrupt – even if done in an unbiased way. Even when done innocently, what the analyst is trying to do is to create a single dataset for a single place where none really exists. To be rigorous, each dataset must be considered independently. If the thermometer is moved from the pub to the airport, you are no longer measuring the temperature at the pub – you’re measuring the temperature at the airport. It’s as simple as that. This post shows what kind of monkey business you invite when you try to pretend that there is still only one dataset, when there are really two.

    It also shows how questionable the use of these measurements is in the first place. If the temperature data at the pub is significantly different from that for the airport, for example, that only shows that you need a much higher density of measurements than you actually have to describe the temperature of the area. It’s called “aliasing”, and there is simply no substitute for an adequate number of samples. If you don’t have enough samples, you simply cannot tell what the average temperature of the area is.

    If you change the thermometer, and that changes the measurement significantly, that tells you that your thermometer stinks, or is uncalibrated. The only way to improve your data is then to calibrate at least one of the thermometers involved – simply making “adjustments” does not increase the absolute accuracy.

    Perhaps it is only human nature to try to come up with a number, where there really isn’t one to be had. But that’s not science.

  335. It’s interesting to me how well the adjustment graph for the temperature series aligns with the briffa_sep98_e.pro “valadj” artificial adjustments. It’s not one-for-one, but it’s certainly close.

  336. Thank you for all your effort to track down the truth. You deserve to be recognised around the world for your painstaking analysis.

  337. Impressive work.
    Another way to establish a bias is perhaps to look at the release dates of the CRU or GISS monthly temperature reports.
    I noticed over the years that whenever the temperature trend was falling, it took longer for GISS to publish their findings.
    When the temperature trend was increasing, it was expected and published without further checks, only to get caught with their pants down, like last fall when GISS used the September temperatures for the October temperatures in Siberia.

  338. Willis,

    “Now, I don’t see how that could be legit.”

    That you cannot see it does not mean it isn’t so. Typical of the Team’s arrogance is the notion that they know it all. This is why you should ask before drawing unsupported conclusions. Admit your limits. You are not God.

    “If the record for Darwin Zero needs adjusting by some amount, then as you point out you’d need to adjust them all by the same amount, ”

    No, I pointed out that it might be legitimate to adjust them all by the same amount, not that it would be necessary to. There could be more than one adjustment being applied. One adjustment might be applied to all stations, another only to one. The point is you don’t know. Ask.

    “Nor did they “fail to apply” an adjustment to one of them as you say.”

    You said they did. You said “but they left Station 2 (which covers much of the same period, and as per Fig. 5 is in excellent agreement with Station Zero and Station 1) totally untouched.” Totally untouched implies that they didn’t apply an adjustment to that one, though they did to the others. That they may have failed to apply an adjustment to the untouched one is consistent. It may have happened, it may have not. You don’t know. Ask.

    “We know that because they made a very different adjustment to Darwin One than to Darwin Zero.”

    Or, a different pair of adjustments. Or five different adjustments. Three to one and two to the other. You don’t know. Ask.

    “Is it possible that these adjustments were all for some legit reason?”

    Yes. Which is why you should ask.

    “Well, nothing is impossible … but when it gets that improbable, …”

    It’s like listening to Mann. How on earth do you quantify the probability of that event? Quit making stuff up, back up to what you can prove, and ask about the rest.

    “I say it is evidence that the system is not working and that the numbers have no scientific basis. I certainly may be wrong … but I have seen no evidence to date that says that I am wrong.”

    Really, are you sure there aren’t hacked emails with your name on them? This is very shoddy reasoning you are displaying. Once again, you do not know everything. There may be perfectly legitimate adjustment(s) applied here that you are simply unfamiliar with. Before you jump to the conclusion that someone else is a criminal, ask.

    “If you have such evidence, bring it on, I’ve been proven wrong before. But I think I’m right here.”

    People who think they are right, and who are unwilling to take the steps necessary to find out if they are not, are at the heart of this problem. Don’t continue to be one of them. Ask.

    If you are going to make claims, it is up to you to prove them correct. It is not sufficient for you to make unsupported claims and demand that others prove you wrong. That’s Teamwork. Ask.

    Honestly, I don’t see what the issue is. I have bent over backwards letting you know that I support what you are doing, that you have done valuable work so far, and that I think you are on to something. The only problem is that you don’t want to complete the work before drawing very nasty conclusions about other people. It is not proper for you to do that without first asking them the questions that you raise but cannot answer yourself.

    Ask!

    JJ

  339. I suggest normalizing this simulation with the same parameters that NOAA uses. The US Historical Climate Network of NOAA uses a system to handle this very bias that we are observing.

    http://www.ncdc.noaa.gov/oa/climate/research/ushcn/

    The interesting coincidence is that the beginning of the ‘bias’ or intercept adjustment is very evident from 1960 onward, as opposed to the Darwin 1940-41 bias. I speculate that there was an instrument change that occurred with a radar installation that was later bombed by the Japanese in ’42. The most interesting part is that the tree ring correlation goes to crap in the ’60s, the same time these silly corrections come into play. Coincidence… I say not. I have worked with systems with multiple correlation factors (measurement equipment), and it is difficult in the very best of situations.

  340. I’m sorry Willis, but I must question your plots.
    I have plotted raw GHCN, raw GISS, and homogenised GISS, and they do not compare with yours at all. Will you please show the source of your (faulty?) data?
    Here are my sources
    Giss: http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=501941200004&data_set=1&num_neighbors=1
    ghcn: ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v2/v2.mean.Z
    Here are my plots:

    I suppose giss or ghcn may have adjusted their figures but this seems unlikely

    Note that my plot shows two discontinuities, 1940 and 1995. If these are removed, then a warming will be shown!

    Comments please

  341. Excellent article, Mr Eschenbach. I assume we can trust your integrity; it’s getting very hard to know who to trust these days. Your findings don’t surprise me.
    I have recently, out of curiosity, had a brief look at the Australian temperature data that is available on the BOM site for my own local area, Newcastle, and was very intrigued by what I saw.
    I chose Newcastle’s Nobby’s weather station (61055) and compared it with Sydney’s Observatory Hill weather station (66062).
    I chose these two sites because they probably have the longest continuous record of anywhere in Australia, dating from the mid to late 19th century, and I would guess that the measuring point would not have changed by anymore than a few metres over that time.
    The most significant difference is that the Nobby’s station is isolated from urban development by water and sand for at least a kilometre all around, so any heat island effect would be minimal, whereas Sydney’s city centre has grown around the Observatory site, along with the Sydney Harbour Bridge off-ramp in the 1930s and a major roadway in the 1950s.
    The Newcastle site shows a generally flat temperature trend while the Sydney one shows a steady rise more in line with the accepted trends.
    It would be very interesting for someone to do a careful analysis of these two sites to confirm my observations as there are not many temperature records this long in Australia.
    I have no idea whether the data available on the BOM site is unadulterated or not.

  342. Thanks for the great article.

    I pulled up a station at West Point that has been in use since the late 1800s.

    The averages based on the raw data show little to no warming.

    Yet the homogenized data seem to depress the temps prior to 1980 and inflate them thereafter.

    I created a crappy, superimposed graph to show it, here: http://thevirtuousrepublic.com/?p=4813

    Two questions come to mind. One, why doesn’t the raw data show the “hockey stick”? Two, why does the manipulation of the data depress the figures pre-1980 and inflate them thereafter?

    I think we all know the reason why and science isn’t involved.

  343. Darwin Zero Dirge

    Darwin Zero from the Land Down Under,
    Ground Zero for deceptive plunder,
    Lots of weather from a lot of years
    Ground to powder in the Warmist Gears!

    They digested all the data – devouring every bit and byte –
    And what came out the other end? A stinking thing as dark as night,
    Obedient to Gore and Jones, oblivious to all that’s known,
    Looking like the doomsday clock had struck its final hour
    And the world was
    Out of luck.

    Darwin Zero from the Land Down Under,
    Ground Zero for deceptive plunder,
    Lots of numbers from lots of years
    Ground to powder in the Warmist Gears!

    But then Eschenbach said, “Hey – full stop!”
    “These charts are wrong – they don’t match up!”
    He promptly checked the data and,
    Checked and checked and checked again,
    Until the numbers showed him true
    What “homogenized” could do…

    Darwin Zero from the Land Down Under,
    Ground Zero for deceptive plunder,
    Lots of lies from lots of years,
    All made to feed their Doomsday fears!


    ©2009 Dave Stephens

    http://www.caricaturesbydave.com

  344. For JJ

    I understand your desire for purity to counter possible dishonesty in the science. But I also know that if the Team does not provide the answers requested, or even acknowledge the question, this puts the whole issue into limbo. In the meantime Copenhagen continues and some stupid deal is done, in which case Willis’s work becomes irrelevant.

    So I support the more aggressive approach. Maybe some points will be lost, but from what I see, more are likely to be won. The JJ system allows the Team to win. How much progress did Steve M make with his approach, which is similar to your suggestion? The CRU leaks show that the Team were playing Steve; in other words, science had nothing to do with it.

    Alan

  345. JJ (20:28:59) :

    Willis,

    “Now, I don’t see how that could be legit.”

    That you cannot see it does not mean it isn’t so. Typical of the Team’s arrogance is the notion that they know it all. This is why you should ask, before drawing unsupported conclusions. Admit your limits. You are not God.

    “If the record for Darwin Zero needs adjusting by some amount, then as you point out you’d need to adjust them all by the same amount, ”

    No, I pointed out that it might be legitimate to adjust them all by the same amount, not that it would be necessary to. There could be more than one adjustment being applied. One adjustment might be applied to all stations, another only to one. The point is you don’t know. Ask.

    First, suppose I have five clocks. They’re always within a few minutes of each other, and have been for years. One day someone comes in. He sets one clock ahead by an hour, one clock ahead by a half hour. He leaves one untouched, and throws two away. Then he says “You want to know the real time? Just average those three remaining clocks!”

    It is not arrogant, nor does it require Godlike powers, to see that there is no way that such an “adjustment” makes sense. I don’t need to ask the guy making the adjustments what his reasons were. Before, the clocks all moved in lockstep. Now they’re all over the map.

    The point of an adjustment is to bring things back together. If you have five clocks and they all tell the same time, then one gets bumped and slows down, you speed it up. You bring them back together.

    You don’t take five clocks or five temperature records that are all giving the same answer, throw two away, and adjust the remaining three to give different answers. That’s unadjusting, not adjusting.

    Second, my experience with just asking has been … well … let me call it “less than fruitful” and leave it at that. Yes, the tone of my post was aggro; it probably could have been cooler. But you know what? I’m tired of being blown off, and shuffled around, and lied to. My choice of tone was quite deliberate.

    You see, perhaps there is some innocent answer. Perhaps whoever is responsible will stand up and say “Here’s why we did it, for these very good reasons.” And at that point I’ll look like a worldwide idiot … do you think I didn’t take that into account? I didn’t want something they could just ignore. If there’s an answer, I want to get it, and I’m willing to take some risks to get it.

    But if no one stands up to give the reasons, then I will have publicly shown the truth in an irrefutable manner. So I have pushed them hard, and deliberately, and risked my good name, to see if I can get an answer. I’m calling them out: put up or shut up.

    Because I assure you, with these good folks, I have a host of experience that “Just ask” doesn’t work.

    JJ, I appreciate both the tone and the content of your comments. In a regular scientific situation, this would never come up, and I would just ask; you would be totally correct. But these days, much of climate science is not science at all. It is people fiercely fighting to withhold information from the public. When they are fighting to keep information secret, “just ask” is just inadequate.

  346. Alan,

    “I understand your desire for purity to counter possible dishonesty in the science.”

    It isn’t purity. It is (a) common decency and (b) good strategy.

    It is not moral to accuse people of committing a crime absent proof. It is not moral to accuse people of committing a crime based on your own admitted misunderstanding of their methods, especially without first asking them if you have their method right.

    And it is very bad strategy to act like a crank, when the opposition’s (very successful) strategy to date has been to paint you as a crank.

    “But I also know that if the Team …”

    It’s NOAA, not Hansen or CRU. They publish their data and methods. It’s worth asking for a methods clarification.

    “… does not provide the answers requested, or even acknowledge the question, this puts the whole issue into limbo.”

    No, it doesn’t. You’re still free to run with it. And if you’re stonewalled, you run with that, too. But you don’t claim more than you can prove, unless you want to be played.

    “In the meantime Copenhagen continues and some stupid deal is done, in which case Willis’s work becomes irrelevant.”

    Nonsense.

    First, it is not necessary to make unsupported accusations of crime in order to make full use of the Darwin example. Stick with what you can prove.

    Second, there isn’t going to be anything substantive coming out of Carbonhagen, and this issue doesn’t end there. Paint yourself as a bunch of cranks (“every climate scientist that produces data we don’t understand is a criminal!”) while burning the issue out in two weeks, and you have given up the long game.

    Climategate has given us tons of sensational material; there is no need to squander any of that, much less to waste something so important as an audit of the instrument record.

    Third, shame.

    JJ

  347. What a relief! I no longer have to worry about global warming. Or pollution. Or carbon emissions, or limited energy resources. I am so happy now. I can go back to being an idiot.

  348. As noted by others, the 1941 issue is most likely the result of record destruction in February 1942 after the major Japanese raid on Darwin (there were actually other raids, continuing until 1943). There was no sealed road from Darwin to the rest of Australia until mid-1942, with transport primarily being by sea; it wouldn’t be surprising to find that any central record keeping relied on dispatch of the data once a year or once every six months, for example (and hence the record for parts of ’41 was destroyed in Feb ’42).

    One thing to note is that you’re not comparing against temperature records (if they still exist) to the north. Darwin is 720 km from Dili, East Timor, and closer again to Maluku in Indonesia. It’s not perfect, but it might give you another data set to compare to (presuming the records exist somewhere).

  349. Billy Sanders (22:49:26) :

    What a relief! I no longer have to worry about global warming. Or pollution. Or carbon emissions, or limited energy resources. I am so happy now. I can go back to being an idiot.

    Not sure what your point is, Billy. I worry about pollution and limited energy resources. I don’t worry about carbon emissions and global warming.

  350. JJ

    Which is why you should ask.

    No, they should have “told”, right from the start. Otherwise they have no Science. No one even needs to ask. If they don’t tell, they have nothing scientific to “ask” about.

    The “nasty” conclusions result from exactly what the “Team”, which has a large supporting gang, has done and is still doing: Perpetrating Scientific Fraud. Conspiring to pass off what is not Science as Science. There may be other frauds.

  351. Smokey: “The leaked emails and code only reinforces my suspicion, and any contrary and defensive red faced arm-waving does nothing to convince me otherwise.”

    Correct me if I’m wrong, Smokey, but I sense that you don’t trust climate scientists very much.

    In these situations I find it useful to repair to my rule-of-thumb measure of trustworthiness. Using the “Beard Index” I find the AGW scientists are highly trustworthy, and that there is empirical evidence to show this.

    Compare the carefully cultivated beards of Schmidt and Mann with that of McIntyre. Clearly, Schmidt and Mann take great care to present a manicured and precise front, much like their climate work.

    In contrast, McIntyre looks like he was just dragged out of bed. Can we expect precision and good work from a man who neglects his appearance so? I think not.

    I have found my index a great comfort during difficult times such as we are currently undergoing. I think you would also find great benefit from this easily applied and highly reliable measure.

  352. >Not sure what your point is, Billy. I worry about pollution and limited energy resources. I don’t worry about carbon emissions and global warming.

    Oh. Cherry-picked environmentalism. Nice.

  353. Wow. WOW.

    A simple request: bugger all the “adjustments”. Over a large-enough data series they should average out to zero anyway, absent some systemic effect like UHI.

    What does the average temperature do if you just take raw, unadjusted data and average it all up? All stations, all sites, all years. I’d *start* with that data and analysis.

    Enough manipulation. Publish the raw data, write the code to analyze it in just a few minutes, and publish the raw statistics and averages.
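
    Something like the following would be a start: a minimal Python sketch, assuming the GHCN v2.mean fixed-width layout (a 12-character station/duplicate ID, a 4-digit year, then twelve 5-character monthly values in tenths of a degree C, with -9999 for missing). Those column positions are my assumption and should be checked against the actual file before trusting any output.

    # Naive average of raw GHCN v2.mean data -- no adjustments, no gridding.
    # ASSUMED layout: 12-char station/duplicate ID, 4-char year, twelve
    # 5-char monthly values in tenths of deg C, -9999 = missing. Verify!
    from collections import defaultdict

    def yearly_raw_average(path):
        sums = defaultdict(float)   # year -> sum of monthly means
        counts = defaultdict(int)   # year -> number of monthly values
        with open(path) as f:
            for line in f:
                year = int(line[12:16])
                for m in range(12):
                    value = int(line[16 + 5 * m : 21 + 5 * m])
                    if value != -9999:
                        sums[year] += value / 10.0
                        counts[year] += 1
        return {y: sums[y] / counts[y] for y in sorted(sums)}

    for year, temp in yearly_raw_average("v2.mean").items():
        print(year, round(temp, 2))

    Of course, a straight average like this over-weights regions dense with stations, which is what gridding is supposed to handle; but it would at least put the raw, unmassaged numbers on the table.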

  354. >Not sure what your point is, Billy. I worry about pollution and limited energy resources. I don’t worry about carbon emissions and global warming.

    Oh. Cherry-picked environmentalism. Nice.

    Yeah, Willis. Billy is the man who defines what environmentalism is, and if you don’t meet his requirements, then you are a Cherry Pickin’ Denier!

  355. Willis,

    In your analysis, you quoted from the NOAA GHCN methods document, regarding the homogeneity adjustments. This passage from that document demands your immediate attention:

    A great deal of effort went into the homogeneity adjustments. Yet the effects of the homogeneity adjustments on global average temperature trends are minor (Easterling and Peterson 1995b). However, on scales of half a continent or smaller, the homogeneity adjustments can have an impact. On an individual time series, the effects of the adjustments can be enormous. These adjustments are the best we could do given the paucity of historical station history metadata on a global scale. But using an approach based on a reference series created from surrounding stations means that the adjusted station’s data is more indicative of regional climate change and less representative of local microclimatic change than an individual station not needing adjustments. Therefore, the best use for homogeneity-adjusted data is regional analyses of long-term climate trends (Easterling et al. 1996b). Though the homogeneity-adjusted data are more reliable for long-term trend analysis, the original data are also available in GHCN and may be preferred for most other uses given the higher density of the network.

    Emphasis mine.

    This would seem to be capable of containing the ultimate answer to many of your questions.

    It is documented that the described homogeneity adjustment methods may result in ‘enormous’ adjustments at the single station level. It is asserted that these adjustments have minor effects on global average trends, and a reference is given in support of that assertion. It is documented that these adjustments produce datasets that are potentially less valid at the single station level than the unadjusted data, which is why they look goofy to you.

    This provides an explanation for what you are seeing that does not involve outright criminal fidgeting with the data at the individual station level to match ‘preconceived notions’.

    The homogeneity adjusted series are held to be not only valid, but more representative than unadjusted series, for the purpose of regional and larger scale long term trend analysis.

    If you have a legitimate beef with what you are seeing at Darwin, it would appear that it likely is not with adjustments that have been applied surreptitiously, but rather with the assertion that the adjusted data are superior for large scale, long term trend analysis.

    If you are going to address that issue, you are going to need to abandon clock analogies and the like. The maths involved do not always operate intuitively, and cannot be refuted by such analogies. You are going to need to refute the theoretical basis, much as Steve M did with Mannian PCA. I’ll wager that is going to take some reading on your part. It would appear that the papers by Easterling and Peterson would be a good place to start.

    It remains a possibility that criminal fidgeting has occurred with the Darwin station data, but an alternate explanation for large adjustments lies before you now, one previously outside your knowledge.

    This is why you should ask.

    JJ

  356. Ah yes… the Beard Index. I know it very well.

    I’m 61 and many of my friends and acquaintances have beards. Mine comes and goes like the winter snows. I’ve long noticed that bearded men (sadly, I’ve not yet interviewed a bearded woman) fall into two clear categories, as follows:

    (1) The hang-loose types who hate shaving (because it basically sucks). These guys need obligingly mellow lovers who don’t mind smooching their way through that mess of hair, come what may. These guys basically don’t give a damn. They don’t mind growing old either gracefully or disgracefully or grumpily, if that is to their taste. To these guys getting old usually means more laughs, deeper friendships, better wines, worse beards and increased refinement of their BS detector.

    (2) Next we have the control freaks, those often anally retentive types who carefully construct that well-manicured beard, carefully sculptured around the facial contours and features to highlight their ‘good points’. These guys are usually stuck with whiny lovers who don’t like facial hair and demand a well-manicured flightpath to a very occasional peck on the lips or nose or whatever. These guys are generally secretly obsessed with wherever it is they perceive themselves to be (well) positioned in that all-important human pecking order. After all, it’s all about presentation, n’est-ce pas? Naturally, these guys are not ageing with anything other than resentment; indeed, ageing too is simply another… Catastrophe.

    Quite frankly, Type 1 are my kinda guys.

    Type 2 have a beard essentially for one reason. It’s called Hide the Decline.

  357. I am intrigued that you include YAMBA in your readings, in the second diagram, and call it a Northern station. It would be surprising if it fits the bill as a neighbouring station. Is that what the CRU people did, or just what you did?

    YAMBA (lat -29.43, long 153.35) is about 3,000 km from Darwin. It sits on a different ocean (the Pacific), next to my home town, and has a very equable climate. It is the YAMBA you get when dialing up on the AIS site.

    3,000 km, to give some context, is a little more than the distance between London, England and Ankara, Turkey, and more than the distance between Detroit, Michigan and Kingston, Jamaica.

  358. One of the UEA CRU emails (0839635440.txt) is relevant to this. It is from John Daly to n.nicholls@BoM.Gov.Au and includes data from Station 66062 which shows Sydney Observatory’s annual mean temperature.

    16.8 16.5 16.8 17 17 16.7 17.1 17.4 17.9 17.4 17.2 17.1 16.9 17 17.2 17.2 17.4
    17.6 17.6 17.6 16.7 17.1 16.8 17.4 16.8 17.3 17.8 17.5 17.1 17.2 17.6 17.3 17.1
    16.9 16.9 17.3 17.3 17.3 17.6 17.5 17.4 17.2 17.1 17.3 17.2 17.2 16.9 17.5 17.4
    17.2 17 17.5 17.4 17.5 17.7 18.3 17.8 17.4 17.2 17.4 18.3 17.3 18 18.1 18 17.5
    17.3 18 17 18.2 17.4 17.6 17.5 17.4 17.1 17.4 17.3 17.5 17.7 18 17.8 18 17.4
    17.8 16.8 17.5 17.4 17.6 17.6 17.2 17.4 17.9 17.9 17.6 17.7 17.8 17.7 17.6 17.8
    18.3 18 17.6 17.8 17.8 17.8 18.1 17.9 17.5 17.8 18.3 18 17.7 17.3 17.5 18.5 17.4
    17.8 17.7 17.8 17.7 18 18.5 18.2 17.8 18.1 17.5 17.8 17.8 18 18.6 18.1 18.1
    18.6

    These aren’t labelled, but the start year is 1859. If you download the current 66062 data from BoM, there is complete agreement until 1970, after which there is no agreement. A partial comparison is given here:

    Sydney 66062
    Year 2009 data 1992 data Difference
    1961 17.8 17.8 0.0
    1962 17.8 17.8 0.0
    1963 17.8 17.8 0.0
    1964 18.1 18.1 0.0
    1965 17.9 17.9 0.0
    1966 17.5 17.5 0.0
    1967 17.8 17.8 0.0
    1968 18.3 18.3 0.0
    1969 18.0 18.0 0.0
    1970 17.7 17.7 0.0
    1971 17.9 17.3 0.6
    1972 18.0 17.5 0.5
    1973 18.7 18.5 0.2
    1974 17.8 17.4 0.4
    1975 18.4 17.8 0.6
    1976 18.0 17.7 0.3
    1977 18.4 17.8 0.6
    1978 18.0 17.7 0.3
    1979 18.4 18.0 0.4
    1980 18.8 18.5 0.3

    Once again, a step-type adjustment followed by a somewhat random walk.

    Kind of intriguing that it is so similar.
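
    For anyone who wants to repeat the comparison with other stations, a rough Python sketch is below; the file names and the simple two-column year/value format are placeholders for however you save the two downloads:

    # Compare two vintages of one station's annual means, as in the table above.
    # HYPOTHETICAL inputs: plain text files with "year value" on each line.
    def load_series(path):
        series = {}
        with open(path) as f:
            for line in f:
                year, value = line.split()
                series[int(year)] = float(value)
        return series

    old = load_series("sydney_66062_1992.txt")
    new = load_series("sydney_66062_2009.txt")

    for year in sorted(set(old) & set(new)):
        diff = new[year] - old[year]
        flag = "  <-- changed" if abs(diff) >= 0.05 else ""
        print(f"{year}  {new[year]:5.1f}  {old[year]:5.1f}  {diff:+5.1f}{flag}")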

    Mike Bird

  359. The law of large numbers, which says that as the number of samples increases the average (and median) comes closer and closer to the true value, indicates that with many stations, the raw and homogenized station averages should start to look similar. Alas, they don’t. If there were a persistent false cooling trend the adjustments could still be justified, but I would sooner guess at a false warming trend due to urban heat islands than a false cooling trend.

  360. I live in France. On Monday I was driving on the motorway here, listening to “Radio Traffic” for news about any snarl-ups, when they broadcast a short interview with a person (I can’t remember his name, but I think he was Senegalese) who has been appointed by the environment minister here, Borloo, to work on the climate justice campaign at Copenhagen and after.

    He described how he had been invited by Borloo into his office, and that Borloo had shown him a photo of the earth by night, on which he pointed out that while there were lots of lights in Europe and South Africa, the whole strip of central Africa was dark. According to this person, Borloo asked him to work on the electrification of Africa. This, he said, would be paid for by 250 billion euros raised from a Tobin tax on financial transactions.

    This was news to me. Also, it makes no sense. First, electrifying Africa is not going to cut CO2 emissions. Second, the Tobin tax has not been passed into law; it is still talk. And to assume that you can raise 250 billion euros from it is crazy, since as soon as a tax like this is imposed the number of transactions will fall and you will collect less than you might think. Also, who asked us if we want this? But it seems to have been decided. Crazy!

  361. From a web-site in Australia that struck me as very poignant when seeing Stuffushagen in the news:

    My parents used to recall the times pre-WWII and the rise of fascism and the Nazis. They often referred to the Nuremberg Rallies… the whipping up of fanatical belief and ideology… the shrill propaganda spewed forth by those sated with power, hate and bigotry. They also spoke of the punishment inflicted on those who dared to disagree.

    My parents have now passed on, yet I was reminded of their stories while I watched the reports on the events unfolding in Copenhagen. For the first time in my life, I felt a sickening sense of fear for the future.

    It might be for a different cause and acted out by different players, yet the message is the same…the cult is powerful, ruthless, intolerant…and God help those who disbelieve.

    Hmm, makes one think hey?

  362. Willis:

    A very fine analysis.

    I have one question.
    Do you have an explanation for the homogeneity adjustment made for a single year in 1900 (as indicated in your Figure 7)?

    It seems that the purpose was to force agreement at 1900 between the raw and adjusted data sets.

    Other adjustments are much larger than the adjustment I am querying, but my query is important.

    If the 1900 adjustment was only made to force agreement at the start of the century then this adjustment is de facto evidence that adjustments are made for purely arbitrary reasons.

    Richard

  363. steven mosher (00:43:07) :

    here’s something odd.

    what’s weird about this?

    http://www.metoffice.gov.uk/climatechange/science/monitoring/locations.GIF

    What are you getting at? The uneven coverage of the US? (Lots of red dots in the western US, not so many in the midwest, where it is cooler.) But the red dots are just a subset of the total.

    Or is it the dots sprinkled throughout the oceans, in places where there might not seem to be islands, normally?

    What do you see?

  364. JJ (00:09:31) :

    I quoted from the same section earlier, but Willis apparently missed it (easy enough, in a thread this huge). I’d be curious about his take on it. I also asked if the bit about “Yet the effects of the homogeneity adjustments on global average temperature trends are minor” is quantified anywhere.

  365. It seems to me the best place to start is with the raw daily data: find out how many “missing” months have only a small handful of days missing, and estimate the monthly average for those months, either by ignoring the missing days or interpolating them.

    John, thanks for your interesting comment. I agree that the daily data needs to be looked at … so many clowns, so few circuses. Also, we need an answer about dealing with the missing data. Interpolate? If so, how? Another issue for surfacetemps.org to publicly wrestle with.

    Reply:

    Just a suggestion. I would think using the mean of the data, even for a month with days missing, would be the best way to go. The effect of using incomplete data sets would be added to the error analysis and not to the data itself. The data would have footnotes that detail the information that is missing. The statisticians should be able to address this. Given the number of individual stations used to estimate the global temperature, the deviations from the true means of months calculated from incomplete data should nullify each other. The amount of error introduced would be far less than the error introduced by “artificial interpolation”. Interpolation could only be used by determining which stations best parallel each other, and the offset. However, I would think parallel and offset would be better used in determining the probable error introduced by using that particular incomplete data set than in “adjusting” the information with a “best guess fudge”.

    I would also agree with Mike (12:14:18) :
    Once the data is “cleaned up” and monthly averages determined, I see the advantage of using “acceleration as the measuring tool instead of measured value”, because there would be no need to try to splice data sets together from different locations to make one long data set.
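
    Returning to the missing-days suggestion above, here is a toy illustration of the idea: take the mean of whatever days a month actually has, and carry the incompleteness into the error term rather than into the data. The function names and the 90% completeness threshold are only illustrative, not anyone’s standard.

    # Toy illustration: average the days a month actually has, and report
    # incompleteness as a wider standard error instead of infilled data.
    # The min_fraction threshold is illustrative only.
    import math

    def monthly_mean(daily_temps, days_in_month, min_fraction=0.9):
        n = len(daily_temps)
        mean = sum(daily_temps) / n
        var = sum((t - mean) ** 2 for t in daily_temps) / (n - 1)
        stderr = math.sqrt(var / n)  # grows as days go missing
        complete = n >= min_fraction * days_in_month
        return mean, stderr, complete

    # A 31-day month with 4 days lost still yields a mean, but it is
    # flagged incomplete and carries a larger error bar:
    temps = [30.0 + 0.1 * (d % 5) for d in range(27)]
    print(monthly_mean(temps, 31))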

  366. I took a look at the data for Ross-on-Wye from the GHCN database. I took this one because Ross is an old town with long records, untroubled by very heavy traffic, and therefore any Stevenson screen there is unlikely to have been tainted by “adjustments”. This was the first station I looked at – NO GLOBAL WARMING!

    I would give you a link but the GHCN database is down. What a curious coincidence….

    Anyway, I’m looking in detail at the statistics for Stornoway now. A small island in the far north. I had hoped that the data would be untainted, but it seems the Stevenson screen is next to an airport – bet that wasn’t there in the ’30s! Oh, and it now has electronic thermometers for remote reading – bet they didn’t have those in the ’30s either!

    Anyway, the annual means show a 0.35 Celsius increase in the last 10 years compared to the first 10 years. The problem is that the monthly averages show a variation ranging from -0.03 to +1.0 Celsius over the same period. That tells you that you can’t rely on the annual means, because they don’t necessarily reflect the means of the underlying distributions. Furthermore, even in the monthly averages you can see a variation of 6 Celsius across the Januaries from 1931 to 2008. This is due to what is commonly referred to as weather. So the climate change signal is 0.35 Celsius inside a weather signal of 6 Celsius. Is that statistically significant? I don’t think so.

  367. I’m a little confused as to how temperature trends are computed, with or without raw data. It was mentioned that CRU started their use of Darwin in 1963. Exactly what do they use for Darwin temps prior to 1963? Is this where they use an average of other stations? If not, then throwing a tropical station into the middle of a sequence would clearly bias the results. Is this what E.M. Smith often refers to as “the march of the thermometers”?

  368. JJ (00:09:31) :
    If one uses temp station A to adjust temp station B, then station A gets more weight. The more it is used to adjust other stations, the more weight it gets. This procedure does nothing about the “paucity” of stations; it just spreads the influence of a given station around. These guys have not proved their method makes the data more suitable for a generalized climate change quantification. Just saying it does not make it so.

  369. Willis:

    I add another question.

    In your analysis the adjustments seem to end at or near 1979. This was when the satellite data series started.

    There is no calibration against which to compare the station records and the HadCRUT, GISS and GHCN data sets. Homogenisation adjustments of station records alter the HadCRUT, GISS and GHCN data sets. The station records and those data sets have each been subjected to alterations.

    The series of alterations to the HadCRUT, GISS and GHCN data sets seem to have had the effect (intention?) of bringing them into closer agreement with each other. But the satellite data (i.e. RSS and UAH) provide an alternative (and independent) basis for comparison.

    The RSS and UAH data sets each show little positive trend since the start of their compilation in 1979.

    So, my question is: is the Darwin data typical of other station data from GHCN, in that it has large positive trend adjustments applied by homogenisation prior to ~1979 but not after 1979, when the ‘satellite era’ began?

    If so, then there is clear reason to distrust all the homogenised station records and the HadCRUT, GISS, GHCN data sets for time prior to the ‘satellite era’.

    Richard

  370. Hi –

    First of all, kudos to Mr. Willis Eschenbach: a right scrum job there.

    Of all the critiques – and I skimmed the comments – it seems that no one has noticed the fundamental error that the adjusters made.

    If I am constructing a proxy, I need to find a baseline, one “best fit” or best practice estimation point or series of data points where I can then leverage incomplete knowledge to form a greater whole.

    As someone who handles a huge amount of industrial data, I can say that the closer you are to the current date, the more confidence you generally have in the data: reporting is better, samples are more complete, and turnaround is faster (resulting in faster revisions). If you have a question about the validity of a recent data point, you can find an answer (simply ask, and someone may remember), but asking about a monthly data point from 23 years ago results in shrugs and “dunno”.

    Now, that said, I do understand that adjustments are placed on data, especially if the stations involved have changing environments. Given the difficulties in maintaining a constant operating environment, I can understand the rationale behind making corrections. But why did they do this from a baseline in the 1940s?

    Data must be understandable and pristine: hence if I have a daily temperature reading (and the poster who points out that this is an average between daily highs and lows is correct: without those, these numbers are compromised unless you are only looking at the vector of the average, rather than its level… but I digress), I want it to remain unchanged, since I can then simply add new data points without any adjustments (until, of course, something happens that requires an adjustment: in that case, I rebase the series so that my current readings always – always! – enter the database unadjusted).

    To repeat: the most current data is inviolable and is not to be adjusted!

    This seems to be exactly what Mr. Eschenbach did in his Figure 6.

    The egregious error of the original data processing was to apply the corrections from left to right: they should be applied right to left, i.e. from the current data point backwards.

    By applying them from the first data point forwards, they are making the assumption that the oldest data is the most accurate and hence forms the baseline for all subsequent data corrections: from what I’ve understood of the data collection methods, this is, shall we say, a somewhat … heroic assumption.

    This is, really, basic statistical analysis. I’d love to be snarky about this and ask if climatologists have ever had statistics, but I know too little about what they learn during their training to do that with any degree of accuracy.

    Pun intended.
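
    To make the point concrete, here is a minimal sketch of the right-to-left approach: hold the most recent segment fixed and shift everything before each breakpoint so the series joins up. The example breakpoint and offset are invented purely for illustration.

    # Minimal sketch of right-to-left rebasing: the newest segment is never
    # touched, so current readings always enter the record unadjusted.
    # The 1940 breakpoint and -1.5 offset below are INVENTED for illustration.
    def rebase_backwards(years, temps, breaks):
        """breaks: {year_of_discontinuity: offset to ADD to all earlier data}."""
        adjusted = list(temps)
        for brk_year, offset in sorted(breaks.items(), reverse=True):
            for i, y in enumerate(years):
                if y < brk_year:
                    adjusted[i] += offset
        return adjusted

    years = list(range(1935, 1946))
    temps = [29.5, 29.4, 29.6, 29.5, 29.4, 27.9, 28.0, 28.1, 27.9, 28.0, 28.1]
    # Suppose a screen change in 1940 read ~1.5 C too warm before the move:
    print(rebase_backwards(years, temps, {1940: -1.5}))

    The newest readings never change, so incoming data can always be appended as-is.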

  371. It appears you quote the explanation for the homogenisation results in your own entry. Have a look at http://www.bom.gov.au/climate/change/datasets/datasets.shtml which details how and why the data was homogenised (all the research is public). Their opening paragraph:

    “A change in the type of thermometer shelter used at many Australian observation sites in the early 20th century resulted in a sudden drop in recorded temperatures which is entirely spurious. It is for this reason that these early data are currently not used for monitoring climate change. Other common changes at Australian sites over time include location moves, construction of buildings or growth of vegetation around the observation site and, more recently, the introduction of Automatic Weather Stations.”

    No scandal at all, just a change in thermometer housing in Australia. The resulting data set for Darwin is at: http://www.bom.gov.au/cgi-bin/climate/hqsites/site_data.cgi?variable=meanT&area=nt&station=014015&dtype=anom&period=annual&ave_yr=3

  372. Basil (04:09:16) :

    “I quoted from the same section earlier, but Willis apparently missed it (easy enough, in a thread this huge).”

    So you did! I do not doubt that Willis missed your post. I had missed it, too. I now feel a bit like Scott, showing up at the South Pole only to find Amundsen’s flag flapping in the breeze :)

    “I also asked if the bit about “Yet the effects of the homogeneity adjustments on global average temperature trends are minor” is quantified anywhere.”

    Yes, and the link you provided does appear to quantify the ‘minor’ effect of the homogenization routine. Huh, 0.6C. Fully half of the supposed ‘global warming’ effect over the period. Yeah, that’s minor all right. And that shape. It seems so familiar …

    Why would a homogenization routine introduce or amplify that shape?

  373. ***************
    Dominic White (06:45:19) :
    “A change in the type of thermometer shelter used at many Australian observation sites in the early 20th century resulted in a sudden drop in recorded temperatures which is entirely spurious. It is for this reason that these early data are currently not used for monitoring climate change. Other common changes at Australian sites over time include location moves, construction of buildings or growth of vegetation around the observation site and, more recently, the introduction of Automatic Weather Stations.”

    No scandal at all, just a change in thermometer housing in Australia. The resulting data set for Darwin is at: http://www.bom.gov.au/cgi-bin/climate/hqsites/site_data.cgi?variable=meanT&area=nt&station=014015&dtype=anom&period=annual&ave_yr=3
    *******************
    So why do they assume the drop is wrong? Maybe all the previous readings were too high and needed to be adjusted downwards. It seems they always have an explanation for what they did, but never proof that it yields an accurate temperature history. It looks like a bunch of hand-waving to me.

  374. Hmm, actually I can answer that myself: I think the BOM refers to the introduction of Stevenson screens circa 1910, so not the 1940s era.

  375. darwin temperature record: http://www.john-daly.com/darwin.htm

    “Dear John,

    Further to my emails of earlier today, I have now heard back from Darwin Bureau of Meteorology. The facts are as follows.

    As previously advised, the main temperature station moved to the radar station at the newly built Darwin airport in January 1941. The temperature station had previously been at the Darwin Post Office in the middle of the CBD, on the cliff above the port. Thus, there is a likely factor of removal of a slight urban heat island effect from 1941 onwards. However, the main factor appears to be a change in screening. The new station located at Darwin airport from January 1941 used a standard Stevenson screen. However, the previous station at Darwin PO did not have a Stevenson screen. Instead, the instrument was mounted on a horizontal enclosure without a back or sides. The postmaster had to move it during the day so that the direct tropical sun didn’t strike it! Obviously, if he forgot or was too busy, the temperature readings were a hell of a lot hotter than it really was! I am sure that this factor accounts for almost the whole of the observed sudden cooling in 1939-41.

    The record after 1941 is accurate, but the record before then has a significant warming bias. The Bureau’s senior meterologist Ian Butterworth has written an internally published paper on all the factors affecting the historical Darwin temperature record, and they are going to fax it to me. I could send a copy to you if you are interested.

    Regards Ken Parish”

  376. The “smoking gun” at Ground Zero

    Oh, great. We all needed another Truther pet conspiracy theory.

  377. Willis Eschenbach

    “…..Because I assure you, with these good folks, I have a host of experience that “Just ask” doesn’t work.

    JJ, I appreciate both the tone and the content of your comments. In a regular scientific situation, this would never come up, and I would just ask; you would be totally correct. But these days, much of climate science is not science at all. It is people fiercely fighting to withhold information from the public. When they are fighting to keep information secret, “just ask” is just inadequate.”

    Willis,
    I would like to add: when “just ask” is repeatedly blown off in a scientific discussion, a discussion where the data and methods are withheld, then the discussion is no longer in the realm of science at all. It has devolved into a playground pi$$ing contest, and the scientists withholding data should no longer be considered scientists but political hacks, until they return to the realm of science: OPEN DEBATE, unbiased peer review, and gentlemanly conduct towards others.

    I have nothing but contempt for these people who sully the name and methods of science.

  378. Jim (07:35:41) says:

    ***************
    Dominic White (06:45:19) :
    “A change in the type of thermometer shelter used at many Australian observation sites in the early 20th century resulted in a sudden drop in recorded temperatures which is entirely spurious. It is for this reason that these early data are currently not used for monitoring climate change. Other common changes at Australian sites over time include location moves, construction of buildings or growth of vegetation around the observation site and, more recently, the introduction of Automatic Weather Stations.”

    No scandal at all, just a change in thermometer housing in Australia. The resulting data set for Darwin is at: http://www.bom.gov.au/cgi-bin/climate/hqsites/site_data.cgi?variable=meanT&area=nt&station=014015&dtype=anom&period=annual&ave_yr=3
    *******************
    So why do they assume the drop is wrong? Maybe all the previous readings were too high and needed to be adjusted downwards. It seems they always have an explanation for what they did, but never proof that it yields an accurate temperature history. It looks like a bunch of hand-waving to me.

    Indeed. If you look at the Darwin data there is a gentle decline in temperatures from 1882 to about 1939, and then there is a sharp drop.

    Even if we assume that the sharp drop is due to a change in measurement method, the gentle decline from 1882 to 1939 still needs to be explained. We should not throw the baby out with the bath water.

  379. MattB (07:51:04) quotes Ken Parish:

    As previously advised, the main temperature station moved to the radar station at the newly built Darwin airport in January 1941. The temperature station had previously been at the Darwin Post Office in the middle of the CBD, on the cliff above the port. Thus, there is a likely factor of removal of a slight urban heat island effect from 1941 onwards

    However, there is a problem here. The temperature data from 1882 to 1939 shows a gentle decline over that period followed by a sharp decline.

    I do not think that Darwin was a bustling, dynamic city in 1882 that then went into decline, reducing the UHI.

  380. Brendan H (23:12:57) :

    ….In these situations I find it useful to repair to my rule-of-thumb measure of trustworthiness. Using the “Beard Index” I find the AGW scientists are highly trustworthy, and that there is empirical evidence to show this…..

    I have found my index a great comfort during difficult times such as we are currently undergoing. I think you would also find great benefit from this easily applied and highly reliable measure.

    Gee, Brendan H, now you have me confused. I thought that was the pompous donkey index! By a scientific comparative analysis of a picture of Steve against a picture of Einstein, Steve is obviously higher on the intelligence scale than Schmidt and Mann, as, for that matter, is “Harry” Harris.

  381. Richard, why is one station showing a very, very marginal decline in the late 1800s of any concern in the slightest?

  382. Richard Sharpe (23:35:44) :

    “….Yeah, Willis. Billy is the man who defines what environmentalism is, and if you don’t meet his requirements, then you are a Cherry Pickin’ Denier!”

    Interesting how Billy and his friends’ arguments are not from the realm of science but from the realm of law, and that most politicians, at least in the USA, are lawyers.

    “When the law is against you, argue the facts. When the facts are against you, argue the law. When both are against you, attack the plaintiff.” – R. Rinkle

    “The legal system’s often a mystery, and we, its priests, preside over rituals baffling to everyday citizens.” – Henry Miller

  383. michaelfury (07:04:50) :

    Hey Michael, your 9/11 conspiracy link would be best posted at RealClimate, MediaMatters, or HuffingtonPost, etc., where AGW warmers are also 9/11 conspiracy loons.
    They are as certain of exploded buildings as they are of CO2 emissions flooding and boiling the planet.

  384. Ryan Stephenson (04:46:20) :
    A UK spaghetti graph for you.
    Figures: unadjusted Met Office.

    UK averaged

    Some stuff from the islands

  385. magicjava (16:31:09) :

    Just to be clear, I agree that the CRU code is bad code. I agree that studying it is worthwhile.

    But unless it can be shown that the code was used to produce data or reports that were made available to scientists outside CRU or to the public, it’s not a smoking gun.

    Point taken. Though I’d say that whatever the code was used for, internally or externally, it’s evidence of a severe lack of quality control and programming ability!

  386. weschenbach (16:05:36) :

    tallbloke (14:57:12) : Thanks, tallbloke, quote as you wish. Regarding his questions/statements:

    Thanks Willis. His first reaction was that 2.5C adjustments are not unheard of, and since Darwin varies 10C a day, a change in time of obs could account for it.

    I’ve asked him to provide examples of 2.5C adjustments. If he comes up with any, we’ll get some more smoking guns to add to the collection. ;-)

  387. MattB (08:32:59) asks:

    Richard why is one station showing a very very marginal decline in the late 1800s of any concern in the slightest?

    You are seemingly very incurious.

    UHI was advanced as one explanation for the sharp decline in the 1939/1941 timeframe. Yet, that explanation is not compatible with the record for the prior 60 years. I can think of no measurement or mechanical reason for the prior decline.

  388. “”” Ryan Stephenson (02:46:32) :

    Can I please correct you. You keep using the phrase “raw data”. Averaged figures are not “raw data”. Stevenson screens record maximum and minimum DAILY temperatures. This is the real RAW data.

    When you do an analysis of temperature data over one year then you should always show it as a distribution. It will have a mean and a standard deviation. Take the UK. It may have a mean annual temperature of 15Celsius with a standard deviation of 20Celsius. “””

    So, Ryan: if a Stevenson screen records the maximum and minimum daily temperature (the RAW data), just what is the purpose of showing that as a distribution?

    The RAW data is a record of DIFFERENT observations. Suppose I go to the London Zoo, and record what creature I find in each cage or enclosure or display area. Should I then show this RAW data as a distribution and calculate its mean and standard deviation? Would I perhaps find that the mean animal in the London Zoo is a Wolverine, and the standard deviation is a Lady Amherst pheasant?

    Why all this rush to try and convert real RAW data into something totally false and manufactured?

    The temperature is different at every place on earth, and changes with time in a way that is different at every place; so why try to replace all of that real RAW information with two numbers that at best are total misinformation?

    It seems to me that statisticians, having run out of useful things to calculate, gravitate towards climatology and start applying their methodologies to disguising what the instruments tell us the weather and climate is, or has been.

    GISStemp and HadCRUT are simply that: mathematical creations of arbitrary AlGorythms applied to recorded historic weather data, the results of which have no real scientific significance as far as planet earth is concerned. They certainly don’t tell us whether living conditions on earth are getting better or worse, or even how good they might have been at some past epoch.

  389. Thank you, Sir!
    Let’s hope the idiots don’t put the world into reverse gear. I have posted my Representative, the prime reader (for 5-year-olds), the WUWT link.

  390. When the Stevenson screens came in, it would be interesting to see what they did to “adjust” for that change…. That is why there is a step change for Darwin around 1940-41 in record zero, I’d say.

  391. bill (08:44:07) :

    Ryan Stephenson (04:46:20) :
    A uk spaghetti graph for you
    Figures unadjusted met office


    Did I notice “De Bilt” in there? That’s a village in the Netherlands, and it’s clearly not in the UK. The Dutch met office, KNMI, is sited there.

  392. It happens that we (the US) have a meteorological site at Darwin proper. Coordinates are 12.4246°S, 130.891597°E. I have no idea how that site relates to the long-running thermometer at Darwin in terms of position, but it’s got enough instrumentation to corroborate or invalidate the adjustments to Darwin’s thermal record, I’d guess.

    Website is http://www.arm.gov/sites/twp/c3

    I just don’t know how to access the data. It’s only about seven years old, though, so it’d only be able to serve to validate recent data.

    I’d guess the AGW community has already done some work there, if I were in the habit of guessing.

  393. I’d also guess that it’s within a few kilometers of the Darwin weather station, which should be CEFGW.

    “When the Stevenson screens came in, it would be interesting to see what they did to “adjust” for that change….”

    From my reading of the homogenization methods, they did not do anything to adjust for any specific issue (such as Stevenson screens, TOB, etc.) for stations outside the US.

    For USHCN stations, they did apply a metadata-based homogenization. If the station metadata documented a site change or an instrument change etc., then they applied a specific correction to the data in an attempt to account for it.

    For stations outside the US, they did not do that. Instead, they homogenized each station to a reference series, using a first difference series. This does not apply an explicit, defined correction to a discontinuity of known source… A toy illustration follows.
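
    For clarity, here is a toy version of that first-difference reference-series idea (after Peterson and Easterling); this is NOT NOAA’s actual code, just a sketch of the principle that a step in one station stands out against a reference built from its neighbours:

    # Toy first-difference reference-series check -- NOT NOAA's code.
    # A persistent offset between the candidate's year-to-year changes and
    # the neighbours' average changes marks a candidate discontinuity.
    def first_diffs(series):
        return [b - a for a, b in zip(series, series[1:])]

    def flag_breaks(candidate, neighbours, threshold=1.0):
        cand_d = first_diffs(candidate)
        ref_d = [sum(col) / len(col) for col in zip(*map(first_diffs, neighbours))]
        return [i + 1 for i, (c, r) in enumerate(zip(cand_d, ref_d))
                if abs(c - r) > threshold]  # indexes into the candidate series

    station = [29.0, 29.1, 29.0, 27.5, 27.6, 27.5]   # step down in year 3
    nbrs = [[28.0, 28.1, 28.0, 28.1, 28.2, 28.1],
            [30.0, 30.1, 30.0, 30.0, 30.1, 30.0]]
    print(flag_breaks(station, nbrs))   # -> [3]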

  395. A couple of things occurred to me:
    1) The reasonable adjustments to stations should have a natural distribution centered on zero. Adjustments for station location or instrumentation changes should be equally positive and negative. The only decent reason for a positive bias among many stations would be moving stations from poor sites (beside an AC unit) to good sites (in a park), but even then, an adjustment would result in “locking in” the bias of the poor site for all future measurements from the new site.
    2) If #1 is true of the net result of all adjustments, then global temperatures do not need adjustment, only gridding. Adjustment is only required for continuity when examining individual stations.
    3) The exception in this process would be any form of econometric adjustment for the urban heat island effect. However, this would almost always be a negative adjustment.

    Bottom line: if the sum of all global adjustments is positive, it’s a biased adjustment. A sketch of that check follows.
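
    A sketch of that sanity check, with a made-up input format ({station: (raw_series, adjusted_series)}); the 0.05 tolerance is arbitrary:

    # Across many stations, the net adjustment (adjusted minus raw) should
    # centre on zero. HYPOTHETICAL input: {station_id: (raw, adjusted)}.
    def net_adjustments(stations):
        nets = []
        for raw, adj in stations.values():
            pairs = [(r, a) for r, a in zip(raw, adj)
                     if r is not None and a is not None]
            nets.append(sum(a - r for r, a in pairs) / len(pairs))
        return nets

    def looks_biased(nets, tol=0.05):
        mean_net = sum(nets) / len(nets)
        return mean_net, abs(mean_net) > tol  # nonzero mean => biased adjustments

    nets = net_adjustments({
        "A": ([20.0, 20.1], [20.3, 20.4]),   # warmed by +0.3
        "B": ([15.0, 15.2], [15.0, 15.2]),   # untouched
    })
    print(looks_biased(nets))                # -> (0.15, True)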

  396. My first thought on this is that what is needed is a project similar to the surfacestations.org project: a grass-roots volunteer project. Volunteers do the same type of analysis as Willis did for Darwin for as many of the GHCN stations as possible (the limiting factor certainly will be raw data) and have them compiled for the world to see on a website. It could be eye-opening. I am guessing, with the number of scientifically trained readers on WUWT, that this is potentially an achievable community science project.

    Any takers? I’ll pitch in with some station analysis.

  397. Can there be anything more upsetting and unsettling to a scientific evidence-based argument than to show that the evidence as presented is not only untrustworthy but perhaps intentionally so? And can we agree that if the temperature record has been massaged, twisted, encouraged, and otherwise manipulated to minimize inconvenient facts, that what we have is scientific fraud even if done in sincerity, with the best of intentions? Finally, why should the peoples of developing countries have to suffer more and longer and be denied the wonders and comforts of modern life just because of the faulty work of well-intentioned zealots who lost their ability to claim objectivity many years back?

  398. Jeff L (11:53:41) :

    My first thought on this is that what is needed is a project similar to the surfacestations.org project: a grass-roots volunteer project. Volunteers do the same type of analysis as Willis did for Darwin for as many of the GHCN stations as possible (the limiting factor certainly will be raw data) and have them compiled for the world to see on a website. It could be eye-opening. I am guessing, with the number of scientifically trained readers on WUWT, that this is potentially an achievable community science project.

    Any takers? I’ll pitch in with some station analysis.

    Jeff, it’s already happening. I’ve formed surfacetemps.org, email me if you are interested, willis [at] surfacetemps.org

  399. It remains a possibility that criminal fidgeting has occurred with the Darwin station data, but an alternate explanation for large adjustments lies before you now, one previously outside your knowledge.

    This is why you should ask.

    JJ

    First, yes, I read the text you quoted. I know that huge adjustments are sometimes made to individual stations. I’ve looked at them. I’ve looked at a lot of stations. The adjustments to Darwin Zero are in a class all their own.

    And yes, that is possible; it may all just be innocent fun and perfectly scientifically valid. And if someone steps up to the plate and lists why those adjustments were made, and the scientific reasons for each one, I’ll look like a huge fool.

    Still waiting …

  400. Willis:

    I went to surfacetemps.org – it seems to be selling park equipment. WUWT?

    I have been analyzing South Texas temperature data, attempting to determine San Antonio’s UHI using 5 surrounding rural sites (using raw data). Since the SA site had a major move in June 1942 (from a rapidly growing downtown to a then-rural airport), I looked to see how NOAA had handled the move:

    In the 5 years before the move, NOAA subtracted 2°F (1.97 to 2.07) annually from the raw data. In the 5 years after the move, they subtracted 1°F (1.01 to 1.11) from the raw data.

    I’d be willing to repeat your exercise using the whole NOAA dataset for South Texas once surfacetemps is up and running.

    BTW, how do you decompress a UNIX .Z file using Windows? I want to check that what NOAA posts as raw data is truly raw.

    Ryan Stephenson (04:46:20) : …Anyway, the annual means show a 0.35 Celsius increase in the last 10 years compared to the first 10 years. The problem is that the monthly averages show a variation ranging from -0.03 to +1.0 Celsius over the same period.

    If you want to see a bafflingly wide variation of trends, check out averaged monthly Tmax & Tmin instead of Tmean.

  401. Willis,
    Really spectacular article. I nominate you for the Nobel Prize!
    This has set me thinking. As a pure novice in all this, I know that in Scotland virtually all the recording stations are at or near airports.
    Our local recordings are from Leuchars military airport, just across the water from Carnoustie here. That is a very busy military airport, and for security reasons, presumably, we cannot gain access to verify the position of the measuring point.

    I wonder if anyone has investigated the observatory in Armagh (pronounced arm – ahh) in Ireland.
    This is sited in a small city which is very rural and has, to my knowledge, been recording for over 150 years. It should be a good source of unadulterated data.
    Perhaps you or Steve could have a look at this.

  402. Makes me wonder. I can’t honestly judge this in any way; I’m not qualified. I can’t buy AGW, because I would have to buy it on faith. I can’t deny it either, because I would have to deny it on faith. Bottom line, I have no opinion.

    If they’re going to devastate economies to “correct” the climate, they had better “fix” things in such a way that it will help us whether they’re right or not. For example, nuclear power, solar thermal, and so on, done right, could help us both to fight GW and to secure our domestic energy future (providing both jobs and security). It would help us either way. Those are the kinds of answers we need. What I hope doesn’t happen is answers that are too specific to mitigating climate change.

  403. Tom in Texas (13:26:56) :

    BTW, how do you decompress a UNIX .Z file using Windows? I want to check that what NOAA posts as raw data is truly raw.

    WinRar does it…

  404. Seems to me it would be best to use some process that reconciles temperature readings from the various stations by associating each with all other “nearby” readings (i.e., those not varying from some fixed time by more than a couple of minutes). The average of such a closely associated group would provide a global temperature reading at a specific time of day. …

    There would seem to be numerous ways to aggregate such groupings to rule out real outliers without having to make arbitrary changes to individual readings.

  405. THANK YOU!!! This breakdown puts some meat on the bones of the manipulation, and gives us direct questions which can be either answered directly or obfuscated. I cannot fully express my thanks for your and others’ efforts to dig down through the available data on behalf of the rest of us just getting up to speed on the raw datasets (such as remain).

    More please, it is appreciated by all seeking truth.

  406. CodeTech (13:55:17) :

    Tom in Texas (13:26:56) :

    BTW, how do you decompress a UNIX .Z file using Windows? I want to check that what NOAA posts as raw data is truly raw.

    WinRar does it…

    Thanks. Will try it.

  407. What I want to know is: does GHCN make those kinds of adjustments in a … ahh … “artisanal” (to borrow a wonderful application of the word from Steve McI) manner, individually, one human studying the individual records for a station and making the changes? Or does this happen as some impersonal computer wholesales an algorithm across its dataset, with a human rarely looking at an outlier and going “oh, gee, that can’t be an acceptable result”? And even if they do the latter, do they have any ability to change that one result and make the change “stick” for the future?

    The two models of how this happens (individual vs. mass computer) do make a difference, it seems to me, both as to intent in the face of an obviously wrong result, and in finding out “who did this?” and how to fix it going forward.

    If this is the work of a mass of interns, one station at a time over years … that’d really be one heckuva mess to untangle now. If it is the work of a computer program, it can probably be nudged to kick out outliers on some criterion or other, to be adjusted individually if necessary via another file identifying them by station ID: something along the lines of “don’t do your usual algorithm here, because it sucks – do this instead”.

  408. Oh, I just have Cygwin installed. Presto! You have a lot of *nix capabilities at your fingertips.

  409. I guess what I’m wondering is: did a human do that on purpose, or is this yet another application of the 80/20 rule of computer programming (i.e. 80 percent of the work to handle 20 percent of the cases)?

  410. http://scienceblogs.com/deltoid/2009/12/willis_eschenbach_caught_lying.php?utm_source=sbhomepage&utm_medium=link&utm_content=channellink

    Willis,
    You have got to love this post. You published your findings yesterday, and already they have “proven” them false. Now they make some claim that you doctored data. You were trying to find how much of an adjustment was necessary to go from the raw data to the “corrected” data; how would you do this without trying modifications to the raw data? Everything in their article is pure crap, and they have quite obviously not even taken the time to read your article. What is wrong with these people?

  411. I’m sitting here watching the BBC following the warmist party line – no mention of opposing views, of course.
    I cannot help but think that all this, coupled with the events in the UK over the last 12 years, is a precursor to Orwell’s 1984: a series of excuses to govern and dominate the proles.

  412. Oh, and the 80/20 rule could apply to manual application of the algorithm as well, by someone who doesn’t have the authority, respect, and/or knowledge to challenge it.

    I just wonder about the *actual* mechanics of how these adjustments get made. Is it Phil Jones and Gavin Schmidt poring over each one and coming to an agreement, with Michael Mann called in to break ties? Is it at least a climate PhD working from a script, who has the ability to go back to the script writer and say “look, your algo needs work – this is a ridiculous result when I follow it for this case”? Or is it interns working from a 20-page script of what to do, with not a chance in the world they’d tell anyone “this is rubbish!”, or that anyone in authority would listen to them if they did? Or is a computer program doing it?

    How does that *actually get done*? I want to know that before I start making decisions on incompetence vs bias.

  413. @OP
    You make basic mistakes that undermine your results. Some very basic background:

    http://data.giss.nasa.gov/gistemp/

    For example, your complaint about 500 km separation is simply facile:
    “The analysis method was documented in Hansen and Lebedeff (1987), showing that the correlation of temperature change was reasonably strong for stations separated by up to 1200 km, especially at middle and high latitudes.”

    Avoiding the mistakes of previous generations is exactly why scientific research is left to people with specializations in these fields. If you insist on doing amateur analysis, I would suggest you start with the most recent publications and follow the citations backward. That way you can understand exactly how and why the corrections are applied, rather than just guessing based on one input and their output.

    @Street
    Normal distribution will only apply if the measurements are samples from a random variable. You cannot assume this, and in this assumption’s absence your analysis is invalid.

  414. Joel D, did you actually read that article? It’s pretty clear on the reasons why using the raw data without adjustments is going to give you bad results.

  415. O.K.: Think we can now reasonably submit that WUWT has gone mainstream; at least in some quarters:

    For those who may not have noticed, James Delingpole at the UK Telegraph now has a major piece up dated 8 Dec, where he quotes extensively from this thread start (including a graph) AND links directly to it. See:

    http://blogs.telegraph.co.uk/news/jamesdelingpole/100019301/climategate-another-smoking-gun/

    Not only that: The above piece was prominently featured in the “Climate Change Debate” headline section on RealClearPolitics. Regardless of desperate attempts by AGW partisans to subvert and suppress it, I think the message is starting to get out to the wider world. . . .

  416. I’m waiting for RealClimate’s response. Perhaps you can prod them.

    At the moment, as a casual observer I notice this: plots 2–5 look the same as the IPCC plot 1. All you have to do is look at the time period they have in common. It seems a bit shady that the IPCC chose not to show the pre-1920 data, which indicate a cooling trend, but presumably their full quantitative analyses take this into account.

  417. As someone who works with analysis of scientific data for a living, the thing that most strikes me about the removal of “inhomogeneities” is that it is a technique that is generally thought necessary only for small data sets. When dealing with a sufficiently large data set, random errors cancel themselves out. For example, for every station moved to a warmer location, one would expect there would be one moved to a cooler location – so why correct at all?

    Surely the surface temperature record of the entire world is a sufficiently large data set to analyze, at least once, without “correction” for random errors.
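
    That claim is easy to sanity-check in a few lines. Here is a toy simulation, under the assumption in dispute here – that station-move offsets are random with zero mean:

    # Give each of N stations a flat record plus one randomly timed,
    # zero-mean step change (a "move"), then average across the network.
    import numpy as np

    rng = np.random.default_rng(0)
    n_stations, n_years = 1000, 100
    series = np.zeros((n_stations, n_years))
    for s in range(n_stations):
        move_year = rng.integers(10, 90)   # when this station "moved"
        step = rng.normal(0.0, 1.0)        # zero-mean offset, in degrees C
        series[s, move_year:] += step      # one-off jump after the move

    network_mean = series.mean(axis=0)
    print(network_mean.min(), network_mean.max())  # stays near zero for large N

    If the offsets really are unbiased, the network average barely moves; the whole argument is over whether they are.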

  418. Do you realize what you have found here? The “homogenized” data matches exactly the “fudge factor” code in the CRU source code. Here is the code:

    ;
    ; Apply a VERY ARTIFICAL correction for decline!!
    ;
    yrloc=[1400,findgen(19)*5.+1904]
    valadj=[0.,0.,0.,0.,0.,-0.1,-0.25,-0.3,0.,-0.1,0.3,0.8,1.2,1.7,2.5,2.6,2.6,$
    2.6,2.6,2.6]*0.75 ; fudge factor
    (…)
    ;
    ; APPLY ARTIFICIAL CORRECTION
    ;
    yearlyadj=interpol(valadj,yrloc,x)
    densall=densall+yearlyadj

    What this does is establish an array that artificially injects increases into the temperatures, increases that will automatically turn the data into a hockey stick. The hockey stick it creates matches exactly the Darwin Station Zero plot showing the raw and the homogenized versions.

    Many people trying to debunk the source code say it is common practice in modeling to include ad hoc code for test purposes, not to be used to publish actual data. This proves a real-life application of the “fudge factor” code.
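
    For readers without IDL, here is a rough NumPy rendering of what the quoted fragment does. The yrloc/valadj arrays are copied from the CRU code; x and densall are stand-ins for whatever series the original script applied this to:

    import numpy as np

    yrloc = np.concatenate(([1400.0], np.arange(19) * 5.0 + 1904))  # 1400, 1904..1994
    valadj = np.array([0., 0., 0., 0., 0., -0.1, -0.25, -0.3, 0., -0.1,
                       0.3, 0.8, 1.2, 1.7, 2.5, 2.6, 2.6, 2.6, 2.6, 2.6]) * 0.75

    x = np.arange(1900.0, 1995.0)            # hypothetical years of the series
    densall = np.zeros_like(x)               # hypothetical series being "corrected"

    yearlyadj = np.interp(x, yrloc, valadj)  # IDL interpol = linear interpolation
    densall = densall + yearlyadj            # ramps late-century values up ~2 C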

  419. Tom in Texas (13:26:56) :

    Willis;

    I went to surfacetemps.org – Seems to be selling park equipment. WUWT?

    I have been analyzing South Texas temperature data, attempting to determine San Antonio UHI using 5 surrounding rural sites (using raw data). Since the SA site had a major move in June 1942 (from a rapidly growing downtown to a (then) rural airport), I looked to see how NOAA had handled the move:

    In the 5 years before the move, NOAA subtracted 2°F (1.97 to 2.07) annually from the raw data. In the 5 years after the move, they subtracted 1°F (1.01 to 1.11) from the raw data.

    I’d be willing to repeat your exercise using the whole NOAA dataset for South Texas once surfacetemps is up and running.

    BTW, how do you decompress a UNIX .Z file using Windows? I want to check that what NOAA posts as raw data is truly raw.

    Ryan Stephenson (04:46:20) : …Anyway, the means for the annual distribution show a 0.35Celsius increase in the last 10 years compared to the first 10 years. Problem is the averaged monthly show a variation ranging from -0.03 to +1.0 Celsius taken over the same period.

    If you want to see a bafflingly wide variation of trends, check out the averaged monthly Tmax & Tmin instead of Tmean.

    surfacetemps.org, that’s just the generic web page they gave me. I’ll fix it to “coming soon” … soon …

    Don’t know how to decompress a “.Z” file on the PC, on my Mac I just double-click it.

    Yes, there are many, many strange things going on with the data. Darwin Zero was my first look.
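
    For what it’s worth, GNU gzip understands the old Unix compress “.Z” format, so a Cygwin install (per comment 408) should do it on Windows – something like this, with a hypothetical filename:

    # Shell out to GNU gzip, which can decompress old Unix compress (.Z) files.
    # Requires gzip on the PATH (e.g. via Cygwin); the filename is hypothetical.
    import subprocess
    subprocess.run(["gzip", "-d", "v2.mean.Z"], check=True)  # produces v2.mean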

  420. Warwick Hughes (13:29:22) :

    I think Willis is not correct to assume CRU have used the GHCN Darwin Zero. I also think it is wrong to assume Jones / CRU have simply used the GHCN station data. See my take over at:

    http://www.warwickhughes.com/blog/?p=357

    Thanks, Warwick. As I commented about CRU in the head post, we don’t know what data they are using. That was the point of the FOI request, which was turned down. I encourage people to visit Warwick’s site, lots of good stuff.

  421. patrick healy (13:30:44) :

    Willis,
    Really spectacular article. I nominate you for the Nobel Prize!
    This has set me thinking. As a pure novice in all this, I know that in Scotland virtually all the recording stations are at or near airports.
    Our local recordings are from Leuchars military airport, just across the water from Carnoustie here. That is a very busy military airport. For security reasons, presumably, we cannot gain access to verify the position of the measuring point.

    I wonder if anyone has investigated the observatory in Armagh (pronounced arm – ahh) in Ireland.
    It is sited in a small city which is very rural and has, to my knowledge, been recording for over 150 years. It should be a good source of unadulterated data.
    Perhaps you or Steve could have a look at this.

    Armagh is a very good site, meticulous records. Shows the gradual, centuries long warming.

    Airports are not the best place for temperature stations, lots of vehicles, jet exhaust …

  422. denis (14:07:57) :

    Seems to me it would be best to use some process that reconciles temperature readings from the various stations by associating each with all the other “nearby” readings (i.e., those not varying from some fixed time by more than a couple of minutes). The average of such a closely associated group would provide a global temperature reading at a specific time of day. …

    There would seem to be numerous ways to aggregate such groupings to rule out real outliers without having to make arbitrary changes to individual readings.

    Most stations take two readings a day at fixed times, so that would be hard. Interesting idea, though.

  423. geo (14:39:00) :

    What I want to know is: does GHCN make those kinds of adjustments in a . . . ahh . . . “artisanal” (to borrow a wonderful application of the word from Steve McI) manner, individually, one human studying the individual records for a station and making the changes . . . or does this happen as some impersonal computer wholesales an algorithm across its dataset, with a human rarely looking at the outliers and going “oh, gee, that can’t be an acceptable result”? And even if they do the latter, do they have any ability to change that one result and make the change “stick” for the future?

    The two models of how this happens (individually vs. mass computer) do make a difference, it seems to me, both as to intent in the face of an obviously wrong result, and in finding out “who did this?” and how to fix it going forward.

    If this is the work of a mass of interns one station at a time over years . . . that’d really be one heckuva mess to untangle now. If it is the work of a computer program, it can probably be nudged somehow to kick out outliers on some criteria or other, to be adjusted individually if necessary, with another file identifying them by station id to do something along the lines of “don’t do your usual algorithm here, because it sucks – do this instead”.

    Unknown. Different groups (CRU, GHCN, GISS) do it differently. All of them use computers to identify potential problems. What they do from there is a mystery …

  424. zosima (15:06:26) :

    @OP
    You make basic mistakes that undermine your results. Some very basic background:

    http://data.giss.nasa.gov/gistemp/

    For example, your complaint about 500 km separation is simply facile:
    “The analysis method was documented in Hansen and Lebedeff (1987), showing that the correlation of temperature change was reasonably strong for stations separated by up to 1200 km, especially at middle and high latitudes.”

    Avoiding the mistakes of previous generations is exactly why scientific research is left to people with specializations in these fields. If you insist on doing amateur analysis, I would suggest you start with the most recent publications and follow the citations backward. That way you can understand exactly how and why the corrections are applied, rather than just guessing based on one input and their output.

    @Street
    Normal distribution will only apply if the measurements are samples from a random variable. You cannot assume this, and in this assumption’s absence your analysis is invalid.

    Been there, done that, I’ve read the citations. Re-read my article. The 500 km figure is from the GHCN’s own description of their own procedure. Hansen’s claim is true (I’ve replicated it), but it means nothing about the GHCN procedures.

  425. Ken (16:52:38) :

    Do you realize what you have found here? The “homogenized” data matches exactly the “fudge factor” code in the CRU source code. Here is the code:

    ;
    ; Apply a VERY ARTIFICAL correction for decline!!
    ;
    yrloc=[1400,findgen(19)*5.+1904]
    valadj=[0.,0.,0.,0.,0.,-0.1,-0.25,-0.3,0.,-0.1,0.3,0.8,1.2,1.7,2.5,2.6,2.6,$
    2.6,2.6,2.6]*0.75 ; fudge factor
    (…)
    ;
    ; APPLY ARTIFICIAL CORRECTION
    ;
    yearlyadj=interpol(valadj,yrloc,x)
    densall=densall+yearlyadj

    No joy. The amounts of the adjustments don’t match. Also, that code is from CRU and the data is from GHCN.

  426. “Warwick Hughes (13:29:22) :

    I think Willis is not correct to assume CRU have used the GHCN Darwin Zero. I also think it is wrong to assume Jones / CRU have simply used the GHCN station data. See my take over at:
    http://www.warwickhughes.com/blog/?p=357

    Well, thanks Warwick, much appreciated.

    Refer Steve Short (17:18:27) :

    “(2) Can we rely on the accuracy (?) of an interpretation (?) that ‘the emails’ (remarkable isn’t it we at least all know exactly what these are ;-) suggest (?) that the CRU database relies (?) on GHCN (whew)?

    Willis :

    “That’s what Phil Jones said, and until they release their data, there’s no way to verify it.”

    Exactly. Yet more evidence this Phil Jones is one ultra-slippery character. Glad I never had him as a ‘peer reviewer’. He doesn’t even have a manicured beard either – shame on him!

  427. Joel D (14:46:36) :

    http://scienceblogs.com/deltoid/2009/12/willis_eschenbach_caught_lying.php?utm_source=sbhomepage&utm_medium=link&utm_content=channellink

    Willis,
    You have got to love this post. You published your findings yesterday and already they have proven them “false”. Now they make some claim that you doctored data. You were trying to find how much of an adjustment was necessary to go from the raw data to the “corrected” data. How would you do this without trying to make modifications to the raw data? Everything in their article is pure crap and they have quite obviously not even taken the time to read your article. What is wrong with these people?

    That’s Deltoid, Tim Lambert’s blog. Lambert doesn’t like me, we duelled over Tuvalu sea levels. I’ve just posted a note there inviting anyone who wishes to post scientific questions here. Bring it on, that’s what science is about.

  428. Yes, verily. But if you’re Tim Lambert, it’s more about demonizing people you disagree with.

  429. M. Johnson – Exactly my reasoning as well. I’d love to hear a plausible reason why it’s necessary to apply positive adjustments to more stations than negative ones. Has anyone ever run a calculation of what the net adjustment is globally, and what the distribution looks like?

    Another thing pointed out in the article:
    “The final technique we used to minimize inhomogeneities in the reference series used the mean of the central three values (of the five neighboring station values) to create the first difference reference series.”

    That seems to be a sensible approach to automated adjustment (see the sketch below). However, I think problems may crop up if the areas covered by the individual stations are not equal. You’re more likely to get multiple stations in metro areas. Let’s say you have a cluster of 5 stations around a city. All of them should exhibit similar heat island effects, right? If the algorithm references the 5 nearest stations, this cluster would only reference itself and confirm its common heat island effect as ‘real’. No real problem there.

    The problem is that stations which are not clustered (likely when they are rural) may use reference stations in these metro clusters for adjustment. This may result in the positive trend due to the heat island effect being ‘transmitted’ into the rural station via the adjustment process.

    @zosima – The random variable is the difference in temperature bias between the old and new locations, and between the old and new instrumentation. You can probably argue that newer instruments might be designed to prevent solar heating and run cooler, but location changes should exhibit a random bias on temperatures, shouldn’t they?
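
    If I’m reading the quoted methods description right, the reference-series step looks something like the sketch below – assuming “mean of the central three values (of the five neighboring station values)” means taking the five neighbors’ first differences each year, dropping the highest and lowest, and averaging the middle three:

    import numpy as np

    def reference_first_differences(neighbors):
        """neighbors: array of shape (5, n_years) of neighbor temperatures."""
        diffs = np.diff(neighbors, axis=1)  # year-to-year first differences
        diffs = np.sort(diffs, axis=0)      # sort the 5 values for each year
        return diffs[1:4].mean(axis=0)      # mean of the central three

    # Toy usage: five neighbors sharing a small trend, one with a spurious jump.
    rng = np.random.default_rng(1)
    neighbors = 0.01 * np.arange(50) + rng.normal(0, 0.2, (5, 50))
    neighbors[2, 30:] += 2.0                      # one neighbor has a step change
    ref = reference_first_differences(neighbors)  # the jump gets trimmed away

    The trimming keeps one bad neighbor (say, a station move) from contaminating the reference – which also means a cluster of neighbors sharing the same UHI trend would sail right through, exactly as described above.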

  430. Willis,
    Some questions, then.

    How is anomaly defined in Fig 6 and 7? What is your baseline period? Is it a coincidence that Fig 7 starts at an anomaly of zero, or did you adjust the whole plot downwards to make it so?

    You seem to recognise there was a station move in 1941. I find your ‘judgment call’ to not make an adjustment there to be quite odd. I’d suggest repeating Fig 6 and 7 with an adjustment for the station move, to see what it looks like. Further, did you ask what sorts of site adjustments might have been made after that time?

  431. Willis

    Please look at the CRU source code I put in my last comment. The “homogenization” and the “fudge factor” codes match perfectly. It is a connection that proves beyond doubt that the raw data was manipulated.

  432. Willis

    OK, so the “fudge factor” and the “homogenization” are from two different sources, but their manipulations are almost identical.

  433. M. Johnson (16:39:05) :

    As someone who works with analysis of scientific data for a living, the thing that most strikes me about the removal of “inhomogeneities” is that it is a technique that is generally thought necessary only for small data sets. When dealing with a sufficiently large data set, random errors cancel themselves out. For example, for every station moved to a warmer location, one would expect there would be one moved to a cooler location – so why correct at all?

    Surely the surface temperature record of the entire world is a sufficiently large data set to analyze, at least once, without “correction” for random errors.

    I definitely start with the “raw” data. As you say, most of the stuff averages out. However, it’s not as simple as small vs. big datasets.

    For example, the US switched over to an “MMTS” sensor system in place of thermometers. This is a one-off event, and always raises the temperature.

    So some adjustments are necessary. Darwin Zero, on the other hand …

  434. Willis,
    Here’s a bit of back and forth I had with a gent over at the Discover Magazine blog … can you make any sense of his comment, and is there anything to it? Did they just adjust station zero to match 1 and 2?

    61. Larry Johnson Says:
    December 9th, 2009 at 9:27 pm
    “50. MartinM Says:
    December 9th, 2009 at 6:47 pm
    What, this isn’t an explanation?
    If you actually look at the raw data, it’s pretty bloody obvious why the station 0 record has been adjusted in the way it has. The step change around 1940 is obviously due to the shift of the station and the addition of a Stevenson screen. And the addition of an upwards trend from that point on is to bring it into line with stations 1 and 2, which track each other almost exactly, and both show a strong warming trend from 1940 (1950 in the case of station 1, since that’s when it starts) to 1990.”
    Appreciate any valid explanation, but I’m not followin’ ya. Willis says they all pretty much agree (all three), so why adjust any of them? Then he says they adjusted 0 and 1 but left 2 untouched. They all look pretty close to me. So I still think his concerns are valid. Maybe I’m missing something.

    http://blogs.discovermagazine.com/intersection/2009/12/09/how-the-global-warming-story-changed-disastrously-due-to-climategate/comment-page-2/#comment-41786

  435. Map the CRU “fudge factor” array onto its years (yrloc = 1400, then 1904–1994 in five-year steps) and, before the final ×0.75 multiplier, you get:

    1904–1919:  0.0
    1924:      -0.1
    1929:      -0.25
    1934:      -0.3
    1939:       0.0
    1944:      -0.1
    1949:      +0.3
    1954:      +0.8
    1959:      +1.2
    1964:      +1.7
    1969:      +2.5
    1974–1994: +2.6

    Now plug those numbers in next to the GHCN “homogenization” of Darwin and compare with the CRU “fudge factor” algorithm. Or should we say ALGORE-RITHM.

  436. Rick Jelliffe (00:55:45) :

    I am intrigued that you include YAMBA in your readings, in the second diagram, and call it a Northern Station. It would be surprising if it fits the bill for a neighbouring station. Is that what the CRU people did, or just what you did?

    YAMBA (lat -29.43, long 153.35) is about 3,000 km from Darwin. It sits on a different ocean (the Pacific), next to my home town, and has a very equable climate. It is the YAMBA you get when dialing it up on the AIS site.

    3,000 km, to give some context, is a little more than the distance between London, England and Ankara, Turkey; more than the distance from Detroit, Michigan to Kingston, Jamaica.

    I included everything bounded by the UN IPCC box shown in Fig. 1, from 110E to 155E and from 30S to 11S. Yamba is within that box.
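
    The ~3,000 km figure checks out, by the way. A quick great-circle calculation, using approximate coordinates for Darwin and the Yamba coordinates quoted above:

    # Haversine check of the Darwin-Yamba separation. Darwin's coordinates
    # are approximate; Yamba's are from the comment above.
    from math import asin, cos, radians, sin, sqrt

    def haversine_km(lat1, lon1, lat2, lon2):
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = (sin((lat2 - lat1) / 2) ** 2
             + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371 * asin(sqrt(a))  # Earth mean radius ~6371 km

    print(haversine_km(-12.46, 130.84, -29.43, 153.35))  # ~2,990 km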

  437. Yamba is a few hundred kilometers south of Brisbane on the Eastern coast of Australia. There is absolutely NO WAY that any adjustment of Darwin could be made based on any data from Yamba. Is that what you’re saying was done? Pure bunkum if it was.

  438. Time to start from scratch! At this point our ground truth is just not there. So I hereby humbly offer, gratis, my humble action plan (it’s also a bit cheaper than the Copenhagen Treaty, and will as a bonus produce lots of good jobs at good wages):

    1) Spend the next two years designing a new state-of-the-art family of fully automated, reliable instruments specified to produce a reliable, century-long, web-accessible local climate record.

    2) Spend the next ten years surveying sites (good, better and best) and equipping them with the respective good, better and best equipment sets (fully redundant 3D profile of latent and sensible heat fluxes, H2O, CO2 concentrations, etc., installed on a 100 m tower), at a ratio of 10^6, 10^4 and 1000 sites.

    3) Spend the same 12-year period developing the best possible open analysis tools (Linux GNU-like) and testing, verifying and re-testing (as we’ve done with the satellite MSU data).

    4) Do the same with a new fleet of redundant satellite equipment packages which can piggy-back on various satellites to monitor the sun, the earth’s full-disk radiation balance, cloudiness, and volcanoes (including undersea, under ice, under sea ice, etc.).

    5) Accumulate data from 2020 until 2060 (gives us a couple of double-length solar cycles to compare against), and then we can see if the best of the 2060-version GCM models (presumably much improved, with exaflop-caliber computing available in your pocket) are able to match our high-precision, high-accuracy, optimally sited ground- and space-truth data sets.

    6) Then we can make a more credible judgment than we are able to at this time.

    The other BIG benefit: by making ALL of the data and all of the analysis tools freely accessible to any and all via a web browser, we can crush once and for all the “alchemist-like” AGW Gaia priesthood.

  439. Dominic White (06:45:19) :

    It appears you quote the explanation for the homogenisation results in your own entry. Have a look at http://www.bom.gov.au/climate/change/datasets/datasets.shtml which details how and why the data was homogenised (all the research is public). Their opening paragraph:

    “A change in the type of thermometer shelter used at many Australian observation sites in the early 20th century resulted in a sudden drop in recorded temperatures which is entirely spurious. It is for this reason that these early data are currently not used for monitoring climate change. Other common changes at Australian sites over time include location moves, construction of buildings or growth of vegetation around the observation site and, more recently, the introduction of Automatic Weather Stations.”

    No scandal at all, just a change in thermometer housing in Aus. The resulting data set for Darwin is at: http://www.bom.gov.au/cgi-bin/climate/hqsites/site_data.cgi?variable=meanT&area=nt&station=014015&dtype=anom&period=ann

    “Just a change in thermometer housing”? I provided a link at the end of my original post showing that the change to Stevenson Screens took place in the 1800s, so that’s out. You are left with 5 adjustments to explain, 1 downwards, 4 upwards.

    You also need to explain why after adjustments the five site records (which were running within a few tenths of each other) were suddenly very different. That’s like adjusting 5 clocks that are telling the same time so that they show different times. What kind of an adjustment is that?

    I’m sorry, but your explanations simply don’t add up to the 2.5 C adjustment we see in the records.

  440. J. Peden (23:08:27) nails it:

    While I completely understand where JJ is coming from and admire the sentiment, I agree with J. Peden, “No, they should have ‘told’, right from the start. Otherwise they have no Science. No one even needs to ask. If they don’t tell, they have nothing scientific to ‘ask’ about.”

    This is precisely the point. The “climate scientists” who pretend that they are ginning out “scientific” conclusions that are utterly unverifiable and unreproducible because they hide (or lose or destroy) their data and methods are not scientists, by definition. To claim the scientific mantle without sharing data and methods — remember, “no one needs to ask” — is, prima facie, fraud.

    In contrast, Willis — like a bona fide scientist — has put his data and methods front and center for all to see and critique. He is entirely within his rights to call out the CRU for failing to do the same.

    I think there is a good shot here, for someone, at putting together a massive False Claims Act case in the United States. Anyone have any idea how many millions of U.S. tax dollars have been squandered on this hokey “climate science” stuff?

  441. Westhighlander, I concur. Let’s get going with this. Using Yamba to adjust Darwin in a chaotic system is pseudoscience. We need an open-source, volunteer-funded and volunteer-driven solution for this mess.

  442. John F. Opie (06:41:26) :

    Hi –

    First of all, kudos to Mr. Willis Eschenbach: a right scrum job there.

    Of all the critiques – and I skimmed the comments – it seems that no one has noticed the fundamental error that the adjusters made.

    If I am constructing a proxy, I need to find a baseline, one “best fit” or best practice estimation point or series of data points where I can then leverage incomplete knowledge to form a greater whole.

    As someone who handles a huge amount of industrial data: the closer you are to the current date, the more confidence you generally have in the data, as reporting is better, samples are more complete, turnaround is faster (resulting in faster revisions), and if you have a question about the validity of a data point, you can find an answer with recent data (simply ask and someone may remember), while asking about a monthly data point from 23 years ago results in shrugs and “dunno”.

    Now, that said, I do understand that adjustments are placed on data, especially if the stations involved have changing environments. Given the difficulties in having a constant operating environment, I can understand the rationale behind making corrections. But why did they do this from a baseline in the 1940s?

    Data must be understandable and pristine: hence if I have a daily temperature reading (and the poster who points out that this is an average between daily highs and lows is correct: without those, these numbers are compromised unless you are only looking at the vector of the average, rather than its level … but I digress), I want that to remain unchanged, since I can then simply add new data points without any adjustments (until, of course, something happens that requires an adjustment: in that case, I rebase the series so that my current readings always – always! – enter the database unadjusted).

    To repeat: the most current data is inviolable and is not to be adjusted!

    This is exactly, it seems, what Mr. Eschenbach did in his Figure #6.

    The egregious error of the original data processing was to apply the corrections from left to right: it should be applied right to left, i.e. from the current data point backwards.

    By applying it from the first data point forwards, they are making the assumption that the oldest data is the most accurate and hence forms the baseline for all following data corrections: from what I’ve understood of the data collection methods, this is, shall we say, a somewhat … heroic assumption.

    This is, really, basic statistical analysis. I’d love to be snarky about this and ask if climatologists have ever had statistics, but know too little about what they learn during their training to do that with any degree of accuracy.

    Pun intended.

    You are correct, and that is the way that the adjustments are added, from the most recent point backwards. I have chosen to show them the other way, as it makes the steps more visible. Since we are working with anomalies, it makes no difference: it’s still a step up of 2.5 C, whether you begin at the start or the end. A toy demonstration is below.
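
    A minimal sketch (the step size and years are made up):

    # Toy demonstration that anchoring adjustments at the first point or at the
    # last point changes only the series' constant offset, never its trend or
    # anomalies -- the total step-up is identical either way.
    import numpy as np

    raw = np.zeros(100)
    adjustment = np.where(np.arange(100) >= 40, 1.5, 0.0)  # one hypothetical step

    anchored_at_start = raw + adjustment                 # recent data shifted up
    anchored_at_end = raw + adjustment - adjustment[-1]  # early data shifted down

    # Identical anomalies (each series relative to its own mean):
    a = anchored_at_start - anchored_at_start.mean()
    b = anchored_at_end - anchored_at_end.mean()
    print(np.allclose(a, b))  # True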

  443. I am going to patiently wait for Willis to confirm that Yamba was NOT used to correct Darwin. If Yamba was in fact used to correct Darwin, then I can honestly say that the Darwin “corrected” figures do not in any way reflect the historical temperature at Darwin. There are simply too many local climate variables between Yamba and Darwin – to such an extent that their climates are mutually exclusive. To begin with, Darwin is tropical, Yamba is temperate. How on earth can an adjustment be made between such environments? Can we please start a unified project to analyze and collate all ground station data?

  444. Sir-

    Thank you for your work. I wish I had that kind of time, not owned by others, for that kind of effort.

    Bravo!

  445. I once attended a seminar given by a guy who was an expert in radio communications. During a break, he told us a story about how he had developed a burst transmitter design for an agency within the “intelligence community”. In the process, he described how not only did this intelligence agency have guys designing radio transmitters that could be hidden, they had another set of guys, a “counter group,” whose job it was to detect hidden radio transmitters. These two groups would go after each other in an attempt to come up with the best possible transmitters and the best possible methods of detection.

    In climate science, we have a bunch of seemingly half-drunken academics who live off the government dole while they concoct amateurish schemes to prove something that, it seems, has been predetermined to be true, no matter the actual empirical data. The only group of guys trying to test their schemes are underfunded or doing work on their own time, pro bono.

    This process is obviously corrupt. It was never meant to provide the truth. If it was, the government research community would also have a fully funded “counter group” to try to prove that “Anthropogenic Global Warming” doesn’t exist, has little impact or at least can be easily mitigated and therefore save billions, if not trillions, of dollars/Euros/pounds on trying to prevent a non sequitur.

    The fact that there is no “counter group” immediately brings into question the purpose of the activity and whether it is meant to be part of that “waste, fraud and abuse” that so often infiltrates all vestiges of government. The fact that this is an international activity makes one wonder if the UN has any real function except to give heads of state a chance to go shopping in New York City from time to time and travel to useless conferences where they can dine well and come up with new ideas on how to fleece their citizens at home.

  446. What information is contained in the “failed quality assurance” files? I presume these are the left-overs after the “value-adding” (or fact-removal) process?

    Just probing without much knowledge.

    My gut read is that you’ve done another fair-minded analysis. Maybe that’s what is disturbing to so many people.

    Also, out of curiosity, Willis: in picking through the meta-data during your quest, have you ever seen anything approaching the chaotic “Harry-Read-Me” file?

  447. Steve Short: “Next we have the control freaks…”

    I disagree. A disciplined beard is the sign of a disciplined mind. They adorn men who are in a hurry to get things done, to shape the world in their own dynamic image. They don’t leave any loose ends lying around and are fully in control of their domain.

    Gail Combs: “By doing a scientific comparative analysis of a picture of Steve to a picture of Einstein…”

    Not so fast. A common mistake among sceptics is to fail to take into account that the Beard Index is multi-factorial. Adjustments must be made for cultural, geographical and other relevant factors.

    The relevant factors for Einstein are that 1) he was the 20th century’s reigning scientific genius, and is thus entitled to a major adjustment upwards for the genius factor; 2) he wore a moustache, thus the downward adjustment for scruffiness is half that of fully bearded men.

    Interestingly, Lord Christopher Monckton makes no appearance on my index. This is a puzzle, since, despite being beardless, this lack is fully compensated for by his eyebrows. I suspect he inhabits an index all of his own.

  448. Mike (19:31:13)

    I am going to patiently wait for Willis to confirm that Yamba was NOT used to correct Darwin. If Yamba was in fact used to correct Darwin, then I can honestly say that the Darwin “corrected” figures do not in any way reflect the historical temperature at Darwin. There are simply too many local climate variables between Yamba and Darwin – to such an extent that their climates are mutually exclusive. To begin with, Darwin is tropical, Yamba is temperate. How on earth can an adjustment be made between such environments?

    So where is the actual thermometer at Yamba? I’ll wait patiently for someone to tell me it’s not in the car park of the Moby Dick Motel. : )

    You couldn’t make this stuff up.

  449. (Follow up on Yamba)

    So let me get this right. The HadCRUT3 paper shows hundreds of stations that it says are used, which correspond to the Aust BOM stations. See figure 1 http://hadobs.metoffice.com/hadcrut3/HadCRUT3_accepted.pdf

    But you quote Professor Karlen that NASA only has three stations. You pick three stations, one monsoonal (Darwin), one desert (Alice Springs), one temperate coastal (Yamba), and add them, and then you get surprised that the result does not look anything like the IPCC graphic? What is the point of that?

    Then you do all sorts of elaborate reverse engineering, to discover that there has been some kind of a data adjustment, when the owners of the data (the Aust. BOM — I don’t think NASA had any stations in Australia in the early 1900s!) warn in their page on the Australian station figures that the early numbers are unreliable without an adjustment.

    It seems to me that your figure 4 is the only one of much interest. Where is this smoking gun?

  450. Willis,

    “First, yes, I read the text you quoted.”

    Perhaps, but you don’t seem to understand the implications.

    Your post centers on evident large adjustments to a station that do not appear to make sense with respect to the temperatures local to that station. You claim this is ironclad evidence of wrongdoing.

    The methods document that you quoted answers that charge. It recognizes that the homogenization methods may apply large adjustments to single stations, that do not make sense with respect to temperatures local to that station. The methods document in fact recommends that unadjusted data be used for analyzing single stations for this reason.

    The assertion is that the homogenized data are more reliable when used to analyze long term trends at the region scale or larger. Support is given for that assertion, in the form of a cited paper. The further assertion is that the homogenization methods have small effect on globally averaged results. Support is given for that assertion, in the form of a cited paper.

    Your claims regarding the Darwin adjustments are responded to in the paper you read prior to making the claims.

    If you have legitimate issues with Darwin or any other site in GHCN, the fact that you have found a site with large adjustments that do not track well with local temps is not among them. That circumstance is predicted in the methods. A well-supported response to you would be ‘Duh. We told you that.’

    Moving forward, potentially legitimate lines of attack would include:

    1) Refuting the assertion that the homogenized data are valid for long term, region scale or larger trend analyses.

    2) Refuting the assertion that the homogenization method has only minor effect on globally averaged temperature trends.

    Above, Basil posted a link to a NOAA chart that plots Adjusted – Raw, and the trend of the adjustments is 0.33C (vs a total ‘global warming’ land temp trend of 1.2C over the same period). Not knowing which version of GHCN is used for the graph, I don’t know which homogenization methods the graph applies to, but 0.33C seems significant even if it isn’t earth-shattering.

    More importantly, the real metric of interest would not be Adjusted – Raw, but Properly adjusted – Improperly adjusted, if such obtains. If you can prove that the homogenization methods are illegitimate for long term global temperature trends (see #1), or if the methods are OK but have not been applied per spec, and if the resulting error is of significant magnitude, you’ve got something. You don’t have any of that yet.

    “I know that huge adjustments are sometimes made to individual stations.”

    Do you also understand that even if those huge adjustments don’t track local temperatures at some stations, the homogenized data are held to be valid for the purpose to which they are being put? Find all the large, weird adjustments you want. You don’t have anything unless you prove wrong the research that says they don’t matter in the aggregate.

    “I’ve looked at them. I’ve looked at a lot of stations. The adjustments to Darwin Zero are in a class all their own.”

    Claiming to have found a rare outlier does not strengthen your position.

    “And yes, that is possible, it may all just be innocent fun and perfectly scientifically valid. ”

    As of this time, you do not have reason to believe otherwise, let alone point fingers and claim criminality.

    “And if someone steps up to the plate and lists why those adjustments were made, and the scientific reasons for each one, I’ll look like a huge fool. Still waiting …”

    One wonders what exactly you are waiting on. You have the raw data. The homogenization methods have been provided to you, along with a bibliography of documents that provide great detail. You quote from them.

    You need to read them. If you do, one of the first things that you are likely to pick up on is that (outside of the US) GHCN2 does not apply adjustments of the sort that your ‘show me the scientific reason for each one’ question assumes.

    Stop ‘waiting’. Get to work.

  451. Neo wrote:

    “In climate science, we have a bunch of seemingly half-drunken academics who live off the government dole while they concoct amateurish schemes to prove something that, it seems, has been predetermined to be true, no matter the actual empirical data. The only group of guys trying to test their schemes are underfunded or doing work on their own time, pro bono.

    This process is obviously corrupt. It was never meant to provide the truth. If it was, the government research community would also have a fully funded “counter group” to try to prove that “Anthropogenic Global Warming” doesn’t exist, has little impact or at least can be easily mitigated and therefore save billions, if not trillions, of dollars/Euros/pounds on trying to prevent a non sequitur.

    The fact that there is no “counter group” immediately brings into question the purpose of the activity and whether it is meant to be part of that “waste, fraud and abuse” that so often infiltrates all vestiges of government.”

    Henry Bauer, who believes that the currently embedded practices of modern, bureaucratic science have corrupted it (the CAGW consensus being a prime example, IMO), has suggested that 10% or so of funding needs to go to contrarian viewpoints, that there should be a place at the table for contrarians (in every field), and that there should be “science courts” where both sides can argue their case in matters where established science has shut out or shouted down outsiders. You can find more here:

    “Science in the 21st Century: Knowledge Monopolies and Research Cartels”

    By HENRY H. BAUER
    Professor Emeritus of Chemistry & Science Studies
    Dean Emeritus of Arts & Sciences
    Virginia Polytechnic Institute & State University

    Journal of Scientific Exploration, Vol. 18, No. 4, pp. 643–660, 2004

    http://henryhbauer.homestead.com/21stCenturyScience.pdf

  452. So, you guys all know better than thousands of actual scientists? Global warming is actually not happening?

    So…
    Why are all those glaciers melting?
    Why have the sea levels risen?
    What has happened to the billions of tons of CO2 that HAS been pumped into the atmosphere by human activities?
    Would you dispute the science that shows CO2 has a warming effect in the atmosphere?
    Would you dispute that there is now significantly more CO2 in the atmosphere than prior to the industrial revolution?
    So… according to you [snip] types, somehow, maybe by magic, the CO2 that humans put into the atmosphere doesn’t add any additional warming.
    Well, that’s okay then. Let’s all just assume everything is OK, and not take any action.
    Brilliant. I feel much better with you geniuses around.

  453. Alan P

    Glaciers: Some are retreating, some are growing, some are not moving at all; but in general we know that glaciers have been retreating since around 1870. (Before that they were growing, due to the huge impact of Krakatau.)

    Sea levels have been rising for a very long time – there is nothing new in that – but since 2006 there has been no change.

    The CO2 will stay in the atmosphere for a while; for how long depends on who you ask.

    There is more CO2 in the atmosphere now than in pre-industrial times, no one denies that, but at this level (380 ppm) an increase of CO2 makes a very limited impact on the greenhouse effect. (Do not even try to dispute that; it has been verified in the correct scientific way.)

  454. Mike (19:22:41) :

    If you want to know how mad it would be to adjust Darwin using Yamba data, check out Google Maps: http://maps.google.com/maps?hl=en&source=hp&q=yamba+australia&um=1&ie=UTF-8&hq=&hnear=Yamba+NSW,+Australia&gl=us&ei=0GUgS8eSC4q9ngfZtKzWDQ&sa=X&oi=geocode_result&ct=title&resnum=1&ved=0CAsQ8gEwAA Please tell me that Yamba was not used to adjust Darwin! Please, please!

    OK. Yamba was not used to adjust Darwin. It’s just in the area indicated in Fig. 1, is all, so it’s used in that average.

  455. JJ (23:13:30) :

    It recognizes that the homogenization methods may apply large adjustments to single stations, that do not make sense with respect to temperatures local to that station.

    ….

    The assertion is that the homogenized data are more reliable when used to analyze long term trends at the region scale or larger. Support is given for that assertion, in the form of a cited paper. The further assertion is that the homogenization methods have small effect on globally averaged results

    That’s good news, JJ. Since the homogenized data don’t work on the local scale (point one above) and they don’t make much difference on the global scale (point two above), we can avoid all the controversy and forget about them.

    My objection goes deeper. Nature is not homogeneous. Here it’s rock, and a centimeter away, it’s air. Here it’s clear air, and there, it’s a cloud. Nature specializes in improbable events, the “long tail” probabilities. I’m not sure that removing the odd-balls gives us better numbers. The problem, of course, is that people from the IPCC on down use the “homogenized” numbers at sub-continental scales.

    Finally, the global sum of the adjustments is about 0.6 F. Half of the warming of the last century is adjustments. McKitrick has previously shown that half the warming of the last century is from urban and other development. Doesn’t leave much.

    Now, back to Darwin Zero … and thanks for the questions.

  456. Willis,
    Further, in Fig 5, you plot the various records that come up for Darwin Airport in the GISS page, looking at raw data.

    Simply looking at them, it looks to me that in the periods of overlap those aren’t independent measurements at all, but duplicates. Did you think to ask somebody to clarify that first, before drawing conclusions like “Why should we adjust those, they all show exactly the same thing”?

    Meanwhile, in looking for neighboring stations, I don’t think we need to limit ourselves to ones that go back to 1941; we can still learn something from ones that start after that.

    I find it interesting that the GISS homogenised set doesn’t pick up until 1963. If nothing else, we might learn something here about the difference between GISS and GHCN’s methods for homogenisation.

    Finally, I’d still like to see you repeat the analysis with an adjustment for the station move in 1941; not doing so seems really questionable.

  457. I can’t believe this. I can’t believe that the science has been so corrupted.

    I wish the ABC would finally do their job and begin reporting on serious issues like the global warming alarmist fraud which is present and persistent in the scientific community at the moment.

    Anyone want to start a petition?

  458. Alan P

    So, you guys all know better than thousands of actual scientists?
    Some of us are actual scientists and engineers with decades of experience, so don’t give us the PR line about the “SCIENTISTS”.

    The questions raised by the “skeptics” are rarely of the form: does “climate change”, “climate warming” or “climate cooling” exist? … they all do, from time to time.

    The real questions to be asked are: how great are the changes? … are they within normal variability? … does man’s contribution really change things that much? … why isn’t the IPCC including the effects of clouds in their models? (it’s hard) … are clouds affected by cosmic rays? (possibly yes) … why is solar variability, including the solar effects on cosmic rays, downplayed by the IPCC? … can we mitigate it more cheaply? … is it worth the price? … can we live with it?

    Some of these and other questions have difficult answers, so let’s start now to answer them. The science is not “settled.” Anyone who tells you it is “settled” is a fool, an idiot, or a politician looking to perform a “wallet-ectomy” on you.

    Every problem does not demand a solution. For instance, there is a chance that the Earth could be hit by a massive meteor that could end civilization in minutes; do you see trillions of dollars being spent on a solution to that problem? No, because it would cost too much given the likelihood. A large solar flare could fry the Earth; is there a solution to that? No, we will all fry if that happens.

    Is it worth $20 or $30 trillion to possibly reduce the global temperature by 0.1C, when it might naturally go up by 1.0C or more – or might not go up at all? Could that money be better spent mitigating the results instead? Maybe an alternative would leave some money to go toward ending suffering and reducing world hunger. Now there’s a happy thought.

  459. Willis, here is what seems to be another smoking gun. There used to be two stations at Halls Creek. The first, at Old Halls Creek [Latitude: 18.25° S; Longitude: 127.78° E; Elevation: 360 m], operated until the 20th May 1999. Yet the BOM only graphs the data up until 1951 – thereby jettisoning nearly half a century of data. Nevertheless, the data they use shows a mean maximum temperature flatlining at approx 33.4 degrees C. This may be seen at: http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_display_type=dataGraph&p_stn_num=002011&p_nccObsCode=36&p_month=13

    The second station, at (New) Halls Creek Airport [Latitude: 18.23° S; Longitude: 127.66° E; Elevation: 422 m] opened in 1944 and is presumably running today. The data produced shows a mean maximum temperature flatlining at approx 33.7 degrees C. This may be seen at: http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_display_type=dataGraph&p_stn_num=002012&p_nccObsCode=36&p_month=13

    Now, here we have the data from two stations separated by about 15 kilometres horizontally and 62 metres in elevation. That the higher of the two shows the higher temperature may have something to do with its site (airports tend to have lots of lovely heat-absorbing tarmac/asphalt). Nevertheless, they both show flatlining annual mean maximum temperatures. One would therefore expect a homogenised graph of the data from the two to reflect this – i.e., to flatline at about 33.5–33.6 degrees C.

    However, this is precisely what we do not find. Instead, the data from the two have been crudely combined to produce an apparent century-long annual mean maximum temperature rise from about 32.2 degrees C in 1900 to about 33.8 degrees C in 2009. This may be seen at: http://reg.bom.gov.au/cgi-bin/climate/hqsites/site_data.cgi?variable=maxT&area=aus&station=002012&dtype=raw&period=annual&ave_yr=T

    As the late Bernard Levin might have said: ‘Not only is this nonsense, it is nonsense on stilts.’
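
    For contrast, here is a sketch of one sane way to splice two overlapping records like these: estimate the mean offset between the stations over the years both reported, shift one record by that offset, then average. The data below are made-up stand-ins for the two Halls Creek series:

    import numpy as np

    years = np.arange(1900, 2010)
    old = np.where(years <= 1999, 33.4, np.nan)  # toy Old Halls Creek record
    new = np.where(years >= 1944, 33.7, np.nan)  # toy airport record

    overlap = ~np.isnan(old) & ~np.isnan(new)
    offset = np.nanmean(new[overlap] - old[overlap])  # ~0.3 C site difference

    combined = np.nanmean(np.vstack([old + offset, new]), axis=0)
    # combined flatlines near 33.7 C -- no century-long rise from the splice

    Spliced that way, two flat records stay flat; a rising trend can only appear if the offset is ignored or misapplied.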

  460. JJ (23:13:30) :

    I’m generally in agreement with your take on this. I do not know if Darwin is indicative of a larger problem or not. I do think it indicates the need to check further. With the release of a “subset” of the HadCRUT3 data, I downloaded it and looked for a station near me. That ended up being Nashville, TN. I then pulled the GISS data to compare it. In recent years, the data are exactly the same. But at some point in the past, the GISS becomes consistently 1 degree cooler. I’ve been too busy to nail it down — hopefully this weekend — but it has me suspicious of how GISS adjusted its data so that max US temps were no longer in the 1930s, so I’m wondering if there is a pattern here.

    One thing that has occurred to me, and which I may write up as a post for Anthony’s consideration, is that all these “step function” adjustments we see, which have the potential to seriously distort estimated trends, are much less an issue when the data are analyzed as first (or seasonal) differences (see the sketch at the end of this comment). And the documentation for the homogeneity adjustments says — if I’ve read it correctly — that they are based on correlating differences with “nearby” stations, which strikes me as the correct thing to do — if we’re going to do it at all, which is another matter entirely. But whatever is gained from using differences seems to be lost in translating back to undifferenced data.

    Above, Basil posted a link to a NOAA chart that plots Adjusted – Raw, and the trend of the adjustments is 0.33C (vs a total ‘global warming’ land temp trend of 1.2C over the same period). Not knowing which version of GHCN is used for the graph, I don’t know which homogenization methods the graph applies to, but 0.33C seems significant even if it isn’t earth-shattering.

    Actually, most of the adjustment comes after 1950, so it is a relatively more significant part of the “warming” claimed for the second half of the 20th century.
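
    On the first-differences point, a toy illustration (the trend and step sizes are made up): a one-time station jump inflates the fitted trend of the level series, but in first differences it is a single outlier that a robust statistic shrugs off.

    import numpy as np

    years = np.arange(100)
    series = 0.005 * years + np.where(years >= 50, 1.0, 0.0)  # true trend + step

    level_trend = np.polyfit(years, series, 1)[0]  # ~0.020, inflated by the step
    diff_median = np.median(np.diff(series))       # ~0.005, ignores the jump

    print(level_trend, diff_median)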

  461. The adjustments that seem to have been applied do not look like they were made by a person. They look like they were made by an algorithm running through the data. It appears that this is what they did – they took the raw data and “homogenized” it. Assuming that they never went back and checked on their homogenization software, is it possible that that is where the fault lies? Could some long ago programmer have created a routine to work with the data and that program he/she created wound up having a runaway warming effect because of an error? That doesn’t seem unreasonable to me and would actually be sort of an exculpatory explanation for the scientists involved. If a scientist assumes that the homogenized data is the gold standard, and that process itself introduces the warming, unless that scientist checks his/her premises the data would seem to conclusively show a trend.

    Really, the only way to confirm this theory would be to do a lot of plots of data like this and see if this kind of adjustment is consistent. I’d propose an effort, similar to the “How (not) to measure temperature” series, for people to go through and do this exercise with other temperature sets. I, for one, would be interested in doing this, and I suspect you could create a whole “Army of Davids” to plot a ton of these quickly. My only problem is I don’t know where the data is, or how to get it into a format that could be analyzed (if Excel isn’t the software of choice for it). If you could do a post on how to get this data and how to do the analysis like what you’ve done here, I think it could be huge. If people checked 100 miscellaneous stations and they came up with both up and down adjustments that overall balanced out, then we could maybe rule out the idea that it was the adjustment scheme that created this problem. But if all or most of them show a significant warming adjustment, then we’d know something was up. If you can do a post on how to do this kind of work and then set people to it, I’ll bet you could have a bunch of them done within a day or two. (A starter sketch of the parsing step is below.)

    This post has already been very influential. It could be much more so if it turns out that this kind of adjustment is common.
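
    For anyone who wants to try, a minimal sketch of the parsing step. It assumes the GHCN v2 layout (a 12-character station/duplicate ID, a 4-digit year, then twelve monthly means in tenths of a degree C, with -9999 for missing); the filenames and exact columns should be checked against GHCN’s readme:

    def read_v2(path):
        data = {}  # (station_id, year) -> annual mean, deg C
        with open(path) as f:
            for line in f:
                stn, year = line[:12], int(line[12:16])
                months = [int(line[16 + 5 * i:21 + 5 * i]) for i in range(12)]
                vals = [m / 10.0 for m in months if m != -9999]
                if len(vals) >= 10:  # require a reasonably complete year
                    data[(stn, year)] = sum(vals) / len(vals)
        return data

    raw = read_v2("v2.mean")      # raw GHCN file (assumed name)
    adj = read_v2("v2.mean_adj")  # adjusted GHCN file (assumed name)
    for key in sorted(set(raw) & set(adj)):
        print(key[0], key[1], adj[key] - raw[key])  # the adjustment, year by year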

  462. I believe the Earth’s surface is approximately 29% land mass and 71% covered by water. In the Southern Hemisphere, water coverage rises to about 85%. Given the small area that can therefore be covered by ground stations, plus the many factors which can and do lead to errors, as a layperson I wonder why the UN IPCC scientists have chosen to use the data so gathered as the main basis for their climate modelling and predictions. Sophisticated satellite technology used for temperature measurement has been rigorously tested and would appear to be a far more accurate and less error-prone way to give a truer picture of global mean temperature. Perhaps the following comments of John Daly are just as relevant today as they were when first published. Quote:-

    “A further problem with the statistical processing is that neither GISS nor CRU can inspect the siting or maintenance of the thousands of stations they include in their data set. Thus even stations which might reasonably be assumed to be `clean’ like Low Head Lighthouse, may actually conceal site-specific errors which are known only to people who are local to the station.
    As to station selection, not all stations in the world are included in the global data sets. This raises the question as to what criteria are used when selecting stations when the researchers at GISS and CRU have little local knowledge about the stations themselves. Indeed, they have shown no hesitation in accepting big-city stations into their data sets, knowing full well that urbanisation will be a wild card in the data.
    The Australian Bureau of Meteorology (BoM) did attempt a station-by-station analysis of error-producing factors for Australian stations, including urbanisation, and offered an adjusted dataset of all Australian stations. The corrections included Low Head Lighthouse, whose record was `cooled’ and brought more into line with nearby Launceston Airport. However, GISS and CRU are still using the original dataset, not the modified one offered by the BoM.

    Station degradation and closures

    Since about 1980, there have been numerous closures of stations across the world as governments have sought to cut expenditure in public services. The loss of stations has been particularly significant in the southern hemisphere where the station density was already thin to begin with. The adoption of the 100-station `climate reference network’ to cover the vast Australian continent suggests a further downgrading of stations not included in that hundred.
    This has an unintended consequence for the statistical calculation of `global mean temperature’. In each 5°x5° grid-box, any thinning out of the number of stations over time will result in a smaller mix of stations in the 1980s-1990s than was the case in previous decades, with a consequent shift in the mean temperature for each grid-box. This shift could theoretically result in a warming creep in some sectors and a cooling creep in others, not caused by climate, but caused by a shrinking station integration base. Another response to these closures is to accept stations into the database which had previously been excluded. This could result in sub-standard stations contaminating the aggregate record even further.
    The end result cannot be statistically neutral because the majority of the stations being closed are precisely those stations which GISS and CRU designate as `rural’. These are the very stations which have the least warming, or no warming at all, and their closure during recent decades leaves that entire period to the tender mercies of the urban stations – i.e. accelerated warming from urbanisation. It is little wonder that the 1980s and 1990s are perceived to be warmer than previous decades – the collected data is warmer. But was the climate itself warmer? The surviving rural stations would suggest not.
    We return to the central question. Is the surface record wrong in respect of both the amount of warming reported during the 1920s and in respect of the disputed warming trend it reports since 1979? In the latter case, the surface record is contradicted by both the satellite MSU record and the radio sonde record.
    This is where individual station records can prove useful. Such records represent real temperatures recorded at real places one can find on a map. As such they are not the product of esoteric statistical processing or computer manipulation, and each can be assessed individually.
    Some critics will dismiss individual station records as merely `anomalous’ (in which case most of the non-urban stations would have to be dismissed on those grounds), but when one station acquires an importance far beyond its own little record, no effort is spared to discredit it. This was the fate of Cloncurry, Queensland, Australia, which holds the honour of having recorded the hottest temperature ever measured in Australia, a continent known for its hot temperatures. The record was 53.1°C set, not in the `warm’ 1990s, but in 1889. It was a clear target for revisionism, for how can a skeptical public be convinced of `global warming’ when Cloncurry holds such a century-old record? The attack was made by Blair Trewin of the School of Earth Sciences at the University of Melbourne, with ample assistance from the whole meteorological establishment. And all this effort and expense was deployed to discredit one temperature reading on one hot day at one outback station 111 years ago. Stations do matter.
    In the Appendix, there are numerous station records from all over the world, most of them from completely rural sites, some of them scientifically supervised. One telltale sign of any good record is when the data extends back many decades with no breaks. Where the record is unbroken, it indicates better than anything else that the people collecting the data take their task seriously, a good reason to also have confidence in their maintenance and adherence to proper procedures. Where the record is persistently broken, such as Mys Smidta and many other Russian and former Soviet stations, there is no reason to have any confidence in the fragmentary data they do return.
    However, this is not the way GISS, CRU, or the IPCC view them. In spite of the station closures, and the fragmentation of so much Russian and former Soviet republic data, the surface record continues to be accepted uncritically in preference to the well-validated satellite and sonde data. Indeed, in the latest drafts of the IPCC `Third Assessment Report’, the surface record is taken as a foundation assumption to underpin all the predictions about future climate change. To admit the surface record as being seriously flawed would unravel the integrity of the entire report, and indeed unravel the foundations of the `global warming’ theory itself.” [end quote]
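
    The grid-box point in the quoted passage can be seen with a toy calculation – a minimal sketch in Python with entirely hypothetical numbers, not taken from any real grid-box:

    # Hypothetical numbers: how losing rural stations can shift a
    # grid-box mean without any change in climate.
    rural = [14.0, 14.1, 13.9]   # cooler rural stations
    urban = [15.5, 15.7]         # warmer urban stations

    mean_before = sum(rural + urban) / len(rural + urban)  # all stations reporting
    mean_after = sum(urban) / len(urban)                   # rural stations closed

    print("grid-box mean before closures: %.2f C" % mean_before)  # 14.64
    print("grid-box mean after closures:  %.2f C" % mean_after)   # 15.60

    The roughly 1 C jump here is purely an artefact of the changing station mix – exactly the `warming creep’ the passage describes.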

    A suggestion: Could one of the many knowledgeable people who have posted on this site access the raw satellite data from 1979 to the present and see how it stacks up with what the UN IPCC has been putting out? Perhaps it’s a simple thought from a simple man, but wouldn’t that give us a good idea of the true situation?

  463. Alan,

    “So, you guys all know better than thousands of actual scientists?”

    There are not thousands of scientists who have published evidence that confirms Catastrophic Anthropogenic Global Warming. There are only a handful that have even claimed that. And yes, we have demonstrated in some instances that our work is in fact better than theirs, i.e. that their work was in fact wrong. And a significant number of that handful have been shown to be doing things that are decidedly unscientific, i.e. they are not ‘actual scientists’.

    And not that credentials matter – despite what you may have heard, science is about who has the right answer, not who has the right diploma – but some of us are actual scientists.

    Thousands of us, in fact. Not that numbers matter … despite what you may have heard, science is not a popularity contest.

    “Global warming is actually not happening?”

    Depends on what you mean by that term. One of the tricks that the alarmists use, and they have used it very successfully on you, is to switch back and forth between two or more definitions of that term. One of those definitions, there may be some evidence for. The scary definition of that term, the one that people wave around when they want to make politics go their way, is not sufficiently supported.

    The scary Catastrophic Anthropogenic Global Warming theory has several components:

    1) There is a recent (last 100 yrs) rise in globally averaged temperatures.

    2) The temperature to which the globe has risen is unprecedented in human history.

    3) The temperature rise is caused by CO2 emitted by human activity.

    4) The temperature rise can be reversed by changes in human activity.

    5) The temperature rise will bring net catastrophic effects to humans, such that we must do anything we can to stop it, provided that we can (see #4).

    That is ‘global warming’. All of those points are necessary to ‘global warming’. None of that has been proved.

    Only point #1 even comes close. I would argue that the temperature measuring networks have insufficient quality, spatial distribution, and temporal coverage to prove that claim. (Incidentally, if you read the CRU emails, you will see I am not the only one who thinks that :)

    That said, is the earth experiencing a recent (100 yr) warming trend? I wouldn’t be surprised. The longer term trend is the recovery from the Little Ice Age, and the much longer term trend is the climb out of the Big Honkin Ice Age. Given those trends, and the fact that climate is a dynamic system, I wouldn’t be surprised if we have seen some recent warming. So what?

    Note that the examples that you regurgitate from the alarmist talking points are all anecdotal (not rigorous proof of a global effect) and all reference #1. Whatever evidence there is for the global warming defined by #1, it is bait and switch to pretend that evidence proves the ‘global warming’ that depends on #1, #2, #3, #4, and #5.

    Point number two requires some very specific research – demonstrating that globally averaged temperatures are now higher than at any point in the history of civilization. That requires sufficient spatial and temporal coverage of temperature measurements that are of sufficient accuracy and reliability to make a determination of global temperature to within several tenths of a degree.

    This research does not exist. Only a handful of people are even trying to do it, and the most prominent among them are, bluntly, crooks.

    The third point has not been proven. It has only been demonstrated by shaky inference applied to computer models that are known to not accurately represent some physical processes important to climate (the modelers freely admit this), models that did not predict the current ‘lack of warming’ trend. This may be due in part to the fact that those models are parameterized with and calibrated to current temperature datasets (#1) and past temperature reconstructions (#2) that have not been proven and that have had the fingers of some very non-scientific people mucking about in them.

    Point four is based on those same models.

    So is point five.

    There are not ‘thousands of scientists’ working on each of those points, who have independently proven those points to be true. In many cases, there are only a handful of scientists working on any one of those points, they are not independent, and they have not yet proved the point.

    And some of those handful of scientists are not ‘actual scientists’. Some of them are doing things that are absolutely counter to science. Hiding adverse results, corrupting peer review, bullying other scientists, refusing to provide methods and data for replication, evading the laws that require them to do so, conspiring to destroy evidence … this is not science, and the people doing it are not scientists.

  464. I was skeptical of this skepticism! So I thought I would cross-check it (always healthy in scientific debate), so I went to the GISS data you referenced above.

    I chose to look at Norwich for several reasons. Firstly, it is a favourite town of mine; secondly, it has 100 years of reliable data and, as far as I know, there is no need to adjust the data.

    I had to slightly alter the data’s format so that I could plot it. The resulting graph is shown in the above link. I have fitted it with a sine function (the Earth orbits the sun fairly periodically, so it is fair to assume a sinusoidal fluctuation in temperature, though of course this isn’t accurate over long periods) plus a linear increase or decrease. This increase is 0.000659169 +/- 0.0001593 C/month in my model.

    Now I am not a climate scientist and this is only one data point but I thought I’d share it with everyone. Maybe we could each do this with our own favourite town and amass the results!

    [I used the following on an iMac (or Cygwin for Windows, or any *NIX):

    first, use the link in the original article to download raw data.
    next use a text editor (Vi, Excel, Notepad) to remove the header line of month names at the top and change the runs of spaces to tabs (in Vim you can use

    :%s/ \+/\t/g

    Then run:

    cut -f2-13 -d$'\t' station.txt | perl -pe 's/\t/\n/g' > station-temp.dat

    (here $'\t' is bash notation for a literal tab; equivalently, type CTRL-V then Tab between plain single quotes)

    Then use gnuplot to plot and fit the data.

    first issue the command:

    set datafile missing '999.9'

    as 999.9 signifies missing data

    then set the fit function:

    y(x) = A*sin(x*f + C) + E + D*x

    There may be a lot of reasons why using this function is a bad idea: cue climate scientists to shoot me down. The first part is just seasonal variation as we orbit the sun. A is an amplitude (probably based on where you are). f is the angular frequency of our orbit; in units of rad/month this is 2*pi/12, or 0.523598776. C just exists to offset the start point, which probably won’t be the spring equinox. E is the station’s mean temperature, which, like A, depends on where you are. D is the increase (or decrease) in temperature per month, if there is any at all.

    You then need to plot this graph to see how closely the fit resembles the data, and adjust the starting values until you get a reasonable fit. Once you’ve got something that looks reasonable, gnuplot can refine it to finer detail. However, if you don’t start with something reasonable, gnuplot goes mad and fails. E.g. for Norwich I used:
    A = 16
    C = 1.6
    f = 0.53
    D = 0.001
    E = 5.02003
    You can now issue these commands:

    (station-temp-numbered.dat is station-temp.dat with a month-index column prepended, e.g. via nl -ba station-temp.dat > station-temp-numbered.dat)

    plot "station-temp-numbered.dat" using 1:2, y(x)

    fit y(x) "station-temp-numbered.dat" using 1:2 via A, C, f, D, E

    We are interested in D and the error on D. Plot again to check that gnuplot did something reasonable. If it didn’t, you need better starting points.]
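
    For anyone without gnuplot, here is a minimal cross-check of the same fit in Python – a sketch assuming station-temp-numbered.dat was built as above (month index in column 1, temperature in column 2) and that 999.9 still marks missing data:

    import numpy as np
    from scipy.optimize import curve_fit

    def model(x, A, f, C, D, E):
        # Seasonal sine plus linear trend, same form as the gnuplot y(x).
        return A * np.sin(x * f + C) + E + D * x

    x, y = np.loadtxt("station-temp-numbered.dat", unpack=True)
    keep = y != 999.9                      # drop the missing-data sentinel
    x, y = x[keep], y[keep]

    # Starting guesses mirror the gnuplot values used for Norwich.
    p0 = [16.0, 2 * np.pi / 12, 1.6, 0.001, 5.0]
    popt, pcov = curve_fit(model, x, y, p0=p0)
    D, D_err = popt[3], np.sqrt(pcov[3, 3])
    print("trend D = %.6g +/- %.2g C/month" % (D, D_err))

    As with gnuplot, the interesting number is D, the linear trend; the square root of the corresponding diagonal element of the covariance matrix gives its formal error.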

    p.s. I have no doubt that industrialisation is slowly destroying the planet and needs to be halted.

  465. JJ (23:13:30) :

    ””One wonders what exactly you are waiting on. You have the raw data. The homogenization methods have been provided to you, along with a bibliography of documents that provide great detail.

    You need to read them. If you do, one of the first things that you are likely to pick up on is that (outside of the US) GHCN2 does not apply adjustments of the sort that your ’show me the scientific reason for each one’ question assumes.

    Stop ‘waiting’. Get to work.””

    JJ, can you explain what you mean by this?

    Are you telling Willis that he needs to figure out the reasons for each of the adjustments to Darwin on his own by relying on homogenization procedures described in the bibliographical documents?

  466. WAG (07:51:47) :

    “”But now you’re claiming that the RAW data is what we should rely on, and that adjusting for those errors is fraud?””

    I think Willis is comfortable with adjustments for errors. It seems as if he wants those who made the adjustments to describe exactly what errors they were adjusting for. There were half a dozen or so adjustments made, so they should be able to point to half a dozen or so errors.

  467. WAG (07:51:47) :

    “”Also, I thought that according to SurfaceStations.org, siting problems were what rendered the RAW data unreliable. But now you’re claiming that the RAW data is what we should rely on, and that adjusting for those errors is fraud?””

    (This is my second try. I’m sorry if this ends up as a duplicate comment.)

    WAG, I think Willis is willing to accept adjustments that make sense. He wants to know the reason that each of the half dozen adjustments was made. If there were half a dozen adjustments, then the adjusters should be able to document half a dozen specific errors that required adjustment.

  468. WAG (07:51:47)…

    …once again displays his abysmal lack of critical thinking.

    The siting issue is entirely different from the issue of CRU and Michael Mann hiding the raw data. Sites on airport runways, or next to air conditioner exhausts, should not be “adjusted” by the Elmer Gantry CRU scientists, who refuse to disclose how they do their data massaging. Sites contaminated by manmade heat should be discarded completely.

    WAG claims that “Lambert never made any ad hominems against Willis.” But Lambert says that Willis is “lying.” To any normal person, that is an ad hominem attack.

    WAG and Lambert are typical examples of the ethics-challenged people who migrate to the alarmist camp. Since they fail to make a credible case, they always fall back on ad hominem attacks. Lacking the science, that’s all they’ve got.

  469. “Actually, Lambert never made any ad hominems against Willis.”

    For real? The title of his frakking post is

    Willis Eschenbach caught lying about temperature trends

    That’s not ad hominem, though? That bald, unsupported assertion that Willis is knowingly engaging in deception, that’s not ad hominem?

    You guys just can’t recognize it when you see it. Because…well, you’re soaking in it. Really…”Climate Fraudit”? This is middle-school ad hom.

  470. Re my previous post (06:39:05): Why homogenise anything at all? Do the unused 48-year records from Old Halls Creek, from 1952 to 1999, continue to flatline at 33.4 degrees C? If so, whence comes the justification for a rise of 1.6 degrees C over the past century?

  471. My complaint would be that a 100-year sampling of Earth’s monitored weather is a moronic, idealistic approach to dictating or predicting any warming or cooling trend of this planet!

    The earth’s billions of years of turmoil and evolution and weather are far too extreme to evaluate by using only a 100-year window! And I find not one analysis or speculation giving our Sun any consideration!

  472. “A change in the type of thermometer shelter used at many Australian observation sites in the early 20th century resulted in a sudden drop in recorded temperatures which is entirely spurious. It is for this reason that these early data are currently not used for monitoring climate change. Other common changes at Australian sites over time include location moves, construction of buildings or growth of vegetation around the observation site and, more recently, the introduction of Automatic Weather Stations.”

    http://www.bom.gov.au/climate/change/datasets/datasets.shtml

    A number of commentators have pointed to this quote, which is absent from the top post. I’d like to make a meta point about what this tells us about the denier methodology. There are published papers on the homogenization methods that might have been addressed, possibly critically, in scientific argument by a skeptic. A denier, on the other hand, just puts the words “adjustment” and “homogenization” in scare quotes and places them in a speculative and conspiratorial context.

    Witness this subtle rhetorical use of scare quotes in place of actual arguments. From the top post: “The answer is, these graphs all use the raw GHCN data. But the IPCC uses the “adjusted” data.” This sort of thing shows clearly the line between skeptic and denier. The skeptic would look in the literature for the explanation and meaning of homogenization. The denier leaps to conspiratorial conclusions.

    [REPLY – When the NCDC, et al., actually releases said methods when asked repeatedly (rather than stonewalling and obfuscating), we could actually look into said “meaning”. (At least) until then, scare quotes apply. ~ Evan]

  473. “One of the tricks that the alarmists use, and they have used it very successfully on you, is to switch back and forth between two or more definitions of that term.”

    There’s a technical term for that trick: equivocation.

  474. Willis, I’ve been reading some of the criticisms of your article. And as a result, I am circling back to one of my original assertions regarding the assessment of the validity of the ground surface records, which is that the ground station data should be limited to raw data only and that “adjustments” are not permitted, because they are arbitrary and prone to error (errors that would exceed the global signal).

    Therefore, in order to analyze the ground record it would be necessary to break up a station’s record into independent data sets, where each data set is bounded by the previous and next known event in the station’s history (destruction, relocation, installation of screens, introduction of nearby structures, etc.). Then each individual set would be tested for increase or decrease, and the entire differential set would have averaging applied to determine a global up or down tick; a sketch of this segmentation idea follows below. However, this process is a monumental task that BOM did not carry out. They, and all other groups (NOAA etc), simply applied their “adjustments” in order to provide a continuous report for each station.

    These “adjustments” are OK for the original purpose of the stations, which was to produce a temperature record that meteorologists could then use to predict weather in regional or even local micro-climate areas surrounding groups of stations. But such a technique is hopelessly inadequate for predicting a global temperature trend. We (skeptics and supporters of AGW alike) cannot produce a reliable answer to the question until we have access to ALL of the original and completely unadjusted data and access to ALL of the station historical records. When do we get organized to get this project underway?
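
    The sketch mentioned above – a minimal Python illustration of the segmentation idea, assuming a hypothetical file layout (year and annual mean per line) and hypothetical event dates, not any station’s actual history:

    import numpy as np

    years, temps = np.loadtxt("station.dat", unpack=True)  # hypothetical file
    event_years = [1941, 1962, 1980]   # hypothetical documented station events

    edges = [years.min()] + event_years + [years.max() + 1]
    trends = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        seg = (years >= lo) & (years < hi)  # one independent data set per segment
        if seg.sum() >= 2:                  # need at least two points for a fit
            trends.append(np.polyfit(years[seg], temps[seg], 1)[0])

    # Average the per-segment slopes; no cross-event adjustment is applied.
    print("mean within-segment trend: %.4f C/yr" % np.mean(trends))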

  475. Wobble – the Australian scientists did detail their reasons for adjusting the data, as Lambert pointed out. Here’s what the scientists say:

    “A change in the type of thermometer shelter used at many Australian observation sites in the early 20th century resulted in a sudden drop in recorded temperatures which is entirely spurious. It is for this reason that these early data are currently not used for monitoring climate change. Other common changes at Australian sites over time include location moves, construction of buildings or growth of vegetation around the observation site and, more recently, the introduction of Automatic Weather Stations.

    “The impacts of these changes on the data are often comparable in size to real climate variations, so they need to be removed before long-term trends are investigated. Procedures to identify and adjust for non-climatic changes in historical climate data generally involve a combination of:

    * investigating historical information (metadata) about the observation site,
    * using statistical tests to compare records from nearby locations, and
    * using comparison data recorded simultaneously at old and new locations, or with old and new instrument types.”

    Smokey – no, accusations of “lying” do not constitute an ad hominem. If someone says something that is not true, that is a lie. If you claim that adjusting temperature data is baseless, “blatantly bogus,” or “indisputable evidence that the “homogenized” data has been changed to fit someone’s preconceptions about whether the earth is warming,” that is factually incorrect, and therefore a lie. Willis didn’t even make an effort to examine the reasons for adjusting the data.

  476. jrshipley (08:36:36) :

    ““A change in the type of thermometer shelter used at many Australian observation sites in the early 20th century resulted in a sudden drop in recorded temperatures which is entirely spurious…””

    And if the change in the thermometer shelter at Darwin occurred many years prior to 1941, then what was the reason for the SERIES of adjustments in the 1940s and beyond?

  477. ‘When the NCDC, et al., actually releases said methods when asked repeatedly (rather than stonewalling and obfuscating)’

    Evan, you’re the one obfuscating. It’s been pointed out to you that the 1940s jump is explained by a station relocation. You were aware that there were five stations in the temperature series.

    If you’d wanted to do a competent analysis, you’d have, oh, maybe colored the series from each station differently. Then you could see the differences in each series. Then you could have seen the changes in the 1940s as being due to *one of the old weather stations being bombed by the Japanese*, leading to a new station being built in a different location, rather than a grand conspiracy of Evul Climate Scientists. Unless the Evul Climate Scientists were helping the Japanese during WW2. How do the temperature trends look before and after 1941, given that the step change in the early 1940s is from a weather station relocation?

    However, you’ve pleased your fanboys. How nice for you.

    [REPLY Let’s keep it simple. How about just a nice cup of tea and the code? ~ Evan]

  478. I thought I’d cross post this for a laugh.

    “David, Willis is lying by presenting the raw temperature graph, which mixes together several records, as the true temperature record of Darwin. He doesn’t get the benefit of the doubt because of his previous dishonesty about temperatures.

    Posted by: Tim Lambert | December 9, 2009 11:33 PM”

    Now I could be wrong, but aren’t the whole IPCC reports, and much of Mann’s, CRU’s, and other global warming data, based on “mixing together several records” and then “adjusting them”…

    Tree rings, proxy data, adjusted data… all mixed and matched at different time scales and such… so is Tim Lambert now claiming the IPCC reports are lies?

  479. [REPLY – When the NCDC, et al., actually releases said methods when asked repeatedly (rather than stonewalling and obfuscating), we could actually look into said “meaning”. (At least) until then, scare quotes apply. ~ Evan]

    The methods are described in some detail in papers published by the NCDC. Willis could try to follow those procedures. Perhaps not all relevant information about that particular site and the data is available online – time of observation changes, and the like? Then he could politely ask either the Australian BoM or the NCDC where he could find more, instead of going to public accusations of fraud. Not feeling like writing your own code from scratch? I don’t know about NOAA, but GISS has all of theirs available for download; one could look at that.

  480. WAG (08:56:36):

    “Smokey – no, accusations of ‘lying’ do not constitute an ad hominem. If someone says something that is not true, that is a lie.”

    False conclusion. WAG demonstrates his abysmal lack of understanding and ethics by imputing motive to what could just as well be a mistake [or not], or simply a difference of opinion.

    If WAG can prove a deliberately dishonest motive for anything Willis said, then I will concede that Willis was lying. But given the fact that Willis repeatedly concedes whenever someone points out an error in his analysis, it is apparent that unlike WAG, he is interested in finding the truth.

    That makes clear that both WAG and Lambert are deliberately making their reprehensible ad hominem attacks. There is a reason that alarmists like WAG and Lambert make ad hominem accusations against those they disagree with: because they lack the facts to support their arguments. Their name calling doesn’t make them right, only despicable.

  481. Wobble,

    “Are you telling Willis that he needs to figure out the reasons for each of the adjustments to Darwin on his own by relying on homogenization procedures described in the bibliographical documents?”

    More to the point, I am telling Willis that if he reads the homogenization methods, he is likely to find that the sort of adjustments his criticisms assume (discrete adjustments for documented issues such as station moves, instrument changes, etc) are not the type of adjustments made in GHCN2 for stations outside the US. Rather, GHCN2 homogenizes by first-difference comparison to a reference series.

    The homogenization methods are documented, and Willis has them. He also has the raw data. This provides him several options for pursuing the argument legitimately.

    He could attempt to exactly duplicate the homogenization for the Darwin site. If it works out, then he would have complete knowledge of the nature of the adjustments. If it doesn’t, then he may have proved that the published adjustment methodology wasn’t followed – that would be a smoking gun of sorts.

    He could also perform his own homogenization adjustment of the raw data, using GHCN methods and a reference series of his choice that fits those methods and that he personally finds suitable. If choosing a different but otherwise suitable reference series results in dramatically different results, that would indict the adjustment methods.

    He could also perform his own homogenization adjustment of the raw data, using his own methods. If the results are significantly different from GHCN’s with respect to a globally averaged long term trend, he would be on to something. He’d still have to demonstrate that his method was sufficient (or, preferably, superior), but if he could, that would be huge.

    He could also investigate the principal claims about the homogenization method: that it is sufficient for large-area long term trend analysis, and that it results in minor effects on the global average long term trend. This would require plenty of reading and understanding of the methods documentation.

    Instead, he seems to be content to wave his arms wildly and point to the fact that the homogenization method appears to produce enormous adjustments in some stations, and that those adjustments sometimes do not track well with the temps local to that station – both of which are effects documented and accounted for in the published methods.

    This leaves him wrestling in the mud with the likes of Tim Lambert.

  482. Well done. Very impressive work. However by 1/2012 this will all be meaningless. Our system IS binary. Wormwood will emerge from Libra space to the south of our planetary plane. Governments will deny all former knowledge. A very close pass to Urth will leave the sky orange, the seas red and deposit many rocks. A turnaround path seven years later will destroy much. Yashua/Y’hovah will res-Q. Look up. The times of tribulation draw near…

  483. “[REPLY – When the NCDC, et al., actually releases said methods when asked repeatedly (rather than stonewalling and obfuscating), we could actually look into said “meaning”. (At least) until then, scare quotes apply. ~ Evan]”

    I never understand these calls for releasing things that are widely discussed in the literature. Lambert cites several papers. In any case, I’m still not seeing how anyone with an ounce of intellectual dignity can leave the explanation of homogenization out of a post on homogenized vs raw data. There’s no argument in favor of using inhomogeneous data and no substantive criticism of the published work on all of this, just conspiracy-mongering over adjustment of data at one station.

    Again, this is a clear case of denialism as opposed to skepticism. The voraciousness of the effort by deniers to undermine, through conspiracy-mongering, the credibility of the evidence for anthropogenic change indicates to me the strength of that evidence and the desperation at the exhaustion of skeptical lines of argument.

    [REPLY – It’s really quite simple. Until the algorithms are released for independent review it is impossible to replicate the findings or diagnose the procedure. “Widely discussing it in the literature” does NOT feed the scientific bulldog. Until methods are released, scare quotes are quite appropriate. As Col. Mandrake put it, “The code, Jim. That’s it. A nice cup of tea, and the code . . .” ~ Evan]

  484. WAG (08:56:36) :

    “”Wobble – the Australian scientists did detail their reasons for adjusting the data, as Lambert pointed out. Here’s what the scientists’ say:””

    Yes. I know what they generically say.

    Are you, here and now, claiming that each of their adjustments was definitively made as a result of an actual, and documented, event that they mention?

    This is all Willis is asking. It shouldn’t be that difficult for you to understand this.

  485. JJ (09:44:25) :

    “”He could attempt to exactly duplicate the homogenization for the Darwin site. If it works out, then he would have complete knowledge of the nature of the adjustments. If it doesnt, then he may have proved that the published adjustment methodology wasnt followed – that would be a smoking gun of sorts.””

    JJ, you’re wrong.

    It’s not enough to have the raw data and the methodology which is supposed to be used. Willis would need to know the recorded dynamic events which resulted in the adjustment methodology being applied – this is what he’s asking for.

    You seem to believe that a referee, who knows the rules of a game, is capable of determining whether or not penalties were properly applied during a game he did not see. The referee would also need to know the supposed reasons for the penalties. Would he not?

  486. jrshipley (10:02:11) :

    “”I never understand these calls for releasing things that are widely discussed in the literature.””

    jrshipley, are you claiming that the specific reason for each adjustment at Darwin is discussed in the literature? If so, would you mind telling me where it’s discussed?

  487. Sock Puppet of the Great Satan (09:04:40) :

    “”Then, you could have seen the changes in the 1940s as being due to *one of old weather stations being bombed by the Japanese* leading to a new station being built in a different location rather than a grand conspiracy of Evul Climate Scientists.””

    Are you claiming that ALL the changes in the 1940s are due to one weather station being moved?

  488. “jrshipley, are you claiming that the specific reason for each adjustment at Darwin is discussed in the literature? If so, would you mind telling me where it’s discussed?”

    I’m not the one making paranoid conspiratorial claims. I always find this sort of “gotcha” from deniers amusing. Why are you asking some random person in a blog thread to do your research for you? And how can you justify making the conspiratorial allegations in the top post before finding out the specific reasons? You’ve got the cart before the horse here, but I appreciate you helping to show how this entire post is based on paranoid speculation. Until you know the specific reasons, which you acknowledge not knowing, you haven’t got much of a smoking gun here, do you? Skeptics ask questions. Deniers presume conspiracies before even looking for answers. Which are you?

  489. “It’s really quite simple. Until the algorithms are released for independent review it is impossible to replicate the findings or diagnose the procedure. “Widely discussing it in the literature” does NOT feed the scientific bulldog. Until methods are released, scare quotes are quite appropriate. As Col. Mandrake put it, “The code, Jim. That’s it. A nice cup of tea, and the code . . .””

    I can only speak to my own experience, but if I want to reproduce or build upon an analysis described in the literature, I write my own code; I don’t go bugging other people for theirs. If the description in the literature is half-way decent, you’ll be able to do it. In this case, the publications by Peterson look pretty detailed to me; you should be able to have a try at it.

    In any case, all GISS code is public; you can go see how their homogenisation code works, no?

  490. Anyone can google your name and see that you have quite a reputation for fudging data to fit your theory, which is interesting given your accusations. Just what are your credentials, and how are you more qualified to make the adjustments you see fit than the international scientific community? You’ve got quite a swanky blog page; you’re pandering well. I trust this is profitable for you. Isn’t capitalism great?

  491. Wobble,

    “JJ, you’re wrong.”

    Nope.

    Please pay attention to what I write. This is the third time this has been explained.

    “It’s not enough to have the raw data and the methodology which is supposed to be used. Willis would need to know the recorded dynamic events which resulted in the adjustment methodology being applied – this is what he’s asking for.”

    Wrong on both counts.

    Once again, GHCN2 does not apply metadata-based adjustments to non-US stations. Outside the US, homogenization is by first-difference comparison to a reference series. The ‘records of dynamic events’ that you and he think you need likely do not exist, and at any rate do not apply. Further, it doesn’t appear that Willis has actually asked for anything, and he seems very resistant to the notion.

    Willis has the raw data. He has the methods. He needs to understand the methods and apply them. I gave a few suggestions above as to potentially productive ways that could be done.

    “You seem to believe that a referee, …”

    Your analogy is inapplicable. Homogenization is not a ballgame. Stick to the facts.

  492. Strange that an appeal for actual methods (as opposed to a “description” of methods) should raise such objection.

  493. jrshipley,

    Let me ask you again.

    Are you claiming that the specific reason for each adjustment at Darwin is discussed in the literature?

    If you don’t know, then you can’t claim that the information Willis is looking for is publicly available. Therefore, you should stop claiming that it is.

    If you are claiming that it’s publicly available, then please tell me where.

  494. JJ,

    Are you claiming that the GHCN2 homogenization can alter data at stations at which nothing has occurred to warrant such alterations – other than observed divergence from a reference series?

  495. Sjoerd Schreuder (09:56:33) :
    bill (08:44:07) :
    Figures unadjusted met office

    Did I notice “De Bilt” in there? That’s a village in The Netherlands, and it’s clearly not in the UK. The dutch met office KNMI is sited there.

    Apologies, I didn’t deselect it for the plot (you are lucky Salekhard – Siberia – was not included!). But it seems to fit in with the rest.

  496. @wobble: “Are you claiming that the GHCN2 homogenization can alter data at stations at which nothing has occurred to warrant such alterations – other than observed divergence from a reference series?”

    I think this is mostly correct, except insofar as it could be read to suggest that there is just one reference series against which all weather stations are measured, which is _not_ the case. My understanding of what NOAA is doing is comparing the data for each station to the other nearby weather stations with which that station’s data is otherwise most highly correlated.

    To provide a very bad example using completely made up numbers, assume the following weather stations show the following temperature readings:

    A: 1,2,3,4,3,2,1
    B: 3,4,5,6,5,4,3
    C: 1,2,3,4,7,6,5

    If you calculate the period-over-period differences in temperature readings:

    A: +1,+1,+1,-1,-1,-1
    B: +1,+1,+1,-1,-1,-1
    C: +1,+1,+1,+3,-1,-1

    C is highly correlated with A and B except for one outlier (the +3 rather than -1). If that difference is statistically significant, an adjustment will be applied to C.

    Now, to get to what I take your central point to be: could this adjustment be applied even if there is no historical information available that might explain what might have happened to temperature readings at C between year 4 (when the reading was 4) and year 5 (when the reading was 7)? YES.

    Why do that? Here’s my best understanding:

    1) This statistical method can be mechanically applied to the data set without having researchers make subjective assessments of what impact particular historical changes at particular stations might have on data collection at those stations.

    2) Outside of certain stations in the US, there is not good historical information about station conditions (“metadata”) that would enable such subjective assessments to be made.

    3) In some cases, historical information wouldn’t be helpful in reconstructing the reasons for oddities in the data (e.g., historical operator and transcription errors).

    4) The adjusted data is recommended by NOAA only for use in regional analysis. (If you are doing a global calculation, errors in the unadjusted data will hopefully mostly cancel each other out. If you are looking at one individual station’s data for some reason, you’re better off doing what you can with the metadata you have.)
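
    A minimal sketch of that first-difference comparison in Python, using the made-up numbers above. Real GHCN homogenization is far more elaborate; this only illustrates the flag-and-adjust idea:

    import numpy as np

    A = np.array([1, 2, 3, 4, 3, 2, 1], dtype=float)
    B = np.array([3, 4, 5, 6, 5, 4, 3], dtype=float)
    C = np.array([1, 2, 3, 4, 7, 6, 5], dtype=float)

    reference = (np.diff(A) + np.diff(B)) / 2  # first differences of neighbours
    dC = np.diff(C)
    flagged = np.abs(dC - reference) > 2       # crude significance threshold

    # Replace each flagged difference with the reference difference, then
    # rebuild the series from its first value.
    dC_adj = np.where(flagged, reference, dC)
    C_adj = np.concatenate(([C[0]], C[0] + np.cumsum(dC_adj)))
    print(C_adj)                               # [1. 2. 3. 4. 3. 2. 1.]

    Note that C’s spurious step is removed without any metadata about what happened at C – which is both the strength and the weakness being argued over in this thread.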

  497. Wobble,

    “Are you claiming that the GHCN2 homogenization can alter data at stations at which nothing has occurred to warrant such alterations – other than observed divergence from a reference series?”

    Yes.

    Now that you have grasped that part, please go back and read the rest of my posts on the subject, and endeavor to understand the other parts as well. Please do this before replying again …

    The short version is that the GHCN2 methods document that Willis quotes from says:

    You may see enormous adjustments applied to some stations. These adjustments may not track well with the temperatures local to that station. That is OK. While the sometimes goofy-looking adjusted data should probably not be used for local analyses, we think it actually gives more reliable results when applied to long term global trend analysis, and here’s why (cites paper).

    To which Willis artfully replies:

    AHA!!! I have found enormous adjustments to this one station!!!!! And these adjustments do not track well with the temperatures local to that station!!!!!! You guys are all a bunch of crooks and liars!!!

    To which Anthony adds:

    Smoking Gun!!!!!!

    Calm down folks.

    There may be something up here, but you certainly haven’t found it yet. In fact, all you’ve done so far is confirm what they already told you.

  498. Carbon dioxide: cause or effect?

    We should take climate change seriously, because such changes have created and destroyed huge empires within the history of mankind. However, it is a shame that billion-dollar decisions in Copenhagen may be based on tuned temperature data and wrong conclusions.

    The climate activists believe that CO2 has been the main reason for the climate changes of the last million years. However, the chemical calculations prove that the reason is the temperature changes of the oceans: warm seawater dissolves much less CO2 than cold seawater. See details from:

    http://www.antti-roine.com/viewtopic.php?f=10&t=73

    This means that the CO2 content of the atmosphere will automatically increase if the sea surface temperature increases for ANY reason. Most likely, carbon dioxide contributes to global warming, but it is hardly the primary reason for climate change.
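
    A rough illustration of that solubility point – a textbook van ’t Hoff temperature correction to Henry’s law, with approximate literature constants for CO2; this is not a calculation from the linked post, and it ignores salinity and carbonate chemistry:

    import math

    kH_298 = 3.3e-4      # mol/(m^3 Pa) for CO2 at 298.15 K (approximate)
    vant_hoff = 2400.0   # K, approximate d(ln kH)/d(1/T) for CO2

    def kH(T):
        # Henry's law solubility constant at temperature T (kelvin).
        return kH_298 * math.exp(vant_hoff * (1.0 / T - 1.0 / 298.15))

    # Warming the sea surface from 15 C to 20 C lowers equilibrium solubility:
    print(kH(288.15) / kH(293.15))  # ~1.15, i.e. ~15% more CO2 dissolves when colder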

    The emissivity and absorptivity of CO2 and of water vapor are of the same magnitude; however, the concentration of CO2 in the atmosphere (0.04%) is much less than that of water (1%). On this basis, the first assumption should be that the effect of water vapor on climate change is much larger than that of CO2.

    If it is true that:
    1. CRU adjusted and selected the data according to their mission,
    2. the town heating effect has largely been neglected,
    3. original data has been deleted, and
    4. the climate warming cannot be seen in countryside weather station data,
    then real scientists should be a little bit worried :)

    Climate models which correlate with the CRU data cannot validate the methods of CRU, because all these models have been calibrated and fitted with the CRU data. So it is not a miracle that they fit the CRU results. Another problem is that they do not take into account the effect of oceans.

    Kyoto-type agreements transfer emissions and jobs to those countries which do not care about environmental issues. They also channel emissions-trading funds to population growth and increases in welfare, both of which increase CO2 emissions. New post-Kyoto agreements should channel the funds directly to the development of our own sustainable technology, and especially to solar technology.

  499. I don’t quite get the point of all this: you start the post correctly saying that weather station data need to be adjusted, then you take one station out of 7000 and show that it was, indeed, adjusted. How this is a smoking gun I don’t understand.

    I understand you are not willing or able to reproduce the exact same procedure that GHCN used but what is the point? I mean, I can also cook pasta here at home, claim that I am cooking pasta the way you do and then say “YIKES! You see! This pasta tastes like crap ERGO JJ cannot cook”. This is beyond human logic, come on.

  500. More on Time of Observation from the warmist site I do battle on. Any comment here, Willis?

    “I had time to read a little bit about the Time of Observation Bias, and it’s fascinating. As I understand it, before November 19