The Smoking Gun At Darwin Zero

by Willis Eschenbach

People keep saying “Yes, the Climategate scientists behaved badly. But that doesn’t mean the data is bad. That doesn’t mean the earth is not warming.”

Darwin Airport – by Dominic Perrin via Panoramio

Let me take the second objection first. The earth has generally been warming since the Little Ice Age, around 1650, and there is general agreement that it has warmed since then (see, e.g., Akasofu). Climategate doesn’t affect that.

The first question, the integrity of the data, is different. People say “Yes, they destroyed emails, and hid from Freedom of Information requests, and messed with proxies, and fought to keep other scientists’ papers out of the journals … but that doesn’t affect the data, the data is still good.” Which sounds reasonable.

There are three main global temperature datasets. One is at CRU, the Climatic Research Unit of the University of East Anglia, where we’ve been trying to get access to the raw numbers. One is at NOAA/GHCN, the Global Historical Climatology Network. The final one is at NASA/GISS, the Goddard Institute for Space Studies. The three groups take raw data and “homogenize” it to remove things like the 2C jump that appears when a station is moved to a warmer location. The three global temperature records are usually called CRU, GISS, and GHCN. Both GISS and CRU, however, get almost all of their raw data from GHCN. All three produce very similar global historical temperature records from the raw data.

So I’m still on my multi-year quest to understand the climate data. You never know where this data chase will lead. This time it has led me to Australia. I got to thinking about Professor Wibjorn Karlen’s statement about Australia that I quoted here:

Another example is Australia. NASA [GHCN] only presents 3 stations covering the period 1897-1992. What kind of data is the IPCC Australia diagram based on?

If any trend it is a slight cooling. However, if a shorter period (1949-2005) is used, the temperature has increased substantially. The Australians have many stations and have published more detailed maps of changes and trends.

The folks at CRU told Wibjorn that he was just plain wrong. Here is the record they say is correct, the one Wibjorn was talking about: Fig. 9.12 in the UN IPCC Fourth Assessment Report, showing Northern Australia:

Figure 1. Temperature trends and model results in Northern Australia. Black line is observations (from Fig. 9.12 of the UN IPCC Fourth Assessment Report). Covers the area from 110E to 155E and from 30S to 11S. Based on the CRU land temperature data.

One of the things revealed in the released CRU emails is that the CRU basically uses the Global Historical Climatology Network (GHCN) dataset for its raw data. So I looked at the GHCN dataset. There I find three stations in Northern Australia, as Wibjorn had said, and nine stations in all of Australia, that cover the period 1900-2000. Here is the average of the GHCN unadjusted data for those three Northern stations, from AIS:

Figure 2. GHCN Raw Data, All 100-yr stations in IPCC area above.
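For anyone who wants to reproduce this kind of simple station average, here is a minimal sketch in Python. It assumes a simplified, whitespace-delimited GHCN-style file (station id, year, twelve monthly values in tenths of a degree, -9999 for missing); the real GHCN v2 file is fixed-width, so the parser would need adjusting, and the file name and station ids below are placeholders, not the actual Darwin identifiers.

from collections import defaultdict

MISSING = -9999

def annual_means(path, wanted_stations):
    """Return {station_id: {year: annual mean in deg C}} for the wanted stations."""
    out = defaultdict(dict)
    with open(path) as f:
        for line in f:
            parts = line.split()
            sid, year = parts[0], int(parts[1])
            if sid not in wanted_stations:
                continue
            months = [int(v) for v in parts[2:14]]
            good = [m / 10.0 for m in months if m != MISSING]
            if len(good) >= 10:              # require most months to be present
                out[sid][year] = sum(good) / len(good)
    return out

def station_average(per_station):
    """Unweighted year-by-year average across however many stations report that year."""
    years = sorted({y for rec in per_station.values() for y in rec})
    avg = {}
    for y in years:
        vals = [rec[y] for rec in per_station.values() if y in rec]
        avg[y] = sum(vals) / len(vals)
    return avg

# Usage (file name and station ids are hypothetical):
# recs = annual_means("ghcn_raw.txt", {"station_a", "station_b", "station_c"})
# avg = station_average(recs)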

So once again Wibjorn is correct: this looks nothing like the corresponding IPCC temperature record for Australia. But it’s too soon to tell. Professor Karlen is only showing 3 stations. Three is not a lot of stations, but that’s all of the century-long Australian records we have in the IPCC-specified region. OK, we’ve seen the longest station records, so let’s throw more records into the mix. Here is every station in the UN IPCC-specified region with temperature records that extend up to the year 2000, no matter when they started, which is 30 stations.

Figure 3. GHCN Raw Data, All stations extending to 2000 in IPCC area above.

Still no similarity with IPCC. So I looked at every station in the area. That’s 222 stations. Here’s that result:

Figure 4. GHCN Raw Data, all 222 stations in the IPCC area above.

So you can see why Wibjorn was concerned. This looks nothing like the UN IPCC data, which came from the CRU, which was based on the GHCN data. Why the difference?

The answer is, these graphs all use the raw GHCN data. But the IPCC uses the “adjusted” data. GHCN adjusts the data to remove what it calls “inhomogeneities”. So on a whim I thought I’d take a look at the first station on the list, Darwin Airport, so I could see what an inhomogeneity might look like when it was at home. And I could find out how large the GHCN adjustment for Darwin inhomogeneities was.

First, what is an “inhomogeneity”? I can do no better than quote from GHCN:

Most long-term climate stations have undergone changes that make a time series of their observations inhomogeneous. There are many causes for the discontinuities, including changes in instruments, shelters, the environment around the shelter, the location of the station, the time of observation, and the method used to calculate mean temperature. Often several of these occur at the same time, as is often the case with the introduction of automatic weather stations that is occurring in many parts of the world. Before one can reliably use such climate data for analysis of longterm climate change, adjustments are needed to compensate for the nonclimatic discontinuities.

That makes sense. The raw data will have jumps from station moves and the like. We don’t want to think it’s warming just because the thermometer was moved to a warmer location. Unpleasant as it may seem, we have to adjust for those as best we can.

I always like to start with the rawest data, so I can understand the adjustments. At Darwin there are five separate individual station records that are combined to make up the final Darwin record. These are the individual records of stations in the area, which are numbered from zero to four:

Figure 5. Five individual temperature records for Darwin, plus station count (green line). This raw data is downloaded from GISS, but GISS uses the GHCN raw data as the starting point for its analysis.

Darwin does have a few advantages over other stations with multiple records. There is a continuous record from 1941 to the present (Station 1). There is also a continuous record covering a century. Finally, the stations are in very close agreement over the entire period of the record. In fact, where there are multiple stations in operation they are so close that you can’t see the other records behind Station Zero.

This is an ideal station, because it also illustrates many of the problems with the raw temperature station data.

  • There is no one record that covers the whole period.
  • The shortest record is only nine years long.
  • There are gaps of a month and more in almost all of the records.
  • It looks like there are problems with the data at around 1941.
  • Most of the datasets are missing months.
  • For most of the period there are few nearby stations.
  • There is no one year covered by all five records.
  • The temperature dropped over a six-year period, from a high in 1936 to a low in 1941. The station did move in 1941 … but what happened in the previous six years?

In resolving station records, it’s a judgment call. First off, you have to decide if what you are looking at needs any changes at all. In Darwin’s case, it’s a close call. The record seems to be screwed up around 1941, but not in the year of the move.

Also, although the 1941 temperature shift seems large, I see a similar-sized shift from 1992 to 1999. Looking at the whole picture, I think I’d vote to leave it as it is; that’s always the best option when you don’t have other evidence. First, do no harm.

However, there’s a case to be made for adjusting it, particularly given the 1941 station move. If I decided to adjust Darwin, I’d do it like this:

Figure 6. A possible adjustment for Darwin. Black line shows the total amount of the adjustment (right scale) and the timing of the change.

I shifted the pre-1941 data down by about 0.6C. We end up with little change end to end in my “adjusted” data (shown in red); it is neither warming nor cooling, though the shift does reduce the apparent cooling in the raw data. Post-1941, where the other records overlap, they are very close, so I wouldn’t adjust them in any way. Why should we adjust those, when they all show exactly the same thing?
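To make the arithmetic of that kind of adjustment concrete, here is a minimal sketch in Python of the single-step shift described above. The break year and offset are the ones I chose by eye from the overlap in Figure 5, not anything GHCN uses; in practice the offset would be estimated from overlapping neighbouring records rather than eyeballed.

def step_adjust(series, break_year=1941, offset=-0.6):
    """series: {year: temperature in deg C}; shift every year before break_year by offset."""
    return {year: (temp + offset if year < break_year else temp)
            for year, temp in series.items()}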

OK, so that’s how I’d homogenize the data if I had to, but I vote against adjusting it at all. It only changes one station record (Darwin Zero), and the rest are left untouched.

Then I went to look at what happens when the GHCN removes the “in-homogeneities” to “adjust” the data. Of the five raw datasets, the GHCN discards two, likely because they are short and duplicate existing longer records. The three remaining records are first “homogenized” and then averaged to give the “GHCN Adjusted” temperature record for Darwin.

To my great surprise, here’s what I found. To explain the full effect, I am showing this with both datasets starting at the same point (rather than ending at the same point as they are often shown).

Figure 7. GHCN homogeneity adjustments to Darwin Airport combined record

YIKES! Before getting homogenized, temperatures in Darwin were falling at 0.7 Celsius per century … but after the homogenization, they were warming at 1.2 Celsius per century. And the adjustment that they made was over two degrees per century … when those guys “adjust”, they don’t mess around. And the adjustment is an odd shape, first rising stepwise, then climbing to end at about 2.4C.
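If you want to check trend numbers like these yourself, a simple least-squares fit on an annual series will do. Here is a minimal sketch in Python; the input is whatever annual series, raw or adjusted, you feed it, so it makes no claim about how GHCN computes its own trends.

def trend_per_century(series):
    """series: {year: temperature in deg C}; ordinary least-squares slope times 100."""
    years = sorted(series)
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(series[y] for y in years) / n
    num = sum((y - mean_x) * (series[y] - mean_y) for y in years)
    den = sum((y - mean_x) ** 2 for y in years)
    return 100.0 * num / den

# e.g. compare trend_per_century(raw_darwin) with trend_per_century(adjusted_darwin)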

Of course, that led me to look at exactly how the GHCN “adjusts” the temperature data. Here’s what they say:

GHCN temperature data include two different datasets: the original data and a homogeneity-adjusted dataset. All homogeneity testing was done on annual time series. The homogeneity-adjustment technique used two steps.

The first step was creating a homogeneous reference series for each station (Peterson and Easterling 1994). Building a completely homogeneous reference series using data with unknown inhomogeneities may be impossible, but we used several techniques to minimize any potential inhomogeneities in the reference series.

In creating each year’s first difference reference series, we used the five most highly correlated neighboring stations that had enough data to accurately model the candidate station.

The final technique we used to minimize inhomogeneities in the reference series used the mean of the central three values (of the five neighboring station values) to create the first difference reference series.

Fair enough, that all sounds good. They pick five neighboring stations, and average them. Then they compare the average to the station in question. If it looks wonky compared to the average of the reference five, they check any historical records for changes, and if necessary, they homogenize the poor data mercilessly. I have some problems with what they do to homogenize it, but that’s how they identify the inhomogeneous stations.
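To make the method concrete, here is a minimal sketch in Python of a reference series built along the lines they describe: year-to-year first differences, the five best-correlated neighbours, and the mean of the central three of the five values each year. It is an illustration of the idea, assuming complete, equal-length annual series and at least five neighbours; it is not GHCN’s actual code.

import numpy as np

def first_diff(series):
    """Year-to-year first differences of an annual series."""
    return np.diff(np.asarray(series, dtype=float))

def reference_series(candidate, neighbours):
    """candidate: annual means; neighbours: list of same-length annual mean series."""
    cand = np.asarray(candidate, dtype=float)
    cand_fd = first_diff(cand)
    # rank neighbours by how well their first-difference series track the candidate
    corrs = [np.corrcoef(cand_fd, first_diff(n))[0, 1] for n in neighbours]
    top5 = [first_diff(neighbours[i]) for i in np.argsort(corrs)[::-1][:5]]
    # for each year, keep the central three of the five neighbour values
    stacked = np.sort(np.vstack(top5), axis=0)
    ref_fd = stacked[1:4].mean(axis=0)
    # integrate the first differences back into a series anchored at the candidate's start
    return np.concatenate(([cand[0]], cand[0] + np.cumsum(ref_fd)))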

OK … but given the scarcity of stations in Australia, I wondered how they would find five “neighboring stations” in 1941 …

So I looked it up. The nearest station that covers the year 1941 is 500 km away from Darwin. Not only is it 500 km away, it is the only station within 750 km of Darwin that covers the 1941 time period. (It’s also a pub, Daly Waters Pub to be exact, but hey, it’s Australia, good on ya.) So there simply aren’t five stations to make a “reference series” out of to check the 1936-1941 drop at Darwin.
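Checking the neighbour question yourself is straightforward once you have station coordinates (for example from the GHCN inventory file). Here is a minimal sketch in Python using the standard haversine great-circle distance; the coordinates in the usage line are only Darwin’s approximate position.

from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def stations_within(inventory, lat0, lon0, radius_km=750):
    """inventory: {station_id: (lat, lon)}; return ids within radius_km of (lat0, lon0)."""
    return [sid for sid, (lat, lon) in inventory.items()
            if haversine_km(lat0, lon0, lat, lon) <= radius_km]

# e.g. stations_within(inventory, -12.4, 130.9)   # Darwin's approximate coordinates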

Intrigued by the curious shape of the average of the homogenized Darwin records, I then went to see how they had homogenized each of the individual station records. What made up that strange average shown in Fig. 7? I started at zero with the earliest record. Here is Station Zero at Darwin, showing the raw and the homogenized versions.

Figure 8. Darwin Zero Homogeneity Adjustments. Black line shows the amount and timing of the adjustments.

Yikes again, double yikes! What on earth justifies that adjustment? How can they do that? We have five different records covering Darwin from 1941 on. They all agree almost exactly. Why adjust them at all? They’ve just added a huge artificial totally imaginary trend to the last half of the raw data! Now it looks like the IPCC diagram in Figure 1, all right … but a six degree per century trend? And in the shape of a regular stepped pyramid climbing to heaven? What’s up with that?

Those, dear friends, are the clumsy fingerprints of someone messing with the data Egyptian style … they are indisputable evidence that the “homogenized” data has been changed to fit someone’s preconceptions about whether the earth is warming.

One thing is clear from this. People who say that “Climategate was only about scientists behaving badly, but the data is OK” are wrong. At least one part of the data is bad, too. The Smoking Gun for that statement is at Darwin Zero.

So once again, I’m left with an unsolved mystery. How and why did the GHCN “adjust” Darwin’s historical temperature to show radical warming? Why did they adjust it stepwise? Do Phil Jones and the CRU folks use the “adjusted” or the raw GHCN dataset? My guess is the adjusted one since it shows warming, but of course we still don’t know … because despite all of this, the CRU still hasn’t released the list of data that they actually use, just the station list.

Another odd fact, the GHCN adjusted Station 1 to match Darwin Zero’s strange adjustment, but they left Station 2 (which covers much of the same period, and as per Fig. 5 is in excellent agreement with Station Zero and Station 1) totally untouched. They only homogenized two of the three. Then they averaged them.

That way, you get an average that looks kinda real, I guess, it “hides the decline”.

Oh, and for what it’s worth, care to know the way that GISS deals with this problem? Well, they only use the Darwin data after 1963, a fine way of neatly avoiding the question … and also a fine way to throw away all of the inconveniently colder data prior to 1941. It’s likely a better choice than the GHCN monstrosity, but it’s a hard one to justify.

Now, I want to be clear here. The blatantly bogus GHCN adjustment for this one station does NOT mean that the earth is not warming. It also does NOT mean that the three records (CRU, GISS, and GHCN) are generally wrong either. This may be an isolated incident; we don’t know. But every time the data gets revised and homogenized, the trends keep increasing. Now, GISS does its own adjustments. However, as they keep telling us, they get the same answer as GHCN gets … which makes their numbers suspicious as well.

And CRU? Who knows what they use? We’re still waiting on that one, no data yet …

What this does show is that there is at least one temperature station where the trend has been artificially increased to give a false warming where the raw data shows cooling. In addition, the average raw data for Northern Australia is quite different from the adjusted, so there must be a number of … mmm … let me say “interesting” adjustments in Northern Australia other than just Darwin.

And with the Latin saying “Falsus in uno, falsus in omnibus” (false in one thing, false in all) as our guide, until all of the station “adjustments” are examined, adjustments of CRU, GHCN, and GISS alike, we can’t trust anyone using homogenized numbers.

Regards to all, keep fighting the good fight,

w.

FURTHER READING:

My previous post on this subject.

The late and much missed John Daly, irrepressible as always.

More on Darwin history, it wasn’t Stevenson Screens.

NOTE: Figures 7 and 8 updated to fix a typo in the titles. 8:30PM PST 12/8 – Anthony



909 Comments
December 8, 2009 1:41 am

Wow. More manufactured fakeness than a million Hollywood blockbusters! Not a smoking gun, not a nuclear explosion, the birth of a new GALAXY! When this hits the fan, Copenhagen will become “broken wagon” – the wheels are falling off! Great job!

yonason
December 8, 2009 1:44 am

RE my yonason (01:26:02) :
I said, “The only reason to “adjust” it is to hide the fact that it was bad data to begin with.” By that I meant, of course, that if “adjustments” really were needed, then the data was bad. However, when the data is good, that’s even worse, because we are no longer dealing with incompetence, but premeditated deliberate deception.

rcrejects
December 8, 2009 1:46 am

Could this perhaps explain the reluctance of CRU to release the data?

harpo
December 8, 2009 1:48 am

Greetings from Australia.
Nobody in the current Australian government cares a damn about whether the temperature has gone up down or side ways. They want to implement a tax so they can collect money from the rich polluters (their words not mine) and give it to the poor working man (that’s my simplistic reading of it).
Climate Change, Climate Change, Climate Change, Tax will fix it, Tax will fix it, Tax will fix it… Climate Change, Climate Change…..
Now will you critical thinkers just give up and love Big Brother. 2 + 2 = 5 remember…. 2 + 2 has always equalled 5…..
As an engineer educated in Australia in the 80’s it both breaks my heart and scares the [snip] out of me…
(Interestingly, before 1984, Orwell’s 1984 was required reading for all year 11/12(?) students in Victoria… now I can’t find anybody under the age of 40 who has read it)

Andrew P
December 8, 2009 1:49 am

“Those, dear friends, are the clumsy fingerprints of someone messing with the data Egyptian style … they are indisputable evidence that the “homogenized” data has been changed to fit someone’s preconceptions about whether the earth is warming.”
Wow. So GHCN blatantly adjusts the raw data, to create a post-war warming trend where none existed. And the GISS data appears to match GHCN’s (once it has also been ‘adjusted’). And CRU won’t release their raw data, which doesn’t inspire much confidence – not that I had much in them after recent developments.
From this example, and given the implications for the world economy currently being discussed in Copenhagen, I think that the precautionary principle should be adopted, and all adjusted data from GHCN, GISS and CRU should be classed as fraudulent, until proven otherwise. Willis’ essay should be sent to every politician and journalist possible.

December 8, 2009 1:50 am

“Barry Foster (00:51:02) :
Michael. Or could it have been in December 1942 when the British radio station in Hong Kong picked up radio traffic about the forthcoming attack on Pearl Harbour and decyphered it – and it was made known to Roosevelt, who kept it to himself in order to allow a way in to the war? Ooohh!”
Seeing as the Japanese attack on Pearl Harbour took place a year earlier, I think this is unlikely.
Seriously, though – this is great work and demonstrates exactly why climate science must be conducted openly and with free access not only to the raw data, but the methodology used to analyse it.
As a layman I look at the raw data and the “homogenized” version and can only assume that “homogenized” actually means massaged to fit a political preconception.

John in NZ
December 8, 2009 1:55 am

Thank you Willis.
I really learn a lot from your posts.

Jack Green
December 8, 2009 1:56 am

I think NASA knows that their CO2 models are flawed. Note the comment in this paper (thanks Stephen Wilde) that SABER is directly measuring CO2 ratios where GCM model values are being used in climate simulations. Interesting that one hand doesn’t know what the other is doing, or do they?
Abstract from:
http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20090004446_2009001269.pdf
The Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) experiment is one of four instruments on NASA’s Thermosphere-Ionosphere-Energetics and Dynamics (TIMED) satellite. SABER measures broadband infrared limb emission and derives vertical profiles of kinetic temperature (Tk) from the lower stratosphere to approximately 120 km, and vertical profiles of carbon dioxide (CO2) volume mixing ratio (vmr) from approximately 70 km to 120 km. In this paper we report on SABER Tk/CO2 data in the mesosphere and lower thermosphere (MLT) region from the version 1.06 dataset. The continuous SABER measurements provide an excellent dataset to understand the evolution and mechanisms responsible for the global two-level structure of the mesopause altitude. SABER MLT Tk comparisons with ground-based sodium lidar and rocket falling sphere Tk measurements are generally in good agreement. However, SABER CO2 data differs significantly from TIME-GCM model simulations. Indirect CO2 validation through SABER-lidar MLT Tk comparisons and SABER-radiation transfer comparisons of nighttime 4.3 micron limb emission suggest the SABER-derived CO2 data is a better representation of the true atmospheric MLT CO2 abundance compared to model simulations of CO2 vmr.

Stacey
December 8, 2009 1:57 am

Dear Willis
This is a great post and it demonstrates something that I have always intuitively felt about the treatment of data and the way the graphs are deliberately drawn to create alarm.
I know you looked at the CET and would appreciate a link, which I have lost.

Deadman
December 8, 2009 2:01 am

there’s a typo in the penultimate paragraph: in omnis is Latin for “in all.”

Ken Seton
December 8, 2009 2:02 am

Willis – Great read and easy to follow.
You do us a great service.
Thanks from Sydney (Aus).

Scott of Melb Australia
December 8, 2009 2:12 am

I have just forwarded this link to one of our more science-savvy opposition senators in Australia.
(you know the ones that revolted against the Carbon tax here in Australia and voted it down)
Plus had to include Andrew Bolt hoping it will get into his column in the MSM tomorrow.
Nothing like giving them a little Ammo.
Great article
Scott

KeithGuy
December 8, 2009 2:12 am

Excellent work Willis,
When it comes to the agreement between the three main global temperature data sets the term “self fulfilling prophecy” comes to mind.
Someone somewhere started with the notion that there has been a warming of something like 0.7 of a degree over the twentieth century, and the CRU, GISS, and GHCN have manipulated their data, using three different, contrived methodologies, until it agrees with their pre-conceived ideal.
It appears to me that the whole exercise of reconstructing historic global temperatures owes very little to science and much more to “clever” statistics.

supercritical
December 8, 2009 2:13 am

Raw data is just that.
And, anybody subsequently approaching the raw data must have an a-priori motivation, which must be made explicit.
If there are gaps, and bits of the raw data that are unsuitable for the purposes of the current study, then why not leave them out altogether?
IF the motivation is to determine whether or not today’s ambient global air temperatures are hotter or colder than they were, then a continuous record is NOT required. Rather, as long as there were matching continuous sequences of a few years, this would be sufficient for the purpose.
So why do the climate scientists need a ‘continuous record’? And for what purpose are they trying to create an artefactual proxy of the real raw data? And in so doing, aren’t they creating a subjective fiction? An artefact? A man-made simulation?
Isn’t this similar to producing a marked-up copy of the dead-sea scrolls, with the corrections in felt-tipped pen, and missing bits added-in in biro, and then calling it ‘the definitive data-set’ ?

Geoff Sherrington
December 8, 2009 2:14 am

There is an unknown in the equation.
The Darwin data were collected by the Bureau of Meteorology, who have their own sets of “adjustments”. I am trying to discover if the Darwin data sent to GHCN are unadjusted or Australian-adjusted. By coincidence, I have been working on Darwin records for some months. There was an early station shift from the Post Office to the Regional Office near the town in 1962 (which would have gradually built up some UHI) to the airport, which in 1940 was way, way out of town but which is now surrounded by suburbs on most sides, so UHI is complicit again.
There is a BOM record of data overlap in the period 1967 to 1973. Overall, the Tmax was about 0.5 deg C higher at the airport and the Tmin was about 0.5 deg C lower at the airport during these 7 years. The Tmax averaged 31.5 deg C and 32.1 deg C at Office and Airport respectively. The Tmin averaged 23.8 and 23.2 at Office and Airport. Of course, if you take the mean, the Office is the same as the airport.
However, my problem is that I do not know if the Office and the Airport use raw or Australian adjusted data. I suspect the latter. If you can tell me how to display graphs on this blog I’ll put up a spaghetti graph of 5 different versions of annual Taverage at Darwin, 1886 to 2008. The worst years show a difference between adjusters of about 3.5 deg C, with KNMI GHCN version 2 adjusted being lower than recent BOM official figures.
I still do not know if any of us has seen truly raw data for Darwin.
Or from any other Australian station.

December 8, 2009 2:16 am

http://www.bbc.co.uk/blogs/ethicalman/2009/03/obama_will_circumvent_congress_to_limit_us_emissio.html
Democracy supplanted by the will of Obama and the unelected EPA of America.
See what John Podesta, Obama’s top adviser has to say.

Aligner
December 8, 2009 2:23 am

An excellent article.
But it doesn’t stop there. In order to arrive at a “global average” gridding is used with backfilling of sparse areas using methods such as averaging or interpolating from “nearby” stations, taking no account of topography, etc. Any error such as this therefore carries more weight than in areas where records are more prolific.
And nowhere is the margin of error introduced accounted for. It has always seemed to me that the degree of warming being measured is probably less than the margin of error of the temperature record itself, especially when SSTs from bucket measurements are added. Add in UHI effect (even if you adjust for that too) and the margin for error increases again.
Ultimately, therefore, IMHO the whole exercise becomes meaningless and making alarmist observations, let alone blaming CO2, preposterous.

Donald (Australia)
December 8, 2009 2:24 am

It would be interesting to feed this through to Copenhagen, and have some brave soul present it to the assembled zombies.
A clove of garlic, sunlight, or the sight of a wooden stake could not arouse more panic, or howls of anger.

December 8, 2009 2:26 am

Lets all “homogenize” our data
Into chunks of bits and pieces,
Lets forget which way is up or down
And randomize our thesis,
So black is white and white is brown
And purple wears a hat,
And when our data’s goose is cooked,
We’ll all say, “How HOT is that?”
.
.
©2009 Dave Stephens
http://www.caricaturesbydave.com

skylarker
December 8, 2009 2:29 am

From a tyro sceptic. Thank you for this excellent paper. More please.

Rob
December 8, 2009 2:38 am

This could explain the MSM reaction: he owns ALL the papers in Australia. This country has no freedom of the press anymore basically
copied from another site
“Phil Kean wonders why Sky gives so much time to the global warming scare. Perhaps it could be because it is owned by News International which is run by James Murdoch who is married to a climate change fanatic. Kathryn Hufschmid runs the Clinton Climate Initiative.
.
I understand that News International also owns a number of newspapers in this country. I don’t suppose that the fact that the boss’s wife is AGW nutter has any influence on the editorial policy of those newspapers.
.
It almost makes me wish that Daddy Rupert still had personal control of the media in this country.

December 8, 2009 2:43 am

Onwards and upwards.
Great work Willis; much appreciated amidst all the BS surrounding Copenhagen.

December 8, 2009 2:44 am

The lack of transparency is the problem. The adjustments should be completely disclosed for all stations, including the reasons for those adjustments. You have to be careful drawing conclusions without knowing why the adjustments were made. It certainly looks suspicious. In Torok, S. and Nicholls, N., 1996, An historical temperature record for Australia, Aust. Met. Mag. 45, 251-260, which I think was the first paper developing a “High Quality” record (not sure that is how I would personally describe it given the Australian data and station history, but moving along…), one example of adjustments is given, for Mildura, one of the 224 stations used in that paper. The adjustments and reasons (see p. 257):
<1989 -0.6 Move to higher, clearer ground
<1946 -0.9 Move from Post Office to Airport
<1939 +0.4 New screen
<1930 +0.3 Move from park to Post Office
1943 +1.0 Pile of dirt near screen during construction of air-raid shelter
1903 +1.5 Temporary site one mile east
1902 -1.0 Problems with shelter
1901 -0.5 Problems with shelter
1900 -0.5 Problems with shelter
1892 +1.0 Temporary site
1890 -1.0 Detect
“Detect” refers to use of the Detect program (see paper). The “<” symbol indicates that the adjustment was made to all years prior to the indicated year.
The above gives an idea of the type of adjustments used in that paper and the number of adjustments made to data. For the 224 candidate stations 2,812 adjustments were made in total. A couple of points, the adjustments are subjective by their very nature. Use of overlapping multi station data can assist. I have concerns about the size of the errors these multiple adjustments introduce but I am certainly no expert. I wonder what the error bar is on the final plot when we are talking of average warming in the tenths of a degree C over a century. The stations really never were designed to provide the data that it is being used for but that is well known.
My point is without the detailed station metadata it might be too early to draw a conclusion. This is why we need to know what were the adjustments made to each station and the reasons. Surely this data exists (if it doesn’t then the entire adjusted data series is useless as it can’t be scrutinised by other scientists – maybe they did a CRU with it!?) and if they do why are they not made public or at the very least made available to researchers. Have the data keepers been asked for this? I am assuming they have.

Charles. U. Farley
December 8, 2009 2:45 am

From FOIA2009.zip/documents/osborn-tree6/tsdependent/compute_neff.pro
; ***Although there are no programming errors, as far as I know, the
; ***method would seem to be in error, since neff(raw) is always greater
; ***than neff(hi) plus neff(low) – which shouldn’t be true, otherwise
; ***some information has somehow been lost. For now, therefore, run
; ***compute_neff for unfiltered series, then for low-pass series, and
; ***subtract the results to obtain the neff for the high-pass series!

Ryan Stephenson
December 8, 2009 2:46 am

Can I please correct you. You keep using the phrase “raw data”. Averaged figures are not “raw data”. Stevenson screens record maximum and minimum DAILY temperatures. This is the real RAW data.
When you do an analysis of temperature data over one year then you should always show it as a distribution. It will have a mean and a standard deviation. Take the UK. It may have a mean annual temperature of 15 Celsius with a standard deviation of 20 Celsius.
Without the distribution the warmists can say “The mean of 2001 was 0.1 Celsius higher than the mean of 2000. This is significant – we are heating the planet”. With the distribution you would say “The mean of 2001 was 0.1 Celsius higher than 2000 but since the standard deviation of the annual distribution is 20 Celsius, we cannot consider this as being statistically significant”.
If we had the REAL raw data we could almost certainly show that the off-trend averages of the last few decades was of no statistical significance anyway, before we got into the nitty-gritty of fudges to the data. By using slack language to describe the mean annual temperatures as “Raw Data” we are falling into a trap set by the warmists.
