by Willis Eschenbach
People keep saying “Yes, the Climategate scientists behaved badly. But that doesn’t mean the data is bad. That doesn’t mean the earth is not warming.”

Darwin Airport – by Dominic Perrin via Panoramio
Let me take the second objection first. The earth has generally been warming since the Little Ice Age, around 1650, and there is general agreement that it has warmed since then. See, e.g., Akasofu. Climategate doesn't affect that.
The question of the data's integrity is different. People say "Yes, they destroyed emails, and hid from Freedom of Information requests, and messed with proxies, and fought to keep other scientists' papers out of the journals … but that doesn't affect the data; the data is still good." Which sounds reasonable.
There are three main global temperature datasets. One is at the CRU, Climate Research Unit of the University of East Anglia, where we’ve been trying to get access to the raw numbers. One is at NOAA/GHCN, the Global Historical Climate Network. The final one is at NASA/GISS, the Goddard Institute for Space Studies. The three groups take raw data, and they “homogenize” it to remove things like when a station was moved to a warmer location and there’s a 2C jump in the temperature. The three global temperature records are usually called CRU, GISS, and GHCN. Both GISS and CRU, however, get almost all of their raw data from GHCN. All three produce very similar global historical temperature records from the raw data.
So I'm still on my multi-year quest to understand the climate data. You never know where this data chase will lead; this time, it has led me to Australia. I got to thinking about Professor Wibjorn Karlen's statement about Australia that I quoted here:
Another example is Australia. NASA [GHCN] only presents 3 stations covering the period 1897-1992. What kind of data is the IPCC Australia diagram based on?
If any trend it is a slight cooling. However, if a shorter period (1949-2005) is used, the temperature has increased substantially. The Australians have many stations and have published more detailed maps of changes and trends.
The folks at CRU told Wibjorn that he was just plain wrong. Here’s what they said is right, the record that Wibjorn was talking about, Fig. 9.12 in the UN IPCC Fourth Assessment Report, showing Northern Australia:

Figure 1. Temperature trends and model results in Northern Australia. Black line is observations (from Fig. 9.12 of the UN IPCC Fourth Assessment Report). Covers the area from 110E to 155E and from 30S to 11S. Based on the CRU land temperature data.
One of the things revealed in the released CRU emails is that the CRU basically uses the Global Historical Climate Network (GHCN) dataset for its raw data. So I looked at the GHCN dataset. There I found three stations in North Australia, as Wibjorn had said, and nine stations in all of Australia, that cover the period 1900-2000. Here is the average of the GHCN unadjusted data for those three Northern stations, from GISS:

Figure 2. GHCN Raw Data, All 100-yr stations in IPCC area above.
So once again Wibjorn is correct; this looks nothing like the corresponding IPCC temperature record for Australia. But it's too soon to tell. Professor Karlen is showing only three stations, and three is not a lot, but those are all the century-long Australian records we have in the IPCC-specified region. OK, we've seen the longest station records, so let's throw more records into the mix. Here's every station in the UN IPCC-specified region with temperature records that extend up to the year 2000, no matter when they started: 30 stations in all.

Figure 3. GHCN Raw Data, All stations extending to 2000 in IPCC area above.
Still no similarity with IPCC. So I looked at every station in the area. That’s 222 stations. Here’s that result:

Figure 4. GHCN Raw Data, all 222 stations in the IPCC area above.
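For anyone who wants to repeat this kind of averaging, here is a minimal sketch (my own illustration, not the GHCN or CRU code) of combining raw station records into one regional series. The function name and the base period are my assumptions.

```python
import numpy as np

def regional_mean(records, base=(1961, 1991)):
    """Combine several station series into one regional average.
    records: dict of station name -> {year: mean temperature}.
    Each station is converted to anomalies from a common base period
    (here 1961-1990, an assumption) so that stations with different
    absolute temperatures can be averaged together."""
    years = sorted({y for r in records.values() for y in r})
    rows = []
    for r in records.values():
        base_vals = [r[y] for y in range(*base) if y in r]
        if not base_vals:          # skip stations with no base-period data
            continue
        clim = sum(base_vals) / len(base_vals)
        rows.append([r.get(y, np.nan) - clim for y in years])
    # nanmean ignores years that are missing at individual stations
    return years, np.nanmean(np.array(rows), axis=0)
```

Stations that never report during the base period are dropped rather than guessed at; that choice matters when, as here, coverage is sparse.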
So you can see why Wibjorn was concerned. This looks nothing like the UN IPCC data, which came from the CRU, which was based on the GHCN data. Why the difference?
The answer is, these graphs all use the raw GHCN data. But the IPCC uses the “adjusted” data. GHCN adjusts the data to remove what it calls “inhomogeneities”. So on a whim I thought I’d take a look at the first station on the list, Darwin Airport, so I could see what an inhomogeneity might look like when it was at home. And I could find out how large the GHCN adjustment for Darwin inhomogeneities was.
First, what is an “inhomogeneity”? I can do no better than quote from GHCN:
Most long-term climate stations have undergone changes that make a time series of their observations inhomogeneous. There are many causes for the discontinuities, including changes in instruments, shelters, the environment around the shelter, the location of the station, the time of observation, and the method used to calculate mean temperature. Often several of these occur at the same time, as is often the case with the introduction of automatic weather stations that is occurring in many parts of the world. Before one can reliably use such climate data for analysis of longterm climate change, adjustments are needed to compensate for the nonclimatic discontinuities.
That makes sense. The raw data will have jumps from station moves and the like. We don’t want to think it’s warming just because the thermometer was moved to a warmer location. Unpleasant as it may seem, we have to adjust for those as best we can.
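As an illustration only (this is not the GHCN procedure; the function and the overlap window are my own), a station-move adjustment can be as simple as differencing the means on either side of the documented move:

```python
def adjust_for_move(series, move_year, overlap=5):
    """Estimate and remove a step change at a documented station move by
    differencing the means of `overlap` years on each side of the move.
    series: dict of year -> temperature. Returns a new, adjusted dict."""
    before = [t for y, t in series.items() if move_year - overlap <= y < move_year]
    after = [t for y, t in series.items() if move_year <= y < move_year + overlap]
    step = sum(after) / len(after) - sum(before) / len(before)
    # shift the pre-move segment so the two pieces line up
    return {y: (t + step if y < move_year else t) for y, t in series.items()}
```

Applied to a record with a drop at a 1941 move, this shifts the pre-move years down so the two segments join smoothly, much like the manual adjustment shown in Figure 6.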
I always like to start with the rawest data, so I can understand the adjustments. At Darwin there are five separate individual station records that are combined to make up the final Darwin record. These are the individual records of stations in the area, which are numbered from zero to four:

Figure 5. Five individual temperature records for Darwin, plus station count (green line). This raw data is downloaded from GISS, but GISS uses the GHCN raw data as the starting point for its analysis.
Darwin does have a few advantages over other stations with multiple records. There is a continuous record from 1941 to the present (Station 1). There is also a continuous record covering a century. Finally, the stations are in very close agreement over the entire period of the record. In fact, where multiple stations are in operation they are so close that you can't see the records behind Station Zero.
This is an ideal station, because it also illustrates many of the problems with the raw temperature station data.
- There is no one record that covers the whole period.
- The shortest record is only nine years long.
- There are gaps of a month and more in almost all of the records.
- It looks like there are problems with the data at around 1941.
- Most of the datasets are missing months.
- For most of the period there are few nearby stations.
- There is no one year covered by all five records.
- The temperature dropped over a six year period, from a high in 1936 to a low in 1941. The station did move in 1941 … but what happened in the previous six years?
In resolving station records, it’s a judgment call. First off, you have to decide if what you are looking at needs any changes at all. In Darwin’s case, it’s a close call. The record seems to be screwed up around 1941, but not in the year of the move.
Also, although the 1941 temperature shift seems large, I see a similar sized shift from 1992 to 1999. Looking at the whole picture, I think I’d vote to leave it as it is, that’s always the best option when you don’t have other evidence. First do no harm.
However, there’s a case to be made for adjusting it, particularly given the 1941 station move. If I decided to adjust Darwin, I’d do it like this:

Figure 6. A possible adjustment for Darwin. Black line shows the total amount of the adjustment (right scale) and the timing of the change.
I shifted the pre-1941 data down by about 0.6C. We end up with little change end to end in my "adjusted" data (shown in red); it's neither warming nor cooling. However, it reduces the apparent cooling in the raw data. Post-1941, where the other records overlap, they are very close, so I wouldn't adjust them in any way. Why should we adjust those, when they all show exactly the same thing?
OK, so that’s how I’d homogenize the data if I had to, but I vote against adjusting it at all. It only changes one station record (Darwin Zero), and the rest are left untouched.
Then I went to look at what happens when the GHCN removes the “in-homogeneities” to “adjust” the data. Of the five raw datasets, the GHCN discards two, likely because they are short and duplicate existing longer records. The three remaining records are first “homogenized” and then averaged to give the “GHCN Adjusted” temperature record for Darwin.
To my great surprise, here’s what I found. To explain the full effect, I am showing this with both datasets starting at the same point (rather than ending at the same point as they are often shown).

Figure 7. GHCN homogeneity adjustments to Darwin Airport combined record
YIKES! Before getting homogenized, temperatures in Darwin were falling at 0.7 Celsius per century … but after the homogenization, they were warming at 1.2 Celsius per century. And the adjustment they made was over two degrees per century … when those guys "adjust", they don't mess around. The adjustment also has an odd shape, first going up stepwise and then climbing to stop at roughly 2.4C.
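The per-century trends quoted here are just least-squares slopes. A quick way to check such numbers yourself (my own sketch, nothing from GHCN):

```python
import numpy as np

def trend_per_century(years, temps):
    """Least-squares slope of temperature against year, expressed in
    degrees C per century (slope per year times 100)."""
    slope, _intercept = np.polyfit(years, temps, 1)
    return slope * 100.0
```

Fitting the raw and homogenized Darwin series separately with a function like this is how one reproduces the "-0.7 versus +1.2 C/century" comparison.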
Of course, that led me to look at exactly how the GHCN "adjusts" the temperature data. Here's what they say:
GHCN temperature data include two different datasets: the original data and a homogeneity-adjusted dataset. All homogeneity testing was done on annual time series. The homogeneity-adjustment technique used two steps.
The first step was creating a homogeneous reference series for each station (Peterson and Easterling 1994). Building a completely homogeneous reference series using data with unknown inhomogeneities may be impossible, but we used several techniques to minimize any potential inhomogeneities in the reference series.
…
In creating each year’s first difference reference series, we used the five most highly correlated neighboring stations that had enough data to accurately model the candidate station.
…
The final technique we used to minimize inhomogeneities in the reference series used the mean of the central three values (of the five neighboring station values) to create the first difference reference series.
Fair enough, that all sounds good. They pick five neighboring stations, and average them. Then they compare the average to the station in question. If it looks wonky compared to the average of the reference five, they check any historical records for changes, and if necessary, they homogenize the poor data mercilessly. I have some problems with what they do to homogenize it, but that’s how they identify the inhomogeneous stations.
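In outline, the Peterson and Easterling reference-series step works on year-to-year first differences. A simplified sketch (ignoring missing data, anomaly conversion, and the correlation-based neighbor selection, all of which the real method handles) might look like this:

```python
import numpy as np

def reference_series(neighbors):
    """Sketch of the first-difference reference-series idea.
    neighbors: array-like of shape (5, n_years) holding the five most
    highly correlated neighbor series. For each year-to-year step, take
    the five first differences, keep the central three (discard the
    highest and lowest), average them, then cumulatively sum the result
    back into a series."""
    diffs = np.diff(np.asarray(neighbors, dtype=float), axis=1)
    trimmed = np.sort(diffs, axis=0)[1:-1, :]   # drop min and max each year
    ref_diff = trimmed.mean(axis=0)             # mean of the central three
    return np.concatenate([[0.0], np.cumsum(ref_diff)])
```

The candidate station is then compared against this reference; a step in the candidate that has no counterpart in the reference is flagged as an inhomogeneity.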
OK … but given the scarcity of stations in Australia, I wondered how they would find five “neighboring stations” in 1941 …
So I looked it up. The nearest station that covers the year 1941 is 500 km away from Darwin. Not only is it 500 km away, it is the only station within 750 km of Darwin that covers the 1941 time period. (It’s also a pub, Daly Waters Pub to be exact, but hey, it’s Australia, good on ya.) So there simply aren’t five stations to make a “reference series” out of to check the 1936-1941 drop at Darwin.
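The distances involved are easy to check with a great-circle calculation. The coordinates below are approximate values I've assumed for Darwin Airport and Daly Waters; the result comes out near the 500 km quoted above.

```python
from math import radians, sin, cos, asin, sqrt

def km_between(lat1, lon1, lat2, lon2):
    """Great-circle distance in km (haversine formula), to check which
    stations fall within a given radius of Darwin."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * asin(sqrt(a))

# Darwin Airport vs Daly Waters, approximate coordinates (my assumption)
d = km_between(-12.4, 130.9, -16.25, 133.4)
```

Running the same check over a station list is a one-liner, which makes the "how many stations within 750 km cover 1941?" question straightforward to verify.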
Intrigued by the curious shape of the average of the homogenized Darwin records, I then went to see how they had homogenized each of the individual station records. What made up that strange average shown in Fig. 7? I started at zero with the earliest record. Here is Station Zero at Darwin, showing the raw and the homogenized versions.

Figure 8 Darwin Zero Homogeneity Adjustments. Black line shows amount and timing of adjustments.
Yikes again, double yikes! What on earth justifies that adjustment? How can they do that? We have five different records covering Darwin from 1941 on. They all agree almost exactly. Why adjust them at all? They’ve just added a huge artificial totally imaginary trend to the last half of the raw data! Now it looks like the IPCC diagram in Figure 1, all right … but a six degree per century trend? And in the shape of a regular stepped pyramid climbing to heaven? What’s up with that?
Those, dear friends, are the clumsy fingerprints of someone messing with the data Egyptian style … they are indisputable evidence that the “homogenized” data has been changed to fit someone’s preconceptions about whether the earth is warming.
One thing is clear from this. People who say that “Climategate was only about scientists behaving badly, but the data is OK” are wrong. At least one part of the data is bad, too. The Smoking Gun for that statement is at Darwin Zero.
So once again, I’m left with an unsolved mystery. How and why did the GHCN “adjust” Darwin’s historical temperature to show radical warming? Why did they adjust it stepwise? Do Phil Jones and the CRU folks use the “adjusted” or the raw GHCN dataset? My guess is the adjusted one since it shows warming, but of course we still don’t know … because despite all of this, the CRU still hasn’t released the list of data that they actually use, just the station list.
Another odd fact, the GHCN adjusted Station 1 to match Darwin Zero’s strange adjustment, but they left Station 2 (which covers much of the same period, and as per Fig. 5 is in excellent agreement with Station Zero and Station 1) totally untouched. They only homogenized two of the three. Then they averaged them.
That way, you get an average that looks kinda real, I guess, it “hides the decline”.
Oh, and for what it’s worth, care to know the way that GISS deals with this problem? Well, they only use the Darwin data after 1963, a fine way of neatly avoiding the question … and also a fine way to throw away all of the inconveniently colder data prior to 1941. It’s likely a better choice than the GHCN monstrosity, but it’s a hard one to justify.
Now, I want to be clear here. The blatantly bogus GHCN adjustment for this one station does NOT mean that the earth is not warming. It also does NOT mean that the three records (CRU, GISS, and GHCN) are generally wrong either. This may be an isolated incident, we don’t know. But every time the data gets revised and homogenized, the trends keep increasing. Now GISS does their own adjustments. However, as they keep telling us, they get the same answer as GHCN gets … which makes their numbers suspicious as well.
And CRU? Who knows what they use? We’re still waiting on that one, no data yet …
What this does show is that there is at least one temperature station where the trend has been artificially increased to give a false warming where the raw data shows cooling. In addition, the average raw data for Northern Australia is quite different from the adjusted, so there must be a number of … mmm … let me say “interesting” adjustments in Northern Australia other than just Darwin.
And with the Latin saying "Falsus in uno, falsus in omnibus" (false in one, false in all) as our guide, until all of the station "adjustments" are examined (those of CRU, GHCN, and GISS alike), we can't trust anyone using homogenized numbers.
Regards to all, keep fighting the good fight,
w.
FURTHER READING:
My previous post on this subject.
The late and much missed John Daly, irrepressible as always.
More on Darwin history, it wasn’t Stevenson Screens.
NOTE: Figures 7 and 8 updated to fix a typo in the titles. 8:30PM PST 12/8 – Anthony
Smokey: You say about the same thing in every thread. Yes, if somebody puts forth a hypothesis, it won’t be accepted until some convincing and consistent evidence is given for it. Clearly, you don’t think this has happened yet, while many people think it has happened.
But that order of events is entirely unrelated to what I’m talking about here.
Here, Willis has quite explicitly accused somebody of fraud. This is entirely different from Willis saying somebody's conclusion isn't well supported, or somebody's analysis is flawed, or anything like that. No, he is explicitly saying that somebody intentionally fudged some numbers. That is not a claim to be made lightly, and it is absolutely a claim you do not make unless you can provide evidence to support it. If you make this claim, the burden is on you to actually back up your accusation with something. The credibility of the accuser is on the line if he does not support the claim. Willis has not done that. All that he's shown is that there is a single station with a sizable homogenisation adjustment. That's all he's done. But it's already known that at some individual stations, some large adjustments are made. That shouldn't have been surprising to anybody.
With all of the people from Australia commenting, I think we should put the politicians on the barbie. Nice work, Willis and Anthony. More evidence that something in temperature measurement is amiss.
JJ: I think I found what you want.
There is such a comparison for max/min temp data, if not the mean, in Peterson and Easterling, "The effect of artificial discontinuities on recent trends in minimum and maximum temperatures", Atmospheric Research 37 (1995) 19-26.
For the entire Northern Hemisphere, for max temps, the raw data give a trend of +0.73 C/century; the adjusted data give +0.92 C/century. So there is some difference in the max temps. But for the temperature minima, raw and adjusted are essentially the same, at +2 C/century.
So the temperature minimum data are unaffected by homogenisation; the maximum data are somewhat affected. This makes some sense to me – adding or removing a Stevenson screen won’t affect the nighttime measurement.
They go on to show that for smaller regions, you might see bigger differences between raw and adjusted, and they mention that individual stations can see dramatic shifts on adjustment. They toss a station out if the adjustment is greater than 3 C. So that’s a statistical property of the data: taken as a whole, the adjustments make little difference. But as you take smaller subsets of data, you might find some bigger effects.
For what it is worth, I will quote the following from the paper. If the quote is too long to be fair use, the moderator can remove it.
“However, non-random changes in location (e.g. movements from urban to rural airport locations), or in instrument types (e.g. liquid in glass thermometers to thermistors) may cause biases to be consistently in the same direction (e.g. all warm or all cool) at many or even all stations in a network.”
So this is perhaps a reason why your adjustments are not random in sign within a given subset of the data (like the US).
This is a long and roundabout way of saying "We don't have the facts". To 'suppose' that the central premise may still be true is not how science works. There are facts, and then you know it's true. That's the only way it works. In the case of temperature records, and their bearing on climate change, we don't have any facts. Furthermore, if this data constitutes a significant portion of the climate reconstruction FOR AN ENTIRE CONTINENT, then you can boldly say that the whole CRU record is a farce.
Like atheists who pray on their deathbed, anyone presenting this data and then leaving the door open to the idea that the central premise may still have merit is doing nothing but openly undermining their own credibility. Is there a scientist left in the world besides me who is still willing to stand behind facts and call a spade a spade? If someone can present some hard and verifiable FACTS to PROVE the Earth is warming at all, I'll be their most vocal spokesman. But playing wishy-washy with statistics and saying "well, it's not totally true or untrue" is not science, nor enlightening, nor does it require one shred of scientific intellect.
We simply don’t have the data or knowledge at this point to determine the answer either way. Even the raw temperature data cannot be verified to be calibrated or validated across stations in addition to having significant gaps. That is the one and only FACT you can extract from the entire global circus around manmade global warming. I defy anyone to present a fact-based, verifiable argument to the contrary.
I thought you people were on the side of facts.
ouch, brutal takedown of your article in the economist…
http://www.economist.com/blogs/democracyinamerica/2009/12/trust_scientists
From the department of “I really hate to say I told you so, but …”
JJ, the economist blogger would have written the same post with or without the accusations.
Let them keep hiding behind the algorithm – the all mighty algorithm.
carrot eater (14:26:43) :
“”For the entire Northern Hemisphere, for max temps, the raw data give a trend of +0.73 C/century; the adjusted data give +0.92 C/century. So there is some difference in the max temps. But for the temperature minima, raw and adjusted are essentially the same, at +2 C/century.””
What about the SH and avg temp? Do you know why the paper provided this without the entire world avg?
John McManus (12:59:32) :
“I checked your allegations about the Darwin weather station. It is not the lonely site you claim. Australia shows 88 weather stations, 17 of them within your 500 km radius.”
Now, yes. Then, no. (Read back through the comments to find the spot where this was clarified.)
JJ (15:53:01) :
From the department of “I really hate to say I told you so, but …”
I also watch these "gotcha" claims with dread, and wish that claimants would remember my constant urging not to make roundhouse swings aiming for a knockout, which leaves our side open to a counterpunch. Our credibility is very important at this moment. And we don't need to do anything now, because time is in the process of taking down the other side's house of cards. Just let it happen.
Just to check my understanding, the debate here is about homogenization (trying to eliminate errors in reporting by looking at nearby stations and seeing what they show, and then adjusting the station if it shows significant differences from its “neighbors”), correct? But an earlier poster notes that this is done only for regional analysis, not for the overall global temp, where it is assumed errors average out. Does anyone have a source for that I can check?
wobble:
Without the accusations, this isn’t a topic with 900 some comments, between the two threads. Without the accusations, this isn’t a topic that people forward to each other.
Without the accusations, this is just the musings of a guy who doesn’t understand why the homogenisation process produces certain results for this one station, but who hasn’t put the work in to see what the process is doing. It’d generate some discussion here about the homogenisation process; people would look up the papers and learn something about how it’s done; people might be impressed or unimpressed by the method; maybe somebody from the ABoM would eventually come up and describe the site changes – but it wouldn’t get the attention, nor would it attract the disdain once people realise what’s shown here, and what isn’t.
Like it or not, there is a credibility issue here. Don’t make accusations of fraud, don’t talk about smoking guns, unless you’ve actually done the work to back it up. People might not pay any attention to you, next time.
what is the point of a “global average”?
in respect of the melting ice cap (for example) the only temperature that matters is the local temperature (specifically, is it >0 degC)
Kilimanjaro – the ice cap is, as I understand it, above the freeze altitude – so again, what relevance is a global average?
As someone who is trying to make up their own mind about climate change I was pleased to see this. However friends have referred me to http://scienceblogs.com/deltoid/2009/12/willis_eschenbach_caught_lying.php
I am confused about the truth. Please can you explain?
wobble (16:25:27) :
Sorry, I didn’t note if they gave a reason for only doing a NH analysis, and not global. Don’t have the paper in front of me anymore, but I can check later if you want.
The mean would be doing something between the max and min, I’d imagine.
I didn’t actually do a search for this, so there might be other papers or online material of this sort. I just came across this one because it was cited by another paper I was reading.
I agree with you that it’s an interesting question.
carrot eater (13:59:58):
But WAG, who is on your side, says (08:56:36):
I replied that WAG’s…
So you see, what you’re trying to accuse Willis of is routinely done in spades by the alarmist crowd. They do it, of course, because they lack credible facts to support their assertions.
But enough of this. You’re trying to make a case and I don’t agree with it. Unless, of course, you want to get it on with WAG and Lambert for doing what you’re trying to accuse Willis of.
So let’s cut to the chase: government entities like GISS, for example, fudge the numbers. And they stonewall requests to disclose how they did it. Since we’re talking about the climate, and not national defense secrets, I think they’re refusing to open the books simply because they are being dishonest, and they know it.
For instance, Mike McMillan produced a page of graphs showing the shenanigans that GISS used to show warming by diddling the figures. The original raw data, going back to 1900 and before, was taken directly from surface stations and hand-written onto B-91 forms, signed, and dated. Here’s an example: click.
But GISS has changed those raw numbers — and they refuse to explain exactly how. It’s a secret, see? It’s none of the public’s business.
Here’s McMillan’s chart page: click
Notice that those are some really, really BIG “adjustments” of readings that were taken directly from mercury thermometers [which are even today more accurate than most thermocouple and even RTD based thermometers].
Getting all red faced and arm-waving over an interpretation of motives is just a distraction from the rampant corruption endemic to GISS and the rest of the mainstream climate scientists, who have become rich and famous by using fudged scare tactics. The deliberate alteration of the record by GISS exposes what they’re up to. And hiding their methodology is exactly what people would do who want to artificially show that the planet is warming far beyond what it really is. That also is a claim not to be made lightly, and I am making it — and providing evidence to support it.
Willis you seem to have ignored my analysis so I will repost it. Please comment.
bill (17:11:19) :
Posted on the stick thread:
Willis, looking at the unadjusted plots leads me to suspect that there are two major changes in measurement methods/location.
These occur in January 1941 and June 1994. The 1941 shift is well known (post office to airport move); I can find no connection for the 1994 shift.
These plots show the 2 periods each giving a shift of 0.8C
http://img33.imageshack.us/img33/2505/darwincorrectionpoints.png
The red line shows the effect of a suggested correction
This plot compares the GHCN corrected curve (green) to that suggested by me (red).
The difference between the two is approximately 1C, compared to the 2.5C you quote as the "cheat".
http://img37.imageshack.us/img37/4617/ghcnsuggestedcorrection.png
Smokey: I disagree with Tim Lambert on that, 100%. I don't think Willis is lying; I do think he's throwing around wild and baseless accusations. Lambert is given to using stronger attacking language than I would, as is Joe Romm. My respect for both is diminished because of it. Neither is an actual climate scientist; perhaps there's a trend there?
But an accusation of scientific fraud is a much more serious charge, and that’s what Willis has levied here. I repeat, if you want to maintain your credibility, you don’t say such things lightly.
As for the rest of your missive, I haven’t a clue who Mike McMillan is, but right off the top, I’ve got my doubts of your claims. GISS doesn’t maintain raw data; they get the raw data from NOAA, in the form of GHCN/USHCN. So it isn’t even possible for GISS to secretly mess with the raw data. Just because somebody somewhere claims a conspiracy doesn’t mean you should just take their claims at face value.
Funny that you bring up GISS in this respect. All their code is freely available. You want to see a code that does homogenisation adjustments? You’ll probably find one on the GISS site.
john (16:52), I’ll explain. All statistics are junk science. They are laden with human assumption and never represent facts. Statistics are not needed to calculate the speed of an object in motion, determine if a building will stand, or tell you what you see in a microscope. That is science. Observation, measurement, mechanisms and reproducibility. What you’re seeing is the opposite and that is the precise reason why you question it.
What constitutes "the data" here is a choppy dataset from a hodgepodge of weather stations, arranged in disproportionate number around the coasts of a continent the size of the US. A dozen different people have come along and applied their own unique set of assumptions to massage the data in a manner that they are sure is the 'right' way. And be they alarmists or skeptics, each will claim the high ground, and a scientific 'truth' thus hinges on who can massage better or scream louder. Many have already pointed out that Eschenbach's assertions about the proximity of other weather stations are embarrassingly and verifiably wrong, but the whole exercise ceased to be scientific long before that.
The earth has major geologic events every 50-100 million years. It has a 100,000-year orbital precession in which both the radius from the sun and the axial tilt go from a maximum to a minimum. This causes ice ages and interglacials on a 20,000/80,000-year cycle. Within this time there are 800-1000 year warming and cooling cycles where temperature and CO2 levels rise and fall in oscillating intervals. The deep Atlantic ocean currents oscillate in multidecadal cycles, while the Pacific oscillates on decadal cycles. Typical sunspot cycles run ~11 years, El Ninos and La Ninas happen in ~4-year cycles, and cloud formation changes randomly by the minute. This is in addition to the fact that the sun, our main energy input, goes through internal energy cycles that we cannot fully predict. To think that 100 years of sketchy temperature station data is going to lead to an accurate composite prediction of future climate and temperatures within fractions of a degree is pure foolishness. The complex and differential forces acting on the earth on different timescales deserve more than just an extrapolated straight line. It's an insult to our intelligence no matter what the conclusion.
Much ado about nothing.
This paper is interesting “A historical Annual Temperature Dataset for Australia”
http://www.bom.gov.au/amm/docs/1996/torok.pdf
It includes a detailed description of "adjustments" made to one example station, at Mildura. How on earth can there be any confidence in temperature records with this sort of thing taking place, and with who knows what sort of corrections being added? "Move to airport" is interesting in that it scores a negative adjustment!
year adjustment reason
<1989 -0.6 move to higher ground
<1946 -0.9 move to airport
<1939 +0.4 new screen
<1930 +0.3 move from park
1943 +1.0 pile of dirt near screen
1903 +1.5 temporary site
1902 -1.0 problems with shelter
1901 -0.5 problems with shelter
1900 -0.5 problems with shelter
1892 +1.0 temporary site
1890 -1.0 detect
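To make the notation concrete: a "&lt;year" entry shifts that year and everything before it, while a bare year entry touches only that one year. A sketch of applying such a list (my own illustration of the table's notation, not BoM code):

```python
def apply_adjustments(series, step_adjs, single_adjs):
    """Apply adjustments in the style of the Mildura table above.
    step_adjs: (year, delta) pairs applied to all years before `year`
    (the "<1946" style entries). single_adjs: (year, delta) pairs
    applied to that single year only. series: dict of year -> temp."""
    out = dict(series)
    for year, delta in step_adjs:
        for y in out:
            if y < year:          # every year before the listed change
                out[y] += delta
    for year, delta in single_adjs:
        if year in out:
            out[year] += delta
    return out
```

Note that the step entries compound: a year before both the 1946 and 1989 moves picks up both deltas, which is why the cumulative effect on early data can be large.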
It’s Darwinian data selection… survival of the warmest?
Dr A Burns (19:28:34) :
You’re not telling the full story about that Torok paper. Firstly, they are not describing official BoM data – these are adjustments they did for their paper. It’s possible the BoM followed it later, but you haven’t shown that.
Second, you did not explain the notation on that list. All but the top four adjustments were for one year only, and would have had very little effect on the trend. And those four were two up, two down.
You focussed on the move to airport. Mildura Airport in 1946 would have been just a field somewhere, with maybe a few light planes and maybe a weekly DC3. There would have been very little paving – possibly not even the runway.
Nick, these are all the BoM adjustments.
The "0" indicates all years backwards.
The "1" is one year only.
1021 are the minimums
1001 are the maximums
14015 1021 1991 0 -0.3 -0.3 dm
14015 1021 1987 0 -0.3 -0.6 dm*
14015 1021 1964 0 -0.6 -1.2 orm*
14015 1021 1942 0 -1.0 -2.2 oda
14015 1021 1894 0 +0.3 -1.9 fds
14015 1001 1982 0 -0.5 -0.5 or
14015 1001 1967 0 +0.5 +0.0 or
14015 1001 1942 0 -0.6 -0.6 da
14015 1001 1941 1 +0.9 +0.3 rp
14015 1001 1940 1 +0.9 +0.3 rp
14015 1001 1939 1 +0.9 +0.3 rp
14015 1001 1938 1 +0.9 +0.3 rp
14015 1001 1937 1 +0.9 +0.3 rp
14015 1001 1907 0 -0.3 -0.9 rd
14015 1001 1894 0 -1.0 -1.9 rds
ftp://ftp2.bom.gov.au/anon/home/bmrc/perm/climate/temperature/annual/
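For anyone pulling that file, here is a sketch of parsing lines in the format quoted above. The field meanings are taken from this comment and are my inference, not official documentation.

```python
def parse_adjustments(lines):
    """Parse BoM-style adjustment lines as quoted above. Inferred layout:
    station id, series code (1021 = minima, 1001 = maxima), year,
    scope flag (0 = that year and all earlier years, 1 = that year
    only), step adjustment, cumulative adjustment, reason code."""
    out = []
    for line in lines:
        parts = line.split()
        if len(parts) < 7:
            continue                      # skip malformed lines
        out.append({
            "station": parts[0], "series": parts[1],
            "year": int(parts[2]), "all_prior": parts[3] == "0",
            "adj": float(parts[4]), "cumulative": float(parts[5]),
            "reason": parts[6],
        })
    return out
```

With the records in this form it's simple to, say, sum the cumulative column for a station or compare minima against maxima adjustments.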
carrot eater (16:46:53) :
“”Without the accusations, this isn’t a topic with 900 some comments, between the two threads. Without the accusations, this isn’t a topic that people forward to each other.””
Your preaching is getting quite annoying. I've already acknowledged many times that Willis shouldn't have included the accusation in his post.
My last point was that the Economist blogger would have written his post even without the accusations (if it had been posted by skeptic bloggers – and I think it would have). You’re wrong if you disagree.
I have calculated the bias of adjustment for the *entire* CRU dataset. You can find the result here. In short: there is no bias and no smoking gun.
Willis, are you prepared to deconstruct this? It would be helpful (apologies if you have already):
http://www.economist.com/blogs/democracyinamerica/2009/12/trust_scientists