Guest Post by Willis Eschenbach
Recapping the story begun at WUWT here and continued at WUWT here: data from the temperature station Darwin Zero in northern Australia was found to have been radically adjusted, showing huge warming (red line, adjusted temperature) compared to the unadjusted data (blue line). The unadjusted data showed that Darwin Zero was actually cooling over the period of the record. Here is the adjustment to Darwin Zero:
Figure 1. The GHCN adjustments to the Darwin Zero temperature record.
Many people have written in with questions about my analysis. I thank everyone for their interest. I’m answering them as fast as I can. I cannot answer them all, so I am trying to pick the relevant ones. This post is to answer a few.
• First, there has been some confusion about the data. I am using solely GHCN numbers and methods. They will not match the GISS or the CRU or the HadCRUT numbers.
• Next, some people have said that these are not separate temperature stations. However, GHCN adjusts them and uses them as separate temperature stations, so you’ll have to take that question up with GHCN.
• Next, a number of people have claimed that the reason for the Darwin adjustment was that it is simply the result of the standard homogenization done by GHCN based on comparison with other neighboring station records. This homogenization procedure is described here (PDF).
While it sounds plausible that Darwin was adjusted as the GHCN claims, if that were the case the GHCN algorithm would have adjusted all five of the Darwin records in the same way. Instead, they have been adjusted differently (see below). This argues strongly that the adjustments were not made by the listed GHCN homogenization process: any process that changed one of the records would change all of them in the same way, as they are nearly identical.
• Next, there are no “neighboring records” for a number of the Darwin adjustments simply because in the early part of the century there were no suitable neighboring stations. It’s not enough to have a random reference station somewhere a thousand km away from Darwin in the middle of the desert. You can’t adjust Darwin based on that. The GHCN homogenization method requires five well correlated neighboring “reference stations” to work.
From the reference cited above:
“In creating each year’s first difference reference series, we used the five most highly correlated neighboring stations that had enough data to accurately model the candidate station.”
and “Also, not all stations could be adjusted. Remote stations for which we could not produce an adequate reference series (the correlation between first-difference station time series and its reference time series must be 0.80 or greater) were not adjusted.”
As I mentioned in my original article, the hard part is not finding five neighboring stations, particularly if you consider a station 1,500 km away as “neighboring”. The hard part is finding similar stations within that distance. We need stations whose first difference has a correlation of 0.80 or greater with the Darwin first difference.
(A “first difference” is a list of the changes from year to year of the data. For example, if the data is “31, 32, 33, 35, 34”, the first differences are “1, 1, 2, -1”. It is often useful to examine first differences rather than the actual data. See Peterson (PDF) for a discussion of the use of the “first-difference method” in climate science.)
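The first-difference correlation test can be sketched in a few lines of plain Python (a toy illustration; these helper functions are mine, not GHCN code, and the numbers are the toy example from the text, not actual station data):

```python
# Sketch of the first-difference method described above.

def first_difference(series):
    """Year-to-year changes: [31, 32, 33, 35, 34] -> [1, 1, 2, -1]."""
    return [b - a for a, b in zip(series, series[1:])]

def correlation(x, y):
    """Pearson correlation between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

print(first_difference([31, 32, 33, 35, 34]))  # [1, 1, 2, -1]
```

The correlation is then computed between the first-difference series of the candidate station and that of Darwin, rather than between the raw temperatures.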
Accordingly, I’ve been looking at the candidate stations. For the 1920 adjustment we need stations starting in 1915 or earlier. Here are all of the candidate stations within 1,500 km of Darwin that start in 1915 or before, along with the correlation of their first difference with the Darwin first difference:
WYNDHAM_(WYNDHAM_PORT) = -0.14
DERBY = -0.10
BURKETOWN = -0.40
CAMOOWEAL = -0.21
NORMANTON = 0.35
DONORS_HILL = 0.35
MT_ISA_AIRPORT = -0.20
ALICE_SPRINGS = 0.06
COEN_(POST_OFFICE) = -0.01
CROYDON = -0.23
CLONCURRY = -0.20
MUSGRAVE_STATION = -0.43
FAIRVIEW = -0.29
As you can see, not one of them is even remotely like Darwin. None of them are adequate for inclusion in a “first-difference reference time series” according to the GHCN. The Economist excoriated me for not including Wyndham in the “neighboring stations” (I had overlooked it in the list). However, the problem is that even if we include Wyndham, Derby, and every other station out to 1,500 km, we still don’t have a single station with a high enough correlation to use the GHCN method for the 1920 adjustment.
Now I suppose you could argue that you can adjust 1920 Darwin records based on stations 2,000 km away, but even 1,500 km seems too far away to do a reliable job. So while it is theoretically possible that the GHCN described method was used on Darwin, you’ll be a long, long ways from Darwin before you find your five candidates.
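For concreteness, here is a small sketch (my own code, not GHCN's) applying the quoted 0.80 rule to the correlations listed above:

```python
# Screening the candidate stations against the GHCN 0.80
# first-difference correlation threshold.
candidates = {
    "WYNDHAM_(WYNDHAM_PORT)": -0.14, "DERBY": -0.10, "BURKETOWN": -0.40,
    "CAMOOWEAL": -0.21, "NORMANTON": 0.35, "DONORS_HILL": 0.35,
    "MT_ISA_AIRPORT": -0.20, "ALICE_SPRINGS": 0.06,
    "COEN_(POST_OFFICE)": -0.01, "CROYDON": -0.23, "CLONCURRY": -0.20,
    "MUSGRAVE_STATION": -0.43, "FAIRVIEW": -0.29,
}
usable = [name for name, r in candidates.items() if r >= 0.80]
print(usable)  # [] -- not one station qualifies, let alone the five required
```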
• Next, the GHCN does use a good method to detect inhomogeneities. Here’s their description of their method.
To look for such a change point, a simple linear regression was fitted to the part of the difference series before the year being tested and another after the year being tested. This test is repeated for all years of the time series (with a minimum of 5 yr in each section), and the year with the lowest residual sum of the squares was considered the year with a potential discontinuity.
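That description can be sketched as follows (my own implementation of the quoted procedure, not GHCN's code; the synthetic step series at the end is invented to show the idea):

```python
# Sketch of the change-point test described above: for each candidate
# split year, fit separate straight lines to the data before and after
# the split and sum the squared residuals; the split with the lowest
# total is the potential discontinuity.

def rss_of_fit(xs, ys):
    """Residual sum of squares of an ordinary least-squares line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = 0.0 if sxx == 0 else sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    intercept = my - slope * mx
    return sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))

def find_change_point(years, temps, min_seg=5):
    """Year starting the second segment that minimizes the combined RSS."""
    best_year, best_rss = None, float("inf")
    for i in range(min_seg, len(years) - min_seg + 1):
        rss = rss_of_fit(years[:i], temps[:i]) + rss_of_fit(years[i:], temps[i:])
        if rss < best_rss:
            best_year, best_rss = years[i], rss
    return best_year

# Synthetic demo: a flat series with a single 2-degree step at 1920.
years = list(range(1900, 1940))
temps = [20.0] * 20 + [18.0] * 20
print(find_change_point(years, temps))  # 1920
```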
This is a valid method, so I applied it to the Darwin data itself. Here’s that result:
Figure 2. Possible inhomogeneities in the Darwin Zero record, as indicated by the GHCN algorithm.
As you can see by the upper thin red line, the method indicates a possible discontinuity centered at 1939. However, once that discontinuity is removed, the rest of the record does not indicate any discontinuity (thick red line). By contrast, the GHCN adjusted data (see Fig. 1 above) do not find any discontinuity in 1941. Instead, they claim that there are discontinuities around 1920, 1930, 1950, 1960, and 1980 … doubtful.
• Finally, the main recurring question is, why do I think the adjustments were made manually rather than by the procedure described by the GHCN? There are a number of totally independent lines of evidence that all lead to my conclusion:
1. It is highly improbable that a station would suddenly start warming at 6 C per century for fifty years, no matter what legitimate adjustment method was used (see Fig. 1).
2. There are no neighboring stations that are sufficiently similar to the Darwin station to be used in the listed GHCN homogenization procedure (see above).
3. The Darwin Zero raw data does not contain visible inhomogeneities (as determined by the GHCN’s own algorithm) other than the 1936-1941 drop (see Fig. 2).
4. There are a number of adjustments to individual years. The listed GHCN method does not make individual year adjustments (see Fig. 1).
5. The “Before” and “After” pictures of the adjustment don’t make any sense at all. Here are those pictures:
Figure 3. Darwin station data before and after GHCN adjustments. Upper panel shows unadjusted Darwin data, lower panel shows the same data after adjustments.
Before the adjustments we had the station Darwin Zero (blue line with diamonds), along with four other nearby temperature records from Darwin. They all agreed with each other quite closely. Hardly a whisper of dissent among them, only small differences.
While GHCN were making the adjustment, two stations (Unadj 3 and 4, green and purple) vanished. I don’t know why. GHCN says they don’t use records under 20 years in length, which applies to Darwin 4, but Darwin 3 is twenty years in length. In any case, after removing those two series, the remaining three temperature records were then adjusted into submission.
In the “after” picture, Darwin Zero looks like it was adjusted with Sildenafil. Darwin 2 gets bent down almost to match Darwin Zero. Strangely, Darwin 1 is mostly untouched. It loses the low 1967 temperature, which seems odd, and the central section is moved up a little.
Call me crazy, but from where I stand, that looks like an un-adjustment of the data. They take five very similar datasets, throw two away, wrench the remainder apart, and then average them to get back to the “adjusted” value? Seems to me you’d be better off picking any one of the originals, because they all agree with each other.
The reason you adjust is because records don’t agree, not to make them disagree. And in particular, if you apply an adjustment algorithm to nearly identical datasets, the results should be nearly identical as well.
So that’s why I don’t believe the Darwin records were adjusted in the way that GHCN claims. I’m happy to be proven wrong, and I hope that someone from the GHCN shows up to post whatever method they actually used, the method that could produce such an unusual result.
Until someone can point out that mystery method, however, I maintain that the Darwin Zero record was adjusted manually, and that it is not a coincidence that it shows (highly improbable) warming.
Don’t you mean point six (0.6C) per century?
Very nice analysis. Keep up the good work Mr Eschenbach.
Willis, could you please lead me to the red “homogenized graph” at the GHCN site? Thanks.
In general terms, am I correct in understanding that the “time of observation bias” inserted every time Hansen’s GISS program runs artificially lowers temperature data before 1970, and raises (or keeps the same – but with no Urban Heat Island correction) all temperature data after 1970? From what I recall, artificial TOBS changes account for over 0.15 of the total 0.4 degree rise in the “supposed” ground temperature record.
Or over 1/3 of Hansen’s entire “measured” climate change is artificially inserted.
If so, why do they claim a TOBS correction is required at all?
Aren’t the only numbers used by Hansen/NOAA the day’s maximum and minimum values? How would those change based on what time of the day you read the max/min thermometer?
(Never mind whatever his “logic” is in using the same TOBS change for every record in every year – how many times did these earlier weathermen keep changing the time of day they wrote temperatures down?)
Glenn (19:50:45) “Don’t you mean point six (0.6C) per century?”
See figure 1. If you need a grade 10 math tutorial, speak up.
Good work.
Correction: “highly improbability” should be “highly improbable.”
willis, have u seen this?
19 Dec: TBR: NZ Study may hold key to faulty world temp data
A long-forgotten scientific paper on temperature trends in New Zealand may be the smoking gun on temperature manipulation worldwide.
Since Climategate first broke, we’ve seen scandal over temperature adjustments by NZ’s National Institute of Water and Atmospheric research, NIWA, which in turn prompted a fresh look at raw temperature data from Darwin and elsewhere.
Now, a study published in the NZ Journal of Science back in 1980 reveals weather stations at the heart of NIWA’s claims of massive warming were shown to be unreliable and untrustworthy by a senior Met Office climate scientist 30 years ago, long before global warming became a politically charged issue.
The story is published in tonight’s TGIF Edition, and has international ramifications.
That’s because the study’s author, former Met Office Auckland director Jim Hessell, found a wide range of problems with Stevenson Screen temperature stations and similar types of unit.
Hessell debunked a claim that New Zealand was showing a 1C warming trend between 1940 and 1980, saying the sites showing the biggest increases were either urbanized and suffering from urban heat island effect, or they were faulty Stevenson Screen installations prone to showing hotter than normal temperatures because of their design and location.
One of the conclusions is that urbanized temperature stations are almost a waste of time in climate study:
“For the purpose of assessing climatic change, a ‘rural’ environment needs to be carefully defined. Climatic temperature trends can only be assessed from rural sites which have suffered no major transformations due to changes in shelter or urbanisation, or from sites for which the records have been made homogenous. Urban environments usually suffer continual modifications due to one cause or another.”
“It is concluded that the warming trends in New Zealand previously claimed, are in doubt and that as has been found in Australia (Tucker 1975) no clear evidence for long term secular warming or cooling in Australasia over the last 50 years [1930-1980] exists as yet.”
Hessell divided weather stations in New Zealand into two classes, “A” and “B”, where A class locations had suffered increasing urbanization or significant site changes, and B class locations had not.
“It can be seen immediately that the average increase in mean temperatures at the A class stations is about five times that of the B class”, the study notes.
Among the studies listed as a contaminated A site is Kelburn, which was at the heart of the NIWA scandal a fortnight ago.
A link to the study can be found in TGIF Edition.
http://briefingroom.typepad.com/the_briefing_room/2009/12/nz-study-may-hold-key-to-faulty-world-temp-data.html
Glenn,
No, look at the very top chart…from 1941 to about 1990 (50 yrs), the trend is +3 degrees. So it would be 6 deg/century.
Glenn (19:50:45) :
Nope, that’s the amazing part. Six degrees per century.
tokyoboy (19:54:02) :
I made the graph with data from the GHCN site. The data is at
ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v2
in the file called “v2.mean_adj”
w.
Oh, I see… they adjusted all the temps before 1952? ’53? down by some constant. Notice the 1.7C drop about 1940 is intact. Then a smaller adjustment was applied step-wise to each of the subsequent decades: -2C before 1952, -1.2C for 1953 to 1960-ish, etc.
Thanks Anthony. Darwin has only two seasons each year – “the wet” (now) and “the dry”. Both tend to be pretty hot. It’s been consistently like this just as the unadjusted record accurately indicates.
Is there no end to the fabrications being perpetrated ?
How would those change based on what time of the day you read the max/min thermometer?
If you measure too close to the high point, you get cases where the same “high” carries over to both days. And vice versa if you measure at the low point.
If it’s really cold at dawn Monday and that’s when I measure it, I’ll get the low a minute after dawn on Monday, with Sunday’s low being a minute before dawn. Two days of lows taken from a single cold interval. If dawn Tuesday is warmer than dawn Monday, it gets “left out”.
To avoid this, you need to measure at a time far removed from either the typical high point (mid-afternoon) or the typical low point (predawn).
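The carry-over effect described above can be illustrated with a toy simulation (a sketch only; the `daily_minima` function and all temperatures are invented for illustration):

```python
# Toy illustration of time-of-observation bias (TOBS). A min/max
# thermometer is reset once per day at a fixed hour; if that hour falls
# near the typical daily minimum, one cold spell can be recorded as the
# minimum of two successive observation "days". All numbers are invented.

def daily_minima(hourly_temps, reset_hour):
    """Minimum recorded for each observation 'day' ending at reset_hour."""
    minima, window = [], []
    for hour, temp in enumerate(hourly_temps):
        window.append(temp)
        if hour % 24 == reset_hour:
            minima.append(min(window))
            window = [temp]  # the thermometer still reads `temp` after reset
    if len(window) > 1:
        minima.append(min(window))  # flush the final partial day
    return minima

# Two days of 15 C air with a single 5 C cold spell around dawn (hours 28-32).
temps = [15.0] * 48
for h in range(28, 33):
    temps[h] = 5.0

print(daily_minima(temps, 6).count(5.0))   # 2 -- dawn reset double-counts the spell
print(daily_minima(temps, 14).count(5.0))  # 1 -- early-afternoon reset does not
```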
RACookPE1978 (20:03:45) :
This is really a GISS question or a USHCN question, as GHCN do not do a TOBS adjustment per se. For the amount of the adjustment, see here.
Whether it is necessary or not depends on the station, the type of thermometer used, and the times of observation. The canonical document on this is Peterson (q.v.).
evanmjones (20:27:01) :
No, the “day” is midnite to midnite, so you will only get one high or low per day. The low is typically shortly after dawn, and the high somewhere in the late afternoon.
[REPLY – The 24-hour interval between times of observation is the “day” so far as the records are concerned, not midnight – midnight. There’s really no other way to do it. The best observation time is probably around 11 A.M. ~ Evan]
Willis Eschenbach (20:15:45) :
Glenn (19:50:45) :
Don’t you mean point six (0.6C) per century?
“Nope, that’s the amazing part. Six degrees per century.”
Ah. That would make the region the fastest-warming area claimed in the world, if memory serves, topping the Arctic and Siberia. About 3 times the global surface estimate, in any event.
Glenn, the 6C per century slope is correct. No need to have a PhD to see that.
I already knew everything they explained to me in the special report on Climategate on FOX News except for the way Mann and others referred to McKitrick & McIntyre in the e-mails.
Did those creepy climate scientists walk around the cubicles in their offices and refer to McKitrick & McIntyre as M&M as a joke often?
Apparently so.
Willis Eschenbach (20:18:15) :
>tokyoboy (19:54:02) :Willis, could you please lead me to the
>red “homogenized graph” at the GHCN site? Thanks.
“I made the graph with data from the GHCN site. The data is at
ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v2 in the file called “v2.mean_adj”
Thanks….. but my PC can’t open the file, probably due to software mismatch.
Willis, don’t you have the data on such a popular software as PDF?
Thanks again.
Nice work Willis. Thank you.
A bit OT … carrying the case into the future.
So let us fast-forward 30 years or so. If the “adjusted” trend in the top chart continued its upward slope, it would be unbearable at Darwin, yet folks will be carrying on as usual. “They” will be telling us the Arctic ice has disappeared when it is just fine. “They” will be telling us that the Maldives just sank, when they obviously are fine.
Back to Darwin Zero … If that upward trend continues at some point it would start to look actually silly … nonsensical and YET they could not reverse it could they? Because THAT would be a cooling trend. Gotcha!! ☺ Are “they” LOCKED into this big lie? They cannot reverse it now. Ohoh.
What I am getting at is this: at some future point (even if the BIG LIE continues) when will it all finally fall apart? (I am not as optimistic as others that Climategate can carry this…) When will empirical facts outweigh the manipulated data? (The IPCC told us in 1991 that sea levels would rise by 300 mm by 2030… we are halfway there in time and only 20% there in reality. But no one remembers their first report, do they?)
Sane folks (like us) know we’ve already reached that point (where observation belies fudged data), but when will it become obvious to people in Darwin, New York, London and Inuvik that the official word is BS?
Just musing aloud on a snowy eve. ☺
Thanks again Willis.
I appreciate your analysis.
There seems to be clear evidence of deception about global warming.
Earth’s climate is changing, has changed in the past, and will always change because Earth’s climate is controlled by the stormy Sun – a variable star.
There is also evidence of government involvement in deception about the composition of the Sun, its source of energy and solar neutrino oscillations.
It appears that politicians have trained scientists with grant funds like Pavlov trained dogs with dog biscuits.
Scientific integrity was the first victim.
My greatest fear is that democracy itself cannot survive if scientists become tools for misinformation by politicians.
With kind regards,
Oliver K. Manuel
Former NASA PI for Apollo
Hi Willis Love your work by the way.
This temperature profile of Australia from the BOM link highlights the problem with a 1500 km distance between weather stations. To give you a clue, 1500 km drops you from Darwin to Alice Springs, i.e. from the top middle down to near the bottom of the first dotted line in the middle of Australia – a different heat zone.
http://www.bom.gov.au/cgi-bin/silo/temp_maps.cgi
I did the temperature modeling work when unleaded petrol was introduced to Australia. The problem was that cars and petrol pumps were vapour locking. I developed the model that was adopted by the oil industry to relate 90th-percentile temperature data (using microfiche of the available temp data) to vapour pressure.
This was developed because all you had to do was move as little as 150 km, particularly from down south, to end up with a problem. So temperature zones were constructed and vapour pressure limits were established to stop vapour locking as soon as you drove out of a capital city.
That’s how I am aware that small variations in distance will give a big difference in average temp data.
P.S. I left the oil industry back in 1997.
In the days of analogue temperature measurement, would time of day be used for max/min readings or would a max/min thermometer (illustration only http://www.redhillgeneralstore.com/A31690.htm ) have been used at these stations? The latter would measure max/min at whatever time that this was reached and would need to be reset once a day. Just curious.
Global Climate Data Tampering Exposed!
Climate Data Tampering In Australia Detected
http://blogs.news.com.au/heraldsun/andrewbolt/index.php/heraldsun/comments/creating_warming_from_cold_austtralian_stats/
Climate Data Tampering In Darwin Detected
http://blogs.news.com.au/heraldsun/andrewbolt/index.php/heraldsun/comments/climategate_how_one_human_caused_darwin_to_warm/
Climate Data Tampering In New Zealand Detected
http://blogs.news.com.au/heraldsun/andrewbolt/index.php/heraldsun/comments/new_zealands_man_made_warming/
Climate Data Tampering In Russia Detected
http://blogs.news.com.au/heraldsun/andrewbolt/index.php/heraldsun/comments/climategate_was_russias_warming_also_man_made/
Climate Data Tampering In Alaska Detected
http://blogs.news.com.au/heraldsun/andrewbolt/index.php/heraldsun/comments/more_man_made_warming_this_time_in_alaska/
Climate Data Tampering In Sweden Detected
http://blogs.news.com.au/heraldsun/andrewbolt/index.php/heraldsun/comments/making_sweden_warmer/
Fraudulent Hockey Sticks And Hidden Data
http://joannenova.com.au/2009/12/fraudulent-hockey-sticks-and-hidden-data/#more-4660
Would You Buy A Used Car From These So-Called “Scientists”?
Our gullible alarmist friends have obviously fallen for this scam hook, line, and sinker!
First we had the IPCC report “The Science of Climate Change 1995”, where lead author Benjamin D. Santer removed the following conclusions made by genuine scientists, without the scientists being made aware of this change.
“None of the studies cited above has shown clear evidence that we can attribute the observed climate changes to the specific cause of increases in greenhouse gases.”
“No study to date has positively attributed all or part [of the climate change observed to date] to anthropogenic [man-made] causes.”
“Any claims of positive detection of significant climate change are likely to remain controversial until uncertainties in the total natural variability of the climate system are reduced.”
Then we have some choice quotes from so-called “consensus scientists”.
“The two MMs [Canadian skeptics Steve McIntyre and Ross McKitrick] have been after the CRU station data for years. If they ever hear there is a Freedom of Information Act now in the UK, I think I’ll delete the file rather than send to anyone.”
Phil Jones email, Feb 2 2005
“I can’t see either of these papers being in the next IPCC report, Kevin and I will keep them out somehow, even if we have to redefine what the peer-review literature is!”
Phil Jones Director, The CRU
[cutting skeptical scientists out of an official UN report]
“The fact is that we can’t account for the lack of warming at the moment, and it is a travesty that we can’t …there should be even more warming… the data are surely wrong”.
Kevin Trenberth, Climatologist, US Centre for Atmospheric Research
“…If anything, I would like to see the climate change happen, so the science could be proved right, regardless of the consequences. This isn’t being political, it is being selfish. “
Phil Jones Director, The CRU
“We have to get rid of the Mediæval Warm Period” Confided to geophysicist David Deming by the IPCC, 1995
[Many believe that man to be Jonathan Overpeck, a claim Prof. Deming didn’t deny in an email response; Overpeck would later serve as an IPCC lead author.]
“We have 25 years or so invested in the work. Why should I make the data available to you, when your aim is to try and find something wrong with it?” Phil Jones Director, The CRU
”We have to offer up scary scenarios, make simplified, dramatic statements, and make little mention of any doubts we might have.” Professor Stephen Schneider
“Humans need a common motivation … either a real one or else one invented for the purpose. … In searching for a new enemy to unite us, we came up with the idea that pollution, the threat of global warming, water shortages, famine and the like would fit the bill. All these dangers are caused by human intervention so the real enemy then, is humanity itself.” Club of Rome declaration
“It doesn’t matter what is true, it only matters what people believe is true…. You are what the media define you to be. Greenpeace became a myth and fund generating machine.” Paul Watson, Co-Founder Greenpeace, Forbes, Nov. 1991
Now what conclusion would a rational and sceptical person come to?
Frederick Seitz, president emeritus of Rockefeller University and chairman of the George C. Marshall Institute, summed it up nicely after seeing the changes made to the IPCC report.
“In my more than 60 years as a member of the American scientific community, including service as president of both the National Academy of Sciences and the American Physical Society, I have never witnessed a more disturbing corruption of the peer-review process than the events that led to this IPCC report.”
“If you measure too close to the high point, you get cases where the same “high” carries over to both days. And vice versa if you measure at the low point.
If it’s really cold at dawn Monday and that’s when I measure it, I’ll get low a minute after dawn on Monday with the Sunday’s low being a minute before dawn. Two days of lows taken from a single cold interval.
To avoid this, you need to measure at a time far removed from either the typical high point (mid-afternoon) or the typical low point (predawn).”
—…—…—
Yes, understood. I’ve heard similar explanation before.
Now, back to the purpose of my question: What (in your answer) or in the physical world of real temperatures and real measurements, actually justifies lowering all of the country’s recorded temperatures prior to 1970?
We get a cold front coming through once every 10 – 16 days (maybe 25 – 30 days a year), and only half the year does that hypothetical cold front change temperatures drastically (in winter really – summer fronts are most often less drastic). Of those few events, how many actually affected two days’ worth of readings? And of that small theoretical fraction left, how many actual events really happened?
There is still no reason to use TOBS as a reason to change the earth’s temperature records.