Darwin Zero Before and After

Guest Post by Willis Eschenbach

Recapping the story begun at WUWT here and continued at WUWT here: data from the temperature station Darwin Zero in northern Australia was found to have been radically adjusted, showing strong warming (red line, adjusted temperature) compared with the unadjusted data (blue line). The unadjusted data showed that Darwin Zero was actually cooling over the period of record. Here is the adjustment to Darwin Zero:

Figure 1. The GHCN adjustments to the Darwin Zero temperature record.

Many people have written in with questions about my analysis, and I thank everyone for their interest. I am answering them as fast as I can, but I cannot answer them all, so I am trying to pick the most relevant ones. This post answers a few of them.

• First, there has been some confusion about the data. I am using solely GHCN numbers and methods. They will not match the GISS or the CRU or the HadCRUT numbers.

• Next, some people have said that these are not separate temperature stations. However, GHCN adjusts them and uses them as separate temperature stations, so you’ll have to take that question up with GHCN.

• Next, a number of people have claimed that the reason for the Darwin adjustment was that it is simply the result of the standard homogenization done by GHCN based on comparison with other neighboring station records. This homogenization procedure is described here (PDF).

While it sounds plausible that Darwin was adjusted as the GHCN claims, if that were the case the GHCN algorithm would have adjusted all five of the Darwin records in the same way. Instead, they have been adjusted differently (see below). This argues strongly that the adjustments were not made by the listed GHCN homogenization process: any process that changed one of the records would change all of them in the same way, as they are nearly identical.

• Next, there are no “neighboring records” for a number of the Darwin adjustments simply because in the early part of the century there were no suitable neighboring stations. It’s not enough to have a random reference station somewhere a thousand km away from Darwin in the middle of the desert. You can’t adjust Darwin based on that. The GHCN homogenization method requires five well correlated neighboring “reference stations” to work.

From the reference cited above:

“In creating each year’s first difference reference series, we used the five most highly correlated neighboring stations that had enough data to accurately model the candidate station.”

and  “Also, not all stations could be adjusted. Remote stations for which we could not produce an adequate reference series (the correlation between first-difference station time series and its reference time series must be 0.80 or greater) were not adjusted.”

As I mentioned in my original article, the hard part is not finding five neighboring stations, particularly if you consider a station 1,500 km away as “neighboring”. The hard part is finding similar stations within that distance: stations whose first difference has a 0.80 correlation with the Darwin first difference.

(A “first difference” is a list of the changes from year to year of the data. For example, if the data is “31, 32, 33, 35, 34”, the first differences are “1, 1, 2, -1”. It is often useful to examine first differences rather than the actual data. See Peterson (PDF) for a discussion of the use of the “first-difference method” in climate science.)
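The first-difference and correlation computations are simple enough to sketch in a few lines; here is a minimal Python version (the function names are mine, not GHCN's):

```python
def first_difference(series):
    """Year-to-year changes: [31, 32, 33, 35, 34] -> [1, 1, 2, -1]."""
    return [b - a for a, b in zip(series, series[1:])]

def correlation(x, y):
    """Pearson correlation of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# The worked example from the text:
print(first_difference([31, 32, 33, 35, 34]))  # [1, 1, 2, -1]
```

The GHCN screening then asks whether `correlation(first_difference(candidate), first_difference(darwin))` reaches 0.80.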

Accordingly, I’ve been looking at the candidate stations. For the 1920 adjustment we need stations starting in 1915 or earlier. Here are all of the candidate stations within 1,500 km of Darwin that start in 1915 or before, along with the correlation of their first difference with the Darwin first difference:

WYNDHAM_(WYNDHAM_PORT) = -0.14

DERBY = -0.10

BURKETOWN = -0.40

CAMOOWEAL = -0.21

NORMANTON = 0.35

DONORS_HILL = 0.35

MT_ISA_AIRPORT = -0.20

ALICE_SPRINGS = 0.06

COEN_(POST_OFFICE) = -0.01

CROYDON = -0.23

CLONCURRY = -0.20

MUSGRAVE_STATION = -0.43

FAIRVIEW = -0.29

As you can see, not one of them is even remotely like Darwin. None of them are adequate for inclusion in a “first-difference reference time series” according to the GHCN. The Economist excoriated me for not including Wyndham in the “neighboring stations” (I had overlooked it in the list). However, the problem is that even if we include Wyndham, Derby, and every other station out to 1,500 km, we still don’t have a single station with a high enough correlation to use the GHCN method for the 1920 adjustment.
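Using the correlations listed above, the GHCN screening step amounts to a one-line filter (the 0.80 cutoff is from the GHCN description quoted earlier):

```python
# Correlations of each candidate station's first-difference series with
# Darwin's, as listed above. GHCN requires r >= 0.80 before a station can
# enter the first-difference reference series.
candidates = {
    "WYNDHAM_(WYNDHAM_PORT)": -0.14, "DERBY": -0.10, "BURKETOWN": -0.40,
    "CAMOOWEAL": -0.21, "NORMANTON": 0.35, "DONORS_HILL": 0.35,
    "MT_ISA_AIRPORT": -0.20, "ALICE_SPRINGS": 0.06, "COEN_(POST_OFFICE)": -0.01,
    "CROYDON": -0.23, "CLONCURRY": -0.20, "MUSGRAVE_STATION": -0.43,
    "FAIRVIEW": -0.29,
}
usable = {name: r for name, r in candidates.items() if r >= 0.80}
print(len(usable))  # 0 -- not one qualifying station, let alone the five required
```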

Now I suppose you could argue that you can adjust the 1920 Darwin records based on stations 2,000 km away, but even 1,500 km seems too far to do a reliable job. So while it is theoretically possible that the described GHCN method was used on Darwin, you’ll be a long, long way from Darwin before you find your five candidates.

• Next, the GHCN does use a good method to detect inhomogeneities. Here’s their description of their method.

To look for such a change point, a simple linear regression was fitted to the part of the difference series before the year being tested and another after the year being tested. This test is repeated for all years of the time series (with a minimum of 5 yr in each section), and the year with the lowest residual sum of the squares was considered the year with a potential discontinuity.
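The quoted test is straightforward to implement; here is a minimal sketch of my reading of that description (not GHCN's actual code, and the tie handling is my assumption):

```python
def rss_of_fit(xs, ys):
    """Residual sum of squares of an ordinary least-squares line through (xs, ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return sum((y - (my + slope * (x - mx))) ** 2 for x, y in zip(xs, ys))

def find_change_point(years, temps, min_seg=5):
    """Fit separate lines before and after each candidate year (at least
    min_seg points per segment) and return the split year with the lowest
    combined residual sum of squares."""
    best_year, best_rss = None, float("inf")
    for i in range(min_seg, len(years) - min_seg + 1):
        rss = rss_of_fit(years[:i], temps[:i]) + rss_of_fit(years[i:], temps[i:])
        if rss < best_rss:
            best_year, best_rss = years[i], rss
    return best_year

# A synthetic series with a clean 2-degree step at 1910 is split there:
years = list(range(1900, 1920))
temps = [0.0] * 10 + [2.0] * 10
print(find_change_point(years, temps))  # 1910
```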

This is a valid method, so I applied it to the Darwin data itself. Here’s that result:

Figure 2. Possible inhomogeneities in the Darwin Zero record, as indicated by the GHCN algorithm.

As you can see from the upper thin red line, the method indicates a possible discontinuity centered on 1939. However, once that discontinuity is removed, the rest of the record does not indicate any discontinuity (thick red line). By contrast, the GHCN adjustments (see Fig. 1 above) include no correction at the circa-1941 drop. Instead, they imply discontinuities around 1920, 1930, 1950, 1960, and 1980 … doubtful.

• Finally, the main recurring question is, why do I think the adjustments were made manually rather than by the procedure described by the GHCN? There are a number of totally independent lines of evidence that all lead to my conclusion:

1. It is highly improbable that a station would suddenly start warming at 6 C per century for fifty years, no matter what legitimate adjustment method was used (see Fig. 1).

2. There are no neighboring stations that are sufficiently similar to the Darwin station to be used in the listed GHCN homogenization procedure (see above).

3. The Darwin Zero raw data does not contain visible inhomogeneities (as determined by the GHCN’s own algorithm) other than the 1936-1941 drop (see Fig. 2).

4. There are a number of adjustments to individual years. The listed GHCN method does not make individual year adjustments (see Fig. 1).

5. The “Before” and “After” pictures of the adjustment don’t make any sense at all. Here are those pictures:

Figure 3. Darwin station data before and after GHCN adjustments. Upper panel shows unadjusted Darwin data, lower panel shows the same data after adjustments.

Before the adjustments we had the station Darwin Zero (blue line with diamonds), along with four other nearby temperature records from Darwin. They all agreed with each other quite closely: hardly a whisper of dissent among them, only small differences.

While GHCN were making the adjustment, two stations (Unadj 3 and 4, green and purple) vanished; I don’t know why. GHCN says they don’t use records under 20 years in length, which would explain Darwin 4, but Darwin 3 is twenty years in length. In any case, after those two series were removed, the remaining three temperature records were adjusted into submission.

In the “after” picture, Darwin Zero looks like it was adjusted with Sildenafil. Darwin 2 gets bent down almost to match Darwin Zero. Strangely, Darwin 1 is mostly untouched. It loses the low 1967 temperature, which seems odd, and the central section is moved up a little.

Call me crazy, but from where I stand, that looks like an un-adjustment of the data. They take five very similar datasets, throw two away, wrench the remainder apart, and then average them to get back to the “adjusted” value? Seems to me you’d be better off picking any one of the originals, because they all agree with each other.

The reason you adjust is because records don’t agree, not to make them disagree. And in particular, if you apply an adjustment algorithm to nearly identical datasets, the results should be nearly identical as well.

So that’s why I don’t believe the Darwin records were adjusted in the way that GHCN claims. I’m happy to be proven wrong, and I hope that someone from the GHCN shows up to post whatever method they actually used: the method that could produce such an unusual result.

Until someone can point out that mystery method, however, I maintain that the Darwin Zero record was adjusted manually, and that it is not a coincidence that it shows (highly improbable) warming.


Glenn

Don’t you mean point six (0.6C) per century?

brnn8r

Very nice analysis. Keep up the good work Mr Eschenbach.

tokyoboy

Willis, could you please lead me to the red “homogenized graph” at the GHCN site? Thanks.

RACookPE1978

In general terms, am I correct in understanding that the “time of observation bias” correction inserted every time Hansen’s GISS program runs artificially lowers temperature data before 1970, and raises (or leaves unchanged, but with no Urban Heat Island correction) all temperature data after 1970? From what I recall, artificial TOBS changes account for over 0.15 of the total 0.4 degree rise in the “supposed” ground temperature record.
Or over 1/3 of Hansen’s entire “measured” climate change is artificially inserted.
If so, why do they claim a TOBS correction is required at all?
Aren’t the only numbers used by Hansen/NOAA the day’s maximum and minimum values? How would those change based on what time of the day you read the max/min thermometer?
(Never mind whatever his “logic” is in using the same TOBS change for every record in every year. How many times did these earlier weathermen keep changing the time of day they wrote temperatures down?)

Paul Vaughan

Glenn (19:50:45) “Don’t you mean point six (0.6C) per century?”
See figure 1. If you need a grade 10 math tutorial, speak up.

Good work.
Correction: “highly improbability” should be “highly improbable.”

pat

willis, have u seen this?
19 Dec: TBR: NZ Study may hold key to faulty world temp data
A long-forgotten scientific paper on temperature trends in New Zealand may be the smoking gun on temperature manipulation worldwide.
Since Climategate first broke, we’ve seen scandal over temperature adjustments by NZ’s National Institute of Water and Atmospheric research, NIWA, which in turn prompted a fresh look at raw temperature data from Darwin and elsewhere.
Now, a study published in the NZ Journal of Science back in 1980 reveals weather stations at the heart of NIWA’s claims of massive warming were shown to be unreliable and untrustworthy by a senior Met Office climate scientist 30 years ago, long before global warming became a politically charged issue.
The story is published in tonight’s TGIF Edition, and has international ramifications.
That’s because the study’s author, former Met Office Auckland director Jim Hessell, found a wide range of problems with Stevenson Screen temperature stations and similar types of unit.
Hessell debunked a claim that New Zealand was showing a 1C warming trend between 1940 and 1980, saying the sites showing the biggest increases were either urbanized and suffering from urban heat island effect, or they were faulty Stevenson Screen installations prone to showing hotter than normal temperatures because of their design and location.
One of the conclusions is that urbanized temperature stations are almost a waste of time in climate study:
“For the purpose of assessing climatic change, a ‘rural’ environment needs to be carefully defined. Climatic temperature trends can only be assessed from rural sites which have suffered no major transformations due to changes in shelter or urbanisation, or from sites for which the records have been made homogenous. Urban environments usually suffer continual modifications due to one cause or another.”
“It is concluded that the warming trends in New Zealand previously claimed, are in doubt and that as has been found in Australia (Tucker 1975) no clear evidence for long term secular warming or cooling in Australasia over the last 50 years [1930-1980] exists as yet.”
Hessell divided weather stations in New Zealand into two classes, “A” and “B”, where A class locations had suffered increasing urbanization or significant site changes, and B class locations had not.
“It can be seen immediately that the average increase in mean temperatures at the A class stations is about five times that of the B class”, the study notes.
Among the studies listed as a contaminated A site is Kelburn, which was at the heart of the NIWA scandal a fortnight ago.
A link to the study can be found in TGIF Edition.
http://briefingroom.typepad.com/the_briefing_room/2009/12/nz-study-may-hold-key-to-faulty-world-temp-data.html

Michael Jankowski

Glenn,
No, look at the very top chart…from 1941 to about 1990 (50 yrs), the trend is +3 degrees. So it would be 6 deg/century.

Willis Eschenbach

Glenn (19:50:45) :

Don’t you mean point six (0.6C) per century?

Nope, that’s the amazing part. Six degrees per century.

Willis Eschenbach

tokyoboy (19:54:02) :

Willis, could you please lead me to the red “homogenized graph” at the GHCN site? Thanks.

I made the graph with data from the GHCN site. The data is at
ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v2
in the file called “v2.mean_adj”.
w.

Oh, I see.. they adjusted all the temps before 1952? 53? down by some constant. Notice the 1.7C drop about 1940 is intact. Then a smaller adjustment was applied step-wise to each of the subsequent decades. -2C before 1952, -1.2C for 1953 to 1960-ish, etc.

Bill of Brisbane

Thanks Anthony. Darwin has only two seasons each year – “the wet” (now) and “the dry”. Both tend to be pretty hot. It’s been consistently like this just as the unadjusted record accurately indicates.
Is there no end to the fabrications being perpetrated ?

Evan Jones

How would those change based on what time of the day you read the max/min thermometer?
If you measure too close to the high point, you get cases where the same “high” carries over to both days. And vice versa if you measure at the low point.
If it’s really cold at dawn Monday and that’s when I measure it, I’ll get a low a minute after dawn on Monday, with Sunday’s low being a minute before dawn. Two days of lows taken from a single cold interval. If dawn Tuesday is warmer than dawn Monday, it gets “left out”.
To avoid this, you need to measure at a time far removed from either the typical high point (mid-afternoon) or the typical low point (predawn).
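The carry-over effect Evan describes can be demonstrated with a toy simulation; every number below (the diurnal cycle, the cold snap, the observation hours) is invented purely for illustration:

```python
import math

def hourly_temp(h):
    """Toy diurnal cycle: minimum near 05:00, maximum near 17:00, with one
    invented cold snap during hours 27-30 (3-6 am of day two)."""
    base = 20 - 8 * math.cos(2 * math.pi * (h % 24 - 5) / 24)
    return base - (6 if 27 <= h <= 30 else 0)

def recorded_minima(temps, obs_hour):
    """Minima recorded by a min/max thermometer reset daily at obs_hour.
    The reading at the reset moment seeds the next interval, so consecutive
    'days' share that endpoint -- the carry-over described above."""
    mins, start = [], obs_hour
    while start + 24 < len(temps):
        mins.append(min(temps[start:start + 25]))  # 25 samples: both endpoints
        start += 24
    return mins

temps = [hourly_temp(h) for h in range(72)]  # three days of hourly readings
print([round(m, 1) for m in recorded_minima(temps, 0)])  # midnight reset: the cold snap is counted once
print([round(m, 1) for m in recorded_minima(temps, 6)])  # 6 am reset: the same snap shows up on two "days"
```

Resetting near the typical minimum (predawn) doubles the cold spell; resetting at midnight, well away from the extremes, counts it once.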

Willis Eschenbach

RACookPE1978 (20:03:45) :

In general terms, am I correct in understanding that the “time of observation bias” correction inserted every time Hansen’s GISS program runs artificially lowers temperature data before 1970, and raises (or leaves unchanged, but with no Urban Heat Island correction) all temperature data after 1970? From what I recall, artificial TOBS changes account for over 0.15 of the total 0.4 degree rise in the “supposed” ground temperature record.
Or over 1/3 of Hansen’s entire “measured” climate change is artificially inserted.
If so, why do they claim a TOBS correction is required at all?
Aren’t the only numbers used by Hansen/NOAA the day’s maximum and minimum values? How would those change based on what time of the day you read the max/min thermometer?
(Never mind whatever his “logic” is in using the same TOBS change for every record in every year. How many times did these earlier weathermen keep changing the time of day they wrote temperatures down?)

This is really a GISS question or a USHCN question, as GHCN do not do a TOBS adjustment per se. For the amount of the adjustment, see here.
Whether it is necessary or not depends on the station, the type of thermometer used, and the times of observation. The canonical document on this is Peterson (q.v.).

Willis Eschenbach

evanmjones (20:27:01) :

How would those change based on what time of the day you read the max/min thermometer?
If you measure too close to the high point, you get cases where the same “high” carries over to both days. And vice versa if you measure at the low point.
If it’s really cold at dawn Monday and that’s when I measure it, I’ll get lows a minute after dawn with the previous day’s low being a minute before dawn. Two days of lows taken from a single cold interval.
To avoid this, you need to measure at a time far away from either the typical high point (mid-afternoon) or the typical low point (predawn).

No, the “day” is midnite to midnite, so you will only get one high or low per day. The low is typically shortly after dawn, and the high somewhere in the late afternoon.
[REPLY – The 24-hour interval between times of observation is the “day” so far as the records are concerned, not midnight – midnight. There’s really no other way to do it. The best observation time is probably around 11 A.M. ~ Evan]

Glenn

Willis Eschenbach (20:15:45) :
Glenn (19:50:45) :
Don’t you mean point six (0.6C) per century?
“Nope, that’s the amazing part. Six degrees per century.”
Ah. That would make the region the fastest-warming area in the world, if memory serves, topping the Arctic and Siberia. About 3 times the global surface estimate in any event.

Peter

Glenn, the 6C per century slope is correct. No need to have a PhD to see that.

Michael

I already knew everything they explained to me in the special report on Climategate on FOX News, except for the way Mann and others referred to McKitrick & McIntyre in the e-mails.
Did those creepy climate scientists walk around the cubicles in their offices and refer to McKitrick & McIntyre as “M&M” as a joke often?
Apparently so.

tokyoboy

Willis Eschenbach (20:18:15) :
>tokyoboy (19:54:02) :Willis, could you please lead me to the
>red “homogenized graph” at the GHCN site? Thanks.
“I made the graph with data from the GHCN site. The data is at
ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v2 in the file called “v2.mean_adj”
Thanks….. but my PC can’t open the file, probably due to software mismatch.
Willis, don’t you have the data on such a popular software as PDF?
Thanks again.

Clive

Nice work Willis. Thank you.
A bit OT … carrying the case into the future.
So let us fast forward 30 years or so. If the top chart’s “adjusted” trend were to continue its upward slope, it would be unbearable at Darwin, yet folks will be carrying on as usual. “They” will be telling us the Arctic ice has disappeared when it is just fine. “They” will be telling us that the Maldives just sank, when they obviously are fine.
Back to Darwin Zero … If that upward trend continues at some point it would start to look actually silly … nonsensical and YET they could not reverse it could they? Because THAT would be a cooling trend. Gotcha!! ☺ Are “they” LOCKED into this big lie? They cannot reverse it now. Ohoh.
What I am getting at is this: at some future point (even if the BIG LIE continues) when will it all finally fall apart? (I am not as optimistic as others that Climategate can carry this…) When will empirical facts outweigh the manipulated data? (IPCC told us in 1991 that sea levels would rise by 300 mm by 2030… we are half way there in time and only 20% there in reality. BUT no one remembers their first report, do they?)
Sane folks (like us) know we’ve already reached that point (where observation belies fudged data), but when will it become obvious to people in Darwin, New York, London and Inuvik that the official word is BS?
Just musing aloud on a snowy eve. ☺
Thanks again Willis.

I appreciate your analysis.
There seems to be clear evidence of deception about global warming.
Earth’s climate is changing, has changed in the past, and will always change because Earth’s climate is controlled by the stormy Sun – a variable star.
There is also evidence of government involvement in deception about the composition of the Sun, its source of energy and solar neutrino oscillations.
It appears that politicians have trained scientists with grant funds like Pavlov trained dogs with dog biscuits.
Scientific integrity was the first victim.
My greatest fear is that democracy itself cannot survive if scientists become tools for misinformation by politicians.
With kind regards,
Oliver K. Manuel
Former NASA PI for Apollo

Scott of Melb Australia

Hi Willis Love your work by the way.
This temperature profile of Australia from the BOM link highlights the problem with using weather stations up to 1,500 km apart. To give you a clue, 1,500 km drops you from Darwin to Alice Springs, i.e. from the top middle down to near the bottom of the first dotted line in the middle of Australia: a different heat zone.
http://www.bom.gov.au/cgi-bin/silo/temp_maps.cgi
I did the temperature modeling work when unleaded petrol was introduced to Australia. The problem was that cars and petrol pumps were vapour locking. I developed the model that was adopted by the oil industry to relate 90th-percentile temperature data (using microfiche of available temp data) to vapour pressure.
This was developed because all you had to do was move as little as 150 km, particularly from down south, to end up with a problem. So temperature zones were constructed and vapour pressure limits were established to stop vapour locking as soon as you drove out of a capital city.
That’s how I am aware that small variations in distance will give a big difference in average temp data.
P.S. I left the oil industry back in 1997.

Keith Minto

In the days of analogue temperature measurement, would time of day be used for max/min readings or would a max/min thermometer (illustration only http://www.redhillgeneralstore.com/A31690.htm ) have been used at these stations? The latter would measure max/min at whatever time that this was reached and would need to be reset once a day. Just curious.

GreenIsAHoax

Global Climate Data Tampering Exposed!
Climate Data Tampering In Australia Detected
http://blogs.news.com.au/heraldsun/andrewbolt/index.php/heraldsun/comments/creating_warming_from_cold_austtralian_stats/
Climate Data Tampering In Darwin Detected
http://blogs.news.com.au/heraldsun/andrewbolt/index.php/heraldsun/comments/climategate_how_one_human_caused_darwin_to_warm/
Climate Data Tampering In New Zealand Detected
http://blogs.news.com.au/heraldsun/andrewbolt/index.php/heraldsun/comments/new_zealands_man_made_warming/
Climate Data Tampering In Russia Detected
http://blogs.news.com.au/heraldsun/andrewbolt/index.php/heraldsun/comments/climategate_was_russias_warming_also_man_made/
Climate Data Tampering In Alaska Detected
http://blogs.news.com.au/heraldsun/andrewbolt/index.php/heraldsun/comments/more_man_made_warming_this_time_in_alaska/
Climate Data Tampering In Sweden Detected
http://blogs.news.com.au/heraldsun/andrewbolt/index.php/heraldsun/comments/making_sweden_warmer/
Fraudulent Hockey Sticks And Hidden Data
http://joannenova.com.au/2009/12/fraudulent-hockey-sticks-and-hidden-data/#more-4660
Would You Buy A Used Car From These So-Called “Scientists”?
Our gullible alarmist friends have obviously fallen for this scam hook, line, and sinker!
First we had the IPCC report “The Science of Climate Change 1995”, where lead author Benjamin D. Santer removed the following conclusions made by genuine scientists, without the scientists being made aware of the change.
“None of the studies cited above has shown clear evidence that we can attribute the observed climate changes to the specific cause of increases in greenhouse gases.”
“No study to date has positively attributed all or part [of the climate change observed to date] to anthropogenic [man-made] causes.”
“Any claims of positive detection of significant climate change are likely to remain controversial until uncertainties in the total natural variability of the climate system are reduced.”
Then we have some choice quotes from so-called “consensus scientists”.
“The two MMs [Canadian skeptics Steve McIntyre and Ross McKitrick] have been after the CRU station data for years. If they ever hear there is a Freedom of Information Act now in the UK, I think I’ll delete the file rather than send to anyone.”
Phil Jones email, Feb 2 2005
“I can’t see either of these papers being in the next IPCC report, Kevin and I will keep them out somehow, even if we have to redefine what the peer-review literature is!”
Phil Jones Director, The CRU
[cutting skeptical scientists out of an official UN report]
“The fact is that we can’t account for the lack of warming at the moment, and it is a travesty that we can’t …there should be even more warming… the data are surely wrong”.
Kevin Trenberth, Climatologist, US Centre for Atmospheric Research
“…If anything, I would like to see the climate change happen, so the science could be proved right, regardless of the consequences. This isn’t being political, it is being selfish. “
Phil Jones Director, The CRU
“We have to get rid of the Mediæval Warm Period” Confided to geophysicist David Deming by the IPCC, 1995
[Many believe that man to be Jonathan Overpeck, which Prof. Deming didn’t deny in an email response, who would later also serve as an IPCC lead author.]
“We have 25 years or so invested in the work. Why should I make the data available to you, when your aim is to try and find something wrong with it?” Phil Jones Director, The CRU
”We have to offer up scary scenarios, make simplified, dramatic statements, and make little mention of any doubts we might have.” Professor Stephen Schneider
“Humans need a common motivation … either a real one or else one invented for the purpose. … In searching for a new enemy to unite us, we came up with the idea that pollution, the threat of global warming, water shortages, famine and the like would fit the bill. All these dangers are caused by human intervention so the real enemy then, is humanity itself.” Club of Rome declaration
“It doesn’t matter what is true, it only matters what people believe is true…. You are what the media define you to be. Greenpeace became a myth and fund generating machine.” Paul Watson, Co-Founder Greenpeace, Forbes, Nov. 1991
Now what conclusion would a rational and sceptical person come to?
Frederick Seitz, president emeritus of Rockefeller University and chairman of the George C. Marshall Institute, summed it up nicely after seeing the changes made to the IPCC report.
“In my more than 60 years as a member of the American scientific community, including service as president of both the National Academy of Sciences and the American Physical Society, I have never witnessed a more disturbing corruption of the peer-review process than the events that led to this IPCC report.”

“If you measure too close to the high point, you get cases where the same “high” carries over to both days. And vice versa if you measure at the low point.
If it’s really cold at dawn Monday and that’s when I measure it, I’ll get low a minute after dawn on Monday with the Sunday’s low being a minute before dawn. Two days of lows taken from a single cold interval.
To avoid this, you need to measure at a time far removed from either the typical high point (mid-afternoon) or the typical low point (predawn).”
—…—…—
Yes, understood. I’ve heard a similar explanation before.
Now, back to the purpose of my question: what, in your answer or in the physical world of real temperatures and real measurements, actually justifies lowering all of the country’s recorded temperatures prior to 1970?
We get a cold front coming through once every 10 – 16 days (maybe 25 – 30 days a year), and only half the year does that hypothetical cold front change temperatures drastically (in winter, really; summer fronts are usually less drastic). Of those few events, how many actually affected two days’ worth of readings? And of that small theoretical fraction, how many events really happened?
There is still no reason to use TOBS as a reason to change the earth’s temperature records.

Willis Eschenbach (20:27:37) :
Thank you.

Doug in Seattle

I just don’t see how any rational person can look at what was done at Darwin and defend the results.
Perhaps it’s too late for a quick “oops, we’ll fix it”. It does kind of amaze me that they circled the wagons with the Economist article. This will just end up making them look even stupider.

Rural Beeville 1895 – 2005
NOAA adjustments add a +2.8°F / Century Trend.
I will post a link to a graph tomorrow.

Willis Eschenbach

evanmjones (20:27:01) :

No, the “day” is midnite to midnite, so you will only get one high or low per day. The low is typically shortly after dawn, and the high somewhere in the late afternoon.

[REPLY – The 24-hour interval between times of observation is the “day” so far as the records are concerned, not midnight – midnight. There’s really no other way to do it. The best observation time is probably around 11 A.M. ~ Evan]

You are correct. According to Peterson, the “first order” stations use a calendar day, midnite to midnite. Understandably, some people don’t want to use that interval, so a number of the stations don’t use midnite to midnite. But as long as you are not taking the observation near the time of the min or max, it shouldn’t be a problem. Your suggestion of 11 AM is a good one.
However, all of this is really a subject for another thread.

Willis Eschenbach

tokyoboy (20:54:32) :

“I made the graph with data from the GHCN site. The data is at
ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v2 in the file called “v2.mean_adj”

Thanks….. but my PC can’t open the file, probably due to software mismatch.
Willis, don’t you have the data on such a popular software as PDF?
Thanks again.

The data is plain text, compressed using “Z”. There are also .zip versions available at
www1.ncdc.noaa.gov/pub/data/ghcn/vw/zipd/
You could try those.
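For anyone who wants to read the file programmatically once it is decompressed, here is a sketch of a parser based on my understanding of the v2.mean fixed-width layout; verify the column positions against the readme on the FTP site before trusting them:

```python
def parse_v2_mean_line(line):
    """Parse one record of GHCN v2.mean / v2.mean_adj.

    Assumed layout (check the readme on the FTP site): an 11-character
    station id, a 1-digit duplicate number, a 4-digit year, then twelve
    5-character monthly means in tenths of a degree C, with -9999 = missing."""
    station = line[0:11]
    duplicate = line[11]
    year = int(line[12:16])
    months = []
    for m in range(12):
        raw = int(line[16 + 5 * m : 21 + 5 * m])
        months.append(None if raw == -9999 else raw / 10.0)
    return station, duplicate, year, months

# Illustrative record only (made-up values, not real Darwin data):
# eleven months of 29.2 C and one missing value.
line = "501941200000" + "1920" + "  292" * 11 + "-9999"
print(parse_v2_mean_line(line)[2])  # 1920
```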

Dan Martin

Can someone explain to me why in the top graph the temp. anomaly scale is different on the right side from the left side?

J.Hansford

Seems to me that there is no way they could accidentally get that result.
The methodology seems to be striving towards a warming bias…. and deliberately so.
….There. I’ve said it.

Willis Eschenbach

My bad, one letter wrong
www1.ncdc.noaa.gov/pub/data/ghcn/v2/zipd/

Mooloo

There must be people who have lived, and more importantly, farmed in Darwin for fifty years. Have these people noticed the climate changing that rapidly? There must be some evidence, such as having to farm different things. Anyone from that part of the world here who has noticed things are 3° warmer?
Similarly, I find it hard to look at the figures shown for New Zealand and reconcile them with the actual weather we have. NIWA allege very rapid recent warming, yet there is no discernible difference. I’ve seen no actual evidence presented that our farmers are having to adapt, for example. My parents are keen gardeners, but haven’t noticed that frosts are any lighter or that different plants will now grow.
Does this not bother the warming scientists? How do they reconcile their evidence with what they see with their own eyes?

Peter

Clive, I made a similar comment on another thread. Yes, eventually the disparity between reality and the official record will be so obvious that one would have to be a lunatic not to notice something smells in climate science. What’s worse, the disparity between the computer model predictions and even the fudged temperature data is already too significant. The computer models must now be dumped. Anyone who still relies on them is a fool.

Neil Gibson

Hi Anthony,
As an engineer with a lifetime of experience in electrical measurement, I cannot understand why in all the discussion I have not seen the words Quality Assurance mentioned. All of our measurements and instruments had to have traceability, and customers would have full access to inspect our labs and our measurements. Every engineering firm dealing with Government or Semi-Government bodies had to have similar QA procedures. Now, while I can understand that it would be difficult to apply QA to scientific theory, it appears to me that the measurement of temperature should definitely have QA procedures and documentation. These should be available on request from the authorities concerned, as no Government purchasing body in Australia or other Western countries will accept product from firms without QA. It is totally inconceivable to me that Anthony Watts and his crew of amateurs should be required to perform basic site QA.
The fact is that not only do we have measurement without any quality assurance, the very processes used are not even divulged.
I think all the climate warmists need is a detailed independent Quality Audit, and most global warming will disappear!
REPLY: Agreed. I’ve been saying this for years. For example, why doesn’t climate data have an ISO-8000 certificate? Why doesn’t NCDC have an ISO-9000 rating? Private industry does these things, yet government seems to do everything haphazardly. – Anthony

Paul R

Nothing going on in Gove; it’s at the same latitude as Darwin, roughly. It needs Homer Geneizing.
http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_nccObsCode=36&p_display_type=dataFile&p_startYear=&p_stn_num=014508

Keith Minto

May have answered my own question.
This is from the Australian BOM site http://www.bom.gov.au/climate/cdo/about/definitionstemp.shtml. They measure at 9am, and it seems that the minimum recorded is for that day (the reasonable assumption being that the 9am temperature is past the morning minimum on a midnight-to-midnight cycle), while the maximum recorded is that registered on the previous day. So on these thermometers max/mins are recorded whenever they occur during the day, and 9am is a convenient time for reading the minimum for that day and the maximum for the previous day.
Was this always the case? It may affect the recorded temperatures.
Thanks Willis, for your thoughtful analysis.
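[Editor’s note: the 9am reading convention Keith describes can be sketched as a few lines of code. This is purely an illustration of the convention as he states it, not BOM software; the function name, dates, and temperature values are invented.]

```python
from datetime import date, timedelta

def assign_readings(obs_date, max_reading, min_reading):
    """Apply the BOM convention described above: the maximum
    read from the thermometer at 9am is credited to the
    previous day, while the minimum is credited to the day
    of the reading itself."""
    return {
        "max": (obs_date - timedelta(days=1), max_reading),
        "min": (obs_date, min_reading),
    }

# A 9am reading on 2 January: the max belongs to 1 January,
# the min to 2 January.
r = assign_readings(date(2023, 1, 2), max_reading=34.5, min_reading=24.1)
print(r["max"][0])  # 2023-01-01
```

This makes clear why the question matters: if the convention changed at some point in the record, daily max values would silently shift by one day.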

Richard Sharpe

Scott of Melb Australia (20:58:57) said:

(using microfish of available temp data)

Hmmm, is that a new unit of measure for temperature data that I am not aware of or did you mean microfiche?

Paul Vaughan

Using the link Willis provided, I found this:
ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/daily/ghcnd-stations.txt
Listed in there are Canadian stations with serious errors. [I took the time to have a few of my concerns acknowledged by government officials (just to see if they would admit to quality issues – & they did without hesitation).]

steven mosher

hey Willis, petersons say the TOBS code can be purchased for a fee

Billyquiz

[snip -waaayyy off topic, sorry]

JP Miller

Willis, I do not understand the two different scales on the left vertical axis and the right vertical axis. They don’t match; should they?

Geoff Sherrington

Willis,
Glad to see you sticking to your guns.
Clarification about 1940. The year 1940 is in my official data as the year the station moved from the Post Office to the airport (014016 to 014015).
Other official BOM data show a comparison which, unless mislabelled by them, compares these 2 sites (lats and longs given) from Jan 1967 to Dec 1973. There is inconsequential change in the average temperature, though there is a bigger range in monthly data at one station, particularly in minimum temperature.
Although this study was done after the shift, it seems to be compelling evidence that no significant correction needed to be applied to the long term data because of the station shift.
It is an unresolved matter whether there were problems at the older PO station before 1940. I have never seen any explanation. I do not know why people make a correction in the few years before the station shift. Until a reason is given, I cannot see cause for anyone to make a step change before 1940, or more specifically because of the station change in 1940 (which some people report was 1941, but then…). An algorithm that makes a step change for no explained reason is hardly trustworthy.

Nick Stokes

Willis,
You still haven’t recognised the most cogent criticism of Giorgio Gilestro, who showed that if you looked at the whole distribution of GHCN adjustments, and not just one station, the distribution is fairly symmetric, with stations almost as likely to have the trend adjusted down as up. The average upward adjustment to trend was 0.0175 C/decade, much less than the Darwin figure.
You say that it is improbable that a station would show an apparent gradient change to a late rise of 6C/century. You give no basis for this statement. Giorgio’s kind of analysis does not help with that figure, but it does indicate how often an adjustment can produce a rise comparable to Darwin’s. The result is more meaningful when restricted to longer series, since a small change to a short series can too easily produce a large trend change. So looking at stations with more than 80 years in the adjusted record, this histogram shows the position of Darwin 0. It is an outlier – 31st in a list of 2074 stations, ordered by change in gradient. But not so improbable as to say that it had to be adjusted manually. And 17 stations were adjusted downward by the same amount or more as Darwin was adjusted upwards.
You are wrong in saying that the GHCN algorithm would have adjusted the duplicates equally. The algorithm identifies changepoints by looking at whether the difference in slope between the sections on each side is over a certain limit. That slope depends on the length of sample.
Do you still maintain your underlined charge that
“Those, dear friends, are the clumsy fingerprints of someone messing with the data Egyptian style … they are indisputable evidence that the ‘homogenized’ data has been changed to fit someone’s preconceptions about whether the earth is warming.”?
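[Editor’s note: the slope-difference test Nick describes can be sketched roughly as follows. This is a toy illustration of the idea, not GHCN’s actual algorithm (which also uses reference series and significance testing); the window length and threshold are invented. It shows his point that the detected changepoints depend on segment length and slope.]

```python
import numpy as np

def slope(y):
    """Least-squares trend of a series y, per time step."""
    x = np.arange(len(y))
    return np.polyfit(x, y, 1)[0]

def find_changepoints(series, window=10, threshold=0.05):
    """Flag indices where the trend of the segment after the
    point differs from the trend of the segment before it by
    more than `threshold`. Illustrative only."""
    points = []
    for i in range(window, len(series) - window):
        left = slope(series[i - window:i])
        right = slope(series[i:i + window])
        if abs(right - left) > threshold:
            points.append(i)
    return points

# A flat series that breaks into a rising trend at index 20:
data = np.concatenate([np.zeros(20), np.linspace(0, 2, 20)])
breaks = find_changepoints(data)  # flags points around the break
```

Because the test depends on the slopes of the particular segments on each side, near-identical duplicate series of different lengths can yield different changepoints, which bears on the question of whether the Darwin duplicates should have been adjusted identically.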

Michael

“Santa makes his rounds later this week, but I’ve already received one of the best gifts ever: the complete unmasking of one of the most insidious movements of recent history – the radical effort to force reckless and needless constraints onto the human race in an attempt to change the planet’s climate.”
Mark Davis: The Gift of ‘Climategate’ is a Fresh Start
http://www.dallasnews.com/sharedcontent/dws/dn/opinion/viewpoints/stories/DN-markdavis_1220edi.State.Edition1.1d27b2e.html

Glenn

JP Miller (22:39:57) :
“Willis, I do not understand the two different scales on the left vertical axis and the right vertical axis. They don’t match; should they?”
Read the legend for the heavy black line, at the bottom of the graph.

Willis Eschenbach

Dan Martin (21:42:09) :

Can someone explain to me why in the top graph the temp. anomaly scale is different on the right side from the left side?

Merely for clarity, so the black line (adjustments) doesn’t obstruct the view of what’s been done.

Norm in Calgary

Back to Darwin Zero … If that upward trend continues at some point it would start to look actually silly … nonsensical and YET they could not reverse it could they? Because THAT would be a cooling trend. Gotcha!! ☺ Are “they” LOCKED into this big lie? They cannot reverse it now. Ohoh.
I think they want Copenhagen agreed ASAP because they probably see the temperatures declining in the near future and want to be able to claim that CO2 reduction is the reason. Hence the panic to start CO2 reduction ASAP, before the temperatures drop while CO2 continues to rise. I’m not sure how they would manage to record CO2 dropping enough to make a difference, unless they manipulate the data, but they’d never do that; or they claim the positive multiplier effect also works as a negative multiplier effect in reverse so that a teeny drop could save the world.

Michael

“Global warming has of late been a very hot topic in social media, and last week it was hotter than ever. Much of the added fuel came from climate change believers who engaged in the debate that had been dominated by skeptics.”
No Denying the Heat of Global Warming Debate in the Blogosphere
http://pewresearch.org/pubs/1446/global-warming-debate-on-blogs