Darwin Zero Before and After

Guest Post by Willis Eschenbach

Recapping the story begun at WUWT here and continued at WUWT here: data from the temperature station Darwin Zero in northern Australia was found to have been radically adjusted, with the adjusted data (red line) showing huge warming while the unadjusted data (blue line) showed that Darwin Zero was actually cooling over the period of the record. Here is the adjustment to Darwin Zero:

Figure 1. The GHCN adjustments to the Darwin Zero temperature record.

Many people have written in with questions about my analysis, and I thank everyone for their interest. I’m answering them as fast as I can, but I cannot answer them all, so I am trying to pick the most relevant ones. This post answers a few.

• First, there has been some confusion about the data. I am using solely GHCN numbers and methods. They will not match the GISS or the CRU or the HadCRUT numbers.

• Next, some people have said that these are not separate temperature stations. However, GHCN adjusts them and uses them as separate temperature stations, so you’ll have to take that question up with GHCN.

• Next, a number of people have claimed that the reason for the Darwin adjustment was that it is simply the result of the standard homogenization done by GHCN based on comparison with other neighboring station records. This homogenization procedure is described here (PDF).

While it sounds plausible that Darwin was adjusted as the GHCN claims, if that were the case the GHCN algorithm would have adjusted all five of the Darwin records in the same way. Instead, they have been adjusted differently (see below). This argues strongly that the adjustments were not made by the listed GHCN homogenization process. Any process that changed one of the records would change all of them in the same way, as they are nearly identical.

• Next, there are no “neighboring records” for a number of the Darwin adjustments, simply because in the early part of the century there were no suitable neighboring stations. It’s not enough to have a random reference station somewhere a thousand km away from Darwin in the middle of the desert; you can’t adjust Darwin based on that. The GHCN homogenization method requires five well-correlated neighboring “reference stations” to work.

From the reference cited above:

“In creating each year’s first difference reference series, we used the five most highly correlated neighboring stations that had enough data to accurately model the candidate station.”

and: “Also, not all stations could be adjusted. Remote stations for which we could not produce an adequate reference series (the correlation between first-difference station time series and its reference time series must be 0.80 or greater) were not adjusted.”

As I mentioned in my original article, the hard part is not finding five neighboring stations, particularly if you consider a station 1,500 km away as “neighboring”. The hard part is finding similar stations within that distance: stations whose first differences have a 0.80 correlation with the Darwin first differences.

(A “first difference” is a list of the changes from year to year of the data. For example, if the data is “31, 32, 33, 35, 34”, the first differences are “1, 1, 2, -1”. It is often useful to examine first differences rather than the actual data. See Peterson (PDF) for a discussion of the use of the “first-difference method” in climate science.)
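To make this concrete, here is a minimal Python sketch of the first-difference comparison. It is not GHCN’s code (which has never been released); it simply illustrates the calculation using the toy numbers from the definition above and a hypothetical candidate station:

```python
# Minimal sketch of the first-difference comparison described above.
# Not GHCN's code; assumes two complete annual series of equal length.

def first_differences(series):
    """Year-to-year changes, e.g. [31, 32, 33, 35, 34] -> [1, 1, 2, -1]."""
    return [b - a for a, b in zip(series, series[1:])]

def correlation(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

darwin = [31, 32, 33, 35, 34]       # toy data, not the real record
candidate = [30, 30, 32, 33, 33]    # hypothetical neighboring station

r = correlation(first_differences(darwin), first_differences(candidate))
print(f"first-difference correlation: {r:.2f}")  # GHCN requires 0.80 or greater
```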

Accordingly, I’ve been looking at the candidate stations. For the 1920 adjustment we need stations starting in 1915 or earlier. Here are all of the candidate stations within 1,500 km of Darwin that start in 1915 or before, along with the correlation of their first difference with the Darwin first difference:

WYNDHAM_(WYNDHAM_PORT) = -0.14

DERBY = -0.10

BURKETOWN = -0.40

CAMOOWEAL = -0.21

NORMANTON = 0.35

DONORS_HILL = 0.35

MT_ISA_AIRPORT = -0.20

ALICE_SPRINGS = 0.06

COEN_(POST_OFFICE) = -0.01

CROYDON = -0.23

CLONCURRY = -0.20

MUSGRAVE_STATION = -0.43

FAIRVIEW = -0.29

As you can see, not one of them is even remotely like Darwin. None of them are adequate for inclusion in a “first-difference reference time series” according to the GHCN. The Economist excoriated me for not including Wyndham in the “neighboring stations” (I had overlooked it in the list). However, the problem is that even if we include Wyndham, Derby, and every other station out to 1,500 km, we still don’t have a single station with a high enough correlation to use the GHCN method for the 1920 adjustment.
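To show just how far short the candidates fall, here is a short Python sketch (mine, not GHCN’s) that screens the list above against the GHCN’s stated requirements of a 0.80 first-difference correlation and five qualifying stations:

```python
# Screen the candidate stations listed above against GHCN's stated
# requirements. The values are the first-difference correlations
# with Darwin given above.

candidates = {
    "WYNDHAM_(WYNDHAM_PORT)": -0.14, "DERBY": -0.10, "BURKETOWN": -0.40,
    "CAMOOWEAL": -0.21, "NORMANTON": 0.35, "DONORS_HILL": 0.35,
    "MT_ISA_AIRPORT": -0.20, "ALICE_SPRINGS": 0.06,
    "COEN_(POST_OFFICE)": -0.01, "CROYDON": -0.23, "CLONCURRY": -0.20,
    "MUSGRAVE_STATION": -0.43, "FAIRVIEW": -0.29,
}

usable = {name: r for name, r in candidates.items() if r >= 0.80}
print(f"stations meeting the 0.80 threshold: {len(usable)} of {len(candidates)}")
print("enough for a five-station reference series:", len(usable) >= 5)
# Result: 0 of 13 qualify, so no valid reference series can be built.
```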

Now I suppose you could argue that you could adjust the 1920 Darwin records based on stations 2,000 km away, but even 1,500 km seems too far away to do a reliable job. So while it is theoretically possible that the GHCN’s described method was used on Darwin, you’ll be a long, long way from Darwin before you find your five candidates.

• Next, the GHCN does use a good method to detect inhomogeneities. Here is their description of that method:

To look for such a change point, a simple linear regression was fitted to the part of the difference series before the year being tested and another after the year being tested. This test is repeated for all years of the time series (with a minimum of 5 yr in each section), and the year with the lowest residual sum of the squares was considered the year with a potential discontinuity.

This is a valid method, so I applied it to the Darwin data itself. Here’s that result:

Figure 2. Possible inhomogeneities in the Darwin Zero record, as indicated by the GHCN algorithm.

As you can see from the upper thin red line, the method indicates a possible discontinuity centered at 1939. However, once that discontinuity is removed, the rest of the record shows no further discontinuities (thick red line). By contrast, the GHCN adjustments (see Fig. 1 above) find no discontinuity in 1941. Instead, they claim discontinuities around 1920, 1930, 1950, 1960, and 1980 … doubtful.
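Here is a minimal Python sketch of that two-segment regression test as I read the quoted description (again, GHCN has not released its actual code). Given a complete annual series, it returns the split year with the lowest combined residual sum of squares:

```python
# Sketch of the changepoint test quoted above: fit one straight line
# before and one after each candidate year, and pick the split with the
# lowest total residual sum of squares (minimum 5 years per segment).
# My reading of the GHCN description, not their code.

def rss_of_fit(t, y):
    """Residual sum of squares of an ordinary least-squares line."""
    n = len(t)
    mt, my = sum(t) / n, sum(y) / n
    stt = sum((a - mt) ** 2 for a in t)
    sty = sum((a - mt) * (b - my) for a, b in zip(t, y))
    slope = sty / stt if stt else 0.0
    intercept = my - slope * mt
    return sum((b - (intercept + slope * a)) ** 2 for a, b in zip(t, y))

def find_changepoint(years, temps, min_len=5):
    """Return (year, total_rss) for the most likely discontinuity."""
    best = None
    for i in range(min_len, len(years) - min_len + 1):
        total = rss_of_fit(years[:i], temps[:i]) + rss_of_fit(years[i:], temps[i:])
        if best is None or total < best[1]:
            best = (years[i], total)
    return best

# Toy demonstration: a series with a clear step down in 1910.
years = list(range(1900, 1920))
temps = [29.0] * 10 + [28.4] * 10
print(find_changepoint(years, temps))  # -> (1910, 0.0)
```

Run over the Darwin Zero annual means, this kind of test is what flags the drop around 1936-1941 shown in Figure 2, and nothing else.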

• Finally, the main recurring question is, why do I think the adjustments were made manually rather than by the procedure described by the GHCN? There are a number of totally independent lines of evidence that all lead to my conclusion:

1. It is highly improbable that a station would suddenly start warming at 6°C per century for fifty years, no matter what legitimate adjustment method was used (see Fig. 1).

2. There are no neighboring stations that are sufficiently similar to the Darwin station to be used in the listed GHCN homogenization procedure (see above).

3. The Darwin Zero raw data does not contain visible inhomogeneities (as determined by the GHCN’s own algorithm) other than the 1936-1941 drop (see Fig. 2).

4. There are a number of adjustments to individual years. The listed GHCN method does not make individual year adjustments (see Fig. 1).

5. The “Before” and “After” pictures of the adjustment don’t make any sense at all. Here are those pictures:

Figure 3. Darwin station data before and after GHCN adjustments. Upper panel shows unadjusted Darwin data, lower panel shows the same data after adjustments.

Before the adjustments we had the station Darwin Zero (blue line with diamonds), along with four other nearby temperature records from Darwin. They all agreed with each other quite closely: hardly a whisper of dissent among them, only small differences.

Somewhere in the course of making the adjustments, two stations (Unadj 3 and 4, green and purple) vanished. I don’t know why. GHCN says they don’t use records under 20 years in length, which applies to Darwin 4, but Darwin 3 is twenty years long. In any case, after those two series were removed, the remaining three temperature records were adjusted into submission.

In the “after” picture, Darwin Zero looks like it was adjusted with Sildenafil. Darwin 2 gets bent down almost to match Darwin Zero. Strangely, Darwin 1 is mostly untouched, although it loses the low 1967 temperature, which seems odd, and its central section is moved up a little.

Call me crazy, but from where I stand, that looks like an un-adjustment of the data. They take five very similar datasets, throw two away, wrench the remainder apart, and then average them to get back to the “adjusted” value? Seems to me you’d be better off picking any one of the originals, because they all agree with each other.

The reason you adjust is because records don’t agree, not to make them disagree. And in particular, if you apply an adjustment algorithm to nearly identical datasets, the results should be nearly identical as well.

So that’s why I don’t believe the Darwin records were adjusted in the way that GHCN claims. I’m happy to be proven wrong, and I hope that someone from GHCN shows up to post whatever method they actually used, the method that could produce such an unusual result.

Until someone can point out that mystery method, however, I maintain that the Darwin Zero record was adjusted manually, and that it is not a coincidence that it shows (highly improbable) warming.




303 Comments
wobble
December 21, 2009 3:51 pm

Nick Stokes (15:43:36):
“I don’t believe that any of these adjustments were done manually.”
Does that include Darwin?

wobble
December 21, 2009 4:00 pm

Nick Stokes (15:43:36):
“I don’t believe that any of these adjustments were done manually.”
Actually, cancel my last question. It doesn’t matter.
Willis is attempting to show a warming adjustment (Darwin) which may not have been made objectively by the Peterson algorithm.
GG attempted to demonstrate that it didn’t matter, since all adjustment trends offset each other. And you claim on his blog that this is true post 1940 and post 1970 (but you didn’t show your work).
So neither GG nor you really addresses the issue. What happened with the Darwin adjustments DOES matter even IF GG is right.

Nick Stokes
December 21, 2009 4:07 pm

wobble (15:51:21) :
Yes. I believe they were all done by the same computer algorithm, which may or may not have been exactly Peterson’s.

Dave F
December 21, 2009 4:18 pm

Nick Stokes (15:41:22) :
Still, what does the graph of raw data look like next to the graph of adjusted data? Is there a difference of 0.0175C, 0.2C, or some other difference? Isn’t this the best way to determine what effect the adjustments have, rather than making statistical sausage out of the data?

wobble
December 21, 2009 4:28 pm

Nick Stokes (16:07:54) :
“”Yes. I believe they were all done by the same computer algorithm,””
Well Willis disagrees with you.
Now we’re back to 1st base, right?

JJ
December 21, 2009 4:45 pm

Willis,
Help is being offered to you. Do not take offense when the holes in your argument are pointed out, and do not lash out. Fix the holes. It results in a stronger argument.
GHCN station nomenclature documents that the first three digits ID the country. The next five digits are the nearest WMO station number. The next three digits are a modifier, and are zero if that is a WMO station. The final digit (the 0, 1, 2, 3 and 4 that differentiate the five Darwin records) is called the ‘duplicate number’ (see the parsing sketch at the end of this comment). You need to know how multiple duplicates are treated in the GHCN, as opposed to multiple stations.
GHCN does not appear to have a numeric limit on the distance for ‘neighboring’. Perhaps you think they should. You can make your case for that, and it might prove fruitful. But you cannot impose a limit that is not present in their methods, and then use that to claim that they didn’t follow their methods. Those are two different issues.
You claim that the GHCN methodology cannot adjust different record duplicates differently. You rely on that as an assumption to make further claims. However, you provide no support whatsoever for that claim.
On the other hand, it is patently obvious how that might occur: Two different record duplicates, having different lengths, can easily have different correlation with a potential reference site. This is true even if those records overlap for some portion of their length, and have strong agreement in the overlap. Note that the reason behind the ‘hiding of the decline’ was to conceal just such a record length induced change in correlation …
Two record duplicates correlating differently to potential reference sites would result in them being adjusted with different reference series, which would obviously permit differing adjustments. Instead of assuming that this did not happen, find out. It would be valuable for you to know if that occurred …
Similarly, instead of simply assuming that the GHCN methodology limits its adjustments to the trend of the reference series, find out. It would be valuable for you to know if that is true …
On the same note, instead of simply assuming that the GHCN methodology limits the trend of the reference series to the trend of one of its inputs, find out. It would be valuable for you to know if that is true …
You are making many unwarranted assumptions in order to make your points. This not only results in false or flimsy arguments for those points, it prevents you from making other points that may be more supportive of your ultimate goal.
You are not discussing Darwin in a vacuum. The only reason that anyone would give a gluteus rattius about what you are doing here is that it implies there is something wrong with the aggregate.
Currently, there is no justification for that implication. The GHCN methodology document explicitly acknowledges that there may be large adjustments to individual stations like Darwin. The GHCN methodology document explicitly acknowledges that those large adjustments to individual stations like Darwin may not track the local temperatures well. The GHCN methodology document also asserts that these are not an issue with large scale, long term temperature estimates.
So far, you have confirmed what the GHCN methodology states. If your point were to confirm GHCN, you would be well on your way. But that isn’t your point …
Finally, in regards to your ‘you go out and do something before you complain about my work’ meme: That is precisely the refuge that the Hokey Team has taken with Steve Mc. “We’re not going to pay any attention to the deficiencies that you are bringing to light in our work. Who are you to show us holes in our methods? Go publish yourself.” Then they shut themselves away and demand that only people who agree with them 100% should dare speak to them.
This is not scientific behaviour. Nor a winning strategy.
Accept help.
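For concreteness, here is a minimal Python sketch of the station ID layout described above. The field widths follow the comment; the example ID is hypothetical:

```python
# Sketch of the GHCN station ID layout described in the comment above:
# 3-digit country + 5-digit nearest WMO number + 3-digit modifier
# + 1-digit duplicate number. The example ID below is hypothetical.

def parse_ghcn_id(station_id: str) -> dict:
    assert len(station_id) == 12, "expects a 12-character GHCN ID"
    return {
        "country": station_id[0:3],
        "wmo": station_id[3:8],
        "modifier": station_id[8:11],   # "000" means it is the WMO station itself
        "duplicate": station_id[11],    # e.g. 0-4 for the five Darwin records
    }

print(parse_ghcn_id("501941200000"))
# {'country': '501', 'wmo': '94120', 'modifier': '000', 'duplicate': '0'}
```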

JJ
December 21, 2009 5:00 pm

Nick Stokes:
“Yes, but USHCN uses metadata files.”
Explaining how they do it is not proof that they can’t 🙂
“GHCN explicitly doesn’t, because it says that the way they are kept is just too varied across countries. If GHCN wanted to compile a history of manual changes, they would have to in effect create their own metadata files.”
Which they could do. But they don’t, opting instead to cut effort and use statistical methods that can apparently produce a 6C trend where none exists in the data. Yet they stick with the metadata-based adjustments in the US. If the statistical methods are sufficient to polish crappy third-world data into a gem of a temp estimate, why not apply them everywhere?
Does the answer lie near the fact that the USHCN adjustments, whatever the distribution of their +/- magnitudes is, add a strong, nearly hockey stick shaped warming trend into the final US temp estimate?
BTW, where is the similar plot of GHCN gridded temps, comparing the net effect of raw vs adjusted data on the global estimate? Put the ‘nearly symmetric distribution’ and other handwaving aside. What is the net effect of GHCN temp adjustments on GHCN temp estimates?

Geoff Sherrington
December 21, 2009 5:08 pm

To Nick Stokes
Coonabarabran NSW is given as an example of an NOAA NEGATIVE adjustment, to offset the positive 6 deg per century rate over 50 years at Darwin.
Here is a summary of Coonabarabran from recent Australian Burau of Meteorology files:
st,064008,64 ,COONABARABRAN (NAMOI STREET) ,01/1879, ,-31.2712, 149.2714,GPS ,NSW, 505.0, 510.0,94728,1957,2007, 99, 99, *, 0, *, 0,#
In short, although people settled Coonabarabran in the 1870s, the weather station was judged unreliable for use by the BOM except for the period 1957 to now (or 2007 as it shows above).
On the other hand, the BOM accept the Darwin record back to 1869, with gaps for events like bombing in WWII of a couple of months.
Also, see an explanation for Coonabarabran:
“A Notable Frost Hollow at Coonabarabran, New South Wales
Blair Trewin, National Climate Centre, Australian Bureau of Meteorology, Melbourne, Victoria. E-mail: b.trewin@bom.gov.au

Parallel observations, taken for 28 months between July 2001 and October 2003 at two sites at Coonabarabran, New South Wales, show that topography has a dramatic influence on minimum temperatures at the two locations. Over the period of the study mean minimum temperatures at a valley site were 4.9°C lower than those at a plateau site 6.6 km away and 133 metres higher in elevation, with differences of up to 14.3°C occurring on individual nights.

The observed minimum temperature differences were greater in winter (mean difference 6.0°C) than in summer (3.0°C). Strong relationships were found between the magnitude of the minimum temperature difference at individual nights and wind speed and cloud amount at 0300 and 0600 (local time), with the minimum temperature difference largely disappearing on nights when the wind speed exceeded 8 m s-1 at the plateau site.

There was also a marked tendency, particularly in winter, for the largest temperature differences to occur on the coldest nights, with the 10th percentile minimum temperatures at the two sites differing by 7.6°C during the period of overlap. This corresponds to a dramatic difference in the frost risk between the two sites, with minima falling below 0°C on 196 occasions during the 28 months of the study at the valley site, but not at all at the plateau site. For a 2°C threshold the figures are 282 and 14 days respectively.

Mean maximum temperatures at the plateau site were 1.0°C lower than those at the valley site, approximately consistent with the environmental lapse rate. There is little seasonal or day-to-day variation, with the standard deviation of the daily differences being only 0.6°C, compared with 3.7°C for minima.

The large differences in minimum temperature identified in this study reinforce the fact that minimum temperatures are highly dependent on local topography. This has substantial implications for the mapping of frost risk, and other related climatic variables, at resolutions finer than the spacing of the station network in areas with significant local relief.”
One suspects that there was an event like a site change in the 1950s. There might not have been. The absence of such information is used with dubious intent by pseudo-scientists to cast doubt. Such information also explains the benefit of understanding individual stations well before jamming them into supercomputers with dumb inputs.
NOAA is therefore wrong again to use rejected data and GG was scientifically slack for not checking his sources.
Yours was not a quality post, Nick. The first example I tried gets 0/10.

Nick Stokes
December 21, 2009 5:10 pm

JJ (17:00:37): What is the net effect of GHCN temp adjustments on GHCN temp estimates?
See above: Nick Stokes (14:56:44).

Nick Stokes
December 21, 2009 5:14 pm

wobble (16:28:53) :
Well Willis disagrees with you. Now we’re back to 1st base, right?

Yes. But look at the Coonabarabran plot. It’s even jumpier than the Darwin plot, but heads downward. If there’s a case for saying Darwin is manual, even more so for Coona. But what malign hand would be pushing it down?

Paul Vaughan
December 21, 2009 5:20 pm

Nick Stokes, you seem to be arguing that it is ok if the Darwin record gets messed up, so long as the amount that it gets messed up is offset by adjustments (perhaps valid ones) for other stations.
You’ve certainly succeeded in making me very suspicious of the assumptions upon which this changepoint method is based. When I have time/funding, I’m now curious to take a painstaking look at the fundamental assumptions underpinning the adjustment paradigm. We need accurate records at the local level to figure out the complexities of natural climate variations. (See the works of Currie.)

Dave Springer
December 21, 2009 5:32 pm

Bill Illis 11:41:23
Nice graphs. They show it all. Interestingly, a small fudge factor of roughly ±0.25C is applied every year, superimposed over a large fixed fudge factor spanning 10- to 40-year intervals, where each interval steps up a half degree.
I was a prolific programmer for 30 years. If I wrote something that produced crap output like that adjusted temperature record and let it get released for use, I would have been mortified, horrified, wouldn’t have slept until I found the flaw, fixed it ASAP, offered my most sincere and abject apologies to each and every harmed user, and begged for forgiveness.
An eyeball glance at the raw data reveals just one anomaly near 1940 where the temperature drops half a degree and remains down half a degree for the next 60 years. Software should have easily caught and corrected that and then there’s nothing else to catch. 120 years of very consistent data with little if any overall trend either up or down. I understand in 1940 the station moved which explains the one-time drop. The adjustment, however it was done, introduced serious flaws into what was an almost pristine record.

Nick Stokes
December 21, 2009 5:36 pm

Geoff Sherrington (17:08:48) :
There’s nothing to indicate that your frost hollow story involves the Coona weather station. The town is by the Castlereagh river, but in a fairly flat region.
I suspect the reason for the big change is a possible merger with data from the Siding Springs Observatory. But that’s just a guess.

JJ
December 21, 2009 5:37 pm

Nick Stokes:
“That’s basically what Romanm’s plot does.”
No it doesn’t. That attempts to deal with the temporal issue, but in no way addresses the spatial. Already at 0.2C with just the temporal – a third of the whole value of the alleged ‘global warming’. What happens when those sites get area-weighted differentially? When they’re used to fill in missing data cells? Another third?
How much are the UHI and other unaccounted-for local anthropogenic effects? Another third?
Until you can produce a plot of final GHCN global temps that compares adjusted vs raw, you’re just handwaving.

wobble
December 21, 2009 5:49 pm

Nick Stokes (17:14:45):
“look at the Coonabarabran plot. It’s even jumpier than the Darwin plot, but heads downward. If there’s a case for saying Darwin is manual, even more so for Coona.”
Willis did quite a bit of work in an attempt to convince people that the Darwin adjustment is manual. He showed correlations to neighboring stations and described the distances involved with those further away.
You merely claim that Coonabarabran is “jumpier?”
Gee, who puts forth a better argument?

Willis Eschenbach
December 21, 2009 5:58 pm

JJ (16:45:42) :

Willis,
Help is being offered to you. Do not take offense when the holes in your argument are pointed out, and do not lash out. Fix the holes. It results in a stronger argument. …

JJ, as I said before, what you are offering is not help. It is handwaving and objections. I will gladly accept your help, but to date, you haven’t offered any. I’m not “lashing out”, I’m trying to stem your flood of well-meaning but meaningless platitudes. Let me give you some examples to clarify what I’m trying to say.
Despite trying, I am unable to create a reference series for Darwin because I can’t find relevant well-correlated neighboring stations. I know that. You know that. Help would be a list of appropriate stations, which you have not provided. Instead you keep saying that “neighboring” means over 1,500 km, well beyond the known limit of correlation between temperature stations. That’s no help at all, as any correlation beyond that distance would be by chance.
Despite trying, I have not been able to use the GHCN algorithm to twist nearly identical records in different directions. Help would be a demonstration from you that it can be done. Your statement that it can be done is meaningless. That’s just handwaving, and it is both useless and irritating. If you think it can be done, show us that it can be done.
I say that the GHCN algorithm cannot adjust the trend beyond that of its inputs. Why do I say that? Because I’ve read the description and I’ve tried it. Also, because the point of the algorithm is to make the target station agree with the inputs, not exceed them. You claim the algorithm can adjust it beyond the trend of the inputs. Help would be a demonstration that what you claim is true. Mindlessly repeating “yes it can, yes it can” is not help.
Finally, you keep bringing up Steve McIntyre. You say

Take a page from Steve Mc. He takes great pains to replicate results before he starts making claims about what someone else did.

As someone who has posted extensively on Climate Audit and who corresponds regularly with Steve, I can tell you that’s a fragrant vase of excrement. What Steve does is try to replicate results, as I have done. Often, he is in the situation that I am in … he can’t replicate the results. What he does then is post the fact that he can’t replicate the results. As I have done. Help would be you posting some way to actually replicate the results. Saying I should be like Steve is no help at all.
And no, I’m not using the Hokey Team excuse of “you go and publish before we can talk about this”. I’m asking for help, I don’t give a rat’s fundamental orifice if you publish. However, help does not mean saying “I really really really really think you are wrong about X”. That does nothing at all. If you think I am wrong, don’t talk about how right you know you are and provide endless justifications and reasons. Instead, demonstrate how right you are, show us how it can be done, bring in the citations that support your position, post a method, list the stations, anything but your unending litany of objections.
w.
PS – In researching this article I found this interesting report:

I’d also like to report that over a year ago, I wrote to GHCN asking for a copy of their adjustment code:

I’m interested in experimenting with your Station History Adjustment algorithm and would like to ensure that I can replicate an actual case before thinking about the interesting statistical issues. Methodological descriptions in academic articles are usually very time-consuming to try to replicate, if indeed they can be replicated at all. Usually it’s a lot faster to look at source code in order to clarify the many little decisions that need to be made in this sort of enterprise. In econometrics, it’s standard practice to archive code at the time of publication of an article – a practice that I’ve (by and large unsuccessfully) tried to encourage in climate science, but which may interest you. Would it be possible to send me the code for the existing and the forthcoming Station History adjustments. I’m interested in both USHCN and GHCN if possible.

To which I received the following reply from a GHCN employee:

You make an interesting point about archiving code, and you might be encouraged to hear that Configuration Management is an increasingly high priority here. Regarding your request — I’m not in a position to distribute any of the code because I have not personally written any homogeneity adjustment software. I also don’t know if there are any “rules” about distributing code, simply because it’s never come up with me before.

I never did receive any code from them.

So I’m not the only poor fool who can’t replicate the GHCN algorithm, despite your repeated claim that it can be done. However, I wish you the best of luck in your inquiries with GHCN, and I await your report on their response.

Willis Eschenbach
December 21, 2009 6:10 pm

Nick Stokes (17:14:45):

… look at the Coonabarabran plot. It’s even jumpier than the Darwin plot, but heads downward. If there’s a case for saying Darwin is manual, even more so for Coona.

Coonabarabran has no fewer than thirty stations within 1,200 km that are candidates for the earliest adjustment. Darwin has two …

Ryan Stephenson
December 21, 2009 6:28 pm

@JJ: You repeat your mistake of the first analysis by applying your own criteria for suitability rather than GHCN’s. GHCN does not appear to have any specific distance limit. GHCN appears only to be concerned that the stations are in the same ‘region’ (climatologically). You cannot say that GHCN did not apply its standard method.
Actually JJ, if you took the smallest trouble of looking at a map of Oz, you would see that it is rather difficult to move more than 1500 km from Darwin and still be in the same climatic region – it’s a big country, but not that big.
You strike me as a proponent of AGW posing as a sceptic who is using the rhetorical technique of challenging the minutiae of every assertion as a form of obfuscation. It is similar to another form of obfuscation used on blogs that runs like this: “before you challenge my simple assertion A, you must read obscure and difficult-to-obtain text B in great detail”. In your case you are pushing Willis to do more work to “prove” a point he has, in fact, already made very effectively. The condescending tone is no doubt intended to give the impression of great learning whilst not actually demonstrating anything other than fallacious argument. The fact that you clearly have no idea just how big Australia is exposes you as an ignoramus.
I accuse you of being a troll, and until you post something of real value to this blog then I suggest everyone treats you as a troll.

Dave Springer
December 21, 2009 6:40 pm

Nick,
The raw data can be adjusted so that it shows the famous hockey stick on a rolling average temperature plot and that wouldn’t change the trend line histogram.
Clearly the Darwin Zero data was badly misadjusted and in this case results in a drastic positive slope change on the trend line. The best case you can make from the trend line histogram is that the adjustments are equally flawed in both positive and negative slope changes. That’s not much of a defense for a broken procedure – saying it’s equally flawed in both directions so the flaws cancel out. It hardly inspires confidence that it isn’t broken in other ways. In any case the trend line histogram won’t reveal a temporal distribution problem where the adjustments are excessively negative in earlier records and equally excessive positive adjustments in the later records.

Ryan Stephenson
December 21, 2009 6:46 pm

The entire analysis used at Darwin is wrong-headed anyway. Given that we know what today’s temperatures are (we can just go out and measure them), we only need to know the rate of change of past temperatures to get an idea of future trends. Therefore we don’t need to adjust to find the absolute temperatures at any site. If a site has been moved, had its measurement kit changed, or whatever, then the discontinuities can be detected and the rate of change between those discontinuities can still be calculated without resorting to back-filling data with complicated computer algorithms that clearly can’t be relied upon not to do “weird stuff”.
The UHI is a different matter, but in each case it should be possible to measure the effect on a specific site by taking actual observations at the site and near the site but outside the urbanised area. The only places this would not be possible would be in very large cities.
The reason we are having to fiddle with all this data is that there is no good data. All the Stevenson screens were designed to monitor weather, not climate. They were put in places that were easily accessible to the chap who had to read the thermometers, places that then became urbanised and affected by heavy traffic over the last hundred years. They then had all the instrumentation updated to electronic thermometers with remote reading. Unfortunately this has given climatologists the excuse to fiddle with the data in the most bizarre ways possible to support their ridiculous claims, then dress it up as science too difficult for the layman to understand. It’s not difficult to understand – any high school kid could do a better job.

Dave Springer
December 21, 2009 6:56 pm

Given that much of the record at Darwin is the only record for a large geographical land area, doesn’t that give it greater weight in computing global average temperature? It surely should. Was Darwin cherry-picked for manual adjustment due to its larger weighting? Again, this wouldn’t show up in a trend line histogram of all stations in the worldwide record. The trend line histogram (I presume) treats all stations equally, whereas in the global average calculation each station must be weighted for how much of the earth’s surface it represents.

Dave F
December 21, 2009 7:38 pm

Anyway, Nick Stokes, I would still expect there to be a larger bias in the trends if the adjustments were not being overly managed. The fact that they are so close to 0 is not comforting. Like I said before, if the adjustments are made because of faulty siting, equipment failure, and so forth, I think it is very unlikely that they would work out to a practically nil contribution to the trend. That it does needs an explanation, because I seriously doubt the failures in need of adjustment work out to 0.0175C of difference. It honestly raises the question: why adjust the data in the first place? Shouldn’t unadjusted and adjusted look basically the same? Do you know if they do, Mr. Stokes?

Grizzled Wrenchbender
December 21, 2009 7:54 pm

Dave Springer: BINGO! Just as Antarctica came to be represented by one station on the Palmer peninsula, Darwin too has high weighting in the gridding step. This is a promising alley for more sleuthing by concerned people like Willis and Ed Smith:
Which temperature stations have large impacts on the global temperature anomaly? Of those stations, how many have “adjustments” that tend to exaggerate GW? I suspect that the answers to these questions may be enlightening. Alas, like Fermat, I’m too busy to do the research myself, but I hope Willis and Ed can make some headway.

DR
December 21, 2009 8:10 pm

Recall the CRUTAR index from Africa
http://climateaudit.org/2009/07/19/christy-et-al-2009-surface-temperature-variations-in-east-africa/
http://wattsupwiththat.com/2009/07/18/out-of-africa-a-new-paper-by-christy-on-surface-temperature-issues/
Why do Nick Stokes et al. continue regurgitating the same old arguments? It’s the same old problems that plague the surface station network.

Geoff Sherrington
December 21, 2009 8:16 pm

Nick Stokes,
Your response to my observation that the BOM declines to use pre-1950s data for Coonabarabran is?
If you go to St Kilda market on a Sunday you can pick up many sets of thimbles that are useful for hiding peas. Classically, the magician used 3. You might need a few dozen.
It’s not science to answer a comment with many points with one speculative answer to one selected point.
Yours was a very poor post.
