Darwin Zero Before and After

Guest Post by Willis Eschenbach

Recapping the story begun at WUWT here and continued at WUWT here: data from the temperature station Darwin Zero in northern Australia were found to have been radically adjusted, showing huge warming (red line, adjusted temperature) compared to the unadjusted data (blue line). The unadjusted data show that Darwin Zero was actually cooling over the period of record. Here is the adjustment to Darwin Zero:

Figure 1. The GHCN adjustments to the Darwin Zero temperature record.

Many people have written in with questions about my analysis. I thank everyone for their interest. I’m answering them as fast as I can. I cannot answer them all, so I am trying to pick the relevant ones. This post is to answer a few.

• First, there has been some confusion about the data. I am using solely GHCN numbers and methods. They will not match the GISS or the CRU or the HadCRUT numbers.

• Next, some people have said that these are not separate temperature stations. However, GHCN adjusts them and uses them as separate temperature stations, so you’ll have to take that question up with GHCN.

• Next, a number of people have claimed that the reason for the Darwin adjustment was that it is simply the result of the standard homogenization done by GHCN based on comparison with other neighboring station records. This homogenization procedure is described here (PDF).

While it sounds plausible that Darwin was adjusted as the GHCN claims, if that were the case the GHCN algorithm would have adjusted all five of the Darwin records in the same way. Instead, they have been adjusted differently (see below). This argues strongly that the adjustments were not made by the listed GHCN homogenization process. Any process that changed one of them would change all of them in the same way, as they are nearly identical.

• Next, there are no “neighboring records” for a number of the Darwin adjustments simply because in the early part of the century there were no suitable neighboring stations. It’s not enough to have a random reference station somewhere a thousand km away from Darwin in the middle of the desert. You can’t adjust Darwin based on that. The GHCN homogenization method requires five well correlated neighboring “reference stations” to work.

From the reference cited above:

“In creating each year’s first difference reference series, we used the five most highly correlated neighboring stations that had enough data to accurately model the candidate station.”

and  “Also, not all stations could be adjusted. Remote stations for which we could not produce an adequate reference series (the correlation between first-difference station time series and its reference time series must be 0.80 or greater) were not adjusted.”

As I mentioned in my original article, the hard part is not finding five neighboring stations, particularly if you consider a station 1,500 km away as “neighboring”. The hard part is finding similar stations within that distance. We need stations whose first difference has a correlation of 0.80 or better with the Darwin first difference.

(A “first difference” is a list of the changes from year to year of the data. For example, if the data is “31, 32, 33, 35, 34”, the first differences are “1, 1, 2, -1”. It is often useful to examine first differences rather than the actual data. See Peterson (PDF) for a discussion of the use of the “first-difference method” in climate science.)
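For the curious, the first-difference correlation test is easy to sketch in code. The station values below are invented purely for illustration (the real test uses long annual series), but the mechanics are the same:

```python
def first_difference(series):
    """Year-to-year changes: [31, 32, 33, 35, 34] -> [1, 1, 2, -1]."""
    return [b - a for a, b in zip(series, series[1:])]

def correlation(x, y):
    """Plain Pearson correlation of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Invented annual means for Darwin and a hypothetical candidate station.
darwin = [31, 32, 33, 35, 34]
candidate = [29, 31, 31, 34, 33]

r = correlation(first_difference(darwin), first_difference(candidate))
# GHCN requires r >= 0.80 before a station can enter the reference series.
print(round(r, 2))  # -> 0.87
```

The table of candidate stations above is the result of this kind of calculation applied to the real records; as shown, none of them come close to the 0.80 threshold.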

Accordingly, I’ve been looking at the candidate stations. For the 1920 adjustment we need stations starting in 1915 or earlier. Here are all of the candidate stations within 1,500 km of Darwin that start in 1915 or before, along with the correlation of their first difference with the Darwin first difference:

WYNDHAM_(WYNDHAM_PORT) = -0.14

DERBY = -0.10

BURKETOWN = -0.40

CAMOOWEAL = -0.21

NORMANTON = 0.35

DONORS_HILL = 0.35

MT_ISA_AIRPORT = -0.20

ALICE_SPRINGS = 0.06

COEN_(POST_OFFICE) = -0.01

CROYDON = -0.23

CLONCURRY = -0.2

MUSGRAVE_STATION = -0.43

FAIRVIEW = -0.29

As you can see, not one of them is even remotely like Darwin. None of them are adequate for inclusion in a “first-difference reference time series” according to the GHCN. The Economist excoriated me for not including Wyndham in the “neighboring stations” (I had overlooked it in the list). However, the problem is that even if we include Wyndham, Derby, and every other station out to 1,500 km, we still don’t have a single station with a high enough correlation to use the GHCN method for the 1920 adjustment.

Now I suppose you could argue that you can adjust 1920 Darwin records based on stations 2,000 km away, but even 1,500 km seems too far away to do a reliable job. So while it is theoretically possible that the GHCN described method was used on Darwin, you’ll be a long, long ways from Darwin before you find your five candidates.

• Next, the GHCN does use a good method to detect inhomogeneities. Here’s their description of their method.

To look for such a change point, a simple linear regression was fitted to the part of the difference series before the year being tested and another after the year being tested. This test is repeated for all years of the time series (with a minimum of 5 yr in each section), and the year with the lowest residual sum of the squares was considered the year with a potential discontinuity.

This is a valid method, so I applied it to the Darwin data itself. Here’s that result:

Figure 2. Possible inhomogeneities in the Darwin Zero record, as indicated by the GHCN algorithm.

As you can see by the upper thin red line, the method indicates a possible discontinuity centered at 1939. However, once that discontinuity is removed, the rest of the record does not indicate any discontinuity (thick red line). By contrast, the GHCN adjustments (see Fig. 1 above) do not correct any discontinuity around 1941. Instead, they posit discontinuities around 1920, 1930, 1950, 1960, and 1980 … doubtful.
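The change-point test quoted above is simple enough to sketch. The series below is artificial (a flat record with a step down in 1940), purely to show the mechanics of the search:

```python
def rss(xs, ys):
    """Residual sum of squares of a least-squares line through (xs, ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx if sxx else 0.0
    return sum((y - (my + slope * (x - mx))) ** 2 for x, y in zip(xs, ys))

def change_point(years, temps, min_seg=5):
    """Split year minimizing the combined RSS of the two segment fits."""
    best_rss, best_year = None, None
    for i in range(min_seg, len(years) - min_seg + 1):
        total = rss(years[:i], temps[:i]) + rss(years[i:], temps[i:])
        if best_rss is None or total < best_rss:
            best_rss, best_year = total, years[i]
    return best_year

# Artificial record: flat at 29.5 C through 1939, then a step down to 28.8 C.
years = list(range(1930, 1950))
temps = [29.5] * 10 + [28.8] * 10

print(change_point(years, temps))  # -> 1940, the year of the step
```

In the GHCN procedure this search runs on the difference series between the candidate station and its reference series; here it is applied directly to a single series, which is how I applied it to the Darwin data itself.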

• Finally, the main recurring question is, why do I think the adjustments were made manually rather than by the procedure described by the GHCN? There are a number of totally independent lines of evidence that all lead to my conclusion:

1. It is highly improbable that a station would suddenly start warming at 6 C per century for fifty years, no matter what legitimate adjustment method was used (see Fig. 1).

2. There are no neighboring stations that are sufficiently similar to the Darwin station to be used in the listed GHCN homogenization procedure (see above).

3. The Darwin Zero raw data does not contain visible inhomogeneities (as determined by the GHCN’s own algorithm) other than the 1936-1941 drop (see Fig. 2).

4. There are a number of adjustments to individual years. The listed GHCN method does not make individual year adjustments (see Fig. 1).

5. The “Before” and “After” pictures of the adjustment don’t make any sense at all. Here are those pictures:

Figure 3. Darwin station data before and after GHCN adjustments. Upper panel shows unadjusted Darwin data, lower panel shows the same data after adjustments.

Before the adjustments we had the station Darwin Zero (blue line with diamonds), along with four other nearby temperature records from Darwin. They all agreed with each other quite closely. Hardly a whisper of dissent among them, only small differences.

While GHCN were making the adjustment, two stations (Unadj 3 and 4, green and purple) vanished. I don’t know why. GHCN says they don’t use records under 20 years in length, which applies to Darwin 4, but Darwin 3 is twenty years in length. In any case, after removing those two series, the remaining three temperature records were then adjusted into submission.

In the “after” picture, Darwin Zero looks like it was adjusted with Sildenafil. Darwin 2 gets bent down almost to match Darwin Zero. Strangely, Darwin 1 is mostly untouched. It loses the low 1967 temperature, which seems odd, and the central section is moved up a little.

Call me crazy, but from where I stand, that looks like an un-adjustment of the data. They take five very similar datasets, throw two away, wrench the remainder apart, and then average them to get back to the “adjusted” value? Seems to me you’d be better off picking any one of the originals, because they all agree with each other.

The reason you adjust is because records don’t agree, not to make them disagree. And in particular, if you apply an adjustment algorithm to nearly identical datasets, the results should be nearly identical as well.

So that’s why I don’t believe the Darwin records were adjusted in the way that GHCN claims. I’m happy to be proven wrong, and I hope that someone from the GHCN shows up to post whatever method that they actually used, the method that could produce such an unusual result.

Until someone can point out that mystery method, however, I maintain that the Darwin Zero record was adjusted manually, and that it is not a coincidence that it shows (highly improbable) warming.




Gary Hladik
December 21, 2009 1:57 pm

Willis Eschenbach (12:16:55) : “I’m happy to be proven wrong because that’s how science progresses. Of course, I’m happier if I can prove someone else wrong …”
Wow, I gotta get meself inta this here science-type career thingy…no downside! 🙂
Congrats on another great article, Willis.

Willis Eschenbach
December 21, 2009 1:58 pm

JohnH (12:32:02) :

Latest from the UK MET office (to think my taxes directly pay for this lot.)
The HadCRUT database is UNDERSTATING temp increases, and the Russians’ recent complaint confirms the database is correct as it matches their lower temps.
Quote ‘The IEA’s output is consistent with HadCRUT as they both confirm the global warming signal in this region since 1950, which we see in many other variables and has been consistently attributed to human activities’
I want my money back !!!!!
http://www.metoffice.gov.uk/corporate/pressoffice/2009/pr20091218

You deserve your money back. I loved their explanation:

The lower figure is the ECMWF analysis which uses all available observations, including satellite and weather balloon records, synthesised in a physically- and meteorologically-consistent way, and the upper figure represents the same period from our HadCRUT record. The ECMWF analysis shows that in data-sparse regions such as Russia, Africa and Canada, warming over land is more extreme than in regions sampled by HadCRUT. If we take this into account, the last decade shows a global-mean trend of 0.1 °C to 0.2 °C per decade. We therefore infer with high confidence that the HadCRUT record is at the lower end of likely warming.

When someone says that the observations have been “synthesized in a physically- and meteorologically-consistent way”, wise men run.
To review the bidding, HadCRUT says there is no significant warming in the last decade. GISS says there’s no significant warming in the last decade. Both the RSS and the MSU satellites say no significant warming in the last decade.
However, the magic “synthesis” of the above data by ECMWF shows “the last decade shows a global-mean trend of 0.1 °C to 0.2 °C per decade”.
Next, the met office is not showing the HadCRUT data, which includes the ocean. They are only using land stations … what’s up with that?
Their conclusion is that the ECMWF “synthesis” shows that the Met Office is right.
My conclusion is that all that has been shown is that the ECMWF “synthesis” is bovine excrement … however, having said that, I suspect someone will show up soon to say the ECMWF “could easily” be right.
Finally, being the suspicious sod that I am, I went to the ECMWF site and found … model results. Which makes sense, since they are the European Centre for Medium-Range Weather Forecasts, which they do with models.
It is also not clear what data the ECMWF are using. The Met Office says that ECMWF are using “all available surface temperature measurements, together with data from sources such as satellites, radiosondes, ships and buoys.” However, their analysis does not show anything but grid boxes containing land stations … if they are using all available datasets, why is there nothing shown over the ocean?
In their catalog (175kb PDF) of products, they have:

Observation data
Data available from both the raw observation archive and the ECMWF operational feedback archive.
Marine observations
The subtypes available are surface BUOY, surface BATHY, surface TESAC and SYNOP ship. Oceanographic (sub-surface) data are available from DRIBU/BUOY, BATHY and TESAC.
Aircraft observations
The subtypes available are AMDAR, AIREP and ACAR.
Upper air soundings
The subtypes available are PILOT (Land), PILOT (Ship), TEMP (Land), TEMP (Ship), TEMP (mobile), TEMP (Drop), ROCOB (Land) and ROCOB (Ship).
Satellite data
Data from observational satellites that are agreed with satellite operators are available. The subtypes available are SATEM and SATOBS.

I don’t see any land station observational data there, but it is possible that they only give that to their friends … it certainly is not available on their data server, which only gives us plebians access to model results.
Finally, what “surface temperature measurements” are they using? GHCN raw? GHCN adjusted? Their own “adjustment” of the GHCN raw data? Anyone’s guess.

Janis B.
December 21, 2009 2:15 pm

Nick Stokes (13:47:31) :
“You’re all asking variants of the same question – doesn’t it matter when the adjustments were made? Yes, it does. What GG and I plotted in those histograms was the change to the trend over the whole time period of the station.”
Interesting… Take a look at: Musgrave (50194187000, and if I understood correctly, supposed to be -0,39C a decade, according to GG result.xls), adjusted graph here: http://www.appinsys.com/GlobalWarming/climgraph.aspx?pltparms=GHCNT100XJanDecI188020080600101AS50194187000x , years with less than 6 months of data excluded.
How does that “change to the trend” apply “over the whole time period of the station”?
“the measure is the linearised cumulative rate of change over the whole period.”
Don’t the different “whole periods” for different stations matter?

JJ
December 21, 2009 2:17 pm

Willis,
“If they were merely duplicates, the GHCN would not adjust them differently.”
Willis, they are duplicates. GHCN identifies them as such. Read the GHCN station metadata documentation, rather than making up stories based on suppositions.
You need to understand what that means, and how it affects how these data are ultimately used in the global anomaly calc.
“Now, do you see how condescending that sounds?”
That is not condescending. It is simply the fact of the matter. You need to know what a duplicate record is, and how it is ultimately used, before you can make claims about those duplicate records, and how they are used. You dont know those things. Find out.
“If you want people to take you seriously, talk to us like adults, not the way you talk to kids.”
I am talking to you as an adult. You are responding like a child. Stop being defensive when someone is trying to help you.
“If they were merely “duplicate records”, they’d average them to get the final record and be done with it. They don’t. If they were just “duplicate records”, they wouldn’t disagree 90% of the time. They do. If they were only “duplicate records”, they would not have been adjusted separately. They were.”
You’re making stuff up again. You dont need to do that. Read the GHCN documentation. If you find something that isnt well documented, ask. Dont try to reason what you dont know, from the limited amount that you do, when you can simply look it up, or ask.
“Man, you’ve been hanging around with too many “climate scientists”, you’re all about “could” and “might” and “may” and “could easily” and the like. Yes, the GHCN algorithm could do the things you claim … ”
Yes, the GHCN adjustment method can do those things I claim. And because it can, your claims that assume it cannot are false.
“TRY IT BEFORE YOU MAKE CLAIMS ABOUT IT!!”
Excuse me? Please take your own advice. TRY IT BEFORE YOU MAKE CLAIMS ABOUT IT!!
You have yet to replicate a single GHCN adjustment. Until you have done that, you cannot say what they did. This is my fundamental point to you. Take a page from Steve Mc. He takes great pains to replicate results before he starts making claims about what someone else did.
“I am sick of people making claims about what “could easily” and “might” and “may” happen.”
Then you need to stop making claims supported by further claims about what “cannot” happen. You claim that the GHCN adjustment cannot have been applied, because the GHCN method cannot adjust duplicate records differently. As you admit, it can. Your claims are false.
“The GHCN says “neighboring” stations. Perhaps more than 1,500 km away is “neighboring” on your planet. On this planet it is not.”
Again, making stuff up. GHCN apparently does not have a distance limit. You cannot just make one up, and then claim they didnt follow their own rules because they didnt follow yours.
“I thought you were following the story. I am clearly talking about the adjustment in 1920. Go back to the records. Which Darwin first difference am I talking about?”
Answer the questions. Which of the five duplicate records was your correlation calculated for? And where is the similar calc for the other four? The records with shorter time frames may be matched to reference stations with shorter timeframes. What does that do to the availability of ‘neighboring’ stations? What does that do for correlations? Hint: It is much easier to find high correlations between shorter segments …
“No. The assertion is that “neighboring” stations are used. I’ve checked the neighborhood out to 1,500 km. without finding a single suitable station.”
So? Limiting your search to 1500km is not the GHCN method. That is your method. If you want to claim that GHCN did not follow GHCN method, you have to check against the GHCN method. GHCN can apparently use any station within GHCN’s definition of ‘region’. What is that, for Australia?
“Take a deep breath and think about this for a minute, JJ. All the GHCN method can do is to adjust a station to match the trend in neighboring stations. It can’t create a trend out of nothing. For the adjustment to have been used, we would have to find 1) five well correlated neighboring sites that 2) increase at 6C per century. ”
No, you would have to find five well correlated neighboring sites that when homogenized into a reference series result in a potential adjustment of 6C per century.
Now see, you may be on to something there. I am not sure that it is true that the GHCN adjustment method limits the adjustment to the trend of the reference series. Or for that matter, if the trend in the reference series is limited to the trend in any one of the reference station’s homogenized data.
Hint: You arent sure either.
If you can:
a) prove (mathematically) that the GHCN adjustment method has those limits, and
b) prove that there are no stations that meet GHCN’s requirements for a reference station with that trend, THEN you will have proven that the GHCN methodology wasnt followed. And that would be something, wouldnt it?
And here’s the fun part: If you instead end up proving that the GHCN methodology CAN produce a 6C adjustment where no 6C trend exists in any of the reference stations … you’d have something there, too, now wouldnt you?
If you actually DO THE WORK to rigorously prove your point, you stand to come away with something valuable either way it turns out …
Sounds like a productive line of attack. Give it a go.
More later.

Mesa
December 21, 2009 2:19 pm

WE:
I work with first differenced data all the time in finance (returns actually) – just pointing out that a jump discontinuity in the undifferenced series will lead to one outsized difference and affect correlation measurements for small data sets….
I’d be curious to know if you think the extremely low correlations for the nearest neighboring stations are typical for the rest of the globe – ie any idea if its an Australia specific effect or are the distances just too great? It’s surprising given how they have been justifying using fewer and fewer thermometers globally…
Tks.
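Mesa’s caveat is easy to demonstrate with a quick sketch (all numbers invented): two series whose year-to-year wiggles match exactly, except for a single 2 C step in one of them, fall well below GHCN’s 0.80 threshold in first differences.

```python
def diff(s):
    """First differences of a series."""
    return [b - a for a, b in zip(s, s[1:])]

def corr(x, y):
    """Plain Pearson correlation of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Identical year-to-year wiggles, but "step" drops 2 C mid-record.
smooth = [30.0, 30.2, 30.1, 30.3, 30.2, 30.4, 30.3, 30.5]
step   = [30.0, 30.2, 30.1, 30.3, 28.3, 28.5, 28.4, 28.6]

print(round(corr(diff(smooth), diff(smooth)), 2))  # -> 1.0
print(round(corr(diff(smooth), diff(step)), 2))    # -> ~0.62
```

The one outsized difference created by the step dominates the calculation, just as Mesa says, and the effect is strongest in short records.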

wobble
December 21, 2009 2:19 pm

Nick Stokes (13:11:12) :
“”Well, I said above that of stations >80 years adjusted record, 17 were adjusted down by more than Darwin was adjusted up. “”
Were any of these adjustments made outside of the Peterson algorithm?
If Peterson was followed for all of these adjustments, then Darwin is unfairly offsetting them.

DaveC
December 21, 2009 2:24 pm

[not sure if this was meant to be fightin words or not, but it didn’t really add to the discussion. ~ ctm]

Since it’s really hard to get into a fight with a computer screen, my comment was meant to be nothing more than a poignant observation directed at someone displaying boorish behavior.
Reply: Food fights are discouraged. Looks like I made the right choice. ~ ctm

Nick Stokes
December 21, 2009 2:47 pm

Janis B. (14:15:46) :
Don’t the different “whole periods” for different stations matter?

Yes, they do. You’ve plotted a smooth for that blue Musgrave data. If you had worked out the adjustment difference for that period and fitted a line, and got the slope, that is the number that goes into the histogram. There are two time issues:
1. The short period means that a relatively small adjustment overall can make a big slope
2. Short periods could be recent or long ago.
That’s why I’ve tended to focus on periods >80 years. It’s rare to find such a record that doesn’t cover most of the 20th C. And you only get a big gradient if the adjustment is also large.

Nick Stokes
December 21, 2009 2:51 pm

wobble (14:19:56) :
Were any of these adjustments made outside of the Peterson algorithm?

I don’t know. I’m analysing the data as found on the posted file. But I doubt that they were made manually. You can’t maintain an ongoing record with many thousands of stations if you have to keep track of manual changes.

Dave Springer
December 21, 2009 2:54 pm

Nick Stokes (22:46:31)
Giorgio Gilestro’s “most cogent criticism” is bogus in a couple of ways.
First of all if there is any bias in upward/downward adjustments the bias should be downward if the claimed corrections for urban heat islands were really made. There are going to be very few cases of air conditioner vents or blacktop or buildings removed from near a temp station but very many cases of these things being added near one. Gilestro indeed finds a small bias but its magnitude is the opposite of what one would expect to see. Correction for urban heat islands not only appears to be absent but there appears to be some small correction applied for what I can only describe as urban cold islands.
Secondly he makes no analysis of the temporal distribution of the adjustments. This makes a HUGE difference in what the trend looks like. If we adjust temperatures upward in the second half of the 20th century and adjust them downward by a commensurate amount in the first half of the 20th century then by Gilestro’s analysis all is well because they cancel each other out. In fact they don’t cancel out in the trend line. It’ll look colder than it really was in the first half of the century and hotter than it actually was in the second half. This should be obvious to anyone who can apply a triple digit IQ to it for a few minutes at most.
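Dave Springer’s cancellation point can be made concrete with a toy series (numbers invented): adjustments that average exactly zero still impose a warming trend when the sign flips mid-record.

```python
def trend(xs, ys):
    """Least-squares slope of ys against xs, in degrees per year."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

years = list(range(1900, 2000))
raw = [14.0] * 100                      # flat record, no trend at all
adj = [-0.5] * 50 + [0.5] * 50          # down early, up late: mean is zero
adjusted = [t + a for t, a in zip(raw, adj)]

print(sum(adj) / len(adj))                     # 0.0 -- the adjustments "cancel"
print(round(trend(years, adjusted) * 100, 2))  # ~1.5 C per century of trend
```

A histogram of adjustment magnitudes would show a perfectly balanced distribution here, yet the adjusted record warms at about 1.5 C per century where the raw record is dead flat.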

Nick Stokes
December 21, 2009 2:56 pm

Dave F (13:51:22) :
Still not sure what you are trying to prove. If you want to say there is not much of an effect on the record, plot the GHCN unadj against the GHCN adj and see what the difference is.

That’s basically what Romanm’s plot does. It shows the difference, which reaches a maximum size of about 0.2C back in about 1900. Noticeable, but not huge.

JJ
December 21, 2009 3:06 pm

Nick Stokes:
“You can’t maintain an ongoing record with many thousands of stations if you have to keep track of manual changes.”
Of course you can. In fact that, is exactly what USHCN does.
And the USHCN adjustments, whatever the distribution of their +/- magnitudes is, add a strong, nearly hockey stick shaped warming trend into the final US temp estimate.
Where is the similar plot of GHCN gridded temps, comparing the net effect of raw vs adjusted data on the global estimate?

Willis Eschenbach
December 21, 2009 3:07 pm

JJ (14:17:03) :

Willis, …

I told you I’m not interested in your cavilling until you actually do something. You say that I haven’t been able to replicate the GHCN method. You are right, I haven’t been able to replicate the GHCN results. Why? Because despite an extensive search, I can’t find the stations to do so, nor have you. So sue me.
But on the other hand, you haven’t done a damn thing.
Come back when you have news from your enquiring emails to GHCN, or when you have identified possible stations for use in adjusting Darwin, or when you have shown that the GHCN method can take nearly identical records and twist them in different directions.
Because until then, you are just waving your hands and whining, and frankly, Scarlett, I don’t give a damn.

Dave Springer
December 21, 2009 3:08 pm

Nick Stokes 14:51:17
There are only two reasonable explanations for the adjustments seen at Darwin Zero. One is that a deliberate manual adjustment was applied to make the temperature trend appear to be an increasing one. The second is there’s a bug in the automated system that finds and applies the adjustments. In either case any honest researcher involved in the adjustment process should react to this by saying “This appears to be inconsistent with our published method of adjusting raw data. Let me figure out what happened and I’ll get back to you as soon as possible with an answer.” But instead we get nothing but rationalizations and denials. The so-called denialists are the warm mongers. The skeptics now appear to be the only ones NOT in denial.

Nick Stokes
December 21, 2009 3:14 pm

Dave Springer (14:54:13) :
First of all if there is any bias in upward/downward adjustments the bias should be downward if the claimed corrections for urban heat islands were really made.
Secondly he makes no analysis of the temporal distribution of the adjustments. This makes a HUGE difference in what the trend looks like. If we adjust temperatures upward in the second half of the 20th century and adjust them downward by a commensurate amount in the first half of the 20th century then by Gilestro’s analysis all is well because they cancel each other out. In fact they don’t cancel out in the trend line.

First, the GHCN corrections don’t claim to correct for UHI, and they don’t. They try to detect and correct discrete events.
Your second is the fallacy that I’ve been trying to explain over and over, “In fact they don’t cancel out in the trend line.” What GG is plotting is the distribution of trend line changes. The rate of change over time, for the whole period of the station, in C/decade. In your example, a big upward trend would go into GG’s histogram. No cancellation.
What do you think it is a distribution of?

John McDonald
December 21, 2009 3:17 pm

I have an odd thought about how it might have happened.
It looks sort of like a positive feedback loop.
Maybe something like this … they run the data looking for a discontinuity. Clearly one is found around 1939. However, one can assume either that the old data should be lower or that the new data should be higher … which raises the question of which one to change.
So it appears a 2nd-order filtering algorithm runs in reverse, starting in 1939 and ending in 1920, with a damping factor. It appears they got their – and + signs mixed up, so instead of lowering 1930 to 1939 they raised it, and instead of raising 1920 to 1930-ish they lowered it.
Next it appears they ran another algorithm going forward in time from 1939. This looks like classical positive feedback — again, probably a – and + sign mixed up somewhere, so instead of damping you get ramping. (If someone can find another station with a similar discontinuity and see if the same thing occurs, that would be great.)
IF my speculation is correct, then this is not human caused; it is poor software coding, a lack of a software check for positive feedback, and an algorithm that goes unstable when such a profound step change is encountered. Engineers of all types accidentally make oscillating and positive-feedback algorithms. Next time you hear a squeal at a concert from the speaker – think Darwin 1.
But who knows …

Dave F
December 21, 2009 3:20 pm

Nick Stokes (14:56:44) :
That’s basically what Romanm’s plot does. It shows the difference, which reaches a maximum size of about 0.2C back in about 1900. Noticeable, but not huge.
Still, what does the graph of raw data look like next to the graph of adjusted data? Is there a difference of 0.0175C, 0.2C, or some other difference? Isn’t this the best way to determine what effect the adjustments have, not making statistical sausage out of the data?

John McDonald
December 21, 2009 3:21 pm

For you filter guys out there … sort of like running a notch filter that goes unstable when you hit it with a big enough step change and starts positive feedback instead of filtering, because the Q is set too high.

Dave F
December 21, 2009 3:24 pm

Nick Stokes (14:56:44) :
Also for consideration as to why the distribution was the wrong approach to take, does not GHCN use grids, and wouldn’t the weight of the station also need to be taken into account for its use in the grid when an adjustment is applied?

Janis B.
December 21, 2009 3:29 pm

Nick Stokes:
“Don’t the different “whole periods” for different stations matter?
Yes, they do.”
But that does not get reflected in histogram – at least not in GG’s, or does it?
“Length” of Musgrave record is more than 80 years – 25 years of data between 1907-1992, it’s just that there is a huge gap of those -999.9 in the middle.

Nick Stokes
December 21, 2009 3:33 pm

JJ (15:06:05) :
“You can’t maintain an ongoing record with many thousands of stations if you have to keep track of manual changes.”
Of course you can. In fact that, is exactly what USHCN does.

Yes, but USHCN uses metadata files. GHCN explicitly doesn’t, because it says that the way they are kept is just too varied across countries. If GHCN wanted to compile a history of manual changes, they would have to in effect create their own metadata files.

Nick Stokes
December 21, 2009 3:36 pm

Janis B. (15:29:27) :
By length of record, I mean length after adjustment. This acts as a filter for interrupted sets, and would eliminate Musgrave.

wobble
December 21, 2009 3:37 pm

Nick Stokes (14:51:17) :
“”I doubt that they were made manually. You can’t maintain an ongoing record with many thousands of stations if you have to keep track of manual changes.””
So don’t you think it’s a problem that the cooling adjustments you highlighted were made algorithmically according to Peterson, yet Darwin (an offsetting adjustment) was not?
Doesn’t this lead you to believe that GG’s plot should have revealed a net cooling adjustment if the temperature record was being objectively corrected?

Nick Stokes
December 21, 2009 3:41 pm

Dave F (15:24:42) :
Also for consideration as to why the distribution was the wrong approach to take, does not GHCN use grids, and wouldn’t the weight of the station also need to be taken into account for its use in the grid when an adjustment is applied?

It has to be better to look at the distribution of all stations than to pick out individual (extreme) stations. The grid argument applies even more to just focussing on Darwin. Incorporating grid weights would be a good thing in calculating the derived mean. It doesn’t change the distribution.

Nick Stokes
December 21, 2009 3:43 pm

wobble (15:37:07) :
I don’t believe that any of these adjustments were done manually.
