by Willis Eschenbach
People keep saying “Yes, the Climategate scientists behaved badly. But that doesn’t mean the data is bad. That doesn’t mean the earth is not warming.”

Darwin Airport – by Dominic Perrin via Panoramio
Let me take the second objection first. The earth has generally been warming since the Little Ice Age, around 1650, and there is general agreement that it has warmed since then (see e.g. Akasofu). Climategate doesn't affect that.
The other question, the integrity of the data, is different. People say "Yes, they destroyed emails, and hid from Freedom of Information Acts, and messed with proxies, and fought to keep other scientists' papers out of the journals … but that doesn't affect the data, the data is still good." Which sounds reasonable.
There are three main global temperature datasets. One is at the CRU, the Climatic Research Unit of the University of East Anglia, where we've been trying to get access to the raw numbers. One is at NOAA/GHCN, the Global Historical Climatology Network. The final one is at NASA/GISS, the Goddard Institute for Space Studies. The three groups take raw data and "homogenize" it to remove artifacts like the 2C jump that appears when a station is moved to a warmer location. The three global temperature records are usually called CRU, GISS, and GHCN. Both GISS and CRU, however, get almost all of their raw data from GHCN, and all three produce very similar global historical temperature records from it.
So I'm still on my multi-year quest to understand the climate data. You never know where this data chase will lead; this time, it has led me to Australia. I got to thinking about Professor Wibjorn Karlen's statement about Australia that I quoted here:
Another example is Australia. NASA [GHCN] only presents 3 stations covering the period 1897-1992. What kind of data is the IPCC Australia diagram based on?
If any trend it is a slight cooling. However, if a shorter period (1949-2005) is used, the temperature has increased substantially. The Australians have many stations and have published more detailed maps of changes and trends.
The folks at CRU told Wibjorn that he was just plain wrong. Here is the record they said is right, the one Wibjorn was talking about, Fig. 9.12 in the UN IPCC Fourth Assessment Report, showing Northern Australia:

Figure 1. Temperature trends and model results in Northern Australia. Black line is observations (from Fig. 9.12 of the UN IPCC Fourth Assessment Report). Covers the area from 110E to 155E and from 30S to 11S. Based on the CRU land temperature data.
One of the things revealed in the released CRU emails is that the CRU basically uses the Global Historical Climatology Network (GHCN) dataset for its raw data. So I looked at the GHCN dataset. There I find three stations in North Australia, as Wibjorn had said, and nine stations in all of Australia, that cover the period 1900-2000. Here is the average of the GHCN unadjusted data for those three Northern stations, from AIS:

Figure 2. GHCN Raw Data, All 100-yr stations in IPCC area above.
So once again Wibjorn is correct: this looks nothing like the corresponding IPCC temperature record for Australia. But it's too soon to tell. Professor Karlen is only showing 3 stations, and three is not a lot, but that's all of the century-long Australian records we have in the IPCC-specified region. OK, we've seen the longest station records, so let's throw more records into the mix. Here's every station in the UN IPCC-specified region with temperature records that extend up to the year 2000, no matter when they started: 30 stations in all.

Figure 3. GHCN Raw Data, All stations extending to 2000 in IPCC area above.
Still no similarity with IPCC. So I looked at every station in the area. That’s 222 stations. Here’s that result:

Figure 4. GHCN Raw Data, all 222 stations in IPCC area above.
So you can see why Wibjorn was concerned. This looks nothing like the UN IPCC data, which came from the CRU, which was based on the GHCN data. Why the difference?
The answer is, these graphs all use the raw GHCN data. But the IPCC uses the “adjusted” data. GHCN adjusts the data to remove what it calls “inhomogeneities”. So on a whim I thought I’d take a look at the first station on the list, Darwin Airport, so I could see what an inhomogeneity might look like when it was at home. And I could find out how large the GHCN adjustment for Darwin inhomogeneities was.
First, what is an “inhomogeneity”? I can do no better than quote from GHCN:
Most long-term climate stations have undergone changes that make a time series of their observations inhomogeneous. There are many causes for the discontinuities, including changes in instruments, shelters, the environment around the shelter, the location of the station, the time of observation, and the method used to calculate mean temperature. Often several of these occur at the same time, as is often the case with the introduction of automatic weather stations that is occurring in many parts of the world. Before one can reliably use such climate data for analysis of long-term climate change, adjustments are needed to compensate for the nonclimatic discontinuities.
That makes sense. The raw data will have jumps from station moves and the like. We don’t want to think it’s warming just because the thermometer was moved to a warmer location. Unpleasant as it may seem, we have to adjust for those as best we can.
I always like to start with the rawest data, so I can understand the adjustments. At Darwin there are five separate individual station records that are combined to make up the final Darwin record. These are the individual records of stations in the area, which are numbered from zero to four:

Figure 5. Five individual temperature records for Darwin, plus station count (green line). This raw data is downloaded from GISS, but GISS uses the GHCN raw data as the starting point for its analysis.
Darwin does have a few advantages over other stations with multiple records. There is a continuous record from 1941 to the present (Station 1). There is also a continuous record covering a century. Finally, the stations are in very close agreement over the entire period of the record; in fact, where multiple stations are in operation, they are so close that you can't see the records behind Station Zero.
This is an ideal station, because it also illustrates many of the problems with the raw temperature station data.
- There is no one record that covers the whole period.
- The shortest record is only nine years long.
- There are gaps of a month and more in almost all of the records.
- It looks like there are problems with the data at around 1941.
- Most of the datasets are missing months.
- For most of the period there are few nearby stations.
- There is no one year covered by all five records.
- The temperature dropped over a six year period, from a high in 1936 to a low in 1941. The station did move in 1941 … but what happened in the previous six years?
In resolving station records, it's a judgment call. First off, you have to decide if what you are looking at needs any changes at all. In Darwin's case, it's a close call. The record seems to be screwed up around 1941, but the drop starts well before the year of the move.
Also, although the 1941 temperature shift seems large, I see a similar-sized shift from 1992 to 1999. Looking at the whole picture, I think I'd vote to leave it as it is; that's always the best option when you don't have other evidence. First, do no harm.
However, there’s a case to be made for adjusting it, particularly given the 1941 station move. If I decided to adjust Darwin, I’d do it like this:

Figure 6. A possible adjustment for Darwin. Black line shows the total amount of the adjustment (right scale) and the timing of the change.
I shifted the pre-1941 data down by about 0.6C. We end up with little change end to end in my "adjusted" data (shown in red); it is neither warming nor cooling, though the adjustment reduces the apparent cooling in the raw data. Post-1941, where the other records overlap, they are very close, so I wouldn't adjust them in any way. Why should we adjust those, when they all show exactly the same thing?
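For the mechanically minded, the shift itself is trivial. Here is a minimal sketch in Python, assuming the record is an annual-mean temperature series indexed by year; the function name and the pandas representation are mine, not GHCN's:

```python
import pandas as pd

def step_adjust(series: pd.Series, break_year: int, shift_c: float) -> pd.Series:
    """Shift every value before break_year by shift_c degrees C.

    'series' is an annual-mean temperature series indexed by year.
    Data from break_year onward are left untouched.
    """
    adjusted = series.copy()
    adjusted.loc[adjusted.index < break_year] += shift_c
    return adjusted

# e.g., shift the pre-1941 Darwin data down by 0.6C:
# darwin_adjusted = step_adjust(darwin_raw, break_year=1941, shift_c=-0.6)
```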
OK, so that’s how I’d homogenize the data if I had to, but I vote against adjusting it at all. It only changes one station record (Darwin Zero), and the rest are left untouched.
Then I went to look at what happens when the GHCN removes the “in-homogeneities” to “adjust” the data. Of the five raw datasets, the GHCN discards two, likely because they are short and duplicate existing longer records. The three remaining records are first “homogenized” and then averaged to give the “GHCN Adjusted” temperature record for Darwin.
To my great surprise, here’s what I found. To explain the full effect, I am showing this with both datasets starting at the same point (rather than ending at the same point as they are often shown).

Figure 7. GHCN homogeneity adjustments to Darwin Airport combined record
YIKES! Before getting homogenized, temperatures in Darwin were falling at 0.7 Celsius per century … but after the homogenization, they were warming at 1.2 Celsius per century. And the adjustment they made was over two degrees per century … when those guys "adjust", they don't mess around. The adjustment has an odd shape, too, first going stepwise, then climbing to stop at roughly 2.4C.
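(For those following along at home: trend figures like these are just ordinary least-squares fits to the annual means, as in this generic sketch. This is my own back-of-envelope code, not GHCN's, and the numbers above are read off the graph, not produced by it.)

```python
import numpy as np

def trend_per_century(years, temps):
    """Least-squares slope of annual means, in degrees C per century.

    'years' and 'temps' are equal-length sequences with missing
    years already dropped. A generic linear fit, nothing more.
    """
    slope_per_year = np.polyfit(np.asarray(years, dtype=float),
                                np.asarray(temps, dtype=float), 1)[0]
    return slope_per_year * 100.0
```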
Of course, that led me to look at exactly how the GHCN "adjusts" the temperature data. Here's what they say:
GHCN temperature data include two different datasets: the original data and a homogeneity-adjusted dataset. All homogeneity testing was done on annual time series. The homogeneity-adjustment technique used two steps.
The first step was creating a homogeneous reference series for each station (Peterson and Easterling 1994). Building a completely homogeneous reference series using data with unknown inhomogeneities may be impossible, but we used several techniques to minimize any potential inhomogeneities in the reference series.
…
In creating each year’s first difference reference series, we used the five most highly correlated neighboring stations that had enough data to accurately model the candidate station.
…
The final technique we used to minimize inhomogeneities in the reference series used the mean of the central three values (of the five neighboring station values) to create the first difference reference series.
Fair enough, that all sounds good. They pick five neighboring stations, and average them. Then they compare the average to the station in question. If it looks wonky compared to the average of the reference five, they check any historical records for changes, and if necessary, they homogenize the poor data mercilessly. I have some problems with what they do to homogenize it, but that’s how they identify the inhomogeneous stations.
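To make sure I understood the recipe, here is a rough sketch of that first-difference reference series as I read Peterson and Easterling, with each station held as an annual-mean pandas Series indexed by year. This is my reconstruction, emphatically not their code:

```python
import numpy as np
import pandas as pd

def reference_first_difference(candidate, neighbours):
    """Build a first-difference reference series for one station.

    Rank the neighbours by how well their first differences correlate
    with the candidate's, keep the best five, then for each year take
    the mean of the central three of the five values (dropping the
    highest and lowest).
    """
    cand_fd = candidate.diff()
    fds = sorted((n.diff() for n in neighbours),
                 key=lambda fd: fd.corr(cand_fd), reverse=True)
    top5 = pd.concat(fds[:5], axis=1)

    def central_three_mean(row):
        vals = np.sort(row.dropna().to_numpy())
        return vals[1:-1].mean() if vals.size == 5 else np.nan

    return top5.apply(central_three_mean, axis=1)
```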
OK … but given the scarcity of stations in Australia, I wondered how they would find five “neighboring stations” in 1941 …
So I looked it up. The nearest station that covers the year 1941 is 500 km away from Darwin. Not only is it 500 km away, it is the only station within 750 km of Darwin that covers the 1941 time period. (It’s also a pub, Daly Waters Pub to be exact, but hey, it’s Australia, good on ya.) So there simply aren’t five stations to make a “reference series” out of to check the 1936-1941 drop at Darwin.
Intrigued by the curious shape of the average of the homogenized Darwin records, I then went to see how they had homogenized each of the individual station records. What made up that strange average shown in Fig. 7? I started at zero with the earliest record. Here is Station Zero at Darwin, showing the raw and the homogenized versions.

Figure 8. Darwin Zero Homogeneity Adjustments. Black line shows amount and timing of adjustments.
Yikes again, double yikes! What on earth justifies that adjustment? How can they do that? We have five different records covering Darwin from 1941 on. They all agree almost exactly. Why adjust them at all? They’ve just added a huge artificial totally imaginary trend to the last half of the raw data! Now it looks like the IPCC diagram in Figure 1, all right … but a six degree per century trend? And in the shape of a regular stepped pyramid climbing to heaven? What’s up with that?
Those, dear friends, are the clumsy fingerprints of someone messing with the data Egyptian style … they are indisputable evidence that the “homogenized” data has been changed to fit someone’s preconceptions about whether the earth is warming.
One thing is clear from this. People who say that “Climategate was only about scientists behaving badly, but the data is OK” are wrong. At least one part of the data is bad, too. The Smoking Gun for that statement is at Darwin Zero.
So once again, I’m left with an unsolved mystery. How and why did the GHCN “adjust” Darwin’s historical temperature to show radical warming? Why did they adjust it stepwise? Do Phil Jones and the CRU folks use the “adjusted” or the raw GHCN dataset? My guess is the adjusted one since it shows warming, but of course we still don’t know … because despite all of this, the CRU still hasn’t released the list of data that they actually use, just the station list.
Another odd fact: the GHCN adjusted Station 1 to match Darwin Zero's strange adjustment, but they left Station 2 (which covers much of the same period, and as per Fig. 5 is in excellent agreement with Station Zero and Station 1) totally untouched. They only homogenized two of the three. Then they averaged them.
That way, you get an average that looks kinda real, I guess; it "hides the decline".
Oh, and for what it’s worth, care to know the way that GISS deals with this problem? Well, they only use the Darwin data after 1963, a fine way of neatly avoiding the question … and also a fine way to throw away all of the inconveniently colder data prior to 1941. It’s likely a better choice than the GHCN monstrosity, but it’s a hard one to justify.
Now, I want to be clear here. The blatantly bogus GHCN adjustment for this one station does NOT mean that the earth is not warming. It also does NOT mean that the three records (CRU, GISS, and GHCN) are generally wrong either. This may be an isolated incident; we don't know. But every time the data gets revised and homogenized, the trends keep increasing. Now GISS does its own adjustments. However, as they keep telling us, they get the same answer as GHCN gets … which makes their numbers suspicious as well.
And CRU? Who knows what they use? We’re still waiting on that one, no data yet …
What this does show is that there is at least one temperature station where the trend has been artificially increased to give a false warming where the raw data shows cooling. In addition, the average raw data for Northern Australia is quite different from the adjusted, so there must be a number of … mmm … let me say “interesting” adjustments in Northern Australia other than just Darwin.
And with the Latin saying "Falsus in uno, falsus in omnibus" (false in one, false in all) as our guide, until all of the station "adjustments" are examined, those of CRU, GHCN, and GISS alike, we can't trust anyone using homogenized numbers.
Regards to all, keep fighting the good fight,
w.
FURTHER READING:
My previous post on this subject.
The late and much missed John Daly, irrepressible as always.
More on Darwin history, it wasn’t Stevenson Screens.
NOTE: Figures 7 and 8 updated to fix a typo in the titles. 8:30PM PST 12/8 – Anthony
From January 1967 to December 1973 inclusive, there were 2 sets of readings taken daily in Darwin and reported by the Bureau of Meteorology. I have converted them into monthly averages for both maximum and minimum temperatures.
One station was the BOM regional office, with lats and longs being shown as -12.4667, 130.8333, number 014161. The other was the airport, lats and longs being (now) -12.4239, 130.8925, number 014015. The second station is about 7 km NE of the first.
Here is a graph of the monthly readings:
http://i260.photobucket.com/albums/ii14/sherro_2008/Darwinoverlap.jpg?t=1260869317
Despite their proximity, some monthly averages differ by up to 2 degrees C. The correlation coefficients between similar pairs are typically around 0.98 or better, and the means of both over the period are equal.
However, the presence of monthly differences of up to 2 deg C goes with a systematic difference. The minima averages are 23.2 and 23.8, and the maxima averages are 32.1 and 32.6. Thus there is a systematic 0.4 degree difference, in both maxima and minima, between these 2 stations just 7 km apart over these 7 years. Remember, the world is making much of a global 0.7 degrees over the 20th century, however that mean is constituted.
This leads one to question if the correlation coefficient between the average temperature of 2 stations is an adequate criterion for levelling data or replacing missing data. The technique is widely used. Should it be?
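To see why a high correlation coefficient says nothing about a constant offset, here is a toy demonstration with purely synthetic numbers (not the BOM data): two series share the same seasonal signal, one sits 0.4 deg C above the other, and the correlation stays near 0.98 regardless.

```python
import numpy as np

rng = np.random.default_rng(42)

# Seven years of synthetic monthly means sharing one seasonal signal;
# site B reads a constant 0.4 deg C warmer than site A.
months = 84
signal = 28.0 + 3.0 * np.sin(2 * np.pi * np.arange(months) / 12)
site_a = signal + rng.normal(0.0, 0.3, months)
site_b = signal + 0.4 + rng.normal(0.0, 0.3, months)

print(np.corrcoef(site_a, site_b)[0, 1])  # ~0.98, "excellent" correlation
print(site_b.mean() - site_a.mean())      # ~0.4, the bias survives untouched
```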
Caveat: The available data do not allow confirmation that these actual sites were used. It is said that the actual position of the Stevenson screen at the airport was moved several times. The positions given above are those issued by the BOM for the present locations.
Comment on arm-waving: I have asked a number of times whether other readers know for certain that some adjustments were made to Darwin or world data, and whether certain data were truly raw. It is not good enough to receive an opinion in return. In my working career, if I had asked a colleague such questions and had received such answers, that employee would not have been in the office for much longer. We are at the stage of questioning fundamental assumptions, not at the stage of saying "I think that is right". We want to know if it is right.
Re: carrot eater (20:39:09) :
Geoff Sherrington (16:59:39) :
(to Nick) “I am asking if you have examined other long term data series to support this insinuation.”
Doesn’t his reply at (02:11:43) tell you? He looked at >40 year records.
………………………
Now, be a good chap and do the job properly. Look at >100 year records. Blind Freddie knows that there was a major break point in the data of many stations, about 50 years ago.
Just saw this.
Ryan Stephenson (07:06:07) :
“To claim that any scientifically useful data could be drawn from such a piece of equipment is itself fraudulent, and begs the question “Why?”.”
You use the word fraudulent way too easily. That isn’t ‘fraudulent’, at least, not that anybody has demonstrated.
But in any case, I agree that at some point, a data set just can’t be used anymore, if it has too many discontinuities. But one would have to come up with a statistically-justified objective measure of when to not use a data set. Perhaps this is why the GISS set leaves out the Darwin data before 1963; I’ve not looked into that. The GHCN has criteria for not using a station, as well.
The answer to ‘why’ is simply to have more spatial coverage at all times. Almost any weather station will have some discontinuity, whether it be an equipment change or time of observation change. You can’t go back in time and tell the people in 1880 where to put the weather stations, how to set them up and maintain them, and give them modern equipment as well; you have to do the best you can with what you have.
Steve Short (22:28:27) :
If the country wants to lie about what it’s giving to the GHCN, and give them somehow processed data while calling it raw, I suppose they could, if that’s what we’re coming to.
I’ll note that the GHCN’s methods seem able to sniff out some weirdnesses though; if you read the papers they give examples where the NOAA figured out mistakes in the data – people recording numbers in the wrong units, data misassigned to the wrong station, etc. Even the little bit of processing the countries are supposed to do – calculating the monthly means – can go wrong, and is one of the things the NOAA tries to sort out.
So if the countries give the NOAA some weirdly processed data, that itself could be detected by the NOAA methods.
“The only value of Willis’ work lies only in the fact that he has highlighted a highly complex situation in the treatment of JUST ONE station, Darwin by JUST ONE agency GHCN. But I respectfully suggests this at least shows just how convoluted and complex the situation can get.”
It shows you the difficulties that can arise at some stations. But that’s a far, far cry from what Willis was claiming it showed.
“In this unique context, endless blatherings of ‘don’t see the difficulty’ and ‘have you read every single paper’ references to authority just don’t resolve my ethical and scientific issues with this ‘mere exercise in data gathering’ – very sorry.”
I was answering specific questions – questions that can be addressed by actually reading the available literature, like ‘how do they do x’, or ‘why do they do x’, or ‘how do I know what GISS does, as opposed to what NCDC does’. If your personal point is simply that the whole thing can be difficult, then fine. But one has to move beyond that observation, and assess the uncertainty in the exercise.
Geoff Sherrington (01:49:39) :
Look at what they’ve reported so far. gg used essentially all stations for the entire lengths of their histories; nick stokes repeated that for only the longer stations (>40 years of adjusted data); nick stokes also did all data since 1940 for stations with at least 9 years in that period.
So if you’re interested only in what happened since 1950, does the last one answer your question?
gg and Nick Stokes have posted the code they’re using for this analysis. If you want to see every conceivable variation, at some point you can do it yourself.
One correction to something I said earlier: I suggested looking at GISS code for homogenisation. While that would help you with GISS’s methods, it wouldn’t help at all for GHCN because they use very different methods. GISS looks for UHI; GHCN uses the methods discussed here to try to detect all sorts of other changes.
Sorry for any misunderstanding. It’s unclear to me if any NCDC source code is available, though I still maintain that the method is described well enough that somebody could give it a try.
I do think it worth noting that the results are broadly similar either way. Different methods giving consistent results is a good thing; it shows you things aren’t overly sensitive to how you go about it.
carrot eater (04:27:24) :
“gg and Nick Stokes have posted the code they’re using for this analysis. ”
Could you direct me to these (code + analysis) please? Thanks.
AMac (12:57:01) :
Sorry for the delays, I’ve been traveling.
What the Aussies have done with Darwin temps is a large subject worthy of a thread of its own. However, in this thread I have not touched on what they have done in any way, so I don’t understand what the question has to do with me. What I have done is analyze the work done by GHCN, which is totally separate and different from what the Aussies have done. GHCN does not use local metadata, the Aussies do.
carrot eater (04:15:35) :
You are talking about two entirely different processes. One is the quality control process, designed to locate errors in the data given to them. The other is the homogenization process, which is designed (near as I can tell) to produce errors in the data. It is the second one of these that I have analyzed. The quality control procedures have nothing to do with what I have analyzed.
I’ll write a full post on this when I have time. I have been looking at the records of the nearest stations to Darwin, those which might have been used in the GHCN adjustment procedure. Here is the relevant part of that procedure:
As I mentioned above, the hard part is not to find five neighboring stations, particularly if you consider a station 1,500 km away as “neighboring”. The hard part is to find those stations whose first difference has an 0.80 correlation with the Darwin station first difference.
Accordingly, I’ve been looking at those stations. For the 1920 adjustment we need stations starting in 1915 or earlier. Here are all of the possible stations to be considered within 1,500 km of Darwin, along with the correlation of their first difference with the Darwin first difference:
WYNDHAM_(WYNDHAM_PORT) = -0.14
DERBY = -0.10
BURKETOWN = -0.40
CAMOOWEAL = -0.21
NORMANTON = 0.35
DONORS_HILL = 0.35
MT_ISA_AIRPORT = -0.20
ALICE_SPRINGS = 0.06
COEN_(POST_OFFICE) = -0.01
CROYDON = -0.23
CLONCURRY = -0.2
MUSGRAVE_STATION = -0.43
FAIRVIEW = -0.29
As you can see, not one of them is even remotely like Darwin. None of them are adequate for inclusion in a “first-difference reference time series”.
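For anyone who wants to check these figures, the calculation is simple enough. Here is a minimal sketch, assuming each station record is an annual-mean pandas Series indexed by year; the data loading is omitted, and the station variable names below are placeholders:

```python
import pandas as pd

def fd_correlation(candidate: pd.Series, neighbour: pd.Series) -> float:
    """Correlation of the two stations' year-to-year changes,
    computed over the years where both have data."""
    both = pd.concat([candidate.diff(), neighbour.diff()], axis=1).dropna()
    return both.iloc[:, 0].corr(both.iloc[:, 1])

# e.g., fd_correlation(darwin, wyndham) for each station in the table above
```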
So I hold to my claim, that this station was not adjusted by the listed GHCN procedure. This is verified by the number of single-year adjustments in the record (see Fig. 8). The described GHCN procedures do not adjust single years.
My thanks to all who have commented; the discussion continues.
w.
Willis
This is why I have been expending efforts to find the missing data from Oenpelli, i.e. starting 1920-1925, possibly as early as 1910, through 1963. Oenpelli is 230 km east of Darwin and (considering the prevailing winds at Darwin) if any site is going to give you a +0.80 or better correlation it might well be Oenpelli. I did my PhD (hydrogeology/geochemistry) in the Alligator Rivers region 1983-1988, including long annual visits to Oenpelli and the Nabarlek Mine nearby. The Oenpelli site had a Stevenson Screen when I first saw it in 1983. I feel sure I sighted the long-term temperature record (on paper) during that time, either at Oenpelli or more probably at the Nabarlek mine site. ABoM's online record is only from December 1963 although their metadata states from Jan 1957 (?). It is possible ABoM have the earlier Oenpelli data on paper in their archives, if comments by ABoM's Blair Trewin at Larvatus Prodeo are anything to go by, although why it wasn't uploaded long ago is anyone's guess.
Can any of you experts out there explain to me how the BOM did this to Halls Creek?
http://members.westnet.com.au/rippersc/hchomog.jpg
Willis Eschenbach (19:27:35) :
The Aussies have this to do with you: You were wondering what caused the GHCN to make the adjustments they made. To do that, you’d have to look at the neighboring stations and follow their method. But their method is meant to infer actual changes on the ground, so as a test of its ability to do that, you can ask the Aussies for the historical metadata. Turns out, it’s online, after all.
Willis Eschenbach (19:31:39) :
Re quality control vs homogenization: At that point, we weren’t talking about your post anymore; Steve Short was onto his own personal peeves. In any case, I’d say that the homogenization step will also root out major errors, in addition to the quality control. When they go to build the reference network, and the nearby stations don’t correlate at all, that would throw up a flag.
Willis Eschenbach (20:34:06) :
“The hard part is to find those stations whose first difference has an 0.80 correlation with the Darwin station first difference.”
You absolutely should have tried doing that before your initial post. I can’t imagine why you didn’t. It would have added some substance.
“So I hold to my claim, that this station was not adjusted by the listed GHCN procedure.”
Oh, come now. In some academic sense, I'm also curious to see how those (small) early year adjustments were made. I've more or less got your Fig 8 reproduced now; I just need to confirm how GHCN computes annual means.
But let’s be honest.
Nobody who read this post cared one bit about the tiny adjustments in the 1920s. It’s the “stepped pyramid climbing to heaven” that got everybody excited, and we all know it. So the first priority would be to look at neighboring stations in the more recent times; the distant past adjustments are secondary in importance.
Ripper (23:28:09) :
“Can any of you experts out there explain to me how the BOM did this to Halls Creek?
http://members.westnet.com.au/rippersc/hchomog.jpg”
Fairly blown away by that:
“Station 2011 min 1998 – 2998 = 0.451 deg/100yrs
Station 2011 max 1998 – 1951 = 0.0466 deg/100yrs”
stuff….
Again
carrot eater (04:27:24) :
“gg and Nick Stokes have posted the code they’re using for this analysis. ”
Could you direct me to these (code + analysis) please? Thanks.
Ripper,
I’ll see your chart and raise you $2.
Is this the same data as yours?
http://s260.photobucket.com/albums/ii14/sherro_2008/?action=view&current=HallsCreek.jpg
This was created from BOM data available in 2006, with the 2007 and 2008 years added from an online service.
I would like to be able to comment, but I do not know with confidence whether you have raw data to start with, or whether it has been adjusted before your earliest stage.
It is possible to comment that the minima are more often rising or falling than the maxima when one does this type of plotting. The maxima are steady in many places, especially near to the coast. What were your data sources?
Steve Short
You’ll find analysis and both codes on GG’s site.
Geoff Sherrington (00:40:48)
I used these
http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_display_type=dataGraph&p_stn_num=002011&p_nccObsCode=40&p_month=13
http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_display_type=dataGraph&p_stn_num=002012&p_nccObsCode=40&p_month=13
And this for the homogenised "quality" site:
http://www.bom.gov.au/cgi-bin/climate/hqsites/site_data.cgi?variable=maxT&area=wa&station=002012&period=annual&dtype=raw&ave_yr=T
I am currently trying to work out what they have done to Meekatharra & Marble Bar.
It appears that they used Wiluna as a reference station
http://maps.google.com.au/maps?f=q&source=s_q&hl=en&geocode=&q=26.6131%C2%B0+S+++118.5367%C2%B0+E&mrt=all&vps=1&sll=-26.58835,120.225234&sspn=0.003368,0.004812&ie=UTF8
Wiluna shire sealed all the streets in 1989-90; they were previously gravel.
http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_display_type=dataGraph&p_stn_num=013012&p_nccObsCode=36&p_month=01
Geoff Sherrington (01:49:39) :
OK, if you go to 80-year records after adjustment, there are 2074 such stations, and Darwin is now #31, ranked by change to slope caused by the GHCN adjustment.
This histogram shows what an outlier it is.
Nick Stokes (02:14:28) :
“Steve Short
You’ll find analysis and both codes on GG’s site.”
Thanks Nick. I hope to get into this over the Xmas/New year break.
Just one quick question jumps out at me as follows:
GG has calculated a summary stat by year (trend) over all stations and gets an average of +0.0175 C/decade. This is 'the trend of the adjustment' (in GG's own words), or in effect the aggregate bias of all adjustments.
Romanm has calculated a trend over time and weighted by the number of stations in each year (as you noted), this is +0.0170 C/decade.
The agreement is good between these two different approaches. Call the average aggregate bias +0.0172 C/decade?
However, correct me if I’m wrong, but isn’t the 20th century total global warming (from all sources) supposed to be ~0.65 C or 0.065 C/decade?
Isn’t a +0.0172 C/decade bias then a significant 26% of the supposed warming – reducing the unbiased warming to 0.048 C/decade?
And doesn’t this have a significant implication for an inferred CO2 sensitivity?
What have I missed (or misunderstood) here?
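Spelling my arithmetic out, in case I have slipped a digit (figures as quoted above, in C per decade; whether the bias can simply be subtracted is of course the question at issue):

```python
adjustment_bias = 0.0172  # C/decade, average of GG's and Romanm's figures
total_warming = 0.065     # C/decade, i.e. ~0.65 C over the 20th century

print(adjustment_bias / total_warming)  # ~0.26, about 26% of the warming
print(total_warming - adjustment_bias)  # ~0.048 C/decade left over
```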
Steve Short (21:02:49) :
One of the first constructions when my colleagues-to-be discovered Ranger One was to erect a proper weather station. There are records from 1970-71. I'm trying to get paper copies. It's 256 km by road to Darwin PO (2 hours' drive before the speed was restricted).
We share a problem in not being able to trace which version of which station was adjusted by whom, by how much, in which direction and why. That is why my patience is wearing thin with people who assert that such-and-such data are the same as such-and-such version 2, but not the same as version 3. There does not seem to be an adequate fixed frame of reference, for either the form of the first records or the date stamps on many of the later versions.
Nick Stokes (02:14:28) :
“Steve Short
You’ll find analysis and both codes on GG’s site.”
Yes, we know that there has been an analysis there for some days now, but we do not know that you could stand in a court with a Bible and swear that the versions had the origins you imagine. We are past the stage of analysis and saying "this version looks like that version". It's time for rigid documentation of the sources, is it not? And how does one do this when some USA records are adjusted many, many times?
Sorry. I put up the January link instead of the annual
http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_display_type=dataGraph&p_stn_num=013012&p_nccObsCode=36&p_month=13
Steve Short (03:13:07) :
“However, correct me if I’m wrong, but isn’t the 20th century total global warming (from all sources) supposed to be ~0.65 C or 0.065 C/decade?”
Taken over the whole century, you’ll get something like that. Taken since 1970, you’ll get something rather higher, ~ 0.15 to 0.2 C/dec or so. The 20th century wasn’t exactly linear. Although we should be a bit careful; the data being analysed here are land-only, and we’re comparing it to land+ocean. Without looking, I’m not sure how big a deal that is, but the oceans do slow things down.
“Isn’t a +0.0172 C/decade bias then a significant 26% of the supposed warming – reducing the unbiased warming to 0.048 C/decade?”
You’re a priori assuming that any net effect due to homogenisation is bad?
“And doesn’t this have a significant implication for an inferred CO2 sensitivity?”
Zero implication.