The Smoking Gun At Darwin Zero

by Willis Eschenbach

People keep saying “Yes, the Climategate scientists behaved badly. But that doesn’t mean the data is bad. That doesn’t mean the earth is not warming.”

Darwin Airport – by Dominic Perrin via Panoramio

Let me start with the second objection first. The earth has generally been warming since the Little Ice Age, around 1650. There is general agreement that the earth has warmed since then. See, e.g., Akasofu. Climategate doesn’t affect that.

The other question, the integrity of the data, is different. People say “Yes, they destroyed emails, and hid from Freedom of Information Acts, and messed with proxies, and fought to keep other scientists’ papers out of the journals … but that doesn’t affect the data, the data is still good.” Which sounds reasonable.

There are three main global temperature datasets. One is at CRU, the Climatic Research Unit of the University of East Anglia, where we’ve been trying to get access to the raw numbers. One is at NOAA/GHCN, the Global Historical Climatology Network. The final one is at NASA/GISS, the Goddard Institute for Space Studies. The three groups take raw data and “homogenize” it to remove things like the 2C jump that appears when a station is moved to a warmer location. The three global temperature records are usually called CRU, GISS, and GHCN. Both GISS and CRU, however, get almost all of their raw data from GHCN, and all three produce very similar global historical temperature records from it.

So I’m still on my multi-year quest to understand the climate data. You never know where this data chase will lead. This time, it has led me to Australia. I got to thinking about Professor Wibjorn Karlen’s statement about Australia that I quoted here:

Another example is Australia. NASA [GHCN] only presents 3 stations covering the period 1897-1992. What kind of data is the IPCC Australia diagram based on?

If any trend it is a slight cooling. However, if a shorter period (1949-2005) is used, the temperature has increased substantially. The Australians have many stations and have published more detailed maps of changes and trends.

The folks at CRU told Wibjorn that he was just plain wrong. Here’s what they said is right: the record Wibjorn was talking about, Fig. 9.12 in the UN IPCC Fourth Assessment Report, showing Northern Australia:

Figure 1. Temperature trends and model results in Northern Australia. Black line is observations (from Fig. 9.12 of the UN IPCC Fourth Assessment Report). Covers the area from 110E to 155E and from 30S to 11S. Based on the CRU land temperature data.

One of the things revealed in the released CRU emails is that the CRU basically uses the Global Historical Climatology Network (GHCN) dataset for its raw data. So I looked at the GHCN dataset. There I find three stations in Northern Australia, as Wibjorn had said, and nine stations in all of Australia, that cover the period 1900-2000. Here is the average of the GHCN unadjusted data for those three Northern stations, from GISS:

Figure 2. GHCN Raw Data, All 100-yr stations in IPCC area above.

So once again Wibjorn is correct: this looks nothing like the corresponding IPCC temperature record for Australia. But it’s too soon to tell. Professor Karlen is only showing 3 stations. Three is not a lot of stations, but that’s all of the century-long Australian records we have in the IPCC-specified region. OK, we’ve seen the longest station records, so let’s throw more records into the mix. Here’s every station in the UN IPCC-specified region with temperature records that extend up to the year 2000, no matter when they started: 30 stations in all.

Figure 3. GHCN Raw Data, All stations extending to 2000 in IPCC area above.

Still no similarity with IPCC. So I looked at every station in the area. That’s 222 stations. Here’s that result:

Figure 4. GHCN Raw Data, all 222 stations in IPCC area above.
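As an aside, building this kind of multi-station average is straightforward to sketch in code. The following is a minimal illustration, not the exact code behind Figures 2-4: it assumes the station records have already been parsed into {station_id: {year: annual_mean_C}} dictionaries, and it averages anomalies (departures from each station’s own baseline) so that stations entering or leaving the record don’t create artificial jumps.

```python
# Minimal sketch of a regional average like Figures 2-4 (illustrative only).
# `stations` maps a station id to a {year: annual_mean_C} dictionary.
import statistics

def regional_average(stations, base_years=range(1961, 1991)):
    anomalies = {}
    for sid, series in stations.items():
        base = [series[y] for y in base_years if y in series]
        if not base:
            continue  # skip stations with no data in the baseline period
        baseline = statistics.mean(base)
        for year, temp in series.items():
            anomalies.setdefault(year, []).append(temp - baseline)
    # average across whatever stations report in each year
    return {year: statistics.mean(v) for year, v in sorted(anomalies.items())}
```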

So you can see why Wibjorn was concerned. This looks nothing like the UN IPCC data, which came from the CRU, which was based on the GHCN data. Why the difference?

The answer is, these graphs all use the raw GHCN data. But the IPCC uses the “adjusted” data. GHCN adjusts the data to remove what it calls “inhomogeneities”. So on a whim I thought I’d take a look at the first station on the list, Darwin Airport, so I could see what an inhomogeneity might look like when it was at home. And I could find out how large the GHCN adjustment for Darwin inhomogeneities was.
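Measuring the size of an adjustment is simple in principle: subtract the raw series from the adjusted series, year by year. Here is a hedged sketch, assuming GHCN v2-style fixed-width files (a 12-character station/duplicate ID, a 4-character year, then twelve 5-character monthly values in tenths of a degree C, with -9999 meaning missing); check the dataset’s README before trusting these column positions, and note the station ID below is illustrative.

```python
# Sketch: GHCN adjustment = adjusted annual mean minus raw annual mean.
def annual_means(path, station_id):
    means = {}
    with open(path) as f:
        for line in f:
            if not line.startswith(station_id):
                continue
            year = int(line[12:16])
            months = [int(line[16 + 5 * i : 21 + 5 * i]) for i in range(12)]
            valid = [m / 10.0 for m in months if m != -9999]
            if len(valid) == 12:  # keep complete years only
                means[year] = sum(valid) / 12.0
    return means

raw = annual_means("v2.mean", "501941200000")      # illustrative Darwin ID
adj = annual_means("v2.mean_adj", "501941200000")
adjustment = {y: adj[y] - raw[y] for y in sorted(set(raw) & set(adj))}
```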

First, what is an “inhomogeneity”? I can do no better than quote from GHCN:

Most long-term climate stations have undergone changes that make a time series of their observations inhomogeneous. There are many causes for the discontinuities, including changes in instruments, shelters, the environment around the shelter, the location of the station, the time of observation, and the method used to calculate mean temperature. Often several of these occur at the same time, as is often the case with the introduction of automatic weather stations that is occurring in many parts of the world. Before one can reliably use such climate data for analysis of long-term climate change, adjustments are needed to compensate for the nonclimatic discontinuities.

That makes sense. The raw data will have jumps from station moves and the like. We don’t want to think it’s warming just because the thermometer was moved to a warmer location. Unpleasant as it may seem, we have to adjust for those as best we can.
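To see why such adjustments matter, consider a purely synthetic example: a station whose true temperature never changes, but which gets moved once, adding a 2C jump. Fed into an ordinary least-squares fit, the jump alone manufactures a trend (a sketch with made-up numbers, not real Darwin data):

```python
# Synthetic example: a flat series with one +2C station-move jump in 1950.
years = list(range(1900, 2000))
temps = [25.0 + (2.0 if y >= 1950 else 0.0) for y in years]

# ordinary least-squares slope
mx = sum(years) / len(years)
my = sum(temps) / len(temps)
slope = (sum((x - mx) * (t - my) for x, t in zip(years, temps))
         / sum((x - mx) ** 2 for x in years))
print(f"spurious trend: {slope * 100:.1f}C per century")  # ~3.0, from the jump alone
```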

I always like to start with the rawest data, so I can understand the adjustments. At Darwin there are five separate individual station records that are combined to make up the final Darwin record. These are the individual records of stations in the area, which are numbered from zero to four:

Figure 5. Five individual temperature records for Darwin, plus station count (green line). This raw data is downloaded from GISS, but GISS uses the GHCN raw data as the starting point for its analysis.

Darwin does have a few advantages over other stations with multiple records. There is a continuous record from 1941 to the present (Station 1). There is also a continuous record covering a century. Finally, the stations are in very close agreement over the entire period of the record. In fact, where there are multiple stations in operation they are so close that you can’t see the records behind Station Zero.

This is an ideal station, because it also illustrates many of the problems with the raw temperature station data.

  • There is no one record that covers the whole period.
  • The shortest record is only nine years long.
  • There are gaps of a month and more in almost all of the records.
  • It looks like there are problems with the data at around 1941.
  • Most of the datasets are missing months.
  • For most of the period there are few nearby stations.
  • There is no one year covered by all five records.
  • The temperature dropped over a six year period, from a high in 1936 to a low in 1941. The station did move in 1941 … but what happened in the previous six years?

In resolving station records, it’s a judgment call. First off, you have to decide whether what you are looking at needs any changes at all. In Darwin’s case, it’s a close call. The record seems to be screwed up around 1941, but the trouble starts before the year of the move.

Also, although the 1941 temperature shift seems large, I see a similar-sized shift from 1992 to 1999. Looking at the whole picture, I think I’d vote to leave it as it is; that’s always the best option when you don’t have other evidence. First, do no harm.

However, there’s a case to be made for adjusting it, particularly given the 1941 station move. If I decided to adjust Darwin, I’d do it like this:

Figure 6. A possible adjustment for Darwin. Black line shows the total amount of the adjustment, on the right scale, and shows the timing of the change.

I shifted the pre-1941 data down by about 0.6C. We end up with little change end to end in my “adjusted” data (shown in red); it’s neither warming nor cooling. However, it reduces the apparent cooling in the raw data. Post-1941, where the other records overlap, they are very close, so I wouldn’t adjust them in any way. Why should we adjust those, they all show exactly the same thing.
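In code, this proposed adjustment is nothing more than a single documented step: everything before the 1941 move shifts down by roughly 0.6C, and everything after is left alone. A sketch, where `raw` is assumed to be a {year: annual_mean_C} dictionary for Darwin Zero:

```python
# One step adjustment at the 1941 station move; nothing else is touched.
def step_adjust(raw, move_year=1941, offset=-0.6):
    return {year: temp + offset if year < move_year else temp
            for year, temp in raw.items()}
```

The design choice here is deliberate: one step, tied to one documented event, rather than a ramp of cumulative changes.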

OK, so that’s how I’d homogenize the data if I had to, but I vote against adjusting it at all. My proposed adjustment changes only one station record (Darwin Zero) and leaves the rest untouched.

Then I went to look at what happens when the GHCN removes the “inhomogeneities” to “adjust” the data. Of the five raw datasets, the GHCN discards two, likely because they are short and duplicate existing longer records. The three remaining records are first “homogenized” and then averaged to give the “GHCN Adjusted” temperature record for Darwin.

To my great surprise, here’s what I found. To explain the full effect, I am showing this with both datasets starting at the same point (rather than ending at the same point as they are often shown).

Figure 7. GHCN homogeneity adjustments to Darwin Airport combined record

YIKES! Before getting homogenized, temperatures in Darwin were falling at 0.7C per century … but after the homogenization, they were warming at 1.2C per century. And the adjustment that they made was over two degrees per century … when those guys “adjust”, they don’t mess around. And the adjustment is an odd shape, first going stepwise, then climbing in rough steps to stop at 2.4C.
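The per-century figures above are just least-squares slopes over the annual series, scaled from degrees per year to degrees per century. A sketch for anyone who wants to check such numbers, assuming `raw` and `adjusted` are {year: annual_mean_C} dictionaries for the combined Darwin record:

```python
# Least-squares trend of an annual series, in degrees C per century.
def trend_per_century(series):
    years = sorted(series)
    mx = sum(years) / len(years)
    my = sum(series[y] for y in years) / len(years)
    slope = (sum((y - mx) * (series[y] - my) for y in years)
             / sum((y - mx) ** 2 for y in years))
    return slope * 100.0

# print(trend_per_century(raw), trend_per_century(adjusted))  # ~ -0.7 vs ~ +1.2
```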

Of course, that led me to look at exactly how the GHCN “adjusts” the temperature data. Here’s what they say:

GHCN temperature data include two different datasets: the original data and a homogeneity-adjusted dataset. All homogeneity testing was done on annual time series. The homogeneity-adjustment technique used two steps.

The first step was creating a homogeneous reference series for each station (Peterson and Easterling 1994). Building a completely homogeneous reference series using data with unknown inhomogeneities may be impossible, but we used several techniques to minimize any potential inhomogeneities in the reference series.

In creating each year’s first difference reference series, we used the five most highly correlated neighboring stations that had enough data to accurately model the candidate station.

The final technique we used to minimize inhomogeneities in the reference series used the mean of the central three values (of the five neighboring station values) to create the first difference reference series.

Fair enough, that all sounds good. They pick five neighboring stations, and average them. Then they compare the average to the station in question. If it looks wonky compared to the average of the reference five, they check any historical records for changes, and if necessary, they homogenize the poor data mercilessly. I have some problems with what they do to homogenize it, but that’s how they identify the inhomogeneous stations.
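The quoted description translates into a fairly simple algorithm. Here is my reading of it as a sketch, not GHCN’s actual code: correlate the candidate’s year-to-year first differences with each neighbor’s, keep the five best-correlated neighbors, and for each year average the central three of their five first-difference values. (`statistics.correlation` needs Python 3.10 or later.)

```python
import statistics

def first_diff(series):
    # year-to-year first differences of a {year: temp} series
    years = sorted(series)
    return {y: series[y] - series[p] for p, y in zip(years, years[1:])}

def reference_first_diffs(candidate, neighbors):
    cand_fd = first_diff(candidate)

    def corr(station):
        fd = first_diff(station)
        common = sorted(set(fd) & set(cand_fd))
        if len(common) < 10:
            return -1.0  # too little overlap to trust the correlation
        return statistics.correlation([cand_fd[y] for y in common],
                                      [fd[y] for y in common])

    best = sorted(neighbors, key=corr, reverse=True)[:5]  # five best neighbors
    best_fd = [first_diff(st) for st in best]
    ref = {}
    for y in cand_fd:
        vals = sorted(fd[y] for fd in best_fd if y in fd)
        if len(vals) == 5:
            ref[y] = statistics.mean(vals[1:4])  # mean of the central three
    return ref
```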

OK … but given the scarcity of stations in Australia, I wondered how they would find five “neighboring stations” in 1941 …

So I looked it up. The nearest station that covers the year 1941 is 500 km away from Darwin. Not only is it 500 km away, it is the only station within 750 km of Darwin that covers the 1941 time period. (It’s also a pub, Daly Waters Pub to be exact, but hey, it’s Australia, good on ya.) So there simply aren’t five stations to make a “reference series” out of to check the 1936-1941 drop at Darwin.
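“Within 750 km” is easy to check from station latitudes and longitudes with the haversine great-circle formula. A sketch, with approximate coordinates (Darwin is near 12.4S 130.9E, Daly Waters near 16.3S 133.4E):

```python
from math import radians, sin, cos, asin, sqrt

def km_between(lat1, lon1, lat2, lon2):
    # great-circle distance via the haversine formula, mean Earth radius 6371 km
    rlat1, rlat2 = radians(lat1), radians(lat2)
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(rlat1) * cos(rlat2) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

print(km_between(-12.42, 130.89, -16.25, 133.37))  # roughly 500 km
```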

Intrigued by the curious shape of the average of the homogenized Darwin records, I then went to see how they had homogenized each of the individual station records. What made up that strange average shown in Fig. 7? I started at zero with the earliest record. Here is Station Zero at Darwin, showing the raw and the homogenized versions.

Figure 8. Darwin Zero Homogeneity Adjustments. Black line shows amount and timing of adjustments.

Yikes again, double yikes! What on earth justifies that adjustment? How can they do that? We have five different records covering Darwin from 1941 on. They all agree almost exactly. Why adjust them at all? They’ve just added a huge artificial totally imaginary trend to the last half of the raw data! Now it looks like the IPCC diagram in Figure 1, all right … but a six degree per century trend? And in the shape of a regular stepped pyramid climbing to heaven? What’s up with that?

Those, dear friends, are the clumsy fingerprints of someone messing with the data Egyptian style … they are indisputable evidence that the “homogenized” data has been changed to fit someone’s preconceptions about whether the earth is warming.

One thing is clear from this. People who say that “Climategate was only about scientists behaving badly, but the data is OK” are wrong. At least one part of the data is bad, too. The Smoking Gun for that statement is at Darwin Zero.

So once again, I’m left with an unsolved mystery. How and why did the GHCN “adjust” Darwin’s historical temperature to show radical warming? Why did they adjust it stepwise? Do Phil Jones and the CRU folks use the “adjusted” or the raw GHCN dataset? My guess is the adjusted one since it shows warming, but of course we still don’t know … because despite all of this, the CRU still hasn’t released the list of data that they actually use, just the station list.

Another odd fact: the GHCN adjusted Station 1 to match Darwin Zero’s strange adjustment, but they left Station 2 (which covers much of the same period and, as per Fig. 5, is in excellent agreement with Station Zero and Station 1) totally untouched. They only homogenized two of the three. Then they averaged them.

That way, you get an average that looks kinda real, I guess, it “hides the decline”.

Oh, and for what it’s worth, care to know the way that GISS deals with this problem? Well, they only use the Darwin data after 1963, a fine way of neatly avoiding the question … and also a fine way to throw away all of the inconveniently colder data prior to 1941. It’s likely a better choice than the GHCN monstrosity, but it’s a hard one to justify.

Now, I want to be clear here. The blatantly bogus GHCN adjustment for this one station does NOT mean that the earth is not warming. It also does NOT mean that the three records (CRU, GISS, and GHCN) are generally wrong either. This may be an isolated incident; we don’t know. But every time the data gets revised and homogenized, the trends keep increasing. Now GISS does its own adjustments. However, as they keep telling us, they get the same answer as GHCN gets … which makes their numbers suspicious as well.

And CRU? Who knows what they use? We’re still waiting on that one, no data yet …

What this does show is that there is at least one temperature station where the trend has been artificially increased to give a false warming where the raw data shows cooling. In addition, the average raw data for Northern Australia is quite different from the adjusted, so there must be a number of … mmm … let me say “interesting” adjustments in Northern Australia other than just Darwin.

And with the Latin saying “Falsus in uno, falsus in omnibus” (false in one, false in all) as our guide, until all of the station “adjustments” are examined, adjustments of CRU, GHCN, and GISS alike, we can’t trust anyone using homogenized numbers.

Regards to all, keep fighting the good fight,

w.

FURTHER READING:

My previous post on this subject.

The late and much missed John Daly, irrepressible as always.

More on Darwin history, it wasn’t Stevenson Screens.

NOTE: Figures 7 and 8 updated to fix a typo in the titles. 8:30PM PST 12/8 – Anthony

909 Comments
mitchell porter
December 10, 2009 12:21 am

We have to get to the bottom of this “Yambagate” affair.

Alan P
December 10, 2009 1:34 am

So, you guys all know better than thousands of actual scientists? Global warming is actually not happening?
So…
Why are all those glaciers melting?
Why have the sea levels risen?
What has happened to the billions of tons of CO2 that HAVE been pumped into the atmosphere by human activities?
Would you dispute the science that shows CO2 has a warming effect in the atmosphere?
Would you dispute that there is now significantly more CO2 in the atmosphere than prior to the industrial revolution?
So… according to you [snip] types, somehow, maybe by magic, the CO2 that humans put into the atmosphere doesn’t add any additional warming.
Well, that’s okay then. Let’s all just assume everything is OK, and not take any action.
Brilliant. I feel much better with you geniuses around.

Morgan T
December 10, 2009 3:16 am

Alan P
Glaciers: Some are retreating, some are growing, some are not moving at all, but in general we know that glaciers have been retreating since around 1870. (Before that they were growing due to the huge impact from Krakatau.)
Sea levels have been rising for a very long time; there is nothing new in that, but since 2006 there has been no change.
The CO2 will stay for a while in the atmosphere; for how long depends on who you ask.
There is more CO2 in the atmosphere now than in pre-industrial times, no one denies that, but at this level (380 ppm) an increase of CO2 makes a very limited impact on the greenhouse effect (do not even try to dispute that, it has been verified the correct scientific way).

carrot eater
December 10, 2009 5:17 am

Willis,
Further, in Fig 5, you plot the various records that come up for Darwin Airport in the GISS page, looking at raw data.
Simply looking at them, it looks to me that in the periods of overlap, those aren’t independent measurements at all, but that they’re duplicates. Did you think to ask somebody to clarify that first, before making conclusions like: “Why should we adjust those, they all show exactly the same thing.”
Meanwhile, in looking for neighboring stations, I don’t think we need to limit ourselves to ones that go back to 1941; we can still learn something from ones that start after that.
I find it interesting that the GISS homogenised set doesn’t pick up until 1963. If nothing else, we might learn something here about the difference between GISS and GHCN’s methods for homogenisation.
Finally, I’d still like to see you repeat the analysis with an adjustment for the station move in 1941; not doing so seems really questionable.

David
December 10, 2009 5:46 am

I can’t believe this. I can’t believe that the science has been so corrupted.
I wish the ABC would finally do their job and begin reporting on serious issues like the global warming alarmist fraud which is present and persistent in the scientific community at the moment.
Anyone want to start a petition?

Neo
December 10, 2009 6:12 am

Alan P
So, you guys all know better than thousands of actual scientists?
Some of us are actual scientists and engineers with decades of experience, so don’t give us the PR line about the “””SCIENTISTS”””
The questions raised by the “skeptics” rarely are … does “climate change”, “climate warming” or “climate cooling” exist? … they all do, from time to time.
The real questions to be asked are: how great are the changes? … are they within normal variability? … does man’s contribution really change things that much? … why isn’t the IPCC including the effects of clouds in their models? (it’s hard) … are clouds affected by cosmic rays? (possibly yes) … why is solar variability, including the solar effects on cosmic rays, downplayed by the IPCC? … can we mitigate it more cheaply? … is it worth the price? … can we live with it?
Some of these and other questions have difficult answers, so let’s start now to answer them. The science is not “settled.” Anyone who tells you it is “settled” is a fool, an idiot, or a politician looking to perform a “wallet-ectomy” on you.
Not every problem demands a solution. For instance, there is a chance that the Earth could be hit by a massive meteor that could end civilization in minutes; do you see trillions of dollars being spent on a solution to that problem? No, because it would cost too much given the likelihood. A large solar flare could fry the Earth; is there a solution to that? No, we will all fry if that happens.
Is it worth $20 or $30 trillion to possibly reduce the global temperature by 0.1C when it might naturally go up by 1.0C or more … or might not go up at all? Could that money be better spent mitigating the result of the difference, or the whole thing? Maybe an alternative would leave some money to go toward ending suffering and reducing world hunger. Now there’s a happy thought.

Slartibartfast
December 10, 2009 6:34 am

Global warming is actually not happening?

Who’s saying that, besides you?

Brian D Finch
December 10, 2009 6:39 am

Willis, here is what seems to be another smoking gun. There used to be two stations at Halls Creek. The first, at Old Halls Creek [Latitude: 18.25° S; Longitude: 127.78° E; Elevation: 360 m], operated until the 20th May 1999. Yet the BOM only graphs the data up until 1951 – thereby jettisoning nearly half a century of data. Nevertheless, the data they use shows a mean maximum temperature flatlining at approx 33.4 degrees C. This may be seen at: http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_display_type=dataGraph&p_stn_num=002011&p_nccObsCode=36&p_month=13
The second station, at (New) Halls Creek Airport [Latitude: 18.23° S; Longitude: 127.66° E; Elevation: 422 m] opened in 1944 and is presumably running today. The data produced shows a mean maximum temperature flatlining at approx 33.7 degrees C. This may be seen at: http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_display_type=dataGraph&p_stn_num=002012&p_nccObsCode=36&p_month=13
Now, here we have the data from two stations separated by about 15 kilometres and 62 metres of elevation. That the higher of the two shows the higher temperature may have something to do with its site (airports tend to have lots of lovely heat-absorbing tarmac/asphalt). Nevertheless, they both show flatlining annual maximum mean temperatures. One would therefore expect a homogenised graph of the data from the two to reflect this, i.e. to flatline at about 33.5-33.6 degrees C.
However, this is precisely what we do not find. Instead, the data from the two have been crudely combined to produce an apparent century long annual mean maximum temperature rise from about 32.2 degrees C in 1900 to about 33.8 degrees C in 2009. This may be seen at: http://reg.bom.gov.au/cgi-bin/climate/hqsites/site_data.cgi?variable=maxT&area=aus&station=002012&dtype=raw&period=annual&ave_yr=T
As the late Bernard Levin might have said: ‘Not only is this nonsense, it is nonsense on stilts.’

Basil
Editor
December 10, 2009 6:42 am

JJ (23:13:30) :
I’m generally in agreement with your take on this. I do not know if Darwin is indicative of a larger problem or not. I do think it indicates the need to check further. With the release of a “subset” of the HadCRUT3 data, I downloaded it, and looked for a station near me. That ended up being Nashville, TN. I then pulled the GISS data, to compare it. In recent years, the data are exactly the same. But at some point in the past, the GISS becomes consistently 1 degree cooler. I’ve been too busy to nail it down — hopefully this weekend — but it has me suspicious of how GISS adjusted its data so that max US temps were no longer in the 1930’s, so I’m wondering if there is a pattern here.
One thing that has occurred to me, and which I may write up as a post for Anthony’s consideration, is that all these “step function” adjustments we see, which have the potential to seriously distort estimated trends, are much less an issue when the data are analyzed as first (or seasonal) differences. And the documentation for the homogeneity adjustments says — if I’ve read it correctly — that they are based on correlating differences with “nearby” stations, which strikes me as the correct thing to do — if we’re going to do it, which is another matter entirely. But whatever is obtained from using differences seems to be lost in translating back to undifferenced data.
Above, Basil posted a link to a NOAA chart that plots Adjusted – Raw, and the trend of the adjustments is 0.33C (vs a total ‘global warming’ land temp trend of 1.2C over the same period). Not knowing which version of GHCN is used for the graph, I don’t know which homogenization methods the graph applies to, but 0.33C seems significant even if it isn’t earth-shattering.
Actually, most of the adjustment comes after 1950, so it is relatively a more significant part of the “warming” claimed for the second half of the 20th century.

Ed
December 10, 2009 6:55 am

The adjustments that seem to have been applied do not look like they were made by a person. They look like they were made by an algorithm running through the data. It appears that this is what they did: they took the raw data and “homogenized” it. Assuming that they never went back and checked on their homogenization software, is it possible that that is where the fault lies? Could some long-ago programmer have created a routine to work with the data, and that program wound up having a runaway warming effect because of an error? That doesn’t seem unreasonable to me, and it would actually be sort of an exculpatory explanation for the scientists involved. If a scientist assumes that the homogenized data is the gold standard, and that process itself introduces the warming, then unless that scientist checks his/her premises the data would seem to conclusively show a trend.
Really, the only way to confirm this theory would be to do a lot of plots of data like this and see if this kind of adjustment is consistent. I’d propose an effort, similar to the “How (not) to measure temperature” series, for people to go through and do this exercise with other temperature sets. I, for one, would be interested in doing this, and I suspect you could create a whole “Army of Davids” to plot a ton of these quickly. My only problem is I don’t know where the data is or how to get it into a format that could be analyzed (if Excel isn’t the software of choice for it). If you could do a post on how to get this data and how to do the analysis like what you’ve done here, I think it could be huge. If people checked 100 miscellaneous stations and they came up with both up and down adjustments that overall balanced out, then we could maybe rule out the idea that it was the adjustment scheme that has created this problem. But if all or most of them show a significant warming adjustment, then we’d know something was up. If you can do a post on how to do this kind of work and then set people to it, I’ll bet you could have a bunch of them done within a day or two.
This post has already been very influential. It could be much more so if it turns out that this kind of adjustment is common.

hillbilly
December 10, 2009 7:31 am

I believe the Earth’s surface is approximately 29% land mass and 71% covered by water. In the Southern hemisphere, water coverage rises to about 85%. Given the small area that can therefore be covered by ground stations, plus the many factors which can and do lead to errors, as a lay person I wonder why UNIPCC scientists have chosen to use the data so gathered as the main basis for their climate modelling and predictions? Sophisticated satellite technology used for temperature measuring has been rigorously tested and would appear to be a far more accurate and less error-prone way to give a truer picture of global mean temperature. Perhaps the following comments of John Daly are just as relevant today as they were when first published. Quote:
“A further problem with the statistical processing is that neither GISS nor CRU can inspect the siting or maintenance of the thousands of stations they include in their data set. Thus even stations which might reasonably be assumed to be `clean’ like Low Head Lighthouse, may actually conceal site-specific errors which are known only to people who are local to the station.
As to station selection, not all stations in the world are included in the global data sets. This raises the question as to what criteria are used when selecting stations when the researchers at GISS and CRU have little local knowledge about the stations themselves. Indeed, they have shown no hesitation in accepting big-city stations into their data sets, knowing full well that urbanisation will be a wild card in the data.
The Australian Bureau of Meteorology (BoM) did attempt a station-by-station analysis of error-producing factors for Australian stations, including urbanisation, and offered an adjusted dataset of all Australian stations. The corrections included Low Head Lighthouse whose record was `cooled’ and brought more into line with nearby Launceston Airport. However, GISS and CRU are still using the original dataset, not the modified one offered by the BoM.
Station degradation and closures
Since about 1980, there have been numerous closures of stations across the world as governments have sought to cut expenditure in public services. The loss of stations has been particularly significant in the southern hemisphere where the station density was already thin to begin with. The adoption of the 100-station `climate reference network’ to cover the vast Australian continent suggests a further downgrading of stations not included in that hundred.
This has an unintended consequence for the statistical calculation of `global mean temperature’. In each 5°x5° grid-box any thinning out of the number of stations over time will result in a smaller mix of stations in the 1980s-1990s than was the case in previous decades, with a consequent shift in the mean temperature for each grid-box. This shift could theoretically result in a warming creep in some sectors and a cooling creep in others, not caused by climate, but caused by a shrinking station integration base. Another response to these closures is to accept stations into the database which had previously been excluded. This could result in sub-standard stations contaminating the aggregate record even further.
The end result cannot be statistically neutral because the majority of the stations being closed are precisely those stations which GISS and CRU designate as `rural’. These are the very stations which have the least warming, or no warming at all, and their closure during recent decades leaves that entire period to the tender mercies of the urban stations – i.e. accelerated warming from urbanisation. It is little wonder that the 1980s and 1990s are perceived to be warmer than previous decades – the collected data is warmer. But was the climate itself warmer? The surviving rural stations would suggest not.
We return to the central question. Is the surface record wrong in respect of both the amount of warming reported during the 1920s and in respect of the disputed warming trend it reports since 1979? In the latter case, the surface record is contradicted by both the satellite MSU record and the radio sonde record.
This is where individual station records can prove useful. Such records represent real temperatures recorded at real places one can find on a map. As such they are not the product of esoteric statistical processing or computer manipulation, and each can be assessed individually.
Some critics will dismiss individual station records as merely `anomalous’ (in which case most of the non-urban stations would have to be dismissed on those grounds), but when one station acquires an importance far beyond its own little record, no effort is spared to discredit it. This was the fate of Cloncurry, Queensland, Australia, which holds the honour of having recorded the hottest temperature ever measured in Australia, a continent known for its hot temperatures. The record was 53.1°C set, not in the `warm’ 1990s, but in 1889. It was a clear target for revisionism, for how can a skeptical public be convinced of `global warming’ when Cloncurry holds such a century-old record? The attack was made by Blair Trewin of the School of Earth Sciences at the University of Melbourne, with ample assistance from the whole meteorological establishment. And all this effort and expense was deployed to discredit one temperature reading on one hot day at one outback station 111 years ago. Stations do matter.
In the Appendix, there are numerous station records from all over the world, most of them from completely rural sites, some of them scientifically supervised. One telltale sign of any good record is when the data extends back many decades with no breaks. Where the record is unbroken, it indicates better than anything else that the people collecting the data take their task seriously, a good reason to also have confidence in their maintenance and adherence to proper procedures. Where the record is persistently broken, such as Mys Smidta and many other Russian and former Soviet stations, there is no reason to have any confidence in the fragmentary data they do return.
However, this is not the way GISS, CRU, or the IPCC view them. In spite of the station closures, and the fragmentation of so much Russian and former Soviet republic data, the surface record continues to be accepted uncritically in preference to the well validated satellite and sonde data. Indeed, in the latest drafts of the IPCC `Third Assessment Report’, the surface record is taken as a foundation assumption to underpin all the predictions about future climate change. To admit the surface record as being seriously flawed would unravel the integrity of the entire report, and indeed unravel the foundations of the `global warming’ theory itself.” end quote
A suggestion: Could one of the many knowledgeable people that have posted on this site access the raw satellite data from 1979 to the present and see how it stacks up with what the UNIPCC has been putting out? Perhaps it’s a simple thought from a simple man, but wouldn’t that give us a good idea of the true situation?

JJ
December 10, 2009 7:33 am

Alan,
“So, you guys all know better than thousands of actual scientists?”
There are not thousands of scientists who have published evidence that confirms Catastrophic Anthropogenic Global Warming. There are only a handful that have even claimed that. And yes, we have demonstrated in some instances that our work is in fact better than theirs, i.e. that their work was in fact wrong. And a significant number of that handful have been shown to be doing things that are decidedly unscientific, i.e. they are not ‘actual scientists’.
And not that credentials matter – despite what you may have heard, science is about who has the right answer, not who has the right diploma – but some of us are actual scientists.
Thousands of us, in fact. Not that numbers matter … despite what you may have heard, science is not a popularity contest.
“Global warming is actually not happening?”
Depends on what you mean by that term. One of the tricks that the alarmists use, and they have used it very successfully on you, is to switch back and forth between two or more definitions of that term. One of those definitions, there may be some evidence for. The scary definition of that term, the one that people wave around when they want to make politics go their way, is not sufficiently supported.
The scary Catastrophic Anthropogenic Global Warming theory has several components:
1) There is a recent (last 100 yrs) rise in globally averaged temperatures.
2) The temperature to which the globe has risen is unprecedented in human history.
3) The temperature rise is caused by CO2 emitted by human activity.
4) The temperature rise can be reversed by changes in human activity.
5) The temperature rise will bring net catastrophic effects to humans, such that we must do anything we can to stop it, provided that we can (see #4).
That is ‘global warming’. All of those points are necessary to ‘global warming’. None of that has been proved.
Only point #1 even comes close. I would argue that the temperature measuring networks have insufficient quality, spatial distribution, and temporal coverage to prove that claim. (Incidentally, if you read the CRU emails, you will see I am not the only one who thinks that 🙂 )
That said, is the earth experiencing a recent (100 yr) warming trend? I wouldn’t be surprised. The longer term trend is the recovery from the Little Ice Age, and the much longer term trend is the climb out of the Big Honkin Ice Age. Given those trends, and the fact that climate is a dynamic system, I wouldn’t be surprised if we have seen some recent warming. So what?
Note that the examples that you regurgitate from the alarmist talking points are all anecdotal (not rigorous proof of a global effect) and all reference #1. Whatever evidence there is for the global warming defined by #1, it is bait and switch to pretend that evidence proves the ‘global warming’ that depends on #1, #2, #3, #4, and #5.
Point number two requires some very specific research – demonstrating that globally averaged temperatures are now higher than at any point in the history of civilization. That requires sufficient spatial and temporal coverage of temperature measurements of sufficient accuracy and reliability to make a determination of global temperature to within several tenths of a degree.
This research does not exist. Only a handful of people are even trying to do it, and the most prominent among them are, bluntly, crooks.
The third point has not been proven. It has only been demonstrated by shaky inference applied to computer models that are known to not accurately represent some physical processes important to climate (the modelers freely admit this), models that did not predict the current ‘lack of warming’ trend. This may be due in part to the fact that those models are parameterized with and calibrated to current temperature datasets (#1) and past temperature reconstructions (#2) that have not been proven and that have had the fingers of some very non-scientific people mucking about in them.
Point four is based on those same models.
So is point five.
There are not ‘thousands of scientists’ working on each of those points who have independently proven those points to be true. In many cases, there are only a handful of scientists working on any one of those points, they are not independent, and they have not yet proved the point.
And some of those handful of scientists are not ‘actual scientists’. Some of them are doing things that are absolutely counter to science. Hiding adverse results, corrupting peer review, bullying other scientists, refusing to provide methods and data for replication, evading the laws that require them to do so, conspiring to destroy evidence .. this is not science, and the people doing it are not scientists.

WAG
December 10, 2009 7:51 am

Actually, Lambert never made any ad hominems against Willis. Respond to the substance of Lambert’s critique:
http://scienceblogs.com/deltoid/2009/12/willis_eschenbach_caught_lying.php
Also, I thought that according to SurfaceStations.org, siting problems were what rendered the RAW data unreliable. But now you’re claiming that the RAW data is what we should rely on, and that adjusting for those errors is fraud?

BCK
December 10, 2009 7:56 am

I was skeptical of this skepticism! So I thought I would cross-check it (always healthy in scientific debate), and went to the GISS data you referenced above.
I chose to look at Norwich for several reasons. Firstly, it is a favourite town of mine; secondly, it has 100 years of reliable data and, as far as I know, there is no need to adjust the data.
I had to slightly alter the data’s format so that I could plot it. The resulting graph is shown in the above link. I have fitted it with a sine function (the Earth orbits the sun fairly periodically, so it is fair to assume a sinusoidal fluctuation in temperature, though of course this isn’t accurate over long periods) with a linear increase or decrease. This increase is 0.000659169 +/- 0.0001593 C/month in my model.
Now I am not a climate scientist and this is only one data point but I thought I’d share it with everyone. Maybe we could each do this with our own favourite town and amass the results!
[I used the following on a Mac (or Cygwin for Windows, or any *NIX):
First, use the link in the original article to download the raw data.
Next, use a text editor (vi, Excel, Notepad) to remove the list of month names at the top and change the spaces to tabs (in Vim you can use
:%s/ \+/\t/g
). Then run:
cut -f2-13 -d' ' station.txt | perl -pe 's/\t/\n/g' > station-temp.dat
where there is a tab between the single quotes after -d (type CTRL-V, then press tab). Number the lines so gnuplot has an x column to plot against:
nl -ba station-temp.dat > station-temp-numbered.dat
Then use gnuplot to plot and fit the data.
First issue the command:
set datafile missing '999.9'
as 999.9 signifies missing data.
Then set the fit function:
y(x) = A*sin(x*f + C) + E + D*x
There may be a lot of reasons why using this function is a bad idea: cue climate scientists to shoot me down. The first part is just seasonal variation as we orbit the sun. A is an amplitude (probably based on where you are). f is the frequency of our orbit; using units of rad/month this is 2*pi/12 or 0.523598776. C just exists to offset the start point, which probably won’t be the spring equinox. E is the station’s mean temperature, which, like A, depends on where you are. D is the increase (or decrease) in temperature per month, if there is any at all.
You then need to plot the graph to see how closely the fit resembles the data, and adjust the starting values until you get a reasonable fit. Once you’ve got something that looks reasonable, gnuplot can adjust it to finer detail. However, if you don’t start with something reasonable, gnuplot goes mad and fails. E.g. for Norwich I used:
A = 16
C = 1.6
f = 0.53
D = 0.001
E = 5.02003
You can now issue these commands:
plot "station-temp-numbered.dat" using 1:2, y(x)
fit y(x) "station-temp-numbered.dat" using 1:2 via A, C, f, D, E
We are interested in D and the error on D. Plot again to check that gnuplot did something reasonable. If it didn’t, you need better starting points. ]
p.s. I have no doubt that industrialisation is slowly destroying the planet and needs to be halted.

wobble
December 10, 2009 7:58 am

JJ (23:13:30) :
””One wonders what exactly you are waiting on. You have the raw data. The homogenization methods have been provided to you, along with a bibliography of documents that provide great detail.
You need to read them. If you do, one of the first things that you are likely to pick up on is that (outside of the US) GHCN2 does not apply adjustments of the sort that your ’show me the scientific reason for each one’ question assumes.
Stop ‘waiting’. Get to work.””
JJ, can you explain what you mean by this?
Are you telling Willis that he needs to figure out the reasons for each of the adjustments to Darwin on his own by relying on homogenization procedures described in the bibliographical documents?

wobble
December 10, 2009 8:02 am

WAG (07:51:47) :
“”But now you’re claiming that the RAW data is what we should rely on, and that adjusting for those errors is fraud?””
I think Willis is comfortable with adjustments for errors. It seems as if he wants those that made the adjustments to describe exactly what errors they were adjusting for. There were half a dozen or so adjustments made so they should be able to point to half a dozen or so errors.

wobble
December 10, 2009 8:09 am

WAG (07:51:47) :
“”Also, I thought that according to SurfaceStations.org, siting problems were what rendered the RAW data unreliable. But now you’re claiming that the RAW data is what we should rely on, and that adjusting for those errors is fraud?””
(This is my second try. I’m sorry if this ends up as a duplicate comment.)
WAG, I think Willis is willing to accept adjustments that make sense. He wants to know the reason that each of the half a dozen adjustments were made. If there were half a dozen adjustments, then the adjusters should be able to document half a dozen specific errors that required adjustments.

December 10, 2009 8:12 am

WAG (07:51:47)…
…once again displays his abysmal lack of critical thinking.
The siting issue is entirely different from the issue of CRU and Michael Mann hiding the raw data. Sites on airport runways, or next to air conditioner exhausts should not be “adjusted” by the Elmer Gantry CRU scientists, who refuse to disclose how they do their data massaging. Sites contaminated by manmade heat should be discarded completely.
WAG claims that “Lambert never made any ad hominems against Willis.” But Lambert says that Willis is “lying.” To any normal person, that is an ad hominem attack.
WAG and Lambert are typical examples of the ethics-challenged people who migrate to the alarmist camp. Since they fail to make a credible case, they always fall back on ad hominem attacks. Lacking the science, that’s all they’ve got.

Slartibartfast
December 10, 2009 8:24 am

“Actually, Lambert never made any ad hominems against Willis.”
For real? The title of his frakking post is

Willis Eschenbach caught lying about temperature trends

That’s not ad hominem, though? That bald, unsupported assertion that Willis is knowingly engaging in deception, that’s not ad hominem?
You guys just can’t recognize it when you see it. Because…well, you’re soaking in it. Really…”Climate Fraudit”? This is middle-school ad hom.

Mutton Dressed as Lamb
December 10, 2009 8:27 am

And all that Guff about Meat and Carbon is also a load of “Bull”.
http://muttondressedaslamb.wordpress.com/2009/07/16/the-carbon-meat-myth/

Brian D Finch
December 10, 2009 8:32 am

Re my previous post (06:39:05) : Why homogenise anything at all? Do the unused 48 year records from Old Halls Creek from 1952 to 1999 continue to flatline at 33.4 degrees C? If so, whence comes the justification for a rise of 1.6 degrees C over the past century?

December 10, 2009 8:35 am

My complaint would be that a 100-year sampling of Earth’s monitored weather is a moronic, idealistic approach to dictating or predicting any warming or cooling trend of this planet!
The earth’s billions of years of turmoil and evolution and weather are far too extreme to evaluate using only a 100-year window! And I find not one analysis or speculation that gives our Sun any consideration.

jrshipley
December 10, 2009 8:36 am

“A change in the type of thermometer shelter used at many Australian observation sites in the early 20th century resulted in a sudden drop in recorded temperatures which is entirely spurious. It is for this reason that these early data are currently not used for monitoring climate change. Other common changes at Australian sites over time include location moves, construction of buildings or growth of vegetation around the observation site and, more recently, the introduction of Automatic Weather Stations.”
http://www.bom.gov.au/climate/change/datasets/datasets.shtml
A number of commentators have pointed to this quote, which is absent from the top post. I’d like to make the meta point about what this tells us about the denier methodology. There are published papers on the homogenization methods that might have been addressed, possibly critically, in scientific argument by a skeptic. A denier, on the other hand, just puts the words “adjustment” and “homogenization” in scare quotes and places them in a speculative and conspiratorial context.
Witness this subtle rhetorical use of scare quotes in place of actual arguments. From the top post: “The answer is, these graphs all use the raw GHCN data. But the IPCC uses the “adjusted” data.” This sort of thing shows clearly the line between skeptic and denier. The skeptic would look in the literature for the explanation and meaning of homogenization. The denier leaps to conspiratorial conclusions.
[REPLY – When the NCDC, et al., actually releases said methods when asked repeatedly (rather than stonewalling and obfuscating), we could actually look into said “meaning”. (At least) until then, scare quotes apply. ~ Evan]
