Before One Has Data

Guest Post by Willis Eschenbach

Anthony Watts has posted up an interesting article on the temperature at Laverton Airport (Laverton Aero), Australia. Unfortunately, he was moving too fast, plus he’s on the other side of the world, with his head pointed downwards and his luggage lost in Limbo (which in Australia is probably called something like Limbooloolarat), and as a result he posted up a Google Earth view of a different Australian Laverton. So let’s fix that for a start.

Figure 1. Laverton Aero. As you can see, it is in a developed area, on the outskirts of Melbourne, Australia.

Anthony discussed an interesting letter about the Laverton Aero temperature, so I thought I’d take a closer look at the data itself. As always, there are lots of interesting issues.

To begin with, GISS lists no less than five separate records for Laverton Aero. Four of the records are very similar. One is different from the others in the early years but then agrees with the other four after that. Here are the five records:

Figure 2. All raw records from the GISS database. Photo is of downtown Melbourne from Laverton Station.

This situation of multiple records is quite common. As always, the next part of the puzzle is how to combine the five different records to get one single “combined” record. In this case, for the latter part of the record it seems simple. Either a straight linear offset onto the longest record, or a first difference average of the records, will give a reasonable answer for the post-1965 part of the record. Heck, even a straight average would not be a problem; the five records are quite close.

For the early part of the record, given the good agreement between all records except record Raw2, I’d be tempted to throw out the early part of the Raw2 record entirely. Alternately, one could consider the early and late parts of Raw2 as different records, and then use one of the two methods to average it back in.
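For readers who want the mechanics of the two combination methods mentioned above (the offset and first-difference approaches), here is a minimal sketch in Python. This is my own illustrative code with made-up station values; it is not the GISS implementation:

```python
def offset_combine(long_rec, short_rec):
    """Shift short_rec so its mean matches long_rec over their overlap,
    then average the two wherever both have data."""
    overlap = [y for y in long_rec if y in short_rec]
    offset = (sum(long_rec[y] for y in overlap) -
              sum(short_rec[y] for y in overlap)) / len(overlap)
    combined = {}
    for y in set(long_rec) | set(short_rec):
        vals = []
        if y in long_rec:
            vals.append(long_rec[y])
        if y in short_rec:
            vals.append(short_rec[y] + offset)
        combined[y] = sum(vals) / len(vals)
    return combined

def first_difference_combine(records):
    """Average the year-to-year changes across records, then integrate
    the averaged differences into one anomaly series."""
    years = sorted(set(y for r in records for y in r))
    diffs = {}
    for y0, y1 in zip(years, years[1:]):
        ds = [r[y1] - r[y0] for r in records if y0 in r and y1 in r]
        diffs[y1] = sum(ds) / len(ds) if ds else 0.0  # no overlap: assume no change
    series = {years[0]: 0.0}  # anomalies relative to the first year
    for y0, y1 in zip(years, years[1:]):
        series[y1] = series[y0] + diffs[y1]
    return series
```

Note that the first-difference result is an anomaly series, not absolute temperatures; either way, the combination is anchored to the overlapping years.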

GISS, however, has done none of those. Figure 3 shows the five raw records, plus the GISS “Combined” record:

Figure 3. Five GISS raw records, plus GISS record entitled “after combining sources at the same location”. Raw records are shown in shades of blue, with the Combined record in red. Photo is of Laverton Aero (bottom of picture) looking towards Melbourne.

Now, I have to admit that I don’t understand this “combined record” at all. It seems to me that no matter how one might choose to combine a group of records, the final combined temperature has to end up in between the temperatures of the individual records. It can’t be warmer or colder than all of the records.

But in this case, the “combined” record is often colder than any of the individual records … how can that be?
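That intuition can be written down as a one-function sanity check: flag any year where the combined value escapes the envelope of the raw records. A sketch (a hypothetical helper of mine, not GISS code):

```python
def outside_envelope(raw_records, combined, tol=0.0):
    """Return the years where the combined value is warmer or colder
    than every raw record that has data for that year."""
    flagged = []
    for year, value in combined.items():
        vals = [r[year] for r in raw_records if year in r]
        if vals and (value < min(vals) - tol or value > max(vals) + tol):
            flagged.append(year)
    return sorted(flagged)
```

Any reasonable averaging scheme should return an empty list here; a non-empty list is exactly the puzzle shown in Figure 3.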

Well, let’s set that question aside. The next thing that GISS does is to adjust the data. This adjustment is supposed to correct for inhomogeneities in the data, as well as adjust for the Urban Heat Island effect. Figure 4 shows the GISS Raw, Combined, and Adjusted data, along with the amount of the adjustment:

Figure 4. Raw, combined, and adjusted Laverton Aero records. Amount of the adjustment after combining the records is shown in yellow (right scale).

I didn’t understand the “combined” data in Fig. 3, but I really don’t understand this one. The adjustment increases the trend from 1944 to 1997, by which time the adjustment is half a degree. Then, from 1997 to 2009, the adjustment decreases the trend at a staggering rate, half a degree in 12 years. This is (theoretically) to adjust for things like the urban heat island effect … but it has increased the trend for most of the record.
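For anyone who wants to check trend claims like these themselves, a bare-bones least-squares slope is all that is needed. This is my own sketch with an invented series standing in for the real data; feed it the raw and adjusted series over 1944–1997 and 1997–2009 and compare:

```python
def trend_per_decade(years, temps):
    """Ordinary least-squares slope of temps against years, in deg/decade."""
    n = len(years)
    ybar = sum(years) / n
    tbar = sum(temps) / n
    num = sum((y - ybar) * (t - tbar) for y, t in zip(years, temps))
    den = sum((y - ybar) ** 2 for y in years)
    return 10.0 * num / den
```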

But as they say on TV, “wait, there’s more”. We also have the Australian record. Now, theoretically the GISS data is based on the Australian data. However, the Aussies have put their own twist on the record. Figure 5 shows the GISS Combined and Adjusted data, along with the Australian data (station number 087031).

Figure 5. GISS Combined and Adjusted, plus Australian data.

Once again, perplexity roolz … why do the Australians have data in the 1999-2003 gap, while GISS has none? How come the Aussies say that 2007 was half a degree warmer than what GISS says? What’s up with the cold Australian data for 1949?

Now, I’m not saying that anything you see here is the result of deliberate alteration of the data. What it looks like to me is that GISS has applied some kind of “combining” algorithm that ends up with the combination being out-of-bounds. And it has applied an “adjustment” algorithm that has done curious things to the trend. What I don’t see is any indication that after running the computer program, anyone looked at the results and said “Is this reasonable?”

Does it make sense that after combining the data, the “combined” result is often colder than any of the five individual datasets?

Is it reasonable that when there is only one raw dataset for a period, like 1944–1948 and 1995–2009, the “combined” result is different from that single raw dataset?

Is it logical that the trend should be artificially increased from 1944 to 1997, then decreased from that point onwards?

Do we really believe that the observations from 1997 to 2009 showed an incorrect warming of half a degree in just over a decade?

That, for me, is the huge missing link in all of the groups working with the temperature data, whether they are Australian, US, English, or whatever. They don’t seem to do any quality control, not even the simplest “does this result seem right?” kind of test.

Finally, the letter in Anthony’s post says:

BOM [Australian Bureau of Meteorology] currently regards Laverton as a “High Quality” site and uses it as part of its climate monitoring network. BOM currently does not adjust station records at Laverton for UHI.

That being the case … why is the Australian data so different from the GISS data (whether raw, combined, or adjusted)? And how can a station at an airport near concrete and railroads and highways and surrounded by houses and businesses be “High Quality”?

It is astonishing to me that at this point in the study of the climate, we still do not have a single agreed upon set of temperature data to work from. In addition, we still do not have an agreed upon way to combine station records at a single location into a “combined” record. And finally, we still do not have an agreed upon way to turn a group of stations into an area average.

And folks claim that there is a “consensus” about the science? Man, we don’t have “consensus” about the data itself, much less what it means. And as Sherlock Holmes said:

I never guess. It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts. — Sir Arthur Conan Doyle, A Scandal in Bohemia


83 thoughts on “Before One Has Data”

  1. Thanks Willis for correcting my Google Earth mistake and making an even better post. I’ve made a note in the original post along with a link.

  2. Nicely summed Willis. The simple point I was making in my correspondence with senior BOM climate staff is that they simply do not know the effect of UHI at this critical station, yet insist it is suitable for climate monitoring. IMHO they need to do the experiment then perhaps they can say something about UHI at the site. At the moment they are just guessing.

    I was looking at writing this up and submitting to the Australian Meteorological and Oceanographic Journal but when I looked at the editorial board (see here: http://www.bom.gov.au/amm/editorial-board.shtml) I figured I might as well try the Minister.

  3. Imagine a DA bringing speeding charges against you. These are serious, and could result in felony charges involving words like “criminal indifference to human life” or “speeding through a school zone”.

    Your defense attorney asks what is the evidence.

    He is informed there are six different readings from two police jurisdictions. These readings are from at least six different radar guns, all with a history of giving, on occasion, spurious results. Because of these technical difficulties, the data from these readings are combined and then homogenized. Neither the specific locations of the radar units nor the basis of the data adjustments is available to the defense. In fact, they are not available to anyone, as the records storage areas of the two jurisdictions are in a state of complete chaos.

    In addition, the technicians responsible for making the ‘adjustments’ use undocumented and varying procedures for these adjustments.

    One technician testifies, reluctantly, regarding his experience with one of the radar locations. He has observed how the new FM rock station has altered the electronic environment the radar guns operate under. The signal from this station can affect the readings on the radar guns. It really depends on what day of the week it is, as the station will broadcast at a lower power level when the intended audience is sleeping off last night’s party time.

    There is also the problem with householders installing security systems generating spurious signals interfering with the radar guns.

    “We know these new security systems affect our radar units, we just don’t know exactly how they do so. So we adjust the radar readings to compensate.”

    In addition, different police officers do their own adjusting in the field.

    “I can just tell when someone is really speeding. So I’ll adjust the recorded speeds to give a more truthful reading,” says one sheriff.

    This is the only law enforcement officer testifying, as the others simply refuse to comply with the court order to appear before the jury.

    It is also revealed that the two jurisdictions disagree with one another on how to adjust radar gun readings. Neither can produce any documentation on how the readings are adjusted, let alone the basis for these adjustments.

    What do you suppose the jury would do with a case like that?

    How would the defense attorney proceed?

    What do you think you would do if you were on that jury?

    Now instead of a speeding charge, switch gears and think temperature records.

    Can you draw any decent conclusions regarding the temperature record at this Australian location?

    My conclusion is: we have no idea of what went on here except there have been temperature readings in some given range.

    There’s no real evidence of any trend here because the records are so botched up.

    You said it well Willis:

    It is astonishing to me that at this point in the study of the climate, we still do not have a single agreed upon set of temperature data to work from.

  4. “I never guess. It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts. — Sir Arthur Conan Doyle, A Scandal in Bohemia”

    Spot on!

    The current CAGW argument is like measuring a balloon that is inflating and deflating, with different scales and sizes of ruler on differing time-scales, and then saying that there is certainty and consensus about the size of the balloon, about whether it is inflating or deflating, and that it is all caused by a flea that landed on it farting!

  5. It seems like more of the same obfuscation we see in essentially all of the GISS data. “Homogenization”, “adjusted”, “value-added” data, without any explanation as to the reasoning behind the alteration of the raw data with the end result ALWAYS inducing warming where there was none originally.

  6. Willis did you look at the actual raw data of the Laverton RAAF Vic. station number 087031 monthly max http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_nccObsCode=36&p_display_type=dataFile&p_startYear=&p_stn_num=087031
    monthly min http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_nccObsCode=38&p_display_type=dataFile&p_startYear=&p_stn_num=087031

    There is no gap in the data. The station is indicated to have opened in 1941, but the data start in Oct 1943, and the station is still open.
    The mean monthly max. is indicated to be 19.7C and the mean monthly min. 9.2C.
    It appears that there has been an increase in temperatures, probably from UHI effect, in recent years:
    Ave Min: 1944 8.2, 1945 8.9, 1946 8.8 … 2007 10.5, 2008 9.6, 2009 10.0
    Ave Max: 1944 19.4, 1945 18.9, 1946 18.5 … 2007 21.3, 2008 20.4, 2009 20.9

    My Subaru Forester has a thermometer to give outside temperatures. I regularly measure a 2C difference (lower) at night from where I live on 1.5 acres (surrounded by similar properties) compared to the main street of the town 5 km away (population of the town’s postcode about 40,000). So I would not be surprised by a 2C increase going from wartime grass fields to bitumen airstrips surrounded by dense industry and housing.

  7. the late John Daly also did some analysis of Laverton aero — he gives a chart of the seasons here:

    and the write up is here:
    http://www.john-daly.com/press/press-03c.htm

    and there is a bit more of a write up with some pictures of Laverton here:
    http://rcs-audit.blogspot.com/2010/03/air-bases.html

    There is also an interactive map here that you can zoom in quite close. You can find the weather station (or at least that strange round building it is shown to be near) if you go up towards the top until you get to Sayers Rd, then follow it along until you get to a turning bay going towards the top of the picture; the weather station is down a bit. Actually the siting doesn’t seem that bad on the WUWT scale.
    http://www.ourairports.com/airports/YLVT/#lat=-37.855167049756865,lon=144.75531488656998,zoom=19,type=Satellite,airport=YLVT

  8. OT – I could not resist another attack at computer models.

    Washington Post – June 22, 2010
    Something appears to have changed inside the sun, something the models did not predict. But what? …..

    The flood of observations from space- and ground-based telescopes suggests…….

    When Hathaway’s NASA team looked over the observations to find out where their models had gone wrong, they noticed…..

    These contradictory findings have thrown the best computer models of the sun into disarray………

    By 2015, they could be gone altogether, plunging us into a new Maunder minimum — and perhaps a new Little Ice Age.
    By Stuart Clark, PhD

    http://www.washingtonpost.com/wp-dyn/content/article/2010/06/21/AR2010062104114_2.html?sid=ST2010062104203
    NASA, stay off the Nintendo please.

  9. So, the entire historic temperatures data set clearly needs to be audited by someone neutral and not a typical alarmist.

    Any chance of that happening – absolutely not! We can’t have the real figures interfering with the climate models.

    It is incredible that the proposal to tax us all back into the Stone Age still has any credibility.

  10. A cardinal rule for dealing with data from a medical experiment, say a comparison between two treatments, is that one must have a clear protocol for dealing with bad data, and this protocol must be specified before the data are analyzed (and preferably before the experiment starts). One is not allowed to make up the rules as one goes along, even if doing so leads to perfectly reasonable treatments of the bad data. In the medical context the choice is normally a binary one: a trial participant is either retained or dropped for the analysis. In the surface data context there is a broader choice, but the same principle should apply to the procedures for adjusting surface station data.

    It should not be acceptable that adjustments, even perfectly reasonable adjustments, are made for each station according to a local best judgment after inspection of the data. One needs an algorithm, or in any case unambiguous rules, applied uniformly to all station data.

    I think that the specific criticisms by Willis here, and by Wattsupwiththat in general, of surface data adjustments are proper, but the proper response from the surface data folks cannot be a case-by-case justification. The only proper justification for any specific surface data adjustment is that it is prescribed by the algorithm or the rules. Then there can be argument over the content of the rules, or it can be questioned whether the rules are indeed applied correctly and uniformly. A case-by-case discussion of adjustments, without reference to uniformly applicable rules for the adjustments, is beside the point.
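To make the comment’s point concrete, a predeclared, uniformly applied rule might look like this sketch (an invented example rule of mine, not BOM or GISS practice):

```python
# A predeclared rule, fixed before any data are examined and applied
# identically to every station: a year is retained only if it has at
# least MIN_DAYS valid daily readings.  The threshold is an invented
# example, not any agency's actual cutoff.
MIN_DAYS = 355

def annual_mean(daily_values):
    """Return the annual mean temperature, or None if the year fails
    the completeness rule.  None entries mark missing days."""
    valid = [v for v in daily_values if v is not None]
    if len(valid) < MIN_DAYS:
        return None
    return sum(valid) / len(valid)
```

The content of the rule can then be argued over in the open, but no individual station gets bespoke treatment.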

  11. As I pointed out in the comments to Anthony Watts’ post, there is some history available.
    This link to Andrew Bolt’s blog (21/3/2010) may be of use, as it shows some aerial photos of Laverton (Victoria) a couple of years ago and in 1946:

    http://blogs.news.com.au/heraldsun/andrewbolt/index.php/heraldsun/comments/could_more_concrete_asphalt_and_industry_have_made_laverton_warmer

    The 1946 photo shows open fields so there is clearly the potential for UHI effect when compared with the photo you present.

    Wiki says that there was a Grand Prix circuit on the runways in 1948, suggesting that the early training ground was upgraded with sealed runways, possibly for heavier jets after WW2. This new activity may account for some of the sharp rise seen in the late 1940s.

    Wiki also says the base was decommissioned in the late 1990s which may account for the sharp drop as the base was closed down. The runways were apparently ripped up for housing development. In 2007 the land was released for subdivision as urban sprawl / UHI effect approached. This would explain the rapid rise from early 2000.

    I assume there has been quite a bit of movement of weather stations during all this. I’m not sure that much of this is as purely evidence-based as you would like, but hey, I’m a geologist. We have to work with what we have. But this may provide a starting place for the more rigorously inclined.

  12. The Australian record should be authoritative. All the Aussie records must come from the BOM ab initio. I believe that the “New Climate Data Online” website at the BOM is reasonably close to the raw data record, having done some satisfactory freehand comparisons of Tasmanian stations to published (hardcopy book) BOM Climatic Averages.

    The Reference Climate Stations are a subset (one hundred or thereabouts) of the hundreds of Australian climate stations, a subset especially picked for stability, likelihood of being maintained into the future, appropriate siting, and (one assumes) reasonable completeness over the available record time.

    It would be very strange for there to be years of missing data in this series in modern times. Apart from anything else, the station is probably needed for ongoing air operations. And it was previously designated an “AMO” or Airport Meteorological Office, meaning that there was an office of BOM staff at the location. If they skived off for years at a time, someone would have noticed…..

    So the Laverton data exists and always has existed! Laverton RAAF aka Laverton Aero (BOM ID 087031) starts at 1943, is ongoing at the present time, and switched over to an automatic weather station (AWS) in 1997.

    I have the BOM daily min/max temperature CD which confirms the dates of years of record. There’s 11 non-contiguous days of missing Tmin/Tmax data in the 1999-2003 period. Chiefio (chiefio.wordpress.com) suggests that GHCNv2 drops whole years if it encounters any missing data. That’s pretty rigorous, if so. I guess it is not known just where in the BOM – GHCN – GISS QC/transfer/homogenisation/combination factory line the Laverton data goes astray.

    There’s also some missing daily data prior to 1961, but it’s pretty clean after that. Could some of the GISS traces be (yikes) different versions of the same trace? With different ‘missing data’ treatments? I couldn’t find any other “Lavertons” with the right years of record on a quick search.

    I’ve seen the missing data phenomenon before. I was recently checking Forrest AMO Reference Climate Station in Western Australia. GHCN seems to have “vanished” whole years within the past twenty years according to KNMI and GISS. Cold years, as it happens.

    My take on the Laverton graph? 1992, 1995 and 1996 were the coldest years in south-eastern Australia in the last twenty years. In Victoria, it looks like they were almost as cold as the chilly 1940s-1950s. (I was stationed in Melbourne in winter 1995. I can confirm it was a bleak, biting winter.) Australian year-to-year annual mean temperatures naturally vary around 2 degrees C over decades, with the big peaks probably corresponding to large or long El Nino events or other warm-ocean events. (Compare the Laverton plot with Ballarat Aerodrome). So the exciting rise from the early nineties to the late 2000s at Laverton is still consistent with natural variation on the Australian continent. We’ve had a decade dominated by El Nino, and some hard drought years, so it is not surprising we are at the ‘high end’ at the moment.

  13. The vexed question of recording accurate temperatures, and the failure to do this, initially alerted me to the suggestion that the theory of ‘Catastrophic Anthropogenic Global Warming’ is built on extremely shaky foundations. How can the Met Office in the UK justify spending millions of scarce pounds to run forecasting models built on data that is possibly wildly inaccurate?
    While in London, my wife and I live in a suburban house with no lawn, just private paved areas and a pebble garden behind the house and public areas to the front. Still, my daily temp. readings (taken outside in my back yard and in the shade) almost invariably show slightly lower max temps and slightly higher min temps. I use a good quality thermometer, so if the Met Office figures are correct, why are my readings showing less heat during daylight and more heat during darkness, when the opposite should be the case if the Met Office corrects for UHI?
    And I hope that Anthony’s missing luggage is soon found. Confusing enough to operate out of a suitcase in different time zones, but not to have one’s suitcase to hand can be quite distressing.

  14. If I want to know what the climate was doing in the past, I’ll consult a geologist. If I want a climatic prediction for the future, I’ll best be served by consulting.. a geologist.

  15. Willis, Laverton Victoria ceased to be used as an airfield back in the 1990s. The weather station is located well away from runways, buildings and bitumen. However, whilst the site is located in a sizable grassed area, the urban fringe of Melbourne now pretty well surrounds the old Laverton Air Force Base. The location is here at the green arrow

  16. Willis,

    Some more answers to some questions.

    There are missing data in the early years, see this part of 1943 in Tmin:
    Year Month Day Tmax Tmin

    1943 10 31 17.3 7.6
    1943 11 1 18.4
    1943 11 2 17.7
    1943 11 3 27.8
    1943 11 4 20.1
    1943 11 5 16.1
    1943 11 6 19.6 9.3
    1943 11 7 14.4
    1943 11 8 14.6
    1943 11 9 17.5
    1943 11 10 13.8
    1943 11 11 16.2
    1943 11 12 22.1
    1943 11 13 22.2
    1943 11 14 23.8 7.9
    1943 11 15 23.2
    1943 11 16 30.2
    1943 11 17 31.6 18.4
    1943 11 18 16.9
    1943 11 19 17.3
    1943 11 20 16.8
    1943 11 21 18.2
    1943 11 22 20.6 11.1
    1943 11 23 19.6 11
    1943 11 24 18.2
    1943 11 25 17.9
    1943 11 26 23.3 12.2
    1943 11 27 29
    1943 11 28 17.8
    1943 11 29 21.3
    1943 11 30 17.9
    1943 12 1 19.9
    1943 12 2 21.9
    1943 12 3 24.4
    1943 12 4 15.8
    1943 12 5 18.3
    1943 12 6 22.8
    1943 12 7 18.4
    1943 12 8 21.7 9.7
    1943 12 9 18.6 11.4
    1943 12 10 23.2 11.8
    1943 12 11 24.3 12.6
    1943 12 12 22.2
    1943 12 13 21.7 15.7
    1943 12 14 18.4
    1943 12 15 18.3 10.3
    1943 12 16 17.8 11.3
    1943 12 17 16.6 12.1
    1943 12 18 21.4 12.8
    1943 12 19 21.1 9.5
    1943 12 20 20.3 7.4
    1943 12 21 23.9 9.2
    1943 12 22 32.5 7.4
    1943 12 23 19.7 8.3
    1943 12 24 24.5 8.3
    1943 12 25 22.9
    1943 12 26 21.1
    1943 12 27 22.8
    1943 12 28 18.4 11.7
    1943 12 29 19.8
    1943 12 30 28.8 11.8
    1943 12 31 39.2 14.8
    1944 1 1 39.3
    1944 1 2 25.3
    1944 1 3 23.2
    1944 1 4 21.6
    1944 1 5 21.1
    1944 1 6 21.6 12.2
    1944 1 7 27.8
    1944 1 8 40.1
    1944 1 9 24.2
    1944 1 10 24.7
    1944 1 11 18.2
    1944 1 12 20.4
    1944 1 13 28.3
    1944 1 14 40.8
    1944 1 15 23.6
    1944 1 16 22.2
    1944 1 17 23.8
    1944 1 18 23.8
    1944 1 19 19.8
    1944 1 20 28.3
    1944 1 21 39.3
    1944 1 22 40.7
    1944 1 23 19.9
    1944 1 24 19.4
    1944 1 25 21.6
    1944 1 26 21.6
    1944 1 27 21.3
    1944 1 28 29.3
    1944 1 29 23.8
    1944 1 30 24
    1944 1 31 29.1
    1944 2 1 18.9
    1944 2 2 20
    1944 2 3 25.9
    1944 2 4 26.8
    1944 2 5 34.2
    1944 2 6 35.8
    1944 2 7 29.5
    1944 2 8 22.9
    1944 2 9 19.4

    The cold year in 1949: my official BoM record gives Tmax 17.97 deg C, Tmin 8.32 deg C, for an average of 13.14 deg C for the year, after infilling about 12 missing data points of the 730 with intermediate round figures. This agrees with your yellow graph at bottom for the Aust data. It does not resemble anything shown for GISS.

    The DMS for the Laverton station 087031 are stated as 144 45 24, -37 51 24. If you trust Google Earth, this picks up some instruments 180m south of Sayer’s Rd. If these are the correct instruments, they are 870m bearing 222 deg from the NE end of runway 23. That is, not far to the north of the flight path. However, while Laverton was a busy place in the war years, it ceased to be an official RAAF airport in 1992. I spent some years 5 miles to the south of Laverton at RAAF Point Cook Academy in the 1959-61 period and took some training at Laverton. It was way out in the wilderness then. We used to spot and shoot rabbits with the landing light from a P51 Mustang mounted on a car roof with a hole cut in it. In 2010 the area is rather suburban, as can be seen on the aerial photo above. Little reason exists to disregard UHI.

    There was also a temperature station for a short time at Salines, a few miles south of Laverton, where salt water was evaporated and salt harvested; but much of the weekend data are missing. There is also a record of a Laverton Comparison 087177 from 1 Mar 1997 to 31 July 1998. The comparison point seems to have been at 144 44 44, -37 51 59, some 210 m west of the N-S runway 35. I have not cross checked the comparison, but I note it because it seems additional to the Heinz 57 varieties shown for GISS.

    I hope this helps with context.

  17. @Alexander K says: June 24, 2010 at 4:26 am

    “How can the Met Office in the UK justify spending millions of scarce pounds to run forecasting models built on data that is possibly wildly inaccurate?”

    And how can our beloved politicians in the UK (in between slashing and burning benefits and public spending, and hiking taxes and fuel costs) contentedly plan to spend £400 Billion (their figures and no doubt grossly underestimated) on “combatting climate change” and moving to a “low carbon economy”, in response to the MET Office’s ridiculous prophesies based on exceedingly dodgy, cherry picked and “homogenised” data?

    Facts? We don’t need no stinkin’ facts! We’ve made up our minds!!!

  18. How far downwind does the ‘wake’ of heat from a built up area affect the temperature over the adjoining rural area? The answer to this is particularly relevant to this discussion about the temperatures at Laverton, but has more general implications as well.

    I recently spent a few days tracking temperatures recorded by the Bureau of Meteorology at the Geelong airport (38.22S, 144.33E) compared with those at the Avalon airport (38.03S, 144.48E). Geelong is a provincial city about 47 km SW of Laverton. The Geelong airport is about 1.5 km S of the built up area, and Avalon airport is 8 to 10 km NNW. Both are rated as high quality automatic stations. Readings are reported at half hour intervals.

    The point here is that depending on the direction of the wind, if one is in clear air upstream, then the other is in the wake, so that any differential should be an indication of an extended heat island effect.

    What showed up quite clearly was that even at those distances the station in the downwind wake could be up to about 0.5 deg.C higher during much of a day and into the evening.

    Laverton is downwind of built up areas when the wind is in any quarter except the NW, which means that the temperature records must reflect the impact of the increasing density of adjacent urbanization over the past fifty years. A separation of even a few hundred meters is apparently quite insufficient to guarantee isolation.
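The comparison described in this comment can be sketched in a few lines: pair the simultaneous readings from the two airports and average the downwind-minus-upwind difference by wind direction. The data layout here is an assumption of mine, and the readings in the test are invented:

```python
from collections import defaultdict

def wake_differential(obs):
    """obs: list of (wind_dir, temp_upwind, temp_downwind) tuples,
    where wind_dir is a compass label like 'N' or 'SW'.
    Returns the mean downwind-minus-upwind difference per direction;
    a persistent positive value suggests an urban heat wake."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for wind_dir, t_up, t_down in obs:
        sums[wind_dir] += t_down - t_up
        counts[wind_dir] += 1
    return {d: sums[d] / counts[d] for d in sums}
```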

  19. The whole temperature record is work of fiction! If you take the individual works of fiction for each site and combine them, what do you have? The public library, Fiction section!

  20. Interesting UHI adjustment, given that the City of Wyndham, the local municipality, has shown the following population growth:
    1954 9,414#
    1958 10,520*
    1961 13,629
    1966 18,369
    1971 25,116
    1976 31,790
    1981 40,555
    1986 52,458
    1991 60,563
    1996 73,691
    2001 84,861
    2006 112,695
    Not to mention the increased heat generated by the rising number of cars and gadgets per capita.
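Empirical UHI studies often model the maximum urban-rural temperature difference as roughly linear in the logarithm of population, which is why growth like Wyndham’s matters. The sketch below uses that logarithmic form, but the coefficients are purely illustrative placeholders of mine, not fitted values for Australian towns:

```python
import math

# Assumed illustrative coefficients: dT_max = A * log10(population) - B.
# These are NOT published values; only the logarithmic form is the point.
A, B = 2.0, 4.0

def uhi_estimate(population):
    """Rough maximum UHI estimate, floored at zero for small towns."""
    return max(0.0, A * math.log10(population) - B)
```

With these assumed coefficients, Wyndham’s growth from 9,414 (1954) to 112,695 (2006) would imply roughly a 2 C rise in maximum UHI; the real coefficients may differ, but the logarithmic form means the adjustment should track population, not move arbitrarily.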

  21. [~SNIP~ The d-word is against site policy. Try again without the name calling. ~dbs, mod.]

  22. One way to tell if the data sets are corrupted is to ask oneself, “do the adjusted data give a random error, or is the error always one way?” An error that is unidirectional is called a systematic error. In the warm-earther cabal, I posit that this is intentional wrongdoing.

    These GISS et al. adjustments and algorithms always, without exception, change the raw data into a warm-earther friendly trend (onwards, always upwards!). These data sets then lead an observer with less than the abilities of a Willis to believe the earth is warmer than it is at present, and/or was cooler in the past.

    I know you said, W.E., that you don’t assert wrongdoing. In light of the direction of the systematic adjustments, I certainly do, and enough circumstantial evidence (from “adjustments”, to “missing Ms”, to bogus sensor sitings, to cynical starting points or intervals, to “interpolations”, to “extrapolations”, to “dog ate my raw data”) has been collected to prove scientific and criminal wrongdoing beyond a reasonable doubt.
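The “random versus one-way” test this comment describes is trivial to run once an adjustment series is in hand. A sketch (the adjustment values in the test are invented; real ones would come from the published files):

```python
def sign_balance(adjustments):
    """Count warming vs cooling adjustments.  For unbiased corrections
    the two counts should be roughly equal; a lopsided split suggests a
    systematic, one-directional error rather than random noise."""
    warming = sum(1 for a in adjustments if a > 0)
    cooling = sum(1 for a in adjustments if a < 0)
    return warming, cooling
```

A formal version would apply a binomial sign test, but even the raw counts make the asymmetry visible.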

  23. OT: Monbiot predictably has a piece gloating about the Sunday Times cave in.

    One section intrigued me though:

    “North was right to point out that the IPCC should not have relied on a report by WWF for its predictions about the Amazon. Or he would have been right if it had. But it hadn’t. The projection was drawn from a series of scientific papers by specialists in this field, published in peer-reviewed journals, some of which are referenced in the first section of the IPCC’s 2007 report (pdf).”

    The only link is not to any specific papers but to a pdf of the whole IPCC chapter, at the end of which are hundreds of citations. But searching the chapter for “40%” brings up, as far as I can see, only passages dealing with quantities concerning clouds and precipitation – not any proportion of the rainforest supposedly at risk.

    Has Monbiot been deliberately vague because there is nothing which would clearly back up the IPCC’s claim that up to 40% of the Amazon rainforest is in danger from CC? Surely if there had been a killer quote Monbiot would have used it with relish.

  24. re: Limbooloolarat

    Actually it’s probably more like “larvo”, since they somehow get “arvo” out of “afternoon”. Of course, with all the bikini-clad maidens thereabouts, I’d be pretty confused too. ;)

  25. “Is it reasonable that when there is only one raw dataset for a period, like 1944–1948 and 1995–2009, the “combined” result is different from that single raw dataset?”

    I think under certain conditions this could be reasonable. Let’s say there’s dataset A and dataset B. They overlap for part of the time. When they overlap, dataset A is consistently a degree higher than dataset B. So then when they combine them into one dataset, it’s reasonable to lower A a half degree when it is by itself, and raise B a half degree when it is by itself, and average them when they are both present.

    However, I do not know if this is the situation with the example in the post.
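    In code, that half-offset scheme might look something like this (a toy sketch with made-up numbers, not GISS’s actual method):

```python
import numpy as np

def combine_half_offset(a, b):
    """Combine two overlapping station series (NaN = missing).

    Split the mean overlap difference evenly between the two series,
    then average the offset-corrected series year by year.
    """
    overlap = ~np.isnan(a) & ~np.isnan(b)
    bias = np.mean(a[overlap] - b[overlap])  # e.g. +1.0 if A runs a degree warm
    a_adj = a - bias / 2.0                   # lower A by half the bias
    b_adj = b + bias / 2.0                   # raise B by half the bias
    return np.nanmean(np.vstack([a_adj, b_adj]), axis=0)

# Toy series: B reads exactly one degree cooler than A over the overlap.
a = np.array([15.0, 16.0, 17.0, np.nan, np.nan])
b = np.array([np.nan, 15.0, 16.0, 17.0, 18.0])
combined = combine_half_offset(a, b)  # -> [14.5, 15.5, 16.5, 17.5, 18.5]
```

    Where only one series has data, the combined result is just that series shifted by half the bias, which is exactly the situation the question above asks about.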

  26. AleaJactaEst says (June 24, 2010 at 2:05 am): “Breaking news from Ozland, Prime Minister Kevin Rudd has stepped down in the face of an internal Labour leadership vote……”

    Buh-bye!

  27. Cement a friend says:
    June 24, 2010 at 2:19 am

    Willis did you look at the actual raw data of the Laverton RAAF Vic. station number 087031 monthly max http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_nccObsCode=36&p_display_type=dataFile&p_startYear=&p_stn_num=087031
    monthly min http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_nccObsCode=38&p_display_type=dataFile&p_startYear=&p_stn_num=087031

    Yes. It is the average of those two datasets (max and min) which is shown in yellow in Fig. 5.
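    In other words, with toy numbers (not the actual BoM values), the plotted mean is just the midpoint of the two series:

```python
import numpy as np

# Monthly mean max and min temperatures (deg C) -- made-up illustrative values.
tmax = np.array([25.8, 25.9, 24.1])
tmin = np.array([14.6, 14.8, 13.3])

# The mean temperature series is the simple midpoint of max and min.
tmean = (tmax + tmin) / 2.0  # -> [20.2, 20.35, 18.7]
```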

  28. JohnH says: “Off topic but news of a more selatious (sic) type…”

    Yes, it’s off topic here. And, yes, salacious. And ad hominem, with no relation to Gore’s already-nonexistent veracity in matters of science. This is more of a Weekly World News item than a WUWT factoid. Much as I abhor the Goracle, this may very well turn out to be akin to “Aliens Abduct Michael Jackson.” I wouldn’t pin any hopes on this as relevant.

    Reply: I’ve trashed it. ~ ctm

  29. I appreciate the analysis in this post–it shows that even when mysterious “adjustments”
    don’t favor warming, there are still data handling and integrity issues to be considered.

    Climate science is only as good as the underlying data.

  30. Willis, this becomes repetitive from you. If you were interested in explaining the algorithms to your readers, you could. They’re quite simple; there is no need to leave them as some mystery. As for the questions of whether the results are reasonable:

    for the purposes of GISTEMP, the absolute values of the combined record don’t really matter. What’s important are the trends, as what you’re building towards is a spatially averaged set of anomalies.

    As for the GISS adjustment, we’ve been through this before. Essentially, the one-legged trend adjustment attempts to eliminate the effect of this station from the spatial mean, by forcing it to have the same trends as its rural neighbors. That’s all there is to it. So if this station had for whatever reason a cooler apparent trend than its rural neighbors in the early part of the record, then it will be adjusted so that its trend during that time better matches the neighbors. The effect of the UHI adjustment (on the regional or global mean) should be roughly the same as simply tossing out all the urban stations. Actually, that’d be a nice calculation to do, to check…

    Is the adjustment crude? You bet. Do there exist much more sophisticated ways of going about it? Sure. But instead of periodically bringing up an example of a GISS UHI adjustment, saying it looks weird, and doing nothing to explain why the adjustment did what it did, you could try to progress the discussion by giving an overview of what the UHI adjustment tries to do, and how it does it.

    The net effect is to pretty much remove the impact of an urban station from the final result. Or at least, that’s the idea. If you don’t understand that context, then of course some of the individual adjustments will look odd – especially if you don’t do the legwork of compiling the neighbors, and doing the comparison between that station and the neighbors.
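    As a concrete illustration of why the absolute values don’t matter for trends (a toy sketch, not GISTEMP itself): converting a series to anomalies subtracts a constant baseline, and a constant offset cannot change the fitted trend.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1950, 2000)
# Synthetic station: 14 C base, +0.01 C/yr trend, some noise.
temps = 14.0 + 0.01 * (years - 1950) + rng.normal(0.0, 0.2, years.size)

# Anomalies relative to a 1961-1990 baseline mean.
baseline = temps[(years >= 1961) & (years <= 1990)].mean()
anoms = temps - baseline

slope_abs = np.polyfit(years, temps, 1)[0]
slope_anom = np.polyfit(years, anoms, 1)[0]
# The slopes agree to machine precision: baseline choice only shifts the intercept.
assert abs(slope_abs - slope_anom) < 1e-9
```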

  31. “Is this reasonable?”

    In a (weak) defense, there is an awful lot of data for GISS to check to see whether the results of their adjustments look reasonable. But the counter-argument is: which stations has GISS actually checked to see whether their adjustments produce reasonable results?

    Again, it comes down to both good basic science (fact checking the data) and good computer programming practices (testing the results). Neither appears to have been done in this case.

  32. @ Jack Simmons says:
    June 24, 2010 at 1:47 am

    “Imagine a DA bringing speeding charges against you. These are serious charges and could result in felony charges involving words like “criminal indifference to human life” or “speeding through a school zone”.

    “Your defense attorney asks what is the evidence.

    “He is informed there are six different readings from two police jurisdictions. These readings are from at least six different radar guns, all with a history of giving, on occasion, spurious results. Because of these technical difficulties, the data from these readings are combined and then homogenized. Both the specific locations of the radar units and the basis of the data adjustments are not available to the defense. In fact, they are not available to anyone, as the records storage areas of the two jurisdictions are in a state of complete chaos. . . . ”

    Do we know for sure that there were multiple thermometers? My impression is that when there are multiple records for one station, there was originally only a single thermometer in most cases. One of the mysteries of temperature data is how in the world different data sets come to exist where there was originally only one instrument.

  33. Typo in my comment; it’s a two-legged adjustment, not a one-legged adjustment.

    Looks like the pivot point here was around 1997. The bit after that might be an over-fit by the algorithm; it can do that when it places the pivot very close to the beginning or end of the record; one would have to see the composite of the rural (or rather, dark-at-night) neighbors to see what was going on.

  34. Geoff Sherrington says:
    June 24, 2010 at 5:24 am

    The University of Melbourne has done actual measurements of Melbourne UHI, as reported at http://www.earthsci.unimelb.edu.au/~jon/WWW/uhi-melb.html

    If this work is correct, and it does not seem to have received objections, the top map shows Laverton in the second or third highest colour zone, with a 6-6.5 degree UHI in 1985-94 Winter (JJA) Mean Minimum.

    I love the bi-polar nature of the AGW adherents. Half the time they claim that there is no UHI, the other half of the time they say there is UHI but it doesn’t matter because the records are adjusted for it.

    This is another point that should be obvious to anyone who has driven into or out of a city …

  35. I love how people try to catch Willis in errors all the time, and he just calmly replies, stuffing the evidence he has obviously already presented down their virtual throats, and doing it so eloquently and politely.

    As usual Willis and Anthony, well done well done indeed.
    Here’s a thought: in Seinfeld, there is a humorous scene where Elaine screams “The dingo ate my babyyyyyyy!”
    Maybe the Dingo ate the evidence?

    :-)

  36. carrot eater says: June 24, 2010 at 11:20 am
    for the purposes of GISTEMP, the absolute values of the combined record don’t really matter. What’s important are the trends, as what you’re building towards is a spatially averaged set of anomalies.

    How do you determine a trend without absolute values?

    As for the GISS adjustment, we’ve been through this before. Essentially, the one-legged trend adjustment attempts to eliminate the effect of this station from the spatial mean, by forcing it to have the same trends as its rural neighbors.

    You’re assuming there are rural stations used in the spatial adjustment. As we’ve seen in the USA, the rural stations have been dropped.
    http://chiefio.wordpress.com/

  37. carrot eater says:
    June 24, 2010 at 11:20 am

    Willis, this becomes repetitive from you. If you were interested in explaining the algorithms to your readers, you could. They’re quite simple; there is no need to leave them as some mystery.

    carrot eater, this objection becomes repetitive from you. I guess I’m not being sufficiently clear, it’s not the first time that I think I’ve explained something and someone doesn’t understand what I’m trying to say. Let me see if I can clarify it.

    My point is not the details of the adjustment algorithms. They are not at issue, and they are public record that anyone can look up.

    My point is whether the algorithms provide reasonable and defensible results.

    As for the questions of whether the results are reasonable:

    for the purposes of GISTEMP, the absolute values of the combined record don’t really matter. What’s important are the trends, as what you’re building towards is a spatially averaged set of anomalies.

    Unfortunately, the GISTEMP records are used for many, many more things than a “spatially averaged set of anomalies”. In addition, why would you want to excuse a procedure that messes with the absolute values?

    But even just considering the trends, the algorithm in this case (and many others) makes the trend less accurate, not more accurate.

    As for the GISS adjustment, we’ve been through this before. Essentially, the [two]-legged trend adjustment attempts to eliminate the effect of this station from the spatial mean, by forcing it to have the same trends as its rural neighbors. That’s all there is to it. So if this station had for whatever reason a cooler apparent trend than its rural neighbors in the early part of the record, then it will be adjusted so that its trend during that time better matches the neighbors. The effect of the UHI adjustment (on the regional or global mean) should be roughly the same as simply tossing out all the urban stations. Actually, that’d be a nice calculation to do, to check…

    Is the adjustment crude? You bet. Do there exist much more sophisticated ways of going about it? Sure. But instead of periodically bringing up an example of a GISS UHI adjustment, saying it looks weird, and doing nothing to explain why the adjustment did what it did, you could try to progress the discussion by giving an overview of what the UHI adjustment tries to do, and how it does it.

    What does it matter “why the adjustment did what it did”? The question is whether the result is correct, not whether the adjustment had an unhappy childhood …

    You seem to think that if the algorithm does what it was designed to do, that the results are perforce reasonable, and thus the case is closed.

    I hold that despite doing what it is supposed to do, the result of the adjustment makes absolutely no sense. It’s not making a proper adjustment for anything.

    The net effect is to pretty much remove the impact of an urban station from the final result. Or at least, that’s the idea.

    Whoa, whoa, whoa. Do you hear what you are saying? “Or at least, that’s the idea”??? That kind of handwaving doesn’t cut it in the real world. It’s like saying “the net effect of the blowout preventer is to shear the drill pipe and cap the well. Or at least, that’s the idea” …

    In this case, it increases, not removes but increases, the impact of UHI on this station from 1944 to 1997 … and I hardly think “that’s the idea”.

    If you don’t understand that context, then of course some of the individual adjustments will look odd – especially if you don’t do the legwork of compiling the neighbors, and doing the comparison between that station and the neighbors.

    “Look odd”? I’m not saying they look odd. I’m saying that the adjustments are wrong, that they don’t make sense, that they don’t pass the smell test, that they do the opposite of what they are designed to do. “Odd” is not a synonym for “incorrect”.

    And I’m not interested in making billion dollar decisions based on an algorithm that makes “odd” individual adjustments. “Kinda good enough some of the time” may be fine for you. When it comes to hugely important public policy decisions, it is totally inadequate in my book.

    I hope that clears up the confusion.

    w.

  38. Willis,
    None of that forwards your argument at all.

    You come along, eyeball some adjustments, and declare them wrong. Based on… what? The only way it’s “wrong” for the purposes here is if the adjusted record has long term trends which are unlike the composite of the rural neighbors. Have you done that comparison? I don’t see that you have.

    The GISS adjustment does one thing. It’s very clear what it does, and why it does it. Under the hypothesis that urban stations may have spurious long-term trends, it takes a broad brush and gives it the same trends as the rural neighbors. That’s all the meaning the adjustment has. That adjustment may be up, down, whatever. It just makes it so that the urban stations have little effect on the long term trends of the spatial averages. Your readers would be better served if you actually made that clear.

    Would you be happy if GISS just tossed out all urban stations? If so, then you shouldn’t have a problem with these crude adjustments.

    The point is that you use something that’s appropriate for your needs. If you want a temperature record that is a careful reconstruction of what that location’s history would have looked like, if it weren’t for station moves, instrument changes, etc, etc, then GISS is simply not where you go. GISS does not do that, it does not pretend to do that. So if that’s what you want, don’t go to GISS. GISS makes an adjustment suitable for its purposes.

  39. There is a town called Laverton in outback Western Australia as well. This could be the picture in Anthony Watts’s post. Perhaps the inconsistencies are due to confusion between the two?

    [reply] See update and Willis Eschenbach’s followup post. RT-mod

  40. BTW – the data from 1999–2003 perfectly matches the data for Moorabbin Airport, 34 km away.

  41. I haven’t looked at their data for this location, but the Australian BoM does something that’s probably more akin to what Willis seems to want – painstaking adjustments with manual human judgment, guided by field notes about station moves, etc.

    If that’s what you want, then that’s where you go. You don’t go to GISS. Different records are created for different purposes, so you have to know if the record you’re looking at is appropriate for whatever it is you are trying to do. The point of GISTEMP is to build up spatially averaged anomalies, to represent regional or global trends. They feel they can do this without making really detailed adjustments, so they don’t. As it is, those adjustments they do make do not have much impact on the global trends, as you can see.

  42. Why the missing data at Laverton RAAF? It was an RAAF base from before WWII; the ATC radar, at least until a few years ago, was still operational.

  43. “carrot eater says:
    June 24, 2010 at 1:43 pm

    The GISS adjustment does one thing. It’s very clear what it does, and why it does it”

    Instead of throwing rocks at Willis here, perhaps you should do some work yourself and show how and why the adjustments made by GISS for THIS station make sense. If the adjustments do make sense, then it should be straightforward to show that to us all.

    Willis is saying that the adjustments look strange. From what I’ve seen so far, I’d have to agree with his conclusions.

    If the adjustment in Fig 4 is meant to be a compensation for the UHI effect, then its shape seems unjustifiable – UHI for this site is surely going to be increasing with time as more and more urbanization creeps around it. Any apparent drop in UHI effect is more likely to be a pointer to the fact that local comparison “rural” stations ain’t so rural any more…

  44. Carrot eater, unfortunately BOM don’t make UHI adjustments for Laverton. It appears NASA do. Who should we believe?

  45. This is funny.
    So what they are saying is: the oldest data is no good, not accurate, and is corrected down.
    But the newest data, from the exact same equipment in the same place, is very accurate, accurate enough that they can apply a computer-generated correction to it.

    Makes sense that you would correct for UHI by raising temperatures, right? And when raising temperatures doesn’t show enough increase, it must mean the oldest data is wrong and must be corrected down …

  46. Thank you, Willis, for the original post, and thank you, carrot eater, for the June 24 / 1:47 pm link to Clear Climate Code showing how GISS calculates the adjustment.

    I thought it would be helpful for other newbies to post how GISTEMP calculates the adjustment, so I have pasted the code comments from /code/step2.py (from carrot eater’s link) that document the calculations at the bottom of this post. (My apologies to the oldies who already know this stuff for the length.)

    I understand the code picks out all the stations classified as rural within a fixed radius (the snippet I found, “d.cslat = math.cos(station.lat * pi180)”, is part of the station-distance calculation), calculates a trend for these, does some data checks, then adjusts the “urban” station with this trend.

    Presumably, the “rural stations” around Laverton Aero showed a stronger warming trend than it already had, so in this case applying the adjustment for UHI causes an even greater warming trend.

    Of course, this brings us back to the central theme of this blog: are the “rural” classifications in GISTEMP accurate and valid? Or, to answer carrot eater’s question: would you be happy if GISS just tossed out all urban stations?

    I would, yes. That is the point of surfacestations.org: to find a large sample of stations with no UHI corruption and see if there really is a significant, widespread trend in temperatures.

    Here are the code comments from step2.py:

    def urban_adjustments(anomaly_stream):
        """Takes an iterator of station records and applies an adjustment
        to urban stations to compensate for urban temperature effects.
        Returns an iterator of station records. Rural stations are passed
        unchanged. Urban stations which cannot be adjusted are discarded.

        The adjustment follows a linear or two-part linear fit to the
        difference in annual anomalies between the urban station and the
        combined set of nearby rural stations. The linear fit is to allow
        for a linear effect at the urban station. The two-part linear fit
        is to allow for a model of urban effect which starts or stops at
        some point during the time series.

        The algorithm is essentially as follows:

        For each urban station:
        1. Find all the rural stations within a fixed radius;
        2. Combine the annual anomaly series for those rural stations, in
           order of valid-data count;
        3. Calculate a two-part linear fit for the difference between
           the urban annual anomalies and this combined rural annual anomaly;
        4. If this fit is satisfactory, apply it; otherwise apply a linear fit.

        If there are not enough nearby rural stations, or the combined
        rural record does not have enough overlap with the urban
        record, try a second time for this urban station, with a
        larger radius. If there is still not enough data, discard the
        urban station.
        """

    def combine_neighbors(us, iyrm, iyoff, neighbors):
        """Combines the neighbor stations *neighbors*, weighted according
        to their distances from the urban station *us*, to give a combined
        annual anomaly series. Returns a tuple: (*counts*,
        *urban_series*, *combined*), where *counts* is a per-year list of
        the number of stations combined, *urban_series* is the series from
        the urban station, re-based at *iyoff*, and *combined* is the
        combined neighbor series, based at *iyoff*.
        """

    def prepare_series(iy1, iyrm, combined, urban_series, counts, iyoff):
        """Prepares for the linearity fitting by returning a series of
        data points *(x,f)*, where *x* is a year number and *f* is the
        difference between the combined rural station anomaly series
        *combined* and the urban station series *urban_series*. The
        points only include valid years, from the first quorate year to
        the last. A valid year is one in which both the urban station and
        the combined rural series have valid data. A quorate year is a
        valid year in which there are at least
        *parameters.urban_adjustment_min_rural_stations* contributing
        (obtained from the *counts* series).

        Returns a 4-tuple: (*p*, *c*, *f*, *l*). *p* is the series of
        points, *c* is a count of the valid quorate years. *f* is the
        first such year. *l* is the last such year.
        """

    def cmbine(combined, weights, counts, data, first, last, weight):
        """Adds the array *data* with weight *weight* into the array of
        weighted averages *combined*, with total weights *weights* and
        combined counts *counts* (that is, entry *combined[i]* is the
        result of combining *counts[i]* values with total weights
        *weights[i]*). Adds the computed bias between *combined* and
        *data* before combining.

        Only combines in the range [*first*, *last*); only combines valid
        values from *data*, and if there are fewer than
        *parameters.rural_station_min_overlap* entries valid in both
        arrays then it doesn't combine at all.

        Note: if *data[i]* is valid and *combined[i]* is not, the weighted
        average code runs and still produces the right answer, because
        *weights[i]* will be zero.
        """

    """Finds a fit to the data *points[]*, using regression analysis,
    by a line with a change in slope at *xmid*. Returned is a 4-tuple
    (*sl1*, *sl2*, *rms*, *sl*): the left-hand slope, the right-hand
    slope, the RMS error, and the slope of an overall linear fit.
    """

    """Decide whether to apply a two-part fit.

    If the two-part fit is not good, the linear fit is used instead.
    The two-part fit is good if all of these conditions are true:

    – left leg is longer than urban_adjustment_short_leg
    – right leg is longer than urban_adjustment_short_leg
    – left gradient is abs less than urban_adjustment_steep_leg
    – right gradient is abs less than urban_adjustment_steep_leg
    – difference between gradients is abs less than urban_adjustment_steep_leg
    – either gradients have same sign or
      at least one gradient is abs less than
      urban_adjustment_reverse_gradient
    """
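    For what it’s worth, here is a rough toy version of that two-part (“two-legged”) fit, my own sketch (a broken line continuous at the knee, trying every interior break point), not the actual step2.py code:

```python
import numpy as np

def two_part_fit(x, f):
    """Fit a broken line to (x, f): one slope left of a knee, another
    right of it, continuous at the knee. Tries every interior knee and
    keeps the one with the lowest RMS error. Also returns the slope of
    a plain one-part linear fit for comparison."""
    sl_overall = np.polyfit(x, f, 1)[0]
    best = None
    for i in range(2, len(x) - 2):        # leave points on both legs
        xmid = x[i]
        left = np.minimum(x - xmid, 0.0)  # hinge basis functions
        right = np.maximum(x - xmid, 0.0)
        A = np.column_stack([np.ones_like(x), left, right])
        coef, *_ = np.linalg.lstsq(A, f, rcond=None)
        rms = np.sqrt(np.mean((A @ coef - f) ** 2))
        if best is None or rms < best[3]:
            best = (xmid, coef[1], coef[2], rms)
    return best + (sl_overall,)

# Urban-minus-rural difference series that flattens after a 1980 pivot.
x = np.arange(1950.0, 2000.0)
f = np.where(x < 1980, 0.02 * (x - 1980), 0.0)
xmid, sl1, sl2, rms, sl = two_part_fit(x, f)  # knee found at 1980
```

    The real algorithm then applies quality checks like the ones listed above (leg lengths, gradient sizes) before deciding whether to use the two-part fit or fall back to the plain linear fit.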

  47. carrot eater:

    I’m surprised that anyone with any analytic acumen would defend GISTEMP and their data suppliers. There is virtually no QC in the data analysis, which is done on time-series often patched together from short stretches of inconsistent data at the same station. Incisive QC routines find egregious offsets of decadal and longer duration in both the “raw” and the “adjusted” series. The basic premise of their homogenization is that a low night-lights station is “rural,” and for every grid-cell, one such station is designated as the “reference,” whose trend all other stations are then forced to mimic. This is trend management of the most obvious subjective kind. And the tendentiousness of their management technique is amply evident in comparing the two versions of the USA48 anomalies. They differ substantially only at the extreme ends of the series, in what is an obvious attempt to maintain a consistent trend throughout the decades, rather than a genuine methodological change, as is advertised.

  48. MikeEdwards:

    If there was a difference in trends between the urban station and the rural neighbors, then the method will try to get rid of them, as dixonstalbert outlines.

    Exactly why the trend was different: this does not come into play. Maybe it was UHI. Maybe it was something else. Maybe it was an artifact of a step change at a station move. Maybe the rural neighbors had, for whatever reason, a higher warming trend than the urban station, so the urban station is adjusted to warm faster.

    The algorithm doesn’t know or care why this would be the case; it just puts its head down and makes the urban stations look like the rural stations.

    So the real question is, is it a good idea to neuter the urban stations in this way?

    Or, put another way, do you think it’s a good idea to just eliminate the urban stations from the sample?

    Because that’s roughly what this method is doing; any long term trends unique to the urban stations are not allowed into the result.

    This is the context that Willis’s posts always miss. Along with the context of how little the adjustment affects the overall result.

  49. janama says:
    June 24, 2010 at 3:11 pm

    Willis – there’s more data for Laverton available here

    this record goes back to 1910 and is the High Quality temperature record used for annual temperature analyses.

    Many thanks, janama. Gotta love more data. Here’s the long-term high quality Australian temperature record, the one used for the analyses, along with the plain vanilla Australian record.

    carrot eater says:
    June 24, 2010 at 3:41 pm

    … The point of GISTEMP is to build up spatially averaged anomalies, to represent regional or global trends. They feel they can do this without making really detailed adjustments, so they don’t. As it is, those adjustments they do make do not have much impact on the global trends, as you can see.

    One of the “adjustments” that GISS has made is to throw away thirty years of Australian data. This may or may not “have much impact on global trends”, but it certainly would have an effect on regional trends.

    Again, let me restate my main point. The field of climate science needs to have one agreed upon method for selecting temperature data, for combining records at the same location, for adjusting for UHI, and for area-averaging records. The problem is not just that the GISS adjustment method may or may not be right. It is that we get very, very different answers if we use the GISS data (adjusted or unadjusted) and the Australian HQ data.

    PS – carrot eater, we know for a fact that UHI exists and is a couple of degrees in many cities. The link from Geoff Sherrington shows that it has been measured and is significant in Laverton Aero. The fact that (as you note) the GISS adjustments for UHI “do not have much impact on the global trends” should give you pause regarding the validity of those adjustments …

    Your link above shows that the GISS UHI adjustments make absolutely no difference to the post-1950 data … is that supposed to impress me? Because it does exactly the opposite. Post-1950 we should see the largest change from any UHI adjustment and the GISS method shows none at all … does that make sense to you?

    Finally, you ask above “Would you be happy if GISS just tossed out all urban stations? If so, then you shouldn’t have a problem with these crude adjustments.” Yes, I’d be happy if GISS did that … but what does that have to do with adjustments that are improper?

    Also, that might not fix it, because the GISS method for distinguishing urban from rural has some big problems, and for the same reason I detailed above, no quality control. They use night-time brightness, but then (as with the UHI adjustments) they neglect the quality control. You can’t just make up a computer method and apply it to every place on the globe. You need to go through every station, one by one, and see if your method makes sense.

    Here are some examples. Srinagar is a city with a population just under a million, a population density of 6,400 people per square kilometre, and a brightness of 6, so it is “rural”. Baghdad has a brightness of 8, as does Jiuquan, China, population one million. Bangui is the capital of the Central African Republic, population half a million, brightness zero. Zero is also the score for Nanchang, China, population four million. … riiiiight. I looked up all of these on Google Earth; the temperature station is inside the city in all cases. And all of them will be used to homogenize other stations …

    Someone had a good idea (use brightness to distinguish urban from rural). But they didn’t detail an intern to actually look up each and every station to make sure that the brightness made sense.

    Sure, carrot eater, if you classify Baghdad and Srinagar as “rural”, no matter what algorithm you use you may see little difference between adjusted and unadjusted data … so?

  50. BOM has several records for Laverton at http://www.bom.gov.au/climate/data/weather-data.shtml
    Untick the box “Only interested in open stations” to see them-
    087031 Laverton RAAF, 087177 Laverton Comparison, 087032 Laverton Salines, 087086 Laverton explosives, 087065 Werribee Research Farm (8.8km away). It’s only 18.4km from BOM Regional Office in Melbourne.
    I’m currently analysing Victorian climate data and should have a post up in a couple of days at
    kenskingdom.wordpress.com
    which will include Laverton. On ya janama and Willis.
    Ken

  51. “Does it make sense that after combining the data, the ‘combined’ result is often colder than any of the five individual datasets?”

    It makes no sense to me, but somehow it does to the CRU team.

    Here are just a couple of examples where not one but two stations have been adjusted (in favour of a warm bias and lowering the 1961-1990 baseline) in Australia by CRU.




    The BOM quality site is a bit of a joke IMO.

    Here they have turned a 100-year flat trend over two stations 12 km apart and 65-odd metres different in elevation into an upward 1.6 degree trend in the Maximums.

    Yet the data shows the earlier station’s minimums were much colder in winter.

    Compare them with a plot of the CRU2010 data, released after Climategate, which for Australia purports (per the Met Office) to be “Based on the original temperature observations sourced from records held by the Australian Bureau of Meteorology”.

    The CRU 2010 data shows a cooling trend from 1950.

    Bear in mind that the Halls Creek station is extrapolated over at least 1 million square km, or 15% of Australia’s land mass.

    More Aus info here

    http://www.warwickhughes.com/blog/?p=510

  52. just a tip for all – the Weatherzone site is the way to access where the sites are physically.
    here’s Laverton
    http://www.weatherzone.com.au/vic/melbourne/laverton
    scroll down and click on “full climatology »” and scroll to bottom of page and you’ll see the exact location of the site 37.8565°S 144.7566°E – remove the degree signs and paste into google earth. It will take you directly to the Stevenson box site.

  53. Willis

    “One of the “adjustments” that GISS has made is to throw away thirty years of Australian data. ”

    Where is that? I must have missed something. GISS just takes what NOAA gives them. If a record is less than 20 years long (I think), they toss it out.

    “The field of climate science needs to have one agreed upon method for selecting temperature data, for combining records at the same location, for adjusting for UHI, and for area-averaging records.”

    NO they absolutely don’t. I couldn’t disagree more. Because there simply isn’t any obvious best way to do any of these things. That’s why there’s value in different groups using different methods with roughly the same data – CRU, GISS, NCDC, the individual countries, and now a whole slew of bloggers as well – you see the effects of different choices in processing, you see what matters, what doesn’t matter. When it comes to hemispheric or global means, it’s remarkable how little processing choices matter; you get about the same results. But it’s still useful to have different people trying different things.

    “It is that we get very, very different answers if we use the GISS data (adjusted or unadjusted) and the Australian HQ data.”

    1. So?
    2. No, they aren’t very, very different.
    3. Differences are going to come up when different groups use different adjustment methods. GISS takes the raw, and then applies its crude adjustment to make it look like the rural stations. The BoM will, on the other hand, sit there with both field notes and statistical methods and try to specifically adjust for each little thing that happened there – station moves, micro-site stuff, whatever. Again, why is there this need for everybody to come to the exact same results? That’s weirdly bureaucratic.

    “Your link above shows that the GISS UHI adjustments make absolutely no difference to the post-1950 data … is that supposed to impress me?”

    The reason I show you that is to demonstrate that the adjustments you are so suspicious of have very little impact on the big picture. It’s something to keep in mind when obsessing over each individual adjustment.

    “PS – carrot eater, we know for a fact that UHI exists and is a couple of degrees in many cities. ”

    Yes. But based on that, you can’t just eyeball a global graph and know how much UHI is in there. I’m sorry, but you just can’t. You have to do some analysis. One simple thing to do is exactly what GISS does: not let any urban station carry its own trend into the analysis. The trends in GISTEMP are driven by the rural stations.

    “Yes, I’d be happy if GISS did that … but what does that have to do with adjustments that are improper?”

    It has everything to do with it, because that’s essentially what the GISS adjustment accomplishes. I don’t understand why this is so difficult to grasp, but it’s fundamental to understanding what GISS does. If you would be happy with GISS dropping the rural stations, then you can’t be upset with them doing what they do now.

    Anyway, if you want to see what happens, you can go use ccc code to see what GISTEMP does when you do eliminate all the urban stations. I think Ron Broberg has posted such results.

    As for your point about using light/dark for urban/rural in poor countries: that’s a reasonable objection, on the face of it, but separate from the point here. I think they think nightlights scale with energy usage, and UHI scales with energy usage. I’m not convinced, since UHI has as much to do with materials of construction, obstructions to convection and changes in surface moisture, as it does anything else, and you can have all these things in a poor country that’s dark at night. So without doing or seeing further analysis on poor countries, I’m not convinced by their choice there.

  54. carrot eater says:
    June 24, 2010 at 3:41 pm “I haven’t looked at their data for this location, but the Australian BoM does something that’s probably more akin to what Willis seems to want – painstaking adjustments with manual human judgment, guided by field notes about station moves, etc”.

    Reply. (1). Your earlier assertion that trend is more important than accurate absolute value is simply wrong. Ultimately, there is a national average, then a global average calculated. These are independent of trend. They are entirely dependent upon the accuracy of the temperature measurement.

    Reply (2). Yes, the BoM does the things you state. Here is one result derived from the 100 years of annual average data from Laverton. It deals with the number of times a value appears among those 100. Let’s start with the whole numbers and half numbers.

    13 deg 1 time
    13.5 deg 3
    14 deg 3
    14.5 deg 4
    15 deg 2

    But, looking at intermediate values, we have

    13.8 deg 13 times
    14.1 deg 10
    14.3 deg 10

    So, only 3 values account for a third of the numbers reported and there is a tendency to avoid whole and half numbers. Does this seem like a natural data set? Emphatically no. It looks “adjusted”. If the adjustment is this far askew, what else is askew?
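    The tally above is easy to reproduce for any annual series. A minimal sketch of the same frequency count, assuming the 100 annual means sit in a plain list — the values below are made up to match the counts quoted in the comment, they are NOT the actual Laverton data:

```python
from collections import Counter

# Hypothetical annual mean temperatures (illustrative only, not BoM data),
# to 0.1 degC as the published annual averages are.
annual_means = (
    [13.8] * 13 + [14.1] * 10 + [14.3] * 10     # the three suspicious modes
    + [13.0] * 1 + [13.5] * 3 + [14.0] * 3      # whole and half degrees
    + [14.5] * 4 + [15.0] * 2
    + [13.6, 13.7, 13.9, 14.2, 14.4, 14.6] * 9  # the rest, spread thinly
)

counts = Counter(round(t, 1) for t in annual_means)

# What share of the 100 values falls on the three most common readings?
top3 = counts.most_common(3)
top3_share = sum(n for _, n in top3) / len(annual_means)
print(top3, top3_share)
```

    With these illustrative numbers, the three modes cover 33 of the 100 values — the “one third” the comment points to. Whether that distribution is natural is exactly the question being put to carrot eater.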

    carrot eater, I’m calling you out. Let’s use Laverton as an example. Please give your ideas on why the original BoM data, as read from the primary records, are suppressed in favour of the adjusted values from 1910 as referenced above.

    Please explain how one can recover from the BoM, the original data, plus the time periods when adjustments were made, plus the magnitude of those adjustments. Please state why there are missing values as I posted above, and how they were infilled.

    Remember, this is before the data are exported for GISS to play with.

  55. “Reply. (1). Your earlier assertion that trend is more important than accurate absolute value is simply wrong. Ultimately, there is a national average, then a global average calculated. These are independent of trend. They are entirely dependent upon the accuracy of the temperature measurement.”

    I don’t think you understood what I mean. A temperature series that goes

    1 1 2 2 1

    is, so far as GISS is concerned, the same as a temperature series that goes

    2 2 3 3 2

    That’s what I mean that absolutes don’t matter, when what you’re ultimately calculating are anomalies. Trends are what matter. And the process of GISTEMP doesn’t calculate national averages. The grid boxes don’t know or care about political boundaries.

    As for the rest of it, I’m not interested in numerology.
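    The point about the two toy series can be made concrete. A minimal sketch, using the numbers from the comment, of why a constant offset drops out once you anomalize:

```python
def anomalies(series):
    """Subtract the series' own mean, leaving only the deviations."""
    base = sum(series) / len(series)
    return [round(x - base, 6) for x in series]

a = [1, 1, 2, 2, 1]
b = [2, 2, 3, 3, 2]  # the same series shifted up by a constant 1 degree

# The constant offset vanishes: both series give identical anomalies,
# so any trend computed from the anomalies is identical too.
print(anomalies(a))  # [-0.4, -0.4, 0.6, 0.6, -0.4]
print(anomalies(b))  # same
```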

  56. I’ve come across a letter of mine published in The Australian, our national newspaper, on 25 Sept 2007. It still seems pertinent, in the light of data concerns which have arisen since 2007, and particularly as the new Prime Minister Julia Gillard has acknowledged the lack of a popular consensus on AGW policies.

    “Letter to the Editor, The Australian. Published 25/9/07 (lead letter)

    Climate change is the natural condition of the Earth. Several questions must be answered before policy decisions are made to attempt to modify the rate of change.

    Is the Earth really warming at an unusual rate? If so, is this a problem and what are the costs and benefits of climate change? What is the cause of any change and can policy-driven human actions significantly affect the future rate of change and the level of global temperature? What are the costs and benefits of such actions? Are they worthwhile?

    Clearly, no one can decisively answer all of these questions. All policy proposals are based on a high degree of ignorance and uncertainty, which should be recognised. Yet none of the questions were addressed [in an op-ed piece by a government minister]. It would be better to focus our efforts on developing a clearer understanding of climate change than on pursuing ad hoc and disparate measures, many of which will clearly not be cost-effective.

    Michael Cunningham, West End, Qld”

    I think that Gillard should use this as a starting point.

  57. Speaking as a scientist, just looking at the graph displayed at “Willis Eschenbach says: June 24, 2010 at 5:53 pm” above, I think that anyone who draws a straight line through that data set as displayed is either heroic or has rocks in their heads.

    You can fiddle with it as much as you like, but until you have another hundred years of data or can explain the fluctuations, all you can reliably say is that temperature goes up and temperature goes down. I mean, just look at those huge drops. What are they all about? I doubt it’s the UHI effect.

    Now you can give that data to NASA to have a fiddle with, but first read Case Study 12 of D’Aleo and Watts (“show this to Jim then hide it”), which shows that it is apparently standard practice to alter “raw data” and then delete the original. When you’ve digested that, read https://wattsupwiththat.com/2010/03/30/nasa-data-worse-than-climate-gate-data-giss-admits/#more-17958 where NASA says its data is even worse than the Uni of East Anglia’s, for heaven’s sake. Then any output, “homogenised” or otherwise, is just not credible, however you tart it up.

    I guess I’m addressing this to various people out there who eat vegetables.

  58. carrot eater says:
    June 24, 2010 at 8:54 pm “As for the rest of it, I’m not interested in numerology.”

    You are caught out badly on selective quotation. Whereas you state “And the process of GISTEMP doesn’t calculate national averages” my statement was “Ultimately, there is a national average, then a global average calculated.” I did not limit my comment to GISTEMP. Would you care to answer how an anomaly temperature is calculated if not from the mean of absolute numbers, whose accuracy is vitally important as a base, irrespective of trend?

    Of course accuracy matters. Only a novice would argue otherwise.

    Thank you for taking up the challenge. You do not score points for telling people to do the near impossible, then failing to show that you can do it yourself.

    I’m not dealing with numerology in the sense of predicting horse races from past patterns of winners. I’m talking about a natural number set that ought to have an explainable distribution. This one does not. Why, oh wise carrot eater?

  59. Geoff Sherrington:

    “Would you care to answer how an anomaly temperature is calculated if not from the mean of absolute numbers, whose accuracy is vitally important as a base, irrespective of trend?”

    Seriously? Maybe I’m misunderstanding what you are saying, but this looks like an elementary misunderstanding of how anomalies are calculated. The only time the absolute numbers are averaged together is to find the monthly means at any given location from the daily observations – something that happens before the data gets to NOAA or GISS or CRU.

    Regardless of how you combine the stations – RSM, CAM or FDM – you aren’t simply averaging together absolute values from different locations.

    If I’m not misunderstanding you, and this is actually a point of contention, then I would have you work out a simple example of how you think anomalies are calculated. Start with my silly example of
    1 1 2 2 1
    and
    2 2 3 3 2
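    For readers unfamiliar with the acronyms: RSM is the reference station method, CAM the common anomalies method, FDM the first difference method. A minimal sketch of the FDM idea, with toy numbers (not real station data) and assuming two complete records from the same site:

```python
def first_differences(series):
    # Year-to-year changes; the absolute level drops out entirely.
    return [b - a for a, b in zip(series, series[1:])]

rec1 = [14.0, 14.2, 14.1, 14.5]
rec2 = [15.1, 15.3, 15.2, 15.6]  # same site, instrument reading 1.1 high

d1 = first_differences(rec1)
d2 = first_differences(rec2)

# Average the year-to-year changes, then cumulate to rebuild a combined
# series anchored at an arbitrary zero (FDM never recovers absolutes).
avg = [(x + y) / 2 for x, y in zip(d1, d2)]
combined = [0.0]
for d in avg:
    combined.append(combined[-1] + d)
print(combined)  # roughly [0.0, 0.2, 0.1, 0.5]
```

    The 1.1-degree disagreement between the instruments never enters the result, which is the sense in which trend, not absolute value, is what these methods preserve.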

  60. Phil says:
    June 24, 2010 at 11:32 am

    Do we know for sure that there were multiple thermometers? My impression is that when there are multiple records for one station, there was originally only a single thermometer in most cases. One of the mysteries of temperature data is how in the world different data sets come to exist where there was originally only one instrument.

    Phil,

    You are right, of course. There probably was only one thermometer. But, as you observed, why the different data sets? Until these things are explained, we should toss the whole thing and start over.

    In any event, when looking at the combined datasets from a global perspective, the CO2 induced warming theory collapses.

    See http://www.climate4you.com/ClimateReflections.htm#20080927:%20Reflections%20on%20the%20correlation%20between%20global%20temperature%20and%20atmospheric%20CO2

  61. This adjustment process reminds me of an old joke:

    A man goes into a butcher shop and orders 5 pounds of ground beef.
    The butcher puts some on the scale, which then reads 5 pounds. But the customer says, “Hey, get your thumb off the scale!” The butcher says, “Oops, looks like it’s just 3 pounds.” Then the customer says, “Now get your other thumb off the scale!” The butcher complies and the scale reading drops to 1 pound. Finally the customer says, “Now get your belly off the scale.” The butcher complies again, then looks up sheepishly and says, “Well, what do you know, there ain’t no meat!”

    Maybe there ain’t no temperature anomaly either.

  62. Carrot eater
    Willis has shown the UHI adjustments. I have given you the population growth. Only a true believer would say they can be reconciled. I do not know what the UHI adjustment needs to be, as an order of magnitude, but I know that it needs to be a downward one and needs to have grown over the years in proportion to some function – maybe logarithmic, maybe not – of the population, with something in the mix for additional heat-generating energy use in the area. It doesn’t. So it MUST be wrong. It doesn’t have a chance of being right. It’s no good arbitrarily dividing the stations into urban and rural, as each station has its own unique population growth characteristics. When climate scientists start making a serious effort to estimate the effect, rather than indulging in what you call numerology in a spurious effort to show there is no such thing as UHI, then we might have a better global record.
    By the way, I think there is global warming, and that anthropogenic CO2 plays a part, but I do not think you or anyone else has a handle on how much.
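    The “some function of population” idea has a long pedigree; Oke’s classic surveys found maximum UHI growing roughly with log10 of population. A sketch of what such a relation implies, with placeholder coefficients chosen for illustration — they are assumptions, not calibrated values, and not anything GISS or BoM uses:

```python
import math

def uhi_max_estimate(population, a=3.0, b=-6.8):
    """Illustrative UHI ceiling in degC, rising with log10(population).
    Coefficients a and b are placeholders, not calibrated values."""
    return max(0.0, a * math.log10(population) + b)

# Under this toy relation, every tenfold population increase adds the
# same constant increment (a = 3.0 degC per decade of population):
for pop in (10_000, 100_000, 1_000_000):
    print(pop, round(uhi_max_estimate(pop), 2))
```

    The point of the sketch is the shape, not the numbers: if UHI grows with population, a station whose town grew steadily should carry a steadily growing warm bias, which is the thing the commenter says the adjustments fail to remove.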

  63. I think I agree with you David S at 8:30. Not much anomaly.

    I don’t eat too many carrots, but my eyesight is good enough that when I look at the graph presented in the comments above, I don’t see a straight line, I see a sawtooth. Since that does not seem to correlate too well with CO2, I think I can assume it’s natural. OK, so there is a slight rise in the sawteeth, but that goes way back to 1910, and I think one could assume that it goes right back to the minimum at the end of the Little Ice Age. That doesn’t correlate with CO2 either, so that must be natural too.
    “Homogenised” temperature graphs that still show these two natural trends are dishonest. Pure and simple.

    So here’s the challenge: take the temperature graph, remove all the trends that are natural, and show me what the residual is that is “unnatural.” Or are we supposed to be stopping natural climate change too now? If that’s the case, explain to me again: how, if the two dominant trends are clearly independent of CO2, is stopping CO2 emissions going to turn these trends around?

    When I put my geologist’s hat on, I find that this religious fervour to stop species extinction, stop sea level rise, stop climate change, and, I suppose, ultimately stop plate tectonics is, well, silly. It’s as if the current moment is so special, because it is graced with our own presence, that it must be preserved intact for the future to enjoy too. Geology just doesn’t work like that.

  64. AC of Adelaide (11:04pm):

    You’re spot-on in pointing to strong, low-frequency, natural variations (evident in century-long temperature records from many warm- and temperate-climate stations) as counterindications of inherent linear trends. But linear regression is just about all the analytic expertise in data analysis that most climate scientists possess. Thus they fit linear trends opportunistically to woefully short stretches of record and pretend that they are secular features. Little do they realize how volatile multidecadal trends really are when the power spectrum of the temperature signal is dominated by multidecadal- and centennial-scale oscillations. And when there are substantial uncertainties in datum level, as with stitched-together records, linear trends become particularly suspect as a metric, because they are quite sensitive to data values at both ends of the record. Linear trends fitted over arbitrarily chosen time intervals are not very meaningful and certainly cannot be projected into the future.

    Those who may lack enough protein in their diet fail to understand that “anomalization” of data scarcely provides an antidote to datum uncertainty. While it does not change the apparent trend, the reliability of the anomalies themselves depends on having an ACCURATE ABSOLUTE value of the mean temperature over the base period. Those of us experienced in analyzing station records realize that even genuinely rural stations can show spurious trends due to inconsistent datum levels.

    What makes trend-managing homogenization particularly onerous, however, is that in many regions of the world there are no rural records in the GHCN data base. Thus darkly lighted cities, often major ones, become the nominal “rural” stations in GISTEMP analysis. The fact that homogenization of megacities makes little difference in the trends obtained is scarcely evidence of insignificant UHI corruption of the anomaly time-series.

    Sadly, where reliable rural station records are quite plentiful, they are being arbitrarily altered in the name of homogenization. USHCN Version 2 is a farce.

  65. Looks like we have two David S’ again. So I’ll go back to being David S the 1st.

    [One of the idiosyncrasies of WordPress; two people can have the same nickname. ~dbs, mod.]

  66. carrot eater says:
    June 24, 2010 at 8:54 pm (Edit)

    “Reply. (1). Your earlier assertion that trend is more important than accurate absolute value is simply wrong. Ultimately, there is a national average, then a global average calculated. These are independent of trend. They are entirely dependent upon the accuracy of the temperature measurement.”

    I don’t think you understood what I mean. A temperature series that goes

    1 1 2 2 1

    is, so far as GISS is concerned, the same as a temperature series that goes

    2 2 3 3 2

    That’s what I mean that absolutes don’t matter, when what you’re ultimately calculating are anomalies. Trends are what matter. And the process of GISTEMP doesn’t calculate national averages. The grid boxes don’t know or care about political boundaries.

    As for the rest of it, I’m not interested in numerology.

    carrot eater, you give an example of two temperature series:

    1 1 2 2 1

    You say that as far as GISS is concerned, this is the same as

    2 2 3 3 2

    However, this is only true for the procedure of making area (gridcell) averages. It is definitely not true when combining records at the same site.

    This is because when GISS makes area averages, they are concerned (as you say) only with anomalies, and their end result is an anomaly. When they combine records at the same site, however, they do not end up with anomalies — they end up with absolute temperatures.

    Which makes their combining of the Laverton Aero records most curious. As I pointed out, they end up with a combination that at times gives an intermediate value somewhere in between all available records, and at other times gives a value that is below all of the records.

    Since you claim that you know how they do the combining, perhaps you could step through the combination process for the Laverton Aero records, and show us how it is done? Because I’ve tried what I think is their method, and I can’t get an answer that looks like what they end up with.

    Finally, as I said, GISS is not using the Australian data from 1910 to 1944 … what is your explanation for that?
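    For readers following along, here is a minimal sketch of the kind of offset-matching combination the Hansen/Lebedeff reference-station approach describes — toy numbers, and my own simplification, not the actual GISTEMP code:

```python
def combine(longest, other):
    """Shift `other` by its mean difference with `longest` over the
    overlap (None = missing month), then average where both exist."""
    pairs = [(a, b) for a, b in zip(longest, other)
             if a is not None and b is not None]
    offset = sum(a - b for a, b in pairs) / len(pairs)
    out = []
    for a, b in zip(longest, other):
        if a is not None and b is not None:
            out.append((a + b + offset) / 2)   # average of adjusted records
        elif a is not None:
            out.append(a)
        else:
            out.append(b + offset)
    return out

rec_long  = [14.0, 14.2, None, 14.4]
rec_short = [None, 14.7, 14.6, 14.9]  # reads 0.5 high over the overlap

result = combine(rec_long, rec_short)
print(result)  # roughly [14.0, 14.2, 14.1, 14.4]
```

    Note the property this sketch makes visible: at every point, the merged value is either one record or an average of the offset-adjusted records, so it can never fall below all of the adjusted inputs. That is exactly why a combined value sitting below every available record is so hard to reproduce.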

  67. carrot eater says”If I’m not misunderstanding you, and this is actually a point of contention, then I would have you work out a simple example of how you think anomalies are calculated. Start with my silly example of
    1 1 2 2 1
    and
    2 2 3 3 2″

    Yes, carrot eater, you are misunderstanding me, I think intentionally. Before we get back to the original topic, I’ll reply to your bait-and-switch this way:

    I object to climate scientists who see similarity in these two number series:

    1 1 2 2 1

    and what we have in practice,

    1 (maybe 3), 1 (maybe minus 2), 2 (maybe 2 +/- 1), 2 (maybe missing data), 1 (interpolated from point 100 km away).

    The discussion is about ACCURACY, not trend. Without accurate point readings, your trend is inaccurate. Now be a good lad and address the topic, restated verbatim from above:

    “Let’s use Laverton as an example. Please give your ideas on why the original BoM data, as read from the primary records, are suppressed in favour of the adjusted values from 1910 as referenced above.

    “Please explain how one can recover from the BoM, the original data, plus the time periods when adjustments were made, plus the magnitude of those adjustments. Please state why there are missing values as I posted above, and how they were infilled.”

    To which I might add, how accurate is an infill of missing data?
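    One common infilling idea is to borrow a neighbouring station’s value, shifted to the target’s level; the error is then bounded by how well the two stations actually co-vary. A minimal sketch of that idea with made-up numbers — this is illustrative only, not a claim about what the BoM actually does:

```python
def infill(target, neighbour):
    """Fill gaps (None) in `target` with the neighbour's value, shifted
    by the mean target-neighbour difference where both report."""
    pairs = [(t, n) for t, n in zip(target, neighbour) if t is not None]
    shift = sum(t - n for t, n in pairs) / len(pairs)
    return [n + shift if t is None else t
            for t, n in zip(target, neighbour)]

target    = [14.0, None, 14.4, 14.2]
neighbour = [13.5, 13.8, 13.9, 13.7]  # consistently 0.5 cooler

filled = infill(target, neighbour)
print(filled)  # the gap becomes roughly 14.3
```

    The infill is only as good as the assumed constant relationship between the stations — which is why the question of how accurate an infill is, and whether infilled values are flagged, matters.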

Comments are closed.