BoM’s bomb on station temperature trend fiddling

From Jo Nova: BOM finally explains! Cooling changed to warming trends because stations “might” have moved!

It’s the news you’ve been waiting years to hear! Finally we find out the exact details of why the BOM changed two of their best long term sites from cooling trends to warming trends. The massive inexplicable adjustments like these have been discussed on blogs for years. But it was only when Graham Lloyd advised the BOM he would be reporting on this that they finally found time to write three paragraphs on specific stations.

 

Who knew it would be so hard to get answers. We put in a Senate request for an audit of the BOM datasets in 2011. Ken Stewart, Geoff Sherrington, Des Moore, Bill Johnston, and Jennifer Marohasy have also separately been asking the BOM for details about adjustments on specific BOM sites. (I bet Warwick Hughes has too).

The BOM has ignored or circumvented all these, refusing to explain in detail why individual stations were adjusted. The two provocative articles Lloyd put together last week were “Heat is on over weather bureau” and “Bureau of Meteorology ‘altering climate figures’”, which I covered here. This is the power of the press at its best.

more here: http://joannenova.com.au/2014/08/bom-finally-explains-cooling-changed-to-warming-trends-because-stations-might-have-moved/


130 thoughts on “BoM’s bomb on station temperature trend fiddling”

  1. Anybody who is NOT a believer and pays attention to the weather will see things such as heavy downpours in one town and not a drop 15 miles north or south of that town. Mosher would say discarding that rain is correct because all the surrounding towns received no rain, therefore it must have been a sensor issue. The same thing happens with temps. Just use the raw data and stop fiddling with it. One gets thrown in jail for fiddling with cancer drug effectiveness data, yet gets grant money to do it with temps, IF they get those temps to rise.

  2. I await detailed comments from Mosher, Zeke and their ilk explaining that these types of adjustments do not apply to USA institutions such as GISS and USHCN and furthermore Steven Goddard is totally wrong with his discoveries of “adjustments” made to US surface temperature records to “improve” on current global warming.

  3. If the fiddles were not all to push a “Warming Agenda” you might give them some leeway – but this is plain old FOLLOW THE MONEY – FRAUD

    Go get ‘em Batman…

  4. Five years ago I downloaded data from BOM for my nearest weather station (Eagle Farm, Qld). Data is below. This also shows a cooling trend which has been “adjusted” into a warming trend (load it into Excel and you’ll soon see). When I go back to the site now, I am unable to find an option to download both raw and homogenised data.

    YEAR,JAN,FEB,MAR,APR,MAY,JUN,JUL,AUG,SEP,OCT,NOV,DEC,Avg,D-J-F,M-A-M,J-J-A,S-O-N,Raw,Homogenised
    1950,24.9,24,23.9,21.3,18.3,16.2,16.9,15.8,18,20.2,21,23.2,20.30833333,24.1,21.2,16.3,19.7,20.33,19.73
    1951,22.8,23.7,23.5,20,16.9,16.4,14.2,14.7,17.6,19.8,22.6,24.1,19.69166667,23.2,20.1,15.1,20,19.62,19.02
    1952,25.8,24.4,23.4,21.8,17.9,15.7,15.2,16.8,18.2,20.9,23.9,25,20.75,24.8,21,15.9,21,20.68,20.07
    1953,24.2,24.2,23.6,22.6,18.4,15.8,15.1,15.7,18.6,21.3,23.6,25.7,20.73333333,24.5,21.5,15.5,21.2,20.68,20.07
    1954,24.4,25.2,25.3,22.4,18.6,16.1,16.4,16.4,18,20.4,23.1,24.2,20.875,25.1,22.1,16.3,20.5,21,20.5
    1955,25.1,25.7,24.8,22.4,18.3,16.2,14.6,16.2,18.5,21.7,22.9,24.1,20.875,25,21.8,15.7,21,20.88,20.38
    1956,25.3,25.1,24.7,22.3,18,15.2,14.4,14.8,17,20.5,22.4,24.7,20.36666667,24.8,21.7,14.8,20,20.32,19.82
    1957,24.9,25.4,23.5,22.9,18.5,17.3,14.1,16.4,18.2,21.6,24.3,25.5,21.05,25,21.6,15.9,21.4,20.98,20.48
    1958,25.1,25.4,24.8,22.1,20.3,17.5,15.5,17,17.7,21.6,22.9,24.4,21.19166667,25.3,22.4,16.7,20.7,21.28,20.78
    1959,24.5,24.5,23.7,21.4,18.2,16,15.7,16.1,18.4,19.5,22,25,20.41666667,24.5,21.1,15.9,20,20.37,19.87
    1960,25.3,25.1,22.8,21.6,17.6,15.5,15,15,18.5,20.7,22.2,23,20.19166667,25.1,20.7,15.2,20.5,20.36,19.86
    1961,23.4,24.1,23.1,21.7,18.1,16.3,14.8,15.9,18.4,21.5,22.7,24.4,20.36666667,23.5,21,15.7,20.9,20.25,19.85
    1962,24.9,25.5,23,20.8,18,17,15.9,15.6,18.6,21.1,23.4,23.2,20.58333333,24.9,20.6,16.2,21,20.68,20.28
    1963,24.7,25.6,24.3,21.6,19.2,16.3,14.4,16.7,18.6,19.9,21.6,22.9,20.48333333,24.5,21.7,15.8,20,20.51,20.12
    1964,26,25.1,24.1,21.8,18.9,15.7,15.6,16.1,19.1,20.4,22.7,24.2,20.80833333,24.7,21.6,15.8,20.7,20.7,20.32
    1965,24.7,24.7,24.4,22.1,19.2,16.9,14.3,16.4,19.7,21,23.2,23.9,20.875,24.5,21.9,15.9,21.3,20.9,20.51
    1966,24,25.9,24,21.9,18.3,16.6,15,16.6,18.5,20,22.7,23.8,20.60833333,24.6,21.4,16.1,20.4,20.62,20.22
    1967,24.9,24.4,23.1,21.1,18.6,18,15.2,15.6,18.7,21.8,23.1,23.4,20.65833333,24.4,20.9,16.3,21.2,20.69,20.29
    1968,24.8,25.1,24.5,23.4,18.2,16.3,15.1,15.9,18.2,21.6,24.6,24.4,21.00833333,24.4,22,15.8,21.5,20.93,20.61
    1969,26.3,25.4,24.3,22.2,19.4,16.5,16.7,17.9,17.3,20.5,22.4,25.1,21.16666667,25.4,22,17,20.1,21.11,20.77
    1970,25.8,25,23.6,21.6,17.1,15.6,13.9,15.8,17.7,20.4,22.2,24.4,20.25833333,25.3,20.8,15.1,20.1,20.32,20.06
    1971,25.1,24.8,23.3,20.8,17.8,15.5,14.3,15.9,18.3,21.6,22.3,23.8,20.29166667,24.8,20.6,15.2,20.7,20.34,20.04
    1972,24.5,23.4,23,21,18.3,16.3,14,16.1,18,20.6,22.8,24.6,20.21666667,23.9,20.8,15.5,20.5,20.15,19.86
    1973,26.1,25.5,24.9,22.5,20.4,16.7,16.7,16.9,19.2,21.4,23.5,24.8,21.55,25.4,22.6,16.8,21.4,21.53,21.23
    1974,24.9,24.6,23.7,21.9,18.5,15.7,14.4,15.1,17,19.5,21.3,24.4,20.08333333,24.8,21.4,15.1,19.3,20.12,19.81
    1975,24.6,24.7,24.7,21.5,18.6,15.7,15.8,15.9,18.5,20.3,22.6,24.1,20.58333333,24.6,21.6,15.8,20.5,20.61,20.41
    1976,24.8,24,24.2,21.3,19.1,16.6,16,14.7,17.5,19.6,23.3,25.4,20.54166667,24.3,21.5,15.8,20.1,20.43,20.23
    1977,25.3,25.7,24.4,22.3,18.7,15.3,14.6,16.2,17.9,21.3,23.5,24.5,20.80833333,25.5,21.8,15.4,20.9,20.88,20.7
    1978,25.9,25.7,24.3,21.9,18.3,15.1,14.6,15.4,17.6,19.7,22.2,24.1,20.4,25.4,21.5,15,19.8,20.43,20.25
    1979,25.1,24.4,23.8,21.9,18.1,17.5,14.9,16,18.4,20.7,23.5,26.2,20.875,24.5,21.3,16.1,20.9,20.7,20.51
    1980,26.2,25.4,24,22.2,19.8,16.5,14.7,17,19.5,21.6,23.4,23.7,21.16666667,25.9,22,16.1,21.5,21.38,21.18
    1981,25,26.1,24.7,22.1,18.6,15.1,15.1,15.3,18.8,19.8,22,25.2,20.65,24.9,21.8,15.2,20.2,20.53,20.3
    1982,25.8,25.3,24.6,21.2,18.8,14.4,14.1,16.6,18,19.2,22.1,24.2,20.35833333,25.4,21.5,15,19.8,20.44,20.36
    1983,24.8,25.5,24.5,21.4,19.7,15.9,15.4,16.1,19.7,21,22.4,22.9,20.775,24.8,21.9,15.8,21,20.88,20.73
    1984,24.7,24.5,23.8,21,18.2,16.9,14.6,15.6,17.4,19.6,22.4,24.8,20.29166667,24,21,15.7,19.8,20.13,20.03
    1985,25.8,24.9,23.7,21.6,19.1,15.1,15.1,15.7,18,20.4,22.7,25.6,20.64166667,25.2,21.5,15.3,20.4,20.58,20.42
    1986,25.4,25.4,23.9,22.6,19.8,16.5,16,15.9,19,21.3,22,23.9,20.975,25.5,22.1,16.1,20.8,21.12,21
    1987,26.7,25.5,23.8,21.8,19.4,17,14.8,17,18.6,20.8,22.2,24.4,21,25.4,21.7,16.3,20.5,20.96,20.88
    1988,24.9,23.9,22.2,21.7,18.9,16.4,16.3,16.2,18.8,22.6,23.1,24.4,20.78333333,24.4,20.9,16.3,21.5,20.78,20.68
    1989,24.3,23.3,23.5,22.1,19.7,15.8,14.1,14.1,17.6,21.3,21.7,23.4,20.075,24,21.8,14.7,20.2,20.16,20.16
    1990,24.7,25.2,23.1,21.4,18.7,15.3,14.6,14.5,17,20.7,22.9,25.7,20.31666667,24.4,21.1,14.8,20.2,20.12,20.12
    1991,25.7,25.3,23.5,21.4,19.6,17,14.1,16,18.3,20.8,23.1,23.9,20.725,25.6,21.5,15.7,20.7,20.88,20.87
    1992,25.9,24.9,23.4,21.2,18.5,14.9,14.9,15.8,17.6,19.8,22.5,23.5,20.24166667,24.9,21,15.2,20,20.28,20.27
    1993,25.1,25.3,23.2,21.1,19.1,15.9,17.7,16.6,17.8,20.1,21.6,23.2,20.55833333,24.6,21.1,16.7,19.8,20.58,20.58
    1994,25.7,23.9,22.5,20.5,17.4,,14.7,15.1,17.3,19.7,23.4,23.3,20.31818182,24.3,20.1,15.1,20.1,19.91,19.91
    1995,24.7,23.9,23.2,20.3,18.3,15.1,14.1,15.8,18.1,19.7,23.3,23.2,19.975,24,20.6,15,20.4,19.98,19.98
    1996,24.9,24.1,,20.7,18.6,16.2,14,14.6,17.5,19.7,22.1,23.4,19.61818182,24.1,20.9,14.9,19.8,19.92,19.92
    1997,23.3,24.9,23.2,20.4,17.7,14.9,14.5,14.8,18.1,19.6,22.5,25.1,19.91666667,23.9,20.4,14.7,20.1,19.78,19.77
    1998,25.3,25.7,24.4,21.1,18.3,15.6,15.3,16.7,18.9,20.6,21.1,23.9,20.575,25.4,21.3,15.9,20.2,20.68,20.68
    1999,24.8,24,23.4,19.7,18.4,,15.7,,,,,22.1,21.15714286,24.2,20.5,,,,
    2000,23.9,,,21.4,18.5,14.9,14.3,,18.7,20.8,21.5,24.2,19.8,23.1,21.2,14.7,20.3,19.84,19.83
    2001,25,24,24.7,21.3,17.7,16.7,15.2,,,20.9,22,25,21.25,24.4,21.2,16.1,20.3,20.51,20.51
    2002,25.6,25.7,23.7,21.5,17.9,16,13.9,15.6,19,20.7,22.5,24.1,20.51666667,25.4,21,15.2,20.7,20.59,20.59
    2003,24.1,24.6,22.7,20.9,18.1,16.3,14.8,15.8,19.3,20,21.5,24.1,20.18333333,24.3,20.6,15.6,20.3,20.18,20.18
    2004,26.1,26.3,23.9,21.6,17.4,15.8,15.2,15.6,17.7,21.6,22.5,23.9,20.63333333,25.5,21,15.5,20.6,20.65,20.65
    2005,25.1,25.5,23,21.7,18.1,16.5,15.8,15.5,18.2,22,23.2,25.7,20.85833333,24.8,20.9,15.9,21.1,20.71,20.71
    2006,25.9,25.7,23.2,21.3,17.2,15.4,15.3,16,18.3,20.3,21.9,22.5,20.25,25.8,20.6,15.6,20.2,20.52,20.52
    2007,24.9,24.3,24.3,21.4,19.9,15.1,13.3,16.2,18.3,21.6,22,23.8,20.425,23.9,21.9,14.9,20.6,20.32,20.32
    2008,24.8,24.4,22,19.7,17.4,17,14.8,14.1,18.9,20.2,23.1,24.5,20.075,24.3,19.7,15.3,20.7,20.02,20.02
    2009,25.2,24.9,23.9,21.8,18.1,15.8,14.8,17.6,19.1,20.7,23.7,25.2,20.9,24.9,21.3,16.1,21.2,20.84,20.84

  5. Unless a warm bias is proven to be introduced overall, this sort of implication of fraud is counterproductive, since it makes popular exposure of proven cases of fraud much more difficult. The usual suspects of the Hockey Stick Team and the Tree House Club will use it against skepticism by merely calling the plots cherry-picking, since they lack a statistical breakdown of the actual impact on the final country-wide trend. Chance alone offers a slight bias in either direction, overall. Homogenization itself is here being criticized for exactly what in most science circles justifies its use: lack of data about instrument changes. Twisting this alone into innuendo about fraud or extreme bias amounts to mere motivated and paranoid-sounding word games.

    Steven Goddard at least has a warm bias to report, but he too simply mixes widely accepted (by the likes of Steve McIntyre and nearly every scientist you could find) time-of-observation adjustments into his overall stark claims of outright fraud, again subjecting skeptics to ridicule by seasoned and PR-firm-coached activists, many of whom have great technical savvy too.

    All of these types of mere smoking-gun claims stop the real message of fraud from being successfully exposed, namely fake hockey sticks. Now that the latest one was exposed as having no blade in *any* of the input data thrusting suddenly upwards out of noise, the jig should be up, the Enron nature of peer-reviewed climate “science” in top journals having finally been proven outside of black-box confusion.

    You don’t even have a missing person here yet you are loudly claiming an organization to be murderers, essentially crying wolf as far as that crucial demographic is concerned, the one that still figures skepticism is indeed just another right wing attack on science.

  6. ” ….. the power of the press at its best ….. .”
    Unfortunately the MSM here in GB seems to have ignored this.
    No change there, then.

  7. Dear Moderators,

    From the post:

    The two provocative articles Lloyd put together last week were Heat is on over weather bureau and Bureau of Meteorology ‘altering climate figures, which I covered here.

    Without the links in the original piece, this section looks messed up.

    Here is the original as it appeared:

    The two provocative articles Lloyd put together last week were Heat is on over weather bureau and Bureau of Meteorology ‘altering climate figures, which I covered here.

    Please restore the excerpt to as it was in the original.

  8. Nik, you are one-track obsessed, reposting without following. TOB is not part of the BOM explanation. Weasel-worded “might haves” are, and they equate to poor science. The post is accurate. There are many logical reasons to conclude the adjustments are FUBAR. One thing is recognizable, however: the adjustments outside of TOB also cool the past and warm the present.

    This, in combination with dozens of other climate science scandals, amounts to quack science. In pursuit of an explanation from the curious, many times the BOM ducked like a quack.

  9. The effect of some fiddling is only transient.

    Future readings will then show a ‘decline’ from the artificial high.

    But that won’t enable recovery of the unnecessary spending.

    • @Nick Phillips
      How do I load your data into Excel?
      Answer: Copy the data into a text file and save it with a .csv extension in Windows. You can then normally just double-click on the file and Excel will open it.
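      For anyone who would rather script the check than use Excel, here is a minimal Python sketch. It embeds a handful of the annual Raw/Homogenised values from the data above so it is self-contained; the file name and helper function are illustrative, not anything BOM provides:

```python
import csv, io

# A few annual rows from the Eagle Farm data above, inline for illustration;
# in practice read the full file, e.g. open("eagle_farm.csv") (hypothetical name).
DATA = """YEAR,Raw,Homogenised
1950,20.33,19.73
1960,20.36,19.86
1970,20.32,20.06
1980,21.38,21.18
1990,20.12,20.12
2000,19.84,19.83
2009,20.84,20.84
"""

def trend_per_decade(years, temps):
    """Ordinary least-squares slope, scaled to degrees C per decade."""
    n = len(years)
    my = sum(years) / n
    mt = sum(temps) / n
    slope = sum((y - my) * (t - mt) for y, t in zip(years, temps)) / \
            sum((y - my) ** 2 for y in years)
    return slope * 10

rows = list(csv.DictReader(io.StringIO(DATA)))
years = [float(r["YEAR"]) for r in rows]
raw = [float(r["Raw"]) for r in rows]
hom = [float(r["Homogenised"]) for r in rows]

print("Raw trend:         %+.3f C/decade" % trend_per_decade(years, raw))
print("Homogenised trend: %+.3f C/decade" % trend_per_decade(years, hom))
```

      On this small subsample the homogenised column trends warmer relative to raw (the early homogenised values sit well below raw, the later ones match it); the full file is needed to reproduce the commenter’s cooling-to-warming claim.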

  10. Much easier for a public servant to enter a few dodgy numbers into a computer, and log a semi-plausible excuse, than to get off their lazy butt and do some fieldwork.

    If you were a work-shy public servant, what would be easier – nudging the data to conform to the prevailing confirmation bias, or fighting to have your controversial interpretation prevail?

  11. OK, so the data was adjusted because the station was moved… but no-one knows if the station actually was moved.

    So why do they think it was moved?
    Because nearby (50km away) stations show different trends? No, the nearby trends are “consistent with” the unadjusted measurements.

    So what really is the reason to think the station was moved? It seems to be that the trend at the station was downwards and so that must mean it had a fault. Thus the trend is adjusted upwards and that shows the station was warming after all.
    Circular reasoning?

    At least these are the best stations in Australia. The second-rate stations must be worthless.
    What has the BOM been doing on quality control, all these years?

    What do other Met Offices (like the UK’s) make of this practice? The Aussie BOM claims justification by comparing with international practice, so let’s get the words from the horses’ mouths.

  12. “It would be good to remove at least part of the 1940s blip, but we are still left with “why the blip”.”

    Isn’t it great that there really does appear to be some interest on the part of the scientists to have an understanding of the process that goes beyond just the interest of revising history/data to suit their narrative?

  13. I say this is BS. Surely if there had been a sudden change, such as a station move or a change in the surrounding area, then that would show up as a sudden change in the raw data – even a large-scale change of one year vs the next. One word, rhymes with rollocks.

  14. The site has suddenly become harder to read. Then it hit me. Has the font of the main text been changed? Legibility has gone down.

  15. When climate zones shift around some places within a single geographical region become less cloudy and warmer whilst others become more cloudy and cooler.

    There might still be an underlying background trend overall from warmer to cooler or cooler to warmer, but if one applies a homogenisation process because one erroneously thinks that all locations should have the same sign of response to shifting climate zones, then the underlying background trend would be lost, obscured or completely reversed.

    That is what has happened here.

    Those apparently contradictory changes in trend from one place to another within a single region were actually valuable diagnostic information as to how the climate zones overhead were shifting and having different effects in different locations.

    They have destroyed that diagnostic information through ignorance.

    There should be some penalty for such dozy and unprofessional activity.

  16. How much of this is applicable to the US and other nations? We know it was done in New Zealand also. Where else? The Climate of Corruption is just one aspect of the CO2 social madness.

  17. In a nutshell, the BOM claims that a possible thermometer shift of only metres can cause an incorrect reading of 2C. They can determine this by consulting thermometer trends hundreds of kilometres away.

    The lack of intelligence of some working in this field is mindboggling.

  18. Again, as I keep saying: all hand waving until the Federal Police intervene and SEIZE/IMPOUND ALL BOM records for investigation of FRAUD. The culprits doing this need to account (most likely it’s a very FEW people in the organization). The same applies to the US USHCN, NCDC etc.

  19. NikFromNYC
    August 26, 2014 at 12:43 am
    —————————————–
    “You don’t even have a missing person here yet you are loudly claiming an organization to be murderers, essentially crying wolf as far as that crucial demographic is concerned, the one that still figures skepticism is indeed just another right wing attack on science.”

    Your statements are very, very telling. “Another right wing attack on science”?! Was there another? The only instance I can recall of conservatives being accused of attacking science is the global warming hoax, and then only because it became the preferred stalking horse of the Professional Left, who perceived any attack on their “last great hope” to be politically rather than scientifically motivated. Sceptics have a wide range of political opinions; what they have in common is a desire to defend the scientific method from the inanity of politics.

    That is why you lost. If you don’t know who you are fighting, you can’t win.

    And your attempts at gentle steering? That sceptics should fear overreaching and looking foolish? Alinsky methods fail on the interwebs. Trying to tell sceptics to be quisling little lukewarmers and avoid calling fraud as fraud for fear of looking foolish? Fail.

    You don’t get to tell Australians who have endured years of snivelling propaganda from BoM what to say.

    The record is clear, from the endless propaganda, the Climategate emails from those faithful to the cause claiming BoM was overdoing it, Darwin Zero (H/T Willis), the temporal bias to warming and cooling adjustments in the HQ record, the squealing retreat from HQ to ACORN when court challenge loomed, the evidence of data clipping in ACORN (Tmin exceeding Tmax in multiple records) and now this “might” filth. Adjustments without supporting metadata? There will be no forgiveness.

    BoM have acted like scum. There is no if. There is no but. There is no maybe. This sorry hoax is collapsing. There will be no “warming but less than we thought” soft landing. When the Australian public demand answers they will be given the name “D. Jones (BoM)” before many, many others.

  20. Brute August 26, 2014 at 2:39 am

    The site has suddenly become harder to read. Then it hit me. Has the font of the main text been changed? Legibility has gone down.

    The low-level text is too gray and lacks serifs. It was OK yesterday, but it’s been changed back.
    ===========

    @KenS: Jo’s site is working as of now.

  21. Maybe they should homogenise the rain gauges also. This would help those that have been flooded in a particular area eliminate their local flooding by simply eliminating the rain that fell. That would work really well to eliminate tornadoes and hurricane centers/eye walls.

  22. ralfellis
    August 26, 2014 at 2:00 am
    Err, can nobody be found at those stations who can remember if the thermometer locations were changed? It’s not that long ago.
    Has anyone written to the Aircrew Association to find out?

    If you go the BOM site you can find details of the Amberley weather station here

    This gives the site number which is 040004 and the date since records were kept – 1941.

    If you then hit the pdf basic site summary link you get this information

    There is nothing to indicate that the site has been changed.

    If you then go to Google Maps and put in the co-ordinates you will find out where the observation station is located on the airfield.

    Put in the co-ordinates exactly in this format:

    -27.63, 152.71

    Note when the position appears on the map it will be in degrees, minutes and seconds, not decimal.

    Cheers.
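    For anyone checking the coordinates, the decimal-to-DMS conversion the map performs can be sketched in a few lines of Python (the function name is just illustrative):

```python
def decimal_to_dms(coord):
    """Convert a signed decimal-degree coordinate to (degrees, minutes, seconds)."""
    sign = -1 if coord < 0 else 1
    coord = abs(coord)
    degrees = int(coord)
    minutes = int((coord - degrees) * 60)
    seconds = round(((coord - degrees) * 60 - minutes) * 60, 1)
    return sign * degrees, minutes, seconds

# Amberley coordinates from the comment above
print(decimal_to_dms(-27.63))   # (-27, 37, 48.0)
print(decimal_to_dms(152.71))   # (152, 42, 36.0)
```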

  23. And this series of adjustments happened under a KRudd/JULiar govt run… didn’t it? :-)
    So govvy BOM, wanting to keep their jobs ’n’ bosses happy, had a fair bit to do with it, I’d be guessing… same as the ABC pushing Maurice Newman out to get a greener shade of boss in the big chair. Very evident now, with strident squeals over the new govt cutting carbon cons and looking to remove the RET as well.

  24. So let us step back and take stock. So far, we have documented temperature fiddling (not supposition, not conjecture, DOCUMENTED) from:

    Canada (Steven Goddard)
    The United States of America (Paul Homewood, Steven Goddard)
    The British Isles (Paul Homewood)
    Australia (Jennifer Marohasy, Joanne Nova)
    New Zealand (Joanne Nova)
    Iceland (the whole country)
    Russia (ditto above)

    So I have to ask the question – are there ANY land areas that have had NATURAL warming (natural as defined by unadjusted raw temperature trends that are positive)?

  25. NikFromNYC, think that Steve Goddard’s and others claim of fraud when it comes to land based temperature data is over the top?

    If it looks like a duck, walks like a duck and quacks like a duck … it’s a duck.

  26. The BOM statement points out that there are stations where the homogenised data have a stronger negative trend than the unhomogenised data.

    Citing Mackay in particular, they say “the trend in minimum temperatures is +0.40 C/decade in the raw data but only +0.18 C/decade in the adjusted data.”

    Furthermore, BOM claims that from 1950 to the present “homogeneity adjustments have little impact on national trends and changes in temperature extremes.”

    Just wondering if anyone has checked this, and if so, what exactly is the difference between the adjusted and unadjusted trends? If BOM is lying about this, then it’s a lie that should be relatively easy to expose.

  27. The BoM got this one right. The need for the adjustment is very clear from neighboring stations. I’ve done the analysis here. The change happened in August 1980, and I calculated the adjustment at 2.8°C.
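    Stokes’s actual analysis is at his link; the general idea of estimating a station-move step from target-minus-neighbour differences can be sketched like this (the numbers below are synthetic, chosen only to illustrate a 2.8°C step, and are not the Amberley data):

```python
# Sketch: estimate a station-move step change from the difference between
# a target station and the mean of its neighbours. Synthetic data only.

def step_estimate(diff_series, breakpoint):
    """Mean(after) - mean(before) of the target-minus-neighbours differences."""
    before = diff_series[:breakpoint]
    after = diff_series[breakpoint:]
    return sum(after) / len(after) - sum(before) / len(before)

def best_breakpoint(diff_series):
    """Pick the split point that maximises the absolute step size."""
    candidates = range(2, len(diff_series) - 1)
    return max(candidates, key=lambda k: abs(step_estimate(diff_series, k)))

# Target runs level with its neighbours, then drops about 2.8 C at index 5
diffs = [0.1, -0.2, 0.0, 0.2, -0.1, -2.7, -2.9, -2.8, -2.6, -3.0]
k = best_breakpoint(diffs)
print("breakpoint index:", k)
print("estimated step: %.1f C" % step_estimate(diffs, k))
```

    A real implementation would also test whether the step is statistically significant before adjusting; this sketch only locates and sizes it.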

  28. Slowly but surely, EVERYTHING we have said they are doing to the data is being proven. Can’t wait for GISS to eventually be debunked.

  29. NIKfromNYC,
    That almost all of these “adjustments” results in warming was proven long, long ago.

  30. There are so very many sad things about this. One is this: if you really have no dog in this race, and are simply humanly or scientifically curious about what the climate is really doing, then the further they bury the actual data, and the more they continue to present the data, adjusted, with no error analysis showing the before and after error (where the “after” error cannot really decrease, because you cannot squeeze blood from a turnip or information from sparse, possibly corrupt, past data), the more difficult it gets to assess either reality or our state of knowledge or ignorance relative to it. If one looks at patterns in the weather, they look like they correspond to times in the past when the average temperature was (allegedly) somewhat cooler and actually cooling — if our knowledge of the weather THEN isn’t sufficiently suspect that any such comparison is statistically impossible.

    But the really, really sad thing is this. Start with the past data. It, processed straight up at face value, shows some trend — it doesn’t matter what that trend is, warming, cooling, neutral — but when you put the data straight through the statistical mill, you get some trend. You look at the trend and say “That can’t be right! I’m pretty sure (because I am an unregistered scientific psychic) that it has really been trending at thus and such a rate. I will therefore build a model that corrects the data in such a way that its trend corresponds with my psychic predictions, by finding some way to increase the statistical weight of stations that have the right answer and decreasing the statistical weight of stations that have the wrong answer. Between kriging, PCA, and other exotic tools, I’m pretty sure I can fix the data with some model I discover/invent, even if I have to write my own “special” PCA to accomplish it a la Mann.”

    After a fair bit of work, you discover an algorithm that, using tools and methods in Real Statistics Books, builds a model which (because it is a statistical model based on Bayesian prior assumptions that themselves have a basically unknown probability of being correct) works by adding a well-hidden uncertainty associated with the priors to the inherent statistical uncertainty in the original data. You ignore this — in fact, you ignore error analysis altogether — and use the model to basically rewrite the original data. Because the original data was obviously incorrect, you may even overwrite the original data with your corrected version so that it becomes the “official” history of the time, even though not one single number is the result of actual measurement made by an actual human. This makes it, of course, considerably more difficult for future generations to question your insight and wisdom and psychic abilities. Think this doesn’t happen? Talk to Leif about what has happened to parts of the sunspot record. Getting to raw, original data is more difficult than one might think, and in the computer age it is actually becoming more difficult, not less, because one can still read paper written in manuscript by 12th century writers, but one cannot read a 5.25″ floppy written in 1985, or the data you had carefully stored on your desktop computer in 1994 before the hard disk crashed with no backup.

    All of this (except for actually overwriting/losing the original data) is even a perfectly reasonable thing to do, part of the data exploration process. It shows, among other things, that there exists a set of assumptions that — if true — would confirm your psychic impressions of the past, and if one is honest about those Bayesian priors and the extent of the data manipulations required to get the answer to come out “right”, might even give one a way of estimating the probability that you are in fact correct.

    What you cannot then do, no matter how much you want to, is to turn around and use the modified data to prove that this model, or any other models that your psychic beliefs agree with, are right!

    Suppose that I really believe that “If wishes were horses, then beggars would ride”. I conduct a survey and find that in the general population, most people who wish to ride horses do. Indeed, I construct the joint probability of wishing to ride horses and actually riding horses, and find that it is reasonably high. I then sample the small subpopulation consisting of beggars and little girls from poor families and discover that this population has a much lower joint probability of wishing to ride horses (or ponies) and actually riding them. This data confounds my expectation that the mere act of wishing for a horse to ride will probably produce one, somehow, so I note that there are comparatively few beggars in the general population, and that they are often surrounded by the wealthy they beg from who have a very high joint probability. I then can use a variety of perfectly legitimate (well, not really all that legitimate) data manipulation techniques to adjust my raw data. I can assume that the beggars are some sort of statistical outlier, sampling error, and simply throw them out of the data set. I can use the data from their surroundings to “correct” the survey data — if the five people who live spatially closest to the streetcorner where we encountered the beggar are all wishful horse riders, surely the beggar is too. I can “krige” the data — there are vast parts of the countryside where I didn’t take any data, but I did go stay at expensive bohemian castles belonging to horse-owners here and there who denied the very existence of beggars in their midst but who assured you that if there were any, surely they would ride, so I can use this sparse data to fill in huge areas that are marked “unknown P(w,H) and P(b,w) and P(b) and P(w)”. Eventually I find a way of manipulating the data that shows that in well over 98% of the world, wishful beggars indeed ride. 
    I sigh and tell myself that this model is probably right because it is in accord with my democratic and religious vision of the Universe. God provides horses to wishful beggars despite any claims to the contrary by skeptical individuals who might doubt it. This sort of self-serving manipulation of data is perfectly natural and we all do it all of the time on issues ranging from religion to who usually ends up taking out the garbage at home, and sometimes it even works and gives you the right answer. Maybe even most of the time, who knows?

    If anything, it exemplifies how the human brain is partially broken. We are greedy pattern matching engines and can easily see fluffy sheep in the clouds if we become convinced that they are there.

    What one cannot, or should not, ever do, however, is to then take the model to congress and use it to pass a bill requiring all wishful beggars to register and pay a horse tax, because the model itself “proves” the assumptions that went into building the model, at least after the data was successfully “adjusted”.

    If I think a set of data “has to be” exponential, find the best exponential fit to the data plus an estimate of the noise/error relative to this fit, and then adjust the data to improve the fit/reduce the noise, I cannot reasonably turn around and use the adjusted data to prove that the data is, in fact, exponential in character.
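    That circularity is easy to demonstrate numerically: pull noisy data partway toward any fitted model and the residual variance shrinks by construction, regardless of whether the model was right. A toy sketch (synthetic data, a straight-line fit standing in for the exponential):

```python
import random

random.seed(0)

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return b, my - b * mx

def residual_var(xs, ys, b, a):
    """Mean squared residual about the line y = a + b*x."""
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys)) / len(xs)

xs = list(range(50))
ys = [0.1 * x + random.gauss(0, 1.0) for x in xs]

b, a = fit_line(xs, ys)
before = residual_var(xs, ys, b, a)

# "Adjust" each point halfway toward the fitted line...
ys_adj = [y + 0.5 * ((a + b * x) - y) for x, y in zip(xs, ys)]
after = residual_var(xs, ys_adj, *fit_line(xs, ys_adj))

# ...and the fit "improves" fourfold by construction, which says
# nothing at all about whether the model was the right one.
print(before / after)   # ~4
```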

    rgb

  31. Nick,
    I love the way you assume that if a station doesn’t show what you believe it should show, it must be adjusted.
    Make no effort to determine which station is accurate and which are being polluted. Just find the one station that doesn’t show what you want it to show, assume that it, not the others are in error, and just blindly adjust.
    It’s better than actually studying the situation and determining the cause, isn’t it.

  32. I hope the Aussies pull hard on this thread, to see what else unravels. Eventually, if it is shown that the BOM there doesn’t have clean hands, governments in other countries would be pressured to instigate audits of their own record-keeping agencies. If those too have had their thumb on the scale it would taint the rest of climatology badly, because such bias and unscrupulousness wouldn’t likely have been confined only to one of its sectors.

  33. MarkW August 26, 2014 at 5:39 am
    “Nick,
    I love the way you assume that if a station doesn’t show what you believe it should show, it must be adjusted.”

    The thing is, it should be adjusted. You want the best estimate of the temperature. If the data shows a sudden change that is outside the expected variation, taking account of the history and neighbors, then it is very likely to be due to a move or other event.

    Moves happen. Sometimes an adjustment is wrongly made. But if the policy is right more often than it is wrong, then it should be followed. It’s better than doing nothing. And people like Menne do the statistics.

    In this case the graphs alone show that the adjustment is getting it right.

  34. If one is adjusting stations based on the adjusted trends of other stations, then you are just adjusting based on previous adjustments.

    A self-reinforcing continual upward adjustment. It never really ends. You just keep adding and adding.

    They need to use the raw temperatures only in these adjustment algorithms, but once you start down the road of saying station XY’s raw records contain an error and must be replaced, you have already started down the road of a self-reinforcing continual upward adjustment. Which they are quite happy to implement.

  35. The claim is that the station was moved in 1980, resulting in a lower minimum temperature being recorded since then. Would it not be more accurate to add a constant term to the minimum temperatures since 1980 and leave the past temperatures alone? I don’t think this would have altered the trend, because there has been almost no change in the graph since 1980.
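    The intuition above is correct: shifting the pre-1980 values down by a constant and shifting the post-1980 values up by the same constant give series that differ only by an overall offset, so the fitted linear trend is identical either way; only the absolute level changes. A quick check with a purely synthetic series (not real station data):

```python
def slope(xs, ys):
    """Ordinary least-squares slope."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

years = list(range(1941, 2014))
temps = [20.0 + 0.01 * (y - 1941) for y in years]  # synthetic series

STEP = 2.8  # size of the supposed 1980 discontinuity

# Option A: cool the past (what the adjustment does)
cool_past = [t - STEP if y < 1980 else t for y, t in zip(years, temps)]
# Option B: warm the recent values instead (the commenter's suggestion)
warm_recent = [t + STEP if y >= 1980 else t for y, t in zip(years, temps)]

# The two series differ by a constant everywhere, so their trends match.
print(slope(years, cool_past))
print(slope(years, warm_recent))
```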

  36. I live in a town in the mid US that has two weather stations located about 8 miles apart, at about the same elevation. The temperatures at these two stations almost always differ by one or two degrees F. I thought this was normal. As far as I know, they don’t adjust one temperature to match the other.

  37. An independent expert panel reviewed the BOM ACORN-SAT dataset of selected 112 stations and states:

    “Before public release of the ACORN-SAT dataset
    the Bureau should determine and document
    the reasons why the new data-set shows a lower
    average temperature in the period prior to 1940
    than is shown by data derived from the whole
    network, and by previous international analyses
    of Australian temperature data.”

    Says it all really.

    Not only does it get colder before 1940 once the ‘homogenisation’ process is done to the selected 112 stations compared to the ‘whole network’ before 1940, it also manages to get them cooler before 1940 than in any other previous international attempts.

    This is obviously what the process is specifically designed to do: increase the warming trend by making it cooler before 1940 after each iteration, and then warmer thereafter. Apparently this has increased the warming since 1910 from about 0.6 degrees C to about 1 degree C in Australia, an increase of about 60%, yet elsewhere in the report it states that the effect of homogenisation has not greatly affected the warming trend.

    The only useful thing is that these propaganda documents will be available for all to see in future, the hubris and chicanery is breathtaking.

    But why didn’t any of the expert review panel have the guts to stand up and say the methodology is flawed and only increases any warming trend on each iteration? It’s just another hockeystick.

  38. Please excuse me for attempting to interject common sense into a Nick Stokeian discussion, but:

    how is it that a station that is moved can be considered “the same station”. If it is taking readings from a different location than shouldn’t it be a new station?

  39. “Nick Stokes August 26, 2014 at 5:55 am

    Have you looked at the graphs that I showed? The need for the adjustment is obvious”

    Why is the need for adjustment obvious?

    The data is what it is.

    Just because it’s slightly different from other stations doesn’t make it wrong.

    There might be something about the station’s microclimate, or anything.

    Data is just that: data. When you adjust it, you introduce your own perceived bias into “what the data should be”.

    Otherwise known as cheating in experiments. Or making the data up.

    Just because YOU think it should be adjusted doesn’t make the adjustments valid or necessary.

  40. @John from Au……….How do I load your data into Excel?

    Cut and paste the text strings into Excel. Highlight it. Go to the Data group and find “Text to Columns”. Select the “delimiter” and set it to spaces (could be tabs, or even commas, but probably spaces). Ratchet that over until it’s separated into a nice set of columns.
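    For anyone who would rather skip Excel, roughly the same split-on-whitespace step takes a few lines of Python. The sample rows below are invented; substitute the pasted station text:

```python
import io

# Hypothetical whitespace-delimited station data (year, month, max, min)
raw = """\
1975 01 23.4 12.1
1975 02 24.0 12.8
1975 03 22.7 11.5
"""

# str.split() with no argument splits on any run of whitespace,
# which handles spaces and tabs alike
rows = [line.split() for line in io.StringIO(raw) if line.strip()]
print(rows[0])
```

    From there the columns can be converted to floats or loaded into a spreadsheet or plotting tool as needed.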

  41. Statistically, the adjustments should not introduce a trend. Some will be adjusted higher, some lower, but over a large number of stations the net effect should be zero.

    Thus, if the adjustments are introducing a trend, a correction is required to remove the overall trend. It would appear that the current algorithms used for adjustments do not include a correction for induced trend.

    This can be seen in the comparison of raw to adjusted data, where the net adjustments very much mimic the existing trend. For example, the adjustments themselves add a net increase during the 1980s and 1990s, then level off during the 2000s. Just like the raw data.

    This suggests that the adjustment mechanism has an unrecognized bias.

    A similar thing happens in models. For example, in a GCM small errors in total energy accumulate such that the model gains or loses energy independent of external forces. This residual needs to be apportioned back into the model to prevent introduced bias.

    It would appear, however, that the temperature homogenization routines currently in use do not recognize or deal with introduced bias. For currently unrecognized reasons, the current algorithms introduce a bias that has a high correlation with the underlying trend in the raw data.
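    The check proposed here, that adjustments across a network should not themselves carry a trend, can be sketched directly: compute the network-mean difference between adjusted and raw series and fit a trend to it. The station count, offsets, and drift size below are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1950, 2011)
n_st = 200

raw = rng.normal(14.0, 0.5, size=(n_st, years.size))
# Hypothetical adjustments: random per-station offsets plus a small shared drift
adjustments = rng.normal(0.0, 0.2, size=(n_st, 1)) + 0.005 * (years - years[0])
adjusted = raw + adjustments

net = (adjusted - raw).mean(axis=0)       # network-mean adjustment, per year
slope = np.polyfit(years, net, 1)[0]      # any trend here was introduced by adjusting
print(round(slope * 100, 2), "C per century added by adjustments")
```

    If the adjustments were trend-neutral, the fitted slope of the net-adjustment series would be near zero; a nonzero slope is exactly the “induced trend” the comment describes.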

  42. Overall, it could be argued that there is no need for adjustments unless you are interested in individual stations.

    Because the adjustments should not introduce a trend, when you are dealing with averages it makes no sense to correct individual stations beforehand, as the corrections should on average balance out to a net zero. This means the adjustments should have zero effect on the averages, and thus are not required.

    Only when you are dealing with an individual station is corrected data required. Otherwise, when an average is calculated over all stations, the raw data should statistically produce an average that is at a minimum as accurate as the adjusted data.

    The adjusted data, on the other hand, can produce an average that is less accurate than the raw data, due to the possibility of introduced and unrecognized bias. Thus, when one is interested in overall averages, the raw data is most likely to provide the correct answer.

  43. Ralph Kramden, you are 100% correct. I live 3 miles from where I work (I work in town and live outside of town). The town is small, just 6,000 people, yet UHI is easily noticeable: it is always 2+ degrees warmer in town. This past winter I ran into many instances where it was -5 degrees in town and -12 degrees just outside of town. Again, this is a puny town of 6,000. Nick Stokes would tell us UHI is negligible. No need to adjust 2014 temps down for UHI. Hey Stokes, I see a step change in the global temps in 1998. I guess we need to adjust all temps after 1998 down, because that step change is very noticeable. If all these adjustments are as negligible as Stokes and the boys say, then why even make them? Just use the raw data and stop wasting your time adjusting if the adjusting is so negligible. Hmmmmmmm

  44. All ground-based sites are biased, and all adjustments build warming trends. It is NOT science but political purpose that drives the adjusters. Perhaps it’s time to scrap land-based sites that cover only 30% of the world (and poorly distributed at that) and take this propaganda tool out of the hands of dishonest, agenda-based individuals. I personally have never seen adjustments that do not create a steeper slope that benefits alarmist ideology. I do not use or believe ANY ground-based data and rely on satellite data despite its own problems. There simply is no need to keep records that we can’t trust to be honest. I have a personal system that has been in place for nearly 30 years that correlates very nicely with satellite data. I haven’t made any adjustments to further my agenda.
    I show a slow decline of about .5C here in California’s Central Valley over the last 14 years. I trust that far more than any government site.

  45. How do these adjustments take into account any micro-climate differences? It would seem these differences are lost when the data is Homogenised.

  46. Nick Stokes
    August 26, 2014 at 5:55 am

    Have you looked at the graphs that I showed? The need for the adjustment is obvious.
    ==============
    Nick, are you saying the majority of stations were moved to a warmer location?
    ..or the majority of stations were moved to a cooler location?

    Using your logic……it should be evenly split….so no adjustments are necessary at all

  47. JustAnotherPoster August 26, 2014 at 6:45 am
    “The data is what it is.
    Just because its slightly different to other stations doesn’t make it wrong.
    There might be something about the stations microclimate, or anything.”

    The data is what it is for that particular combination of micro-sites.

    Adjustments are made for spatial averaging. We aren’t interested in the microclimate. We want to use Amberley as representative of a large area around. Amberley experienced a 1.4°C drop in August 1980, but the region didn’t, as shown by the other three stations. So you don’t want to project that onto the whole area.

    don penman August 26, 2014 at 6:21 am
    “The claim is that the station was moved in 1980 and resulted in a lower minimum temperature being recorded since then, would it not be more accurate to add a constant term to the minimum temperatures since 1980 and leave the past temperatures alone,”

    This point is related. It actually doesn’t matter which you do, so the convention is to leave the present unchanged. The reason that it doesn’t matter is that we aren’t trying to capture microclimate. That’s why anomalies are used; they take that factor out. We want Amberley to tell us about that region, not what location it is in on the airbase.

  48. The arrogance that drives these adjustments, and the fact that they thought they could do this and have it be accepted, shows how much smarter than the rest of us they hold themselves to be. Did they think no one would notice? Compounding this is the “loss”, in some cases, of the raw data, obviously in an attempt to cover their asp. Ignore these false records; they are a distraction and unusable for real research.

  49. willnitschke August 26, 2014 at 3:24 am
    In a nutshell, the BOM claims that a possible thermometer shift of only metres, can cause an incorrect reading of 2C. They can determine this by consulting thermometer trends hundreds of kilometers away.

    The changes are in many cases necessary. For example, if you look at the raw data for the Darwin weather station in Australia, you’ll see that it exhibits a cooling trend. Closer analysis reveals that there was a substantial drop in 1939-42. Prior to that date the station was based at Darwin Post Office and didn’t have a Stevenson screen, and the postmaster had to move the thermometer so that direct sunlight didn’t shine on it; also, a tree grew such that by the 1930s the site was shaded. In 1941 the station was moved to Darwin airport, hence the sudden change. Just as well, because the Darwin PO was destroyed by the Japanese bombing raid in 1942.

  50. JohnWho August 26, 2014 at 6:41 am
    “how is it that a station that is moved can be considered “the same station”. If it is taking readings from a different location than shouldn’t it be a new station?”

    Mosh would applaud. That’s what BEST does (scalpel). But it loses information. Because the two stations are very close, you expect them to have very close correlation. But if you throw that away and regard them as separate, information in the data is used to establish the relation between them, losing degrees of freedom. Long records are very valuable for trends.

    This example shows one facet of homogenisation. The three comparison stations aren’t in ACORN, not because of quality but because of their short duration. But there is enough data to sort out what happened in 1980. That means we have a homogeneous 70-year series for that part of Qld, using stations that individually couldn’t provide it.

  51. Nick Stokes
    August 26, 2014 at 7:26 am

    Adjustments are made for spatial averaging. We aren’t interested in the microclimate. We want to use Amberley as representative of a large area around. Amberley experienced a 1.4°C drop in August 1980, but the region didn’t, as shown by the other three stations. So you don’t want to project that onto the whole area.

    Then don’t. Spatial averaging of temperatures (intensive properties) gives you nothing physically meaningful in return.

  52. Nick Stokes
    August 26, 2014 at 7:40 am

    Mosh would applaud. That’s what BEST does (scalpel). But it loses information. Because the two stations are very close, you expect them to have very close correlation. But if you throw that away and regard them as separate, information in the data is used to establish the relation between them, losing degrees of freedom. Long records are very valuable for trends.

    Define “very close”. The distance between where I work and where I live is about 13 miles as the crow flies. I’ve seen temperature differences between the two vary by as much as 27F. Which one should be adjusted and why?

  53. Anthony, have you considered starting a reference page on “temperature adjustments” ? Please?

  54. MarkW
    August 26, 2014 at 5:39 am

    Nick,
    I love the way you assume that if a station doesn’t show what you believe it should show, it must be adjusted.
    Make no effort to determine which station is accurate and which are being polluted. Just find the one station that doesn’t show what you want it to show, assume that it, not the others are in error, and just blindly adjust.
    It’s better than actually studying the situation and determining the cause, isn’t it.

    I looked at Nick’s analysis and I too thought that his adjustment was quite arbitrary (no “calculation” involved, just guess a number and plot it; hey, it looks “good”). In my opinion, UNLESS you have photographic and eyewitness confirmation of a station move, the data should NEVER be changed or “homogenized” in any way whatsoever! If you suspect the data is corrupt for some reason, you can choose not to use it and clearly state the reason when you present your temperature analysis. Any time trends are flipped by adjustments such as these (either warming or cooling), alarm bells should start ringing loudly…

  55. Nick Stokes, without a written justification for the adjustment and without a clear method statement the adjustment is wrong.

    It doesn’t matter whether the actual temperature rose, fell or stayed the same.

    The data has been corrupted. It is now meaningless.

    Good work in trying to replicate the adjustment and so find meaning… but you can’t tell if that is what happened.
    And you can’t tell if that is why the adjustment was made.
    So the data is still corrupted.

  56. Adjustments are made for spatial averaging. We aren’t interested in the microclimate.

    “We want to use Amberley as representative of a large area around”. <——

    you can't do this….

    A thermometer records the temperature at its own location. Nothing more nothing less.

    You are just assuming "something" was odd. You can't prove it. You're basically guessing "something" was wrong with the data.

    There is absolutely no justification for changing or adjusting the raw temperature.

  57. “The data is what it is for that particular combination of micro-sites.

    Adjustments are made for spatial averaging. We aren’t interested in the microclimate. We want to use Amberley as representative of a large area around.”

    So when we look at a graph of monthly minimum temps ‘deaseasonalised’ of a particular combination of micro-sites that happen to be all over the joint where the ancestors decided to plonk themselves, we can see some squiggly coloured lines that deviate quite a bit from each other and yet move somewhat in unison. Yeah I got that this is not the equator and Antarctica we’re talking about here but Mount Glorious isn’t in any of those places so obviously we need to give it the flick and stick with our evenly gridded temp stations to get a proper spatial average for any adjusting of the odd outlier or problem thermometer.

    Err, no, wait a bit: the temp stations haven’t been chosen in a nice a priori grid like a surveyor would take levels for a contour map. Just a few levels hundreds of kilometres apart, and he’s missed Mt Glorious. Not to worry, our intrepid surveyor Nick reckons he’s got it all sorted with some clever averaging, and there are no mountains to climb or deep ravines to fall into here, folks, just a nice gentle incline as far as his eye can see.

  58. The global temperature is a poor way to determine whether the temperatures we measure at individual stations are trending upwards as predicted by AGW, because it is too easy. If all the stations were to increase over time (which we don’t observe), or if we always observed a high temperature record when conditions are ideal for this to happen (which we don’t see), then we would have evidence that measured temperatures were trending upwards.

  59. They are called undocumented station moves. Happens all the time.
    Some of the most unreliable data folks have is the metadata.

    So you are faced with a situation.

    You have the time series of multiple stations.
    You have incomplete and often unverifiable station metadata that may or may not accurately record the location and the instrument.

    You compare the stations and find that one sticks out like a sore thumb.

    What you DONT HAVE is answers. You have choices

    1. Assume the metadata is correct, and that somehow over decades one region has a cooling trend while surrounding it you have warming trends. Or the opposite: you see one site with skyrocketing trends while all around it the world cools. You try to make thermodynamic sense of this? How could one little pocket of the world warm by 2C while all around it cooled? Or how could one place cool while trends around it warmed? Hard to make thermodynamic sense of that.

    2. Assume that the metadata is incorrect or incomplete, and create a field based on that assumption.

    How do you decide between these choices?

    Well, for number 1, the first thing you do is update the metadata so you don’t repeat the problem in the future. And you watch the sites that exhibited this weird behavior. You would also try to develop a physical theory that explains how a patch of earth can cool for decades while a few km away things warmed. This is not done by “hand waving”. I’ve spent a considerable amount of time looking at “cooling” stations.
    I can say this:

    1. The phenomenon occurs in two places: the US and Australia.
    2. There is no unique geography that all of these sites share.
    3. In the case of the US, the cooling is associated with station moves (if one trusts the metadata).

    For number 2, you can simply compute the field under two cases. Case number 1: no adjustment. Case number 2: adjusted. Then look at your global result, which is all that matters. What you find is that the global answer doesn’t change. You might have the local detail wrong, but in the big picture the global answer doesn’t change.

    Bottom line: you get the same global answer whether you include cooling stations or not, whether you adjust them or not. Given the absence of any physical mechanism to explain how one patch of earth can cool while the rest warms, given that the metadata record is not God’s word, and given that the global answer doesn’t change, it is reasonable and justifiable to apply Occam’s razor and assume that the metadata missed a station move. It’s the simplest explanation.
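    The two-case comparison described above can be illustrated with a toy network: compute the spatial mean with the suspect station left raw and again as if its step were corrected, then compare. The station count, noise, and step size are all invented; the point is that one 1.4 C step in one of many stations can move the network mean by at most step/N:

```python
import numpy as np

rng = np.random.default_rng(2)
n_st, n_yr = 500, 60
# A shared warming trend plus station-level noise
field = rng.normal(0.0, 0.4, size=(n_st, n_yr)) + 0.01 * np.arange(n_yr)

suspect = field.copy()
suspect[0, 30:] -= 1.4            # one station carries an unexplained -1.4 C step

mean_adjusted = field.mean(axis=0)   # as if the step had been corrected away
mean_raw = suspect.mean(axis=0)      # step left in place

# The largest possible difference in the network mean is 1.4 / 500
print(round(np.abs(mean_raw - mean_adjusted).max(), 4))
```

    Whether this generalizes to real networks (where suspect stations are not rare) is exactly what the surrounding thread is arguing about; the sketch only shows the arithmetic of the claim.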

  60. Reminds me of the tale the now retired civil engineer bro could tell about the not very old rural home he got to urgently assess that was suddenly collapsing in the middle. Nice rural land you have here folks but sorry no-one is to enter the home again because it’s built smack bang over an old abandoned copper mine and oops the footings core boys seemed to have missed that at the time.
    It was bulldozed into your cute average hole in the ground in case you were wondering Mr Stokes, et al.

  61. Nick Stokes: This is why your assumption that you KNOW what the data ought to be is nothing but hubris and has no place in data gathering. Exhibit A is the Brisbane Aero station data 50 km away, which also shows a similar cooling trend in 1980 and which you claim, from your armchair without any actual research, is wrong. Your method is clever, but when untested against the real world it is nothing but speculation, not science.

  62. The blue trend line looks wrong to me. Just eyeballing it, it looks like it should be nearly flat.

  63. It would seem the problem is one of analysis and expectations. The BoM intended use does not seem suited for our expected use. The better skeptical questions might be, what does the BoM intend to do with the measurements, what message do they intend to send/advertise, how do the adjustments support the message, and why is that message good/correct?

  64. rgbatduke
    August 26, 2014 at 5:37 am

    my psychic predictions

    As usual spot on.

    If anyone is interested in what NCDC’s Global Summary of the Day data looks like without scalpels, kriging, and homogenization, follow the URL in my name. NCDC does some sort of “data quality” repairs before I get it, but the least I can do is not make it worse.

  65. Nick Stokes
    August 26, 2014 at 5:30 am

    The BoM got this one right. The need for the adjustment is very clear from neighboring stations. I’ve done the analysis here. The change happened in August 1980, and I calculated the adjustment at 2.8°C.

    Nick Stokes
    August 26, 2014 at 5:35 am

    Nick Stokes August 26, 2014 at 5:30 am
    Oops, mixed up a number there. The adjustment is 1.4 C; the change to trend is 2.8°C/century.

    Very nice, Nick. A beautiful job.

    We see that station siting was improved in August 1980 to remove some artificial UHI. We need many more such changes, as Anthony Watts’s crowdsourced research on US stations proved.

    Then we need full access to raw data so we can see whether all or too many adjustments result in increasing the alarmism; or whether these adjustments are, in fact, valid and unbiased.
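    As a rough sanity check on the figures Nick quotes (a 1.4 C step, a 2.8 C/century change to trend), one can compute how much a single constant step contributes to an ordinary least-squares trend over an assumed record span. The 1941-2013 span below is an assumption for illustration, not a BoM figure:

```python
import numpy as np

years = np.arange(1941, 2014)                 # assumed record span
step = np.where(years >= 1980, -1.4, 0.0)     # the step alone, no climate signal

# OLS slope (C/yr) attributable purely to the -1.4 C step
slope = np.polyfit(years, step, 1)[0]
print(round(slope * 100, 1), "C/century")     # roughly the magnitude quoted
```

    The exact number depends on where the break falls within the record, but the order of magnitude matches: removing a step of that size changes the fitted trend by a couple of degrees per century.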

  66. For number 2, you can simply compute the field under two cases: case number 1. No adjustment.
    Case number 2: adjusted. Then look at your global result which is all that matters. What you find
    is that the global answer doesnt change. Such that you might have the local detail wrong, but in the
    big picture the global answer doesnt change.

    Bottom line: you get the same global answer whether you include cooling stations or not.
    ====
    Bottom line: you get the same global answer whether you include warming stations or not….

    then there’s absolutely no reason to apply an algorithm or an adjustment to any of the temp readings, or any of the past temp readings……
    ……..they would all average out without it

  67. I nominate the esteemed Duke professor’s comment for elevation to full post status. Title: “Begging for Climate Ponies”

  68. A small change in the location of a weather station matters. But homogenizing over hundreds of kilometers is OK.

  69. Hey Mosh,
    You’ve mentioned that BEST does spot checking between the GAT field and test station measurements. Since you normalize for latitude, altitude, and whatever else you adjust for, what is the process to prepare for the test?
    Do you (BEST), un-normalize the field value at that point, or do you take the station values (min and max), and run the same normalization process to convert it into a single point field value?

  70. Nick Stokes August 26, 2014 at 5:55 am
    Have you looked at the graphs that I showed? The need for the adjustment is obvious.

    Nick, inspection of your first chart at your link

    http://moyhu.blogspot.com.au/2014/08/adjusting-amberley-as-it-must-be.html

    suggests that you must warm Samford before 1980 and cool it afterward. It shows the exact opposite pattern of Amberley, and it stands out with double the trend of the other two sites.

    Was Samford adjusted downward? If so, cheers. If not, why not?

  71. Whenever one has many data points with a certain error in measurement, then one takes the average in order for the too high measurements to offset the too low measurements.

    What one CANNOT do is take the too low measurements, move them upwards to better match the average, then recalculate the average.

    Latitude is correct. If the microclimate data does not change the trend in the overall average, then the data needs no adjusting, res ipsa loquitur.

    Remember that we are talking about 0.1 degree or less per decade. It has been demonstrated that in most cases past temps were moved downward and more recent temps moved upward. That will indeed move the average trend enough to see at a resolution of 0.05 degrees.

  72. I wonder if the people behind this would be OK with the idea that their pay packet should be reduced because they “might” not have been coming to work, or would they insist that their employer prove they did not?
    Just when you think climate “science” cannot have lower academic standards than it already does, they prove you wrong.

  73. In what other area of science are you allowed to make up data if your actual data is no good? Wouldn’t it be scientifically more rigorous to discard bad data? Isn’t it better practice to start tracking a site as new if it’s relocated, instead of maintaining the fiction that the data is continuous?

  74. “What you DONT HAVE is answers.”

    But “you guys” are perfectly OK going in front of Congress and the world with policy recommendations based on something like this? You might be right. Or you might not. That station might have moved, unless it didn’t…

  75. Seems to me that ALL of these adjustments require that certain UNPROVEN ASSUMPTIONS be made. Mosher’s explanation clearly shows that:

    What you DONT HAVE is answers. You have choices

    1. assume the metadata is correct …

    2. Assume that the metadata is incorrect or incomplete …

    Yes, I read the rest of it. The problem is here: “ASSUME”. In either case there is no evidence.

    How about – don’t assume anything and accept that we really don’t know?

    But I guess you don’t get grants for that.

  76. Oh, my. There is a serious problem with Nick’s explanation of Amberley.

    If the site was moved in 1980, then only the data before 1980 would need to be adjusted so that you could get a trend line. If pre-1980 the thermometer reading was .7 too high, then you, Nick, are saying that it was .7 too high compared to the post-1980 readings after the thermometer was moved. But that is not what you are saying. You are saying it was originally in a spot that was .7 too warm, and then in 1980 it was moved to a spot that was .7 too cold.

    If you adjust both the pre- and post-1980 numbers, you are saying BOTH locations were inaccurate compared to some imaginary midpoint. Mathematically that seems OK, but in reality it doesn’t work unless, by some improbable chance, the thermometer was moved from one location that was exactly .7 too warm to a location that was EXACTLY .7 too cold at exactly August 1980. Possible, but unlikely.

  77. Let’s take it for granted that stations move, sometimes without documentation. Am I nuts, or is it reasonable to assume that over time stations like this will move to areas where they measure higher temps, to areas where they measure lower temps, and to places where they measure basically the same temps? If this is what happens, then the changes average out over time and there is no need for adjustments of any sort. However, what I keep seeing is adjustments only in the direction of emphasizing warming. That does not make sense: there should be adjustments to lower temps too. And they should balance out over time, and once again, no need for any adjustment. About the only place I can see that adjustments might be necessary is where UHI is growing around the station, and then the adjustment should be down.

  78. Sometimes they move without actually moving:

    Analysis of the 100-year record at the station shows a cooling of 0.35C in the raw data had become a 1.73C warming after “homogenisation” by BOM.

    A review of the data by independent scientist Jennifer Marohasy shows the warming trend had been achieved by progressively dropping temperatures from 1973 back to 1913.

    For 1913 the difference between the raw temperature and the BOM homogenised figure was 1.8C.

    BOM said the discrepancy in the data was consistent with the thermometer site moving from a farm building on a small hill outside the town to its current location on low-lying flat ground. Minimum temperatures are normally higher on slopes than on flat ground or in valleys.

    However, the official catalogue of all stations used to make up the national temperature record says the Rutherglen thermometer is an automatic weather station in the grounds of a research farm, 7km southeast of Rutherglen. Not only has the station not moved since being established in 1913, it’s “well outside the town area, on flat ground over grass but with low hills a few hundred metres to the north”.

    “There have been no documented site moves during the site’s history,” it says.

    BOM has so far been unable to explain that discrepancy.

    Retired scientist Bill Johnston, who has worked at Rutherglen, said a temporary thermometer had been put on higher ground near the office of the farm but it never provided temperatures to the bureau.

    http://www.theaustralian.com.au/national-affairs/climate/climate-records-contradict-bureau-of-meteorology/story-e6frg6xf-1227037936046

  79. Location is a physical reality, a specific longitude and latitude.
    Temperature is man’s measurement at that specific location.
    Temperature Data are strictly numbers.
    Metadata is information about data, preferably as comprehensive as possible.
    Location name is transient metadata.

    Change any of these, for any reason, and it affects all of them!
    Change the temperature data because Stokes believes it is right, and the number(s) are no longer temperature data for a specific location!

    Yeah, world climastrologists believe they have the right to muck with temperature data; that still does not make the action correct and it most especially does not make temperature data accurate or usable.

    In fact it makes the data unusable, completely.

    Data collection wrong? Fix it!
    Old data is skewed? Don’t use it! Be prepared to define exactly why in explicit detail.
    Sensor is bad? Data is bad, don’t use it!
    Data contradicts other data? Find out why! Explicitly! And then fix the problem if there is really a problem!

    Climastrology’s insistence on using adjusted and re-adjusted temperature data proves the old maxim: “Garbage in, garbage out!”


    Nick Stokes August 26, 2014 at 7:26 am

    JustAnotherPoster August 26, 2014 at 6:45 am
    “The data is what it is.
    Just because its slightly different to other stations doesn’t make it wrong.
    There might be something about the stations microclimate, or anything.”

    The data is what it is for that particular combination of micro-sites.

    Adjustments are made for spatial averaging. We aren’t interested in the microclimate. We want to use Amberley as representative of a large area around. Amberley experienced a 1.4°C drop in August 1980, but the region didn’t, as shown by the other three stations. So you don’t want to project that onto the whole area…”

    From JoNova’s post on Graham Lloyd article in the Australian:
    “…Amberley is near Brisbane, which also shows a cooling raw trend, though other neighbours like Cape Moreton Lighthouse, Bundaberg, Gayndah, Miles, and Yamba Pilot Station have an average warming trend. (See Ken’s Kingdom.) NASA’s Goddard Institute also adjusts the minima at Amberley up by homogenization with other stations. But the radius of those stations is nearly 1,000 km…”

    Even if temperature adjustment made any sense, Amberley’s adjustments are still irrational.

    Let’s repeat for clarity:
    If the physical location actually changes, then treat the new location as a new location. The record for the old position is now discontinuous unless and until a new sensor is placed in the old location.

    Sensor is bad? When a sensor is replaced, the old one is tested alongside the new sensor and irregularities are recorded. This does not allow data adjustment, but the metadata will identify corrupted or possibly corrupted data to exclude. Corrupted data should also go into a database to better identify when a sensor declines.

    Use of ‘adjusted’ data is using corrupt data.

    From an outsider’s data viewpoint, it appears that climastrologists are adjusting data to further political ends: every time the data is used, the results are not strictly science but inferred future disaster. Urgent action is demanded now, yet the science behind the reports and their data collection and storage procedures is near stone age.

    Were it otherwise, when a ‘climate’ graph is presented it would be easy for other scientists, degreed or not, to view all aspects of the data: the original data separately, and then with all adjustments included to build the graph.

    How anyone can look at any of the current climastrology outputs and make a decision based on actual facts is beyond rational.


  80. Mario Lento
    August 26, 2014 at 12:32 am

    Mosher clearly has his BEST interests in mind…

    In fact BEST’s results were used as a defense for the BOM corrections. What bugs me isn’t the homogenization so much as the apparent alteration of individual station records. Homogenized data should only be used to create a separate table of data entirely, area weighted, and not linked to any specific station(s). The resulting records can be tied to a centroid defined by the polygon delimited by the locations of the various stations used in the local homogenization.

    Also, the entire correction debacle seems to ignore the fact that if there is a global trend in a phenomenon, then that trend is going to be present in any series of measurements of that phenomenon, regardless of whether the raw data is corrected for measurement biases like TOBs and step changes or not. The only measurement problem that could seriously affect a properly done trend estimate would be an instrument’s sensitivity to the phenomenon changing over time. Step changes would only be a problem if trends were calculated over the step rather than treating the step as a discontinuity. In fact the timing, direction and extent of step changes should be analyzed separately, since they can offer distinct and independent data on the phenomenon.
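A quick sketch of that last point, using entirely made-up numbers (not BoM or BEST data): fitting a single trend across an uncorrected step change manufactures a trend that neither segment actually has, while treating the step as a discontinuity recovers the (here, flat) underlying behaviour.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1950, 2011)
# Synthetic station record: no underlying trend, just noise...
series = rng.normal(0.0, 0.2, years.size)
# ...plus a +1.0 C step (e.g. an undocumented site change) in 1980.
series[years >= 1980] += 1.0

# Trend fitted blindly across the step:
across = np.polyfit(years, series, 1)[0]

# Trends fitted separately on each side of the discontinuity:
before = np.polyfit(years[years < 1980], series[years < 1980], 1)[0]
after = np.polyfit(years[years >= 1980], series[years >= 1980], 1)[0]

print(f"across step: {across:+.4f} C/yr")  # spuriously positive
print(f"before step: {before:+.4f} C/yr")  # near zero
print(f"after  step: {after:+.4f} C/yr")   # near zero
```

The step itself (its timing and size) is then a separate datum, exactly as the comment suggests, rather than something smeared into the trend.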

  81. Jennifer Marohasy Puts BOM On The Chopping Block
    Posted on August 26, 2014 by stevengoddard

    HEADS need to start rolling at the Australian Bureau of Meteorology. The senior management have tried to cover-up serious tampering that has occurred with the temperatures at an experimental farm near Rutherglen in Victoria. Retired scientist Dr Bill Johnston used to run experiments there. He, and many others, can vouch for the fact that the weather station at Rutherglen, providing data to the Bureau of Meteorology since November 1912, has never been moved.

    Senior management at the Bureau are claiming the weather station could have been moved in 1966 and/or 1974 and that this could be a justification for artificially dropping the temperatures by 1.8 degree Celsius back in 1913.

    http://stevengoddard.wordpress.com/2014/08/26/jennifer-marohasy-puts-bom-on-the-chopping-block/


  82. Steven Mosher
    August 26, 2014 at 8:25 am

    They are called undocumented station moves. Happens all the time.
    Some of the most unreliable data folks have is the metadata.

    There are also metadata errors that are the result of “apparent” location changes that are pure humbug. Not long ago there was a discussion regarding a station on Long Island that in reality moved about 10 meters max. The metadata has it hopping all over a half-mile square area. The only real change was the installation of an automatic system in the same yard, a few meters away from the original Stevenson screen. The move would have been (barely) detectable with a consumer grade gps unit. No automatic (computerized) system on the planet will be able to determine this.

    ….

    Bottom line: you get the same global answer whether you include cooling stations or not,
    whether you adjust them or not. Given the absence of any physical mechanism to explain how
    one patch of earth can cool while the rest warms, given that the metadata record is not
    God’s word, and given that the global answer doesn’t change, it is reasonable and justifiable to apply Occam’s razor and assume that the metadata missed a station move. It’s the simplest explanation.

    The real bottom line, from a methodological perspective, is that if what you say above is true, there is no justification for any “adjustment” of the data.

  83. “Have you looked at the graphs that I showed? The need for the adjustment is obvious.”

    Did you read anything at all that I wrote?

    And clearly, we have very different meanings for the word “obvious”.

    There are three issues here: a) identifying a “rejectable” data outlier on the basis of some objective statistical criterion; and b) instead of rejecting it as an outlier, claiming that you can fix it; and c) once you’ve fixed it, including it in the data averages and error estimates with the same weight.

    a) is basically impossible. As the wikipedia page on data outliers points out, identification of outliers is an essentially subjective process (by which they mean that one cannot justify it with the a priori application of statistical principles, one has to at some point make a subjective choice as an implicit Bayesian prior). If one makes the usual assumption of a smooth unimodal normal distribution, one basically cannot possibly argue that given four samples as in your graphs above, one of those samples qualifies as an “outlier” simply because the trends do not agree. You don’t a priori know what the correct trend is, or what the correct variance should be, for the four sites you select. Consequently, you cannot make a quantitative statement for how likely it is that your correction is correct, or what the distribution of reasonable corrections might be.

    b) OK, so according to your subjective beliefs, it is an outlier even with only four samples. So reject it. Don’t claim that you can fix it, and acknowledge that in doing so, you cannot really shrink the variance as much as you would like. The existence of the outlier and the lack of metadata means that you cannot be certain that you understand either why it is different or why it appears perfectly reasonable and well-formed and yet cannot be right. Your evidence that it isn’t right is weak at best and can equally easily be interpreted as evidence that the other three sites are deviating the other way systematically from a “true” behavior somewhere in between. In this case just going from four samples to three is going to substantially lower N-1 (from 3 to 2) but again, the resulting sample standard error is going to be too large as it fails to account for the Bayesian prior probability that your rejection is in fact justified. It might not be, and you cannot be certain that it is.

    c) But whatever you do, don’t try to fix it! This adds several more degrees of freedom — the “fit parameters” of your fix, in this case two independent numbers. You now have several Bayesian (subjective) assumptions — that the rejected data is in fact an outlier, that the other data is in fact accurate, that the rejected data is an outlier for a specific, modellable cause (less likely than it is an outlier for any possible cause) and that the rejected data can be “fixed” by optimizing the model-adjusted data against the remaining unrejected data. This process by definition is going to (on average) preserve the mean trend of the unrejected data, since that (or something extremely similar) is the criterion you optimize against. If you then use the fixed data to form the mean/determine the trend it is a self-fulfilling prophecy from the unrejected data — you merely affirm your subjective beliefs by optimally fitting the rejected data to conform to them. This is fine, but you can hardly blame people for not agreeing with your subjective beliefs.

    The problem arises when one tries to form the standard error from the data including the fixed data. That data came with a triple price tag of Bayesian assumptions, each one of which should have reduced your certainty that your fix was correct. It certainly no longer counts as an “independent and identically distributed” sample from the point of view of evaluating standard error. You have gained nothing in the way of certainty along the way. You’ve simply substituted a subjective decision to deliberately reduce the variance of your samples around the original mean, the one that conforms to your subjective expectations, for the objective variance of the actual samples around an intermediate mean, or the objective variance of the reduced number of unfixed samples around their even less reliable mean.

    The point being that your corrections could be correct. It might even be the case that they are probably correct, although I think you’d be hard pressed indeed to make that a quantitatively defensible assertion (which all by itself should give you pause, by the way — perhaps one can, as Mosh asserts, use global metadata from many other sites to justify a posterior model used to “fix” the data, but that seems very dicey to me as it makes a lot of assumptions about local spatial homogeneity of temperature trends that I’d be very skeptical about just based on looking at temperature measurements sampled at different sites in my own back yard over time, let alone sampled in different yards tens of miles or more apart).

    rgb
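A toy numeric illustration of point (a) above, with four hypothetical trend values (assuming nothing about the real stations): with only four samples, a conventional z-score outlier criterion cannot fire at all, because by Samuelson’s inequality the largest possible z-score from n samples is bounded by (n − 1)/√n, which is 1.5 for n = 4. Any rejection is therefore a prior belief, not a statistical result.

```python
import numpy as np

# Four hypothetical station trends (C/decade); the last one "looks wrong".
trends = np.array([0.12, 0.15, 0.10, -0.20])

# z-scores using the sample standard deviation (ddof=1):
z = (trends - trends.mean()) / trends.std(ddof=1)
print(np.abs(z).max())  # ~1.49

# Samuelson's bound: no z-score can exceed (n - 1) / sqrt(n).
n = len(trends)
bound = (n - 1) / np.sqrt(n)
print(bound)  # 1.5 for n = 4
```

So even a lenient 2-sigma rejection rule can never trigger here, let alone 3-sigma, no matter how “obviously wrong” the fourth value looks.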

    • rgbatduke commented

      The point being that your corrections could be correct. It might even be the case that they are probably correct, although I think you’d be hard pressed indeed to make that a quantitatively defensible assertion (which all by itself should give you pause, by the way — perhaps one can, as Mosh asserts, use global metadata from many other sites to justify a posterior model used to “fix” the data, but that seems very dicey to me as it makes a lot of assumptions about local spatial homogeneity of temperature trends that I’d be very skeptical about just based on looking at temperature measurements sampled at different sites in my own back yard over time, let alone sampled in different yards tens of miles or more apart).

      The really devastating point of this is that if you don’t do all of this hacking to the only data we have, the results are different!

  84. Pretty amazing how Mosher and Stokes emphatically claimed the site moved. Then we get two people who worked there, and both say it did not move. WHOOPS. The real world isn’t a flawed computer program. Fix your formula, because it is downright horrendous. Are you going to fix your horrendous formula, Mosher or Stokes? I doubt it, as you both seem to believe your formula is God, even though it once again completely whiffed at real-world analysis.

  85. Maybe there was no step change because the site slid slowly down the hill. So slowly the folks that worked there didn’t notice that it had moved.

  86. Duster August 26, 2014 at 1:11 pm
    “What bugs me isn’t the homogenization so much as the apparent alteration of individual station records. Homogenized data should only be used to create a separate table of data entirely, area weighted, and not linked to any specific station(s). The resulting records can be tied to a centroid defined by the polygon delimited by the locations of the various stations used in the local homogenization.”

    That’s pretty much what is done. People here complain that the BoM and others are altering the record. In fact what they produce is a separate, clearly announced adjusted file, and for some reason sceptics are drawn to it like moths to a flame and don’t even want to know about the unadjusted data. BoM announced a specific set, ACORN, which is intended for the area use you describe, which is basically a spatial integration.

    You say that the station name should not then be used. Well, what’s in a name? I actually agree with you; keeping the name is easier for them to remember, but it causes more trouble than it is worth.

  87. Jared August 26, 2014 at 2:54 pm
    “Pretty amazing how Mosher and Stokes emphatically claimed the site moved. Then we get 2 people that worked there and both say it did not move. WHOOPS.”

    Different place

  88. I see the usual suspects are here arguing that changing the recorded data to suit their warmist religion is the only way to do it. It is astounding to see people defend blatant wrongdoing and pretend that they are “doing science” when they are destroying the very idea of science.

    Karma my friends. One hopes they get what they deserve.

  89. Mosher, you have another choice: you can simply discard the dodgy data instead of making adjustments based on unknowns.

    That’s what a high-quality statistical analysis would do in any case. But then again, a high-quality analysis wouldn’t try to create a meaningless single “global temperature” statistic in the first place…

  90. rgbatduke August 26, 2014 at 2:44 pm
    “c) But whatever you do, don’t try to fix it! This adds several more degrees of freedom — the “fit parameters” of your fix, in this case two independent numbers. “

    I don’t think you are taking account of the purpose of homogenization. It’s for spatial integration. Most degrees of freedom will disappear.

    We aren’t trying to “fix” Amberley. We’re trying to get a series that is representative of the subregion, for integration. So when there is an outlier that looks as if it is caused by something that isn’t related to the region climate, we switch to relying on other data for some period in that sub-region. I agree with Duster that it would be better to give it another name.

    “b) OK, so according to your subjective beliefs, it is an outlier even with only four samples. So reject it. “
    Mine is just a rustled up calc – I’m sure BoM would use more than four.
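For what it’s worth, the difference between “fixing Amberley” and “not letting one suspect station corrupt the regional average” can be sketched with made-up numbers (a toy with hypothetical anomalies, not the BoM/ACORN procedure): for the suspect period, the regional estimate simply relies on the other stations instead of adjusting anything.

```python
import numpy as np

# Hypothetical annual anomalies for four neighbouring stations (rows)
# over ten years; station 0 suffers a -1.4 C step from year 5 onward.
rng = np.random.default_rng(1)
regional = rng.normal(0.0, 0.1, 10)                      # shared regional signal
stations = np.tile(regional, (4, 1)) + rng.normal(0, 0.05, (4, 10))
stations[0, 5:] -= 1.4

# A naive regional mean drags the whole area down after the step:
naive = stations.mean(axis=0)

# Instead, mask the suspect segment and average the remaining stations:
masked = stations.copy()
masked[0, 5:] = np.nan
regional_est = np.nanmean(masked, axis=0)

print(naive[5:].mean() - naive[:5].mean())                # spurious ~ -0.35 C drop
print(regional_est[5:].mean() - regional_est[:5].mean())  # near zero
```

Note this still involves the subjective call rgb describes (deciding the segment is suspect); the point is only that the station series itself is left untouched.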

  91. What adjustments were made for the UHI effect, which is a much more well known, documented and widespread problem?

    Truth is a two way street.

  92. Nick Stokes claims: “We’re trying to get a series that is representative of the subregion, for integration.”

    Pray tell, by what magic can a series that diverges materially from those at neighboring stations be made “representative” of an uncircumscribed “subregion” whose spatially integrated temperature field is unknown? Such grossly non-conforming series should be simply discarded! But that would leave even greater gaps in usually sparse geographic coverage. Let’s get real: the patent purpose of ad hoc “homogenization” schemes is to maintain the pretense of adequate coverage, while introducing a surreptitious means of manipulating the “trend.”

  93. [snip - be nice- Anthony] like Nick Stokes don’t actually live in the real world. I, as a farmer with a science degree, am able to apply theory and reject it when it does not match reality. On my smallholding I have 3 thermometers in 3 different locations, all within 500 metres of each other; they vary by as much as 15% on some days. Last summer there was a day when the exposed thermometer near my sheds and cement was 44 degrees Celsius, whilst the one about 250 metres away near the septic water disposal area was 39 degrees Celsius (and not in shade), whilst the third thermometer near the horse stables was 41 degrees. All had been calibrated within the last 3 months. That’s just one example of many.

  94. Tom In Indy August 26, 2014 at 10:07 am
    “Was Samford adjusted downward? If so, cheers. If not, why not?”

    Samford has never been adjusted. The only adjustment done is in preparing the special ACORN set for spatial averaging.

  95. Nick: “So when there is an outlier that looks as if it is caused by something that isn’t related to the region climate, we switch to relying on other data for some period in that sub-region.”

    It sounds rather subjective to me. Who evaluates the looks?

  96. @Duster: Thank you for the response. Mosher slices in “poorly sited” stations that don’t show enough warming or show cooling. Way back in 2013, I responded to the nonsense!

    Mario Lento at 12/13 3:27 pm
    @Steven Mosher at 12/12 10:53 pm
    Mosher wrote “Regarding “3. Next I wanted to use methods suggested by skeptics””
    +++++++++++++
    When did skeptics say the stations with poor siting should be subjectively sliced and added to [the] mix so their warming could fit the narrative?

    Neither BEST nor you [Mosher] have ever honestly addressed why, if only urban areas show warming while rural areas don’t, you could slice in the poorly sited urban stations to make their [so called] “crap” values warm the entire temperature record.
    ++++++++++

    BEST looked for a presumed conclusion and then invented improved data to prove they were right all along. I find it so sad that smart people can collude so disingenuously.

  97. Steven Mosher December 12, 2013 at 12:18 pm
    There are no adjustments.
    There is the raw data if you like crap.
    There is qc data
    There is breakpoint data.
    Then there is the estimated field.

    We dont adjust data. We identify breakpoints and slice.
    Then we estimate a field.
    +++++++++++++
    And here he admits what he does! In summary: the value in BEST’s conclusions is based on data which begins as “crap”, gets adjusted by three different (trusted?) sources, and, for value-added BEST science, is then sliced and estimated so it can be served with conclusions that include “CO2 accounts for the warming”. This does not sound like science, but politics. Shame!
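For context, a minimal sketch of what “identify breakpoints and slice” means (a toy two-segment mean-shift search on synthetic data; BEST’s actual scalpel method is far more elaborate): find the split point that best explains the series as two constant levels, then treat the two segments as separate records rather than adjusting any values.

```python
import numpy as np

def find_breakpoint(y, min_seg=3):
    """Return the index k that minimizes residual variance when the
    series is modelled as two constant segments y[:k] and y[k:]."""
    best_k, best_cost = None, np.inf
    for k in range(min_seg, len(y) - min_seg):
        cost = (((y[:k] - y[:k].mean()) ** 2).sum()
                + ((y[k:] - y[k:].mean()) ** 2).sum())
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

rng = np.random.default_rng(2)
y = rng.normal(0.0, 0.1, 40)
y[25:] -= 1.0  # an abrupt -1.0 C shift at index 25

k = find_breakpoint(y)
print(k)  # the detected break, at or very near index 25
segments = [y[:k], y[k:]]  # "slice": two records, no values altered
```

Whether slicing-then-estimating-a-field amounts to “no adjustment”, as Mosher claims, is of course exactly what this thread disputes.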

  98. Mosher says:
    “Well, for number 1, the first thing you do is update the metadata so you dont repeat the problem in the future. And you watch the sites that exhibited this weird behavior. You also would try to develop a physical
    theory that explained how a patch of earth can cool for decades while a few km away things warmed.”
    ++++++++++++
    Let me fix this for you, with my emphasis in [brackets]

    Well, for number 1, the first thing you do is update the metadata so you dont repeat the problem [of unexplained cooling] in the future. And you watch the sites that exhibited this weird behavior [that does not show the warming we expected]. You also would try to develop a physical theory that explained how a patch of earth can cool for decades while a few km away things [affected by UHI] have warmed. [You know the warming is expected, so these stations will be used to fix the cooling]. [The results can be used to prove that only warming can occur, because of CO2 forcing]. [This is settled science after all.]

  99. Steven Mosher August 26, 2014 at 8:25 am
    They are called un documented station moves. Happens all the time.
    ++++++++++
    This needs fixing:
    Steven Mosher August 26, 2014 at 8:25 am
    They are called [ILLEGAL] station moves. Happens all the time.
    [there it's politically incorrect now]

  100. Nick Stokes

    “The BoM got this one right. The need for the adjustment is very clear from neighbouring stations. I’ve done the analysis here”.

    I went to Nick Stokes’s site and was impressed by his work and his replies to bloggers. What I did not find, however, was any evidence as to why the raw data for Amberley, which was out of line with the other three stations he selected, fell into line in August 1980. It is all very well supposing that there was a station change but what physical evidence is there to support the supposition? For example, perhaps the other three had station changes at about that time and Amberley was the only one that did not. Highly unlikely, I know, but it is a hypothesis that needs to be disproved.

    One thing that any half decent statistician learns is never to discard an outlier without first finding out why it is an outlier. The second is never try to adjust the outlier to make it fit the pattern even if one has found the reason why it is an outlier – just do not use the data.

    Interpolation can destroy information. One can (and I have in the past) fiddle(d) many figures through judicious interpolation.

    • Yeah. What I said too. And even discarding outliers when you THINK you know a reason is deadly dangerous, unless you are gifted with perfect prior knowledge.

      The bête noire of empirical human reason is confirmation bias. There are some truly (in)famous examples of confirmation bias (and its statistical companions, data dredging, cherry-picking, etc.) producing horrendous conclusions that were eventually soundly rejected when somebody sane and unbiased re-examined the problem.

      Rejecting outliers — especially outliers that fail to conform to a belief about the way the data should behave — can easily be data dredging and cherrypicking both in disguise. The only “safe” way to do it is via an unbiased algorithm that works completely automatically and that can be shown in application to simulated data to actually work without bias (not just theoretically be without bias). Mann’s hockey stick was built via a selection process that rejected data as “unsuitable” or an “outlier” if it failed to conform to his prior beliefs about what tree rings were supposed to proxy. Again, there was a huge Bayesian assumption built right into the algorithm, one that took simulated random noise and turned it into hockey sticks and that would have been soundly rejected as a posterior probability if anybody had bothered to do this sort of analysis ahead of time.
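The screening fallacy described here can be demonstrated in a few lines (pure synthetic noise, nothing to do with any real proxy network): keep only the random series that happen to correlate with the shape you expect, and their average acquires that shape.

```python
import numpy as np

rng = np.random.default_rng(3)
n_series, length = 1000, 100
noise = rng.normal(0, 1, (n_series, length))  # pure noise, no signal
target = np.linspace(0, 1, length)            # the shape we "expect"

# Screen: keep only series that correlate with the expected shape.
corr = np.array([np.corrcoef(s, target)[0, 1] for s in noise])
kept = noise[corr > 0.2]

print(len(kept), "of", n_series, "random series pass the screen")

# The average of the survivors now shows a positive "trend":
slope = np.polyfit(np.arange(length), kept.mean(axis=0), 1)[0]
print(slope)  # clearly positive, by construction of the screen
```

The unscreened average of all 1000 series would, of course, show no trend at all; the “signal” is manufactured entirely by the selection step.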

      It also — literally — overrides the very power of statistics that one hopes to exploit when forming averages in the first place. The default assumption in most experimental science is that when one makes measurement errors, they are as likely to be too large as too small. Hence if one samples many times, one homes in on the true mean. In some cases, one can even support this assumption via the central limit theorem if the measurements themselves are in some sense an “average” of some microscopic process and hence are already likely to be normally distributed according to the CLT.

      There are, of course, examples of cases where a measurement has a systematic bias. Somebody always rounds down, never up. A piece of dynamic measurement apparatus is “sticky” and never reads as high as it should. The problem even there is that it is supremely dangerous to assume that you know how to correct it. Perhaps another person always rounds up, never down! Perhaps a different apparatus in a different location is sticky the other way and never reads as low as it should.

      The classic example of this in climate science is UHI. Note well: HADCRUT4, IIRC, does not correct for the UHI effect at all. GISS — from what I’ve read — has invented a “UHI correction” that actually warms the urban present compared to the rural past more often than it works the other way, which makes absolutely no sense, since the UHI should nearly always introduce a warming bias compared to non-urban (but nearby) sites. UHI is a specific example of an entire class of systematic biases that can result from weather station siting — another one is the horrendous placement of official weather stations at airports, often right next to concrete runways and underneath air that contains many times the average concentrations of CO_2 and H_2O simply because enormous jets burn thousands of gallons of kerosene every few minutes directly overhead as they take off and land. Again this is almost invariably a warming effect — active hot greenhouse gas production concentration plus solar heated runways a meter or so thick surrounded by shops, parking lots and car-filled expressways in no way compare to evaporatively cooled grassland surrounded by evaporatively cooled trees. All ignored in HADCRUT4 and turned into more warming of the present compared to the past in GISS.

      This warming bias can be seen at a glance on e.g. Weather Underground’s own personal weather station maps, in spite of the mediocre precision/accuracy of over-the-counter personal weather stations. I see it every day — the predicted weather (predicted to conform to the area “official” readings at RDU airport) is invariably 1-3 F warmer than the weather in my own back yard, or the temperatures reported by the many PWS in the surrounding rural countryside. The PWS temperatures in town are similarly warmer by a degree or two. You can even clearly identify hot side outliers — one sits only a mile or so from my house — where a PWS is systematically 4 or 5 F warmer than anyplace else, usually even RDU. I’m guessing that the PWS sits square over a south-facing driveway or that the thermometer is directly exposed to the sun. A second PWS, less than a mile from that one, reads numbers that are in reasonable agreement with my backyard and the general field of readings.

      Now consider — suppose one has four “official” weather stations that one wants to keep in the official record (perhaps because they have long-running records). Two of them are located at airports, one in a town, and the fourth is in an area park in the comparatively rural countryside. The two cities have, over the lifetime of the temperature record, gone from populations of a few thousand people riding horses to a few hundred thousand people driving cars to and from the houses that have covered the landscape for 100 square miles around. The airports have gone from handling a propeller-driven flight or two a day on a single runway to becoming local hubs servicing hundreds of flights, with hundreds of acres of tarmac and runway where maybe ten or twenty acres was all that was paved at the beginning.

      Along comes the automated “data homogenizer”. It notes that three of these sites have experienced substantial warming, and the fourth is neutral or maybe even cooled a bit. Aha! it thinks. An outlier! It rejects it, or worse, corrects it, and concludes that ten thousand square miles of surrounding countryside warmed like the cities simply because the thermometric record has a really, really significant urban bias today, and the major “official” anomaly computations correct the wrong way by both ignoring the UHI altogether and by “homogenizing” the record so that dissident sites — which might be the only ones reading the correct average temperature in spite of being outnumbered — are rejected in favor of the sites that outnumber them but have a common systematic bias.

      As I said way up at the beginning, somewhere (or perhaps on a different thread) — it is almost impossible to tell what the temperature is today compared to the temperature thirty, fifty, a hundred, two hundred years ago, on the basis of the thermometric record. We (apparently) cannot agree on how to handle the dominant source of systematic error in this record — the relentless urbanization of the places where thermometers, including/especially the most “official” government run thermometers with the longest running records, are located. The breaks visible when (some) of those thermometers are moved are pure evidence that they were not consistently reliable on either side of the breaks, nothing more. Many of the changes that affect their consistency are simply not visible in “metadata” — they aren’t discrete changes that come from resiting or poor siting, they are changes that come about because of gradual changes in the entire surroundings of the siting, slow changes that one cannot observe, measure, or correct for other than to note that we “expect” most of those changes to result in warming from many things — including, BTW, increased atmospheric CO_2. But some of that might be local increases. Some of it might come from alteration of groundwater retention as forestland is converted to farmland is converted to suburban backyards surrounded by shopping and business centers and expressways with acre after acre of pure asphalt. A nontrivial fraction of the land surface area of the U.S. (for example) is pure pavement, especially in selected urban zones.

      I read HADCRUT4 and GISS as being “the temperature anomaly, relative to an arbitrary set point evaluated in the present and accurate to no more than 1C either way, as computed by a biased model that ignores or enhances UHI warming by some unknown amount and presented without any error bars as if it is a simple fact, a fait accompli, beyond question or doubt”. I then mentally subtract a guesstimate for the ill-compensated UHI trend of a few tenths of a degree per century, add error bars that start at the HADCRUT4 acknowledged modern error of 0.15C and scale up smoothly with time into the past to end up close to maybe 0.5 to 1.0C by the mid-19th century, where the error starts being constrained by independent proxy measurements and a string of plausible but possibly mistaken assumptions as much as by the lack of thermometers. Remember, over well over half of the HADCRUT4 record we basically knew next to nothing about the sea surface temperature of the oceans and much of the land area of the major continents.

      By the time you properly dress the corrected curves with error bars, it is actually rather difficult to be certain it has warmed at all on a century timescale. Leif has indicated how they have systematically fixed systematic biases in the sunspot record. This latter record was made by responsible, competent scientists! Metadata was insufficient to make the correction, and even the corrected sunspot record doesn’t correctly reflect the absolute state of the sun! They were only able to manage this because they had four independent ways to measure solar state — not four sets of sunspot measurements, four independent methods — which was enough to use the others to cross-check and correct the sunspots. Suddenly solar activity no longer has a grand maximum.

      How improbable is it that we will eventually manage the same thing with the thermal record? How much of the “grand maximum” of temperature in the latter 20th century is a mix of systematic relative bias in the high frequency, highly accurate modern measurements compared to the less accurate and more sparse measurements made in the past, the uncorrected UHI effect corrupting the station data, the utter neglect and mistreatment of the SST component responsible for 70% of the Earth’s surface and the cavalier assumption that we have any knowledge at all about the temperatures in, say, Antarctica prior to the very recent past, if indeed we know them now?

      It won’t come easy, and it won’t come soon. Leif comments on how hard it is to get “grand maximum” solar researchers to quit because their funding depends on it now, even after the notion is pretty thoroughly rejected. GISS was under the tutelage of James Hansen for most of its evolution, and neglect of UHI is the least of its sins — GISS’s funding today is almost entirely a result of the predicted progressive grand maximum in global temperature — it just didn’t exist until he talked the US Congress into it after addressing them in a meeting with the capitol building air conditioning deliberately turned off, until an unknown person named Michael Mann wrote his own “special” version of PCA code that could turn a single series of bristlecone pine records from one part of the US into an international multicentury hockey stick. At this point, the record is so muddled that the only thing that could motivate an objective re-treatment of the data by (newly) objective researchers is a stretch of blatant and inexplicable global cooling, cooling too large to be “corrected away” in GISS or HADCRUT, cooling that is openly constrained by the harder-to-futz satellite LTT measurements and ARGO. Or perhaps enough years without much warming.

      In the meantime, who really knows what the temperature anomaly is compared to 1850? We see assertions that it is 0.7C or thereabouts (sometimes even higher) but that neglects the error both systematic and not. Throw in the error and it might not have warmed much at all or it might have warmed by far more, 1.5 C, say. Arguably, the systematic error would make less warming more likely than more warming, especially in the recent past compared to the intermediate past. Was the latter half of the 20th century really a lot warmer than the first half? It’s hard to say. Just about exactly 1/2 of the state high temperature records were set in a single decade, the 1930s, in the first half of the century, with considerably less of a contribution from the UHI. Even with “global warming”, those records still stand well into the 21st century. Arctic ice was documented as almost disappearing during that same general period (although not documented as well given the lack of satellites or airplanes capable of overflying the Arctic).

      In my opinion, we will never have a reliable picture of global temperatures prior to maybe 1950 or 1960. A more conservative person might even push that up to the mid-to-late 1970s and the beginning of satellite measurements, making the reliable record only 30-40 years long. We are decades away from having enough data to say much of anything about the climate that is not an artifact of neglected or mistreated bias, measurement error, and plain old statistical sampling error in the data and models used to estimate past temperatures (pardon me, past temperature anomalies, since we know we cannot estimate the global average temperature itself to within better than about a degree even now, in the modern era). This greatly complicates the science, the modeling, the politics, and the economics. We very probably are trying to build models that cannot possibly work (given what we know of chaotic nonlinear fluid dynamic systems, the limits on the scale of integration over the globe, the limits on our knowledge of the initial state, and the limits on our knowledge of many of the actual input numbers to those models), models that are nevertheless being tuned to try to reproduce the output of other models, which are in turn being tuned to reflect each other, and hence the collective bias of the people paid to build and maintain them, because they show warming that the other models predict will eventually have a catastrophic impact.

      Somewhere in there lies reality, but where? How would we even know? The bulk of “climate papers” are based on the predictions of models that cannot even reproduce the past or present temperature data outside of the tiny time period used as a reference/training set. The best thing to do is just wait. Time will tell. It usually does.

      And in the meantime, it might be nice to take the infinitude of thumbs off the scales, and not assume that we know the answer better than the data itself, well enough to krige, interpolate, homogenize, infill, backfill and, what the heck, just shift the data around wholesale, for everything but a sane treatment of the UHI.

      rgb

  101. Late to the game here, but I was too tied up yesterday to post…

    So, if I understand correctly, with the specific example of Amberley, as well as many other “outliers”, an undocumented site move is suspected of causing the seemingly aberrant behavior. What follows is a statistical approach to resolving the data into something that makes sense with the wider regional expectations/observations. But, as stated, why would these site moves cause different trends? Shouldn’t they merely change the y-intercept of the trend line? Do we really expect a gauge at the bottom of the hill to demonstrate a different pattern than at the top? And, furthermore, if we do expect a different behavior, how can it be accurate to just assume you know what it is? Shouldn’t we, like, empirically test it?
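    The intercept-vs-trend question can actually be checked numerically. A toy sketch (all numbers hypothetical, synthetic data, nothing to do with the real Amberley record): a constant station-move offset dropped into the middle of a series flips the sign of a naive straight-line fit, while fitting the trend and the step jointly (with the break date assumed known) recovers both:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    years = np.arange(1950, 2010)            # 60 years of synthetic data
    t = (years - years[0]).astype(float)
    true_trend = 0.01                        # degC/year, hypothetical
    clean = 15.0 + true_trend * t + rng.normal(0.0, 0.3, years.size)

    # An undocumented site move in 1980 adds a constant -0.8 degC offset:
    moved = clean.copy()
    moved[years >= 1980] -= 0.8

    # A naive straight-line fit to the moved series shows a *cooling* trend...
    raw_slope = np.polyfit(t, moved, 1)[0]

    # ...but jointly fitting intercept + trend + step recovers the warming:
    X = np.column_stack([np.ones_like(t), t, (years >= 1980).astype(float)])
    coef, *_ = np.linalg.lstsq(X, moved, rcond=None)
    fitted_trend, fitted_step = coef[1], coef[2]

    print(f"raw slope: {raw_slope:+.4f}  adjusted trend: {fitted_trend:+.4f}")
    ```

    The catch, of course, is that the second fit assumes you know the break date and that the change really was a pure offset; with neither documented, the “adjustment” is only as good as those assumptions.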

    Going back to the example of Amberley, my understanding is this physical site still exists. The suspected hill hasn’t been flattened. So, go stick another gauge on the hill and see what it does in relation to the one in the valley. Correlate the behaviors to each other. At least then you’d have some basis (other than your own confirmation bias) for adjusting.

    And before we exclaim in dismay that there are too many sites and records to do this, that it’s “just not feasible,” well, if the goal is simply academic, then I agree, probably not worth it. But if you’re advocating for massive societal changes that will literally impoverish millions, I think it’s reasonable to request a tougher standard. If you’re so worried that we’re responsible for sending the world into a death spiral, then get off your @$$ and go do some field work. Stop just playing with numbers on a computer.

    rip

  102. Nick Stokes
    August 26, 2014 at 5:50 am

    MarkW August 26, 2014 at 5:39 am
    “Nick,
    I love the way you assume that if a station doesn’t show what you believe it should show, it must be adjusted.”
    The thing is, it should be adjusted. You want the best estimate of the temperature. If the data shows a sudden change that is outside the expected variation, taking account of the history and neighbors, then it is very likely to be due to a move or other event.

    Moves happen. Sometimes an adjustment is wrongly made. But if the policy is right more often than it is wrong, then it should be followed. It’s better than doing nothing. And people like Menne do the statistics.

    In this case the graphs alone show that the adjustment is getting it right.
    *****************************************************************************
    I haven’t had time to read the whole thread so this might have been raised already.
    NO, we don’t want the best estimate of the temperature; what we want is the best temperature measurement. It may not be precisely accurate, but it is far more valid than an estimate based on a guess (probably a warmist guess at that).
    If there is no meta-data support – NO Changes

    SteveT

    • SteveT,
      “NO we don’t want the best estimate of the temperature, what we want is the best temperature measurement”
      I could have put it better: best estimate of the temperature in the local region. Because when you say best measurement, the question is, of what? The measurements themselves are fine, but it’s a question of what to make of them. We don’t know that they represent the same point; we don’t know otherwise. We assume continuity of location because we have no evidence to the contrary. But data such as Amberley’s, relative to its neighbours, is evidence to the contrary.

      And of course, location isn’t the only cause of inhomogeneity. A change of observation time for min/max would do it, though I suspect Amberley would not have used min/max.
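    The neighbour-comparison logic described above (which pairwise methods such as Menne & Williams formalize far more carefully) can be caricatured in a few lines: difference the candidate against the neighbour mean so the shared regional signal cancels, then scan for the split point that maximizes the shift in mean. A toy sketch on synthetic data only, not the BOM or NOAA algorithm:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 60
    regional = 0.05 * rng.normal(0.0, 0.5, n).cumsum()  # shared regional wander
    neighbors = np.stack([regional + rng.normal(0.0, 0.2, n) for _ in range(5)])
    candidate = regional + rng.normal(0.0, 0.2, n)
    candidate[n // 2:] -= 0.7   # undocumented move puts a step in the candidate

    # Differencing against the neighbour mean cancels the regional signal,
    # leaving station noise plus the step:
    diff = candidate - neighbors.mean(axis=0)

    def best_break(d, guard=5):
        """Crude scan: return the split point maximizing the normalized mean shift."""
        sd = d.std()
        return max(
            (abs(d[:k].mean() - d[k:].mean()) / sd, k)
            for k in range(guard, len(d) - guard)
        )

    score, k = best_break(diff)
    print(f"break detected at index {k} (true break: {n // 2}), score {score:.2f}")
    ```

    When a candidate disagrees this sharply with its neighbours, the case for some inhomogeneity is real; what the scan cannot tell you is whether it was the station or the neighbours that changed, which is where metadata (or rip’s parallel gauge) has to come in.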

  103. I agree with RGB, and would go further: this needs the KISS principle applied. The record is what it is, and there is no need to artificially change anything, especially using an algorithm that assumes and applies its own assumptions so that each specific period keeps changing. The simple thing is that most agree we have warmed slightly under a degree C over a century; we also know from simple observation that the bigger cities have a UHI that can be three to four degrees or more above rural temperatures.

    Those of us who have lived through many really hot summers in Australia know the difference between those hot summers and the last few years, which have been very mild indeed, apart from the BOM and CSIRO clamour when averaging or smearing desert heat into urban areas just to nudge them half a degree above a past historical “record”! But then we find the historical records have been adjusted down and the modern temperature record subjected to automatic adjustment.

    It seems to me that in the cities the UHI SHOULD boost any record by at least four or more degrees, but that is just not happening. They struggle, by artificial adjustments, to concoct these claimed half-degree “record” hot temperatures while ignoring any colder temperature “records”, and that in itself exposes the confirmation bias of those involved.

    Then, when you take a simple approach to global temperatures, it is apparent that world surface temperatures have not risen for 15 to 18 years, depending on the global temperature set used. The stupid thing is that the actual previous warming period is shorter than the present hiatus, which is likely to become a downward temperature trend.

    A suggestion for Mosher and Stokes: p**ing into a gale-force wind on the good ship Warming has now reached a ludicrous stage; time to pack it in and admit the obvious. The public don’t buy stupid propaganda, even when it is dressed up, mixed up and homogenized by algorithmic trickery!

Comments are closed.