Australia and ACORN-SAT

Guest Post by Willis Eschenbach

As Anthony discussed here, some Australian climate scientists think that there was an “angry summer” in 2012/13. Inspired by the necromantic incantations in support of the Aussie claims coming from the irrepressible Racehorse Nick Stokes, I went to take a look at the Australian temperature data. I found out that, in response to a host of complaints about their prior work, in March of 2012 the Australian Bureau of Meteorology (BoM) released a new temperature database called ACORN-SAT. This clumsy acronym stands for the Australian Climate Observations Reference Network – Surface Air Temperature (overview here, data here).

acorn-sat overview

It’s a daily dataset, which I like. And they seem to have learned something from Anthony Watts and the Surfacestations project: they have photos and descriptions and metadata for each individual station. Plus the data is well error-checked and vetted. The site says:

Expert review

All scientific work at the Bureau is subject to expert peer review. Recognising public interest in ACORN-SAT as the basis for climate change analysis, the Bureau initiated an additional international peer review of its processes and methodologies.

A panel of world-leading experts convened in Melbourne in 2011 to review the methods used in developing ACORN-SAT. It ranked the Bureau’s procedures and data analysis as amongst the best in the world.

and

Methods and development

Creating a modern homogenised Australian temperature record requires extensive scientific knowledge – such as understanding how changes in technology and station moves affect data consistency over time.

The Bureau of Meteorology’s climate data experts have carefully analysed the digitised data to create a consistent – or homogeneous – record of daily temperatures over the last 100 years.

As a result, I was stoked to find the collection of temperature records. So I wrote an R program and downloaded the data so I could investigate it. But just when I had gotten all the data downloaded and started my investigation, in the finest climate science tradition, everything suddenly went pear-shaped.

What happened was that while researching the ACORN-SAT dataset, I chanced across a website with a post from July 2012, about four months after the ACORN-SAT dataset was released. The author made the surprising claim that on a number of days in various records in the ACORN-SAT dataset, the minimum temperature for the day was HIGHER than the maximum temperature for the day … oooogh. Not pretty, no.

Well, I figured that new datasets have teething problems, and since this post was from almost a year ago and was from just after the release of the dataset, I reckoned that the issue must’ve been fixed …

… but then I came to my senses, and I remembered that this was the Australian Bureau of Meteorology (BoM), and I knew I’d be a fool not to check. Their reputation is not sterling, in fact it is pewter … so I wrote a program to search through all the stations to find all of the days with that particular error. Here’s what I found:

Out of the 112 ACORN-SAT stations, no less than 69 of them have at least one day in the record with a minimum temperature greater than the maximum temperature for the same day. In the entire dataset, there are 917 days where the min exceeds the max temperature …
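The check itself is trivial to code. My program was in R, but the logic can be sketched in Python; the station names and numbers below are made up, purely to show the mechanics:

```python
def count_inverted_days(rows):
    """Count records where the daily minimum exceeds the daily maximum.

    `rows` is an iterable of (station, date, tmin, tmax) tuples;
    records with a missing min or max (None) are skipped.
    """
    bad = {}
    total = 0
    for station, day, tmin, tmax in rows:
        if tmin is None or tmax is None:
            continue
        total += 1
        if tmin > tmax:
            bad.setdefault(station, []).append(day)
    return bad, total

# Toy records, not real ACORN-SAT data: one impossible day out of three.
records = [
    ("Cabramurra", "1962-07-01", 2.1, 8.4),
    ("Cabramurra", "1962-07-02", 5.0, 3.9),   # min > max: flagged
    ("Adelaide",   "1962-07-02", 12.3, 19.8),
]
bad, total = count_inverted_days(records)
print(bad)    # {'Cabramurra': ['1962-07-02']}
```

Run over all 112 stations, the same scan produces the per-station counts listed under THE RESULTS.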

I absolutely hate findings like this. By itself the finding likely makes almost no difference for most applications. These are daily datasets, with each station having around 100 years of data at 365 days per year, so the whole dataset has about 4 million records, and the 917 errors are about 0.02% of the data … but it means that I simply can’t trust the results when I use the data. It means whoever put the dataset out there didn’t do their homework.

And sadly, that means that we don’t know what else they might not have done.

Once again, the issue is not that the ACORN-SAT dataset had these problems. All new datasets have things wrong with them.

The issue is that the authors and curators of the dataset have abdicated their responsibilities. They have had a year to fix this most simple of all the possible problems, and near as I can tell, they’ve done nothing about it. They’re not paying attention, so we don’t know whether their data is valid or not. Bad Australians, no Vegemite for them …

I must confess … this kind of shabby, “phone it in” climate science is getting kinda old …

w.

THE RESULTS

Station, Bad days in record (w/ min. temperature exceeding the max. temp)
Adelaide, 1
Albany, 2
Alice Springs, 36
Birdsville, 1
Bourke, 12
Burketown, 6
Cabramurra, 212
Cairns, 2
Canberra, 4
Cape Borda, 4
Cape Leeuwin, 2
Cape Otway Lighthouse, 63
Charleville, 30
Charters Towers, 8
Dubbo, 8
Esperance, 1
Eucla, 5
Forrest, 1
Gabo Island, 1
Gayndah, 3
Georgetown, 15
Giles, 3
Grove, 1
Halls Creek, 21
Hobart, 7
Inverell, 11
Kalgoorlie-Boulder, 11
Kalumburu, 1
Katanning, 1
Kerang, 1
Kyancutta, 2
Larapuna (Eddystone Point), 4
Longreach, 24
Low Head, 39
Mackay, 61
Marble Bar, 11
Marree, 2
Meekatharra, 12
Melbourne Regional Office, 7
Merredin, 1
Mildura, 1
Miles, 5
Morawa, 7
Moree, 3
Mount Gambier, 12
Nhill, 4
Normanton, 3
Nowra, 2
Orbost, 48
Palmerville, 1
Port Hedland, 2
Port Lincoln, 8
Rabbit Flat, 3
Richmond (NSW), 1
Richmond (Qld), 9
Robe, 2
St George, 2
Sydney, 12
Tarcoola, 4
Tennant Creek, 40
Thargomindah, 5
Tibooburra, 15
Wagga Wagga, 1
Walgett, 3
Wilcannia, 1
Wilsons Promontory, 79
Wittenoom, 4
Wyalong, 2
Yamba, 1


150 thoughts on “Australia and ACORN-SAT”

  1. The following was extracted from the ABOM website, which could
    plausibly explain the apparent absurdity.

    ” Maximum and minimum temperatures for the
    previous 24 hours are nominally recorded at 9 am local clock time.
    Minimum temperature is recorded against the day of observation,
    and the maximum temperature against the previous day.
    If, for some reason, an observation is unable to be made,
    the next observation is recorded as an accumulation.
    Accumulated data can affect statistics such as the Date of the
    Highest Temperature, since the exact date of occurrence is unknown ”

    http://www.bom.gov.au/climate/cdo/about/definitionstemp.shtml

    IMO the ABOM and the CSIRO were once great organizations
    but have become a national disgrace.
    You are correct to be scornful and distrusting of what they
    currently produce.

    Ross

  2. Juliar may have some time off now to go sort this out …. and she should have the funding too ….

    …. in the money she lied about up-front.

  3. When the BOM first released its ‘Angry Summer’ report, my first question was; who came up with the term ‘Angry Summer’?
    Was it generated by an internal PR Dept at BOM? If it was, just who authorised the use of such a political, ‘on message’, emotive and deceptive title?
    If it was created by an external paid PR/Advertising company, I would be curious to learn just how much of the taxpayer’s money they were paid for what amounts to blatant propaganda.
    In either case just how much propaganda should we expect from the peak science organisation in Australia?

  4. Are they using regression to infill missing data from other stations (…and not following through with due diagnostics)? I found something like that while doing contract work many years ago. Of course there are many other possibilities…

  5. Completely OT but I’m looking for the Monckton v rgbatduke discussion on atheism if anyone remembers which article that was. I would love to share it with a friend.

  6. Charles Nelson says:
    “If it was created by an external paid PR/Advertising company, I would be curious to learn just how much of the taxpayer’s money they were paid for what amounts to blatant propaganda.”

    The Australian Climate Commission is paid $100 million per year to inform the country about climate change.

    Goebbels would be proud.

  7. charles nelson says:
    June 28, 2013 at 11:13 pm
    When the BOM first released its ‘Angry Summer’ report, my first question was; who came up with the term ‘Angry Summer’?

    They originally wanted to use “Cruel Summer” but Bananarama beat them to it. :)

  8. Australian science and research has completely lost its way and is now third-rate. I should know; I did my PhD there in 1982. It was good up until about 1985, when Keating and Dawkins started to destroy it. The results are plain to see today, with the scientists in BOM and the importation of really BAD scientists such as Lewy. Australian research has not produced anything of notable value for some time in the world context, compared to the 40s to 70s. Your typical scientists there these days are guys like Nick, Flannery etc.; you can work it out … a result of the Keating/Dawkins influence on higher education there (BTW I used to vote Labor). Its manufacturing base is about as shot as you can get, with nearly all manufacturing going abroad (i.e. the car industry). Australia will unfortunately have to stick to tourism, food production and entertainment, unless there are drastic changes, which will not happen anyway. Maybe Abbott, if elected (which I now doubt), could change things there, LOL.

  9. Willis’ eagle soars again. What is wrong with these people that they just can’t tell the simple truth. No grant is worth your soul.

  10. Please can an entrepreneur bottle up Australia’s ‘angry summer’ and sell it to us Brits. It’s difficult to find the right adjective for our summer so far – maybe ‘inconsiderate’ would fit.

  11. Well according to your list my hometown of Hobart has 7 days where the maximum is less than the minimum and some days it sure feels that way.

  12. Further to Walter Dnes post in the “about” PDF it says;

    “The ACORN-SAT homogenised temperature database comprises 112 carefully chosen locations that maximise both length of record and network coverage across the continent.”

    Yeah, I can guarantee they were “carefully chosen” too. Australian land area is ~7.692 million square kilometres, so that’s one station per ~68,700 square kilometres!

    /Sarc on
    Now that’s what I call granularity!
    /Sarc off

  13. Willis, here is a post about two BoM databases: http://kenskingdom.wordpress.com/2013/03/03/a-tale-of-two-records/ . Ken has been doing a great job analysing Australian temperatures, but more than that, he has developed a system for forecasting upper-air disturbances, leading to very accurate forecasts of precipitation many months ahead in the South East Queensland area. In several comments on his forecast posts I have noted his amazing (over 90%) accuracy (e.g. see here http://kenskingdom.wordpress.com/2013/06/03/june-forecast-update/#comments). BoM, which is filled with so-called “climate scientists” inputting data into programs on a supercomputer, often cannot do any better than me looking out the window at the falling rain they did not forecast. Here, http://kenskingdom.wordpress.com/category/temperature/ , he explains some of his method using mean sea level pressures and the second derivative of Tmin.
    I like your posts and try and read every one although I sometimes disagree as with your comment on my methane post.

  14. What are the error counts by year? I wonder if they are randomly distributed across time. In one sense it wouldn’t matter much, you still would have to be wary of the entire set.

  15. TimTheToolMan said @ June 29, 2013 at 12:40 am

    Well according to your list my hometown of Hobart has 7 days where the maximum is less than the minimum and some days it sure feels that way.

    Perhaps because it is that way. I live 40 minutes drive south of Hobart and it’s not unknown for the temperature at sunrise, usually the low point in any 24 hour period, to exceed the mid-afternoon temperature, usually the highest temperature in the day. Dunno about Wilson’s Prom, though. Fond memories of chasing bellbirds (and belle birds) in the bush there in the late 60s :-)

  16. Willis,

    Have a look at Charleville and see which record(s) they’re using.

    Charleville Met station started in 1942 (it was a major air base in WW2). But the Charleville Post Office started around 1875 and ran till 1959, so it overlaps the airport met station from 1942 till 1959. The post office is in town and backs on to the Warrego River – and Charleville up to 1959 likely didn’t generate a lot of UHI. Seems to me that the post office data has some “records” that overshadow those from the met station.

    /sarc on but CO2 was lower then /sarc off

  17. Thanks Willis. Further problems with Acorn have been described by me and others, but don’t expect BOM to take any notice. Here are some of the posts:

    http://kenskingdom.wordpress.com/2012/05/14/acorn-sat-a-preliminary-assessment/

    and

    http://kenskingdom.wordpress.com/2012/06/23/acorn-or-a-con-acorns-daily-adjustments/

    But even Acorn shows that Australia has had no warming for 18 years:

    http://kenskingdom.wordpress.com/2013/03/19/warming-has-paused-bom-says/

    Ken Stewart

  18. That strikes me as curious in the extreme. I wonder if the error is a timing error: the minimum actually belonging to the next or the previous day. A mix up of AM and PM perhaps? That ought to be easy to check by seeing if systematically changing the day for either the min or the max would make more sense.
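Ed’s proposed check, seeing whether re-pairing each min with an adjacent day’s max makes the inversions disappear, is easy to sketch. The numbers here are hypothetical, just to show the mechanics:

```python
def inversions(tmins, tmaxs):
    """Count index positions where the min exceeds the paired max."""
    return sum(1 for lo, hi in zip(tmins, tmaxs) if lo > hi)

# Hypothetical three-day series as stored under a 9 am reading convention.
tmins = [21.0, 24.5, 18.0]
tmaxs = [23.0, 23.9, 30.1]

as_stored = inversions(tmins, tmaxs)          # day 2: 24.5 > 23.9
# Re-pair each min with the max read the *following* morning and see
# whether the apparent inversions disappear.
shifted = inversions(tmins[:-1], tmaxs[1:])
print(as_stored, shifted)    # 1 0
```

If shifting one series systematically removes the inversions, that points at a date-attribution convention rather than bad readings.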

  19. Willis I question your term ”metadata”. I am a geologist and anything with meta in front means it has been subject to metamorphism ie., heat and pressure. So climate data subjected to heat and pressure?—- sounds about right in today’s political alarmist climate.

  20. The BOM clicks on to a new “day” at 9am (presumably when the sleepyheads roll into the office). It is quite possible, but quite misleading, for minima to exceed maxima for a 24 hour period given this. All it takes is a fast moving weather system, of which we get plenty on this vast continent.

    I guess whatever cutoff point they choose will introduce anomalies, but it also matters – in terms of real descriptions of the weather – that sunrise/sunset times vary by over four hours (cumulatively) where I live between seasons. In summer, at 9am the sun has been beating down for nearly four hours. In winter, it is just hitting the frost on the lawn with weak, slanting rays – although it has technically been up for a couple of hours. I realise that it doesn’t affect the statistics if consistently applied, but it does provide a warped picture of actual temperature patterns throughout the day, across the seasons.

    As for the BOM’s reliability, I hope that Geoff Sherrington and a few other hardy souls who have been working on the data for a long time on their own dime drop by to comment. All that rhetoric about state-of-the-art, peer reviewed, bla bla bla is just self-congratulatory nonsense. It is on a par with statements about the IPCC being a source of “gold plated” science. They have adjusted data (without explanation), changed goalposts, ignored evidence of obvious errors and campaigned relentlessly for CAGW for many years.

  21. Ed Zuiderwijk says: June 29, 2013 at 2:05 am
    “That strikes me as curious in the extreme. I wonder if the error is a timing error: the minimum actually belonging to the next or the previous day.”

    That is the convention. From Blair Trewin’s Tech Note:
    “The current Bureau of Meteorology standard for daily maximum and minimum temperatures is for both to be measured for the 24 hours ending at 0900 local time, with the maximum temperature being attributed to the previous day. This standard has been in place since 1 January 1964″

    The idea is that when you read at 9 am, the min is probably today and the max yesterday. It may happen that the previously read min is greater than that max. There is discussion as to how to incorporate the 9am reading to rationalize that, but there isn’t always a 9am reading recorded. Trewin’s note also describes various other min-max reading practices by non-BOM managed sites. I see that the big deviant here was Cabramurra, which was probably managed by the Snowy Mountains Authority. Two others are lighthouses.

  22. Yep. One thing is just about certain. If one finds that many totally blatant errors, there will be multiple numbers of other errors lurking which are not so obvious. Almost a law of nature! This is just the tip of the iceberg. The dataset is probably unfixable. Sad.

  23. The ACORN record has been adjusted so much that it is almost useless. This is the record for Bourke in Jan 1939, showing raw temperatures v ACORN temperature data. Note all of the higher temps (above 30C) have been adjusted downwards, some by 0.9C.
    Temps below 30C have been adjusted upwards by 0.1C.
    Can anyone see any reason/logic for this?
    Jan raw ACORN
    1st 38.9 38.4
    2nd 40.0 39.1
    3rd 42.2 41.9
    4th 38.1 37.9
    5th 38.9 38.4
    6th 41.7 41.5
    7th 41.7 41.5
    8th 43.4 43.0
    9th 46.1 45.7
    10th 48.3 47.9
    11th 47.2 46.8
    12th 46.2 45.8
    13th 45.7 45.3
    14th 46.1 45.7
    15th 47.2 46.8
    16th 46.7 46.3
    17th 40.0 39.1
    18th 40.1 39.1
    19th 40.0 39.1
    20th 41.9 41.7
    21st 42.5 42.1
    22nd 44.2 43.8
    23rd 36.7 36.5
    24th 40.3 39.2
    25th 36.6 36.5
    26th 29.4 29.5
    27th 29.3 29.4
    28th 28.8 28.9
    29th 30.6 30.5
    30th 35.6 35.4
    31st 38.6 38.3

  24. Oh great mayte, yeah o allright right mate….so says Nick Stokes. A O KAAAAAAAAY.
    Because at 9am the high is yesterdays and the low is todays….eeerrrrrrrrr uhhhhhhhhhhh on only 917 records. Whoooowwhhaaaaattttt!?!?!?! So, the weather stations are then wrong for all of the other records?

    Okay okay…what are they correct for then nick stokes….they are either correct only 917 times or they are completely incorrect. And, if they are only correct 917 times or they are completely incorrect why haven’t they been fixed yet?

    Australian science mate. Gooooooday

  25. Can I ask the obvious question and suggest that perhaps those are days when the temperature fell during the day/24 hour period, so that the maximum temperature was recorded before the minimum, rather than the more common other way around. Someone forgot to correct for this.

    This would make sense since some of the sites where this occurs more often are generally located near the sea, where it is known southerly changes come in and drop temperatures rapidly, especially on warm days.

    eg Wilsons Promontory 79, (southerly changes often)
    Cape Otway lighthouse 63 ditto.

    I grew up in Sydney where it is known you can get a southerly to drop temperatures 12 C in ~20 minutes.

  26. “thingodonta says:

    June 29, 2013 at 3:49 am”

    Good point. From my own experience, Christmas Day, 1998, Melbourne. I don’t recall what the temperature was at 9am that day (If that is when the TMin is measured) but by 12noon-ish, lunchtime, it was ~36c. By 2pm, it was ~12c simply due to a change in wind direction. While this may be nothing to note for a Melbournite, I recall it because it was my first Aussie summer (Having previously lived in Wellington, New Zealand). I had just got used to 35c+ days (Dry heat is OK, the humidity kills me), only to shiver, literally shiver, that afternoon and night.

  27. Confusedandinfuriated says:
    June 29, 2013 at 12:04 am
    “They originally wanted to use “Cruel Summer” but Bananarama beat them to it. :)”

    Fantastic! Best part is, I listened to the song 5 minutes ago! Crazy coincidence.

  28. gaelan clark says: June 29, 2013 at 3:32 am
    “Australian science mate. Gooooooday”

    Australian science is fine. The people running ACORN have to deal, as would anyone doing it, with records as collected (not by scientists) in the past. I’m sure they would prefer that the thermometers were read at midnight, without fail. But the cycle was different and they have adopted a convention to adapt this to the calendar day. Sometimes this causes apparent inconsistency. They can’t re-do the readings.

  29. I worked at Yeelirrie Station, a million-acre pastoral station in the middle of WA. Records and BOM data had been collected there for the last 80 years. In the winter of 2012 we were regularly the coldest spot, or one of the coldest, in the whole of WA; the nightly news would have to read out “Yeelirrie, minus whatever”. BOM contacted us about halfway through winter and informed us that our 6am temp readings would no longer be required. To me this seemed odd, but I guess they did not want regular minus readings in the middle of WA? What effect does this have on calculating mean temps?

  30. Nick Stokes says:
    June 29, 2013 at 4:20 am
    … They can’t re-do the readings.
    ——————————

    They “redo” the readings every month. Making the past colder and the current temps warmer.

    Everyone knows what a “day” is. If you miss a reading, it is missed. You do not make up the missed day with obviously incorrect data.

  31. Sorry but I have two questions….

    First (Nick)….why does it take an American, so disgusted with your “angry summer” crapola and so far removed from your entire operanda, to find inconsistencies, irregularities and just plain wierdisms within your very own network…which WAS supposed to be sterling?

    Second (anyone)….this is the 21st century, not 1869, we can automate temperature readings and take measurements without human eyes…why dont we?

  32. John Marshall

    Meta is a prefix used in English (and other languages with Greek borrowings) to indicate a concept which is an abstraction from another concept, used to complete or add to it.

    Metadata is hence data about data.

  33. DirkH says:
    June 29, 2013 at 4:19 am

    Confusedandinfuriated says:
    June 29, 2013 at 12:04 am
    “They originally wanted to use “Cruel Summer” but Bananarama beat them to it. :)”

    Fantastic! Best part is, I listened to the song 5 minutes ago! Crazy coincidence.
    – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –
    For those who do not know this song: (one of several versions)

  34. gaelan clark – there has been a whole team of Aussies (Kenskingdom above being one of them and Jo Nova another) that have been investigating the BoM and their methods. Prior to ACORN there was the HQ (High Quality) data yet when this team made an FOI request to BoM for the algorithms used to create the HQ data set the BoM initially ignored it but then replied that they no longer used the HQ set and now used the ACORN set so the algorithms weren’t necessary.
    Such is the arrogance of the BoM senior management.

  35. When the Melbourne Regional Office reports 7 topsy-turvy days I really don’t know what to say. This can’t be a mistake by rank amateurs.

  36. “Nick Stokes says:

    June 29, 2013 at 4:20 am”

    You do not need to be a “scientist” to read a thermometer, wind, pressure or any other kind of gauge device what-have-you, that has some form of visible indicator (Like a speedometer) of what the current state is for that particular instrument. And it’s completely ridiculous to suggest otherwise!

    Do you need to be a “scientist” to read a thermometer that has taken the body “temperature” of your child that indicates 42c to know that’s where the brain gets damaged?

  37. So, on some days the minimum temperature for the day shows higher than the maximum for the day, and we should just go with it – it is just the way it is done and it is easy to isolate these “anomalies”.

    But, what about all the other days when the minimum and maximums are also incorrect, just not “upside down”? How do we determine these “anomalies”? How does anyone know which days are accurate and which have some level of error from this collection/reporting methodology?

    I see Nick is here defending the indefensible once again. At least that is consistent. :)

  38. Ian George said @ June 29, 2013 at 3:24 am

    The ACORN record has been adjusted so much that it makes it almost useless. This is the record for Bourke in Jan 1939 showing raw temperatures v ACORN temperature data. Note all of the higher temps (above 30C) have been adjusted downwards, some by 0.9C.
    Temps below 30C have been adjusted upwards by 0.1C.
    Can anyone see any reason/logic for this?

    It’s the temperature adjuster’s job to adjust the temperature readings. If the temperature adjuster failed to adjust the temperatures, then the temperature adjuster would be out of work! Simples, really :-)

  39. jeremyp99 said @ June 29, 2013 at 4:30 am

    Well, maximum, minimum, husbands, wives … what’s in a word?

    The genesis of this was the perversion of using a grammatical term, “gender”, as a synonym for “sex”. The genders are masculine, feminine and neuter; the sexes are male and female. Gender and sex don’t even map 1:1. No wonder confusion reigns…

  40. You’re not going to believe this, but just as I had read down to the Bananarama video, I became aware of the song playing on the radio (hadn’t been listening really) singing the ‘cruel summer’ part!!! Neil Galway Ireland Classic Hits 4FM 1.30pm

  41. JohnWho said @ June 29, 2013 at 5:11 am

    I see Nick is here defending the indefensible once again. At least that is consistent. :)

    So how would you suggest we go back and redo the readings? Enquiring minds…

  42. Nick Stokes says: “They can’t re-do the readings.”

    But they can adjust them. They can close stations that don’t agree with their agenda and only use those that do.
    Such is the bias built into the current BoM. It’s disgraceful.

    Lismore centre street has an unblemished record going back to 1907 – they closed it in 2003.

    http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_display_type=dataGraph&p_stn_num=058037&p_nccObsCode=36&p_month=13

    It’s pretty clear why they don’t use it in their current records.

    similarly with Casino:

    http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_display_type=dataGraph&p_stn_num=058063&p_nccObsCode=36&p_month=13

  43. Thanks Willis.
    tokyoboy says:June 28, 2013 at 10:54 pm

    Since Australia is in the southern hemisphere, occasionally things can be upside down?

    I was sure Nick S. would chime in with that (and he does), but you beat him to it.

  44. The Pompous Git says:
    June 29, 2013 at 5:37 am


    So how would you suggest we go back and redo the readings? Enquiring minds…

    Like they do in professional science–by marking such readings with great big asterisks and indicating the problem in a companion comment. Leaving them unmarked leaves the assumption that nothing is amiss, when indeed there is.

    Science is more than numbers–it’s narrative, too.

    Then the most accurate accounting of the dataset would be one in which such readings are omitted. An “educated adjustment” would always introduce error. How much and why? Again–another narrative would be in order.

  45. “Steve Keohane says:

    June 29, 2013 at 6:15 am

    tokyoboy says:June 28, 2013 at 10:54 pm

    Since Australia is in the southern hemisphere, occasionally things can be upside down?”

    No, he’s got it wrong; from here in Aus, those in the northern hemisphere are upside down, and their logic is inverse. It would be so much easier if this rock was a 2 dimensional flat thing, black body at that (lol)! *sigh* Rather than this sphere thing we live on! You win again, gravity! – Zapp Brannigan.

  46. Patrick says:
    June 29, 2013 at 5:07 am
    “Nick Stokes says:

    June 29, 2013 at 4:20 am”

    You do not need to be a “scientist” to read a thermometer, wind, pressure or any other kind of gauge device what-have-you, that has some form of visible indicator (Like a speedometer) of what the current state is for that particular instrument. And it’s completely ridiculous to suggest otherwise!

    The standard equipment used in Stevenson screens to measure temperature was max-min thermometers; you don’t read the current temperature, you read the max and min since they were last reset. Hence, if the standard measurement time was 9 am local, the max would probably be from the previous day and the min from the current day. Apparently the practice was to read at 9 am and assign the max to the previous day. If that’s what was done in the past and how it was recorded in the ledgers, then that’s the data you have to live with when doing the analysis.

  47. “Phil. says:

    June 29, 2013 at 6:36 am”

    Does one need a PhD (Or be a “scientist”) to do that? No!

  48. Anthony,
    I think it is time to start an independent temperature/weather spotter program using the same digital devices, exclusively from wattsupwiththat.

    It could become the largest independent automatic temperature recording network, and could start with one state or the whole USA.

    You’d need to find a manufacturer that can make a weather station that automatically sends wireless readings to a network you create. Individuals could purchase the units through wattsupwiththat and hook them up at their home or workplace, in a location that is not near pavement or air-conditioning/heating units. Pictures of the location of each weather station sensor must be recorded also.

    Sound good?

    Have a good weekend

  49. Obviously if a site has 100 years of data, a lot of the data was manually recorded initially. Then transcribed. Maybe multiple times. Lots of opportunity for transcription and/or transposition errors (entering numbers in the wrong column). I once worked for a while on a project that involved OCR of handwritten data and printed data in a variety of typefaces. It is astonishing (well, it astonished me anyway) how ambiguous a lot of numbers can be when smudges, drop-outs, fading, etc occur — as they often do — in the raw data. Would you believe that 5s and 6s can be indistinguishable in some common typefaces if a couple of pixels drop out during imaging?

    Anyway, it might, and I emphasize MIGHT, be possible to improve the data quality by comparing values to a five or seven day moving average, and simply rejecting any points that are too far from those averages. But there are some types of errors that even that won’t catch, e.g. Tom, who recorded the data every third week, consistently reversed the min and max fields; or Jane, who did the recordings in the summers of 1934, 1935 and 1937, wrote handwritten 1s, 7s and 4s that were indistinguishable.

    Or the data may simply be useless for fine distinctions. That might, I suspect, be the case with a lot of historical climate data.

    And of course it is possible that, as several commenters suggest, the data may have been “adjusted” so much that it is unusable for any purpose.
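The moving-average screen suggested above might look like the sketch below; the window size, tolerance, and temperature values are arbitrary illustrations, not anything the BoM actually uses:

```python
def flag_outliers(values, window=7, tol=10.0):
    """Flag points farther than `tol` degrees from the mean of their
    neighbours inside a centred window of `window` days (window odd).
    A crude screen of the kind suggested above; thresholds illustrative.
    """
    half = window // 2
    flagged = []
    for i, v in enumerate(values):
        lo, hi = max(0, i - half), min(len(values), i + half + 1)
        neighbours = values[lo:i] + values[i + 1:hi]
        avg = sum(neighbours) / len(neighbours)
        if abs(v - avg) > tol:
            flagged.append(i)
    return flagged

# Hypothetical week of maxima; 13.1 might be a transcribed "31.1".
temps = [31.0, 30.5, 32.0, 13.1, 31.5, 30.8, 31.2]
print(flag_outliers(temps))    # [3]
```

As the comment notes, a screen like this catches isolated spikes but not systematic errors such as swapped min/max columns.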

  50. DonK – Willis posted

    “A panel of world-leading experts convened in Melbourne in 2011 to review the methods used in developing ACORN-SAT. It ranked the Bureau’s procedures and data analysis as amongst the best in the world.”

    and this

    “The Bureau of Meteorology’s climate data experts have carefully analysed the digitised data to create a consistent – or homogeneous – record of daily temperatures over the last 100 years.”

  51. Thanks to all for the comments. One other thing. I started this expedition because of Racehorse Nick Stokes’ comments on the supposed “cruel summer”. I pointed out that the RSS and UAH satellite temperature datasets showed no such warmth. He said he wanted surface records. So I went off to find them.

    Now, The Australian BoM said:

    “Of the 112 locations used in long-term climate monitoring, 14 had their hottest day on record during the summer of 2012/13.”

    That was what I wanted to verify … but the above-cited public records for the “112 locations used in long-term climate monitoring” STOP AT THE END OF 2012. So they only contain the first month of the summer, not the other two. So those jerkwagons are claiming a record, and still haven’t released the data they claim it is based on.

    So, Nick, until your favorite rent-boys extract digit and publish their secret ACORN data, I fear you’ll have to wait …

    w.

  52. Oh, yeah, I forgot to mention. I only have 111 of the famous 112 records; the BoM’s copy of the minimum temperature for Butlers Gorge is all screwed up, see here. I could likely unscramble it, but then if I find further errors someone would just say I unscrambled it incorrectly … so I just left it out.

    w.

  53. In principle, which situation is worse for good science?
    A) have a dataset that contains errors
    B) have a dataset where the errors are “fixed” and the original problems unseen?

    Data is data.
    If the data has errors, we do not decrease uncertainty by adjusting the data.

    I argue that the ideal situation is to have
    A) the dataset that shows the errors, and
    C) a dataset that shows the fixes and how they were fixed.

    The key is to realize that the uncertainty in (C) is greater than that of (A).
    (A) has an error component.
    (C) is not the subtraction of error from (A),
    but the ADDITION of corrections which themselves have error.

    Granted the corrections are highly correlated with the hypothesized error in (A). If you track down the source of each error to root cause and can reliably correct the error, such as a clear transposition of keypunched data entry, you can indeed reduce error in (C). But without an autopsy on each error, realistically the total error in (C) should be treated as greater than the error in (A).
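    The uncertainty bookkeeping above can be put in numbers. Assuming for illustration that the correction error is independent of the raw measurement error (the paragraph above notes they are in fact partly correlated, so treat this as a sketch), independent errors add in quadrature:

```python
# Illustrative figures only: a raw series (A) with measurement error
# sigma_a, plus a blind correction carrying its own error sigma_c.
# If the two are independent, the corrected series (C) has error
#     sigma_C = sqrt(sigma_a**2 + sigma_c**2)  >  sigma_a
import math

sigma_a = 0.5  # hypothetical error in the raw reading, deg C
sigma_c = 0.3  # hypothetical error in the correction itself, deg C

sigma_C = math.sqrt(sigma_a ** 2 + sigma_c ** 2)
print(round(sigma_C, 3))  # 0.583, larger than the raw error alone
```

    Only a root-cause fix (a verified keypunch transposition, say) replaces the bad value outright and escapes this arithmetic; a statistical correction does not.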

  54. From jeremyp99 on June 29, 2013 at 4:49 am:

    Meta is a prefix used in English (and other Greek-derived languages) to indicate a concept which is an abstraction from another concept, used to complete or add to it

    Metadata is hence data about data

    And metaphysics is physics about physics. With metaphysics completing or adding to physics.

    I think the more practical working definition is “it’s not that, but related to it”.

    Thus metaphysics is not physics but (presumably) related to physics, in that case generally as “alternate explanations”.

    And metadata is not the data but is related to it, usually it’s the when, where, and how the data was obtained.

  55. what do the terms minimum and maximum mean?
    are they defined by time of observation?
    or are they defined by being the lowest and highest in any 24 hr period?

    BoM, per Ross: “Accumulated data can affect statistics such as the Date of the Highest Temperature, since the exact date of occurrence is unknown”

    Real scientists would tag fudged entries.

  57. After in excess of 100 years of a procedure it’s best to maintain that procedure so the data are consistent throughout the set. If the procedure is changed, that is a different data set.

  58. Note all of the higher temps (above 30C) have been adjusted downwards, some by 0.9C.
    Temps below 30C have been adjusted upwards by 0.1C.
    Can anyone see any reason/logic for this?

    It’s to adjust for parallax errors. On hot days the person reading the thermometer had drunk a lot of Lager and was on his knees. On cold days he was standing up in his padded high heeled boots.

  59. Tamino has a post, “like candy from a baby”, about the “angry” summer, disparaging both Willis and Bob Tisdale. To my amazement he posted my response and replied as follows:

    Gonzo: [Drawing any conclusions from the Oz land temp record is like trying to determine ocean heat content pre-ARGO. Sparse data and in many cases bad data ie….many of the older records were recorded in whole degrees or when they converted to celsius in 1972. Save for a few quality stations which amount to a regional effect the Ozzie data should be taken with a serious dose of doubt. Much ado about nothing. BTW how many state heat records were broken during the “angry” summer? Oh none!

    [Response: Gonzo proves the point. He doesn’t like what the thermometer says, so his comment amounts to nothing more than calling the thermometer a liar. That’s what those in denial have to resort to. But wait, there’s more!]

    BTW how many state heat records were broken during the “angry” summer? Oh none!

    Tamino: [Response: Bonus points — Gonzo adds “moving the goal posts” to denying the facts. The relevant fact is that last summer, and especially January, was scorching hot in Australia — not that some state broke a heat record. Gonzo hopes that by pointing to one factoid which isn’t a record-breaker he can distract everyone from such facts as:

    During this period, Australia registered the warmest September–March on record, the hottest summer on record, the hottest month on record and the hottest day on record.

    A record was also set for the longest national scale heatwave.

    Does Gonzo actually believe that just because no state heat record was broken, that will magically transform the hottest summer on record nationwide, the hottest month on record, the hottest day on record, and the longest national-scale heat wave, into “nothing at all unusual about the 2012 summer”? Not too bright.

    Perhaps most important for those of us who are interested in the truth, Gonzo has denied the reality of Australia’s scorching hot summer in order to distract us all from the fact that summers like that are now more likely than they used to be, by a lot. He must distract everyone from that fact, because that’s the real point.]

    To which I responded back with this:

    You don’t have the stones to post this but here goes anyway. cheers

    Your response is puzzling to me as I would call you the Mr T of cherry picking…..HEY FOOOO YOU’RE CHERRY PICKING!!!

    Who “denied” OZ is hot? Not me. You do know Oz is the hottest continent yes? Have you been to Oz? I have many times surfing both east and west coasts. WA is efffn hot always has been ie .. it gets hot there.

    Tamino: [During this period, Australia registered the warmest September–March on record, the hottest summer on record, the hottest month on record and the hottest day on record.] the hottest day on record? really? You sure about that mate?

    ” Whilst it is probable that remote areas of the Australian desert have seen extreme temperatures that have gone unrecorded, the outback Queensland town of Cloncurry originally held the record for the highest known temperature in the shade, at 53.1 °C (127.5 °F) on 16 January 1889. Cloncurry is a small town in northwest Queensland, Australia, about 770km west of Townsville.

    The Cloncurry record was later removed from Australian records because it was measured using unsuitable equipment (that is, not in a Stevenson screen, which only became widespread in Australian usage after about 1910). According to the Australian Bureau of Meteorology, the current heat record is held by Oodnadatta, South Australia, 50.7 degrees Celsius, occurring on 2 January 1960.

    The world heat record for consecutive days goes to Marble Bar in Western Australia, which recorded maximum temperatures equaling or over 37.8°C on 161 consecutive days, between

    Concerning Cloncurry, why would the BoM call a thermometer a liar?

    Of course he only posted my response as follows:
    [You don’t have the stones to post this but here goes anyway. cheers] Followed up with disparaging remarks………

    Par for the course over there: when in doubt, attack the messenger.

    REPLY: there’s no point in paying attention to Grant Foster aka “Tamino” his rants are irrelevant – Anthony

  60. Kalgoorlie-Boulder, 11

    I am surprised by how small this number is. Summer temperatures in Kalgoorlie are highly dependent on cloud cover. It’s fairly common to have consecutive days whose daytime temperatures differ by well over 20C.

    Nick partly explained the problem. Observations of a min/max thermometer are taken during the day, but the BoM then converts this dataset to the international standard midnight-to-midnight day, hence the assumption that the maximum was from the previous day.

    Were the raw data used without shifting to a midnight-to-midnight day, there wouldn’t be a problem, although there may be a time-of-observation bias, which is a largely separate issue.

    In summary, the original data was for consecutive 24 hour periods (more or less) and is perfectly satisfactory for calculating long term temperature trends across Australia.

    The problem comes from converting the data to the midnight-to-midnight standard, and results in some minimum temperatures exceeding maximums. This looks strange but shouldn’t affect long term trends.

    The real issue is the multiple adjustments in ACORN, which are just an invitation to confirmation bias.

  61. gnomish said on June 29, 2013 at 9:18 am:

    what do the terms minimum and maximum mean?
    are they defined by time of observation?
    or are they defined by being the lowest and highest in any 24 hr period?

    With daily reporting, a 24-hour period is assumed, all measuring occurring on a single day, although the daily measurements were rarely taken exactly at midnight, leading to adjustments.

    Minimum is the coldest measured temperature during the measured period. This is normally expected in the early morning just before sunrise, after the ground has been cooling off all night. But with weather fronts moving in, and the evaporative cooling of wind and rain, the minimum could happen anytime.

    Maximum is the hottest measured temperature. Presumably that happens in the afternoon, with the Sun warming up the ground. Which is a problem. If you take your daily measurements at 9AM, you must then assume the highest reading was actually from yesterday afternoon, so you mark it down as yesterday’s high, even when a cold front has passed through and your “high” is really from the morning you took the measurement, while yesterday was bitterly cold from morning to night.

    This gets adjusted later as quality control. The weather station in the deep valley reports a 23°C high on a sunny Tuesday, but stations within 100km on flat ground reported a similar high on a rainy Monday; therefore the valley station is deemed to have reported wrong and its high gets moved to Monday, so the similar highs happened at the same time.

    Since ideally all of the measurements should go from midnight to midnight, this brings about the Time of Observation (TOBS) adjustments. It is excessively argued exactly how important the TOBS adjustments are for an accurate record, with the “usual suspects” insistent how absolutely necessary they are, and will promptly point to some analysis one of them did that proves it as the temperature trends are much too low without it.

    (I’m still waiting for a coherent brief explanation of what TOBS really is and how it’s calculated. As expected, normally we’re told to read some paper by Hansen or someone else who presides over a screwed-up temperature dataset.)

    Of course for much of the world’s “raw temperatures” we’re lucky to have numbers reported for a day at all, let alone the observation times, so TOBS is basically meaningless except perhaps for countries like the US.

    Then comes the real magic. The twenty minutes when there was a break in the clouds and the Sun shone directly on the thermometer shelter/housing, will be averaged together with the five hours of hurricane-strength winds that plastered 10cm of packed snow on the side of the shelter/housing, to determine the average temperature that day was sufficiently high to cause a significant amount of snow melt. Which is in agreement with the projections of the climate models thus is confirmation of (C)AGW theory.
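    The day-assignment problem described above is easy to demonstrate. A minimal sketch (hypothetical station; the 9AM convention is as described in the comment, everything else is invented for illustration) of how re-dating the morning max to the previous day can leave a day whose recorded minimum exceeds its recorded maximum:

```python
# Each morning at 9 am the observer reads (min, max) accumulated since the
# previous reset. Convention (as described above): the max is attributed
# to YESTERDAY, the min to today.

def assign_readings(daily_9am_readings):
    """daily_9am_readings: list of (min_since_reset, max_since_reset)
    pairs, one per morning. Returns {day: {'min': ..., 'max': ...}} after
    re-dating each morning's max to the previous day."""
    records = {}
    for day, (tmin, tmax) in enumerate(daily_9am_readings):
        records.setdefault(day - 1, {})["max"] = tmax  # max -> yesterday
        records.setdefault(day, {})["min"] = tmin      # min -> today
    return records

# Day 0: warm night (min since reset 20 C), max 30 C from the previous
# afternoon. A cold front arrives mid-morning, so nothing after 9 am on
# day 0 exceeds 15 C; the day-1 reading is (5, 15) and its max of 15 is
# attributed back to day 0.
readings = [(20.0, 30.0), (5.0, 15.0)]
print(assign_readings(readings)[0])  # {'min': 20.0, 'max': 15.0}
```

    Day 0 ends up with a “minimum” of 20C and a “maximum” of 15C, the exact anomaly found in the records, without anyone misreading a thermometer.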

  62. One of the BOM ‘quality control’ measures is bizarre. Page 32 of their manual:

    http://cawcr.gov.au/publications/technicalreports/CTR_049.pdf

    shows an example where a fall in temperature just before dawn is eradicated for ‘quality control’ because it is a ‘spike’ in hourly readings of more than 4°C. They ignore changes of >4°C as long as they are not followed by a change of >4°C of opposite sign.

    In my view, in the example they show, they threw away the data possibly showing a change of state of water in the atmosphere or a transient change in wind direction with the net result in their ‘quality controlled’ output of raising both the minimum and the average temperature for that day.

    They are effectively assuming that there was a transient error in the electronic thermometer recording the data which occurred at that time, but that it afterwards continued to operate correctly with no maintenance carried out.

    This is astonishingly bad. No wonder we get global warming when clowns like these operate on the data.
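    As I read the rule on page 32, it can be paraphrased in a few lines (a hypothetical reconstruction of the described check, not the BoM’s code). The point is that the same genuine pre-dawn dip is flagged or kept depending only on whether the temperature rebounds:

```python
# Hypothetical reconstruction of the described spike check: an hourly step
# of more than 4 C is flagged only when it is immediately followed by a
# step of more than 4 C in the OPPOSITE direction.

def find_spikes(hourly, threshold=4.0):
    spikes = []
    for i in range(1, len(hourly) - 1):
        step_in = hourly[i] - hourly[i - 1]
        step_out = hourly[i + 1] - hourly[i]
        if (abs(step_in) > threshold and abs(step_out) > threshold
                and step_in * step_out < 0):
            spikes.append(i)
        # a large one-sided step (no rebound) is never flagged
    return spikes

dip_with_rebound = [12.0, 11.5, 6.8, 11.9, 12.3]  # real pre-dawn dip
print(find_spikes(dip_with_rebound))  # [2]: the dip is thrown away

sustained_drop = [12.0, 11.5, 6.8, 7.0, 7.4]      # same drop, no rebound
print(find_spikes(sustained_drop))    # []: kept
```

    So a short-lived but real cooling event (rain shaft, wind shift, change of state of water) is indistinguishable from an instrument glitch under this rule, which is the objection above.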

  63. Willis, I wish you’d just use the data as you originally intended and skip the whole “lows higher than the highs” thing. You’re majoring on the minors here and fighting the wrong battle. There’s a clear explanation for the phenomenon you found: reading a high-low thermometer set once a day. It’s not phone-it-in or shabby and if someone attempted to correct it, they’d be rightly criticized for massaging the instrumental record.

    The rule of thumb they use for classifying which day the high goes with and which day the low goes with is reasonable, based on how sunlight affects things. At least here in the moderate area where I live, there’s approximately a 15-degree F swing in temperature each day from the low before dawn to the high around 3:00 PM. (On top of that, add in weather and seasonal effects, and we can end up with as little as a 4-degree high-low swing or a little more than a 30-degree swing.)

    So a day in which a front comes through and raises the temperature 15+ degrees F (or whatever it takes in a location to overcome the sun’s effect, plus the weather) overnight may cause a swap of the high and lows for a pair of days. As you note, this is extremely rare.

    This kind of issue will show up with any regime that uses a high-low thermometer that’s read at any point except exactly midnight. It is a weakness of the historical (and perhaps present) surface station network, for sure, and is worth pointing out, but it doesn’t indicate that record keeping is sloppy, measurements are wrong, or anything else that is worth making a big deal over.

    The measurements are apparently correct, just occasionally attributed one day off. I say “apparently” because there could be problems with the measurements. But that’s not what you’re describing.

    You made this kind of mistake back with your “linear trend” fiasco, where you had a good main argument but moved a minor argument to the top of your posting and spent so much time defending a misunderstanding on your part that you let your main argument fall off the table into obscurity. Please, just let this high-low thing go and focus on actually using the data to actually show Nick is wrong on the original question.

    Do what you originally set out to do, and it’ll have much more impact than trying to snipe around the edges.

  64. Billy liar, that document is an interesting read. Makes you realize how many adjustments are being made and how many are potentially questionable.

    This caught my attention,

    Figure 10 shows an example of data flagged by this check, a minimum temperature of 20.6°C at
    Giles (25°S 128°E) on 11 January 1988. The lowest three-hourly temperatures at the site were
    26.2°C at 09:00 the previous day, and 27.5°C at 06:00, while other sites in the broader region
    mostly exceeded 27°C. (As there are no sites to the north or south of Giles within 500 km, the
    value affects the analysis over a large area). Such differences, if real, would almost certainly be
    associated with a thunderstorm at Giles, but no significant rain was recorded there, suggesting
    that the value on that day was suspect.

    Their thunderstorm assumption is wrong. I can speak from personal experience of the central desert: non-thunderstorm-forming areas of cloud occur on a regular basis and cause a significant drop in temperature over a short period of time.

  65. Willis,
    “but the above-cited public records for the “112 locations used in long-term climate monitoring” STOP AT THE END OF 2012. So they only contain the first month of the summer, not the other two. So those jerkwagons are claiming a record, and still haven’t released the data they claim it is based on.”

    The records don’t stop at the end of 2012. You can look up each of them to the most recent half-hour. You can get the daily records for each month. Bourke in January? Here it is. They may not yet be in a convenient table for you, but it’s all there.

    Philip Bradley says: June 29, 2013 at 12:08 pm
    “Billy liar, that document is an interesting read. Makes you realize how many adjustments are being made and how many are potentially questionable.”

    These aren’t adjustments. They have QC procedures designed to automatically flag suspect individual numbers in their huge database. There will always be some that are borderline. They are describing the most difficult cases to decide, and how they go about it.

    gaelan clark says: June 29, 2013 at 4:46 am
    Sorry but I have two questions….

    First (Nick)….why does it take an American, so disgusted with your “angry summer” crapola and so far removed from your entire operanda, to find inconsistencies, irregularities and just plain weirdisms within your very own network…which WAS supposed to be sterling?

    Second (anyone)….this is the 21st century, not 1869, we can automate temperature readings and take measurements without human eyes…why dont we?”

    Well, I could ask why does it take an American to tell us that we didn’t have a hot summer – satellites prove it, they weren’t hot. None of this Acorn nitpicking has anything to do with whether the summer was hot. Historic records anywhere in the world have inconsistencies etc. You have to learn as best you can from them.

    But of course measurements are now automatic. Here is just one Australian State. You can check records every half hour. No eyes involved.

  66. kadaka (KD Knoebel) says:
    June 29, 2013 at 9:18 am
    From jeremyp99 on June 29, 2013 at 4:49 am:
    Meta is a prefix used in English (and other Greek-derived languages) to indicate a concept which is an abstraction from another concept, used to complete or add to it

    Metadata is hence data about data

    And metaphysics is physics about physics. With metaphysics completing or adding to physics.
    =================================================================
    Way back when I was studying philosophy, I learnt the origin of ‘metaphysics’ as being ‘beyond physics’, from Aristotle’s works; his writings on the subject now known as part of philosophy coming after those on physical science (‘meta’ being beyond or after in Attic Greek).

    ‘meta’ has changed meaning in the last 40 or so years to its current usage

  67. To all you temperature adjusters out there, including the ABOM(inables): If the raw data over 100 years can’t show a definitive signal of CAGW, and one feels the need for adjustment to make it show itself, then isn’t this an admission (or fear) in itself that the signal must be diminishingly small? Is there anyone here who disputes that the old end of the record has been adjusted downward and the recent end shifted upward, even if only a few tenths of a degree?

    If the problem facing us is that we could have 4 to 6C increase by 2100 (I know the IPCC has had to trim this to half in the last year or two, but the practice of adjustments was introduced when 4-6 was “95% certain”), then all that would be needed would be a few hundred thermometers with raw readings distributed around the world in non-urban areas to unequivocally detect such a strong AGW signal (we may still have to determine that it is “A” GW, but at least we wouldn’t be trying to squeeze that out of 0.7C a century).

    The fact that a century of warming has only been 0.7C, and this with basically raw data and some natural variation (remember, most of the AGW has occurred since 1950) underscores the point that the adjustments to the old record are unnecessary. Could the old thermometer readers have been out several degrees in their reading of a temp? Please, all agree, “I don’t think so”. Could the thermometers themselves have been so crude as to have been inaccurate by several degrees (all in one direction)? That we have only 0.7C difference is virtually proof in itself that this is not so. Heck, if you want to adjust the data, round it off to the nearest degree C. What is wrong with this? We are only interested in a change of 2-3 degrees in a century into the future. If we don’t have a degree or two by 2050 with the raw data, then we are pretty safe (and remember at 2013 we are 25% of the way there). To emphasize for the unconvinced, would you measure sea-level changes with a micrometer if you were worried about changes in a century of a metre or more? Do you believe a mighty oak will grow from a homogenized ACORN?

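    The rounding point above is easy to check for yourself. A quick synthetic sketch (invented numbers, ordinary least squares fit) showing that recording to the nearest whole degree barely moves a 2C-per-century fitted trend:

```python
# Synthetic illustration: a 2 C/century trend with 0.5 C of year-to-year
# noise, fitted with ordinary least squares before and after rounding
# every reading to the nearest whole degree.
import random

random.seed(1)

def ols_slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

years = list(range(100))
exact = [15.0 + 0.02 * y + random.gauss(0, 0.5) for y in years]
rounded = [round(t) for t in exact]  # nearest whole degree

print(round(ols_slope(years, exact) * 100, 2))    # close to 2 C/century
print(round(ols_slope(years, rounded) * 100, 2))  # almost unchanged
```

    Rounding adds at most half a degree of error per reading, and over a hundred readings that averages out; it cannot manufacture or hide a multi-degree trend.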

  68. Nick, any change to raw data is an adjustment to the data set. To call some changes Quality control is mere semantics.

    As usual, you don’t address the substantive issue: they are making changes to raw data based on a clearly wrong assumption. And note, Giles is probably the station that has the largest geographic effect on Australian temperatures.

  69. Just as an agenda can be seen in the results of climate models, a similar agenda can be discovered in the changes/adjustments to raw data.

    Even if human eyes aren’t used in taking the measurement or making the adjustments, an agenda or bias can be expected.

  70. Philip Bradley says: June 29, 2013 at 1:06 pm
    “Nick, any change to raw data is an adjustment to the data set. To call some changes Quality control is mere semantics.”

    No, it isn’t semantics. They are at the stage of deciding what the data is. Data isn’t just what someone wrote on a page (and someone else deciphered). Typos aren’t data. It isn’t a change until you figure out what you’ve got.

  71. Gonzo
    Another example: Mildura had a reading of 50.8C (recorded as 50.7C in BOM data) in Jan 1906, but this was downgraded to 48.3C based on the temp in Deniliquin.

    janama
    Totally agree. I have been checking, and making copies of, Lismore (Centre St) for some time, comparing the raw data with the adjustments they made on their HQ data site. It’s now called ‘Australian climate change site networks’, and Lismore has since disappeared.

    Casino’s long-term manual w/s was closed recently. The data clearly showed that Casino had cooled over the past 20 years, with only 4 years being above the yearly max average.

    By the way, on Friday, 21st of June, Casino had its coldest max temp since daily records started in 1965 (12.7C). Maybe Lismore and a few others also did. I don’t recall any mention of this in the press/TV. Imagine if it had been the hottest!

    Nick
    I go back to my post above re the totally illogical adjustments to Bourke for Jan 1939. ACORN is corrupted, and this is the data that they used for their ‘angry summer’. Unbelievable. Absolutely shoddy work, worse even than their adjustments to raw data for their old HQ data site.
    Can you honestly defend ACORN’s data when presented with the evidence?

  72. The BoM document states,

    In general, data that were classified as suspect after review were flagged and excluded from
    further analysis.

    So in the example I originally gave, I’d assume the claimed ‘suspect’ data was in fact excluded from the dataset.

    The example was from 1988 at a professionally manned station with presumably automatic temperature recording. What people ‘wrote down’ or potentially misread is irrelevant. And anyway, the data are what was recorded, errors and all.

  73. Wayne says:
    June 29, 2013 at 12:03 pm

    Willis, I wish you’d just use the data as you originally intended and skip the whole “lows higher than the highs” thing. You’re majoring on the minors here and fighting the wrong battle. There’s a clear explanation for the phenomenon you found: reading a high-low thermometer set once a day. It’s not phone-it-in or shabby and if someone attempted to correct it, they’d be rightly criticized for massaging the instrumental record.

    I’d like to use the data as originally intended … but they haven’t published it, as far as I know.

    Regarding the “explanation”, I don’t care about the explanation. Whatever the circumstances and assumptions might have been, it’s an error.

    You seem to think that they are somehow prohibited from fixing an error because they’d be “rightly criticized” … are you serious? Do you know how many times these guys have “adjusted” and otherwise changed the data, without any such obvious error?

    Now, I don’t care how they fix it. They can throw out the bad data. Or they can flag it and leave it in. My point is that doing nothing to an admitted error, in a supposedly scientifically quality controlled dataset, does not give me confidence in their other actions.

    w.

  74. Nick Stokes says:
    June 29, 2013 at 12:48 pm

    Willis,

    “but the above-cited public records for the “112 locations used in long-term climate monitoring” STOP AT THE END OF 2012. So they only contain the first month of the summer, not the other two. So those jerkwagons are claiming a record, and still haven’t released the data they claim it is based on.”

    The records don’t stop at the end of 2012. You can look up each of them to the most recent half-hour. You can get the daily records for each month. Bourke in January? Here it is. They may not yet be in a convenient table for you, but it’s all there.

    No, it’s not there at all. That’s just the raw data. For the ACORN-SAT data, they claim that the 112 records in their survey have been subjected to additional scientific oversight and error-checking and quality control. So like the man said in Star Wars, “These are not the records you are looking for”—we’re looking for the ACORN-SAT records, which are made with extra science and special sauce. Your raw records? Sorry, not the same.

    Which means that as usual …

    You’re wrong.

    And I’m sure that as usual, you’ll explain in very painful detail why you are 100% correct in 3 … 2 … 1 …

    w.

  75. Well, if the facts are only ‘facts’ – for whatever reason – the deductions can only be ‘deductions’ for the GIGO reason.

    Pretty sad that this seems to be the case for Australia.

    And 2013 summer in the UK – so far, I’d call it the sunken summer. At least tomorrow is going to be seriously HOT – possibly over 24C! [per Metcheck]! Look, we’re over 50 North, and the globe seems to be cooling.
    Don’t like what that does to the growing season here.

    Auto

  76. What is meant by the “minimum” and “maximum” temperature?

    We’ve had at least two days recently in Auckland, NZ where the temperature at midnight was higher than the temperature at 3pm (warm previous day, persistent cloud overnight and during the day, with a strong southerly during the day to cool things down). If the “minimum” is actually the “overnight low” and the maximum is the “afternoon high”, then you could end up with the minimum higher than the maximum.

  77. The error is in trying to retrospectively apply the WMO temperature recording standard.

    There are other issues related to this.

    For example, I was surprised to learn this.

    Daily summaries in SYNOP messages are based on measurements that occur between
    synoptic reporting times and often over a period less than 24-hours. For instance in Europe
    minimum temperatures are recorded over the first 12-hour period and maximum temperatures
    during the next 12-hour period. Measured in this way, the true daily minimum and maximum
    temperatures are often not reported because they occur outside those 12-hour periods.

    http://www.wmo.int/pages/prog/gcos/aopcXVIII/6.3_daily_messages.pdf

    Clearly, measuring min and max temperatures in this way will frequently give you minimums higher than maximums.
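    That 12-hour-window convention makes minimums above maximums almost inevitable on frontal days. A minimal sketch (simplified to two fixed 12-hour windows, with hourly values invented for illustration):

```python
# Simplified SYNOP-style daily summary, per the quoted WMO note: the daily
# minimum is taken over the first 12-hour window, the daily maximum over
# the next 12-hour window.

def synop_min_max(hourly_24):
    first_half, second_half = hourly_24[:12], hourly_24[12:]
    return min(first_half), max(second_half)

# A warm first half cooling 1 C per hour, then a front drops the
# temperature sharply at the window boundary:
day = [22 - h for h in range(12)] + [9 - h for h in range(12)]
print(synop_min_max(day))  # (11, 9): "minimum" 11 C exceeds "maximum" 9 C
```

    The coolest point of the warm half is still warmer than the warmest point of the cold half, so the reported minimum exceeds the reported maximum, with every individual reading perfectly correct.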

  78. Philip Bradley says: June 29, 2013 at 1:52 pm
    “The example was from 1988 at a professionally manned station with presumably automatic temperature recording.”

    Do you have evidence that it was professionally manned? It certainly wasn’t automatic – that came in June 1992.

    But really – we’re talking about a single apparently deviant 3hr reading on a day in 1988, which ACORN give as a particularly hard case to decide.

    Ian George says: June 29, 2013 at 1:23 pm
    Sorry, I misunderstood your earlier “not any more”. I don’t know the reason for the change at Bourke. It’s possible that they had more than one record and averaged. GHCN v2 had duplicate records for Bourke from 1953 onwards.

  79. Willis Eschenbach says: June 29, 2013 at 2:34 pm
    “And I’m sure that as usual, you’ll explain in very painful detail why you are 100% correct”

    No pain. ACORN checked historic data – recent data, automatically recorded every half hour, doesn’t change. If you check Bourke ACORN for December 2012 (or any other recent month) vs general BoM, available to present, you’ll find they are identical.

  80. I prefer to adjust raw data myself. After watching Hansen adjust temperature data I am absolutely certain that I would want raw temperature data if I were doing a correlation or a proof for some reason. But I would be very careful what adjustments I would use. Let me demonstrate with an example that you can do. Go to Google and input “temperature Washington DC”, select wunderground.com from the list and scroll down to Washington Weather Stations and review the 30 or 40 temperature readings. These are within 25 miles to 50 miles of DC. The temperatures range from 83F to 94F. I have done this same chore for where I live in all kinds of weather/time of day and the differential range is always similar to DC.

    The measurement stations are MADIS, RAPIDFIRE and NORMAL. You can click on the name and get time graphs of the data. You can click on those stations with an > at the end and get a full range of weather information.

    My point is that one can fool oneself with small adjustments. Looking at the big picture often yields better decisions. I would never do what BEST did, which is to make a mulligan stew by throwing all readings into a pot. I would carefully select perhaps 100 or fewer temperature records around the globe, examine them very carefully for suitability and work with those data sets.

    I agree with Wayne, we got side tracked on this one. But thank you very much, Willis. You are a tremendous worker with great ability to analyze data and spot problems.

  81. Nick
    Check ACORN’s data for Bourke for Jan 1939 against the raw data: a downward adjustment of 0.36C for the month. The source was the Bourke PO. One explanation for the change was that the readings were compared to neighbouring stations (e.g. Cobar, Walgett), but after checking these, there appears to be no correlation.
    My whole point is, if these adjustments are applied to enough stations in the earlier records, it makes the whole ACORN data set meaningless and not worthy of supporting conclusions about this ‘angry summer’.
    Raw data for Bourke Jan 1939 here.

    http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_nccObsCode=122&p_display_type=dailyDataFile&p_startYear=1939&p_c=-461101351&p_stn_num=048013

    17 days straight of over 40C. That’s a pretty ‘angry month’.

  82. Giles has been manned since 1956, although only since 1972 by BoM staff. I’ll take your word for it not being automated until 1992.

    The issue here is the data is the data. Transcription errors are something else. Trying to find errors in the data as recorded, decades later, just gives free rein to confirmation bias.

    The example I gave is arguably an example of this, where an unusually low temperature is removed from the record for what seems to me faulty reasoning. The problem with confirmation bias is that the people doing it don’t know they are doing it, and it is devilishly hard to pin down after the fact.

  83. Billy Liar
    ‘They are effectively assuming that there was a transient error in the electronic thermometer recording the data which occurred at that time but afterwards it continued to operate correctly and no maintenance was carried out.’
    Reading what you were saying about invalidating lower temps around dawn, I’m reminded about when, on 18th Jan, 2013, Sydney had its highest temp of 45.8C.
    It happened at approximately 2:54pm. I followed the AWS record that day and it was showing temps at 10 min intervals (usually it’s 30min).
    At 2:49 the temp was 44.9C – the temp at 2:59 was 44.7C.
    So in that 10-minute period, the temp jumped 0.9C and then dropped 1.1C. Obviously there was a ‘spike’. It appears that, as you say, there may have been a ‘transient error’ but, maybe because it was a high spike, it was not invalidated.
    The Automatic Weather Station (weather.iinet) shows the temp only reaching 45.1C.
    It would be interesting to find out from someone in the know what really happened that day.

  84. I’ve been looking at “time of observation” issues for several years now, and I cannot come up with a reason why it would produce minimum temperatures higher than maximum temperatures, even with the TOBS causing the pairs of readings to be recorded for separate days, and even if days were missed (unless some very creative infilling methods were used).

    For those unfamiliar with the issue, for a long time temperatures were taken with vertical mercury thermometers that had two effectively “ratcheted” markers. The maximum-temperature marker could be pushed up by the rising column of mercury, but would not fall if the mercury column fell. The minimum-temperature marker could fall with a falling column, but would not rise when the column rose.

    Generally once per day, the observer would go out to the Stevenson box, record the settings of the maximum and minimum markers, then manually move them to the present height (temperature) of the column, which would allow them to be moved by the column over the next 24 hours.

    If the reading were not taken at midnight, and it virtually never was, then the question could arise as to which calendar day the extreme really occurred on. For the 9am readings, the most reasonable assumption (barring other info) was that the maximum occurred the previous afternoon and the minimum occurred the present early morning. This is apparently what the ACORN records did.

    But could this by itself explain the “inverted” readings? I don’t see how. Let’s say that on the 6th of the month, the observer notes a maximum of 20C and a minimum of 10C. The 20C max is recorded for the 5th and the 10C min is recorded for the 6th. The markers are reset to the temperature at the time of the reading, which MUST BE between 10C and 20C.

    Let’s say a cold front is moving through that day, and the temperature in the next 24 hour period never again gets higher than that at the time of the reading. When the observer comes out on the morning of the 7th, the maximum temperature marker will always read at least 10C. To continue our example, he reads a maximum of 10C, which is recorded for the 6th, and a minimum of 5C, which is recorded for the 7th.

    Let’s say that instead, the observer does not make any reading on the morning of the 7th, and his next reading (and resetting!) is on the morning of the 8th. His maximum reading, which is now for the previous 48-hour period, still must be at least 10C, even if the actual maximum temperature for the previous 24-hour period never got as high as 10C.

    So to continue our example, our observer notes a 10C maximum and a 0C minimum. The 10C he records for the 6th, and the 0C he records for the 7th. Now he has a gap to fill in the records. It would be extremely doubtful, IMO, if he did not do some sort of interpolation, filling in a max for the 6th of around 15C to go between 20C for the 5th and 10C for the 7th. The same with minimum temperatures.

    Fundamentally, though, I cannot construct a reasonable scenario in which either of these effects would lead to a minimum temperature greater than a maximum temperature.
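    The marker-and-reset mechanics described above can be put to a quick test. Here is a toy simulation in Python – invented hourly data and my own function names, not BoM code – of the ratcheted markers and the daily reset. Whatever the weather does, the pair recorded at any single visit cannot be inverted:

```python
import random

def simulate_days(hourly_temps, read_hour=9):
    """Simulate a ratcheted max/min thermometer read once a day.

    At each daily visit the observer records both markers, then resets
    them to the current height of the mercury column. Because both
    markers always bracket the same column, the max recorded at a visit
    can never be lower than the min recorded at that same visit.
    Returns a list of (max_reading, min_reading) pairs, one per visit.
    """
    readings = []
    marker_max = marker_min = hourly_temps[0]
    for hour, temp in enumerate(hourly_temps):
        marker_max = max(marker_max, temp)  # marker is pushed up only
        marker_min = min(marker_min, temp)  # marker falls only
        if hour % 24 == read_hour:
            readings.append((marker_max, marker_min))
            marker_max = marker_min = temp  # reset both to the column
    return readings

# Ten days of made-up hourly temperatures, including sharp swings.
random.seed(1)
temps = [20 + random.uniform(-15, 15) for _ in range(240)]
assert all(tmax >= tmin for tmax, tmin in simulate_days(temps))
```

    Of course this only shows that a single visit's recorded pair is internally consistent; which calendar day each extreme gets assigned to is the separate question discussed above.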

  85. Ian George – the other factor in your link is that it shows the highest daily max reading as 49.7 on Jan 31, 1903. Because ACORN starts in 1910, the new ACORN highest January temp is 48.3 on the 12th, 2013 – a 1.4C difference, hence the “RECORD” temperature in the Angry Summer.

  86. stan stendera says:June 29, 2013 at 12:08 am
    >… What is wrong with these people that they just can’t tell the simple truth. No grant is worth your soul.

    Mastering the art of Lying is an entry requirement for Australian socialist politicians and senior public servants … much like Obama and his mob.

  87. Janama
    Yes, I have mentioned this before in other posts. The reasoning behind it is that Stevenson screens weren’t rolled out to all stations until 1910/11. However, I am sure that Bourke would have had one by then, as it was an important weather station. It has daily temps back to 1871.
    By discarding all temps prior to 1910 and adjusting raw data, it makes it easy for the BOM to make that claim about the ‘hottest summer’.
    Check Bourke’s temps for 1896 – 22 days straight over 40C (highest 48.6C twice).

    http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_nccObsCode=122&p_display_type=dailyDataFile&p_startYear=1939&p_c=-461101351&p_stn_num=048013

    Now that’s a heatwave.

  88. Willis, The reason the BOM went to the trouble of doing a whole new data set was because a group including Sen. Cory Bernardi, myself, Ken Stewart, and the BOM independent audit team (which I put together in 2010) asked the Australian National Audit Office to do an independent audit of the BOM data. Bernardi raised this in parliament. They had to respond.

    The original audit request: http://joannenova.com.au/2011/02/announcing-a-formal-request-for-the-auditor-general-to-audit-the-australian-bom/
    My write up of the BOM response:

    http://joannenova.com.au/2012/06/threat-of-anao-audit-means-australias-bom-throws-out-temperature-set-starts-again-gets-same-results/

    There are 20 + articles analyzing the BOM data: http://joannenova.com.au/tag/australian-temperatures/

    The independent audit team are the ones who spotted the problems with F–>C conversions, with different datasets, with weighting, grids, maxes greater than mins, inexplicable adjustments. Several of them engage with the BOM constantly asking for the original data and methods.
    Credit to Chris Gillham, Ken Stewart, Geoff Sherrington, Ed, Andrew, Ian, Lance, Janama, David Stockwell, Warwick Hughes and several others behind the scenes. They keep the BOM under pressure.

  89. Nick Stokes says (my emphasis):
    June 29, 2013 at 3:08 pm

    Willis Eschenbach says: June 29, 2013 at 2:34 pm

    “And I’m sure that as usual, you’ll explain in very painful detail why you are 100% correct”

    No pain.

    I’m sure listening to your explanations is never the slightest pain for you … I was talking about pain for us.

    ACORN checked historic data – recent data, automatically recorded every half hour, doesn’t change. If you check Bourke ACORN for December 2012 (or any other recent month) vs general BoM, available to present, you’ll find they are identical.

    Well, let’s see. You haven’t said what “recent” means, so that’s useless. And surprise of surprises, you haven’t provided anything but your big mouth to back up your claims.

    The Aussies say that “ACORN-SAT is a complete re-analysis of the Australian homogenised temperature database.” So it is definitely something more than your claim that it’s just the regular results. They also say:

    The Bureau maintains a layered approach to correcting data errors. Automated and semi-automated quality control systems are used to identify observational errors.

    An extensive audit trail of data and metadata keeps track of corrections that may need to be applied. The data from each of the ACORN-SAT observing locations go through ten different quality control checks.

    So your claim that “recent data, automatically recorded every half hour, doesn’t change” is obviously nonsense. First, the automatically recorded data goes through automated and semi-automated quality control. Of course, if they find errors, the data changes.

    Then, after the automated quality control, there are a number of other “quality control checks”, and these will also result in changed data.

    As a check, take a look at the standard BoM Bourke data here, and the ACORN-SAT Bourke data for the same period. Take a look at December 12, 2009 – that’s recent, only four Decembers ago. The standard Bourke data has a max temp for that day of 32.7°C … but the ACORN-SAT data has thrown that data point out entirely; they have 99999.9 for that day, missing data. So your claim, that

    If you check Bourke ACORN for December 2012 (or any other recent month) vs general BoM, available to present, you’ll find they are identical.

    is, as usual, just another one of your fantasies. December 2009 is in the data, and there may be others; I just took a quick look and found several.

    So as usual, Nick, you’re just spouting the first BS that comes to mind … the Australians say that the data goes through a number of checks after it’s first collected, and obviously, and contrary to your specious claim, the data changes as a result of that quality control.

    But if you do have the ACORN-SAT data for this year, I’m glad to take a look at it. And eventually, I’m sure the Aussies will get off of their duffs and get around to posting it. By then, of course, their bogus claims of an “angry summer” will be long forgotten … which may be only coincidental.

    w.

  91. Max/Min transposition may be explained in certain circumstances, but not all, but there are many, many other errors in Acorn. There are many data entry errors e.g. 26.8 instead of 36.8 (Alice Springs 28/01/1944) as well as obviously wrong adjustments e.g. Rutherglen maxima adjusted by -8.1C (13/10/1926) to produce a glaringly obvious anomaly compared to previous and following days; and the metadata in the Station Catalogue is still poor with much missing information.
    Further, the international review panel wrote : “(T)he surface temperature observation network fails to meet the internationally recommended minimum spatial density through much of inland Australia.” Acorn’s lead author Blair Trewin admits this, saying “Even today, 23 of the 112 ACORN-SAT locations are 100 kilometres or more from their nearest neighbour, and this number has been greater at times in the past, especially prior to 1950.”
    They also note: “The WMO Guide states that an acceptable range of error for thermometers (including those used for measuring maximum and minimum temperature) is ±0.2 °C. However, throughout the last 100 years, Bureau of Meteorology guidance has allowed for a tolerance of ±0.5 °C for field checks of either in-glass or resistance thermometers. This is the primary reason the Panel did not rate the observing practices amongst international best practices.”
    The introduction of Acorn was rushed and resulted in many errors, but it is odd that they have not reviewed and corrected them.
    Willis, I urge you to read my preliminary analysis at http://kenskingdom.wordpress.com/2012/05/14/acorn-sat-a-preliminary-assessment/
    which might give you some further background. There are many other faults to be highlighted.
    Ken Stewart

  92. Philip Bradley said @ June 29, 2013 at 2:49 pm

    The error is in trying to retrospectively apply the WMO temperature recording standard.

    Given the rather infrequent application of the WMO Standard at temperature recording stations, it does seem rather pointless. Then some of us find the idea of using average temperature as a proxy for entropy rather pointless.

    @ Gonzo

    You left off the dates for Marble Bar’s record hot spell: Oct. 30, 1923 to Apr. 7, 1924.

    D’you think Ms Gaia was angry that summer, too? ;-)

  93. Willis,
    “the standard Bourke data has a max temp for that day of 32.7°C … but the ACORN-SAT data has thrown that data point out entirely”

    Yes, the 32.7 is there – but you didn’t mention that it is flagged “Not quality controlled or uncertain, or precise date unknown”.

    By “recent” I was referring to the data set I linked to “Recent Months at Bourke”. Basically the last year. That’s enough to cover any post-ACORN period. Just don’t use flagged data, if you see any.

  94. Streetcred said @ June 29, 2013 at 5:00 pm

    stan stendera says:June 29, 2013 at 12:08 am
    >… What is wrong with these people that they just can’t tell the simple truth. No grant is worth your soul.

    Mastering the art of Lying is an entry requirement for Australian socialist politicians and senior public servants … much like Obama and his mob.

    How odd! Back in the early 1970s I worked in the CES and PMG department. I was told that since I was not Roman Catholic and I was a member of the Labor Party, that I would never be promoted. After teaching two people their job when they were promoted above me, I realised the truth of this and quit.

  95. Nick Stokes said @ June 29, 2013 at 6:09 pm

    Willis,
    “the standard Bourke data has a max temp for that day of 32.7°C … but the ACORN-SAT data has thrown that data point out entirely”

    Yes, the 32.7 is there – but you didn’t mention that it is flagged “Not quality controlled or uncertain, or precise date unknown”.

    Contradiction. The number in the record is either 99999.9, or 32.7. Someone’s telling porkies…

  96. Ian George says: June 29, 2013 at 5:39 pm
    “However, I am sure that Bourke would have had an SS then as it was an important w/s.”

    According to Trewin, the screen at Bourke was installed August 1908.

  97. In outback Australia things are different and you should be surprised by such extreme temperatures because it can get pretty hot out there. It is far from the sea and the surrounding desert acts as a heat trap.
    Of course if you come from the city you have no idea about what the outback is like. The scale of everything is bigger and more extreme.
    In the city you are spoiled and pampered in every way. Unless you have lived out there you really have no idea of the scale of things.
    For example, you take it for granted that you will have instant information on tap. In the Australian outback even the radio news is three days old. Just going from your front door to your letter box you take at least a week’s rations.
    As for extreme weather, you can be sure of it out there. The dust storms are so thick that the rabbits dug warrens in them. The wild life has adapted to the environment and become more fierce. The mosquitoes don’t suck blood, they suck bone marrow.
    Of course people also adapt. One of the well known characters from the old days was Crooked Mick who worked on stations (ranches) out there. One of his jobs was putting up fences and he was the best and fastest. He would lay so much fence in one day that it took him three days to walk back to the start.

    So the idea that minimum temperatures can exceed maximum temperatures is no big deal.

  98. @ Lew Skannen

    Wasn’t Crooked Mick the bloke with a face like a buffalo turd? And if you think that’s bad, you should have seen his bird! :-)

    The Git has fond memories of an outback cop called Blue who introduced him to the philosophy of Spinoza over a few beers…

  99. Lew Skannen – thanks for introducing some truth into the discussion of the BOM’s data. :)

    Like the UK Met office, the BOM was taken over in the late 1990s by spivs with hair gel and communications degrees whose job was to hype CAGW. They easily found naive and compliant people with science degrees (jobs in science always being hard to find) and away they went.

    In conjunction with CSIRO, the nation’s two most respected science bodies proceeded to swerve completely off the path of impartial research and measurement and into propaganda. Worrying about “global warming” (later rebadged “climate change”) found its way into their mission statements and corporate plans. This peaked in 2007, when our briefly resurrected Prime Minister, Kevin Rudd, announced that it was the greatest moral challenge facing our generation.

    Like the Royal Society in the UK, and many other prestigious institutions, they sold their hard-won reputations for a mess of pottage. That the BOM had to invent a new metric called the national temperature average (or whatever they call it) and go along with unscientific crap like the “angry summer” must have the traditional scientists who once worked there – long since moved along – crying in their beer.

  100. Jo Nova says:
    June 29, 2013 at 5:41 pm
    Willis, The reason the BOM went to the trouble of doing a whole new data set was because a group including Sen. Cory Bernardi, myself, Ken Stewart, and the BOM independent audit team (which I put together in 2010) asked the Australian National Audit Office to do an independent audit of the BOM data. Bernardi raised this in parliament. They had to respond.

    … more good stuff snipped …

    First, Jo, thanks for your comments giving credit to those to whom it is assuredly due.

    Next, thanks for your great blog, which is always worth reading.

    I fear my knowledge of the Australian situation is somewhat out of date. I knew that complaints about shabby records had forced a re-build of the whole deal, but I didn’t realize you were on the front lines.

    In any case, my very best to you, and congratulations, keep them on the hop.

    w.

  101. Yes, the Stevenson screen was installed at Bourke in 1908.
    Here’s the adjustments made to the Bourke temperature record by Simon Torok in 1996.

    This is the code they used.

    Station
    Element (1021=min, 1001=max)
    Year
    Type (1=single years, 0=all previous years)
    Adjustment
    Cumulative adjustment
    Reason : o= objective test
    f= median
    r= range
    d= detect
    documented changes : m= move
    s= stevenson screen supplied
    b= building
    v= vegetation (trees, grass growing, etc)
    c= change in site/temporary site
    n= new screen
    p= poor site/site cleared
    u= old/poor screen or screen fixed
    a= composite move
    e= entry/observer/instrument problems
    i= inspection
    t= time change
    *= documentation unclear

    48013 1021 1965 0 -0.2 -0.2 odn
    48013 1021 1909 0 +1.0 +0.8 ords*
    48013 1021 1897 0 -1.7 -0.9 ord
    48013 1021 1885 0 -1.5 -2.4 ord
    48013 1021 1880 1 +2.0 -0.4 rd
    48013 1001 1965 0 +0.3 +0.3 fn
    48013 1001 1915 0 +0.6 +0.9 frd
    48013 1001 1909 0 -1.5 -0.6 ords*
    48013 1001 1898 0 +0.5 -0.1 od
    48013 1001 1893 0 -1.0 -1.1 od
    48013 1001 1882 1 +0.9 -0.2 od
    48013 1001 1881 1 +0.9 -0.2 od
    48013 1001 1880 1 +0.9 -0.2 od
    48013 1001 1879 1 +0.9 -0.2 od
    48013 1001 1872 1 +5.0 +3.9 d

    as you can see adjustments for the SS were made to both Max and Min.
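    For anyone who wants to work with those adjustment lines programmatically, here is a throwaway Python parser. The field layout simply follows the legend quoted above; the function and field names are mine, nothing official:

```python
def parse_adjustment(line):
    """Parse one Torok-style adjustment line into a dict.

    Fields, per the legend above: station, element (1021=min, 1001=max),
    year, type (1=single year, 0=all previous years), adjustment,
    cumulative adjustment, and the reason codes.
    """
    station, element, year, kind, adj, cum, reason = line.split()
    return {
        "station": station,
        "element": "min" if element == "1021" else "max",
        "year": int(year),
        "single_year": kind == "1",
        "adjustment": float(adj),
        "cumulative": float(cum),
        "reasons": reason,  # e.g. 's' = Stevenson screen supplied
    }

rec = parse_adjustment("48013 1021 1909 0 +1.0 +0.8 ords*")
print(rec["element"], rec["year"], rec["adjustment"])  # → min 1909 1.0
```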

  102. Nick Stokes says:
    June 29, 2013 at 6:09 pm (Edit)

    Willis,

    “the standard Bourke data has a max temp for that day of 32.7°C … but the ACORN-SAT data has thrown that data point out entirely”

    Yes, the 32.7 is there – but you didn’t mention that it is flagged “Not quality controlled or uncertain, or precise date unknown”.

    No need to mention it; it’s irrelevant. You claimed that the two records were “identical”. I assume you understand what “identical” means. It doesn’t mean “almost the same, but one is flagged 32.7°C and one is 99999.9″. Here’s your claim:

    ACORN checked historic data – recent data, automatically recorded every half hour, doesn’t change. If you check Bourke ACORN for December 2012 (or any other recent month) vs general BoM, available to present, you’ll find they are identical.

    I showed very clearly that they are NOT IDENTICAL. But you, being the nit-picking ridiculous jailhouse lawyer that you are, you are just being true to your sworn duty to never, ever admit that you are wrong.

    Sad to say, Nick, they are not identical. So your claim of no further processing after the data is published, your idea that historic data “doesn’t change” is just more patented Stokes BS, thrown out to deceive the unwary.

    But heck, keep it up. The entertainment value of watching you wriggle and squirm trying to prove that “identical” really means “sorta similar” is priceless.

    w.

  103. RockyRoad says:
    June 29, 2013 at 6:30 am

    The Pompous Git says:
    June 29, 2013 at 5:37 am


    “So how would you suggest we go back and redo the readings? Enquiring minds…”

    Like they do in professional science–by marking such readings with great big asterisks and indicating the problem in a companion comment. Leaving them unmarked leaves the assumption that nothing is amiss, when indeed there is.

    That is exactly what they do. They flag what seem to be anomalies and investigate them. Try reading their material on methods.

    http://cawcr.gov.au/publications/technicalreports/CTR_049.pdf

  104. Patrick,

    “They always come crawling back to the “The Zapper” for some more…. sweet, sweet candy”

    Great character!! LOL

  105. If only they would invest as much time in doing the science as they have in coming up with an acronym. Usually you know you are face-to-face with a BS machine when the acronym is longer than four letters. In most cases they have thought up the acronym first, then designed the subject to fit it.

  106. Willis, you really should store the data in an Access database. No programming required to find those errors; a simple SQL query would have done it. Access will also let you do other data mining, such as the number of days per year the temp is above a certain line, the number of record-breaking days, etc. All simple SQL, the results of which you copy and paste into Excel for plotting.

    Just a suggestion.
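    The same one-query check works in any SQL engine, not just Access. A minimal sketch using Python's built-in SQLite, with a few made-up illustrative rows standing in for the downloaded data:

```python
import sqlite3

# A throwaway in-memory table standing in for the ACORN-SAT download;
# the station names and values are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE temps (station TEXT, date TEXT, tmin REAL, tmax REAL)")
conn.executemany(
    "INSERT INTO temps VALUES (?, ?, ?, ?)",
    [
        ("Alice Springs", "1910-03-01", 21.7, 20.8),  # inverted pair
        ("Alice Springs", "1910-03-02", 17.4, 20.5),
        ("Bourke",        "1939-01-05", 26.0, 44.1),
    ],
)

# One query flushes out every record where the minimum exceeds the maximum.
rows = conn.execute(
    "SELECT station, date, tmin, tmax FROM temps WHERE tmin > tmax"
).fetchall()
print(rows)  # only the inverted Alice Springs row comes back
```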

  107. I downloaded some of the data; I wanted to see what the temp range was five days on either side of the bad records, such as
    ID Location TMin TMax Date DateString
    15590 Alice Springs 21.7 20.8 01-Mar-10 19100301

    Once plotted you can’t tell whether the TMax is wrong or the TMin is wrong. It looks like a cold front moved through, as the days before are in the high 30s but the days after are in the mid 20s.

    Temp_Data.Date Temp_Data.MinTemp Temp_Data.MaxTemp
    24-Feb-10 22.4 34.1
    25-Feb-10 22.4 34.8
    26-Feb-10 23.1 36
    27-Feb-10 23.1 34.8
    28-Feb-10 23.1 30.2
    01-Mar-10 21.7 20.8
    02-Mar-10 17.4 20.5
    03-Mar-10 17.4 27.9
    04-Mar-10 20.5 26.6
    05-Mar-10 15.1 20.8
    06-Mar-10 17.4 23.8

    So how you “fix” such mistakes will be a head scratcher.

  108. Nick
    Thanks, I stand corrected re the date Bourke’s Stevenson screen was installed.
    Also thanks to Janama for the adjustment codes. Still, I can’t see any logic in the adjustments, with high temps reduced by differing amounts and low temps increased.

  109. Willis, I figured out why some days have a higher TMin than TMax. It’s an issue of the timing of when TMax and TMin are taken. The one time I checked the temps five days on either side, it was clear a cool frontal system had moved through. Follow this: the TMin is taken at night, say just after midnight, but the TMax is taken some time the following afternoon, and the night-time temp could be higher than the following day’s high as the cold front moved through. Hence the data is actually correct. Nothing to fix.

  110. What is also interesting is that the anomalous TMin>TMax was in 2010. 2010 was actually an abnormally cool year. Not once did it get above 29C. 232 days were below average. That one anomalous day, Mar 1, and the day before had the coolest TMax of the year – a large downward plunge in TMax for that period.

  111. If you read ACORN’s procedures you will see that when they measure temperatures they record the minimum and maximum temperatures from separate thermometers. If they read within 0.5C of each other the temperatures are accepted. A temperature recorded as 14.5 for the maximum and 14.8 for minimum would be acceptable because of the uncertainty of the thermometers. If the temperature then went down for the rest of the day (due to a passing cold front) the minimum would be higher than the maximum. Anyone who has measured data is aware that issues like this come up, especially when you have millions of data points.

    Some of the differences are too large for this explanation. At Open Mind the scientist in charge of the ACORN data set posted this response:

    “The situation actually arises because, where adjustments are carried out to the data (e.g. because of site moves), the maxima and minima are adjusted independently. What this means is that if the maxima at a site in a given year are adjusted downwards because the former site is warmer than the current one (or if the minima are adjusted upwards because the former site is cooler), and you have a day when the diurnal range in the raw data is zero or near zero, you could end up with the adjusted max being lower than the adjusted min (e.g. if the raw data have a max of 14.8 and a min of 14.6, but the mins are adjusted up by 0.4, you would end up with a max of 14.8 and a min of 15.0).

    What this reflects, in essence, is uncertainty in the adjustment process (the objective of which is to provide the best possible estimate of what temperature would have been measured at a location if the site on that day was as it was in 2013). Clearly in these cases either the estimate of the max is too low or the min is too high; however, providing the adjustment process is unbiased, these cases will be offset by cases where the max is too high/min is too low, and there is no overall bias.

    We’ve decided, though, that the internal inconsistency (which, as Tamino notes, affects only a tiny percentage of the data) looks strange to the uninitiated, so in the next version of the data set (later this year), in cases where the adjusted max < adjusted min, we'll set both the max and min equal to the mean of the two."

    Any real data sets with millions of points have issues where adjustments are required. If you ask you can get explanations for many things.
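    The mechanism Trewin describes is easy to reproduce with his own numbers. A minimal sketch – my own function name, applying independent offsets and then the “set both to the mean” fix he proposes, rounded to the 0.1°C the data is reported in:

```python
def adjust(raw_max, raw_min, max_adj, min_adj):
    """Apply independent homogenisation offsets to a day's max and min.

    If the adjusted pair ends up inverted (max < min), replace both with
    their mean, rounded to 0.1C -- the remedy described in the quote.
    """
    new_max = raw_max + max_adj
    new_min = raw_min + min_adj
    if new_max < new_min:
        new_max = new_min = round((new_max + new_min) / 2, 1)
    return new_max, new_min

# Trewin's example: raw max 14.8, raw min 14.6, minima adjusted up by 0.4.
# The pair inverts (14.8 vs 15.0), so both collapse to the mean.
print(adjust(14.8, 14.6, 0.0, 0.4))  # → (14.9, 14.9)
```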

  112. Michael Sweet says:
    July 1, 2013 at 7:15 am

    If you read ACORN’s procedures you will see that when they measure temperatures they record the minimum and maximum temperatures from separate thermometers.

    … Any real data sets with millions of points have issues where adjustments are required. If you ask you can get explanations for many things.

    Thanks, Michael, but so what? You seem to think that my issue was that they had not explained why the maximum was below the minimum. I don’t care a fig for their very reasoned explanation.

    My issue was not that I hadn’t heard their excuses for the problem. It was that they had not fixed the problem despite having a year to do so.

    And while (as both Tammy and I noted, it seems) this only affects a small portion of the data … so what? My motto in my blog posts is “Perfect is good enough”, and that’s just for a blog post. As I said in the post, the issue is that if you are willing to let one problem slide that we can see, how many others are you letting slide that we can’t see?

    Sounds like they are planning to fix it at some future date … whoopee …

    w.

    PS—You are also discussing this as if the Australian BoM did not have a horrible, terrible record in curating and “adjusting” their data. You talk as if they were reasonable folks who just slipped up, as though they were actually honest men attempting to present a true picture of the historical temperature.

    But history is very much against you on that one, they are far from honest brokers. Go read the series of articles on Jo Nova’s blog (the links are upstream), and you’ll get a sense of just how sly and slippery the Aussie BoM is, and how much they’ve sold their souls to Noble Cause Corruption … no, I don’t trust them, Michael, not one bit, and I don’t trust their excuses and you’d be a fool to do so yourself.

  113. “My issue was not I hadn’t heard their excuses for the problem. It was that they had not fixed the problem despite having a year to do so.”

    Because there is nothing to fix if the temps are taken at different times. The only way to know for sure is to look at the hourly data for Feb 28, Mar 1, and Mar 2 to see what the actual readings were and how they changed.

    It doesn’t make sense that they would have one thermometer to read TMin and one to read TMax. Here in Canada, Environment Canada takes hourly measurements, and the TMin and TMax are then taken from that dataset, so TMin can’t be higher than TMax.

    This is why I suspect it is the timing of when they take the temps, on the assumption that night is cooler than midday.

    Only the hourly measurements will tell us. Is that available?
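    The Environment Canada approach described here rules out inversions by construction, since both extremes come from the same series. A trivial sketch with invented hourly readings for a cold-front day:

```python
def daily_extremes(hourly):
    """Derive the day's TMax and TMin from one series of hourly readings.

    Because both come from the same list, TMin can never exceed TMax.
    """
    return max(hourly), min(hourly)

# A cold-front day: warm just after midnight, falling steadily all day.
day = [21.7, 21.0, 20.3, 19.5, 18.8, 18.0, 17.4, 16.9, 16.5, 16.2,
       16.0, 15.8, 15.7, 15.6, 15.5, 15.4, 15.3, 15.2, 15.1, 15.0,
       14.9, 14.8, 14.7, 14.6]
tmax, tmin = daily_extremes(day)
print(tmax, tmin)  # → 21.7 14.6
```

    A once-a-day max/min thermometer read at 9am, by contrast, splits these two extremes across different calendar days, which is where the apparent inversions can creep in.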

  114. I got to thinking about Michael Sweet’s post above, where he quotes the “scientist in charge” of the ACORN-SAT data as saying over at Tamino’s web site:

    “The situation actually arises because, where adjustments are carried out to the data (e.g. because of site moves), the maxima and minima are adjusted independently. What this means is that if the maxima at a site in a given year are adjusted downwards because the former site is warmer than the current one (or if the minima are adjusted upwards because the former site is cooler), and you have a day when the diurnal range in the raw data is zero or near zero, you could end up with the adjusted max being lower than the adjusted min (e.g. if the raw data have a max of 14.8 and a min of 14.6, but the mins are adjusted up by 0.4, you would end up with a max of 14.8 and a min of 15.0).

    What this reflects, in essence, is uncertainty in the adjustment process (the objective of which is to provide the best possible estimate of what temperature would have been measured at a location if the site on that day was as it was in 2013). Clearly in these cases either the estimate of the max is too low or the min is too high; however, providing the adjustment process is unbiased, these cases will be offset by cases where the max is too high/min is too low, and there is no overall bias.

    We’ve decided, though, that the internal inconsistency (which, as Tamino notes, affects only a tiny percentage of the data) looks strange to the uninitiated, so in the next version of the data set (later this year), in cases where the adjusted max < adjusted min, we'll set both the max and min equal to the mean of the two."

    Now, let us suppose that the explanation of The Scientist In Charge Of ACORN-SAT is correct. First, this means that everyone who has theorized that the reason was a cold front moving through, or some actual meteorological condition, is incorrect.

    Next, I wondered, how large could such an error be? The adjustments to the record are done a tenth of a degree at a time, so it seems like the errors would all be less than a degree. In his example, it is two tenths of a degree. And indeed, most of the errors are in that range.

    But if his explanation is correct, what are we to make of these errors?

    Tibooburra, 1912/06/23, minimum temp was 1.2°C warmer than the maximum

    Tibooburra, 1913/10/31, minimum temp was 2.2°C warmer than the maximum

    Tibooburra, 1930/10/29, minimum temp was 2.2°C warmer than the maximum

    The Tibooburra ACORN-SAT record starts in 1910. For the ACORN scientist’s explanation to hold water, by 1912 they’d have had to have already adjusted the Tibooburra record by two degrees … sorry, I’m not buying that explanation for one minute.

    Then we have

    Tennant Creek, 1940-01-31, min above max by 3.9°C … are you starting to see the problem?

    Then we go on to

    Tarcoola 1923-05-01, minimum temp was 2.4°C warmer than the maximum

    Now this one is interesting. There were no examples in the record of this happening at Tarcoola prior to this one, and the next example did not occur for another twenty-five years, in 1948 … and The Scientist In Charge claims that this is an artifact of the adjustment process?

    And those are just the problems in the stations that start with “T” …

    This is a perfect example of why this kind of error should concern the “scientist in charge”, or at least why I’m concerned about this kind of error. My last job in the accounting field was Chief Financial Officer for a company with $40 million in annual sales. And as any good accountant will tell you, assuming that you know the reason for an error can be lethal, regardless of the size of the error. The problem is that one error can live in another error’s shadow, or two errors can counteract each other, leaving little visible. And like anyone who has done much accounting, I’ve been bitten by both of those problems, and I’m very aware of them.

    Now, the very, very worst thing that you can do with a small error like that is sweep it under the rug without determining exactly, not approximately but exactly, what is wrong in every instance of that error. Yes, nine times out of ten sweeping a small error under the rug makes little difference. But, as in this case, sometimes not all of the errors are from what The Scientist In Charge Of ACORN-SAT thinks they are from. Sometimes, as with Tarcoola, there’s a 2.4°C error hiding in the midst of a bunch of half-degree errors.

    Which is why the most problematic statement in The Scientist’s explanation is this one:

    … so in the next version of the data set (later this year), in cases where the adjusted max < adjusted min, we'll set both the max and min equal to the mean of the two.

    HAIEEE … this is exactly why I was concerned. They are going to automate the error out of existence. All his proposed procedure will do is make damn sure that Willis never discusses these errors again, not because they’ve gone away, but because they will be hidden. Not removed. Not fixed. Simply hidden. No effort to find out what caused the 3.9°C error in the midst of the other tenth-of-a-degree errors in Tennant Creek. Just change the procedure to hide, not eliminate but hide, that error when it gets bad enough that it is visible … but make no attempt to actually fix the underlying problem. Look, if the minimum temp can be off so far that it is more than the maximum temp … then how many 3.9°C errors are there in the record that never got caught, because the minimum didn’t happen to end up above the maximum? And now, we’ll never find out, because they plan to hide those occurrences.
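For clarity, the proposed procedure as quoted amounts to something like the following (a hypothetical sketch, not the BoM's actual code; the function name is mine):

```python
def mask_inversion(tmax, tmin):
    """Hypothetical version of the quoted fix: when adjusted max < min,
    replace both with their mean. The pair becomes internally consistent,
    but the size of the original inversion is erased."""
    if tmax < tmin:
        midpoint = (tmax + tmin) / 2
        return midpoint, midpoint
    return tmax, tmin

# A 0.2 C inversion (the quote's own example) and a 3.9 C one (as at
# Tennant Creek) both collapse into identical-looking zero-range days,
# so the big one can no longer be told apart from the small ones:
print(mask_inversion(14.8, 15.0))  # (14.9, 14.9)
print(mask_inversion(20.0, 23.9))  # (21.95, 21.95)
```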

    And that is why, as an accountant, I become concerned when I see such small “trivial” errors. Because it makes me wonder—what other errors are hidden in there?

    The main problem in all of this is excessive reliance on computers, and a corresponding unwillingness to dig through and look at each record. Now, when it’s the BEST dataset with 30,000+ station records, I can understand that … but when you have 112 records, and they are supposed to be your “Climate Reference Network”, that is just plain intellectual laziness, part of the “phone-it-in” attitude that seems prevalent in the Aussie BOM.

    Obviously, The Scientist In Charge of ACORN-SAT didn’t take the trouble to do what I did, look at each individual error. He made an incorrect assumption, and has now hurried off to implement an incorrect solution.

    Oh, well, I suppose at least he fooled Tamino with his nonsense, that’s always a good sign … me, not so much.

    w.

    PS—I did love The Scientist’s explanation of how it will all come right …

    Clearly in these cases either the estimate of the max is too low or the min is too high; however, providing the adjustment process is unbiased, these cases will be offset by cases where the max [is] too high/min is too low, and there is no overall bias.

    A real scientist would, you know, actually determine if “the adjustment process is unbiased” before making such an unsupported claim, rather than simply assuming that it is unbiased …

    But then since the adjustment process is not actually the problem causing the two- and three-degree errors, I guess it doesn’t matter … maybe that’s what he meant by the errors offsetting each other …

  115. Willis, you have highlighted what we have been saying for a year- if there are so many mistakes of this magnitude, and they missed any quality checking before publication, and haven’t been fixed a year later, what confidence can we have in the whole dataset? As I mentioned previously, Acorn is riddled with errors- 10 degree mistakes are easy to find, as are adjustments of over 8 degrees. We only have Blair Trewin’s word for it that there is no bias, and I’m not convinced- that’s what David Jones assured me about the previous “High Quality” (sic) dataset before I proved it to be comprehensively biased by at least 40%. ACORN should have been called A CON. Having said that, the record since 1979 reflects UAH for Australia quite well.
    Thanks for your work.
    Ken Stewart

  116. Obviously, The Scientist In Charge of ACORN-SAT didn’t take the trouble to do what I did, look at each individual error. He made an incorrect assumption, and has now hurried off to implement an incorrect solution.

    What was the assumption you allege he made? Seems to me like he explained how such anomalies occur.

    PS—I did love The Scientist’s explanation of how it will all come right …

    “Clearly in these cases either the estimate of the max is too low or the min is too high; however, providing the adjustment process is unbiased, these cases will be offset by cases where the max [is] too high/min is too low, and there is no overall bias.”

    A real scientist would, you know, actually determine if “the adjustment process is unbiased” before making such an unsupported claim, rather than simply assuming that it is unbiased …

    He didn’t state an assumption, he gave a conditional caveat. Aren’t you making an assumption yourself? A real scientist would test the proposition. A real scientist would have asked questions and investigated. Looks to me like you threw your hands up when you came across some anomalies and made a grand statement about the quality of ACORN-SAT. That is blog-standard science. At any time you could have contacted ACORN and asked about the anomalies, but it took a commenter to go through the painstaking task of discovering the email address of the ACORN director, and laboriously constructing some sentences to discover more about the issue. That commenter followed a reasonable procedure in investigating the issue.

    And that is why, as an accountant, I become concerned when I see such small “trivial” errors. Because it makes me wonder—what other errors are hidden in there?

    You discovered a 0.02% ‘error’ based on the 917 inverted min/max in the data out of several million data points. The BOM say they have an error rate of a few tenths of a percent, an order of magnitude greater than you discovered. How is that hiding the errors? They even discuss some of the errors that crop up. If you want to find out what kinds of errors there have been, you can read their reference material, and, if you are a real investigator, contact them for further details. ‘Real scientists’ go the distance.

    But if you can’t be bothered doing that, how about calculating what difference the ‘errors’ you discovered would make to the claim that, based on BOM surface data, summer 2012/2013 was the warmest on record. If the record was broken by 0.2C, then how much impact could 917 data errors have out of 4 million? For instance – summer data is 10416 data points per annum for a network of 112 stations. If all 917 errors occurred in 2012/2013 summer, and all were biased high by 3.9C, what impact would that have on the record if you removed those anomalies altogether? But you know when these anomalies occurred, so you can simply take them out of the record and see what impact that has. Then you would be applying a statistical test to the issue that kicked off your investigation.
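That deliberately extreme bound is quick to compute. A back-of-envelope sketch using only the figures in this comment (in reality the 917 days are spread over a century of data, so the true effect would be far smaller):

```python
# Worst-case assumptions from the comment above: every inverted pair
# lands in a single summer, and every one is biased high by the largest
# inversion reported upthread.
n_errors = 917         # inverted min/max days in the whole data set
bias_c = 3.9           # largest reported inversion, in degrees C
summer_points = 10416  # one summer's data points for the 112 stations

worst_case_shift = n_errors * bias_c / summer_points
print(round(worst_case_shift, 3))  # ~0.343 C shift in that summer's mean
```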

    Either work with the information you have and do some statistical analysis with appropriate caveats, or find out more about the information you don’t have and do a thorough job. So far you’ve made sweeping criticisms with little effort to explore them.

  117. summer data is 10416 data points per annum for a network of 112 stations

    That would be the number of averages; the min/max data points would be twice as many, obviously.

  118. Barry, the commenter who contacted BOM and got such a rapid response was indeed fortunate- normal response time from Webclimate is 3 days, and it took 3 months and a complaint to the minister before I got a reply to my queries re HQ data, which didn’t really answer my questions even with continued pushing. The point remains- this is one example of the many errors in Acorn which have not been fixed 15 months after its first release.
    Ken Stewart

  119. barry says:
    July 1, 2013 at 7:17 pm

    Obviously, The Scientist In Charge of ACORN-SAT didn’t take the trouble to do what I did, look at each individual error. He made an incorrect assumption, and has now hurried off to implement an incorrect solution.

    What was the assumption you allege he made? Seems to me like he explained how such anomalies occur.

    He did explain how they occur. He said they happened because of the adjustments made to the min and max datasets. That was an assumption, as the data shows.

    If you think that bit of handwaving from the Scientist in Charge explains a minimum temperature which is 3.9°C above the max, just one day like that, with twenty years on either side without the minimum ever exceeding the maximum, then I fear you need professional help. You’re beyond my poor ability to educate.

    And The Scientist In Charge, obviously, didn’t actually look at the errors. If he had, he wouldn’t have tried a bullshit excuse like the one about the data adjustments causing the problem to handwave away a 3.9°C error.

    But if you can’t be bothered doing that [writing to the BOM, see below], how about calculating what difference the ‘errors’ you discovered would make to the claim that, based on BOM surface data, summer 2012/2013 was the warmest on record.

    I’d love to do that, in fact it’s why I started looking at the ACORN-SAT dataset … but unfortunately, the Scientist In Charge hasn’t bothered updating the ACORN dataset since December 2012 …

    Next, you claim that the tiny size of the error should make it immune from discussion. As I said, the error itself is not an issue with me. All datasets contain errors. The issue is that a year ago, it was pointed out to the Scientist in Charge or at least to his merry men that there was a problem with the minimum and maximum temperatures—some of the minimums were above the maximums. And this week, after having had a year to look at the issue, he’s giving us fairy tales about how the error is from the adjustments … some may be, but a 3.9°C error in the middle of forty years of perfectly good data is not from the adjustments, no matter what The Scientist foolishly claims.

    And The Scientist was only able to blithely assume that the error was from adjustments because he has not taken one simple dang look at the error that was brought to his attention a year ago. If he had, he’d have said wow, minimum 3.9°C above the max, that needs fixing.

    That’s what worries me, barry … nobody’s minding the store. Nobody’s looked at the error.

    Finally, you suggest that I write to the Australian BoM to find out what excuse they are trotting out this week for their continuing series of errors. Clearly, you haven’t been following the BoM story for the last five years or so. Check out the links in Jo Nova’s post upstream. They’ve been asked, they’ve been cajoled, and finally they’ve been forced to submit to an audit to try to get them to reveal their methods and what they’re doing to the data, and they are still stonewalling … you can write them if you think it’s a brilliant plan, I’ll pass. So far in this thread, all that they have delivered is vague platitudes about temperature adjustments that turned out not to explain the errors at all …

    In short, your assumption of good will on their part is touching, and does credit to your heart, but the real world is not so accommodating …

    w.

    PS—Automatically “averaging out” errors, as The Scientist In Charge says they are going to do in future, is very poor technique. It prevents you from ever finding out just what did cause that ugly 3.9°C error … not a good plan.

  120. Ken, can you describe what the error is, exactly, and how it leads to a bias in the records sufficient to undermine the assertion that Australian summer 2012/13 is the warmest on record? I can’t see how this ‘error’ would make a substantial difference.

  121. He did explain how they occur. He said they happened because of the adjustments made to the min and max datasets. That was an assumption, as the data shows.

    I still don’t see an assumption. He neither assumes the anomalies are correct (he says the opposite), nor makes an assumption about the adjustment process. What are you referring to? Explanation =/= assumption in my dictionary.

    And The Scientist In Charge, obviously, didn’t actually look at the errors. If he had, he wouldn’t have tried a bullshit excuse like the one about the data adjustments causing the problem to handwave away a 3.9°C error.

    Now, that is an assumption. Why not just ask them if they noticed these anomalies, and why they didn’t do something about it if they did?

    Next, you claim that the tiny size of the error should make it immune from discussion.

    On the contrary. I said:

    how about calculating what difference the ‘errors’ you discovered would make to the claim that, based on BOM surface data, summer 2012/2013 was the warmest on record

    Rather than making the ACORN data set immune to analysis, which seems to be the point you are pursuing, I urged you to work with the information you have. Analyse what you know – don’t throw out the data just because it’s problematic. And for what you don’t know, investigate more deeply. Talk about projection!

    Check out the links in Jo Nova’s post upstream.

    None, that I could see, refer to this particular issue. When was the BOM made aware of the min/max anomalies a year ago? Do you have a link?

    I am reminded of Fall et al, which actually did the hard yards, made a reference network and compared trends and data. That was ‘real science’, and they analysed and documented problems with the min/max trends (while finding the average values seemed to be ok).

    I have no problem with, indeed I encourage you (or anyone else) to investigate problems you perceive with the BOM data. What I think is outlandish and ironic is BOM being scolded for not doing their ‘homework’ when you’ve clearly done very little on this particular issue (min/max anomalies) yourself. Handwaving? That’s when one is dismissive without doing much analysis, isn’t it?

    In short, your assumption of good will on their part is touching…

    That is incorrect and irrelevant. Analysis should take place without assumption of good-will or bad. ‘Real science’ is neutral. Think there’s not enough information? Then take steps to rectify that. You have speculated that min/max adjustments may be biased. Follow your inquiry, test data randomly, and also consider the validity or not of adjustment methods. Anyone can make a graph from selected data to make a point as Jo Nova does, but that’s not neutral analysis.

    If you could formulate your scientific criticisms into questions for the BOM, what would they be? These would clarify your concerns and focus your investigation. Maybe you could politely email the BOM for further information.

  122. barry says:
    July 1, 2013 at 10:07 pm

    He did explain how they occur. He said they happened because of the adjustments made to the min and max datasets. That was an assumption, as the data shows.

    I still don’t see an assumption. He neither assumes the anomalies are correct (he says the opposite), nor makes an assumption about the adjustment process. What are you referring to? Explanation =/= assumption in my dictionary.

    Barry, the assumption was that the errors were from the adjustments. The Scientist In Charge said, and I quote:

    The situation actually arises because, where adjustments are carried out to the data (e.g. because of site moves), the maxima and minima are adjusted independently. What this means [is] that if the maxima at a site in a given year are adjusted downwards because the former site is warmer than the current one (or if the minima are adjusted upwards because the former site is cooler), and you have a day when the diurnal range in the raw data is zero or near zero, you could end up with the adjusted max being lower than the adjusted min (e.g. if the raw data have a max of 14.8 and a min of 14.6, but the mins are adjusted up by 0.4, you would end up with a max of 14.8 and a min of 15.0).

    What this reflects, in essence, is [uncertainty] in the adjustment process (the objective of which is to provide the best possible estimate of what temperature would have been measured at a location if the site on that day was as it was in 2013).

    OK, I can see how that might kinda make sense for small errors, with the minimums ending up slightly higher than the maximums because of the adjustments to the data … although if your adjustments lead to physically impossible situations, wouldn’t you question the adjustments?

    But in any case, here are the errors in just one station, Tennant Creek.

    Barry, perhaps you can explain to the class how The Scientist In Charge was NOT making assumptions, but was actually correct when he said that these errors are from “adjustments” …

    And if not, if you can’t explain those two degree and three degree plus errors as the result of adjustments, perhaps you might point out to the class the incorrect assumption of The Scientist In Charge.

    I can explain it to you, and I have, several times … but the understanding part, you’ve gotta provide that yourself.

    w.

  123. barry says:
    July 1, 2013 at 9:23 pm

    Ken, can you describe what the error is, exactly, and how it leads to a bias in the records sufficient to undermine the assertion that Australian summer 2012/13 is the warmest on record? I can’t see how this ‘error’ would make a substantial difference.

    BZZZT! Ken didn’t say anything about an error being “sufficient to undermine the assertion” about the Australian summer being so hot. Near as I can tell he said nothing about that summer at all. If he did, you need to quote the claim that you are objecting to so we can decipher your objection. What on earth are you referring to?

    Nor do you say what “this ‘error'” is that you are questioning in the final sentence. Since this is ACORN-SAT, there’s lots to choose from … for example, you claim you can’t see how it will make a “substantial difference”.

    But since you are the first person in this thread to use the term “substantial difference”, I’m clueless what error you might be referring to.

    More information, please …

    w.

    Sorry Barry, not with you- I’m not sure we’re talking about the same things. There were obvious errors causing bias in the HQ dataset. There are numerous errors in Acorn- I’ve mentioned a couple above. Many stations have past winter maxima cooled, but there is no evidence of deliberate bias. I don’t know whether the errors cause bias, but they certainly cause lack of confidence in the record. And I haven’t mentioned the Angry Summer in this thread. And Willis- Tennant Creek in the hot interior- I can’t imagine minima ever exceeding maxima for any reason.

    Ken

  125. although if your adjustments lead to physically impossible situations, wouldn’t you question the adjustments?

    Sure. But are the anomalies physically impossible? “It is not possible that the warmest time in a 24 hour period could be when the thermometers are reset” (9:00 am in the case of BOM practice, mostly) – I would also test that assertion. I did some googling to see if there were other places in the world where this has happened, and indeed it does, as far as weather watchers have posted. Of billions of data worldwide, surely this could happen a few times. But I would not assume that this was the case for the BOM data either. I’d make no assumptions.

    Barry, perhaps you can explain to the class how The Scientist In Charge was NOT making assumptions, but was actually correct when he said that these errors are from “adjustments” …

    Short of doing the work, I could not explain that. The same should apply equally to anyone else who has not investigated the matter.

    I can explain it to you, and I have, several times

    Really? I may have missed it, but it seems to me you have made assertions (eg, “it’s physically impossible”). But have you tried to replicate the process? Can you explain the adjustment process to begin with, and what is wrong with it? To the point that initiated this branch of the conversation, would they make a difference? Would ‘errors’ of the kind you’ve pointed out lead to a biased temperature record sufficient to discredit the notion of a record-breaking summer or not?

    Ken didn’t say anything about an error being “sufficient to undermine the assertion” about the Australian summer being so hot.

    It appears Ken has done some work on BOM data, so I wondered if he wanted to weigh in on the summer temps issue that provoked your interest.

    And it’s my interest, too, because I live in Australia, and followed the weather reports around the country during the summer. My ‘experience’, limited as it was to anecdotes and an array of data points, was that summer nationally was a particularly warm one, and there were clear records broken across the country. That doesn’t ‘prove’ that the national average was a record-breaker, of course, but that’s why I’m curious about the issue as raised here.

  126. barry says:
    July 1, 2013 at 9:23 pm
    Ken, can you describe what the error is, exactly, and how it leads to a bias in the records sufficient to undermine the assertion that Australian summer 2012/13 is the warmest on record? I can’t see how this ‘error’ would make a substantial difference.

    ———-

    I’ve only looked at Alice Springs, but TMax for the year has gone up since 1910. However, it rose faster from 1910 to 1960. 1960 had the highest TMax; since then the overall rise is very shallow. But what is interesting is that every 9 to 11 years Alice Springs has an abnormally cool summer. Those years are quite prominent.

  127. Barry, here is Alice Springs record Jan TMax:

    Day – Temp – Year
    02 – 45 – 1960
    03 – 45.2 – 1960
    04 – 44.2 – 1972
    05 – 44.6 – 2004
    08 – 43.3 – 1932
    09 – 43.4 – 1932
    10 – 42.9 – 1915
    11 – 42.8 – 1935
    12 – 42.7 – 1928
    13 – 43.5 – 1981
    14 – 43.9 – 1936
    15 – 44 – 1944
    16 – 43.3 – 1932
    17 – 43.2 – 1939
    18 – 44.4 – 2001
    19 – 43 – 1928
    20 – 43 – 1928
    21 – 42.8 – 1935
    22 – 43.2 – 1939
    23 – 42.9 – 1915
    24 – 43 – 1928
    26 – 43 – 1928
    27 – 43 – 1928
    28 – 43.9 – 1936
    29 – 42.6 – 1938
    30 – 44.7 – 1990

    You can see it is dominated by years before 1950. Even the two recent records, the 5th and 18th of Jan, were below the highest of 45C in 1960. Record-breaking years have nothing to do with getting warmer. It has to do with accounting. In the first year of records, every day is a record breaker. As the years of data accumulate, the number of record-breaking days drops off in a decay curve. The reason is that the number of possible slots is huge. If the range for any given temp for any day in Jan is between 20 and 45C, measured in 1/10C, then there are 250 possible slots; for the full year, multiply that by 366.

    To fill all those possible slots would take somewhere round 3000 years.
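The decay curve described above is the classic behaviour of records in a stationary series: year n sets a new record with probability 1/n, so the counts fall off roughly harmonically. A quick simulation (a sketch, not the commenter's actual program):

```python
import random

random.seed(42)
years, trials = 100, 2000
records_in_year = [0] * years  # how often year n set a new record

for _ in range(trials):
    best = float("-inf")
    for n in range(years):
        value = random.random()  # stationary climate: i.i.d. draws
        if value > best:         # a new all-time record this year
            best = value
            records_in_year[n] += 1

# Year 1 is always a record; by year 50 only about 1 trial in 50 sets one.
print(records_in_year[0] / trials, records_in_year[49] / trials)
```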

  128. Barry said:

    But if you can’t be bothered doing that, how about calculating what difference the ‘errors’ you discovered would make to the claim that, based on BOM surface data, summer 2012/2013 was the warmest on record. If the record was broken by 0.2C, then how much impact could 917 data errors have out of 4 million? For instance – summer data is 10416 data points per annum for a network of 112 stations. If all 917 errors occurred in 2012/2013 summer, and all were biased high by 3.9C, what impact would that have on the record if you removed those anomalies altogether?
    ——————————–
    Barry, I am not a scientist nor a mathematician. But, I did spend the best years of my life feeding numbers to very senior politicians to spout in Parliament. In the early years, I had people up the line checking everything I did, but later, as I got better at it, not so much. To the best of my knowledge, nobody ever gave wrong information to Parliament based on my briefs.

    What I learned is (i) always do a back-of-the-envelope check about where the decimal point should be; and (ii) small errors often conceal, or flag, large ones.

    The point that Willis has very patiently been trying to make to you is that the thing the BOM consistently avoids is transparency about errors and weird results. Nobody is jumping up and down yelling “gotcha” because errors occur. Of course they do. Nobody is instantly claiming conspiracy theories because of errors or weird results – they happen, sometimes for valid reasons.

    The problem is that they refuse to be transparent about what they are doing. They make adjustments, expunge records from their working datasets, use new algorithms, invent new metrics – without leaving a visible trail or providing more than a bunch of platitudes to explain what they are doing.

    When preparing briefs for Premiers and Prime Ministers, when confronted with this kind of bullshit, I was always careful to insert the words “I am advised by the BOM (or whoever) that …”. No way was I going to drop some uninformed politician into saying that he/she actually believed it. Some of them chose to drop the qualifier – that was their call.

    But “I am advised that …” doesn’t work so well on WUWT. What possible justification is there for not laying on the table exactly what is happening with publicly funded weather statistics, when, and why?

  129. That’s interesting, jr, but incidental. No state broke the record, and plenty of places locally did not. BOM did not claim that Alice Springs broke its record. The BoM claim is about the national average.

    There will always be more record-breakers early in a record, and by rights there should be fewer and fewer in an unchanging climate as the record lengthens. You’re right that record breakers by themselves do not describe a warming trend. Trend analysis is a different, only slightly related facet of my query.

    • Barry, that depends on what is being used for the trend. If the average is being used, it is largely irrelevant and grossly misleading. The average is simply (TMax+TMin)/2, but that average is not the median temp. Those two extreme ends of the day may each have lasted less than the hourly measurement interval. If you add up all the hourly temps and divide by 24, you get a different number, more often than not below the average.

      An increase in the average doesn’t tell us what is physically going on. For example, in Canada, the average temp has been going up since 1900. But that average increase is because winters are not getting as cold: milder, shorter winters. In fact, max temps in the summer in Canada have been dropping since the mid-1930s. It’s just that the winter increase is faster than the summer decrease, hence the increase in the average.

      Thus the only way to see what is physically going on, beyond the claim of increasing average, is to look at each station’s daily temps. If that data is corrupted in AU, then one can’t make any claims one way or the other. Scientifically, that’s unfortunate.
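The gap between the min/max midpoint and a true 24-hour mean is easy to illustrate. A sketch with a made-up, asymmetric daily temperature profile (long cool night, short afternoon peak):

```python
# Made-up hourly profile for one day, 24 readings in degrees C.
hourly = [10.0] * 7                            # pre-dawn, flat at 10 C
hourly += [10.0 + k for k in range(1, 8)]      # morning warm-up, 11..17 C
hourly += [18.0 - 0.8 * k for k in range(10)]  # 18 C peak, evening cool-down
assert len(hourly) == 24

midpoint = (max(hourly) + min(hourly)) / 2  # the (TMax + TMin) / 2 "average"
true_mean = sum(hourly) / len(hourly)       # mean of all 24 readings
print(round(midpoint, 2), round(true_mean, 2))  # 14.0 13.0
```

With this profile the midpoint overstates the 24-hour mean by a full degree, which is the commenter's point: the two statistics only agree when the daily profile is symmetric.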

  130. The point that Willis has very patiently been trying to make to you is that the thing the BOM consistently avoids is transparency about errors and weird results.

    I’ve read several documents on their methods, which mention errors that crop up and – to some degree – how they deal with them. I don’t see Willis trying to explain that, but he has linked me to Jo Nova pages, advising me that I must be unaware of the flaws in BOM. I flicked through those to see if what he has lit upon is covered, but it isn’t.

    What possible justification is there for not laying on the table exactly what is happening with publicly funded weather statistics, when, and why?

    There is data, discussion of methods, uncertainty and problems with the data easily accessible at the BOM website. Commenters who are regulars here have posted links to them, and so have I. A commenter emailed a question regarding the issue Willis brought up here and they responded. This could be the beginning of a dialog, but neither Willis nor anyone else seems to want to make it so.

    I have tried to focus on the issue Willis brought up, particularly with regard to the matter that initiated his investigation (the record-breaking summer), but people keep talking about how awful BOM is. They may or may not be right, but reading complaints and seeing a lack of willingness to investigate very deeply, or to engage with BoM, can you understand why this might not be persuasive?

    I read at Jo Nova’s that after much lobbying, BoM addressed its data (with little change in general results).

    Rather than work with the data that’s available, the default is to rail against BoM. It seems like a distraction; a talking point to avoid number-crunching. I have asked what the upshot is of throwing out the anomalies. Nothing. I have asked for Ken and Willis to describe what they think has happened to the inverted anomalies. Nothing. I have asked what steps Willis has taken to understand the adjustment processes. Nothing. Did Willis contact BoM to enquire about the issue he discovered? Nothing.

    This pattern does not encourage me to take on the general blandishments. No one is obliged to do any of this, of course, but as sweeping statements are made, I wonder what steps could be taken to address the questions they raise. “BoM have not done their homework”? Statements like that make me skeptical. I’ve read about their methods, and that is not the impression I get. Are their methods invalid? Are their adjustments unreasonable? I don’t know, but the answers are not here yet, or at Jo Nova’s, as far as I’ve read.

    http://cawcr.gov.au/publications/technicalreports/CTR_049.pdf

    BoM provides raw and adjusted data (Jo Nova et al. made use of it). The link above is an overview of their methods. What is missing from it that you think is needed?

  131. I wrote a little program that counts the number of record-breaking days in each year, treating each record as new, from 1910 to 2012 for Alice Springs. Interesting: it shows more of an asymptotic drop:

    (Not sure how to embed images in a post here).
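    The counting logic described above can be sketched as follows. This is an illustration only, in Python with toy data; the commenter's actual program and the Alice Springs series are not shown here.

    ```python
    from collections import defaultdict

    def record_days_per_year(daily):
        """daily: (year, day_of_year, tmax) tuples in date order.
        Counts, for each year, the days that set a new all-time high
        for that calendar day; ties do not count as new records."""
        best = {}                      # day_of_year -> highest tmax seen so far
        counts = defaultdict(int)
        for year, doy, tmax in daily:
            if doy not in best or tmax > best[doy]:
                best[doy] = tmax       # new record for this calendar day
                counts[year] += 1
        return dict(counts)

    # Toy series: every day of the first year is a record by definition,
    # so counts fall off in later years even with no warming trend at all.
    demo = [(1910, 1, 30.0), (1911, 1, 31.0), (1912, 1, 30.5), (1913, 1, 31.5)]
    print(record_days_per_year(demo))
    ```

    For trendless, independent data the chance that year *n* sets a new record for a given day is roughly 1/n, so a drop-off of this shape is expected even before any trend is considered.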

  132. jrwakefield,

    If that data is corrupted in AU, then one can’t make any claims one way or the other. Scientifically, that’s unfortunate.

    If the problems with the data are not understood, then no one can say anything. That’s my point.

    ‘Corrupted’ is loaded language. We need less of that if we want to illuminate issues rather than use them as talking points. As Willis said:

    All datasets contain errors

  133. Willis, I missed a question of yours upthread.

    But since you are the first person in this thread to use the term “substantial difference”, I’m clueless what error you might be referring to.

    The one you pointed out in your article above, and which I’ve mentioned consistently since we started communicating.

    In the entire dataset, there are 917 days where the min exceeds the max temperature…

    You kind of answered the question in your article.

    By itself, the finding likely makes almost no difference for most applications.

    But I wondered if you’d put that to any testing.
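    One simple test along these lines is to compare a summary statistic with and without the inverted min/max days. This is a hypothetical sketch: the 917-day figure comes from the article, but the numbers below are invented.

    ```python
    def impact_of_inverted_days(records):
        """records: (tmin, tmax) pairs. Returns the number of days where
        tmin > tmax, plus the mean daily midpoint with and without them."""
        mids_all = [(lo + hi) / 2 for lo, hi in records]
        mids_ok = [(lo + hi) / 2 for lo, hi in records if lo <= hi]
        n_bad = len(records) - len(mids_ok)
        return n_bad, sum(mids_all) / len(mids_all), sum(mids_ok) / len(mids_ok)

    # Invented numbers: two inverted days in a hundred barely move the mean.
    data = [(10.0, 25.0)] * 98 + [(26.0, 24.0), (30.0, 12.0)]
    n_bad, mean_all, mean_clean = impact_of_inverted_days(data)
    print(n_bad, mean_all, mean_clean)
    ```

    Whether 917 such days across a century of records actually matters would of course depend on the application, which is the open question here.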

    You also said:

    it means that I simply can’t trust the results when I use the data. It means whoever put the dataset out there didn’t do their homework.

    And sadly, that means that we don’t know what else they might not have done.

    Have you read this?

    http://www.cawcr.gov.au/publications/technicalreports/CTR_049.pdf

    6. Data quality control within the ACORN-SAT data set (p. 30)
    6.1 Quality control checks used for the ACORN-SAT data set (p. 31)
    6.2 Follow-up investigations of flagged data (p. 39)
    6.3 Common errors and data quality problems (p. 41)

  134. barry says:

    Rather than work with the data that’s available, the default is to rail against BoM. It seems like a distraction; a talking point to avoid number-crunching.
    —————————————————–
    barry, if Willis never responds to you again, I could understand it. The BOM controls the data “that is available”. They alter it, delete it and reframe it with explanations (if any) that the Tax Commissioner would not accept – like – I just thought up a better way of recording my expenses, and have therefore thrown out my receipts.

    I am reminded of a passage in Spike Milligan’s war memoirs, where they found themselves stationed next to an old cemetery. Mrs so-and-so’s marble slab was being used as a washboard by one of his colleagues. Her inscription said: “Not dead, only sleeping”.

    “She’s not fooling anyone but her bloody self”, muttered Spike’s pal, as he wrung out his socks on her.

  135. johanna,

    is the take-home message “there’s no use, you simply can’t do anything good with BoM data”?

    Do they not make the raw data available? I believe they are in the process of making the codes and methods (the programs particular to the computer system they use for the ACORN-SAT data) available via the internet.

    I’ve just started posting at Ken’s blog, where they’ve been working with the data. No ill will intended to anyone, by the way.

Comments are closed.