Australia and ACORN-SAT

Guest Post by Willis Eschenbach

As Anthony discussed here, some Australian climate scientists think that there was an “angry summer” in 2012–13. Inspired by the necromantic incantations in support of the Aussie claims coming from the irrepressible Racehorse Nick Stokes, I went to take a look at the Australian temperature data. I found out that in response to a host of complaints about their prior work, in March of 2012 the Australian Bureau of Meteorology (BoM) released a new temperature database called ACORN-SAT. This clumsy acronym stands for the Australian Climate Observations Reference Network – Surface Air Temperature (overview here, data here).

[Image: ACORN-SAT overview]

It’s a daily dataset, which I like. And they seem to have learned something from Anthony Watts and the Surfacestations project: they have photos and descriptions and metadata for each individual station. Plus the data is well error-checked and vetted. The site says:

Expert review

All scientific work at the Bureau is subject to expert peer review. Recognising public interest in ACORN-SAT as the basis for climate change analysis, the Bureau initiated an additional international peer review of its processes and methodologies.

A panel of world-leading experts convened in Melbourne in 2011 to review the methods used in developing ACORN-SAT. It ranked the Bureau’s procedures and data analysis as amongst the best in the world.

and

Methods and development

Creating a modern homogenised Australian temperature record requires extensive scientific knowledge – such as understanding how changes in technology and station moves affect data consistency over time.

The Bureau of Meteorology’s climate data experts have carefully analysed the digitised data to create a consistent – or homogeneous – record of daily temperatures over the last 100 years.

As a result, I was stoked to find the collection of temperature records. So I wrote an R program and downloaded the data so I could investigate it. But when I had just gotten all the data downloaded and started my investigation, in the finest climate science tradition, everything suddenly went pear-shaped.
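For the curious, the download step amounts to something like the following sketch in R. The base URL and file-naming pattern are stand-ins I made up for illustration; check the BoM's ACORN-SAT data page for the actual layout:

```r
# Sketch of the download step only. The base URL and file-name pattern below
# are hypothetical stand-ins, not necessarily the BoM's actual layout.
base_url <- "http://www.bom.gov.au/path/to/acorn-sat"    # hypothetical
stations <- read.csv("acorn_station_list.csv")           # assumed columns: id, name
dir.create("data", showWarnings = FALSE)

for (id in stations$id) {
  for (kind in c("max", "min")) {
    src  <- sprintf("%s/acorn.sat.%s.%s.daily.txt", base_url, kind, id)
    dest <- sprintf("data/%s.%s.txt", id, kind)
    if (!file.exists(dest)) download.file(src, dest, quiet = TRUE)
  }
}
```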

What happened was that while researching the ACORN-SAT dataset, I chanced across a website with a post from July 2012, about four months after the ACORN-SAT dataset was released. The author made the surprising claim that on a number of days in various records in the ACORN-SAT dataset, the minimum temperature for the day was HIGHER than the maximum temperature for the day … oooogh. Not pretty, no.

Well, I figured that new datasets have teething problems, and since this post was from almost a year ago and was from just after the release of the dataset, I reckoned that the issue must’ve been fixed …

… but then I came to my senses, and I remembered that this was the Australian Bureau of Meteorology (BoM), and I knew I’d be a fool not to check. Their reputation is not sterling; in fact, it is pewter … so I wrote a program to search through all the stations to find all of the days with that particular error. Here’s what I found:

Out of the 112 ACORN-SAT stations, no fewer than 69 have at least one day in the record with a minimum temperature greater than the maximum temperature for the same day. In the entire dataset, there are 917 days where the min exceeds the max temperature …
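The check itself is trivial. Here is a minimal sketch of it in R, assuming (purely for illustration) that each station's daily max and min series have been merged into one CSV per station with columns date, min, and max:

```r
# Count days where the recorded minimum exceeds the recorded maximum.
# Assumes one CSV per station with columns date, min, max (hypothetical layout).
files <- list.files("data", pattern = "\\.csv$", full.names = TRUE)

bad_days <- sapply(files, function(f) {
  d <- read.csv(f)
  sum(d$min > d$max, na.rm = TRUE)
})

sum(bad_days > 0)  # stations with at least one bad day (69, per the text)
sum(bad_days)      # total bad days (917), roughly 0.02% of ~4 million records
```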

I absolutely hate findings like this. By itself the finding likely makes almost no difference for most applications. These are daily datasets, with each station having around 100 years of data at 365 days per year, so the whole dataset has about 4 million records, and the 917 errors are 0.02% of the data … but it means that I simply can’t trust the results when I use the data. It means whoever put the dataset out there didn’t do their homework.

And sadly, that means that we don’t know what else they might not have done.

Once again, the issue is not that the ACORN-SAT dataset had these problems. All new datasets have things wrong with them.

The issue is that the authors and curators of the dataset have abdicated their responsibilities. They have had a year to fix this most simple of all the possible problems, and near as I can tell, they’ve done nothing about it. They’re not paying attention, so we don’t know whether their data is valid or not. Bad Australians, no Vegemite for them …

I must confess … this kind of shabby, “phone it in” climate science is getting kinda old …

w.

THE RESULTS

Station, bad days in record (days where the min. temperature exceeds the max. temperature)

Adelaide, 1

Albany, 2

Alice Springs, 36

Birdsville, 1

Bourke, 12

Burketown, 6

Cabramurra, 212

Cairns, 2

Canberra, 4

Cape Borda, 4

Cape Leeuwin, 2

Cape Otway Lighthouse, 63

Charleville, 30

Charters Towers, 8

Dubbo, 8

Esperance, 1

Eucla, 5

Forrest, 1

Gabo Island, 1

Gayndah, 3

Georgetown, 15

Giles, 3

Grove, 1

Halls Creek, 21

Hobart, 7

Inverell, 11

Kalgoorlie-Boulder, 11

Kalumburu, 1

Katanning, 1

Kerang, 1

Kyancutta, 2

Larapuna (Eddystone Point), 4

Longreach, 24

Low Head, 39

Mackay, 61

Marble Bar, 11

Marree, 2

Meekatharra, 12

Melbourne Regional Office, 7

Merredin, 1

Mildura, 1

Miles, 5

Morawa, 7

Moree, 3

Mount Gambier, 12

Nhill, 4

Normanton, 3

Nowra, 2

Orbost, 48

Palmerville, 1

Port Hedland, 2

Port Lincoln, 8

Rabbit Flat, 3

Richmond (NSW), 1

Richmond (Qld), 9

Robe, 2

St George, 2

Sydney, 12

Tarcoola, 4

Tennant Creek, 40

Thargomindah, 5

Tibooburra, 15

Wagga Wagga, 1

Walgett, 3

Wilcannia, 1

Wilsons Promontory, 79

Wittenoom, 4

Wyalong, 2

Yamba, 1

Editor
June 29, 2013 6:00 am

Thanks, Willis.

janama
June 29, 2013 6:04 am

Nick Stokes says: “They can’t re-do the readings.”
But they can adjust them. They can close stations that don’t agree with their agenda and only use those that do.
Such is the bias built into the current BoM. It’s disgraceful.
Lismore Centre Street has an unblemished record going back to 1907 – they closed it in 2003.
http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_display_type=dataGraph&p_stn_num=058037&p_nccObsCode=36&p_month=13
It’s pretty clear why they don’t use it in their current records.
Similarly with Casino:
http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_display_type=dataGraph&p_stn_num=058063&p_nccObsCode=36&p_month=13

Steve Keohane
June 29, 2013 6:15 am

Thanks Willis.
tokyoboy says: June 28, 2013 at 10:54 pm
Since Australia is in the southern hemisphere, occasionally things can be upside down?

I was sure Nick S. would chime in with that (he does), but you beat him to it.

Bob Layson
June 29, 2013 6:16 am

For Britain’s recent run of weather might I suggest ‘sullen summer’.

RockyRoad
June 29, 2013 6:30 am

The Pompous Git says:
June 29, 2013 at 5:37 am


So how would you suggest we go back and redo the readings? Enquiring minds…

Like they do in professional science–by marking such readings with great big asterisks and indicating the problem in a companion comment. Leaving them unmarked invites the assumption that nothing is amiss, when indeed there is.
Science is more than numbers–it’s narrative, too.
Then the most accurate accounting of the dataset would be one in which such readings are omitted. An “educated adjustment” would always introduce error. How much and why? Again–another narrative would be in order.

June 29, 2013 6:31 am

Kevin Rudd and Julia Gillard: two politicians who destroyed each other and their party over implementing the carbon tax.

Patrick
June 29, 2013 6:33 am

“Steve Keohane says:
June 29, 2013 at 6:15 am
tokyoboy says: June 28, 2013 at 10:54 pm
Since Australia is in the southern hemisphere, occasionally things can be upside down?”
No, he’s got it wrong: from here in Aus, those in the northern hemisphere are upside down, and their logic is inverse. It would be so much easier if this rock was a 2 dimensional flat thing, black body at that (lol)! *sigh* Rather than this sphere thing we live on! You win again, gravity! – Zapp Brannigan.

June 29, 2013 6:36 am

Patrick says:
June 29, 2013 at 5:07 am
“Nick Stokes says:
June 29, 2013 at 4:20 am”
You do not need to be a “scientist” to read a thermometer, wind, pressure or any other kind of gauge device what-have-you, that has some form of visible indicator (Like a speedometer) of what the current state is for that particular instrument. And it’s completely ridiculous to suggest otherwise!

The standard equipment used in Stevenson screens to measure temperature was the max-min thermometer: you don’t read the current temperature, you read the max and min since it was last read. Hence if the standard measurement time was 9 am local, the max would probably be from the previous day and the min from the current day. Apparently the practice was to read at 9 am and assign the max to the previous day. If that’s what was done in the past and how it was recorded in the ledgers, then that’s the data you have to live with when doing the analysis.
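A toy illustration of how that practice can leave a day's minimum above its maximum (all numbers invented): suppose Monday is hot, and a cold front arrives just after the 9 am Tuesday reading.

```r
# Invented numbers. The max from each 9 am read goes to the PREVIOUS day,
# the min to the current day. A cold front arrives just after 9 am Tuesday.
#
# Tue 9 am read: max 38.0 (Mon afternoon) -> Monday;  min 26.0 (warm Mon night) -> Tuesday
# Wed 9 am read: max 18.0 (post-front)    -> Tuesday; min  8.0 (cold Tue night) -> Wednesday
record <- data.frame(day = c("Monday", "Tuesday"),
                     max = c(38.0, 18.0),
                     min = c(24.0, 26.0))
record$min > record$max  # FALSE TRUE: Tuesday's min (26.0) exceeds its max (18.0)
```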

Patrick
June 29, 2013 6:57 am

“Phil. says:
June 29, 2013 at 6:36 am”
Does one need a PhD (Or be a “scientist”) to do that? No!

Chris @NJSnowFan
June 29, 2013 7:27 am

Anthony,
I think it is time to start an independent temperature/weather spotter program using the same digital devices, exclusively from wattsupwiththat.
It could become the largest independent automatic temperature recording network, and could start with one state or the whole USA.
We’d need to find a manufacturer that can make a weather station that automatically sends wireless readings from each location to a network you create. Individuals could purchase the units through wattsupwiththat and hook them up at their home or workplace, in a location that is not near pavement or air-conditioning/heating units (or can allow some). Pictures of the location of each weather station sensor must be recorded also.
Sound good?
Have a good weekend

Don K
June 29, 2013 7:40 am

Obviously if a site has 100 years of data, a lot of the data was manually recorded initially. Then transcribed. Maybe multiple times. Lots of opportunity for transcription and/or transposition errors (entering numbers in the wrong column). I once worked for a while on a project that involved OCR of handwritten data and printed data in a variety of typefaces. It is astonishing (well, it astonished me anyway) how ambiguous a lot of numbers can be when smudges, drop-outs, fading, etc. occur, as they often do, in the raw data. Would you believe that 5s and 6s can be indistinguishable in some common typefaces if a couple of pixels drop out during imaging?
Anyway, it might, and I emphasize MIGHT, be possible to improve the data quality by comparing values to a five or seven day moving average, and simply rejecting any points that are too far from those averages. But there are some types of errors that even that won’t catch, e.g. Tom, who recorded the data every third week, consistently reversed the min and max fields; or Jane, who did the recordings in the summers of 1934, 1935 and 1937, wrote 1s, 7s and 4s that were indistinguishable.
Or the data may simply be useless for fine distinctions. That might, I suspect, be the case with a lot of historical climate data.
And of course it is possible that, as several commenters suggest, the data may have been “adjusted” so much that it is unusable for any purpose.
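For what it's worth, a minimal sketch of the moving-average screen Don K describes, in R; the window, the threshold, and the test data are all invented for illustration:

```r
library(zoo)  # provides rollapply()

# Flag values that sit too far from a centred 7-day moving average.
# The window, the 10-degree threshold, and the data are invented.
screen_outliers <- function(temps, window = 7, threshold = 10) {
  avg <- rollapply(temps, window, mean, na.rm = TRUE, fill = NA, align = "center")
  ifelse(!is.na(avg) & abs(temps - avg) > threshold, NA, temps)
}

x <- c(21, 23, 22, 24, 58, 23, 22, 21, 24)  # a 58 among low-20s readings
screen_outliers(x)                          # the 58 comes back as NA
```

As Don K notes, a screen like this catches isolated blunders but not systematic ones, such as consistently swapped min and max fields.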

janama
June 29, 2013 8:05 am

DonK – Willis posted
“A panel of world-leading experts convened in Melbourne in 2011 to review the methods used in developing ACORN-SAT. It ranked the Bureau’s procedures and data analysis as amongst the best in the world.”
and this
“The Bureau of Meteorology’s climate data experts have carefully analysed the digitised data to create a consistent – or homogeneous – record of daily temperatures over the last 100 years.”

June 29, 2013 9:07 am

In principle, which situation is worse for good science?
A) have a dataset that contains errors
B) have a dataset where the errors are “fixed” and the original problems unseen?
Data is data.
If the data has errors, we do not decrease uncertainty by adjusting the data.
I argue that the ideal situation is to have
A) the dataset that shows the errors, and
C) a dataset that shows the fixes and how they were fixed.
The key is to realize that the uncertainty in (C) is greater than that of (A).
(A) has an error component.
(C) is not the subtraction of error from (A),
but the ADDITION of corrections which themselves have error.
Granted the corrections are highly correlated with the hypothesized error in (A). If you track down the source of each error to root cause and can reliably correct the error, such as a clear transposition of keypunched data entry, you can indeed reduce error in (C). But without an autopsy on each error, realistically the total error in (C) should be treated as greater than the error in (A).

kadaka (KD Knoebel)
June 29, 2013 9:18 am

From jeremyp99 on June 29, 2013 at 4:49 am:

Meta is a prefix used in English (and other Greek-owing languages) to indicate a concept which is an abstraction from another concept, used to complete or add to it.
Metadata is hence data about data

And metaphysics is physics about physics. With metaphysics completing or adding to physics.
I think the more practical working definition is “it’s not that but related to it”.
Thus metaphysics is not physics but (presumably) related to physics, in that case generally as “alternate explanations”.
And metadata is not the data but is related to it, usually it’s the when, where, and how the data was obtained.

gnomish
June 29, 2013 9:18 am

what do the terms minimum and maximum mean?
are they defined by time of observation?
or are they defined by being the lowest and highest in any 24 hr period?

Mike M
June 29, 2013 9:25 am

ABM per Ross: “Accumulated data can affect statistics such as the Date of the Highest Temperature, since the exact date of occurrence is unknown.”
Real scientists would tag fudged entries.

Chris4692
June 29, 2013 9:59 am

After in excess of 100 years of a procedure it’s best to maintain that procedure so the data are consistent throughout the set. If the procedure is changed, that is a different data set.

Kelvin Vaughan
June 29, 2013 10:00 am

Note all of the higher temps (above 30C) have been adjusted downwards, some by 0.9C.
Temps below 30C have been adjusted upwards by 0.1C.
Can anyone see any reason/logic for this?
It’s to adjust for parallax errors. On hot days the person reading the thermometer had drunk a lot of Lager and was on his knees. On cold days he was standing up in his padded high heeled boots.

Gonzo
June 29, 2013 10:28 am

Tamino has a post “like candy from a baby” about the “angry” summer. Disparaging both Willis and Bob Tisdale. To my amazement he posted my response and responded as follows:
Gonzo: [Drawing any conclusions from the Oz land temp record is like trying to determine ocean heat content pre-ARGO. Sparse data and in many cases bad data, i.e. many of the older records were recorded in whole degrees before they converted to Celsius in 1972. Save for a few quality stations, which amount to a regional effect, the Ozzie data should be taken with a serious dose of doubt. Much ado about nothing. BTW how many state heat records were broken during the “angry” summer? Oh none!]
[Response: Gonzo proves the point. He doesn’t like what the thermometer says, so his comment amounts to nothing more than calling the thermometer a liar. That’s what those in denial have to resort to. But wait, there’s more!]
BTW how many state heat records were broken during the “angry” summer? Oh none!
Tamino: [Response: Bonus points — Gonzo adds “moving the goal posts” to denying the facts. The relevant fact is that last summer, and especially January, was scorching hot in Australia — not that some state broke a heat record. Gonzo hopes that by pointing to one factoid which isn’t a record-breaker he can distract everyone from such facts as:
During this period, Australia registered the warmest September–March on record, the hottest summer on record, the hottest month on record and the hottest day on record.
A record was also set for the longest national scale heatwave.
Does Gonzo actually believe that just because no state heat record was broken, that will magically transform the hottest summer on record nationwide, the hottest month on record, the hottest day on record, and the longest national-scale heat wave, into “nothing at all unusual about the 2012 summer”? Not too bright.
Perhaps most important for those of us who are interested in the truth, Gonzo has denied the reality of Australia’s scorching hot summer in order to distract us all from the fact that summers like that are now more likely than they used to be, by a lot. He must distract everyone from that fact, because that’s the real point.]
To which I responded back with this:
You don’t have the stones to post this but here goes anyway. cheers
Your response is puzzling to me as I would call you the Mr T of cherry picking…..HEY FOOOO YOU’RE CHERRY PICKING!!!
Who “denied” Oz is hot? Not me. You do know Oz is the hottest continent, yes? Have you been to Oz? I have, many times, surfing both east and west coasts. WA is efffn hot, always has been, i.e. it gets hot there.
Tamino: [During this period, Australia registered the warmest September–March on record, the hottest summer on record, the hottest month on record and the hottest day on record.] The hottest day on record? Really? You sure about that, mate?
“Whilst it is probable that remote areas of the Australian desert have seen extreme temperatures that have gone unrecorded, the outback Queensland town of Cloncurry originally held the record for the highest known temperature in the shade, at 53.1 °C (127.5 °F) on 16 January 1889. Cloncurry is a small town in northwest Queensland, Australia, about 770 km west of Townsville.
The Cloncurry record was later removed from Australian records because it was measured using unsuitable equipment (that is, not in a Stevenson screen, which only became widespread in Australian usage after about 1910). According to the Australian Bureau of Meteorology, the current heat record is held by Oodnadatta, South Australia, 50.7 degrees Celsius, occurring on 2 January 1960.
The world heat record for consecutive days goes to Marble Bar in Western Australia, which recorded maximum temperatures equaling or over 37.8°C on 161 consecutive days, between …”
Concerning Cloncurry, why would the BoM call a thermometer a liar?
Of course he only posted my response as follows:
[You don’t have the stones to post this but here goes anyway. cheers] Followed up with disparaging remarks………
Par for the course over there: when in doubt, attack the messenger.
REPLY: there’s no point in paying attention to Grant Foster aka “Tamino”; his rants are irrelevant – Anthony

Philip Bradley
June 29, 2013 10:45 am

Kalgoorlie-Boulder, 11
I am surprised by how small this number is. Summer temperatures in Kalgoorlie are highly dependent on cloud cover. It’s fairly common to have consecutive days whose daytime temperatures differ by well over 20C.
Nick partly explained the problem. Observations of a min/max thermometer are taken during the day, but the BoM then converts this dataset to an international-standard midnight-to-midnight day; hence the assumption that the maximum was from the previous day.
Were the raw data used without shifting them to a midnight-to-midnight day, there wouldn’t be a problem, although there may be a time-of-observation bias, a largely separate issue.
In summary, the original data was for consecutive 24 hour periods (more or less) and is perfectly satisfactory for calculating long term temperature trends across Australia.
The problem results from converting the data to the M-to-M standard, which results in some minimum temperatures exceeding maximums. This looks strange but shouldn’t affect long term trends.
The real issue is the multiple adjustments in ACORN, which are just an invitation to confirmation bias.
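If that is the mechanism, the artifact should largely vanish when each day's minimum is paired with the previous day's re-dated maximum, since both came from the same 9 am to 9 am observation window. A sketch, with invented numbers:

```r
# Pair each day's min with the previous day's (re-dated) max; both came from
# the same 9am-to-9am observation window. Toy data, invented numbers.
d <- data.frame(max = c(38.0, 18.0, 20.0),   # re-dated daily maxima
                min = c(24.0, 26.0,  8.0))   # daily minima
d$max_same_window <- c(NA, head(d$max, -1))
sum(d$min > d$max, na.rm = TRUE)              # 1 bad pair under midnight-to-midnight dating
sum(d$min > d$max_same_window, na.rm = TRUE)  # 0: the artifact vanishes
```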

kadaka (KD Knoebel)
June 29, 2013 11:21 am

gnomish said on June 29, 2013 at 9:18 am:

what do the terms minimum and maximum mean?
are they defined by time of observation?
or are they defined by being the lowest and highest in any 24 hr period?

With daily reporting, a 24 hour period is assumed, all measuring occurring on a single day, although the daily measurements were rarely taken exactly at midnight, leading to adjustments.
Minimum is the coldest measured temperature during the measured period. This is normally expected in the early morning just before sunrise, after the ground has been cooling off all night. But with weather fronts moving in, and the evaporative cooling of wind and rain, the minimum could happen anytime.
Maximum is the hottest measured temperature. Presumably that happens in the afternoon, with the Sun warming up the ground. Which is a problem. If you take your daily measurements at 9AM, you must then assume the highest reading was actually from yesterday afternoon, so you mark it down as yesterday’s high. But if a cold front had passed through and yesterday was bitterly cold from morning to night, the “high” you wrote down for yesterday really came from the morning you took the measurement.
This gets adjusted later as quality control. The weather station in the deep valley reports a 23°C high on a sunny Tuesday, but stations within 100km on flat ground reported a similar high on a rainy Monday; therefore the valley station is presumed to have reported it wrong, and its high gets moved to Monday so the similar highs happened at the same time.
Since ideally all of the measurements should go from midnight to midnight, this brings about the Time of Observation (TOBS) adjustments. It is endlessly argued exactly how important the TOBS adjustments are for an accurate record, with the “usual suspects” insistent on how absolutely necessary they are; they will promptly point to some analysis one of them did that proves it, as the temperature trends are much too low without it.
(I’m still waiting for a coherent brief explanation of what TOBS really is and how it’s calculated. As expected, normally we’re told to read some paper by Hansen or someone else who presides over a screwed-up temperature dataset.)
Of course for much of the world’s “raw temperatures” we’re lucky to have numbers reported for a day at all, let alone the observation times, so TOBS is basically meaningless except perhaps for countries like the US.
Then comes the real magic. The twenty minutes when there was a break in the clouds and the Sun shone directly on the thermometer shelter/housing, will be averaged together with the five hours of hurricane-strength winds that plastered 10cm of packed snow on the side of the shelter/housing, to determine the average temperature that day was sufficiently high to cause a significant amount of snow melt. Which is in agreement with the projections of the climate models thus is confirmation of (C)AGW theory.

Billy Liar
June 29, 2013 11:25 am

One of the BOM ‘quality control’ measures is bizarre. Page 32 of their manual:
http://cawcr.gov.au/publications/technicalreports/CTR_049.pdf
shows an example where a fall in temperature just before dawn is eradicated for ‘quality control’ because it is a ‘spike’ in hourly readings of more than 4°C. They ignore changes of >4°C as long as they are not followed by a change of >4°C of opposite sign.
In my view, in the example they show, they threw away data possibly showing a change of state of water in the atmosphere, or a transient change in wind direction, with the net result that their ‘quality controlled’ output raises both the minimum and the average temperature for that day.
They are effectively assuming that there was a transient error in the electronic thermometer recording the data which occurred at that time, but that afterwards it continued to operate correctly and no maintenance was carried out.
This is astonishingly bad. No wonder we get global warming when clowns like these operate on the data.
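For concreteness, the spike test Billy describes reads to me as something like the following sketch in R; this is one interpretation of the manual's wording, not the BoM's actual code:

```r
# One reading of the rule: an hourly change of more than 4 C that is
# immediately followed by a change of more than 4 C of the opposite sign
# is treated as an instrument spike and blanked; a large change with no
# reversal is let through.
flag_spikes <- function(hourly, limit = 4) {
  d <- diff(hourly)
  spike <- c(FALSE,
             abs(head(d, -1)) > limit &
             abs(tail(d, -1)) > limit &
             sign(head(d, -1)) != sign(tail(d, -1)),
             FALSE)
  replace(hourly, spike, NA)
}

flag_spikes(c(15, 14, 9, 14, 15))  # the pre-dawn dip to 9 is removed as a "spike"
flag_spikes(c(15, 14, 9, 8, 7))    # a sustained fall passes the test
```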