Guest Post by Willis Eschenbach
As Anthony discussed here, some Australian climate scientists think that there was an “angry summer” in 2012. Inspired by the necromantic incantations in support of the Aussie claims coming from the irrepressible Racehorse Nick Stokes, I went to take a look at the Australian temperature data. I found out that in response to a host of complaints about their prior work, in March of 2012 the Australian Bureau of Meteorology (BoM) released a new temperature database called ACORN-SAT. This clumsy acronym stands for the Australian Climate Observations Reference Network – Surface Air Temperature (overview here, data here).
It’s a daily dataset, which I like. And they seem to have learned something from Anthony Watts and the Surfacestations project: they have photos and descriptions and metadata for each individual station. Plus the data is well error-checked and vetted. The site says:
Expert review
All scientific work at the Bureau is subject to expert peer review. Recognising public interest in ACORN-SAT as the basis for climate change analysis, the Bureau initiated an additional international peer review of its processes and methodologies.
A panel of world-leading experts convened in Melbourne in 2011 to review the methods used in developing ACORN-SAT. It ranked the Bureau’s procedures and data analysis as amongst the best in the world.
and
Methods and development
Creating a modern homogenised Australian temperature record requires extensive scientific knowledge – such as understanding how changes in technology and station moves affect data consistency over time.
The Bureau of Meteorology’s climate data experts have carefully analysed the digitised data to create a consistent – or homogeneous – record of daily temperatures over the last 100 years.
As a result, I was stoked to find the collection of temperature records. So I wrote an R program and downloaded the data so I could investigate it. But just when I had gotten all the data downloaded and started my investigation, in the finest climate science tradition, everything suddenly went pear-shaped.
What happened was that while researching the ACORN-SAT dataset, I chanced across a website with a post from July 2012, about four months after the ACORN-SAT dataset was released. The author made the surprising claim that on a number of days in various records in the ACORN-SAT dataset, the minimum temperature for the day was HIGHER than the maximum temperature for the day … oooogh. Not pretty, no.
Well, I figured that new datasets have teething problems, and since this post was from almost a year ago and was from just after the release of the dataset, I reckoned that the issue must’ve been fixed …
…
…
… but then I came to my senses, and I remembered that this was the Australian Bureau of Meteorology (BoM), and I knew I’d be a fool not to check. Their reputation is not sterling, in fact it is pewter … so I wrote a program to search through all the stations to find all of the days with that particular error. Here’s what I found:
Out of the 112 ACORN-SAT stations, no less than 69 of them have at least one day in the record with a minimum temperature greater than the maximum temperature for the same day. In the entire dataset, there are 917 days where the min exceeds the max temperature …
I absolutely hate findings like this. By itself the finding likely makes almost no difference for most applications. These are daily datasets, with each station having around 100 years of data at 365 days per year, so the whole dataset has about 4 million records, and the 917 errors are 0.02% of the data … but it means that I simply can’t trust the results when I use the data. It means whoever put the dataset out there didn’t do their homework.
And sadly, that means that we don’t know what else they might not have done.
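The check itself is trivial to replicate. Here is a minimal sketch in Python (Willis used R; the dictionary layout below is illustrative only, not the actual ACORN-SAT file format, which stores maxima and minima in separate per-station files):

```python
# A minimal sketch of the min-vs-max consistency check described above.
# The record layout here is illustrative, not the actual ACORN-SAT format:
# each station maps a date string to a (tmin, tmax) pair in deg C.

def find_bad_days(station_records):
    """Return, per station, the dates where the daily minimum exceeds the maximum."""
    bad = {}
    for station, days in station_records.items():
        flagged = [date for date, (tmin, tmax) in sorted(days.items())
                   if tmin is not None and tmax is not None and tmin > tmax]
        if flagged:
            bad[station] = flagged
    return bad

# Tiny synthetic example: one impossible day at "Cabramurra".
records = {
    "Cabramurra": {"1962-07-01": (-3.1, 4.2), "1962-07-02": (2.5, 1.9)},
    "Sydney":     {"1962-07-01": (8.0, 16.5)},
}
print(find_bad_days(records))  # {'Cabramurra': ['1962-07-02']}
```

Run over all 112 station files, a loop like this is all it takes to produce the table of bad days below.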
Once again, the issue is not that the ACORN-SAT dataset had these problems. All new datasets have things wrong with them.
The issue is that the authors and curators of the dataset have abdicated their responsibilities. They have had a year to fix this most simple of all the possible problems, and near as I can tell, they’ve done nothing about it. They’re not paying attention, so we don’t know whether their data is valid or not. Bad Australians, no Vegemite for them …
I must confess … this kind of shabby, “phone it in” climate science is getting kinda old …
w.
THE RESULTS
Station, Bad days in record (w/ min. temperature exceeding the max. temp)
Adelaide, 1
Albany, 2
Alice Springs, 36
Birdsville, 1
Bourke, 12
Burketown, 6
Cabramurra, 212
Cairns, 2
Canberra, 4
Cape Borda, 4
Cape Leeuwin, 2
Cape Otway Lighthouse, 63
Charleville, 30
Charters Towers, 8
Dubbo, 8
Esperance, 1
Eucla, 5
Forrest, 1
Gabo Island, 1
Gayndah, 3
Georgetown, 15
Giles, 3
Grove, 1
Halls Creek, 21
Hobart, 7
Inverell, 11
Kalgoorlie-Boulder, 11
Kalumburu, 1
Katanning, 1
Kerang, 1
Kyancutta, 2
Larapuna (Eddystone Point), 4
Longreach, 24
Low Head, 39
Mackay, 61
Marble Bar, 11
Marree, 2
Meekatharra, 12
Melbourne Regional Office, 7
Merredin, 1
Mildura, 1
Miles, 5
Morawa, 7
Moree, 3
Mount Gambier, 12
Nhill, 4
Normanton, 3
Nowra, 2
Orbost, 48
Palmerville, 1
Port Hedland, 2
Port Lincoln, 8
Rabbit Flat, 3
Richmond (NSW), 1
Richmond (Qld), 9
Robe, 2
St George, 2
Sydney, 12
Tarcoola, 4
Tennant Creek, 40
Thargomindah, 5
Tibooburra, 15
Wagga Wagga, 1
Walgett, 3
Wilcannia, 1
Wilsons Promontory, 79
Wittenoom, 4
Wyalong, 2
Yamba, 1

It looks like the folks at Cape Otway and Wilsons Promontory have been having too many gin-and-tonics in those quiet evenings.
Willis, I wish you’d just use the data as you originally intended and skip the whole “lows higher than the highs” thing. You’re majoring on the minors here and fighting the wrong battle. There’s a clear explanation for the phenomenon you found: reading a high-low thermometer set once a day. It’s not phone-it-in or shabby, and if someone attempted to correct it, they’d be rightly criticized for massaging the instrumental record.
The rule of thumb they use for classifying which day the high goes with and which day the low goes with is reasonable, based on how sunlight affects things. At least here in the moderate area where I live, there’s approximately a 15-degree F swing in temperature each day from the low before dawn to the high around 3:00 PM. (On top of that, add in weather and seasonal effects, and we can end up with as little as a 4-degree high-low swing or a little more than a 30-degree swing.)
So a day in which a front comes through and raises the temperature 15+ degrees F (or whatever it takes in a location to overcome the sun’s effect, plus the weather) overnight may cause a swap of the high and lows for a pair of days. As you note, this is extremely rare.
This kind of issue will show up with any regime that uses a high-low thermometer that’s read at any point except exactly midnight. It is a weakness of the historical (and perhaps present) surface station network, for sure, and is worth pointing out, but it doesn’t indicate that record keeping is sloppy, measurements are wrong, or anything else that is worth making a big deal over.
The measurements are apparently correct, just occasionally attributed one day off. I say “apparently” because there could be problems with the measurements. But that’s not what you’re describing.
You made this kind of mistake back with your “linear trend” fiasco, where you had a good main argument but moved a minor argument to the top of your posting and spent so much time defending a misunderstanding on your part that you let your main argument fall off the table into obscurity. Please, just let this high-low thing go and focus on actually using the data to actually show Nick is wrong on the original question.
Do what you originally set out to do, and it’ll have much more impact than trying to snipe around the edges.
Billy liar, that document is an interesting read. Makes you realize how many adjustments are being made and how many are potentially questionable.
This caught my attention,
Figure 10 shows an example of data flagged by this check, a minimum temperature of 20.6°C at Giles (25°S 128°E) on 11 January 1988. The lowest three-hourly temperatures at the site were 26.2°C at 09:00 the previous day, and 27.5°C at 06:00, while other sites in the broader region mostly exceeded 27°C. (As there are no sites to the north or south of Giles within 500 km, the value affects the analysis over a large area). Such differences, if real, would almost certainly be associated with a thunderstorm at Giles, but no significant rain was recorded there, suggesting that the value on that day was suspect.
Their thunderstorm assumption is wrong. I can speak from personal experience of the central desert that non-thunderstorm-forming areas of cloud occur on a regular basis and cause a significant drop in temperature over a short period of time.
Willis,
“but the above-cited public records for the “112 locations used in long-term climate monitoring” STOP AT THE END OF 2012. So they only contain the first month of the summer, not the other two. So those jerkwagons are claiming a record, and still haven’t released the data they claim it is based on.”
The records don’t stop at the end of 2012. You can look up each of them to the most recent half-hour. You can get the daily records for each month. Bourke in January? Here it is. They may not yet be in a convenient table for you, but it’s all there.
Philip Bradley says: June 29, 2013 at 12:08 pm
“Billy liar, that document is an interesting read. Makes you realize how many adjustments are being made and how many are potentially questionable.”
These aren’t adjustments. They have QC procedures designed to automatically flag suspect individual numbers in their huge database. There will always be some that are borderline. They are describing the most difficult cases to decide, and how they go about it.
gaelan clark says: June 29, 2013 at 4:46 am
Sorry but I have two questions….
First (Nick)….why does it take an American, so disgusted with your “angry summer” crapola and so far removed from your entire modus operandi, to find inconsistencies, irregularities and just plain weirdisms within your very own network…which WAS supposed to be sterling?
Second (anyone)….this is the 21st century, not 1869; we can automate temperature readings and take measurements without human eyes…why don’t we?
Well, I could ask why does it take an American to tell us that we didn’t have a hot summer – satellites prove it, they weren’t hot. None of this Acorn nitpicking has anything to do with whether the summer was hot. Historic records anywhere in the world have inconsistencies etc. You have to learn as best you can from them.
But of course measurements are now automatic. Here is just one Australian State. You can check records every half hour. No eyes involved.
kadaka (KD Knoebel) says:
June 29, 2013 at 9:18 am
From jeremyp99 on June 29, 2013 at 4:49 am:
Meta is a prefix used in English (and other Greek-derived languages) to indicate a concept which is an abstraction from another concept, used to complete or add to the latter.
Metadata is hence data about data
And metaphysics is physics about physics. With metaphysics completing or adding to physics.
=================================================================
Way back when I was studying philosophy, I learnt the origin of ‘metaphysics’ as being ‘beyond physics’. It comes from Aristotle’s works; his writings on the subject now known as part of philosophy came after those on physical science. (‘meta’ being beyond or after in Attic Greek)
‘meta’ has changed meaning in the last 40 or so years to its current usage
To all you temperature adjusters out there, including the ABOM(inables): If the raw data over 100 years can’t show a definitive signal of CAGW, and one feels the need for adjustment to make it show itself, then isn’t this an admission (or fear) in itself that the signal must be diminishingly small? Is there anyone here who disputes that the old end of the record has been adjusted downward and the recent end shifted upward, even if only a few tenths of a degree?
If the problem facing us is that we could have 4 to 6C increase by 2100 (I know the IPCC has had to trim this to half in the last year or two, but the practice of adjustments was introduced when 4-6 was “95% certain”), then all that would be needed would be a few hundred thermometers with raw readings distributed around the world in non-urban areas to unequivocally detect such a strong AGW signal (we may still have to determine that it is “A” GW, but at least we wouldn’t be trying to squeeze that out of 0.7C a century).
The fact that a century of warming has only been 0.7C, and this with basically raw data and some natural variation (remember, most of the AGW has occurred since 1950) underscores the point that the adjustments to the old record are unnecessary. Could the old thermometer readers have been out several degrees in their reading of a temp? Please, all agree, “I don’t think so”. Could the thermometers themselves have been so crude as to have been inaccurate by several degrees (all in one direction)? That we have only 0.7C difference is virtually proof in itself that this is not so. Heck, if you want to adjust the data, round it off to the nearest degree C. What is wrong with this? We are only interested in a change of 2-3 degrees in a century into the future. If we don’t have a degree or two by 2050 with the raw data, then we are pretty safe (and remember at 2013 we are 25% of the way there). To emphasize for the unconvinced, would you measure sea-level changes with a micrometer if you were worried about changes in a century of a metre or more? Do you believe a mighty oak will grow from a homogenized ACORN?
Nick, any change to raw data is an adjustment to the data set. To call some changes Quality control is mere semantics.
As usual, you don’t address the substantive issue: that they are making changes to raw data based on a clearly wrong assumption. And note, Giles is probably the station that has the largest geographic effect on Australian temperatures.
Just as an agenda can be seen in the results of climate models, a similar agenda can be discovered in the changes/adjustments to raw data.
Even if human eyes aren’t used in taking the measurement or making the adjustments, an agenda or bias can be expected.
Philip Bradley says: June 29, 2013 at 1:06 pm
“Nick, any change to raw data is an adjustment to the data set. To call some changes Quality control is mere semantics.”
No, it isn’t semantics. They are at the stage of deciding what the data is. Data isn’t just what someone wrote on a page (and someone deciphered). Typos aren’t data. It isn’t a change until you figure out what you’ve got.
Gonzo
Another example – Mildura had a reading of 50.8C (recorded as 50.7C in BOM data) in Jan 1906 but this was downgraded to 48.3C based on the temp in Deniliquin.
janama
Totally agree. I have been checking, and making copies of, the Lismore (Centre St) record for some time and comparing the raw data with the adjustments they made on their HQ data site. It’s now called ‘Australian climate change site networks’ and Lismore has since disappeared.
Casino’s long-term manual w/s was closed recently. The data clearly showed that Casino had cooled over the past 20 years with only 4 years being above the yearly max average.
By the way, on Friday, 21st of June, Casino had its coldest max temp since daily records started in 1965 (12.7C). Maybe Lismore and a few others also did.
I don’t recall any mention of this in the press/TV. Imagine if it had been the hottest!
Nick
I go back to my post above re the totally illogical adjustments to Bourke for Jan 1939. ACORN is corrupted – and this is the data that they used for their ‘angry summer’. Unbelievable.
Absolutely shoddy work – worse even than their adjustments to raw data for their old HQ data site.
Can you honestly defend ACORN’s data when presented with the evidence?
The BoM document states,
In general, data that were classified as suspect after review were flagged and excluded from further analysis.
So in the example I originally gave, I’d assume the claimed ‘suspect’ data was in fact excluded from the dataset.
The example was from 1988 at a professionally manned station with presumably automatic temperature recording. What people ‘wrote down’ or potentially misread is irrelevant. And anyway the data are what was recorded, errors and all.
Wayne says:
June 29, 2013 at 12:03 pm
I’d like to use the data as originally intended … but they haven’t published it, as far as I know.
Regarding the “explanation”, I don’t care about the explanation. Whatever the circumstances and assumptions might have been, it’s an error.
You seem to think that they are somehow prohibited from fixing an error because they’d be “rightly criticized” … are you serious? Do you know how many times these guys have “adjusted” and otherwise changed the data, without any such obvious error?
Now, I don’t care how they fix it. They can throw out the bad data. Or they can flag it and leave it in. My point is that doing nothing to an admitted error, in a supposedly scientifically quality controlled dataset, does not give me confidence in their other actions.
w.
Nick Stokes says:
June 29, 2013 at 12:48 pm
No, it’s not there at all. That’s just the raw data. For the ACORN-SAT data, they claim that the 112 records in their survey have been subjected to additional scientific oversight and error-checking and quality control. So like the man said in Star Wars, “These are not the records you are looking for”—we’re looking for the ACORN-SAT records, which are made with extra science and special sauce. Your raw records? Sorry, not the same.
Which means that as usual …
You’re wrong.
And I’m sure that as usual, you’ll explain in very painful detail why you are 100% correct in 3 … 2 … 1 …
w.
Well, if the facts are only ‘facts’ – for whatever reason – the deductions can only be ‘deductions’ for the GIGO reason.
Pretty sad that this seems to be the case for Australia.
And 2013 summer in the UK – so far, I’d call it the sunken summer. At least tomorrow is going to be seriously HOT – possibly over 24C! [per Metcheck]! Look, we’re over 50 North, and the globe seems to be cooling.
Don’t like what that does to the growing season here.
Auto
What is meant by the “minimum” and “maximum” temperature?
We’ve had at least two days recently in Auckland, NZ where the temperature at midnight was higher than the temperature at 3pm (warm previous day, persistent cloud overnight and during the day, with a strong southerly during the day to cool things down). If the “minimum” is actually the “overnight low” and the maximum is the “afternoon high”, then you could end up with the minimum higher than the maximum.
The error is in trying to retrospectively apply the WMO temperature recording standard.
There are other issues related to this.
For example, I was surprised to learn this.
Daily summaries in SYNOP messages are based on measurements that occur between synoptic reporting times and often over a period less than 24 hours. For instance in Europe minimum temperatures are recorded over the first 12-hour period and maximum temperatures during the next 12-hour period. Measured in this way, the true daily minimum and maximum temperatures are often not reported because they occur outside those 12-hour periods.
http://www.wmo.int/pages/prog/gcos/aopcXVIII/6.3_daily_messages.pdf
Clearly, measuring min and max temperatures in this way will frequently give you minimums higher than maximums.
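The arithmetic behind that claim is easy to demonstrate. Here is a small Python sketch using the windowing described in the quoted WMO text (the cooling rate and window hours are illustrative assumptions, not real station data): on a day when the temperature falls steadily, the minimum taken over the first 12-hour window exceeds the maximum taken over the next 12-hour window.

```python
# Illustrative sketch of the SYNOP windowing effect described above.
# Assumed scenario (not real data): temperature cools steadily by 0.7 C/hour
# from 06:00. The "minimum" is taken over the first 12-hour window and the
# "maximum" over the next 12-hour window, as in the quoted text.

hourly = {h: 20.0 - 0.7 * (h - 6) for h in range(6, 30)}  # 06:00 to 06:00 next day

reported_min = min(hourly[h] for h in range(6, 18))    # first 12-hour window
reported_max = max(hourly[h] for h in range(18, 30))   # next 12-hour window

print(reported_min, reported_max)  # 12.3 11.6 – the "min" exceeds the "max"
```

Any monotonic fall across the window boundary produces the same inversion, which is a different mechanism from the once-a-day thermometer reset discussed elsewhere in this thread.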
Philip Bradley says: June 29, 2013 at 1:52 pm
“The example was from 1988 at a professionally manned station with presumably automatic temperature recording.”
Do you have evidence that it was professionally manned? It certainly wasn’t automatic – that came in June 1992.
But really – we’re talking about a single apparently deviant 3hr reading on a day in 1988, which ACORN gives as a particularly hard case to decide.
Ian George says: June 29, 2013 at 1:23 pm
Sorry, I misunderstood your earlier “not any more”. I don’t know the reason for the change at Bourke. It’s possible that they had more than one record and averaged. GHCN v2 had duplicate records for Bourke from 1953 onwards.
Willis Eschenbach says: June 29, 2013 at 2:34 pm
“And I’m sure that as usual, you’ll explain in very painful detail why you are 100% correct”
No pain. ACORN checked historic data – recent data, automatically recorded every half hour, doesn’t change. If you check Bourke ACORN for December 2012 (or any other recent month) vs general BoM, available to present, you’ll find they are identical.
I prefer to adjust raw data myself. After watching Hansen adjust temperature data I am absolutely certain that I would want raw temperature data if I were doing a correlation or a proof for some reason. But I would be very careful what adjustments I would use. Let me demonstrate with an example that you can do. Go to Google and input “temperature Washington DC”, select wunderground.com from the list and scroll down to Washington Weather Stations and review the 30 or 40 temperature readings. These are within 25 miles to 50 miles of DC. The temperatures range from 83F to 94F. I have done this same chore for where I live in all kinds of weather/time of day and the differential range is always similar to DC.
The measurement stations are MADIS, RAPIDFIRE and NORMAL. You can click on the name and get time graphs of the data. You can click on those stations with an > at the end and get a full range of weather information.
My point is that one can fool oneself with small adjustments. Looking at the big picture often yields better decisions. I would never do what BEST did, which is to make a mulligan stew by throwing all readings into a pot. I would carefully select perhaps 100 or fewer temperature records around the globe, examine them very carefully for suitability and work with those data sets.
I agree with Wayne, we got side tracked on this one. But thank you very much, Willis. You are a tremendous worker with great ability to analyze data and spot problems.
Nick
Check ACORN’s data for Bourke for Jan 1939 against the raw data – a downward adjustment of 0.36C for the month. The source was the Bourke PO. One explanation for why they were changed was that they were compared to neighbouring stations (e.g. Cobar, Walgett), but after checking these, there appears to be no correlation.
My whole point is, if these adjustments are applied to enough stations during the earlier records, it makes the whole ACORN dataset meaningless and not worthy of basing conclusions like this ‘angry summer’ on.
Raw data for Bourke Jan 1939 here.
http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_nccObsCode=122&p_display_type=dailyDataFile&p_startYear=1939&p_c=-461101351&p_stn_num=048013
17 days straight of over 40C. That’s a pretty ‘angry month’.
Giles has been manned since 1956, although only since 1972 by BoM staff. I’ll take your word for it not being automated until 1992.
The issue here is the data is the data. Transcription errors are something else. Trying to find errors in the data as recorded, decades later, just gives free rein to confirmation bias.
The example I gave is arguably an example of this, where an unusually low temperature is removed from the record for what seems to me to be faulty reasoning. The problem with confirmation bias is that the people doing it don’t know they are doing it, and it is devilishly hard to pin down after the fact.
Billy Liar
‘They are effectively assuming that there was a transient error in the electronic thermometer recording the data which occurred at that time but afterwards it continued to operate correctly and no maintenance was carried out.’
Reading what you were saying about invalidating lower temps around dawn, I’m reminded about when, on 18th Jan, 2013, Sydney had its highest temp of 45.8C.
It happened at approximately 2:54pm. I followed the AWS record that day and it was showing temps at 10 min intervals (usually it’s 30min).
At 2:49 the temp was 44.9C – the temp at 2:59 was 44.7C.
So in that 10min period, the temp jumped 0.9C and then dropped 1.1C. Obviously there was a ‘spike’. It appears that, as you say, there may have been a ‘transient error’ but, maybe because it was a high spike, it was not invalidated.
The Automatic Weather Station (weather.iinet) shows the temp only reaching 45.1C.
It would be interesting to find out from someone in the know what really happened that day.
I’ve been looking at “time of observation” issues for several years now, and I cannot come up with a reason why it would produce minimum temperatures higher than maximum temperatures, even with the TOBS causing the pairs of readings to be recorded for separate days, and even if days were missed (unless some very creative infilling methods were used).
For those unfamiliar with the issue, for a long time temperatures were taken with vertical mercury thermometers that had two effectively “ratcheted” markers. The maximum-temperature marker could be pushed up by the rising column of mercury, but would not fall if the mercury column fell. The minimum-temperature marker could fall with a falling column, but would not rise when the column rose.
Generally once per day, the observer would go out to the Stevenson box, record the settings of the maximum and minimum markers, then manually move them to the present height (temperature) of the column, which would allow them to be moved by the column over the next 24 hours.
If the reading were not taken at midnight, and it virtually never was, then the question could arise as to which calendar day the extreme really occurred on. For the 9am readings, the most reasonable assumption (barring other info) was that the maximum occurred the previous afternoon and the minimum occurred the present early morning. This is apparently what the ACORN records did.
But could this by itself explain the “inverted” readings? I don’t see how. Let’s say that on the 6th of the month, the observer notes a maximum of 20C and a minimum of 10C. The 20C max is recorded for the 5th and the 10C min is recorded for the 6th. The markers are reset to the temperature at the time of the reading, which MUST BE between 10C and 20C.
Let’s say a cold front is moving through that day, and the temperature in the next 24 hour period never again gets higher than that at the time of the reading. When the observer comes out on the morning of the 7th, the maximum temperature marker will always read at least 10C. To continue our example, he reads a maximum of 10C, which is recorded for the 6th, and a minimum of 5C, which is recorded for the 7th.
Let’s say that instead, the observer does not make any reading on the morning of the 7th, and his next reading (and resetting!) is on the morning of the 8th. His maximum reading, which is now for the previous 48-hour period, still must be at least 10C, even if the actual maximum temperature for the previous 24-hour period never got as high as 10C.
So to continue our example, our observer notes a 10C maximum and a 0C minimum. The 10C he records for the 6th, and the 0C he records for the 7th. Now he has a gap to fill in the records. It would be extremely doubtful, IMO, if he did not do some sort of interpolation, filling in a max for the 6th of around 15C to go between 20C for the 5th and 10C for the 7th. The same with minimum temperatures.
Fundamentally, though, I cannot construct a reasonable scenario in which either of these effects would lead to a minimum temperature greater than a maximum temperature.
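That intuition can be checked numerically. The sketch below (my own illustrative simulation, not anyone's station code) models the marker-and-reset procedure described above: temperatures wander randomly, the observer reads once a day at "9am", books the maximum to the previous day and the minimum to the current day, then resets both markers to the current temperature. Because the maximum booked to a day starts from the reset value, which can never be below the minimum read that same morning, the mechanism on its own never inverts min and max.

```python
import random

# A sketch of the max/min thermometer logic described above, checking the
# claim that once-daily readings with marker resets cannot, by themselves,
# yield a recorded minimum above the recorded maximum for the same day.

random.seed(0)

def simulate(days=1000):
    temp = 15.0
    recorded_max = {}   # marker reading attributed to the previous day
    recorded_min = {}   # marker reading attributed to the current day
    max_marker = min_marker = temp
    for d in range(1, days + 1):
        # 24 hours of random-walk temperature between readings
        for _ in range(24):
            temp += random.uniform(-3, 3)
            max_marker = max(max_marker, temp)
            min_marker = min(min_marker, temp)
        # 9am reading: book the markers, then reset both to the current temp
        recorded_max[d - 1] = max_marker
        recorded_min[d] = min_marker
        max_marker = min_marker = temp
    return recorded_max, recorded_min

rmax, rmin = simulate()
inversions = [d for d in rmin if d in rmax and rmin[d] > rmax[d]]
print(len(inversions))  # 0 – the reset mechanism alone never inverts min and max
```

So if the ACORN-SAT inversions are real artifacts of observation practice, they must involve something beyond the simple read-and-reset routine – missed days with creative infilling, split 12-hour windows, or plain transcription errors.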
Ian George – the other factor in your link is that it states the highest daily max reading is 49.7 on Jan 31, 1903. Because ACORN starts in 1910, the new ACORN highest temp in Jan is 48.3 on the 12th, 2013 – a 1.4C difference, hence the “RECORD” temperature in the Angry Summer.
stan stendera says:June 29, 2013 at 12:08 am
>… What is wrong with these people that they just can’t tell the simple truth. No grant is worth your soul.
…
Mastering the art of Lying is an entry requirement for Australian socialist politicians and senior public servants … much like Obama and his mob.