Corrupted Australian Surface Temperature Records

Older records may be fragile, but was diligence any better in recent decades?

Guest essay by Bob Fernley-Jones

acorn-sat overview

Background:

Out of over 20,000 Bureau of Meteorology (BoM) weather stations on record [1], 112 have had their data “corrected” under a process known as homogenisation; the resulting dataset goes by the acronym ACORN-SAT (Australian Climate Observations Reference Network – Surface Air Temperature). Oddly, though, eight of these high-quality sites are admitted to, quote:

“…have some urban influence during part or all of their record, hence are excluded from the annual temperature analyses”.[2]

Thus only the resultant 104 ACORN site records are used to establish temperature trends, and it is these that are partly reviewed herewith. Some ACORN stations are known under a single site name but are actually several different locations combined (typically moving from in-town to the airport), and their homogenisation is partly achieved by including data trends from surrounding stations of lesser status. The change from Fahrenheit to centigrade units in 1972 is a further complication, involving the reading and rounding of Fahrenheit units that are 5/9 the size of centigrade units. There are also significant issues with the time of day of the readings.[3]
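As an aside, the conversion-and-rounding complication can be illustrated with a short sketch. This is a hypothetical illustration only (not the BoM’s actual conversion code): whole-degree Fahrenheit readings converted to centigrade and rounded to tenths can only ever land on certain decimal digits, leaving a characteristic fingerprint in the data.

```python
# Hypothetical illustration (not the BoM's processing): convert
# whole-degree Fahrenheit readings to centigrade, rounded to tenths,
# and see which tenths digits can actually occur.
def f_to_c_tenths(f):
    """Convert degrees F to degrees C, rounded to one decimal place."""
    return round((f - 32) * 5 / 9, 1)

# survey every whole-degree F reading from 32 F to 121 F
digits = sorted({round(f_to_c_tenths(f) * 10) % 10 for f in range(32, 122)})
print(digits)  # [0, 1, 2, 3, 4, 6, 7, 8, 9] -- the .5 digit never occurs
```

So a converted record of whole-degree Fahrenheit readings should show no .5 values at all, one example of how the unit change in 1972 can distort the distribution of decimals quite apart from any instrument issue.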

image

In probing the reliability of the resultant ACORN data, which were affected by varying resources etcetera during the 104 years of ACORN, there is an immediate controversy over the imbalance of short-term station records being used ONLY in recent times. (Oh, and some are in hot places.)

Whilst many of the 20,000 “lesser” stations are now closed (as are many older ACORN composite sites), there must be adequate older data to redress that imbalance. Furthermore, since homogenisation has greater importance for older records, which embrace a time known for reports of great heat and drought, it is arguable that any bias in time-series distribution should be the other way around if there is to be a serious effort to determine centennial trends. In other words, it is desirable to increase the sample size of the older evidence if it is deemed to be weaker, as seems to be the implication of making the homogenisations. So why is it that in ACORN the sample size of recent data, which should be more reliable, has been increased, when it ought to be the other way around if older data are suspect? More short-record info is at §6.

However, detailed consideration of this background would bring controversies into play, and the focus of this study is a test for reasonableness in only the distribution patterns of data; thus arguably avoiding any controversy.

[1] Unzip the BoM file from the BoM FTP site for details.

[2] But controversially they are used in homogenisation of rural sites and enable sensational media items of record heat at these Urban Heat Island (UHI) sites, including the grossly affected Melbourne R.O., which had to be closed in Jan/2015 (replaced by Olympic Park).

[3] Since 1964 all high/low readings were recorded during the 24 hours ending at 9 am; thus maxima are actually from the previous day. Prior to 1964 there were many other methods.

 

———————— And now about that data ————————

Foreword:

One of the difficulties in analysing the BoM data is the vastness of the database, which runs to about 38,000 lines of daily data in any typical long time-series record. Each series might have a mix of parameter concerns, but these are not easy to detect in such formidably long lists of numbers. Fortunately, the Microsoft EXCEL software used herein enables sorting and plotting of those daily data into a visually comprehensible form. Any counterintuitive thought that there is too much noise in the daily data to make a sensible chart is not an issue, because the high compression of the data visually smooths such noise. Many features become prominently visible, the most obvious being seasonal cycling, blocks of missing data, and step-changes. Furthermore, EXCEL can use search and logic formulae to find, replace or delete specific types of information, thus enabling the plotting of particular features.
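The same sort-and-flag logic is not specific to EXCEL; here is a minimal Python sketch of the idea (the 99999.9 missing-data code follows the ACORN convention described later; the sample values are invented for illustration):

```python
# Minimal sketch (not the author's spreadsheet) of the sort-and-flag idea:
# classify each daily value as a substandard whole-degree "integer",
# the 99999.9 "No data" code, or a normal tenths-resolution reading.
ACORN_MISSING = 99999.9  # the "No data" code used in ACORN files

def classify(value):
    """Label one daily temperature value."""
    if value == ACORN_MISSING:
        return "missing"
    if float(value).is_integer():
        return "integer"   # recorded only to whole degrees
    return "decimal"       # normal tenths resolution

daily = [17.3, 21.0, 99999.9, 18.0, 19.4]  # invented sample values
flags = [classify(v) for v in daily]
print(flags)  # ['decimal', 'integer', 'missing', 'integer', 'decimal']
```

Once flagged this way, the categories can be counted or plotted by date, which is essentially what the colour-coded charts below display.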

The charts that follow are unusual in displaying a great deal of colour-coded information within one page width. It is recommended to click them in the webpage to open them in a wider window, and to pause sufficiently to study the detail. Several charting methods are used in order to give alternative visualizations of data concerns for 24 stations[4], with most emphasis on recent decades. Sections §4 & §5 show two alternative methodologies together with expanded details from the main plots.

The main analysis starts with Fig 2 and covers recent decadal periods at six ACORN sites on this main page, continuing with 17 additional stations in the attached sections.

Ahead of that, Fig 1 for Merredin is first viewed as background information, in order to show typical relationships over a long record. Merredin is an example station for detecting anticipated recent improvements in data quality versus records going back over a century (additional examples in §5).

[4] Supplementary information is available for a total of 46 stations (44% of 104), all having various corruptions in their data.

 

§1 – Baseline comparison; Fig 1) Merredin, a long record:

· (Part 1a): The Bureau’s homogenised data were sorted using Microsoft EXCEL software to chart all integer values over the past century as single points. (EXCEL found 7,778 integers, not all of which are visible because of limitations in pixel definition and overlapping, but varying densities are visible.) Over a long time-series like this, a ten-percent distribution of integers is nominally expected, assuming good data. Whilst there are potential variations that could cause deviations from that norm, it is evident (more accurately in the actual EXCEL spreadsheet counts) that in Fig 1a the intervals labelled ①②⑤&⑥ are close to that expectation. However, there is significant deviation in intervals ③&④, where there should be a higher number of integers. (Perhaps more so because there could be an intent or tendency towards ocular approximation, given the smaller Fahrenheit units before decimalization in 1972?)

Whatever, the recent periods ⑦&⑧ are seriously corrupted. Also, intervals ⑨&⑩ are bad at 17.1% and 15.3%, although their 1,200-day sample is more modest. Nevertheless, there are serious unexplained inconsistencies in the records.
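The ten-percent expectation itself is easy to check in simulation. A minimal sketch with simulated data (not Merredin’s): readings honestly recorded to tenths of a degree should land on a whole degree roughly one day in ten, so an interval whose integer share sits far from 10%, such as the 17.1% and 15.3% quoted above, flags a distribution problem.

```python
# Hedged illustration of the ten-percent expectation, using simulated
# well-behaved data rather than any real station record.
import random

random.seed(1)  # reproducible illustration

# simulate ten years of daily readings at genuine 0.1-degree resolution
days = [round(random.uniform(5.0, 35.0), 1) for _ in range(3650)]

# fraction of days that happen to land exactly on a whole degree
integer_share = sum(v.is_integer() for v in days) / len(days)
print(f"{integer_share:.1%}")  # close to the nominal 10%
```

Over a 3,650-day sample the binomial standard deviation is only about half a percentage point, so shares like 17% are many standard deviations from the expectation, not ordinary sampling noise.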

What is very odd in terms of data distribution is that the minima and maxima are affected for conflicting durations before 1972, including a baffling big “hole-in-the-middle”, especially in the minima, as also seen in similar long-record charts in §5.

Note: Prior to 1972, the data are shown in °C but were found originally in °F. The various controversies in rounding and conversion are not considered to be relevant here.

(Part 1b): Another aspect of the integers is that they have occurred in both short and long clusters, lasting from several days through to several or more years. Strangely (but not uniquely), Fig 1b shows a high concentration of small clusters in a period similar to that of the problems associated with AWSs. However, Merredin is archived as a manual station currently equipped with mercury thermometers, according to this BoM source.

Additional comments on Fig 1 continue below the image. (Click to zoom into wider window):

image

History from the ACORN site catalogue: The original site (10093) was at a research farm about 5 km west of town. It moved 150 m southeast in May 1957. The Research Station continued to make observations until 1985 and these observations were used to merge the two datasets. The current site [10092] has been operating since 1966. It became overgrown for a time but was cleared in March 1986.

· (Part 1a) Arrow markers: The four arrows relate to the ACORN site history, and it can be seen that these points show no correlation with the changing data patterns over the century-long period.

· (Part 1c): This unitless bar chart shows, within the available pixel definition, the distribution of days where ACORN values are coded 99999.9 = No data. Before the 1960s the records are rather incomplete, but the middle period has good completeness despite the otherwise inconsistent patches ⑤⑥⑨&⑩. However, the last two decades are notable for their incompleteness, inaccuracy, and inconsistency.

Summary opinion on Fig 1: No claim is made that Merredin is “typical”, although five comparable variations on it follow in §5. Nor can revealing its bad data record simplistically be criticised as “cherry-picking”. It is one of the premier ACORN-SAT homogenised sites, groomed out of many other surrounding sites and claimed to be “Amongst World’s Best Practice”.

———————— Now for those problematic modern decades ————————

§2 – Figures 2–7) Six examples circa 2000:

 

Notes on reading the graphics:

•Click each image to open in wider window•

Red plots show the maxima and blue the minima temperatures (°C). Darker shades show substandard values in whole °C (integers) instead of to tenths. White space shows No Data. Colours plotted last may cover part of earlier plots, so their sequencing is indicated. The data are all highly compressed daily values with limitations in pixel definition, but the annual seasonal cycle etcetera is obviously evident.

These two-part charts first survey integers for the two decades around the time when AWSs were introduced. The lower parts of the charts survey the full centennial record for the distribution of missing ACORN data (99999.9 values). Caution: the horizontal x-axis scale differs greatly between the two parts.
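The “white space” in the lower panels corresponds to contiguous runs of the 99999.9 code. A small sketch of how such blocks can be located programmatically (the series values here are invented, purely for illustration):

```python
# Illustrative sketch (invented values): locate contiguous blocks of
# 99999.9 "No data" days -- the white space the lower chart panels survey.
MISSING = 99999.9

def missing_blocks(series):
    """Return (start_index, length) for each run of missing days."""
    blocks, start = [], None
    for i, v in enumerate(series):
        if v == MISSING and start is None:
            start = i                      # a missing run begins
        elif v != MISSING and start is not None:
            blocks.append((start, i - start))  # the run just ended
            start = None
    if start is not None:                  # series ended mid-run
        blocks.append((start, len(series) - start))
    return blocks

series = [17.2, MISSING, MISSING, 18.0, MISSING, 19.1, MISSING]
print(missing_blocks(series))  # [(1, 2), (4, 1), (6, 1)]
```

Sorting the resulting block lengths makes it easy to distinguish isolated lost days from the multi-year outages discussed below.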

 

Fig 2 – Larapuna (Eddystone Point):

From 1990 through to late 2003 the missing or integer values are excessive. (There are smaller “white spaces” in Fig 2a prior to 1997–2003.) With the introduction of the AWS there are no data in the maxima for nearly seven years, and, for the same period, the minima show substandard whole-degree (integer) values. How did Head Office fail to notice or correct these problems for some thirteen (~13) years? The simultaneous integer and blank data for ~7 years is intriguing (see also Fig 8 in §4).

image

Fig 3 – Cape Otway:

In this case the failure of the noted second AWS in both maxima and minima gave substandard integer values that remained uncorrected for over nine (>9) years!

The first AWS gave No Data in the minima for almost a whole year before it was replaced with a new one!

image

Fig 4 – Forrest:

This example is included primarily to show that substandard integer outcomes were apparent immediately upon installation of an early AWS in 1993, yet remained uncorrected for over two years, with the AWS employed instead of the established and better-functioning systems.

•AWSs were mostly adopted circa late 1996•

image

Based on a limited search, various failure periods overlap within the BoM system until at least 2004, resulting in excessive integer counts spanning at least eleven (~11) years in total!

It is also of interest that in several cases failure occurred only in the minima, an issue discussed further with Fig 5a, Bourke.

Fig 5a – Bourke:

A letter to me from the BoM of 3/June/2015 signed by acting Director of Meteorology and CEO, Dr Ray Canterford, responded to my follow-up questions from 2014 on bad data at Bourke:

…The occurrence of whole (integer) temperatures in the Bureau’s database is a well-known issue that affected early-generation Automatic Weather Stations (equipment), due to limitations in the coding software at the time. It is described on page four of the ACORN-SAT observation practices document on the Bureau’s ACORN-SAT website. [See §8 References] Where no adjustment is deemed necessary to the raw data, the rounded figures remain in the database…

 

But (whilst not wishing to hypothesise in this study), unless the software differed between different hardware, it seems unlikely that coding errors would erratically affect some stations but not others, or that the malfunctions would be temporally sensitive in the software. In this graphic, the maxima are affected earlier and for longer than the minima; minima values did not become corrupted until about three years after the hardware was installed. (This might suggest hardware instability rather than software “instability”?)

image

Fig 5b – Comments: In some cases, as in the example of Fig 2, long periods may be void of data during AWS operation, and it is apparent that these gaps and the parallel substandard integer values are likely interconnected in some way. (Such precise parallels, also seen in Fig 8, are unlikely to be coincidence.) Less obvious in this example of Bourke is that a more random distribution of increasing missing data can exist with an AWS. Limitations in pixel definition give only a broad impression of count, but the EXCEL digitally determined day-counts of missing values associated with the AWS are 537 for the minima and 415 for the maxima. Note that the minima are plotted second and so smother much of the maxima 99999.9 values underneath.

In the early days of my curiosity about matters BoM, I asked their enquiry desk why Bourke had a very complete record prior to the introduction of an AWS, which then deteriorated very badly. Here follows part of their response from mid last year:

Case # E3IH2A1856:

“…There are many different reasons for missing data. We refer you to our previous correspondence on the issue of missing AWS data. In general, one should not assume that an Automatic Weather Station (AWS) would necessarily provide more complete data than manual observations, especially in the instance that the AWS replaces a site with a very good history of data completeness. Bourke’s long-term data completeness over its whole record is 98.2%, very close to the median. The completeness of the Bourke PO data was exceptionally high. The completeness of the AWS has not been as good, but has returned to above 98% in the last 3 years…”.

Whatever, the strikingly bad performance of the AWS was allowed to exist for over a decade and has only improved over the last three or so years.

 

Fig 6 – Rutherglen:

The variation in this example is less visible, so the colour contrast is increased as an aid. Careful examination reveals a different pattern of small clusters only within the larger ellipse (which spans some 1,500 days).

image

Shortly after installation of the 1998 AWS, the first of a series of some 17 short clusters of integers occurred. (They are more visible in the source EXCEL spreadsheet.) Minima were not necessarily in phase with maxima, but again, it seems unlikely that a software problem could be so erratically and temporally sensitive. (A caveat is that some of the “thinner” intervals are only three or four integers wide and might just be unusual chance clusters, though that is less likely when max and min are paired, as some of them are.) However, the several pairs of “broader” clusters cannot possibly be flukes. The issue here is that there are clear malfunctions which seem to point towards inherent hardware instability rather than the software coding issues claimed by the Bureau. (Occurring in either short or very long clusters, sometimes years after trouble-free installation, and suggestive in retrospect of “we don’t really know how or why”, once they later became attentive to it.)

§3 – Discussion on figures 2 through 6:

These five graphics give samples of the main concerns. Additional examples and other concerns can be accessed by clicking the links under the following sections: §4 (Figs 7–16), §5 (Figs 17–22) and §6 (Figs 23 & 24).

This station search focused primarily on remote coastal and inland regions, on the consideration that remote sites are those arguably in need of AWSs, and that transmission/logging of data might be a potential factor. (Also, UHI effects are less likely, and station names comprising only one location are more common.)

Any suggestion that these remoter sites are unrepresentative is countered by them all being part of the acclaimed high quality ACORN-SAT homogenized system.

 

§4 – Figures 7 – 16) Nine more examples of problem sites circa 2000:

CLICK: section4.pdf to open in new window

•Figures 10 through 16 show much the same sort of information but offer different visual perspectives•

 

§5 – Figures 17 – 22) Centigrade versus Fahrenheit, including half-degrees:

CLICK: section5.pdf to open in new window

•These five examples are long records similar to that of introductory Fig 1 but with some visually expanded details•

 

§6 – Nineteen short record sites tabulation and Figures 23 & 24

This is further to the opening Background discussion. CLICK: section6.pdf to open in new window

 

§7 – Basic Conclusions:

· It is nonsensical, when there are thousands of older stations on record, that the older records argued to require ACORN corrections should have a smaller sample than the “more reliable”, largely uncorrected data of recent times.

· In the 24 stations reviewed herewith, corrupted data continued for decades, as if no one was particularly interested back then, at a time before alarming climate-change theories became more popular.

· These bad data were especially associated with AWSs and differed from other bad data in earlier times.

· Extracting from §5: …following 1972 decimalization there were significant increases in integer value counts. This is counterintuitive given that decimals are easier to read with centigrade units 9/5 the size of Fahrenheit units. Thus, conversion from the original Fahrenheit temperatures has somehow gone painfully awry…

· The net outcome is that the ACORN data are not credible for reliable trend determinations.

· The penultimate conclusion is reinforced (although it was not discussed herein) by another fact: the Bureau’s dismissal of strong data and comparative reports of hotter times prior to their ACORN “Start of Time” of 1/Jan/1910.[6] Accounts of great heat in the past were paralleled by prolonged and devastating droughts. Tellingly, expert hydrologists advise that droughts result in higher air temperatures.[7]

[6] E.g. Watkin Tench’s book on the Port Jackson settlement, chapter 17, describing myriad bats and birds dropping dead from the trees, etcetera.

[7] E.g. Professor Stewart Franks, and histories such as the so-called “Federation Drought” from the late 1890s.

§8 – References:

ACORN Station Catalogue. (Including history of sites involved)

Sortable list of ACORN-SAT stations with linked data

ACORN-SAT Website main page. This includes other menu items, including claims of peer review and “World Quality”.

Dorothea Mackellar (During visit to England 1908, before the beginning of ACORN time)

§9 – Disclosures:

I’m a retired mechanical engineer with no past or present funding for this research from anyone, and no conflicting vested interests or motivations other than to see fair play in science-driven policy-making.

Compiled by Bob Fernley-Jones Melbourne Oct/2015.

91 Comments
Latitude
November 15, 2015 4:01 pm

Hiding The Decline In Australia
Climate experts now pretend that the Australian temperature record began in 1910. They do this to hide all of the record hot years before 1910.
https://stevengoddard.wordpress.com/2014/10/02/hiding-the-decline-in-australia/

Crispin in Waterloo
Reply to  Latitude
November 15, 2015 4:45 pm

Latitude – that is very interesting. Usually manipulation is not that blatant.

Reply to  Latitude
November 15, 2015 4:58 pm

There is a good reason for starting in 1910. Following Federation, the BoM was enacted in 1906, and became fully functional in 1908. It embarked on a national program of installing Stevenson screens, which were not common before. That was largely complete by 1910.
The pre-1910 records are of course available, and S Goddard is using them.

Reply to  Nick Stokes
November 15, 2015 7:21 pm

Every time someone does something, it is always done for “a good reason.” Doing something for “a good reason” doesn’t always make it the right reason.

Tom Yoke
Reply to  Nick Stokes
November 15, 2015 7:48 pm

pmhinsc has made the correct point.
I personally do not doubt that the many adjustments to the surface record, which have overwhelmingly “cooled the past and warmed the present”, have been done for “good reasons” with “good intentions”.
Bias is a SUBTLE problem. The double-blind studies used in pharmaceutical research are required not because of bad motives. The difficulty is that when you know the answer you’re “supposed” to get, it is all too easy to nudge the analysis in that direction.
In climate research there is a trifecta of funding awards, political groupthink, and Green moral posturing, which all combine to provide powerful incentives for nudging the data.

bobfj
Reply to  Nick Stokes
November 15, 2015 8:49 pm

Nick Stokes,
Erh check this out for Melbourne
http://collections.museumvictoria.com.au/items/727521
Stevenson screens were not invented in 1910

Patrick
Reply to  Nick Stokes
November 15, 2015 10:08 pm

More excuses from Nick.

Bob Fernley-Jones
Reply to  Nick Stokes
November 15, 2015 10:35 pm

Nick,
I tried to respond earlier but maybe the spam filter did not like my link.
Try Googling (copy-paste): Negative – Stevenson Thermometer Screen, Melbourne Observatory, circa 1879
In short, Stevenson Screens were not invented in 1910.

Aynsley Kellow
Reply to  Nick Stokes
November 15, 2015 11:09 pm

What has always worried me is that the BOM routinely excludes data pre-1910 (thus missing the ‘Federation Drought’), but what then is the source of the Australian data in the global temperature datasets? It cannot be unfit for Australia, but OK for global records, surely.

Reply to  Nick Stokes
November 16, 2015 12:08 am

BobFJ,
“In short, Stevenson Screens were not invented in 1910.”
No. There is a paper here by Nicholls (1995) on the history in Australia. He says:

Even as late as 1907, by which time most stations had Stevenson screens, the situation regarding exposure of thermometers was less than perfect. Hunt, the recently appointed Commonwealth meteorologist, hosted a Meteorological Conference in Melbourne in May 1907. In his address to the Conference he noted that ‘In different States various methods of exposure and types of equipment obtain. Some of the screens in use-in fact the majority-are of the dimensions of the British Stevenson pattern, and many that we have had the opportunity of seeing have been made without due consideration to the choice of wood in their construction, with the result, when subjected to our extremes of climate, have warped and cracked in all directions, arousing suspicion that the direct rays of the sun may occasionally reach the thermometers through the roof’ (Hunt, 1907b). Hunt’s appointment led rapidly to the standardization of exposures, with most of the remaining non-standard exposures being replaced by Stevenson screens from 1908.

Hunt wasn’t trying to rig the data for the IPCC.

Leigh
Reply to  Nick Stokes
November 16, 2015 12:48 am

Nick, as you would be quite aware, Stevenson screens were quite common pre 1910.
Jo Nova ‘s been all over this for years.
Somebody forgot to tell the UN those records pre-1910 are inaccurate; they still refer to them.
That tells me that again Australia’s BOM is simply fitting data to an end, all according to world’s best practice of course!
Remove the adjustments and just what is left of the global warming scam?

Peter Plail
Reply to  Nick Stokes
November 16, 2015 1:10 am

Whilst not wanting to be an apologist for Nick Stokes, I should point out that he did say they were not common before 1910, not that they weren’t invented.

igsy
Reply to  Nick Stokes
November 16, 2015 1:13 am

Can’t your friend Dr Mann find a couple of Australian trees that can help us out here? They are usually accurate to half a degree over hundreds of years, so we’re told.

Eystein Simonsen, Norway
Reply to  Nick Stokes
November 16, 2015 1:59 am

Seems fair to me. Best quality after 1910 with standardized equipment. But I guess there are other options regarding the years before? Like reconstruction of the data based on best practice? Are Goddard’s data okay regarding the graph of the previous period?

Dave N
Reply to  Nick Stokes
November 16, 2015 2:55 am

“…which were not common before”
You must have a strange definition of “not common”. All of Queensland and SA had them by 1892, and though NSW and Vic were slower to follow, they were widespread.
Regardless, why ignore *all* the Stevenson screen measurements? They choose to ignore many stations now that *have* Stevenson screens.
Your argument is full of more holes than a Stevenson screen

Reply to  Latitude
November 15, 2015 5:28 pm

Hiding the incline?

bobfj
Reply to  Latitude
November 15, 2015 8:12 pm

Nick Stokes,
If I understand your opinion on “hiding the incline” for the Melbourne Regional Office…..it is relevant that RO was a joke of a UHI (and critically wind disrupted) site, and was closed maybe 50-years too late on 1/Jan/2015, replaced by a less controversial open sports stadium site. I speculate that puzzlingly it seems FROM the DATA that the thermal situation at RO in recent decades was treated by the BoM as unaffected by urbanisation and that they thought past temperatures needed correction upwards. (That is arguably generally opposite to typical urban site “corrections” in ACORN). A saving grace is that the BoM claims not to use RO data for national warming trends….yet they use it to homogenise rural ACORN stations….and they have always been keen to shout about any hot days at RO. Ho Hum.
OH, but when I use the collective ‘they’ it could of course be just one person’s judgement involved.
Maybe speculation of why is a waste of time and we should just test the DATA outcomes for reasonableness?

ozspeaksup
Reply to  Latitude
November 16, 2015 4:23 am

for Igsy above
well they could chop down a wallami pine or two?
🙂
as rare as the trees mann murdered

Michael
Reply to  Latitude
November 16, 2015 1:24 am

As you probably know, Darwin was bombed in 1942 and the site of observations was changed. It’s a bit suspicious that the bombing appears to have caused a step change in the temperature!

Bernard Lodge
Reply to  Latitude
November 16, 2015 1:08 pm

Don’t forget that Australia and New Zealand temperatures are used to in-fill vast areas of the Southern Indian Ocean, the Southern Pacific Ocean and the Southern Ocean that do not have any temperature stations. That is why upwardly adjusting Australia’s temperature trends is such a high priority of the ‘Warmistas’. These adjustments have a much larger effect on the global average temperatures than the area of Australia alone would suggest.

Robert of Ottawa
November 15, 2015 4:14 pm

Out of over 20,000 Bureau of Meteorology (BoM) weather stations on record [1], 112 have had their data “corrected”
Sorry, I’m confused here. If there were 20,000 weather stations, then the alteration of 112 of them wouldn’t make a difference. Perhaps there is an editorial faux pas, or maybe I am in error.

markx
Reply to  Robert of Ottawa
November 15, 2015 4:34 pm

As I understand it, of 20,000 stations, 104 have been selected, had their data corrected, and are now considered to be the ACORN data set.
It seems also that other station data (ie the remaining 19,896 or so) is not included, but has been used in some cases to adjust data for selected stations.
All extremely peculiar.

markx
Reply to  Robert of Ottawa
November 15, 2015 4:40 pm

Yes, here it is:
A new homogenised daily temperature set, the Australian Climate Observations Reference Network – Surface Air Temperature (ACORN-SAT) data set, has recently been released by the Australian Bureau of Meteorology.
This data set contains daily data for 112 stations, with 60 of them extending for the full period from 1910 to 2011, and the others opening progressively up until the 1970s.
(1910 is taken as the starting point, as it was only with the formation of the Bureau of Meteorology in 1908 as a federal organisation that instrument shelters became standardised). 

.. http://surfacetemperatures.blogspot.co.id/2012/06/new-homogenised-daily-temperature-data.html?m=1


Patrick
Reply to  Robert of Ottawa
November 15, 2015 10:17 pm

These 112 (or now 104) stations are used to calculate the national average. A completely meaningless number. But every year that average is trotted out as being the hottest evah!

Rod
November 15, 2015 4:48 pm

Joanne Nova posted an article recently written by Bill Johnstone on BOM data problems that is well worth a read:
http://joannenova.com.au/2015/11/blockbuster-are-hot-days-in-australia-mostly-due-to-low-rainfall-and-electronic-thermometers-not-co2/

richard verney
Reply to  Rod
November 16, 2015 4:47 am

It is important to bear in mind the interaction between rainfall and temperature.
I posted a link to that article on Willis’ article on a tale of two convergences. Getting a proper grip on the water cycle is a pre-requisite for any basic understanding of our climate system, and there is no prospect of modelling the system until this is known and understood on a small-grid-level basis.
Whilst slightly different to the point being made by Bill Johnstone, there has never been any point in simply using temperatures without also incorporating relative humidity. We consider that temperatures are a proxy for energy, but without knowing RH we simply do not know what, if any, energy change is taking place (and hence whether there is some growing imbalance), and as Bill Johnstone points out, temperatures are impacted by rainfall.
This is another example of the problem that Michael Mann faced with his tree rings. Tree rings may be influenced by temperature, but tree rings are at their heart a proxy for growing conditions, and many factors apart from temperature influence whether growing conditions are good or bad. Bill Johnstone points out that temperatures are impacted by rainfall/drought conditions, and thus one has to be careful when looking at what temperatures (temperature anomalies) may be revealing.

Mick In The Hills
November 15, 2015 5:03 pm

The official government and BoM response to these blatant issues has been – “nyah, nyah, nyah, can’t hear you!!” (although of course delivered in bureaucratese).
But try as they might, more and more independent analyses such as Bob’s here are belling the cat.
The books are being cooked.

lee
Reply to  Mick In The Hills
November 15, 2015 5:22 pm

“The books are being cooked.”
sauteed over a low flame.

KJ
Reply to  lee
November 16, 2015 12:52 am

Today’s goose is being cooked at an average temperature of 1.0 degrees C above that of pre-industrial times.
I think I read that somewhere when having a gander at the peer-reviewed articles on climate change consensus.

MarkW
Reply to  lee
November 16, 2015 6:10 am

Stop egging him on.

November 15, 2015 5:06 pm

Reblogged this on The GOLDEN RULE and commented:
Just to keep the Australian scene available to the public.
A bit technical but it provides a contrast to the warmist propaganda!

Curious George
November 15, 2015 5:23 pm

104 is 97% of 112, which in turn is 97% of 20,000. The best Australian science.

Bob Fernley-Jones
Reply to  Curious George
November 15, 2015 6:29 pm

Curiously Curious, the number of stations on BoM records is 20,112. So maybe they extracted 112 named sites to leave a balance of exactly 20,000? Smiling brings relief.

November 15, 2015 5:31 pm

“112, which in turn is 97% of 20,000. The best Australian science”
Seems like unAustralian arithmetic to me.

Curious George
Reply to  Nick Stokes
November 16, 2015 7:47 am

Nick, please look at the original consensus article. This is exactly their arithmetic. Maybe unAustralian; feel free to discuss it with them.

RD
November 15, 2015 5:33 pm

The past is lowered and the present is raised in every one of these data sets. All of them. Every single country. Every scientific organization. All universities.
And all of the models are all wrong way high.
What is the probability?

RockyRoad
Reply to  RD
November 15, 2015 5:42 pm

100% if the intent is to protect the religion and avoid the science.

asybot
November 15, 2015 5:45 pm

Now where have I heard the name ACORN before? mmmmm

ozspeaksup
Reply to  asybot
November 16, 2015 4:25 am

yeah, squirrels n nuts
BoM has both, and they’re paid employees :-)

MarkW
Reply to  ozspeaksup
November 16, 2015 6:13 am

In the US there was a far leftwing group that got in trouble for breaking quite a few laws while registering new Democrat voters. Many of whom hadn’t gone through the trouble of getting themselves born, prior to registering to vote.

Ed
Reply to  ozspeaksup
November 16, 2015 2:34 pm

Correct, Mark. The spirit of ACORN lives on in the US. After countless infractions and embarrassments to the Democrat party, the ACORN organization voluntarily diversified itself into dozens of smaller organizations, all of which still push forward illegally on many fronts, including, as you mentioned, voter fraud. Their other main function is to instigate demonstrations and stir up discontent. To make matters worse, many of these new organizations are secretly funded by subprime settlement monies extorted from banks by Obama’s “Justice” Department. Naturally they are 100% loyal to Obama. The rule of law in the US is disappearing rapidly.

Mike T
November 15, 2015 5:51 pm

With regard to AWS, it’s my understanding that these were installed at airports for aviation, not climate, purposes. Hence the AWS at Bourke airport while there was an existing weather/climate/rainfall station in town at the Post Office. Early AWS were not especially reliable, and manual observations continued alongside AWS readings at many stations, including BoM-staffed ones. The in-glass readings were the official readings until BoM decided that sensor readings were “sufficiently accurate”, and they then became the standard, with in-glass readings becoming the backup. Early AWS were plagued with problems and are still not 100% reliable: it only takes something like a router falling over and all readings are lost, as they are archived in head office, not locally. I have voiced concerns over the 9am maximum temp reading for many years: the true max for a particular day is lost when the next day is hotter before 9am (which happens with some frequency in Oz, even with daylight saving time).

Reply to  Mike T
November 15, 2015 8:20 pm

Mike T says – “the true max for a particular day is lost when the next day is hotter before 9am”
As I understand it, the 9 am reading is for the 24 hours prior, in which case the reading is correct at 9 am. The thermometer is then reset, for the next 24 hours.
This and similar types of TOBs events can be dealt with in different ways, like trying to adjust data by extrapolation, to the 24 hours ending midnight.
We really need actual data to support your claim that the warm start to the 9 am day happens with some frequency in Oz. I’m sure it does, but we want to know what that frequency is. The guys who set up this national recording service way back when were quite talented. It would be wrong to assume that they were unaware of potential problems with 9 am readings. That they went ahead with them is perhaps evidence that the effect was so small as to be ignored.
Anecdotally, I cannot remember a day when this pattern happened to me. There are automatic weather stations now, with readings many times a day, so that it is possible to model the daily temperature profile at all ACORN-SAT sites and count the number of days presenting a problem. I have never seen that done, would appreciate a reference if it has been.
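The count Geoff asks for is straightforward once sub-daily data are in hand. A minimal sketch (not BoM code), assuming half-hourly AWS readings supplied as (timestamp, °C) pairs; the example readings are invented for illustration:

```python
# Sketch of counting "false max" days from sub-daily AWS data.
# Assumes obs is a list of (datetime, temp_C) pairs; not BoM code.
from datetime import datetime, time, timedelta

def count_tob_problem_days(obs):
    """Count 9am-to-9am observation days whose maximum actually fell in
    the 00:00-09:00 tail of the next calendar day, i.e. days whose
    recorded max is really the following morning's heat."""
    blocks = {}
    for ts, t in obs:
        # A reading before 9am belongs to the 24-hour block credited to yesterday.
        day = ts.date() - timedelta(days=1) if ts.time() < time(9, 0) else ts.date()
        blocks.setdefault(day, []).append((ts, t))
    problems = 0
    for day, readings in blocks.items():
        ts_max, _ = max(readings, key=lambda r: r[1])
        if ts_max.date() > day:  # max fell after midnight, before the 9am reset
            problems += 1
    return problems

obs = [
    (datetime(2015, 1, 1, 12, 0), 30.0),  # day 1: mild seabreeze day
    (datetime(2015, 1, 1, 15, 0), 32.0),
    (datetime(2015, 1, 2, 7, 30), 36.0),  # day 2: already hot before 9am
    (datetime(2015, 1, 2, 10, 0), 40.0),
    (datetime(2015, 1, 2, 15, 0), 42.0),
]
print(count_tob_problem_days(obs))  # 1: day 1's recorded max would be 36, not 32
```

Run against real half-hourly archives for each ACORN-SAT site, this would give exactly the per-site frequency Geoff says he has never seen published.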

Mike T
Reply to  Geoff Sherrington
November 16, 2015 12:47 am

Geoff, I read thermometers for 40+ years, as a meteorological observer. The “false max next day” scenario is not uncommon in summer. An extra 6 degrees (which I have seen) when added to the month’s max makes for an extra 0.2C for the mean max… when the warmists are wailing about hundredths. An example scenario: coastal location with regular seabreezes in summer. First day has the usual pre-noon seabreeze, the max is 30C. Next day, there is a howling NE’ly off the desert. The temp is already 30C at 7.30, by 9am it’s 36C on the way to a 42C max, with a late, weak seabreeze. The 36C replaces the 30C max for the previous day. I’ve raised this additional point in the past, as well: electronic probes give a slightly higher max temp than mercury-in-glass thermometers, between 0.1 and 0.5C. Presumably they are more sensitive- there is some inertia in the mercury column, plus they have a tendency to “suck back” about 0.1C when the day cools or overnight.
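As a trivial check of the arithmetic in Mike T's example:

```python
# Mike T's arithmetic: one spurious "false max" 6 C above the true max,
# in a 30-day month, shifts the monthly mean maximum by 6/30 = 0.2 C.
true_max = 30.0      # genuine max for the day (C)
recorded_max = 36.0  # the next morning's heat that replaced it
days_in_month = 30

error_in_monthly_mean = (recorded_max - true_max) / days_in_month
print(error_in_monthly_mean)  # 0.2
```

Two tenths of a degree on a monthly mean is indeed large against claimed trends quoted in hundredths.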

bobfj
Reply to  Mike T
November 15, 2015 8:39 pm

Mike,
Maybe the BoM preferred to use the faulty AWS data because well, at least it was digitised, even if reading nothing (coded 99999.9) and/or integers for years on end. A digitally registered nothing is better than some nebulous stuff written on paper? Perhaps the mercury readings were written down OK but too much trouble to load to digital file. Dunno……. just speculating.

Editor
November 15, 2015 6:31 pm

Be careful. The internet’s version of Scut Farkus – Tamino – is going to accuse you of attacking the data. While an independent scientific mind would objectively question the accuracy of any data they used in an objective scientific analysis, Tamino is far from objective or open-minded.

Werner Brozek
November 15, 2015 6:44 pm

In the article here:
http://wattsupwiththat.com/2014/10/05/is-wti-dead-and-hadcrut-adjusts-up-again-now-includes-august-data-except-for-hadcrut4-2-and-hadsst3/
Tim Osborn:
http://wattsupwiththat.com/2014/10/05/is-wti-dead-and-hadcrut-adjusts-up-again-now-includes-august-data-except-for-hadcrut4-2-and-hadsst3/#comment-1756969
provided this link:
http://www.metoffice.gov.uk/hadobs/crutem4/data/previous_versions/4.2.0.0/CRUTEM.4.2.0.0_release_notes.html
“The principal subsets of station series processed and merged with CRUTEM (chronological order) are:
Norwegian – homogenized series
Australian (ACORN) – homogenized subset,”
etc
So the bottom line is this: If the adjustments to ACORN are faulty, then the adjustments to Hadcrut4 are also faulty.

bobfj
Reply to  Werner Brozek
November 15, 2015 7:05 pm

Werner,
I’m hoping that WUWT readership, particularly the victims of GISS and Hadley-CRU, might wonder if their oracles of climate disruption could possibly be using bad data.

Dreadnought
November 15, 2015 7:41 pm

It looks as though the thigh-rubbing data-fudgers have been up to their naughties again…

Chris Hanley
November 15, 2015 8:51 pm

Bob Fernley Jones and his ilk show enormous diligence and patience.
Australia’s climate is summarised in The Royal Atlas & Gazetteer 1890 which shows the then generally known isotherms for July and December:
http://www.nla.gov.au/apps/cdview/?pi=nla.map-raa32-s29-e
A comparison with the current BOM average daily averages for December, after conversion, shows nothing much has changed: Melbourne in 1890 was ~65F and is now ~18C (64F), Sydney was ~70F, now ~21C (70F), Brisbane was ~77.5F, now ~23C (73F, a bit cooler), Adelaide was ~70F, now ~20C (68F, a bit cooler), Perth was ~70F, now ~21C (70F):
http://www.bom.gov.au/climate/averages/climatology/temperature/hires_mean/aus/meanausdec.png
The comparison of July mean temperatures shows the above cities were slightly warmer in the 1880s.
Now I realise that scientists had poorer eyesight back then, and for that reason the BOM has decided not to use the data before 1910 (sarc).
IMO, as a non-scientist, the smooth isotherms on the 1890 maps are generally more credible as interpolations between known data points than the jagged lines on the current maps.

Chris Hanley
Reply to  Chris Hanley
November 15, 2015 9:07 pm

Pardon the syntax gaffes.

richard verney
Reply to  Chris Hanley
November 15, 2015 9:59 pm

A really insightful post.
It would be interesting to review old atlases/charts to see what they say about other countries, but of course we are probably looking for temperature changes of no more than 1°F in 130 years, so we are not looking for a substantial change.

rogerknights
Reply to  Chris Hanley
November 15, 2015 10:44 pm

If the pre-1890 data had been colder than subsequent years, rather than hotter, wouldn’t the data have been spliced in somehow? That’s a question that could be put to the BOM.
But no need; to ask the question is to answer it.

Patrick
Reply to  Chris Hanley
November 16, 2015 1:48 am

I wonder how long it will take for these to be “disappeared”?

Hugs
Reply to  Chris Hanley
November 16, 2015 10:31 am

Now I realise that scientists had poorer eyesight back then, and for that reason the BOM has decided not to use the data before 1910 (sarc)

I don’t think they claim they had bad eyesight. I think the line goes: By using advanced statistical methods, it can be shown that the yearly average mean temperature has risen about 1.0C from the preindustrial level.
This is so small a change, that really sophisticated methods are needed to adjust the measurements done with different TOB, without Stevenson screens etc. However, at the same time it is a catastrophic change, which, if continued, may lead to disastrous consequences.
The temperatures are projected to rise at an accelerating speed. This will cause glaciers to melt and sea levels to rise due to thermal expansion and meltwater. Thinning Arctic ice may cause a strong positive feedback, and combined with Arctic methane bomb, may cause a vicious circle of warming, which may lead up to +10C warming before 2100.
UN projects there will be up to 50,000,000 climate refugees before year 2050. Act now! Switch off the toilet light. Drive less. Buy organic food from local producers. And vote for left-leaning parties. Remember: doing something is better than doing right.
(sarcasm detector may be needed before jumping into conclusions)

Mjw
November 15, 2015 11:11 pm

“Rounding of 5/9 ratio smaller Fahrenheit”
Don’t their computer keyboards have a decimal point?
What are the betting odds on which way the roundings were made?
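To make the metrication point concrete, a small sketch; the assumption that pre-1972 readings were taken to the nearest whole degree Fahrenheit is mine, not stated in the post:

```python
# Sketch of the 1972 Fahrenheit-to-Celsius conversion issue.
# Assumption (mine): pre-1972 observations were read to the nearest whole F.
def f_to_c(f):
    return (f - 32.0) * 5.0 / 9.0

# One whole Fahrenheit degree is 5/9 C, so a whole-degree F reading already
# carries up to ~0.28 C of quantisation error before any further rounding.
step_c = f_to_c(71.0) - f_to_c(70.0)
print(round(step_c, 3))      # 0.556
print(round(step_c / 2, 3))  # 0.278

# Converting and then rounding to 0.1 C: whether ties are rounded up, down
# or to even decides which way any systematic bias leans.
print(round(f_to_c(70.0), 1))  # 21.1
```

If the second rounding were always made in one direction, the bias would be systematic rather than averaging out, which is presumably Mjw's point.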

Peter Azlac
November 16, 2015 12:53 am

When the question of the accuracy of the temperature record at Rutherglen first emerged, I examined the Australian temperature records used by Acorn and BEST (GHCN), with the following results:
BOM Selection of Stations in Acorn
Using the BEST records, Steve Goddard finds around 200 long-term Australian stations. My examination of the BEST data gives around 165 with continuous or near-continuous records of 100 years or more – excluding Rutherglen, which BEST shows as two stations. Out of these, according to the Acorn record, 90 have Tmin and Tmax data, but I find that only 60 have it for the full record. BOM selected only 45 of these long-term stations for their Acorn series, but only 18 of those had Tmin and Tmax values for the full records. Even then, BOM picked start dates of 1910 at the earliest for all records, with 49 starting later, of which 38 were later than 1938. Further, even when Tmin and Tmax values were available for the full record, for many of the stations they only used the records after 1950. The BOM justification for cherry-picking the 1910 earliest start date was the timing of installation of Stevenson screens, but there is evidence that in many cases these were installed earlier than 1910. The net effect of these actions has been to cool the earlier part of the record and enhance the warming in the latter half, to give a false trend.
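The kind of census Peter describes can be sketched as below; the station metadata, field names and example values are entirely hypothetical (BEST and GHCN use their own formats):

```python
# Hypothetical sketch of filtering a station inventory for long records.
# Records, field names and values are invented for illustration only.
stations = [
    {"name": "A", "start": 1890, "end": 2014, "full_minmax": True},
    {"name": "B", "start": 1912, "end": 2014, "full_minmax": False},
    {"name": "C", "start": 1905, "end": 1980, "full_minmax": True},
]

# "Long term" here means a record spanning 100 years or more.
long_term = [s for s in stations if s["end"] - s["start"] >= 100]
# Of those, which carry Tmin and Tmax for the full record?
full = [s for s in long_term if s["full_minmax"]]
print(len(long_term), len(full))  # 2 1
```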
Using the BEST non-homogenized raw data (most of their data comes from GHCN and so may have been manipulated by BOM before submission to GHCN) the following table shows the numbers of these long term stations with trends in different temperature brackets over the station record lifetimes.
Long-term trend (°C)    S&N East    S&N West    Central
Negative                44/28       3/3         0/0
< 0.1                   12/2        1/0         0/0
0.11 – 0.49             30/11       1/6         1/0
0.5 – 0.99              20/15       16/8        1/1
1.0 – 1.49              20/11       11/5        0/0
1.5+                    3/2         2/1         0/0
(Each cell: total stations / those with Tmin & Tmax)
These raw temperature data show that the long-term cooling trend is in the Eastern areas of the country and covers 54% of the stations, with a further 15% showing trends of little significance. It will be interesting to see what distinguishes these stations from the rest of the eastern stations. But what we can say right away is that it is not simply a matter of start date, as there are stations with start dates post-1900 that also show similar cooling trends to those that commenced before this date.
What is clear is that there is no such thing as an all Australian climate trend. The trends differ by region with most of the warming in the central and western areas and most of the cooling in the east. This suggests that the climates of these two areas are subject to different drivers that are most likely linked to the IOD in the West and ENSO in the East.

Patrick
Reply to  Peter Azlac
November 16, 2015 1:38 am

“Peter Azlac
November 16, 2015 at 12:53 am
The trends differ by region with most of the warming in the central and western areas and most of the cooling in the east.”
Agreed. Yet the BoM still tries to average the WHOLE continent (with data from 112 stations). It’s exactly like putting your head in an oven at 100°C and your feet in a freezer at 0°C and saying everything is OK at 50°C. Silly!

steverichards1984
November 16, 2015 1:21 am

“it seems unlikely that coding errors would erratically affect some stations but not others”!!
Surely we all know that climate software “deals with this sort of discrepancy correctly”!
/sarc

richard
November 16, 2015 1:23 am

104 for the whole of Australia? the UK has over 300.
Surely micro climate comes into play –
• Upland regions
• Coastal regions
• Forest
• Urban regions
how can you allow for the above.

Patrick
Reply to  richard
November 16, 2015 1:31 am

I understood it was 112 stations that were used to calculate the national average; it seems now to be fewer. 112 stations is one device for every ~68,500 square kilometres of Australia.
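Patrick's density figure is easy to check, taking Australia's land area as roughly 7.69 million km² (my round figure, not from the post):

```python
# Checking the stations-per-area figure; area is my approximation.
area_km2 = 7_692_000
stations = 112
print(round(area_km2 / stations))  # 68679 km^2 per station, i.e. ~68,500
```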

John Sayers
Reply to  richard
November 16, 2015 7:25 pm

Richard – when Simon J Torok and Dr N Nichols first homogenized the Australian temp record in 1996 they started with 1418 stations. They cut it down to 224 stations for their report. (It was Torok’s PhD.)
The BoM then reduced that to the 112 stations in the Acorn Data base.

spangled drongo
November 16, 2015 2:28 am

Bob FJ’s link to Watkin Tench’s personal observation of the hot weather at Rose Hill in Feb. 1791 is interesting. Tench reports that the ground was strewn with dead “peroquettes”.
These would have been Rosellas [Rose-Hillers] and Lorikeets and I have lived with these birds a lot closer to the equator than Rose Hill in occasional extremely hot conditions for nearly 80 years and have yet to see even one die from heat.
Pity Tench didn’t have a thermometer that day [not that the BoM would have acknowledged it].

knr
November 16, 2015 3:11 am

We are back to the two problems that have dogged climate ‘science’ right from the start: one, we lack the ability to measure, in a scientifically meaningful way, the elements we are making judgements on; and two, partly because what we are attempting to measure is subject to chaos, we have little or no long-term valid data.
And on that bottomless pit of quicksand they built ‘settled’ science, which explains why so often it sinks and they have to resort to smoke and mirrors to pretend it’s not sinking.
Pre-climate doom, it was accepted that weather prediction would be hit and miss because of these problems. Post-climate doom the issues remained, but by the ‘magic’ of CAGW they were no longer supposed to matter, because ‘models!’
In practice what we have is poor experimental design, leading to problems in results and so to major issues with the claims based on those results.

November 16, 2015 3:16 am

BOM make it very hard to download their historic temperature data. You have to pointy-clicky and type a search to get each and every one. And there’s a secret, ever-changing code number in the dynamic URL constructed for every search. So I have written a script to get the entire lot of them.
Go to http://peacelegacy.org/ and select the Climate Research link in the right sidebar.

Peter Azlac
November 16, 2015 3:47 am

Richard says:
“104 for the whole of Australia? the UK has over 300.
Surely micro climate comes into play –
• Upland regions
• Coastal regions
• Forest
• Urban regions
how can you allow for the above.”
The major factors at play are precipitation (or irrigation) linked to soil types that determine water-holding capacity, hence heat capacity, and the ease with which such water is evaporated to cool the surface air. The evaporation rate is linked to wind speed and atmospheric relative humidity, but also to other factors, including seasonal trends. The only way to follow these effects is to have good min/max temperature, wind speed and evaporation data for surface water – some references:
http://static.msi.umn.edu/rreports/2008/319.pd
http://acacia.ucar.edu/staff/trenbert/trenberth.papers/i1520-0442-012-08-2451.pd
http://www.dca.ufcg.edu.br/mna/Anexo-MNA-modulo03g.pd
http://www.lasg.ac.cn/UpLoadFiles/File/papers/2013/2013-wly-zjy.pd
http://www.newton.dep.anl.gov/askasci/wea00/wea00105.ht
http://hockeyschtick.blogspot.fi/search?updated-max=2014-08-03T12:17:00-07:00&max-results=17&start=8&by-date=fals
These effects have been modeled by hydrologists but apparently not taken up by BOM:
http://www.researchgate.net/publication/248808343_An_infiltration_model_to_predict_suction_changes_in_the_soil_profil
http://biomet.ucdavis.edu/biomet/SoilHeatFlow/SoilHF.ht
The IPCC AGW meme requires the claimed increase in surface temperature due to increased atmospheric carbon dioxide to be substantially amplified by the evaporation of surface water, increasing the GHG effect through the higher IR absorption and heat capacity of water vapour – this is claimed to increase global temperature by up to 10 C, depending on which climate astrologer is making the claim and what they have been smoking when they make it! This claim is, however, easy to demonstrate as false: since 1960, when the CO2 response is supposed to have taken effect, the evaporation of surface water around the globe has been measured in Class A pan evaporation units, which are used in agriculture to measure the soil water deficit and hence the need for irrigation, yet no such effect has been noted unless linked to wind speed. This is well known to BOM, as there was a seminar on the subject in Australia in 2005 at which Roger Gifford of the CRC Greenhouse Accounting unit noted:
“There is no evidence anywhere in the world of large-scale, long-term increases in potential evaporation estimated as “Class A pan evaporation” over the past several decades, despite well-documented global warming.”
http://www.science.org.au/sites/default/files/user-content/nc-ess-pan-evap.pd
Chris Hanley refers to the BOM isotherm map, but a better guide is to look at the Koppen map and the zones in which the Acorn stations are sited. The Koppen land classification system takes into account precipitation and the effects of soils, geology etc. It is a more direct measure of climate change, as it defines to a large degree the boundaries within which plants can thrive. For example, in Europe and N America a 1 C increase in surface temperature will, if soils, precipitation etc. allow, produce a change in the latitude where grains can be grown of around 170 km. During the cold of the Maunder Minimum the grain-growing area moved south, leading to famine throughout Europe that was most intense in Scandinavia, causing a large death rate and migration to the USA. With the Sun becoming quiet we are heading for a similar period, but our “Nero”-like politicians will gather in Paris soon to take steps that will make matters worse.
Many stations that came within the Snowy Mountain irrigation scheme will have shown a cooling trend from the completion of the scheme in 1974, when arid areas around Rutherglen and other places switched to irrigated rice and other crops. Yet BOM and BEST interpret these changes as a “breakpoint” indicating errors that must be corrected by homogenization, or else treat the record as a new site. The homogenisation uses stations in hotter Koppen zones and so gives a false correction – not that one was needed, as an analysis of min and max data would show. The message to take from this is that climate change is neither global nor regional but zonal, along the lines of the Koppen system.

Bob Fernley-Jones
Reply to  Peter Azlac
November 16, 2015 1:01 pm

Peter,
You mention the importance of wind speed for surface air temperatures, but the effect of wind direction can also be huge, especially in the southern States and more so towards the coast. In Melbourne, for instance, it can drop over ten degrees C in minutes with a sudden reversal of wind from north to south. An extreme case was on Black Saturday (bushfires), Feb 2009, when it suddenly dropped ~15 C and, a little later, a further ~10 C.
Others above have mentioned Bill Johnston’s interesting recent study on AWS’ and rainfall. I doubt that the wind direction and force patterns are uniform over time, so wonder if some of the AWS data interpretations might be affected by that consideration.
After all, I seem to recall claims recently of changed wind patterns in the Southern Ocean (“acidification” and ice) and there is convincing evidence that the great Khmer civilization at Angkor collapsed some 500 years ago because of changed monsoonal patterns.
Another thing that bothers me is that rain gauge measurement is not a reliable indicator of soil moisture profile. Steady “soaking” rain is far better absorbed than a heavy brief downpour that runs off. Also, compensation to rain gauge data is needed to correct for wind conditions and I’m not sure if that has been accurately engineered and modelled.

bobfj
Reply to  Peter Azlac
November 16, 2015 2:33 pm

The links in Peter’s comment have somehow been clipped short. e.g. .pd should be .pdf. This can be fixed by copy-pasting the link as is into Google.

November 16, 2015 4:51 am

And of those 112, they are not spread around Oz very representatively either. IMO.

richard verney
November 16, 2015 4:59 am

If one looks at the map, it appears that these stations have poor spatial coverage for Australia as a whole. They appear to be biased towards coastal conditions, and to the South West at that. Does the spatial coverage impact upon the relevance of the data being returned?
We also need to know how well sited these stations are, precisely where they are, and how they have been impacted by urbanisation (I know that 8 are claimed to be so badly impacted that they should be taken out of the data set).
Nick Stokes is an expert on this. Accordingly, I would expect (and indeed like) to see some detailed input from Nick, not just his rather lame comment regarding why temperature data prior to 1910 is not being included.
Come on Nick, let us have the benefit of your expertise in this area.

Reply to  richard verney
November 16, 2015 7:12 pm

“I would expect (and indeed like) to see some detailed input from Nick”
Well, as you noticed, typing N… S….. puts your comment in moderation. I can’t avoid it, so that slows me down.
But the BoM Acorn station catalogue has a page on each station, including photos of siting etc.
The stations have been chosen for quality, including length of record. You can’t have that and uniform coverage as well. But 110 stations for a small continent is reasonable.

Bob Fernley-Jones
Reply to  Nick Stokes
November 16, 2015 8:51 pm

Nick,
Here is a 2013 Google street-shot of one of the ACORN sites that existed from 1910 to end 2014 as amongst the world’s best practice: [image]
And a satellite shot: [image]
Thanks for your humour; it’s good to smile over this nonsense.

bobfj
Reply to  Nick Stokes
November 16, 2015 9:28 pm

BTW Nick,
You also say of these world leading ACORN sites:
“The stations have been chosen for quality, including length of record”.
Did you notice my complaint early in the post about the bias towards shorter-term records? If it is the older data that are in doubt, then some of the >20,000 stations on record should be selected to address that assumed weakness, rather than increasing the sample of the allegedly better-quality recent stuff. [image]

Reply to  Nick Stokes
November 16, 2015 10:35 pm

Bob,
That is the Melbourne Regional Office site. I know it well. But in fairness, it is one of the ones that was excluded from national averages because of its urbanity.

bobfj
Reply to  Nick Stokes
November 17, 2015 1:08 pm

Nick,
Yes, Melbourne is one of eight ACORN “world’s best practice” sites that are DECLARED unfit for purpose in the direct sense WRT “climate disruption”. Indirectly, though, they have an influence in the homogenization of neighbouring ACORN urban sites. And, of course, we get screaming reports whenever there is a very hot day, without any such qualification.
Oh, incidentally, why is Perth not declared UHI affected?
Why is Sydney DECLARED unfit for purpose when there is this wisdom in the official site history you mention? “… indicating that any urban influence on the data was already fully developed by the time ACORN-SAT begins in 1910.”
There is this wisdom for Melbourne: “There were strong rises in minimum temperature through the late 1950s and 1960s, possibly associated with increasing levels of road traffic”. Is it not true that minimum temperatures typically occur overnight, when traffic is of little consideration?
Ah, but we know why Brisbane is fit for purpose: because it has only existed for 66 years in the form of several airport sites.
There are concerns about credibility and diligence at the BoM.

Reply to  Nick Stokes
November 17, 2015 8:35 pm

“Indirectly though they have an influence in homogenization of neighbouring ACORN urban sites. And, of course, we get screaming reports whenever there is a very hot day without any such qualification.”
UHI does not disqualify from use in homogenisation. That is all about using neighbouring data to decide whether some jump is climatic or other change. UHI does not cause or prevent such jumps. In the same way, non-Acorn stations are used. Their records may be short, but they still provide a useful check where they have data.
As you know, the BoM has a large number of AWS around Melbourne, and on a hot day they will all be reported. Melbourne rarely leads. On Black Saturday, for example, Melb was 46.4, Avalon 47.9.
“Oh, incidentally, why is Perth not declared UHI affected?”
Because it never has been. It used to be in the observatory in Kings Park, then at the airport (from 1944, with merging). It is on airport land, but 500 m from the nearest runway, and 1500m from the nearest significant building, and a long way from any roads.

bobfj
Reply to  Nick Stokes
November 17, 2015 11:47 pm

Nick,
“UHI does not disqualify from use in homogenisation. That is all about using neighbouring data to decide whether some jump is climatic or other change. UHI does not cause or prevent such jumps. In the same way, non-Acorn stations are used. Their records may be short, but they still provide a useful check where they have data.”
But; taking one thing at a time; how can that be sensible if (briefly) there are nonsensical step-changes in the ACORN “corrections” as charted here:
http://jo.nova.s3.amazonaws.com/guest/aust/blog/bob-fj/capitals/melb-temperature-adjustments.gif
I could email a cleaner graphic less the pics BTW if you are interested.
How can you have a sharp step-up and then sharply back down again, that being uncharacteristic of urban growth? Dunno when the tarmac was drastically increased, and don’t care, because the sharp step-down is very daft. That’s only a small part of the silliness of the Melbourne RO ACORN “corrections”.
One step at a time, let me take you through it if you are interested.
Cheers, Bob_FJ

Reply to  Nick Stokes
November 18, 2015 1:07 am

Bob,
The picture shows up fine in a separate tab. There is a more detailed analysis here. The big step was in 1964, when the time of obs was changed from midnight to the standard 9am. This affected the minimum, not the max. I don’t know the reason for the change in 1929, but it could well have been another TOBS change.
The adjustments have the effect of reducing the trend.

bobfj
Reply to  Nick Stokes
November 18, 2015 2:19 pm

Nick,
Thanks for the link. I’d not seen that Melbourne article before and it’s certainly interesting.
However, please note that trying to figure out WHY and HOW some of the ACORN data “corrections” are not credible is NOT my focus. Rather than devote a whole article to just one station I prefer to simply show that the ACORN “corrections” for multiple stations are daft, and that thus any claim to be able to properly determine trends is a false claim. This post covers 24 non-credible stations and other articles and submissions to authority (the latter all dismissed/ignored) bring the total so far to forty-four.
BTW, as far as I can see, it is better to write down the max & min readings both on the same day, as claimed by Tom for the practice prior to 1964, rather than the standardised practice after 1964 of recording the minima the day after the maxima (when it is possible the minima might even be warmer than the maxima).
Oh, whilst on Melbourne, did you notice the regimented step-up of about ½ degree mean in the maxima for ~80 (eighty) years? Any thoughts?

Mike T
Reply to  richard verney
November 17, 2015 2:35 pm

Richard, the coverage of stations reflects the settlement pattern of the continent. To this day, the population is concentrated around the coastal fringe. The Bureau, when setting up its upper air network, selected towns near where they wanted the station so that there was housing and amenities for staff. The spacing was never ideal. Cobar, for instance, was selected even though a site further west may have been more representative, but apart from Wilcannia there is no town in the ~450kms between Cobar and Broken Hill. Some sites were selected as upper air stations even though the “town” was little more than a collection of houses and a service station (garage) and the Bureau contingent became one of the largest groups of paid workers in town. Eucla and Oodnadatta are examples. Eucla replaced Forrest, which was little more than a railway siding with a few homes for railway workers.

Reply to  Mike T
November 17, 2015 3:44 pm

It certainly reflects settlement. ACORN looks for long records, and upper air measurement is fairly recent. The main push for early weather monitoring came from rain gauges, which were a big factor in land prices and land development. Since the BoM was formed, stations have tended to be run by a federal employee: post office, lighthouse, military or, more recently, airport. Eucla, for example, was at the Telegraph Office (as was Alice Springs originally, then the PO, then the airport). Oodnadatta’s record is from the airport, which was important in WW2 (starts in 1940).

richard verney
November 16, 2015 5:04 am

Mods,
I have made a very innocuous comment which may have gone into moderation (since it has not been posted). It contains no bad language or accusations, and I can think of no reason why it has gone into moderation. After you have reviewed it, I am sure that you will post it since there is absolutely nothing objectionable in it. When posting it, please explain why it went into moderation so that I can learn from that and avoid similar problems in the future. Many thanks.
/////
If one looks at the map, it appears that these stations have poor spatial coverage for Australia as a whole; they appear to be biased towards coastal conditions, and to the South West at that. Does the spatial coverage impact upon the relevance of the data being returned?
We also need to know how well sited these stations are, precisely where are they and how have they been impacted by urbanisation (I know that 8 are claimed to be so badly impacted that they should be taken out of the data set).
Nick Stokes is an expert on this. Accordingly, I would expect (and indeed like) to see some detailed input from Nick, not just his rather lame comment regarding why temperature data prior to 1910 are not being included.
Come on Nick, let us have the benefit of your expertise in this area.

richard verney
Reply to  richard verney
November 16, 2015 7:46 am

Should have been South East (not South West). It would be well to check what I have typed before posting!

Werner Brozek
November 16, 2015 7:31 am

I have made a very innocuous comment

Do not type Nick’s last name.

richard verney
Reply to  Werner Brozek
November 16, 2015 7:44 am

Thanks Werner, I will remember that for the future.

Gary Pearse
November 16, 2015 12:33 pm

I don’t have a link, but I recall mention here a couple of years ago, perhaps in comments, that two surveyors (I believe) from Britain came to Australia in the latter part of the 19th century with thermometers and took a number of readings (I’d bet they knew how to shelter a thermometer to take readings). The two thermometers are now in a museum in Britain (the British Museum?). I mentioned this to Lord Monckton, as well as there being a number of historic thermometers from earlier times, and suggested it would be a good exercise to calibrate these old instruments against modern ones and see how accurate they are. Instrument makers of the day made all kinds of such things finely (clocks, for example, and scientific apparatus). It is a mistake to think the craftsmanship of centuries ago would be crude. Read about the Cavendish experiment to determine the gravitational constant, published in 1798, on Wikipedia. Scroll down to “The experiment” and see what giants we have to try to compete with!
https://en.wikipedia.org/wiki/Cavendish_experiment

November 16, 2015 2:33 pm

In every country of the world, in every climatological reporting body, university or bureau, the temperatures of the past are corrected lower, and the temperatures of the present are raised in the data sets.
There is nothing Kosher or Halal going on here, but there is a lot of cooking going on, especially in the run-up to Climat Paris 2015.