By Bob Irvine
Conclusions based on large complex datasets can sometimes be simplified and checked by comparing expected results with the actual numbers generated. This is the case with the 58 temperature stations that cover the period 1910 to 2019 in the Australian BOM record.
Andy May was right to be suspicious when the recorded USA warming of 0.25C per century is changed by corrections and a gridding algorithm to 1.5C per century. It is, however, nearly impossible to prove non-climatic warming due to data manipulation by normal means. The fact that an algorithm increases warming may be legitimate; “time of observation” (TOB) corrections, for example, can be.
A similar thing has happened in Australia.
“ACORN is the Australian Climate Observation Reference Network of 112 weather stations across Australia. The rewritten ACORN 2 dataset of 2018 updated the ACORN 1 dataset released in 2011. It immediately increased Australia’s per decade rate of mean temperature warming by 23%…” (Chris Gillham)
Again, it is right to be suspicious but difficult to prove by simply looking at the overall result. The changes could be legitimate, despite there being virtually no TOB issue in Australia.
EVIDENCE THAT NON-CLIMATIC WARMING HAS BEEN ADDED BY THE ACORNSAT CORRECTIONS
The term “corrections” here refers to all changes to the raw data for any reason. Some of these changes will be legitimate and some will distort the data in an artificial way. What follows is evidence that the artificial distortion of the data is significant and the “correction algorithms” should be revisited.
SOME BACKGROUND
I have used the following dataset for all calculations and acknowledge Chris Gillham’s effort in compiling it.
http://www.waclimate.net/acorn2/index.html
My term “magnitude of the corrections” for each station’s max, min and mean is derived as follows, using the Albany maximum data at the above link as an example:
Average change per decade: ACORN 2.1 0.16C / raw 0.08C
The “magnitude of the correction” is the difference between “Acorn Sat 2.1” and “raw” for each station (max, min, mean), i.e. 0.16 – 0.08 = 0.08 in this case.
The “Acorn 2.1 anomaly” in this case is 0.16C.
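The arithmetic is trivial but worth making explicit. A minimal sketch, using the Albany maximum figures quoted above:

```python
# "Magnitude of the correction" for one station, using the Albany maximum
# figures quoted above (trends in °C per decade).
acorn_21_trend = 0.16  # ACORN 2.1 average change per decade
raw_trend = 0.08       # raw average change per decade

# Magnitude of the correction = homogenised trend minus raw trend
magnitude = acorn_21_trend - raw_trend
print(round(magnitude, 2))  # 0.08
```

The same subtraction is repeated for each of the 58 stations, separately for max, min and mean.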
The Acorn 2.1 corrections are designed to correct the data for station moves, equipment changes etc. and the final result is expected to represent the real temperature anomalies at a particular station more realistically than the raw data.
While it is expected that the corrections will either warm or cool the data at any particular station, it can also be expected that the “magnitude of the correction” bears no relationship to the final temperature anomaly. The final temperature anomaly as represented by Acorn 2.1 should depend entirely on the real temperature anomalies at the particular station and, therefore, be completely unrelated to the “magnitude of the corrections”.
For example, suppose two hypothetical stations a few hundred metres apart showed a large difference in raw temperature anomaly due to differing vegetation or equipment etc. Acorn 2.1 would then adjust one by, say, 1.0C per decade and the other by, say, 0.03C per decade, so that both finish with the same temperature anomaly, as expected: they are next to each other, after all.
The “magnitudes of the corrections”, in this case 1.0C/Dec. and 0.03C/Dec., should not correlate with the final temperature as recorded by Acorn 2.1. That is, the final temperature anomalies in this hypothetical case finish up approximately the same while the magnitudes of the two corrections are very different. I hope this is clear.
If we then take the 58 Australian stations that cover the period 1910 to 2019, we should find that the “magnitude of their corrections” and their final “Acorn 2.1 anomaly” have approximately zero correlation if the corrections are not artificially affecting the final Acorn 2.1.
To check this, I have calculated the Pearson Correlation Coefficient (PCC) for these two data sets. (“Magnitude of the 58 corrections” compared to the “Acorn 2.1 anomalies” for the 58 stations).
If this PCC is:
- Negative – then the corrections to the raw data are adding artificial or spurious cooling to the raw data.
- Approximately zero – then the corrections are not adding artificial cooling or warming and are doing their job as intended.
- Positive – then the corrections are adding artificial or spurious warming to the raw data.
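As a sketch of the check, the PCC can be computed with NumPy. The station values below are invented purely for illustration; they are not the real 58-station figures.

```python
import numpy as np

# Hypothetical illustration only -- NOT the real 58-station data.
# Each pair is (magnitude of correction, final ACORN 2.1 anomaly), °C/decade.
magnitudes = np.array([0.02, 0.05, 0.01, 0.08, 0.03])
anomalies  = np.array([0.09, 0.14, 0.10, 0.15, 0.11])

# Pearson correlation coefficient between the two series
pcc = np.corrcoef(magnitudes, anomalies)[0, 1]
print(round(pcc, 2))  # a strongly positive PCC in this made-up example
```

Under the interpretation above, a positive PCC of this kind would suggest the corrections are adding spurious warming; for the real test the two arrays would hold the 58 actual values.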
JUSTIFICATION FOR THE METHOD USED
The BOM comparison station selection method has a number of reference points and two variables per reference point that they are trying to correlate.
My data has exactly the same structure, and I use the same method as they do to correlate my variables.
The BOM’s reference points are the 12 months.
My reference points are the 58 temperature stations.
The BOM’s two variables are the Temp. anomalies at station 1 and station 2 for each particular month.
My two variables are the “Magnitude of the Correction” and the “Acorn-Sat homogenised Temp. anomaly” at each particular station.
A Pearson Correlation Coefficient (PCC) is specifically designed for this application.
The BOM statisticians obviously thought this the best way to approach this issue, as I do.
See also Appendix A for a stress test of the method used.
LEGEND
A PCC only indicates correlation and is not designed to indicate certainty or put a number on probability etc. The significance of a PCC value depends entirely on the intrinsic nature of the datasets being compared.
In their Pairwise Homogenisation Algorithm, AcornSat extensively uses nearby comparison stations to deal with discontinuities in any given station’s record. For a comparison station to be considered usable, the monthly temperatures for a 5-year period either side of the discontinuity are compared. If the PCC for these two monthly datasets is greater than 0.5, then that comparison station is considered accurate enough to one tenth of a degree Celsius.
I have used the AcornSat logic and PCC figure as a guide when comparing the “magnitude of the corrections” with the “Acorn 2.1 anomaly” for the 58 stations.
My legend:
- If the PCC is greater than 0.5 then – It is almost certain that the corrections are adding significant non-climatic warming to the raw data.
- If the PCC is greater than 0.3 and less than 0.5 – then it is very-likely that the corrections are adding significant non-climatic warming to the raw data.
- If the PCC is greater than 0.1 and less than 0.3 – then it is likely that the corrections are adding some non-climatic warming to the raw data.
- If the PCC is between -0.1 and 0.1 then it is likely that the corrections are not adding non-climatic cooling or warming to the raw data. The corrections are doing their job.
- If the PCC is less than -0.1 – then it is likely that the corrections are adding some non-climatic cooling to the raw data.
- Etc.
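The legend above can be sketched as a small helper function. The thresholds are exactly as stated in the text; the wording of the verdicts is condensed.

```python
def verdict(pcc):
    """Map a PCC value to the legend categories stated above (a sketch)."""
    if pcc > 0.5:
        return "almost certain: significant non-climatic warming added"
    if pcc > 0.3:
        return "very likely: significant non-climatic warming added"
    if pcc > 0.1:
        return "likely: some non-climatic warming added"
    if pcc >= -0.1:
        return "likely: no non-climatic warming or cooling added"
    return "likely: some non-climatic cooling added"

# The whole-dataset PCC reported in the conclusion is 0.56:
print(verdict(0.56))  # almost certain: significant non-climatic warming added
```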
RESULTS
Figure 1 graphs the 58 Australian stations that cover the period 1910 to 2019. The general upward slope of the data points, rising to the right of the graph, is solid evidence that the Acorn 2.1 corrections are adding non-climatic warming to the record.

Table 1 below shows clearly how the final Acorn 2.1 warming anomaly increases with the magnitude of the corrections. The implication is that the corrections are causing some of that apparent temperature increase.

Table 2, below, indicates the Pearson Correlation Coefficient (PCC) (magnitude of corrections v Acorn temp. anomaly) for the total dataset. The red line has a lower PCC and implies that when less correction is required the final Acorn 2.1 temperatures contain less non-climatic warming.

CONCLUSION
According to the precedent set by AcornSat for the use of the Pearson correlation coefficient (PCC) when applied to data of this type, a PCC of 0.56 (magnitude of corrections versus Acorn 2.1 temp. anomaly) as found here indicates that “It is almost certain that the corrections are adding significant non-climatic warming to the raw data”.
For some of the stations in the dataset, AcornSat decided that the raw data was relatively accurate and consequently did not need significant homogenisation or correction. These stations are likely to give a more accurate temperature reading and a better idea of Australia’s temperature history over the period covered.
As a check I calculated the same PCC for the 14 stations that only needed minimal correction (Red data in Table 1 and Table 2). A lower PCC of 0.25 still indicates some artificial warming but not nearly as much as for the whole dataset.
The average temperature anomaly per decade for the 14 stations that needed minimal correction is about 0.1°C/Decade (Table 1). As calculated, this will include some artificial warming, so the actual average warming for these more accurate stations is likely to fall in the range 0.08°C to 0.10°C per decade.
This method could possibly be used to check the adjustments to the USA datasets. These adjustments are fiercely contested so it would be good to have some objective analysis. Let the cards fall where they will.
APPENDIX “A”
This appendix seeks to stress test the method used above. It does this by applying the method to two extreme homogenisation algorithms: one that we know adds zero artificial warming, and one that we know adds large artificial warming in proportion to the correction. In the first instance (“C” in Table 1) we expect the Pearson Correlation Coefficient (PCC) to be zero. In the second (“E” in Table 1) we expect the PCC to be 1.0. If this is true in both cases, then the method used, and the conclusions drawn in the main paper above, are likely to be correct.
THE TEST
We use accurate satellite technology to determine the temperature anomalies for all parts of an imaginary small island.
This imaginary accurate satellite data has been available for the last 100 years and tells us that the temperature of all parts of this island has increased at the same rate: 0.1C per decade over the last century (“B” in Table 1).
We the readers are the only ones aware of the true and accurate temperature rise for all parts of the island over the last 100 years as described.
The inhabitants of the island are unaware of this accurate satellite dataset. In an attempt to determine the temperature rise over the 100 years, the inhabitants set up 10 temperature stations 100 years ago, all equally spaced around the island.
Three of these stations (Station # 1 to 3) were well sited with good equipment that has not needed to be changed or relocated for any reason for the 100 years. They each showed a correct temperature anomaly of 0.1C per decade.
The other 7 stations (Station # 4 to 10) were subject to various changes related to equipment, vegetation, station moves etc. The raw data from all 7 of these stations showed anomalies less than the correct 0.1C per decade. To correct for these inconsistencies the inhabitants developed two homogenisation algorithms (“C” and “E”) that they then applied to these 7 variant stations.
To test which of these two algorithms was suitable, the inhabitants applied the test described in the main paper above.
| STATION # | A | B | C | D | E | F |
|-----------|------|-----|-----|------|------|-------|
| 1 | 0.1 | 0.1 | 0.1 | 0 | 0.1 | 0 |
| 2 | 0.1 | 0.1 | 0.1 | 0 | 0.1 | 0 |
| 3 | 0.1 | 0.1 | 0.1 | 0 | 0.1 | 0 |
| 4 | 0.09 | 0.1 | 0.1 | 0.01 | 0.11 | 0.02 |
| 5 | 0.08 | 0.1 | 0.1 | 0.02 | 0.12 | 0.04 |
| 6 | 0.07 | 0.1 | 0.1 | 0.03 | 0.13 | 0.06 |
| 7 | 0.06 | 0.1 | 0.1 | 0.04 | 0.14 | 0.08 |
| 8 | 0.05 | 0.1 | 0.1 | 0.05 | 0.15 | 0.1 |
| 9 | 0.04 | 0.1 | 0.1 | 0.06 | 0.16 | 0.12 |
| 10 | 0.03 | 0.1 | 0.1 | 0.07 | 0.17 | 0.14 |
| Mean | 0.072 | 0.1 | 0.1 | 0.028 | 0.128 | 0.056 |
Legend:
A – Raw Temperature data for each station. (°C per decade).
B – Correct Satellite Data. (°C per decade).
C – Final temperature data after first accurate homogenisation algorithm is applied. It is correct and matches the satellite data. (°C per decade).
D – “Magnitude of the Corrections” as applied to “C”. Derived by subtracting Raw data from “C”. (°C per decade).
E – Final temperature data after using a homogenisation algorithm that adds artificial warming to the record. (°C per decade).
F – “Magnitude of the Corrections” as applied to “E”. Derived by subtracting raw data from “E”. (°C per decade).
Table 1. Compares the 10 stations with their raw temperature, correct satellite temperature, and two possible final temperatures after the two homogenisation processes are complete. The two “magnitude of the corrections” (“D” and “F”) are also shown and are derived by subtracting the raw temp. from both final homogenised temp. sets (“C” and “E”).
RESULT
The Pearson Correlation Coefficient (PCC) between “C” and “D” is zero, as expected. (Strictly, because “C” is constant its variance is zero and the PCC is formally undefined; the substantive point is that there is no linear relationship between the two columns.)
The Pearson Correlation Coefficient (PCC) between “E” and “F” is 1.0, also as expected.
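The appendix result can be reproduced directly from the table columns. One caveat worth noting: because column “C” is constant, its variance is zero and the PCC between “C” and “D” is formally undefined (NumPy returns nan) rather than exactly zero; there is, as intended, no linear relationship.

```python
import numpy as np

# Columns from the appendix table (°C per decade)
raw = np.array([0.1, 0.1, 0.1, 0.09, 0.08, 0.07, 0.06, 0.05, 0.04, 0.03])  # A
c = np.full(10, 0.1)   # C: accurate homogenisation (matches the satellite data)
d = c - raw            # D: magnitude of corrections applied to reach "C"
e = np.array([0.1, 0.1, 0.1, 0.11, 0.12, 0.13, 0.14, 0.15, 0.16, 0.17])    # E
f = e - raw            # F: magnitude of corrections applied to reach "E"

# E vs F: artificial warming proportional to the correction -> perfect correlation
print(round(np.corrcoef(e, f)[0, 1], 6))  # 1.0

# C vs D: "C" is constant, so the PCC is formally undefined (nan);
# no linear relationship exists, which the appendix reports as "zero".
print(np.corrcoef(c, d)[0, 1])  # nan
```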
CONCLUSION
On the imaginary island in this appendix, a homogenisation algorithm that was known to be accurate (“C”) gave a PCC when the “magnitude of its corrections” was compared to its final temperatures, of zero.
A homogenisation algorithm that was known to add artificial warming to the record in proportion to the corrections (“E”) had a similarly calculated PCC of 1.0.
Both these results are consistent and would be expected if the method and results of the main paper above were accurate.
The case for saying that the Australian BOM homogenisation algorithm adds artificial warming to the record is strong and possibly proven.
Bob Irvine … an interesting analysis and it’s fantastic that my site is serving its purpose partly to inform and partly to provide all the source data needed by other researchers to figure out the voodoo of Australia’s temperature trends.
I will note that you probably visited the site and used its data several weeks ago, as I’ve just updated it from top to bottom so that it primarily deals with ACORN 2.2 data, ACORN 2.2 having been silently introduced by the BoM in December to replace the previous ACORN 2.1 dataset.
25 weather stations had their ACORN 2.1 temperatures adjusted by ACORN 2.2.
The BoM states that the changes have made no difference to Australia’s long-term temperature trends, which is sort of true for anomalies based on 1961-90. Remarkably, that 30 year climate period had no change to its mean absolute temperature.
However, the ACORN 2.2 adjustments did make a difference if you compare absolute temperatures of 1910-1919 (the first decade) with 2010-2019 (the most recent decade), with maxima warming 0.06C and minima warming 0.11C – i.e. ACORN 2.2 mean temperatures are 0.085C warmer than ACORN 2.1 when comparing the first and most recent decades. Similar cooling occurs in ACORN 2.2 till 1939.
The warming occurs because of how the adjusted blocks of temperature are distributed through the 1910-2019 timescale.
An example is Scone in New South Wales where ACORN 2.2 warms the maximum temperature prior to 2016 by +0.29C, possibly due to surrounding vegetation growth and/or urban development. However, there is no warming prior to 1993.
i.e. in this case ACORN 2.2 warmed historic temperatures but only back to 1993 so modern temperatures warm, older temperatures don’t and climate change is happening. As a result, ACORN 2.2 produces 0.18C more warming at Scone than ACORN 2.1 since the station’s first decade of observations in 1965-1974.
From 1910 to 2020, I calculate ACORN 2.2 warms the 58 long-term ACORN stations (those open in 1910) max by 0.089C per decade, while the RAW warming is 0.072C per decade. Minima warm 0.117C per decade in ACORN 2.2 and 0.058C per decade in RAW.
You can average those for a rough ‘n ready mean temperature increase of 0.103C per decade in ACORN 2.2 and 0.065C per decade in RAW at the 58 long-term stations that are essentially the backbone of Australia’s official historic temperature history.
http://www.waclimate.net/acorn2/index.html is the only place on the planet where ACORN 2.2 is analysed and you can download about 240 different Excels with min and max details for all 112 ACORN stations. Apart from tabulated monthly, annual and day of week average calculations, the spreadsheets include every daily temperature since 1910 for the datasets of RAW, ACORN 1, ACORN 2, ACORN 2.1 and ACORN 2.2.
ACORN 2.2 extends the ACORN timescale from 1910-2019 to 1910-2020.
Bob, if you’re bored you might want to revisit my site for a similar analysis of ACORN 2.2 temperatures instead of ACORN 2.1. Anybody else who understands words and numbers, and who’s interested in Australian temperatures, is also encouraged to visit and start analysing if that’s your thing.
Since pre-1910 temperatures have been discussed in this message thread, you might also want to visit my recent four page series questioning whether Australia’s 2050 net zero pledge at Glasgow was worth it … http://www.waclimate.net/australia-net-zero.html
The last of the four pages is the most comprehensive analysis you’ll find of pre-1910 Australian temperatures (largely based on 225 weather stations instead of ACORN’s 104 non-urban stations), but all four pages contain data you probably haven’t previously seen.
If of interest, all Australia’s unadjusted temps from the mid 1800s to 1950 suggest a mean temperature increase slightly less than 0.6C compared to 2000-2021. These haven’t been adjusted or “corrected” for factors such as 60% of all pre-metric Fahrenheit observations being .0 rounded, one second observations by AWS temperature probes, large v small shelters, a curious shift since 2013, UHI, airport heat, the dissipation of smog, etc.
This work by Chris Gillham is fundamentally important.
Critics might say that it is unpublished, not peer reviewed, the author is not a climate change scientist, and so on as is the current trendy way to dismiss.
You cannot dismiss this work, because it is no more than a re-format of existing official BOM numbers. There is no subjective input by Chris.
….
Why is it important? Because it demonstrates that the several BOM data adjustments versions are unfit for serious use, because a person who is not a climate scientist can show that the adjusted numbers are not good enough for a number of given reasons. Particularly, they are not good enough because they are not absolute. They are variable at the whim of BOM, who use them to promote conclusions that are not supported by more rigorous numbers.
Data modified to allow selective, subjective interpretation is not acceptable in the world of proper science.
(Just imagine the uproar if a geologist altered assay values at whim, when promoting a new mine.) Geoff S
Chris, your data set is an important resource.
Geoff’s quote;
” You cannot dismiss this work, because it is no more than a re-format of existing official BOM numbers. There is no subjective input by Chris.”
nails it.
We all owe you a vote of thanks.
In other words, ad hominem attacks.
Wouldn’t it be fair to complain about public servants, now called employees of government departments, paid to provide services to taxpayers who effectively pay for their remuneration and operating assets, being guilty of deceptive behaviour?
By the way, public service employees pay taxes but not real taxes, what they “pay” is a part return of private sector tax revenue paid to them back into the public purse.
And with our elected representatives unwilling to stop them, or more accurately the elected representatives who would like to have BoM independently audited lacking sufficient numbers to arrange for an independent audit report.
Example, Doctor Jennifer Marohasy and colleagues submission/s to the Minister responsible, the Minister advising the Prime Minister and Cabinet colleagues, Prime Minister Abbott recommending that an independent audit be conducted at the BoM and not being supported by a majority in his Cabinet. And noting that in 2015 he was replaced by Prime Minister Turnbull who apparently continued to ignore the issues. The “Turnbull Government” as it was unofficially called by the PM and factional colleagues donated $440 million to a private organisation for GBR research and public service advice to pay the money in stages was ignored.
This evening Professor Peter Ridd was interviewed on a Sky News programme about the latest government decision to provide new funding for Great Barrier Reef research and care, $1 billion over 9 years, and when asked about why this money was needed Professor Ridd replied mainly to appease the UNESCO world heritage sites officials and discourage them from producing a report claiming that the GBR was in danger and impacting adversely on reef tourism in Queensland.
And therein lies the heart of the climate hoax modus operandi, I would call it blackmail.