By Bob Irvine
Conclusions based on large complex datasets can sometimes be simplified and checked by comparing expected results with the actual numbers generated. This is the case with the 58 temperature stations that cover the period 1910 to 2019 in the Australian BOM record.
Andy May was right to be suspicious when the recorded USA warming of 0.25°C per century is changed, using corrections and a gridding algorithm, to 1.5°C per century. It is, however, nearly impossible to prove non-climatic warming due to data manipulation by normal means. An algorithm that increases warming may be legitimate; "time of observation" (TOB) corrections, for example.
A similar thing has happened in Australia.
“ACORN is the Australian Climate Observation Reference Network of 112 weather stations across Australia. The rewritten ACORN 2 dataset of 2018 updated the ACORN 1 dataset released in 2011. It immediately increased Australia’s per decade rate of mean temperature warming by 23%…” (Chris Gilham)
Again, it is right to be suspicious but difficult to prove by simply looking at the overall result. The changes could be legitimate, despite there being virtually no TOB issue in Australia.
EVIDENCE THAT NON-CLIMATIC WARMING HAS BEEN ADDED BY THE ACORNSAT CORRECTIONS
The term “corrections” here refers to all changes to the raw data for any reason. Some of these changes will be legitimate and some will distort the data in an artificial way. What follows is evidence that the artificial distortion of the data is significant and the “correction algorithms” should be revisited.
I have used the following dataset for all calculations and acknowledge Chris Gilham’s effort in compiling it.
My term "magnitude of the corrections" for each station's max, min and mean is derived as follows, using the Albany maximum data at the above link as an example.
Average change per decade: ACORN 2.1 = 0.16°C; raw = 0.08°C.
The "magnitude of the correction" is the difference between the ACORN 2.1 and raw trends for each station (max, min, mean), i.e. 0.16 – 0.08 = 0.08°C in this case.
The "Acorn 2.1 anomaly" in this case is 0.16°C per decade.
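As a minimal sketch of that arithmetic, using the two Albany figures quoted above:

```python
# Albany maximum temperature trends quoted above (degrees C per decade)
acorn_trend = 0.16  # ACORN 2.1 average change per decade
raw_trend = 0.08    # raw average change per decade

# "Magnitude of the correction" = ACORN 2.1 trend minus raw trend
magnitude = acorn_trend - raw_trend
print(magnitude)  # 0.08
```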
The Acorn 2.1 corrections are designed to correct the data for station moves, equipment changes etc. and the final result is expected to represent the real temperature anomalies at a particular station more realistically than the raw data.
While it is expected that the corrections will either warm or cool the data at any particular station, it can also be expected that the "magnitude of the correction" bears no relationship to the final temperature anomaly. The final temperature anomaly as represented by Acorn 2.1 should depend entirely on the real temperature anomalies at the particular station and should therefore be completely unrelated to the "magnitude of the corrections".
For example, suppose two hypothetical stations a few hundred metres apart showed a large difference in raw temperature anomaly due to differing vegetation or equipment. Acorn 2.1 would then adjust one by, say, 1.0°C per decade and the other by, say, 0.03°C per decade so that both finish with the same temperature anomaly, as expected; they are next to each other, after all.
The "magnitudes of the corrections", in this case 1.0°C/decade and 0.03°C/decade, should not correlate with the final temperature as recorded by Acorn 2.1. That is, the final temperature anomalies in this hypothetical case finish up approximately the same while the magnitudes of the two corrections are very different. I hope this is clear.
If we then take the 58 Australian stations that cover the period 1910 to 2019, we should find that the “magnitude of their corrections” and their final “Acorn 2.1 anomaly” have approximately zero correlation if the corrections are not artificially affecting the final Acorn 2.1.
To check this, I have calculated the Pearson Correlation Coefficient (PCC) for these two data sets. (“Magnitude of the 58 corrections” compared to the “Acorn 2.1 anomalies” for the 58 stations).
If this PCC is:
- Negative, then the corrections are adding artificial or spurious cooling to the raw data.
- Approximately zero, then the corrections are not adding artificial cooling or warming and are doing their job as intended.
- Positive, then the corrections are adding artificial or spurious warming to the raw data.
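The calculation itself is straightforward. The sketch below uses six hypothetical stations, not the actual 58-station dataset; the trend values are invented purely to illustrate the mechanics (and are chosen so the correlation comes out strongly positive):

```python
import numpy as np

# Hypothetical per-decade trends (degrees C/decade) for six illustrative
# stations; the real analysis uses the trends of all 58 long-record stations.
raw_trend   = np.array([0.05, 0.08, 0.06, 0.10, 0.04, 0.09])
acorn_trend = np.array([0.15, 0.14, 0.15, 0.12, 0.16, 0.12])

# "Magnitude of the correction" = ACORN 2.1 trend minus raw trend
correction = acorn_trend - raw_trend

# Pearson correlation between correction magnitude and final ACORN anomaly
pcc = np.corrcoef(correction, acorn_trend)[0, 1]
print(round(pcc, 2))
```

A PCC near zero here would say the correction sizes carry no information about the final anomalies; a strongly positive value, as with these invented numbers, is the pattern the main text argues indicates artificial warming.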
JUSTIFICATION FOR THE METHOD USED
The BOM comparison station selection method has a number of reference points and two variables per reference point that they are trying to correlate.
My data has exactly the same structure, and I use the same method they do to correlate my variables.
The BOM’s reference points are the 12 months.
My reference points are the 58 temperature stations.
The BOM’s two variables are the Temp. anomalies at station 1 and station 2 for each particular month.
My two variables are the "Magnitude of the Correction" and the "Acorn-Sat homogenised Temp. anomaly" at each particular station.
A Pearson Correlation Coefficient (PCC) is specifically designed for this application.
The BOM statisticians obviously thought this the best way to approach this issue, as I do.
See also Appendix A for a stress test of the method used.
A PCC only indicates correlation and is not designed to indicate certainty or put a number on probability etc. The significance of a PCC value depends entirely on the intrinsic nature of the datasets being compared.
In their Pairwise Homogenisation Algorithm, AcornSat extensively uses nearby comparison stations to deal with discontinuities in any given station's record. For a comparison station to be considered usable, they compare the monthly temperatures for a 5-year period either side of the discontinuity. If the PCC for these two monthly datasets is greater than 0.5, then that comparison station is considered accurate enough to one tenth of a degree Celsius.
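That screening step can be sketched roughly as follows. The function name, the exact 60-month windows, and the requirement that both windows pass are my assumptions about the described procedure, not BOM code:

```python
import numpy as np

def usable_comparison(candidate, target, break_idx, window=60):
    """Rough sketch of the comparison-station screening described above:
    correlate monthly temperatures for 5 years (60 months) either side of a
    discontinuity at break_idx; accept the candidate station only if the
    Pearson coefficient exceeds 0.5 in both windows. Details are assumed."""
    for sl in (slice(break_idx - window, break_idx),
               slice(break_idx, break_idx + window)):
        r = np.corrcoef(candidate[sl], target[sl])[0, 1]
        if not r > 0.5:
            return False
    return True
```

For instance, a candidate series that tracks the target with a constant offset passes, while an unrelated series fails.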
I have used the AcornSat logic and PCC figure as a guide when comparing the “magnitude of the corrections” with the “Acorn 2.1 anomaly” for the 58 stations.
- If the PCC is greater than 0.5, it is almost certain that the corrections are adding significant non-climatic warming to the raw data.
- If the PCC is between 0.3 and 0.5, it is very likely that the corrections are adding significant non-climatic warming to the raw data.
- If the PCC is between 0.1 and 0.3, it is likely that the corrections are adding some non-climatic warming to the raw data.
- If the PCC is between -0.1 and 0.1, it is likely that the corrections are not adding non-climatic cooling or warming to the raw data; the corrections are doing their job.
- If the PCC is less than -0.1, it is likely that the corrections are adding some non-climatic cooling to the raw data.
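The guide above maps directly to a small lookup function. The wording of each band is paraphrased from the bullets, and the handling of exact boundary values (0.1, 0.3, 0.5) is a choice the original list leaves open:

```python
def interpret_pcc(pcc):
    """Interpretation bands used in this analysis (the author's guide,
    modelled on the ACORN-SAT 0.5 comparison-station threshold).
    Boundary handling at exactly 0.1/0.3/0.5 is an assumption."""
    if pcc > 0.5:
        return "almost certain: significant non-climatic warming added"
    if pcc > 0.3:
        return "very likely: significant non-climatic warming added"
    if pcc > 0.1:
        return "likely: some non-climatic warming added"
    if pcc >= -0.1:
        return "corrections doing their job: no artificial warming or cooling"
    return "likely: some non-climatic cooling added"
```

With the values reported later in this paper, `interpret_pcc(0.56)` falls in the top band and `interpret_pcc(0.25)` in the "some warming" band.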
Figure 1 graphs the 58 Australian stations that cover the period 1910 to 2019. The general upward slope of the data points toward the right of the graph is solid evidence that the Acorn 2.1 corrections are adding non-climatic warming to the record.
Table 1 below shows clearly how the final Acorn 2.1 warming anomaly increases with the magnitude of the corrections. The implication is that the corrections are causing some of that apparent temperature increase.
Table 2, below, gives the Pearson Correlation Coefficient (PCC) between the magnitude of the corrections and the Acorn temperature anomaly for the total dataset. The red line has a lower PCC, implying that when less correction is required the final Acorn 2.1 temperatures contain less non-climatic warming.
According to the precedent set by AcornSat for the use of the Pearson correlation coefficient (PCC) when applied to data of this type, a PCC of 0.56 (magnitude of corrections versus Acorn 2.1 temp. anomaly) as found here indicates that "It is almost certain that the corrections are adding significant non-climatic warming to the raw data".
For some of the stations in the dataset, AcornSat decided that the raw data was relatively accurate and consequently did not need significant homogenisation or correction. These stations are likely to give a more accurate temperature reading and a better idea of Australia’s temperature history over the period covered.
As a check I calculated the same PCC for the 14 stations that only needed minimal correction (Red data in Table 1 and Table 2). A lower PCC of 0.25 still indicates some artificial warming but not nearly as much as for the whole dataset.
The average temperature anomaly per decade for the 14 stations that needed minimal correction is about 0.1°C/decade (Table 1). As calculated, this will include some artificial warming, so the actual average warming for these more accurate stations is likely to fall in the range 0.08°C to 0.10°C per decade.
This method could possibly be used to check the adjustments to the USA datasets. These adjustments are fiercely contested so it would be good to have some objective analysis. Let the cards fall where they will.
APPENDIX A: STRESS TEST OF THE METHOD
This appendix stress tests the method used above by applying it to two extreme homogenisation algorithms: one that we know adds zero artificial warming, and one that we know adds large artificial warming in proportion to the correction. In the first case ("C" in Table 1) we expect the Pearson Correlation Coefficient (PCC) to be zero; in the second ("E" in Table 1) we expect the PCC to be 1.0. If both expectations hold, then the method used, and the conclusions drawn in the main paper above, are likely to be correct.
We use accurate satellite technology to determine the temperature anomalies for all parts of an imaginary small island.
This imaginary, accurate satellite record has been available for the last 100 years and tells us that the temperature of every part of the island has increased at the same rate: 0.1°C per decade over the last century ("B" in Table 1).
We the readers are the only ones aware of the true and accurate temperature rise for all parts of the island over the last 100 years as described.
The inhabitants of the island are unaware of this accurate satellite data set. In an attempt to determine the temperature rise, over the 100 years on the island, these inhabitants set up 10 temperature stations 100 years ago, all equally spaced around the island.
Three of these stations (Stations 1 to 3) were well sited with good equipment that has not needed to be changed or relocated for the full 100 years. They each showed the correct temperature anomaly of 0.1°C per decade.
The other 7 stations (Stations 4 to 10) were subject to various changes related to equipment, vegetation, station moves etc. The raw data from these 7 stations showed anomalies less than the correct 0.1°C per decade. To correct for these inconsistencies, the inhabitants developed two homogenisation algorithms ("C" and "E") that they applied to these 7 variant stations.
To test which of these two algorithms was suitable, the inhabitants applied the test described in the main paper above.
A – Raw Temperature data for each station. (°C per decade).
B – Correct Satellite Data. (°C per decade).
C – Final temperature data after first accurate homogenisation algorithm is applied. It is correct and matches the satellite data. (°C per decade).
D – “Magnitude of the Corrections” as applied to “C”. Derived by subtracting Raw data from “C”. (°C per decade).
E – Final temperature data after using a homogenisation algorithm that adds artificial warming to the record. (°C per decade).
F – "Magnitude of the Corrections" as applied to "E". Derived by subtracting raw data from "E". (°C per decade).
Table 1. Compares the 10 stations with their raw temperature, correct satellite temperature, and two possible final temperatures after the two homogenisation processes are complete. The two “magnitude of the corrections” (“D” and “F”) are also shown and are derived by subtracting the raw temp. from both final homogenised temp. sets (“C” and “E”).
The Pearson Correlation Coefficient (PCC) between “C” and “D” is zero as expected.
The Pearson Correlation Coefficient (PCC) between “E” and “F” is 1.0, also as expected.
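The two appendix calculations can be reproduced with a short script. The raw values for Stations 4 to 10 below are illustrative stand-ins (the appendix specifies only that they sit below 0.1°C per decade), and the flawed algorithm "E" is modelled here as overshooting each correction by 100%, one simple way of adding warming in proportion to the correction. Note that a Pearson coefficient is strictly undefined when one series is constant, as "C" is here; the sketch treats that degenerate case as zero correlation by convention, which matches the expected result:

```python
import numpy as np

def pcc(x, y):
    """Pearson correlation; returns 0.0 by convention if either series is
    constant (the coefficient is strictly undefined in that case)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    if np.ptp(x) == 0.0 or np.ptp(y) == 0.0:
        return 0.0
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc**2).sum() * (yc**2).sum()))

# A: raw trends (degrees C/decade). Stations 1-3 are correct at 0.10; the
# values for Stations 4-10 are illustrative (only "below 0.10" is given).
raw = np.array([0.10, 0.10, 0.10, 0.07, 0.05, 0.08, 0.03, 0.06, 0.04, 0.09])

# C: accurate homogenisation restores every station to the satellite truth.
c = np.full(10, 0.10)
d = c - raw                      # D: magnitude of corrections for "C"

# E: flawed homogenisation that overshoots each correction by 100%,
# adding artificial warming in proportion to the correction's size.
e = raw + 2.0 * (0.10 - raw)
f = e - raw                      # F: magnitude of corrections for "E"

print(pcc(c, d))  # 0.0: correction sizes unrelated to final temperatures
print(pcc(e, f))  # ~1.0: artificial warming scales with the correction
```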
On the imaginary island in this appendix, a homogenisation algorithm that was known to be accurate (“C”) gave a PCC when the “magnitude of its corrections” was compared to its final temperatures, of zero.
A homogenisation algorithm that was known to add artificial warming to the record in proportion to the corrections (“E”) had a similarly calculated PCC of 1.0.
Both these results are consistent and would be expected if the method and results of the main paper above were accurate.
The case for saying that the Australian BOM homogenisation algorithm adds artificial warming to the record is strong and possibly proven.