Author: Bill Johnston
“… weather stations were not set-up to measure trend and change, but to monitor local weather …”
Note: [1], [2], etc. refer to the references at the bottom of this page.
As a retired scientist and former weather observer with a keen interest in data, I have been researching Australian weather station datasets for almost two decades. I am disillusioned by the quality of the Australian Bureau of Meteorology (BoM) network; suspicious of the data they rely on to monitor Australia’s climate (the Australian Climate Observations Reference Network – Surface Air Temperature: ACORN-SAT); and mightily concerned about constant adjustments to datasets that universally seem to result in ongoing warming.
The main problem for ACORN-SAT and its precursor high-quality datasets [1, 2] is that weather stations were not set up to measure trend and change, but to monitor local weather, compare climates, and provide day-to-day information relevant to commerce.
While agriculture was an early beneficiary – which crops to sow and the limits to farming, for instance – weather maps and short-term predictions were important to shipping and trade, and later to aircraft flying major air routes. Using data telegraphed in by post and telegraph offices, Australia’s first weather map was published by NSW Astronomer Henry Chamberlain Russell in The Sydney Morning Herald on 5 February 1877.
The bolt-on experiment that became ACORN-SAT commenced around the time the IPCC was preparing the 1990 First Assessment Report. Chapter 8 of the Scientific Assessment lamented the lack of terrestrial temperature data and recommended “setting up a climate change detection panel to coordinate model experiments and data collection efforts” [3]. Consequently, Australian scientists got busy stitching data together and applying adjustments to iron out the resulting kinks – they homogenised the data.
Homogenisation refers to the process of adjusting for non-climate effects; the resulting homogenised data are assumed to reflect only the long-term climate.
All Australian weather station sites have moved and changed in ways that impact data. Consequently, trends in raw data are as fraught as poorly adjusted data. Thus, the overarching question for any long-term temperature record is whether the methods used to adjust for non-climate impacts are appropriate and unbiased. To be appropriate, adjustments must align directly with significant changepoints in the data; to be unbiased, they must be directly proportional to the effect.


So, if, independently of rainfall, mean maximum temperature (Tmax) at Victoria River Downs, Northern Territory (BoM ID 14825) stepped up by 1.12°C in 2013 (t(56) = 6.28, p < 0.001), an unbiased adjustment would result in a sign-reversed, proportionate change of −1.12°C [4]. While missing observations were a problem (particularly in 1973, which was ignored in subsequent analysis), and data before 1993 lacked precision, Victoria River Downs is one of the 112 ACORN-SAT sites used to monitor warming in Australia.
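For readers wanting a concrete picture of the kind of test involved, here is a minimal sketch in base R using synthetic data (not the Victoria River Downs series itself); it estimates a step-change as an indicator term fitted alongside rainfall, broadly in the spirit of the protocol set out in [4]. All numbers are made up for illustration.

set.seed(1)
yr   <- 1965:2022
rain <- rnorm(length(yr), mean = 650, sd = 150)   # hypothetical annual rainfall (mm)
step <- as.numeric(yr >= 2013)                    # 0 before 2013, 1 from 2013 onward
tmax <- 34.0 - 0.002 * rain + 1.12 * step + rnorm(length(yr), sd = 0.45)

fit <- lm(tmax ~ rain + step)
summary(fit)            # the 'step' coefficient estimates the size of the shift
confint(fit)["step", ]  # an unbiased correction would remove exactly that amount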

The overall Tmax raw-data trend of 0.126°C/decade, which was weakly significant (p = 0.039, R2adj = 0.057), was spuriously related to the 2013 step-change, highlighting that unless underlying inhomogeneities are accounted for, naïve approaches to determining trend can be highly misleading.
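The point is easy to demonstrate. The sketch below (again synthetic data, base R) shows that a series consisting of two level segments separated by a single step yields a “significant” naive linear trend, even though neither segment trends.

set.seed(2)
d      <- data.frame(yr = 1965:2022)
d$tmax <- 33.0 + 1.12 * (d$yr >= 2013) + rnorm(nrow(d), sd = 0.5)   # step only, no trend

naive <- lm(tmax ~ yr, data = d)                   # naive trend across the whole series
summary(naive)$coefficients["yr", ]                # apparent "warming" driven by the step

before <- lm(tmax ~ yr, data = subset(d, yr < 2013))   # segment fits: no residual trend
after  <- lm(tmax ~ yr, data = subset(d, yr >= 2013))
summary(before)$coefficients["yr", ]
summary(after)$coefficients["yr", ]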
The ACORN-SAT Catalogue [5] mentions that while the site moved 250 m northwest in August 1987 and an automatic weather station (AWS) was installed on 9 May 1997, surroundings were “watered up until around 2007” – perhaps sporadic watering continued until 2013, or a 60-litre screen was installed or something else. The step-change was detected as highly significant by two entirely different statistical tests and verified using categorical multiple regression. Additional tests confirmed that data consisted of two non-trending segments disrupted by the step-change, with no residual trend attributable to CO2, coal mining, electricity generation or anything else.
Meanwhile, ACORN-SAT V.1 homogenisation (AcV.1, to 2017) adjusted for a change in 1976 that was not detectable or verifiable, and for an alleged move in 1987 that made no difference to the data, leaving the 2013 step-change intact. AcV2.0 (to 2018) ignored the change in 1976, adjusted for the move in 1987 and, for no stated reason, made an additional adjustment in 1996, while also leaving the 2013 step-change intact. AcV2.3 (to 2021) made adjustments in 1968 (screen), 1974 (statistical) and 2007 (vegetation), but still no adjustment in 2013. Similarly, AcV2.4 and 2.5 (to 2024) made adjustments only in 1987 (move) and 1996.
Although the adjustments did not produce significant trends, ACORN-SAT adjustments were inconsistent in their timing and do not align with the step-change in 2013. The changes in the data they aimed to correct could not be detected, while confirmatory and post hoc analysis (Section 4 in [4]) found the changepoints adjusted by ACORN-SAT were inappropriate for the data.
Victoria River Downs is by no means a special case. Inappropriate and disproportionate adjustments are a common feature of the 19 ACORN-SAT sites investigated so far and reported on at www.bomwatch.com.au, and of sites reported earlier at https://joannenova.com.au (Bourke, Port Hedland, Canberra and Sydney Observatory). An additional 66 weather stations that have been used to homogenise ACORN-SAT sites have also been investigated using the same robust protocols as were applied to the data for Victoria River Downs.
Prior to the development of BomWatch protocols [6, 7], statistically sound methods for assessing the quality of data for individual weather stations, and of data that had been homogenised, were seriously lacking, which led to conflicting outcomes [2]. Further, station metadata – the data about the data used to identify changepoints – could not be considered reliable.
For instance, aerial photographs and a file held by the National Archives of Australia (NAA) show that the coordinates of the original met-enclosure at Cairns airport (BoM ID 31011), which was adjacent to the 1939 Aeradio office [8], were incorrectly specified in site-summary metadata. The metadata also ignored that the site moved to a 30 m by 30 m mound near the centre of the airport in 1965 and, due to the building of a new taxiway, to another mound before September 1983. Another site was apparently also established around that time to the northwest, near the location of the current automatic weather station.
Ignoring previous moves, the ACORN-SAT catalogue simply states “the site moved 1.5 km northwest (to the other side of the runway)” in December 1992, which is both wrong and incomplete.
The Garbutt Instrument file at the NAA shows that BoM commenced negotiations with the Royal Australian Air Force (RAAF) to move the site at Townsville airport in 1965, and that observations commenced at the new mounded site in 1970 [9]. Aerial photographs and RAAF airport plans show the site had moved at least three times while on the eastern side of the runway; subsequent to the 1970 move to the western side, it likely moved twice more before finally relocating to the current site in December 1994. Despite station files being in the Bureau’s possession, and other material, including aerial photographs, being available in the public domain, the ACORN-SAT catalogue claims: “There are no documented moves until one of 200 m northeast on 8 December 1994, at which time an automatic weather station was installed”, which their own records would confirm is untrue.
A more subtle example of data corruption is that replacement of 230-litre Stevenson screens with 60-litre screens at individual sites is routinely ignored by site-summary metadata, including for the 66 comparator stations mentioned previously, each of which was studied in-depth. While it appears that temperature extremes have increased across the network over recent decades, much of the effect is due to the staged rollout of smaller screens, which respond more rapidly to changes in outside temperature, not to CO2 or a change in the climate.
Considering that datasets may be affected by multiple issues, development of a robust protocol required an objective, broad-brush approach.
Multiple attributes per year are derived from daily station datasets using the statistical program R, which is much faster than a spreadsheet program such as Excel. These include absolute extremes, data counts, means and medians, standard deviations, counts of observations greater than the 95th and less than the 5th day-of-year percentiles, their Hi/Lo and log10 ratios, and indices of precision based on earlier work by Chris Gillham (www.waclimate.net). Frequency analysis of daily temperature and rainfall observations within classes may provide additional insights.
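As a rough illustration only (not the BomWatch code itself), a function along the following lines derives a handful of per-year attributes from a daily Tmax series in base R. The column names are hypothetical, and whole-of-record percentiles are used here for brevity; the actual protocol uses day-of-year percentiles and additional precision indices.

yearly_attributes <- function(daily) {
  # daily: data.frame with columns date (Date) and tmax (numeric)
  daily$year <- as.integer(format(daily$date, "%Y"))
  p95 <- quantile(daily$tmax, 0.95, na.rm = TRUE)   # whole-of-record percentiles (simplification)
  p05 <- quantile(daily$tmax, 0.05, na.rm = TRUE)
  do.call(rbind, lapply(split(daily, daily$year), function(d) {
    hi <- sum(d$tmax > p95, na.rm = TRUE)            # days above the 95th percentile
    lo <- sum(d$tmax < p05, na.rm = TRUE)            # days below the 5th percentile
    data.frame(
      year   = d$year[1],
      n      = sum(!is.na(d$tmax)),
      mean   = mean(d$tmax, na.rm = TRUE),
      median = median(d$tmax, na.rm = TRUE),
      sd     = sd(d$tmax, na.rm = TRUE),
      max    = max(d$tmax, na.rm = TRUE),
      hi     = hi,
      lo     = lo,
      hi_lo_log10 = log10((hi + 1) / (lo + 1))       # +1 guards against division by zero
    )
  }))
}
# e.g. yrs <- yearly_attributes(daily); plot(yrs$year, yrs$mean, type = "b")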
As a reference frame, the First Law of thermodynamics predicts a linear relationship between mean maximum temperature (Tmax) and annual rainfall, such that the drier it is the warmer it gets. Significance and goodness-of-fit – R2adj (adjusted for the number of terms and datapoints [10]) – provide objective, comparable measures of data-fitness. Should relationships not be significant, or R2adj be less than 0.5 (less than 50% of variance in Tmax explained), something is wrong: either the data are no good (random with respect to rainfall) or, more typically, relationships are contaminated by site moves or changes. For more detail, and what happens next, see [11].
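A minimal sketch of that fitness check in base R, using synthetic yearly means, looks like this; the thresholds (a significant slope and R2adj above 0.5) are read straight from the model summary.

set.seed(3)
annual      <- data.frame(rain = rnorm(60, mean = 650, sd = 150))   # synthetic annual rainfall (mm)
annual$tmax <- 34 - 0.002 * annual$rain + rnorm(60, sd = 0.3)       # synthetic mean Tmax (°C)

fit <- lm(tmax ~ rain, data = annual)
s   <- summary(fit)
s$adj.r.squared                      # want > 0.5 (more than 50% of Tmax variance explained)
s$coefficients["rain", "Pr(>|t|)"]   # want a significant (negative) rainfall slope
# If either test fails, suspect site moves or changes and look for step-changes
# before interpreting any trend.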
Commencing in the 1990s with Neville Nicholls and Simon Torok’s HQ datasets [1], and most recently with Blair Trewin’s ACORN-SAT [12], the aforementioned and more recent www.bomwatch.com.au studies, together with others that have not yet been reported, show unequivocally that BoM’s homogenisation methods create warming trends that are unrelated to the climate.
This is achieved by combinations of:
• Ignoring changes that happened or, alternatively, adjusting for site changes that made no difference to the data. The arbitrary application of changepoints and the lack of replicable, transparent and objective methods are the antithesis of the scientific method.
• Reliance on poor-quality metadata, particularly with regard to the original locations of Stevenson screens and when and where they moved, without independent corroboration using aerial photographs.
• Adjusting for the presumed effect on daily maxima and minima of time-of-observation changes from 3 am to 9 am at airports and lighthouses, rounding differences due to metrication, and the rollout of 60-litre Stevenson screens, using adjustments that are disproportionate to their effect at individual sites.
• Selecting “neighbouring” sites, some of which are more than 1,000 km away, whose first-differenced monthly values are highly correlated with those of the site being homogenised and which therefore likely embed parallel faults. Consequently, their use as reference series may reinforce, rather than unbiasedly adjust, faults in ACORN-SAT.
As the rollout of 60-litre screens is almost complete, and Google Earth Pro shows that transpiring vegetation at many unmanned sites has already been sprayed out, graded around or scalped, there is limited scope for further instrument or methodology adjustments to keep cooling the past and warming current data in order to maintain the claimed trend. The time is therefore near when further adjustments would create grossly implausible data, which is when the whole edifice must collapse. To save reputations, and to cease further damaging the science, the Bureau of Meteorology must abandon the ACORN-SAT project altogether.
Dr Bill Johnston
31 January 2025
References
[1]. Torok, S.J. and Nicholls, N. (1996). A historical annual temperature dataset for Australia. Aust. Met. Mag., 45, 251-260.
[2]. Della-Marta, P., Collins, D. and Braganza, K. (2004). Updating Australia’s high quality annual temperature dataset. Aust. Met. Mag., 53, 15-19.
[3]. https://www.ipcc.ch/site/assets/uploads/2018/03/ipcc_far_wg_I_full_report.pdf
[4]. https://www.bomwatch.com.au/wp-content/uploads/2024/02/VictoriaRiverDowns-16-Feb-2024-1.pdf
[5]. http://www.bom.gov.au/climate/data/acorn-sat/stations/#/14825
[6]. https://www.bomwatch.com.au/wp-content/uploads/2020/08/Are-AWS-any-good_Part1_FINAL-22August_prt.pdf
[7]. https://www.bomwatch.com.au/wp-content/uploads/2020/01/Methods-CaseStudy_-GladstoneRadar.pdf
[8]. https://www.bomwatch.com.au/climate-of-gbr-cairns/
[9]. https://www.bomwatch.com.au/climate-of-gbr-townsville/
[11]. https://www.bomwatch.com.au/wp-content/uploads/2021/02/BOM-Charleville-Paper-FINAL.pdf
[12]. Trewin, Blair (2012). Techniques involved in developing the Australian Climate Observations Reference Network – Surface Air Temperature (ACORN-SAT) dataset. (Technical Report: https://cawcr.gov.au/technical-reports/CTR_049.pdf )
Disclaimer
Unethical or poor-quality scientific practices, including the manipulation of data to support political narratives, undermine trust in science. While we are not accusing the persons mentioned of unethical conduct, we are gravely concerned about their approach to data processing, their use of poor data, and their portrayal of data in their cited and referenceable publications as facts when these are unsubstantiated, statistically questionable or not true. The debate is therefore a scientific one, not a personal one.
Biography
Dr. Bill Johnston is a former senior research scientist with the NSW Department of Natural Resources (abolished in April 2007); which in previous iterations included the Soil Conservation Service of NSW. With colleagues he undertook weather observations for about a decade from August 1971.
Bill’s main fields of interest have been agronomy, soil science, hydrology (catchment processes) and descriptive climatology and he has maintained a keen interest in the history of weather stations and climate data.
Bill gained a Bachelor of Science in Agriculture from the University of New England in 1970, Master of Science from Macquarie University in 1985 and Doctor of Philosophy from the University of Western Sydney in 2002 and he is a member of the Australian Meteorological and Oceanographic Society (AMOS).
Bill receives no grants or financial support or incentives from any source.
Data for sea level via satellite is also grossly manipulated.
Satellites provide low accuracy absolute sea level data while people really need to know relative sea level measured near their oceanside homes and businesses.
Dear Richard,
Sea level estimated from satellite data is modeled relative to the theoretical center of the earth.
Cheers,
Bill
The fact that the corrections and homogenization were done in an unblinded manner makes the results dubious. I was a Psych major, and expectations as to the result can be self-fulfilling.
One of the reasons for switching away from counting scintillations by sight was that the observers knew what the results should be. Geiger counters with mechanical counting gave more stable results.
Even if those doing the corrections had no conscious intent, they knew what the results “should be”.
“They knew what the results “should be”
_________________________________
There’s this old chestnut from 2009
Correcting Ocean Cooling
https://s3.amazonaws.com/jo.nova/guest/aust/bom-audit/johnston-bill/2019/stevenson-screens-audit-6.0.pdf
This is tedious whining about the accuracy of surface weather stations. It’s easy to complain. But that does not correct the numbers, assuming that would even be possible.
Where is the claim that Australian weather station data are inaccurate – correct as it may be – leading to?
That there was no warming in Australia since 1975?
Or that CO2 did not cause any warming there since 1975?
The claim of inaccurate weather station data leads nowhere — just to a dead end.
Inaccurate surface weather stations from poor siting and questionable adjustments have been a problem known since the 1990s.
NASA-GISS made the 1940 to 1975 cooling almost disappear in the 1990s.
NOAA made 1998 the hottest US year by cooling the 1930s
The surface numbers claim Australia has warmed since 1975.
Are there alternative datasets that confirm or refute that claim?
How about UAH?
How about merely asking people who have lived in the same area for many decades if they have noticed warmer winters or warmer summers?
Any other source of information to verify or reject the surface temperature numbers?
And don’t forget that perfect measuring equipment and perfect weather station siting may not fix data accuracy problems.
The people who collect the data and compile the Australia average temperature must be honest. Not biased to show more warming than reality, to better match the warming rate they had been predicting for decades.
The statement about thermodynamics
is partially wrong:
While there is generally a relationship between mean maximum temperature (Tmax) and annual rainfall, it is not always a simple linear relationship; in many cases, the connection is more complex and can vary depending on the specific climate and region, with some areas showing a positive correlation and others a negative one, often influenced by factors like humidity and weather patterns.
TLDR: “Inaccurate surface weather stations from poor siting and questionable adjustments have been a problem known since the 1990s.“
Dear Richard Green,
They claim it is hotter, but it is not.
As for your blooper about the First Law, grab some data and show that some other relationship fits better than linear across multiple datasets. I don’t mean toss on a quadratic or something else, but test for lack of linearity after fitting a linear model.
Otherwise you are invited to read some of the background papers that I referenced.
Yours sincerely,
Dr Bill Johnston
http://www.bomwatch.com.au
You seem to claim that you KNOW the truth about the average temperature of Australia since 1975. If so, that is a lie.
Proving the official data use an inaccurate methodology does NOT mean you KNOW what the right number is.
You have no proof that Australia bypassed the global warming since 1975 unless you have a private weather station network that has better data. If so, please tell us the right average temperature from your private weather station network and explain why your network is better than the official weather station network.
Criticizing the official numbers does not mean you know the correct numbers.
There is generally a positive relationship between mean maximum temperature (Tmax) and annual rainfall, meaning that higher temperatures tend to be associated with increased precipitation. That is a general rule of thumb for our planet.
But if most of the warming was in TMIN, rather than TMAX, the rule of thumb may not be true. And it may be generally true but not for every one of the 195 nations. Different climate zones may experience different dynamics between temperature and precipitation, with some areas seeing a stronger correlation than others. Factors like wind patterns, topography, and ocean currents can also significantly impact precipitation levels, sometimes overriding the temperature-related influence.
Nations experiencing both warming temperatures and reduced rainfall include many regions in the Middle East, North Africa, parts of the southwestern United States, and parts of Australia, with countries like Mali, Chad, Somalia, and parts of Iran being particularly vulnerable due to their arid climates becoming even drier with rising temperatures.
You claim this cannot happen, and I have given you examples of where it IS happening. That makes you wrong.
In addition, the global average absolute humidity has had a flat trend from 2000 to 2020, while the earth has been warming. The AH measurements are not very accurate, but they contradict expectations from the Clausius–Clapeyron relation.
My quote about thermodynamics was from a Google AI question. I have no reason to assume you are 100% right and Google AI is 100% wrong.
You do not have the correct data to refute the official claim that Australia has warmed (by some amount) since 1975.
Dear Richard Green,
There is no such thing as being right or wrong here; you simply have an unsubstantiated difference of opinion, which of course you are welcome to have.
At no point have I claimed to know “the truth” about average temperatures in Australia. However, as much of the current hype concerns maximum temperature extremes, it is opportune to investigate those more thoroughly using objective methods. Victoria River Downs is a convenient series to use because it is an ACORN-SAT site, the time-series is relatively short, and therefore not too difficult to use as an example.
While there are private weather station networks in Australia, as well as industry-run ones, the official network is run by the Bureau of Meteorology.
They do a rotten job, in terms of how they run and maintain their sites, the almost clandestine way they replaced 230-litre Stevenson screens with 60-litre ones, especially since 2000, their use of PVC screens that are even warmer on hot days than 60-litre wooden ones, and the way they handle their data. All these problems can be diagnosed using appropriate protocols, and on our side of the global warming divide, those protocols have been slow to emerge.
Your reply seems to be quite muddled. On the one hand you say “There is generally a positive relationship between mean maximum temperature (Tmax) and annual rainfall, meaning that higher temperatures tend to be associated with increased precipitation. That is a general rule of thumb for our planet.”
However, on the other hand you claim “Nations experiencing both warming temperatures and reduced rainfall include many regions in the Middle East, North Africa, parts of the southwestern United States, and parts of Australia, with countries like Mali, Chad, Somalia, and parts of Iran being particularly vulnerable due to their arid climates becoming even drier with rising temperatures.”
I would encourage you to have a look at my report for Marble Bar, the hottest place in Australia, then I’ll set you a test (https://www.bomwatch.com.au/wp-content/uploads/2022/12/Marble-Bar-back-story-with-line-Nos.pdf).
Yours sincerely,
Dr Bill Johnston
http://www.bomwatch.com.au
Deliberately missing the point. It must be Tuesday.
My point is what YOU missed. The RIGHT average temperature trend since 1975 is NOT known because the official data use an inaccurate methodology. The official Australia trend could show too much warming, too little warming, or be right as a coincidence. Johnston falsely implies he KNOWS the RIGHT average temperature trend, which he does NOT know.
Before one can correct a problem, one first has to admit that the problem exists.
To date, the alarmists still proclaim that their “fixes” are perfect and there are no problems with the data.
MarkW,
Yes, this is what I found when I started to study Australian heatwaves.
Using the most simple analysis I could think of, I looked at heatwaves at many weather stations here. I failed to find evidence that overall, heatwaves were getting hotter. Then I failed to find support for the claim that they were getting longer.
I concluded there was no point in looking at increasingly “sophisticated” types of analysis, because I had no encouragement to do so.
Yet, other people have made a career out of their idea of sophisticated analysis. They might be correct in what they have done, but we will never know, because we do not know the “right answer” for how hot historic heatwaves were. We have to look at what motivated researchers to try to make a story from a monotonous data set. Could it be related to research grants from a global warming bandwagon getting under way?
Searching for an analogy, I imagined people working in a factory making specialty steel alloys. They weigh the various substances that are combined. They do not find this so monotonous that they wander off to do side experiments, by varying the weights of components as if someone decreed that component A might be harmful to people and should be reduced. They do not seem to place emphasis on the knowledge that a change in the recipe can lead to material failure and loss of life.
The simple old method was good enough, why try to be sophisticated?
Geoff S
When you are dealing with one variable, temperature, in a time series, you will not find anything remarkable through more and more complicated and “sophisticated” analyses that simply rehash that single variable.
Only when you have a functional relationship with more variables that determine temperature can a multivariate analysis provide anything significant. Very much like your analogy of an alloy. If temperature is rising, falling, or staying the same, you can smooth, homogenize, filter, etc. that single variable, but all you are going to end up with are some spurious trends.
The variable is useless from the start.
Tmax has a major negative feedback of T^x as temperature goes up. Tmin has a negative feedback of conductive heat exchange with two major heat sinks, the oceans and the land. Yet I never see either of these negative feedback factors mentioned in climate science, only the supposed positive feedback of CO2.
You can certainly analyze Tmax data but since it is based on temperature and temperature is not climate exactly what does the analysis tell you about climate?
Dear Jim,
This is a discussion we have had before.
Cheers,
Bill Johnston
Average is not a statistical word, the mean is its statistical analogue. If you want the average of 10 and 20, averaging is the correct method.
Despite what you think, muse or hypothesise, the definition of average daily temperature is (Tmax + Tmin)/2. That is what it is.
Least squares trend is the accepted definition of warming – the trend coefficient being a rate function (°C or °F per unit time).
All the best,
Bill
Wikipedia: “The arithmetic mean, also known as ‘arithmetic average’, is the sum of the values divided by the number of values.”
As I said, in statistical world numbers is numbers. In statistical world the mean or “arithmetic average” is a statistical DESCRIPTOR of a set of numbers and does not need to have any relationship to the real world.
When you are trying to find an average TEMPERATURE, you are not looking at a “numbers is numbers” set of data. Temperatures exist in the *real* world, not in statistical world.
Temperature is an intensive property. I can take a 1lb ball and a 2lb ball, put them on a scale, and get a value of 3lb. That’s because mass is an extensive property dependent on the amount of material. Thus I can find an “average” weight. I can’t put a cubic foot of air at 10C and a second cubic foot of air at 20C on any measurement device and get a value of 30C. So how do you calculate an average of the two?
“Despite you think, muse or hypothesise, the definition of average daily temperature, is Tmin+Tmax/2, That is what it is.”
Nope. It’s a MID-RANGE temperature. It’s a carryover from our measurement capabilities of a millennium ago. It has no *real* world meaning today. It’s a perfect example of how antiquated climate science is in terms of understanding the real world. Even HVAC engineering started moving 40 years ago from using mid-range temperatures to calculate degree-days to using the integral of the temperature curve, because the mid-range temperature was inadequate and resulted in either under-sized or over-sized HVAC units.
If mid-range temperatures were a usable metric for CLIMATE, then you would be able to differentiate the difference in climates in Las Vegas and Miami using temperature alone. Based on mid-range temperatures their climates are the same! That’s a result of trying to average an intensive property!
Climate “science” should be leading engineering. Instead it is lagging engineering by at least 40 years.
“Least squares trend is the accepted definition of warming”
Only by using the unstated and unjustified assumption that temperature is a proper metric for climate. The mid-range temperature may have been all that was available 100 years ago. The sad thing is that climate science is stuck in that 100-year-old methodology. It would be like physics today not using quantum theory and remaining stuck in pre-Planck theories!
Nothing sensible to see here.
Move on.
Cheers,
Bill
The argumentative fallacy known as Argument by Dismissal.
Tell us EXACTLY how temperature allows differentiating the Las Vegas and Miami climates. If you can’t then you’ve lost the argument and climate science as it stands today is useless.
Richard,
Shown below is a chart of temperatures for Adelaide from 1857 to 1999, which I obtained from the late John Daly’s website “Still Waiting For Greenhouse” available at:
http://www.John-Daly.com. He stated that he obtained temperature data for weather stations from GISS and CRU databases, much before the data was adjusted, homogenized and pasteurized by NASA. Note the cooling trend.
There are a number of temperature charts for Australia which show no warming up to 2002. You should check out the temperature chart for Boda Island. The chart has plots for Tmax, Tmin, and the annual average temperature. It is the only chart that has this type of plot.
I think we spend too much time fussing over temperature, CO2, and climate change. The availability of fresh H2O is much more important.
I live in Burnaby, BC, and for the last several years there has been drought in the north, where the main hydro dams are located. BC Hydro has had to spend 500 million dollars per year to buy electricity to make up the shortfall. In the lower mainland there was no rain in January, which is unusual. We need the rain for the reservoirs in the coastal mountains which supply fresh water.
I guess you keep up with Prof Cliff Mass’ outlook bulletins about the NW weather, Harold?
https://cliffmass.blogspot.com/
Nope. I just watch the Weather Channel on the TV.
You said tedious
The study I did myself last year certainly points to there being issues with putting electronic thermometers into small screens. Because of their high sensitivity, electronic thermometers detect temperature changes more quickly and more efficiently than LIG thermometers would. It’s this increased efficiency that is, in my view, showing warming in the trend that simply is not there.
Agreed!
Yes, one issue I found in my study was when the sun was at a low angle in the sky, as this is when it shines more directly onto the screen, which allows the upright screen to warm up more quickly than its surroundings. This is certainly the case when winds are light. With electronic thermometers being so sensitive, they detect this warming sooner and for longer. Putting them into smaller screens only makes the issue worse.
The screens containing them really need to be as open-plan as possible.
Would this be an issue if all temperature readings had to be recorded / reported in whole units, i.e., no decimals at all, let alone tenths / hundredths / thousandths of a degree?
It could be an issue. It depends on the physical design of the measuring device.
I believe you will find that for many measuring stations the data *is* recorded officially in the units digit.
From the Automated Surface Observation System (ASOS) run by the National Weather Service:
“Once each minute the ACU calculates the 5-minute average ambient temperature and dew point temperature from the 1-minute average observations (provided at least 4 valid 1-minute averages are available). These 5-minute averages are rounded to the nearest degree Fahrenheit, converted to the nearest 0.1 degree Celsius, and reported once each minute as the 5-minute average ambient and dew point temperatures. All mid-point temperature values are rounded up (e.g., +3.5°F rounds up to +4.0°F; -3.5°F rounds up to – 3.0°F; while -3.6 °F rounds to -4.0 °F).” (bolding mine, tpg)
It should be noted that the manual for the ASOS stations gives an uncertainty value for the measurements of +/- 3.6°F with a resolution of 0.1°F. I have yet to figure out what good it does to have an expensive measuring station with a resolution of 0.1°F when its accuracy is only +/- 3.6°F!
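For what it’s worth, the quoted mid-point rule (halves rounded upward, toward positive) differs from the usual round-half-to-even convention; a one-line sketch in R reproduces the quoted examples.

round_midpoint_up <- function(x) floor(x + 0.5)   # halves always round toward +Inf
round_midpoint_up(c(3.5, -3.5, -3.6))             # 4, -3, -4, matching the quoted examples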
Good luck with figuring that out Tim.
We’re not supposed to look under the hood / bonnet.
Too Many incongruities?
“ACORN-SAT”
Not satellite, but “Surface Air Temperature”.
For perfect accuracy the temperature numbers have to follow the words: “Scientists Say”, after the following proper scientific adjustments:
Homogenized
Pasteurized
Filtered
Adjusted
Re-Adjusted
Rounded
Smoothed
Published with Three Decimal Places,
followed by the words:
“It’s Worse Than We Thought”
Alternative to save money:
Ignore weather stations and pull numbers out of a hat
UK MET got busted using imaginary weather stations.
https://wattsupwiththat.com/2024/12/09/massive-cover-up-launched-by-u-k-met-office-to-hide-its-103-non-existent-temperature-measuring-stations/
I am a bit puzzled by the negative response to the comment. This controversy is certainly valid for debate. We are probably all aware of many deficiencies in the surface temperature record which make it impossible to reconstruct a highly accurate, reliable record of global surface temperatures. This article is one of very many that document the biases and unjustified adjustments used in reporting the surface temperatures records in many jurisdictions, presumably because there are incentives for doing so that reward the people reporting. Richard, your comment seems to support the general theme that these records cannot answer the need of a reliable, complete and accurate surface temperature record. Agreed.
One thing that bothers me is that the rationale for changing data is not well defined. One thing I have learned is that different devices, even if calibrated, can give different readings. There are multiple factors that are involved in making measurements. Not the least is the DUT (Device Under Test) or UUT (Unit Under Test). In atmospheric temperature measurement the UUT is never the same and seldom is the microclimate identical.
This ultimately means temperature readings can be different. One can not then just jump to the conclusion that a bias due to error exists in one or the other. They are simply different.
At this point, climate science should decide if the data is fit for purpose and trash if it is not. There is a lot more I could say but long posts are getting tiresome.
Have you ever undertaken routine weather observations Jim?
Cheers,
Bill
Why hasn’t Nick Stokes been here? I am sure he has something important to say.
What is he going to do, model or mangle data like he does on his stupid website, which is like triple crazy? A lot of sites have very local factors affecting temperature, and it is not uncommon for sites a couple of kilometres apart to have very different readings. The ACORN2 set from the BOM is an absolute disaster because of that fact, and you would use that junk set at your own peril.
The Australian Bureau of Meteorology is clearly a corrupt outfit. Fire everyone and start over with people who understand only pure and honest work is acceptable. I don’t understand all the technical stuff they do but I do understand honesty is paramount. I don’t think any actual reading should ever be adjusted rather keep a meticulous history of the readings and the site. There is zero need for those guys to make guesses about whether the reading is acceptable or not. Give us the actual raw reading and the history of the site and we will determine if something is amiss and or why. No site should be replaced. If you suspect the site has become unsuitable build a new device in a suitable site. Then compare the readings from the new site to the old site. In any case I don’t care what your opinion is (BoM) all that matters is accurate truthful readings and a complete history.
‘neighbouring’ = >1000km distant?
Whee!
Yes, Australia is big but …
That’s like Seattle to mid-California roughly, or Vancouver BC to Calgary or Dawson Creek.
Or long stretch across the prairies (Calgary to Winnipeg) and in the Arctic (Yellowknife to Inuvik) and beyond. And Prince Rupert to Anchorage.
I think many airports are more tolerant of reading high because that is more conservative in calculating takeoff performance, for safety.
Dear Rational Keith,
I’m not a pilot, but I understand that conditions on the ground are much less important to jet- and turbine-powered aircraft than to piston-engined planes. Perhaps someone could comment.
Cheers,
Bill
Being a(n ex-)pilot of piston engined planes as well as ‘gas turbine’-driven ones I tend to disagree (strongly)! 🙃
In 1960 I flew F-100 (‘Super Sabre’) fighters in Arizona – and I can assure you that runway temperature meant a lot to takeoff performance.
Useful information. Thanks Hans.
Cheers,
Bill
Hot air is less dense. Density has to do with lift force on the wings. That affects *all* aircraft with wings.
NOAA also alters (and fabricates) temperature data to support the fake climate crisis narrative.
Yeah, there’s your temperature data fraud right there. The blue line is the real temperature profile and the red line is the bastardization of the blue line.
It isn’t clear how the graph was constructed, but I presume the person who made it failed to account for spatial and temporal variance in station distribution. Here is a comparison of global raw (black line) and adjusted GHCN temperature series based on my own analysis:
How do you explain the burgeoning number of sites around the globe with little warming in warm months and substantial warming in winter months?
As people keep investigating the piece parts that make up the GAT, more and more questions are going to be asked about your hockey stick.
There are many complex reasons why nighttime temperatures are warming more than daytime temperatures, such as changes in cloud cover, and local site-level variability is significant. But both daily maximum and minimum temperatures are unquestionably increasing globally:
That’s great, the data show what the data show. Those people have to wrestle with the realities they face.
Alan:
Meteorologically the faster warming of min temps vs max can be explained by looking at how surface temps behave on radiation nights (clear skies, light winds).
A surface inversion forms and, depending on wind speeds, this layer is mixed vertically through a few tens to hundreds of feet.
Thus the cooling is limited to that thickness of the atmosphere.
Max temps very often reach a value that causes convection, and that surface energy is then lifted to thousands of feet.
It is easy to see that this will *spread* the warming through a deep layer and thus mask a steady rise in achieved max temps over the decades.
That’s not raw (unmodified) data. That “data” is a figment of Phil Jones’s imagination and doesn’t represent reality.
The data are GHCN monthly raw, and have nothing whatsoever to do with Phil Jones. They are the values exactly as recorded at the stations.
That’s BS. No unmodified, written, historic temperature record has a “hotter and hotter and hotter” temperature profile like the one you are showing.
See if you can find a “hotter and hotter and hotter” temperature profile in any of these 600 historic temperature charts from around the world:
https://notrickszone.com/600-non-warming-graphs-1/
And then ask yourself, where did this “hotter and hotter and hotter” temperature profile come from if it isn’t represented by the historic temperature records?
Your “raw” data is just another scam among the many scams connected to human-caused climate change.
Where do you get a “hotter and hotter and hotter” temperature profile from data that doesn’t have a “hotter and hotter and hotter” temperature profile?
Answer: You make it up out of whole cloth by adding in bogus sea surface temperature data. Like Phil Jones did to create the first bastardized version of the instrument-era temperature record.
You are living in a False Reality, created by a bunch of Temperature Data Charlatans with a political agenda. You are obviously oblivious to this. You, and a LOT of other people.
Climate Change Propaganda works on many people.
The plurality of those graphs do not include present day temperature change, they are paleoclimate reconstructions of the Holocene. The “blade” of the “hockey stick” comes from the modern period. No one has claimed that the long Holocene “handle” shows a warming trend.
This is a baseless conspiracy theory, and you have no evidence to substantiate it. If all the data is fraudulent, how do you know what the temperature is actually doing?
100%
That is a false argument. Spatial and temporal issues are irrelevant — because it’s only about climate-change — or trend data.
If you ignore spatiotemporal changes across the network you have failed to isolate climate trends. This is basic geospatial analysis and statistical sampling.
Those changes are a NOAA management problem – not a data problem. The solution is to change management – not the data. Data altering creates fake data. Management altering creates better data. Elon will help do this.
The fact that the GHCN surface stations are unequally distributed around the globe is not a data quality issue or a data management problem, and NOAA cannot go back in time to redistribute the US station network even if that were something they wanted or needed to do. Unless you think Elon has a time machine, he isn’t going to alleviate the need for basic skills in geospatial analysis. It’s statistical sampling 101.
Apologizing for bad management is not good science. If the data is bad, then don’t use it. Altering data — like altering a bank account — is fraud.
The data are not being altered, they are being binned before averaging to reduce oversampling. In the graphs I’ve shown above, there is not one single alteration made to any data point. The data are the raw values exactly as recorded at the station.
Have you looked into why the above graph comes about?
This will explain ….
”Until the late 1950s the majority of stations in the U.S. record recorded temperatures in the late afternoon, generally between 5 and 7 PM. However, volunteer temperature observers were also asked to take precipitation measurements from rain gauges, and starting around 1960 the U.S. Weather Service requested that observers start taking their measurements in the morning (between 7 and 9 AM), as that would minimize the amount of evaporation from rain gauges and result in more accurate precipitation measurements. Between 1960 and today, the majority of stations switched from a late afternoon to an early morning observation time, resulting in a systemic change (and resulting bias) in temperature observations.”
If you think that the reasoning is unjustified, I have a suggestion….
That we revert to taking a maximum temperature in the evening (when it is still warm from the afternoon sun in summer).
(NB: the max thermometer is reset, and it resets to the temperature at that time; on some days 5 pm can be close to the time of the max temp.)
Because, of course, should the next day be cooler (a 50% chance), then that max would be recorded for two days running.
Hence why there was an enormous warm bias during that observational practice.
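A toy illustration of the mechanism in base R (made-up numbers, not real observations): a maximum thermometer reset in the late afternoon re-registers the still-warm evening temperature, so a cooler following day can inherit the hot day’s value.

set.seed(4)
true_max <- 30 + rnorm(30, sd = 4)                # hypothetical true daily maxima (°C)
at_reset <- true_max - runif(30, min = 0, max = 2) # temperature still showing at the 5 pm reset
recorded <- pmax(true_max, c(NA, head(at_reset, -1)))  # next day can inherit yesterday's heat
mean(recorded - true_max, na.rm = TRUE)           # average warm bias from the afternoon reset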
I was a NWS COOP station operator for 17 years and understand all that. The only goal of these graphs is to show TREND data, which is independent of when and where data was obtained. Plus, your data does not separate Max/Min data.
That is not a bias. The temperatures are real measurements, taken and recorded. The fact that they are different doesn’t make them wrong or biased in any way. They should be judged fit for purpose or not fit for purpose.
I have never been involved in a measurement regime change where past measured and recorded data was allowed to be changed to allow splicing to the new data. Neither regulatory nor government agencies would allow that.
Tell us a scientific endeavor in physics, chemistry, engineering, medicine, etc. that allows past data to be modified for splicing into more current data. It just isn’t done. How many regime changes has temperature measurements gone through? Unscreened, Cotton, Stevenson, plastic, LIG, MMTC, ASOS, CRN, satellite, buckets, engine coolant tubes, ARGO, site moves, site changes,on and on.
I want to see a study that analyzes all of these in order to determine a correction table for each station, of the kind NIST requires to meet calibration demands. Without this, changes are just being made willy-nilly without any proper scientific analysis being done. This is especially precious when climate science informs us they can detect a one-thousandth of a degree change on a day-to-day basis.
“That is not a bias. The temperatures are real, measurements taken and recorded. The fact that they are different doesn’t make them wrong or biased in anyway.”
No: they (both days’ maxes) have become the (singular) max temp over a 48-hour period and not 24 hours, as the second day’s max is missed due to a reset near the time it was reached.
The value recorded the previous day is recorded for the second day as well.
Why is it that not obvious?
Because of auto-correlation, yesterday’s max is likely to be close to today’s max, and tomorrow’s close to today’s.
If resolution wasn’t being entirely made up to the one thousandths digit but instead kept within the actual measured resolution, there would not be much of a problem. Any difference would fall within the uncertainty interval.
5pm is going to be a bit early for the maximum in summer, and 7am a lot too early for the minimum in the winter. They could have picked worse times, but it would take work.
That’s drawing a bit of a long bow. It only applies if the maximum on a particular day is lower than the 5pm temperature the previous day. That really only applies if a cold change comes through after 5pm.
7am minimum readings have similar problems in winter, especially at higher latitudes.
Dear old cocky,
Tmax, which measures heat advection from the landscape to the air above, usually occurs between 2 and 3 pm. Tmin, which measures outgoing radiation at night plus some other processes (rapid loss of density as the air column heats from above in the morning, which facilitates dew and frost), usually occurs around dawn.
There are exceptions of course, due to cloud, rain, fog, prevailing wind, cool air drainage on still nights for example, but in general terms advection and radiation are the main drivers.
I have found across multiple sites using sampling experiments on daily data, strong relationships between daily Tmin and Tmax (Tmin~Tmax), as factored by IfRain (0, 1).
IfRain (not the amount of rain), which is a surrogate for cloudiness, reduces nighttime radiation, which results in a warm offset between IfRain = 0 and IfRain = 1. I set up these experiments using various R packages, which randomly sampled (without replacement) a percentage of data each year for the length of record available.
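For anyone wanting to try something similar, a bare-bones sketch of such an experiment in base R (synthetic data, not any particular station) would look like this:

set.seed(5)
n      <- 3650                                           # ten notional years of daily data
tmax   <- 25 + 8 * sin(2 * pi * (1:n) / 365) + rnorm(n, sd = 2)
rain01 <- rbinom(n, size = 1, prob = 0.3)                # 1 = rain day (cloudy), 0 = dry
tmin   <- 5 + 0.6 * tmax + 2.5 * rain01 + rnorm(n, sd = 1.5)
daily  <- data.frame(tmax, tmin, ifrain = factor(rain01))

samp <- daily[sample(nrow(daily), size = 0.2 * nrow(daily)), ]  # 20% sample, no replacement
fit  <- lm(tmin ~ tmax * ifrain, data = samp)
summary(fit)   # the ifrain term estimates the warm offset on rain (cloudy) days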
All the best,
Bill
That has certainly been my experience, but longitude within the time zone is a factor as well. For example solar noon is almost an hour later in Menindee than in Coffs Harbour. Daylight savings time comes into play as well, if the readings are taken by the clock.
Yes, some time around sunrise here as well. In higher latitudes, 7am will be before sunrise in winter. For example, sunrise in Chicago today was 6:56 a.m. At the winter solstice, it was 7:15 a.m.
On the other hand, sunset was 4:22 p.m. at the winter solstice, so 5 p.m. probably wasn’t the hottest time of the day.
Dear old cocky,
Observations are done at local time. Max, Min and resets are at 9am regardless of location. As far as I know, AWS also report at local time (old AWS were more haphazard).
Lighthouse keepers and telegraph station folk were paid per observation, and paid more for the 3 am obs, which is when they used to report Max & Min for the day. That morning’s Min rolls forward until 3 am the next day. (The 9 am Max of course is for the previous day – and there is a spot in the A8 field book for that value to appear on the previous day’s page.)
Cheers,
Bill
Dear Bill,
I was fairly certain that Aus observations were 9am local time, which seems to be quite a reasonable way of getting today’s minimum and yesterday’s maximum.
Either US system seems quite pathologically timed.
I wasn’t aware (probably forgot) of the 3am readings. Do you know the rationale for those?
Dear old cocky,
A few years ago I had a conversation with a retired lighthouse keeper. While it is a much longer story, keepers, like jackaroos, were employed “all found” – meaning housing, food etc. was supplied, with a small allowance.
Meteorological observations were paid by the ‘government’ as an additional allowance. Observations were 3-hourly, with a bonus paid for 3 am, and those observations were telegraphed away so they could make weather maps for that day’s paper.
In November 2024, I managed to ‘bump into’ someone who undertook weather observations at a remote telegraph office in Western Australia. It was the same story. She (or he) would go out every day on a 3-hour schedule and was paid a few shillings (or dollars) to do observations. For 24 years, they were paid a little above their whatever-per-day to telegraph the 3 am observations to Perth so they could run the printing presses.
Remember though, that in those days, “all-found” jobs were money in the bank. In fact, when I left high school in 1965, I toyed with the idea of becoming a jackaroo for a bob a day in central Queensland. I am eternally grateful that dream ended … which is entirely another story.
Yours sincerely,
Dr Bill Johnston
http://www.bomwatch.com.au
Thanks, Bill.
There is so much focus here on temperatures that we tend to neglect the other observations. There is much information to be had in synoptic charts.
I’ve long advocated that if you want a global metric then the observations for that metric should all be done at the same time everyplace. E.g. 0000UTC and 1200UTC. Forget Tmax, Tmin, and daily mid-range temps as a global metric. If there is truly a “global” trend in temperature then it should show up in a data set generated from a common observation time for all stations.
It is mostly true that Tmax occurs between 2 and 3 pm. But on hot mid-summer days especially, a temp near that max can persist until 5 pm here in the UK (I have seen/recorded it many times), this from the advection of hotter air over those 2 to 3 hours.
Min temps do indeed, old cocky, though here in the UK they are taken at 0900 GMT.
BUT the point is not just the timing BUT the fact that it was changed.
That is what you see in the trend.
The *bump* in the ’40s is due to the over-recording of hot days followed by cooler days, and THEN the cold bias introduced by switching to a morning reading.
It is the switch over that is seen most markedly.
Had the practice continued, then it would be apples to apples.
When the change was made, it became apples to oranges.
And the bias becomes evident.
Neither time is particularly good, but the switch from a bad time to take readings to a worse time certainly causes problems.
It should be possible to detect the incidence of double-reading hot days by comparing the minima, but detection doesn’t help much with compensation. It should give some idea of the incidence, though.
We don’t have the very long summer days in Aus, and we had enough sense to take readings at 9 am local time like the UK.
This is an excellent point and is a large part of the discrepancy, but in this particular case the person who made the graph made a simple error in averaging that produced incorrect results, similar to this:
https://imgur.com/a/6oQUJRZ
Instead of averaging gridded anomalies (or ensuring continuity in the averaged series), they’ve mashed the individual station records into a simple average. Even unadjusted temperatures in the contiguous US exhibit a positive trend.
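To illustrate the principle with a toy example in base R (synthetic numbers only): when a station with a different baseline climate enters or leaves the record, a simple pooled average acquires an artificial step, while averaging anomalies relative to each station’s own baseline (a stand-in here for proper gridding and a common baseline period) does not. The artificial step can go either way depending on the station mix.

set.seed(6)
yrs  <- 1950:2020
warm <- data.frame(yr = yrs,       stn = "warm", t = 25 + rnorm(length(yrs), sd = 0.3))
cool <- data.frame(yr = 1990:2020, stn = "cool", t = 15 + rnorm(31,          sd = 0.3))  # joins in 1990
obs  <- rbind(warm, cool)

naive <- aggregate(t ~ yr, data = obs, FUN = mean)        # pooled station mean per year
base  <- aggregate(t ~ stn, data = obs, FUN = mean)       # each station's own baseline
obs$anom <- obs$t - base$t[match(obs$stn, base$stn)]
anom  <- aggregate(anom ~ yr, data = obs, FUN = mean)     # anomaly mean per year

mean(naive$t[naive$yr >= 1990]) - mean(naive$t[naive$yr < 1990])       # about -5: artificial step
mean(anom$anom[anom$yr >= 1990]) - mean(anom$anom[anom$yr < 1990])     # about 0: no artificial step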
Ah, the ever wrong Goddard/Heller !
There’s only 1 problem with temperatures used for climate modeling –
PROBITY > PROVENANCE > PROSECUTION
(see, my counting and numbers handling is just the same as the IPCC’s is 🙁 )
The climate alarmists are THRILLED when we argue about the accuracy of surface temperature data and statistics.
Because that avoids the important question:
Why do so many people fear global warming when global warming has been pleasant for the past 50 years … but global cooling would have been unpleasant?
“It is therefore surely time that in order to save reputations, and cease further damaging the science, the Bureau of Meteorology” should be sacked.
Perhaps if the incentives and disincentives in scientific endeavour were more oriented to finding truths than feeding political propaganda and padding resumes and wages, this dishonest academic reporting would diminish.