We, the Australian public, are continually told by the Australian Bureau of Meteorology that global temperatures need to be restricted to within 1.5C above the pre-industrial era. Further the Bureau attributes the ‘dramatically increased rate of observed hot record breaking in recent Australian temperatures’ to human-caused global warming.
But what if at least some of this warming was natural? And what if the remainder could be attributed to how temperatures are now recorded, with probes in electronic weather stations replacing mercury thermometers, and to the subsequent remodelling through the process known as homogenisation?
The Bureau documents, to some extent, the remodelling that goes by the technical name of homogenisation, but it has never made public the extent of the discrepancy between temperatures recorded by mercury thermometers and those from the new electronic automatic weather stations.
Work that I have undertaken with John Abbot shows that even without the industrial revolution there would have been a temperature increase of about 1C through the 20th Century.
The limited parallel data that I secured from the Bureau, following the intervention of Josh Frydenberg back in 2017, shows that the probe within the automatic weather station often records 0.4C warmer than the mercury thermometer for the same weather.
The AAT hearing on Friday is about the need to make the parallel data public, so we can know how much of recent warming can be directly attributed to the change in how temperatures are measured.
And for those wishing to attend the AAT hearing, details are:
APPLICANT: John William Abbot
RESPONDENT: Director of Meteorology
This application has been listed as shown below:
Date: Friday, 3 February 2023
Time: 10:00AM (Qld time)
Location: Please proceed to Level 6 Reception
Address: 295 Ann St BRISBANE QLD 4000
Contact Officer: Jessica S
You may need to register in advance to be allowed into the hearing.
Dr John Abbot is an IPA Senior Fellow.
Dr Jennifer Marohasy will be appearing as an expert witness.
The feature image was taken by Craig Kelly and features in an article in today’s The Daily Telegraph by Clarissa Bye entitled ‘Bit hot and bothered? BoM silent on suspect solar panel shift’. It quotes me querying why the BoM temporarily placed a solar panel near the weather station for Sydney Observatory, specifically when it appeared this weather station was recording a run of record cool days.
The temperature increase has been contributed to by water vapor, which has increased more than is possible from feedback alone. The temperature trend has been down since before the 2016 El Niño. CO2 has no significant effect on climate: energy absorbed by CO2 is redirected, with respect to wavenumber, to replenish energy radiated to space by water vapor. Detailed explanation at https://energyredirect3.blogspot.com
Water vapor is a feedback to other causes of climate change
A warmer troposphere will hold more water vapor.
The CO2 effect on the climate is unknown, but not dangerous
That CO2 effect includes a water vapor positive feedback
The lab spectroscopy shows CO2 above 400ppm is a weak greenhouse gas
Actual climate change as CO2 increased since about 1940 has been harmless
Water vapor – Wikipedia
Gas laws – Wikipedia
Water vapor is not a direct cause of climate change.
For every degree Celsius that Earth’s atmospheric temperature rises, the amount of water vapor in the atmosphere can increase by about 7%, according to the laws of thermodynamics. Some people mistakenly believe water vapor is the main driver of Earth’s current warming.
List of the best climate and energy articles I read each day:
Honest Climate Science and Energy
Part of what you say is not valid. Average global water vapor has been accurately measured using satellite instrumentation and has been increasing substantially faster than is possible from feedback alone; see Sect 8 of http://globalclimatedrivers2.blogspot.com
The temperature of the troposphere determines only its MAXIMUM capacity (100% RH). RH varies tremendously; average RH for the planet runs about 75%
My analyses show that CO2 has no significant effect on climate and might even be negative especially at low mixing ratio like now or even twice now.
Water vapor molecules have been increasing about 7 times faster than CO2 molecules but WV molecule increase is limited. Increased WV has contributed about half a degree to average global temperatures.
The saturated vapor pressure increase vs temperature for water has been determined mostly by measurements. As shown at Sect 6, Fig 1.7 the increase of saturation vapor pressure with temperature varies from about 5.5%/K to 12%/K depending on temperature. The vapor pressure of water in the atmosphere is assumed to vary accordingly.
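The temperature dependence quoted above can be sanity-checked with a standard empirical fit. Here is a minimal Python sketch using the Magnus approximation (an assumption on my part, not necessarily the formula behind the Sect 6, Fig 1.7 curve); it shows the percentage increase per kelvin falling from roughly 11%/K at -40 C to under 6%/K at +30 C, broadly consistent with the range quoted:

```python
import math

def saturation_vp_hpa(t_c):
    """Saturation vapour pressure over water (hPa), Magnus approximation."""
    return 6.1094 * math.exp(17.625 * t_c / (t_c + 243.04))

def pct_increase_per_k(t_c):
    """Percentage increase in saturation vapour pressure for 1 K of warming."""
    return 100.0 * (saturation_vp_hpa(t_c + 0.5) / saturation_vp_hpa(t_c - 0.5) - 1.0)

for t in (-40, -20, 0, 20, 30):
    print(f"{t:+4d} C: {pct_increase_per_k(t):5.2f} %/K")
```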
Keep a clear head on what you read. There is a lot of misinformation out there; some of it being touted as official.
AAT = Administrative Appeals Tribunal
We make up plausible-sounding explanations for the dangerously stupid monopolist decisions the corporate puppets in government announce unto the useless eaters.
I am attending and was told there was no need to register
One way to stop the riff-raff at the door, with the added psychopathic reward of knowing the plebe jerk flew three thousand miles at great expense to get there, all for nothing.
This parallels previously posted findings in the US and in Germany. The switch from mercury to thermistor removed mercury thermal mass, enabling higher transient highs. And there was in Germany NO calibration overlap period. AUS just tried to hide their calibration. Hence this post.
That on top of UHI siting problems that homogenization has been proven not to remove. The AUS poster child is Rutherglen Ag Research Station, which Jen has posted on previously.
Bottom line is that surface temp records are not fit for climate purpose:
And the alternative UAH lower troposphere satellite estimates since 1979 say there is no urgent climate problem.
Spot on. The thermistor or platinum resistor should be ‘embedded’ in a small block of metal to mimic the response time of mercury.
I would like to say ‘any fool knows that’. Unfortunately, I can’t say that because there are many fools that don’t know.
The better thing to do would be to end the data set from one measurement device and start a new data set for the new measurement device.
There is simply no way to derive a time-dependent adjustment factor for the old device to make it match the new device (unless you have a time machine?). Likewise, there is no way to derive a time-dependent adjustment factor for the new device to make it match the old records, unless you know the time-dependent calibration drift of the old device (time machine again?).
The calibration drift is not just probe dependent but overall measurement station device dependent, including microclimate changes at or near the station.
I know there is always a push to develop data sets that consist of long periods of time. Trying to accomplish this using subjective adjustment of measurements is a pipedream that ignores physical reality. It would be much better to just keep the old station in operation and install a new one alongside it, at least if you want pristine data (i.e. not subject to subjective adjustments).
Not sure how they are currently set up, but yep, there definitely needs to be some form of thermal lagging to mimic the mercury bulb. The vaccine / pharmacy cold-storage industry are of course well aware of this. Storage of vaccines is well monitored, and if the temperature goes out of range, many vaccines need to be thrown out, costing big $$$. In the early days of moving from old mercury-in-glass min/max thermometers to electronic monitoring, when the fridge door was opened the air temperature briefly went above 10 deg on the electronic min/max thermometers, so the stock had to be tossed out. Of course a momentary spike in the air temperature had zero impact on the vaccines, which were naturally lagged in fluids/boxes etc. Most such thermometers are now lagged, i.e. the probe sits in a glycerol solution to smooth the curve.
This comes down to a question of pin-electronics and metrology.
A thermistor is a resistive device which has the property where the electrical resistance changes with changing temperature.
Like the mercury bulb, the resistor has mass, and will have a warming slope, but that’s not what I’m here about.
To measure a voltage (the pin-electronics side), we take the incoming analog voltage from the thermistor and the power supply driving it, and feed it to one input of a comparator, because we are going to compare it against a ramping second analog signal. The comparator's other input is connected to a ramp generator: a digital value drives the ramp generator, which converts it to an analog voltage level. When the comparator detects that the two signals are equal, its output triggers circuitry that reads the digital value currently driving the ramp. Because that digital value corresponds to a known analog DC voltage level, we now know the voltage on the thermistor. A lookup table then converts DC voltage to temperature. That is how the measurement works: we measure the voltage out of the thermistor circuit by comparing it to the ramp level.
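The ramp-compare scheme described above can be sketched in a few lines of Python. This is an illustrative model only, not the circuitry of any particular BoM logger; the reference voltage, bit depth and the linear voltage-to-temperature lookup are all assumptions:

```python
def ramp_adc(v_in, v_ref=2.5, bits=12):
    """Single-slope ADC: step a digital counter driving a ramp DAC until the
    comparator sees the ramp reach the input; the count is the conversion."""
    steps = 1 << bits
    for code in range(steps):
        v_ramp = v_ref * code / (steps - 1)   # DAC output for this count
        if v_ramp >= v_in:                    # comparator trips
            return code
    return steps - 1

def code_to_temperature(code, v_ref=2.5, bits=12,
                        scale_c_per_v=100.0, offset_c=-50.0):
    """Illustrative lookup: conversion code back to volts, then (linearly) to deg C."""
    v = v_ref * code / ((1 << bits) - 1)
    return offset_c + scale_c_per_v * v

code = ramp_adc(1.0)   # a hypothetical 1.00 V reading from the thermistor divider
print(code, round(code_to_temperature(code), 2))
```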
This can be noisy; probably the ramp runs several tens or hundreds of times per second, and this data is averaged … perhaps it's not. This is probably programmable, and needs to be investigated. Like all electronics, these circuits can be glitchy and provide some spurious results based upon random thermal noise, heat in the circuitry, magnetic interference, whatever.
Especially when two pieces of measuring equipment don’t always agree, there needs to be further investigation.
OK thanks for that explanation Michael.
But will I need my puffy this morning or not?
I know two sites where the US weather service throws out the low readings and changes them upward. The fools cannot figure out that both sites are at the base of the second-highest point in Minnesota, and cold runs downhill. I grew up in that same area. The fall is two weeks ahead of elsewhere nearby and the spring two weeks behind. Yet the National Weather Service can't be bothered to check either site and see when the leaves fall off the trees and leaf out; simple observation would tell such morons that the site sensors are correct.
Very lovely, maybe many folks didn’t know how an analogue-to-digital converter (ADC) works.
They do now.
You sort of allude to one of the two significant problems when going digital, though both come down to the same thing.
The first noise is a (very) low-frequency one: you need a voltage reference for your ADC.
And because the signals coming from whatever sensor you’re using (Platinum, Silicon diode or thermocouple) are really small, it needs to be a damn good one.
There are good voltage references out there but, in most applications it doesn’t matter if they age, have a temperature co-efficient or just simply ‘drift’
Because in most applications that ‘drift’ shows up as a slowly changing DC offset and as most applications are concerned with AC signals (the transmission of ‘information’ = by definition an AC or moving signal) that DC drift is easily and simply cancelled or ignored.
But the measurement of temperature, especially over long time spans is exactly the same sort of signal as the low frequency noise or drift.
So there's your first ‘noise’: the temperature signal you want is exactly the same sort of signal an electronic circuit would produce anyway as it ages, warms up, cools down or whatever. All electronic circuits are intrinsically thermometers. It's just that in most applications the slowly drifting DC noise is ignored, whereas in a thermometer it's the very signal you actually want.
First noise problem= How to tell the climate signal from the signal the circuit makes.
The second noise problem comes from the Electronic thermometer having a much higher frequency response than the Mercury one.
The electronic one can see and record temperature variations that are moving from minute-to-minute or even faster
A Mercury thermometer would simply ‘glide gracefully’ through those.
e.g. Take 2 boats, a rowing boat and The Queen Mary, out into the North Atlantic, then compare the ride you'd get in each.
So that’s the second noise problem, how to make the ride you get in a rowing boat similar to that you’d get in cruise liner.
Question: What is this ‘Homogenisation’ and how does it work on an individual thermometer?
I’ve tried to explain before, the thermometer needs ‘thermal mass’ or ‘inertia’
In the electronic realm, what you’d implement is a Low Pass Filter
Strictly= an Integrator
To picture that, imagine the Time vs Temp output of the electronic device plotted on a graph.
To do the integration, you need that area under the graph. Simples.
(Plot it on old-school graph-paper with all the lines/squares/graduations and then, count up all the squares under the graph – that becomes your integration)
Is that what Homogenisation is doing?
If not, what they’re doing is shyte and tell them so
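For what it's worth, the Low Pass Filter / integrator idea can be sketched as a first-order (RC-style) digital filter whose time constant damps short spikes the way a mercury bulb's thermal mass would. The 10-second sample interval and 120-second time constant below are illustrative assumptions, not anyone's actual logger settings:

```python
import math

def low_pass(samples, dt_s=10.0, tau_s=120.0):
    """First-order low-pass (discrete RC filter): each output moves toward
    the newest sample by a fraction set by the time constant tau."""
    alpha = 1.0 - math.exp(-dt_s / tau_s)
    out, y = [], samples[0]
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

# A one-sample 15-degree spike is heavily damped, the way a mercury
# bulb would glide through it:
readings = [10.0, 10.0, 25.0, 10.0, 10.0]
print([round(v, 2) for v in low_pass(readings)])
```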
Whilst what you say has truth, the actual noise is likely to be a few microvolts on a scale of millivolts. Secondly, the integration or time constant of a mercury thermometer is easily added to an electronic one. The PROBLEM is the operators don't want to do this, because it is not in their interests to do so; they want the very high or very low numbers. Accuracy or comparability is of the least interest.
Rud, you said:”And there was in Germany NO calibration overlap period.”
There was a 10+ year overlap at 10+ climate reference stations.
Read Berichte des Deutschen Wetterdienstes Bd 253:
Annual means of electronic measurements were 0.1 K lower than glass bulb.
You were correct: maximum and minimum differ more.
I somehow think the BOM won’t give in without a fight. Give ’em heck, Jennifer.
The leftist maxim –
“never admit, never explain, never apologise”
It is a pity I missed this until now. I would have gone in. Hope all went well.
One more example of political leaders, bureaucrats and administrators doing a terrible job.
Is that a mirror in the photo, aimed to reflect sunlight at the Stevenson box?
If so, that's another new low for Climate Howlers. I guess they couldn't back up a car or airplane to aim hot exhaust air at that weather station.
It seems that JoNova, an Australian blogger I read every few days, raised the same question in a post I just read, claiming it was a solar panel in a peculiar place.
Expert BoM excuses about a solar panel leaning on bushes near Sydney’s official thermometer « JoNova (joannenova.com.au)
Questions questions questions
Please see the attached
You’re seeing a screenshot of a data-plot from one of these;
It's installed in my own (epic) design of Stevenson screen, made from what is/was a light-grey plastic 100mm vent cover, as fitted to the external vents of indoor toilets to stop wind, rain and critters from falling in.
As I’ve done it, it gets 3 layers of sun and wind protection but biiiiiiggggggg ventilations.
In turn dangling 6 feet from a washing-line pole above my green grassy lawn, 20 metres from my house.
In turn in splendid isolation, with a 10-acre field growing baby rose bushes on one side and 50+ acres of winter wheat on the other. Wisbech Town is exactly 2 miles due South; the prevailing wind is westerly.
Programmed to take a temperature reading every 4 minutes (The probe, not Wisbech)
Three real questions.
1/ What caused the (downward) temperature (highlighted in yellow) spikes through the night?
Why downward – as we all know, in Australia they would be upward spikes being The Land Down Under as it is 😀
2/ What would a Mercury thermometer have made of that?
3/ How would you process (hodgerize) that data to make it the same as a Mercury thermometer?
Have a play, here’s the data in a TXT file at Dropbox.
Ignore the high bits at each/both end on the record. Those bits are both = records of the temperature inside the pocket of my jeans
edit to PS
The screenshot is a zoom of the interesting bits
The TXT file runs from 10:00UTC on Jan 28 through to 09:12UTC today (3rd Feb)
I hope your probe has a time constant (integration) of about 2 minutes added, that should make the record more reasonable. This is about that of a mercury thermometer of normal size, but is a long time for an electronic circuit.
Yes, and what is a heat spike of less than 2 minutes? Real temperature? noise? A hot dust devil, the reflection off the window of a passing car?
There was a company in Davis, California, that had a super-sensitive thermometer and got the government to provide financial incentives for vineyards to install their equipment. The idea was to measure evapotranspiration, which apparently forms large moist air domes that grow out of the ground; a slight breeze can shear them off, and they rise up and float away, being detected by this equipment as they rise.
The company was trying to model the ground moisture for timing irrigation, … I think it didn’t pan out. I interviewed with them, but my commute was too far.
Your cold spikes are adiabatic cooling.
Just before sunrise, at the very coldest part of the night, the rising sun heats the air high over your land. This air mass forms a warmer dome which rises. This rising dome lowers the air pressure locally below thus lowering the temperature.
PV = nRT
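The cooling implied by the gas law can be put to numbers with the dry adiabatic relation T2 = T1 * (P2/P1)^(R/cp). The 10 hPa pressure dip below is purely illustrative, just to show the size of effect a modest local pressure drop could produce:

```python
def adiabatic_temp(t1_k, p1_hpa, p2_hpa, kappa=0.2857):
    """Dry adiabatic relation T2 = T1 * (P2/P1)**kappa, kappa = R/cp for dry air."""
    return t1_k * (p2_hpa / p1_hpa) ** kappa

# Illustrative: a 10 hPa local pressure dip acting on 283 K (10 C) pre-dawn air
t2 = adiabatic_temp(283.0, 1013.0, 1003.0)
print(f"cooling: {283.0 - t2:.2f} K")
```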
I agree with Michael (below).
To the data: Although every site may give a different T-response especially in the extremes, if you have thermometer data from nearby, try running averages, varying the length of the run until data approximate the thermometer max and min for each day. Although it won’t be exactly the same (or may be an impossible fit), the running average will approximate a time-lagged response for your particular situation and instrument.
All the best,
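The running-average approach suggested above can be sketched as follows. The probe series and the notional mercury maximum of 21.0 are made-up numbers, just to show how widening the window pulls the smoothed maximum down toward what a slower instrument would have read:

```python
def running_mean(xs, window):
    """Centered running mean; window should be odd. Edges use a shorter window."""
    half = window // 2
    out = []
    for i in range(len(xs)):
        chunk = xs[max(0, i - half):i + half + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# Probe series with one brief 23.5-degree spike; suppose the mercury
# thermometer's daily max was 21.0 (hypothetical numbers):
probe = [20.0] * 5 + [23.5] + [20.0] * 5
for w in (1, 3, 5):
    print(w, round(max(running_mean(probe, w)), 2))
```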
Thanks for cross-posting. The hearing today was open to the public for a very short period of time, and then taken back into mediation again, and so I am now unable to comment, again. That was the situation through much of last year.
As a paid-up member of the IPA I suspect it was all a waste of money, time and excitement. However, time will tell and I hope I am wrong.
I reviewed Marohasy’s previous post (https://wattsupwiththat.com/2023/01/29/hyping-maximum-daily-temperatures-part-3/) two days ago (Thursday OZ-time). Paraphrasing from the overview, which is near the end of the comments section:
In reviewing the post, I downloaded the latest site summary metadata for Mildura and parsed all information relating to temperatures into a spreadsheet, then sorted within categories, into date order. I then checked back on ACORN-SAT metadata, which I had previously updated to V2.3, including all adjustments from v1. In addition I re-read Trewin's ACORN-SAT V2BRR-032.pdf report, including his Table 3 on p.20.
There is no mention anywhere of a second dry-bulb probe being installed in May 2000. It appears that when they changed to a 60-litre screen, the probe and thermometers were swapped over. Furthermore, this means that Trewin’s difference in mean Tmax of 0.22 degC was for a comparison in the 230-litre screen. While metadata shows Tmax and Tmin thermometers were removed from the then 60-litre screen on 1 August 2017, it does not mean (nor require) that the thermometers were observed every day at 9am up to that time. Satellite imagery does not suggest two screens were operating between May 2000 and August 2017.
To be clear, I’m not arguing that thermometer data should not be available, I’m just pointing out that the screen was replaced in May 2000 and contrary to Jennifer’s claims, there was no second probe.
Bearing in mind that Google Earth Pro shows the site was considerably disturbed between 2005 and 2012 by new aircraft hangars and the wind-profiler array, what effect does it have on Jennifer's commentary that while the screen size changed, there was no second probe? Also, although Trewin noted a 0.22 degC difference between probe and thermometers before May 2000, there is no evidence that he actually analysed any data; he just presented the difference between two sets of means.
Given all this, I'm still very interested in what tests Jennifer used to determine the statistical significance of the trifling 0.22 degC difference between the probe and thermometers. Otherwise, why does she obfuscate rather than say what she did?
“he just presented the difference between two sets of means”
It seems to me that that is appropriate. 0.22 is the best estimate of the small difference, and a significance test would not change that. Saying that it is not significantly different from zero does not mean that zero would be a better adjustment.
“it does not mean (nor require) that the thermometers were observed every day at 9am up to that time”
The A8 form that Jennifer showed, for 28 Feb 2013, showed, IMO, very explicitly numbers for the reading of a min/max LiG thermometer, with reading and reset at 9am. For some reason, Jennifer insists that these numbers are for a probe, but that makes no sense at all. The LiG max that day was higher than the probe max.
I disagree Nick.
The difference was calculated over 2.5 years of parallel data – 900 or so observations. Trewin would not have calculated that using his fingers and toes. In all likelihood, there would be an overlapping 2.5 yr daily dataset, that is not in the public domain. Jennifer seems to have missed that possibility, she never flagged it at least.
The appropriate test for a 2.5 yr. daily dataset would have been Kolmogorov-Smirnov test for equal distributions; or Anderson-Darling test for equal distributions; or Epps-Singleton test for equal distributions; or 2-sample t-tests adjusting for autocorrelation in residuals by sub-sampling, or re-randomization of data-pairs. There is also the Mann-Whitney test for “equal medians”.
One would have to examine the data to see what was going-on and one way I do that is to examine percentile ranges and differences. I can basically do all initial investigations using PAST from the University of Oslo (free and no strings attached: https://www.nhm.uio.no/english/research/resources/past/).
There is no way known that a difference between sample means of 0.22 over 2.5 years would be significant, except through the inappropriate use of paired t-tests. The problem there is that a ‘third signal’ which is the seasonal cycle is causing correlation between successive data pairs, and thereby (hugely) inflating apparent significance of the test. If you want to play with some numbers, the Bureau has plenty of overlapping paired daily data ….
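The point about autocorrelation inflating a paired t-test can be illustrated with synthetic data. This is not the Bureau's parallel data, just an AR(1) toy example with an assumed true offset of 0.22: the naive t statistic uses the full n = 900, while an effective-sample-size correction based on the lag-1 autocorrelation of the differences shrinks it substantially:

```python
import math, random

random.seed(1)
n = 900
# Hypothetical daily probe-minus-mercury differences: a true offset of 0.22
# plus strongly persistent AR(1) noise (weather-driven errors persist for days)
phi, d, e = 0.9, [], 0.0
for _ in range(n):
    e = phi * e + random.gauss(0.0, 0.3)
    d.append(0.22 + e)

mean_d = sum(d) / n
sd = math.sqrt(sum((x - mean_d) ** 2 for x in d) / (n - 1))
t_naive = mean_d / (sd / math.sqrt(n))        # paired t ignoring autocorrelation

# Lag-1 autocorrelation of the differences; use it to shrink the sample size
r1 = sum((d[i] - mean_d) * (d[i + 1] - mean_d) for i in range(n - 1)) / \
     sum((x - mean_d) ** 2 for x in d)
n_eff = n * (1 - r1) / (1 + r1)
t_adj = mean_d / (sd / math.sqrt(n_eff))      # t using the effective sample size

print(round(t_naive, 1), round(r1, 2), round(n_eff), round(t_adj, 1))
```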
In the absence of Jennifer explaining the test she used, I am of the view that the difference is nothing more than random noise.
You also seem as confused as she is. The Mx under "Muslin" is high by 0.3 degC (check the resets against the 9am DB). Looking at the time-stamps on the right, it seems those data are transcribed from a console in the office, but why?? Also note there is 8/8th cloud and it is raining at 9am. They were also taking 3am obs, so it was a fully manned ship at that time, and it seems from the writing that the shift changed at 1500.
Even though it's been 40 years, give me a weather station and I could still do the met and fill in one of those A6 field-book forms tomorrow!
“There is no way known that a difference between sample means of 0.22 over 2.5 years would be significant”
But what is the significance of "significance"? What are you trying to infer? Not that the instruments are different; we knew that. The only use made of this difference in means is to estimate the adjustment to be applied when comparing one instrument to another. And whatever tests you apply, there is no better estimate than 0.22.