Bureau Capitulates: But Overseas Model Unlikely to Solve All Temperature Measurement Issues

From Jennifer Marohasy’s Blog

Jennifer Marohasy

It has taken only ten years; that is how long a few of us have been detailing major problems with how the Australian Bureau of Meteorology measures daily temperatures. Now, I’m informed, the Bureau are ditching the current system and looking to adopt an overseas model that it claims will be more reliable.

There will be no media release.

There was no media release when the Bureau ditched its rainfall forecasting system (POAMA) once described as state of the art, and quietly adopted ACCESS-S1 based on the UK Met Office GloSea5-GC2. (As though the British are any better at accurate rainfall and snowfall forecasts.)

Until yesterday the Bureau had claimed that one-second spot temperature readings from its custom-designed resistance probes did not need to be numerically averaged; numerical averaging is something overseas bureaus routinely do in an attempt to achieve consistency with measurements from the more inert traditional mercury thermometers.

Consistency across long temperature series is, of course, critical to accurately assessing climate variability and change.

The Australian Bureau has long claimed numerical averaging is not necessary because its ‘thick’ probe design exactly mimicked a mercury thermometer.

Then this design was phased out and replaced with the ‘slimline’. Still no inter-comparison studies.

I welcome the switch to ‘the overseas model’ if this means that the Bureau will begin numerical averaging of spot readings from its resistance probes in accordance with World Meteorological Organisation recommendations.
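For readers unfamiliar with what numerical averaging involves, here is a minimal sketch in Python. It assumes one-second samples and a one-minute averaging window, the figure commonly cited from WMO guidance for air temperature; the Bureau’s actual sampling scheme, and whatever averaging the new model will use, have not been published in this detail.

    # Sketch: averaging one-second spot readings into one-minute means,
    # the kind of smoothing used overseas so that fast electronic probes
    # better match the sluggish response of mercury thermometers.
    # The 1 Hz sampling and 60-second window are illustrative assumptions.
    def one_minute_means(samples_1hz):
        """Average consecutive 60-sample blocks of 1 Hz readings."""
        window = 60
        return [
            sum(samples_1hz[i:i + window]) / window
            for i in range(0, len(samples_1hz) - window + 1, window)
        ]

    # Example: a brief warm eddy passes the sensor during one minute.
    readings = [20.0] * 30 + [21.2] * 5 + [20.0] * 25
    print(round(one_minute_means(readings)[0], 2))  # 20.1, whereas a
    # one-second spot reading could instead have caught 21.2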

But the problem of reliable temperature measurements doesn’t begin or end with numerical averaging.

The Bureau, and the Met Office in the UK, have been tinkering with how they measure temperatures since the transition from mercury thermometers to resistance probes began in the 1990s: not only with how they average (or not), but also with probe design and power supply.

It is important to understand that resistance probes hooked up to data-loggers measure temperature as a change in electrical resistance across a piece of platinum. And, this is the important bit, the voltage delivered to the probe is critical for accurate temperature measurement. Not just in Australia, but around the world. And there are no standards.
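To illustrate why the excitation matters, here is a small Python sketch of a deliberately naive readout: a PT100 probe in a simple voltage divider, with the logger inferring resistance while assuming the supply sits exactly at its nominal value. The divider circuit, resistor value and one-percent supply drift are hypothetical illustrations, not any bureau’s actual design; real loggers use bridge or constant-current circuits precisely to reduce this sensitivity, as discussed in the comments below.

    # Sketch: how a supply-voltage error corrupts a naive PT100 readout.
    # The divider circuit and the 1% supply drift are hypothetical.
    R0, A, B = 100.0, 3.9083e-3, -5.775e-7   # IEC 60751 PT100 constants

    def pt100_resistance(t_c):
        """Callendar-Van Dusen resistance (ohms) for t_c >= 0 degC."""
        return R0 * (1 + A * t_c + B * t_c ** 2)

    def pt100_temperature(r):
        """Invert the quadratic for temperatures >= 0 degC."""
        return (-A + (A ** 2 - 4 * B * (1 - r / R0)) ** 0.5) / (2 * B)

    R_REF, V_NOMINAL = 100.0, 1.0    # series resistor; supply the logger assumes
    r_true = pt100_resistance(20.0)  # the probe really sits at 20 degC

    for v_actual in (1.00, 1.01):    # supply at nominal, then drifted +1%
        v_out = v_actual * r_true / (r_true + R_REF)  # voltage actually measured
        r_est = R_REF * v_out / (V_NOMINAL - v_out)   # resistance the logger infers
        print(f"supply {v_actual:.2f} V -> {pt100_temperature(r_est):.2f} degC")
    # Prints ~20.00 degC, then ~25.84 degC: a one-percent supply error
    # masquerades as several degrees of warming in this naive design.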

When using a traditional mercury thermometer, temperature is read from a scale along a glass tube that registers the thermal expansion of the mercury. The mercury thermometer was once the world standard.

The new, automated, and potentially more precise method of measuring temperatures via platinum resistance is reliable in controlled environments; satellites that are measuring temperatures at different depths within the atmosphere use these resistance probes. But it gets much more complicated when trying to measure temperatures on Earth, especially at busy places like airports, which have become a primary site for the automated electronic weather systems using platinum resistance probes from which global average temperatures are now derived.

At airports, the electrical system relied upon to measure temperatures very precisely must be shielded from other electrical systems, including radar, and even the radio chatter between a pilot wanting to land his jumbo and the control tower.

The electronics now used to measure climate change are not only susceptible to electrical interference at these airports, but also to changes in voltage that can be caused by something as simple as turning runway lights on and off at dusk and dawn.

To know how reliable the new system is, we need the parallel data, not just for Australia but for overseas airports including Heathrow and Cochin in India, the world’s first airport fully powered by solar energy.

To know that warming globally has not, at least in part, been caused by a move to resistance probes, we need to see the inter-comparison data showing the equivalent temperature measurements from mercury thermometers at the same place and on the same day.

I’m reliably informed by a past Bureau employee that upgrading power supplies in 2012 caused a 0.3-to-0.5-degree Celsius increase across about 30 percent of the Australian network. (That would get us some way to the 1.5 degree Celsius tipping point, even if we closed down every coal-fired power station.)

Perth-based researcher Chris Gillham documented this uptick in Australian temperatures in correspondence to me last October, as an abrupt change in the difference between the Bureau’s monthly mean temperatures as reported from ACORN-SAT and the satellite data for Australia as measured by the University of Alabama in Huntsville.

‘Australia UAH’ is the Australian component of the University of Alabama satellite monitoring. ACORN 2.1 is the official homogenised/remodelled Australian temperature series that the Bureau uses for reporting climate change. More information on the extent to which electronic temperature measuring systems can cause discontinuities in temperature series can be found in an important report by Chris Gillham entitled ‘Have automatic weather stations corrupted Australia’s temperature record’.

The step-up in warming in the official data for the entire Australian continent is also noted in peer-reviewed publications by climate scientists including Sophie Lewis and David Karoly. I have written to Sophie Lewis about the problems with relying on Bureau data. But instead of attributing the change to equipment and voltage, Lewis, Karoly and other climate scientists ascribe it to anthropogenic greenhouse warming.

Because university climate scientists ignore my correspondence, and rely entirely on advice from the Bureau’s current management, they could not know otherwise. The Bureau’s current management refuses to report this change, documented by its technicians and communicated to me unofficially by retired former managers.

A relevant question: why did the Bureau’s Chief Executive Andrew Johnson not report the 0.3-to-0.5-degree Celsius increase across about 30 percent of the Australian network (caused entirely by a change to the power supply, not air temperatures) in its 2013-14 Bureau of Meteorology Annual Report to Federal Parliament?

In that same report Johnson does comment on the infrastructure upgrades, yet without mentioning the artificial warming they introduced into the official temperature data.

Meanwhile, the Bureau’s management, including Johnson, continues to lament the need for all Australians to work towards keeping temperatures below a 1.5 degree Celsius tipping point. Just today we are told to expect another hike in the price of electricity because of the need to transition to renewables, including solar.

This headline is from today’s Courier Mail. Other newspapers report: “The offers, which cover New South Wales, South Australia and south-east Queensland, indicate prices will rise between 19.6% and 24.9% for residents, similar to the draft levels announced in March. Victoria also announced a 25% rise to its default offer.”

The Bureau continues to support a transition to renewable energy without explaining the potential effects, even on the reliability of its own temperature measurements. Nor has the Bureau explained the effects of the transition to resistance probes more generally. (I’ve characterised the Ayers and Warne papers as fake in a 6-part series, republished by WattsUpWithThat.)

An overseas colleague has explained how something as simple as applying a 100 Hz frequency to a power circuit to extend the life of a battery (necessary with solar systems) can cause maximum temperatures to drift up on sunny days. To be clear: as the voltage increased, the recorded temperature increased, over and above any actual change in air temperature!

Go to the NASA page about temperature measuring and you will see a picture of someone atop a mountain in Montana, and a solar panel. That solar panel will be charging a battery that provides not only the voltage used to measure electrical resistance across the platinum wire, but also the power for the periodic upload of that same temperature data to a satellite.

Weather stations are set up throughout Glacier National Park in Montana to monitor and collect weather data. These stations must be visited periodically for maintenance and to add or remove new research devices. Credit: GlacierNPS, CC BY 2.0, via Wikimedia Commons.

Since the transition to the resistance probes that use voltage to measure temperature, problems at remote locations, from mountains to lighthouses, have included aging batteries (solar charged, of course) unable to provide sufficient current at critical times.

I’ve been shown data from one such remote location, where minimum temperatures reliably drop 2 degrees Celsius on the hour, every hour through the night, as the battery is drained by each satellite upload of temperature data.

This is the same temperature data that is being used in Australia, and around the world, to justify extreme economic and social intervention in the name of stopping climate change.

IN SUMMARY

In the 1990s, not just in Australia, but around the world, there was a fundamental change in the equipment and methods used to measure temperatures.

This created a discontinuity in the long temperature series that begin around 1880, and that are used by the IPCC to assess climate variability and change.

But neither the IPCC, nor NASA, nor the UK Met Office have documented the effect of this change.

I’ve been asking the Australian Bureau, which provides data into the global databases, how it knows that temperature measurements from the resistance probes at places like Cape Otway lighthouse are consistent with readings from mercury thermometers. (I’ve written extensively about how temperatures are measured at this lighthouse, including as part 3 of my 8-part series about hyping daily maximum temperatures.)

Now I ask: how can the Bureau know that NASA and the UK Met Office are reliably measuring temperatures if it has not seen the US and UK parallel temperature datasets?

The parallel data are the recordings from mercury thermometers measuring at the same location and at the same time as the resistance probes. This data will give some indication of the extent of the many discontinuities created in the record by the changeover to probes. I have estimated that the Bureau is holding parallel data for approximately 38 Australian locations, with on average 15 years of data each.

When John Abbot first lodged a Freedom of Information request for some of this data for Brisbane Airport back in 2019, he was told that the parallel data did not exist.

Abbot took the issue to the Australian Information Commissioner, who sided with the Bureau, falsely confirming that the data did not exist.

It was only after an appearance at the Administrative Appeals Tribunal in Brisbane on 3rd February, where I attended as an expert witness, and the drawn-out mediation process that followed, that three years of Brisbane Airport parallel data were finally made available.

As Graham Lloyd explained on the front page of The Weekend Australian thereafter, my analysis of this data shows that the resistance probes at Brisbane Airport measure temperatures that are quite different from the mercury thermometer most of the time. The Bureau has been able neither to confirm nor to deny the statistical significance of the difference. But it doesn’t dispute the actual numbers.

John Abbot recently lodged another FOI request for more parallel data for Brisbane Airport. This time around the Bureau has acknowledged the existence of the data, and even that some of the ‘field books’ have already been scanned and so are available in electronic form. But the Bureau claims it will again release only another three years of data, and only for this one site (Brisbane Airport); never mind the 15 years of parallel data that exist for Brisbane Airport, and for another 37 locations that vary geographically and electrically. We need this comparative data, including to assess the reliability of the current global warming forecasts.

I’ve been reliably informed, the Bureau is intent on drawing out provision of this parallel data to Abbot and me for Australia, while changing the ‘model’ it is using to measure temperatures as though the overseas systems are reliable.

So, I ask, again, where are these numbers for overseas locations, including the parallel data for the overseas mountains and airports – not to mention lighthouses?

It is critical that everyone be able to see this data, especially if the Australian Bureau is to adopt an overseas model for temperature measuring on the basis that it must be more reliable.

*****

The feature image shows me at the Goulburn Airport weather station in late July 2017. This weather station was shown to have had a limit set on how cold temperatures could be recorded, for a period of 20 years.

127 Comments
Rud Istvan
May 26, 2023 2:23 pm

As much as I welcome Jenn’s small BOM victory, I don’t think it matters much.

Both she and her colleagues, and AW’s now two US surface station projects, show the surface temperature data isn’t fit for purpose. The climate models are not fit for purpose: all but one of CMIP6 produce a tropical troposphere hotspot that does not exist, and, although parameter-tuned to best hindcast 30 years, anomalies hide the fact that they diverge in absolute hindcast temperature by about +/-3C, a horrendous disagreement.

The now testable predictions made by climate experts have all proven false. Hansen’s sea level rise acceleration didn’t happen. Wadhams’ Arctic summer sea ice did not disappear in 2014. Viner’s UK children still know snow 23 years later. The GBR is thriving, not dying. Glacier National Park still has glaciers. The green renewable solutions are now deployed at a penetration level showing they are financially ruinous and grid destabilizing.

Yet the money still flows, the IPCC still produces nonsense, and academic climate careers remain unimpaired. That evidences 40 years of AGW momentum. I think a further accumulation of failed climate predictions, plus failed green grid and transportation solutions, will be required to stop the climate insanity. We have good prospects on transportation in California and the EU. We have good grid-failure prospects in California, Germany, the UK and Aus. So I am increasingly optimistic about skeptics winning in the long term, meaning in the next decade.

AndyHce
Reply to  Rud Istvan
May 26, 2023 3:20 pm

That possibility is why an immediate transition is so critical. All those years of effort down the tubes!

Smart Rock
Reply to  Rud Istvan
May 26, 2023 3:33 pm

Rud: I would like to share your increasing optimism, but I’m terribly afraid that “they” (the anonymous masterminds) are going to switch from fossil fuels to the new anti-agriculture movement as the primary tool to destroy democratic society.

Net zero will be horrible, and there will be a backlash from the general public at some point. But face it, a lot of folk in Ukraine have been making do with intermittent electricity (plus getting bombed on a regular basis), and life goes on after a fashion. But destroying modern agriculture by putting limits on fertiliser and taking land out of service is going to be really, really, really bad – and they’ve only just started. We in the rich countries will pay a lot more for a lot less food and grumble endlessly about it, but in the LDCs, there will be starvation on an unprecedented scale. Hundreds of millions could die, and if the climate starts to cool off a bit, it could be billions.

I hope I’m wrong, but I’m not very optimistic. As I’ve said before, we are in a war, and we are being attacked on multiple fronts. If there’s a setback on one front, expect “them” to send reinforcements to the other fronts.

There was a time when we could look at the net zero movement in isolation. I think that time is past.

Graham
Reply to  Smart Rock
May 26, 2023 4:17 pm

Well said Smart Rock.
Anyone who thinks that food can be produced to feed the 8 billion people on this globe without using fossil fuels is delusional.
The food that is grown with nitrogenous fertilizer feeds 4 billion people.
Where and how would that food be grown or produced if the UN came out and tried to ban nitrogen fertilizer?
On top of that we have this insistent whine from the anti-farming lobby about methane from farmed livestock.
Methane from farmed livestock is not a problem and never will be: all fodder consumed by farmed livestock has absorbed CO2 from the atmosphere, and the tiny amount of methane emitted breaks down within 10 years into CO2 and H2O.
We have scientists telling us that they can ferment food in vats, but what could be better than cows turning grass into nutritious milk that most humans can digest?
There has to be a big turnaround before stupid politicians wreck their countries; otherwise poverty and hunger will become commonplace around the world.

Rud Istvan
Reply to  Graham
May 26, 2023 4:22 pm

I do not disagree. Wrote Gaia’s Limits (published early 2012) expressing my (then) views. But if the EU thinks 15 minute cities are the future, wait until those 15 minute cities have no food and see what happens.
Sri Lanka was a warning.

Barnes Moore
Reply to  Smart Rock
May 27, 2023 6:11 am

The multiple fronts include subjects unrelated to climate change. Woke, DEI along with ESG initiatives, indoctrination in schools including the teaching of CRT, sexualization of children, a fully corrupt media, big tech censorship, Hollywood, etc. are other fronts. It’s exhausting. What is most disturbing is how few people have the slightest idea of what is going on – fat, dumb and happy. I fear that even a total and catastrophic grid collapse that can only be blamed on the high penetration of unreliables won’t convince those afflicted with mass formation psychosis. The media will simply claim that fossil fuels also failed without doing any real investigation and the true believers will simply cite all the articles claiming unreliables really were not the problem – only that had there been more, everything would have been fine.

Thomas Sowell said it best: “It is usually futile to try to talk facts and analysis to people who are enjoying a sense of moral superiority in their ignorance.”

Jennifer Marohasy
Reply to  Rud Istvan
May 26, 2023 4:19 pm

Thanks Rud. And also for the correction (below) regarding what the satellites measure.

But in the end I do think it matters that we are able to accurately measure temperatures – locally and globally.

And one day I would like to think we could have different parallel systems in place at many locations on Earth, additional to the satellite data.

Anthony Watts is developing a method for a network of weather stations that is not so dependent on a constant voltage. (He is yet to launch the program; it is a very exciting development.)

I am also inspired by much of the discussion led by Tim Gorman at a previous thread, where he suggests climate scientists need to begin to understand how to measure temperature as an integration of change across each 24-hour period, as entomologists (my undergraduate training) have always understood temperature change in degree days.

Knowing exactly how the maximum and minimum temperature is defined each day may become less important; that would be progress as we move to a proper understanding of a daily average temperature and more.

Jim Gorman
Reply to  Jennifer Marohasy
May 27, 2023 7:51 am

A total integrated temperature profile over a 24-hour period is a new and useful concept. You can determine several values: deg•seconds, deg•minutes, deg•hours, deg•days, deg•months, deg•years. Using Kelvin allows direct comparison without anomalies.

No more averages of averages, no more averaging anomalies. You no longer have a time series after integration over a period of time; you have an actual temperature value over that period of time. If temps are converted to Kelvin at the start, station values can simply be added to find a mean, and will have a proper variance of the entire distribution.

If Cooling and Heating Degree Days are used, an added advantage is that climate science will be required to define a baseline temperature that is appropriate for humans. It can vary by location and if Kelvin is used, degree•time can be directly added and averaged.
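A minimal sketch of the integration idea, in Python, assuming hourly samples, an invented sinusoidal day, and the trapezoidal rule; any finer sampling or integration rule works the same way.

    # Sketch: integrating a 24-hour temperature profile into degree-hours
    # (in Kelvin). The hourly sampling and the invented sinusoidal profile
    # are for illustration only.
    import math

    temps_k = [15 + 8 * math.sin(math.pi * (h - 6) / 12) + 273.15
               for h in range(25)]          # hours 0..24 inclusive

    # Trapezoidal rule: average of each hourly pair times 1 hour.
    degree_hours = sum((temps_k[h] + temps_k[h + 1]) / 2 for h in range(24))

    print(round(degree_hours, 1))                # 6915.6 K.h over the day
    print(round(degree_hours / 24 - 273.15, 2))  # 15.0: time-weighted mean, degC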

Bill Johnston
Reply to  Jim Gorman
May 27, 2023 6:33 pm

Humans live in the Arctic, they live in the tropics, at Marble Bar and even at the Space Station. There is no such thing as “a baseline temperature that is appropriate for humans”. Degree days have specific uses, but beyond those they are pretty meaningless as a climate metric.

Cheers,

Bill

Jim Gorman
Reply to  Bill Johnston
May 27, 2023 7:08 pm

Bull pucky! Sorry you are unable to see the usefulness. Science advances whether you want it to or not. Integrating an actual temperature profile over time is a huge step in creating an actual scientific basis for climate science going forward. You need to reflect on why architects, power and HVAC engineers denigrate meteorology for using 2 temps a day to determine what is occurring. Tradition is an excuse, not an answer!

A common baseline will allow comparisons to be made between regions for HDDs and CDDs. Changes in these will be good indications of what heat in the atmosphere is actually doing.

Bill Johnston
Reply to  Jim Gorman
May 27, 2023 8:16 pm

Dear Jim,

While our opinions differ, you don’t have to resort to childish memes.

I respect your opinions. However, as you have no data to back them up I see they are just opinions.

I have observed the weather, I’ve undertaken a large body of work developing methodologies and analysing T-data, and I have also calculated degree-days in the past, finding that while they have specified uses, their utility in a climate sense is limited.

So we will just have to agree to differ and be polite about it!

All the best,

Bill Johnston

mleskovarsocalrrcom
May 26, 2023 2:35 pm

Kudos! Enough wins, no matter how small, and we may be able to turn the ship around.

AndyHce
May 26, 2023 3:18 pm

“satellites that are measuring temperatures at different depths within the atmosphere use these resistance probes”

What satellites are using resistance probes? I’ve certainly never heard of them in any of the many discussions about satellite measurements. I’m very curious as to how satellites could possibly measure atmospheric temperatures with such instruments. Does the statement perhaps mean that the temperature of the satellite itself is monitored by such a probe?

Rud Istvan
Reply to  AndyHce
May 26, 2023 3:33 pm

None do. Jenn got that detail wrong. There is no temperature (defined as degree of molecular agitation) in the vacuum of space, where there are almost no molecules. Sats use IR sensors, since the IR wavelength can be directly converted to a corresponding temperature. Shorter wavelengths mean higher temperatures, as shorter wavelengths by definition mean higher photon energy.

Jennifer Marohasy
Reply to  Rud Istvan
May 26, 2023 8:45 pm

Rud et al.

They do have platinum resistance probes on board the satellites.

I was wondering where I had got that idea from. I was writing quickly from memory. Now, checking in with John Christy:

“The satellite essentially measures the intensity of the microwave emissions from (1) the earth’s atmosphere, (2) a warm target on board and (3) cold space. 

The Earth views have intensity levels between those of the cold space reading and that of the warm target plate. The Platinum Resistance thermometers are embedded in the warm plate so we can know what the temperature of that target is. So, we know (a) the warm target temperature and its measured emission intensity, (b) the cold space temperature and its measured intensity, and (c) we know the measured intensity of the Earth views, so we can interpolate to get the brightness temperature of the Earth views.

So platinum resistance thermometers play a role.

I like to say that the satellites measure the temperature of the bulk troposphere (i.e. lowest 75 percent of the atmosphere). A simple time series should do.

Satellite and surface data measure different quantities, but should have very similar trends over long time periods – though in maritime areas this doesn’t quite hold. The thing about the tropospheric data is that this is the place where the greenhouse effect should be largest and clearest to detect. So, it is a critically important metric to consider. [ends]

FYI.
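What Christy describes is a two-point calibration: each scan records the radiometer counts for cold space (at a known temperature near 2.7 K) and for the warm target plate (whose temperature the embedded platinum resistance thermometers report), and Earth-view counts are converted by interpolating between those anchors. Here is a simplified linear sketch in Python with invented numbers; the operational MSU/AMSU calibration also applies a small non-linearity correction.

    # Sketch of the two-point calibration described above. All counts and
    # the warm-target temperature are invented for illustration.
    def brightness_temperature(counts_earth, counts_cold, counts_warm,
                               t_cold=2.73, t_warm=285.0):
        """Interpolate Earth-view counts between the two known anchors."""
        slope = (t_warm - t_cold) / (counts_warm - counts_cold)
        return t_cold + slope * (counts_earth - counts_cold)

    # Cold space reads 1000 counts; the warm plate reads 9000 counts and
    # its embedded platinum thermometers report 285.0 K.
    print(round(brightness_temperature(8000, 1000, 9000), 1))  # 249.7 K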

sherro01
Reply to  Jennifer Marohasy
May 28, 2023 5:17 pm

Yes,
It has long ago, and many times since, been stated by Roy Spencer that there is an on-board system of Pt resistance reference temperatures.
Geoff S

Smart Rock
Reply to  AndyHce
May 26, 2023 3:36 pm

I think Jennifer meant to say weather balloons, not satellites

Jennifer Marohasy
Reply to  Smart Rock
May 27, 2023 3:34 pm

Thanks Smart Rock,

I don’t know much about weather balloons.

John Christy tells me that there was a spurious temperature shift in the Australian radiosonde network: a spurious warm shift in 2010.

Radiosondes being the instruments suspended below weather balloons and used to measure pressure, temperature and relative humidity.

Peta of Newark
Reply to  AndyHce
May 26, 2023 4:51 pm

They’ll be using what is often referred to, by Spencer himself not least, as a ‘Thermopile’.

‘Thermo’ there refers, obviously, to ‘heat’, and the ‘pile’ is another word for ‘battery’
(it is the word for ‘battery’ in French, at least, and in many other languages).

In a nutshell and common vernacular, a Thermopile is something that converts heat into voltage.
In a Sputnik I somehow doubt it. Platinum would be too unwieldy both technically and thermally; they’d use Silicon or Germanium constructed into (arrays of) diodes, or the stuff used in PIR systems and alarms (Cadmium Sulphide), but there are a good few variations on that/those.
Diodes of all constructions have very precise and well-known thermal characteristics; in a fashion they do make very good Standard Cells.
Any and all solid state cameras are, in fact, fantastically good thermometers.

What goes on with the Sputniks though (Spencer’s Sputniks) is simply gobsmacking to the whole of climate science in its entirety.
That soooo many people can be so wilfully stupid & blind, ill-educated and unquestioning is nothing less than amazing.

The reason being: Spencer’s Sputniks ‘look’ at the atmosphere and measure its temperature using devices called Microwave Sounders.

The name is an oddity in itself. Microwaves don’t make ‘sound’, but even if they did, anyone would expect a ‘sounder’ to emit some sort of signal.
Microwave sounders are ‘listeners’ only.

What they are listening for is the resonance of diatomic Oxygen within and throughout the atmosphere.
It seems that Oxygen likes to resonate at particular frequencies (roundabout 55 GHz), but the actual resonant frequency is dependent on both the temperature the Oxygen is at and also its pressure.

So, by running a few experiments on the ground inside pressure chambers, you can create graphs and tables telling you how much energy comes off Oxygen at that (microwave) frequency, at all sorts of temps and pressures, such as Oxygen might encounter within Earth’s atmosphere.

Armed with that, you can send up a Sputnik to listen for that 55GHz signal, see how it spreads across its little spectrum and thus ‘see’ Oxygen at different pressures (heights) and what temp it’s at at those heights.
Fine. Lovely. Fantastic.

Where the wilful blindness and utter gullible acceptance comes in is that:
The Oxygen is Resonating

Errrrrm, excuse me – only Greenhouse Gases are allowed to resonate

IOW. The very working principle of the microwave sounders on board the much revered Spencer Sputnik rides a Coach and Horses through the very original premise of the Green House Gas Effect – upon which it all depends.

and nobody has noticed
and nobody says anything
even the guy operating the Sputnik

We are soooo doomed here – we really are and frankly, we deserve it.

DMacKenzie
Reply to  Peta of Newark
May 26, 2023 6:49 pm

Hmmm… 55 GHz is a wavelength of 0.545 cm, or 5450 microns. This corresponds to a Wien’s law max temp of 2890/5450 = 0.53 Kelvin, and orders of magnitude fewer electron volts per photon than, for example, 15 micron CO2 (193 K, -80 C); so nope, not 55 GHz. I think they operate around 5.5 micron wavelength.

Nick Stokes
Reply to  DMacKenzie
May 26, 2023 11:11 pm

No, it is around 55 GHz.

Nick Stokes
May 26, 2023 3:18 pm

“And, this is the important bit, the voltage delivered to the probe is critical for accurate temperature measurement. Not just in Australia, but around the world. And there are no standards.”
Not true. Three-wire configured RTDs use a Wheatstone bridge configuration. It is a balance between the RTD and reference resistors.

Peta of Newark
Reply to  Nick Stokes
May 26, 2023 4:16 pm

Yesssss, they use a bridge to measure the resistance of the probe.
The ‘joy’ of the Wheatstone Bridge being that you measure the resistance of the probe (or anything) without putting any current through it or voltage across it.
(The more you think about that the crazier it sounds)

But the ‘why’ is so that your measuring circuit doesn’t put power into the probe and thus heat it

But all that is doing is putting the temperature sensitivity onto another component, and in any circuit such as these you need an accurate Voltage Reference.
All resistances, resistors and other components will have their own temperature characteristics that will trash your result if you don’t recognise them.

Classically, and as we all learned (didn’t we?) at ages 16 or 17 in Physics lessons at school, the Voltage Reference was a ‘Standard Cell’,
i.e. odd little things that resemble a battery (cell) but with particular chemistries such that they all have, and always have, a precisely known and extremely stable voltage.
As long as you don’t mistreat them, especially by drawing any significant current out of them.

Modern science has ‘perfected’ solid state versions of the Standard Cell, generally called Zener Diodes.
Supposedly extremely precise and stable variations of these basic components become the voltage references for any time/place where the Analogue World meets the Digital World:
obviously, where the analogue signal is a voltage and needs to be converted into a string of ones and zeroes.
So Zeners are in thermometers, but also especially commonly where audio (and video) is involved. Also, and vastly important, battery chargers/controllers.

e.g. When you speak into your mobile phone, the very first component that ‘hears’ you is a Zener diode, comparing the voltage from the microphone against its reference voltage so that the rest of the circuitry can make consistent estimates, and thus numbers, out of the signal coming from the mic.

News that the recorded temperature drops/changes when the rest of the ‘apparatus’ is busy is simultaneously laughable, cringe-worthy, toe-curling and More Wrong Than a Wrong Thing.

Any serious electronics engineer is only left wondering what sort of childlike & amateur shambles that circuit design and construction must be.

Nick Stokes
Reply to  Nick Stokes
May 26, 2023 4:20 pm

“Three-wire configured RTDs”
I see the BoM uses four wire config. Even more so.

Jennifer Marohasy
Reply to  Nick Stokes
May 26, 2023 4:29 pm

Relevant here is a comment by Jim Gorman at the previous thread on this topic where he wrote:

“2 wire -> The source is a constant voltage. The lead resistance and contact resistance are in series with the Rt. The resistance is pretty constant and reduces the overall sensitivity to temperature change.

3 wire -> The source is a constant voltage. The 3rd wire is used to reduce the effective resistance in the leads and contacts.

4 wire -> The source is a constant current source and not a constant voltage source. The lead configuration along with a constant source almost eliminates lead and contact interaction, so the Rt gives the most sensitive readings. Please note ‘almost’ means not entirely, because nothing ever matches perfectly in the real world.”

***

Re-reading this, I think I have the detail of the need for a ‘constant voltage’ incorrect in the above piece.

Loren Wilson
Reply to  Jennifer Marohasy
May 26, 2023 8:20 pm

Most digital multimeters measure resistance by supplying a low but constant current to the circuit and measuring the voltage drop across the circuit. For example, the Keithley 2000 DMM and the Hart 1502a use a supply current of 1 milliamp. This reduces self-heating of the item under test.

A platinum resistance thermometer (PRT) is simply a long piece of very small gauge platinum wire. Usually the wire is coiled like the tungsten filament in an incandescent light bulb, and encapsulated in glass, quartz, or a ceramic. Leads are attached to the two ends of the platinum coil. If you have a two-wire platinum resistance thermometer, one wire is attached to each side of the platinum coil; therefore, the resistance of the leads is included in the measurement, so your result can be quite inaccurate. A three-wire PRT has two wires attached to one side and one attached to the other. The current flows down one of the pair of wires and the voltage drop is measured using the other of the pair of wires and the common wire.

No serious work is done with two or three wire sensors. The big boys use a four-wire PRT. A pair of wires is attached to either end of the platinum coil. One set is used for the supplied current and the other measures the voltage drop. Lead resistance is eliminated (not completely, but close enough that even NIST is not concerned about it). The thermometers used by the big boys also come with a calibration against fixed points at two current levels, usually 0.5 and 1 mA. The amount of self-heating is also quantified. My research was conducted mostly in equipment in stirred liquid baths where probe self-heating was on the order of 1 mK.

I bought relatively cheap PRTs (100 ohm element in ceramic inside a 1/8″ stainless steel sheath) with the best purity of platinum I could get for a few hundred dollars each, coupled with either a Keithley 2001 DMM or a Hart 1502a. We had a reference quality PRT calibrated by NIST to calibrate the industrial quality PRTs used for the day-to-day work. The working PRT was checked in a stirred ice bath each day. I had millikelvin precision but likely ±0.05K accuracy.

In air, especially nearly still air like the conditions in a weather station enclosure, self-heating can be a factor. Hopefully the manufacturer of the probes used by the BOM provides this information, along with a statement quantifying the stability of the current source and voltmeter used for these measurements.
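To make the four-wire arithmetic concrete, here is a small Python sketch. The 1 mA sense current and the PT100 element follow Loren’s description; the measured voltage is an invented value, and the conversion uses the generic IEC 60751 constants that a calibrated laboratory instrument would replace with probe-specific coefficients.

    # Sketch of the four-wire arithmetic described above: a known current
    # through one wire pair, voltage sensed on the other pair, so lead
    # resistance drops out. The measured voltage is invented.
    R0, A, B = 100.0, 3.9083e-3, -5.775e-7   # IEC 60751 PT100 constants

    def pt100_temperature(r):
        """Invert R = R0*(1 + A*t + B*t**2) for t >= 0 degC."""
        return (-A + (A ** 2 - 4 * B * (1 - r / R0)) ** 0.5) / (2 * B)

    i_source = 1.0e-3        # 1 mA sense current, as in the meters above
    v_measured = 0.1077935   # volts across the sense pair (invented value)

    r = v_measured / i_source              # ~107.79 ohm, leads excluded
    print(round(pt100_temperature(r), 2))  # 20.0 degC

    # Self-heating power deposited in the element at this current, the
    # effect that calibrating at two currents quantifies:
    print(round(i_source ** 2 * r * 1e6, 1), "microwatts")  # 107.8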

Jim Gorman
Reply to  Loren Wilson
May 27, 2023 8:45 am

Great information. Way too many so-called experts spoil the stew! (Correct information helps.)

sherro01
Reply to  Loren Wilson
May 28, 2023 4:01 am

Loren Wilson,
Many thanks for a neutral, useful set of words from one with hands-on experience in the direct topic.
Years ago I used to make and test such devices, as part of a quest like getting the best oscilloscope. These days I have no knowledge of the brands of parts, or even the exact functions of some relevant devices, so I stay quiet.
It was great to learn from you. It reminded me of an approach to thermometry that few seem to bother with: asking experts.
During claims over the performance of Argo floats, I wrote to Britain’s National Physical Laboratory, Teddington:
Q: “Does NPL have a publication that gives numbers in degrees for the accuracy and precision for the temperature measurement of quite pure water under NPL controlled laboratory conditions?
At how many degrees of total error would NPL consider improvement impossible with present state-of-art equipment?”
A:  “NPL has a water bath in which the temperature is controlled to ~0.001 °C, and our measurement capability for calibrations in the bath in the range up to 100 °C is 0.005 °C. However, measurement precision is significantly better than this. The limit of what is technically possible would depend on the circumstances and what exactly is wanted. We are not aware of any documentary standard setting out such limits.”
A similar letter to the Australian authority got this reply:
I am Mong-Kim Ho, and I am responsible for the Australian Temperature Scale from -189 to 960 deg C.
The selection of a suitable temperature sensor  and its readout is mostly based on the overall uncertainty, the physical constraint (contact/immersion), manual or auto-logging, available budget… The most accurate (most expensive) sensor is a standard platinum resistance thermometer at mK level uncertainty.
The best way to assist with your query is to contact me on 02 8467 3572.

The Aussie experience was similar to that of the UK. I hope that this helps.
Geoff S

Jim Gorman
Reply to  Nick Stokes
May 27, 2023 8:39 am

“Not true. Three-wire configured RTDs use a Wheatstone bridge configuration. It is a balance between the RTD and reference resistors.”

You might explain, in your infinite wisdom, how a Wheatstone bridge works without using Ohm’s Law. That is, V = IR.

I think if you look at the probe, there is always both voltage and current going through the bridge. In fact most diagrams show the voltage difference across the bridge (the terminals of the galvanometer) as E₀ and E₁. If E₀ and E₁ are equal, there will be no current through the device used to measure the resistance. The “standard” resistance that is used to balance the circuit can then be used to determine the value of the test resistance.

I know RTDs operate somewhat differently, by using the imbalance to actually indicate temperature, and they also require calibration to determine the current-imbalance-versus-temperature curve.

Varying voltages can certainly upset the calibration curve, causing incorrect readings. Heating can occur in the resistors if the voltage/current is too high, again changing the calibration curve. Currents induced by electrical noise (radar?) can upset the bridge.

Getting more resolution from a device requires measuring smaller and smaller current imbalances. This by itself introduces more noise contamination. Trying to make lab-grade measurements in a field device, even an RTD, is fraught with opportunities for error, which must be quantified. One quickly learns that the costs of more and more precision far outweigh the value of more resolution in atmospheric temperature measurements.

Some links:

How to Wire an RTD with 2, 3 or 4 Wires? (omega.com)

Wheatstone Bridge – Working Principle, Formula, Derivation, Application (byjus.com)

karlomonte
Reply to  Jim Gorman
May 27, 2023 11:15 am

Off-the-shelf Pt RTD meters use the ASTM or ISO standard resistance-versus-temperature curves, with the curve type and wiring settable in firmware. Not sure I’d want to use one in an airport environment, though.

Nick Stokes
Reply to  Jim Gorman
May 27, 2023 5:13 pm

“You might explain, in your infinite wisdom, how a Wheatstone bridge works without using Ohm’s Law.”
Who said that?
The point is that if you measure by Ohm’s law directly, and there is a ΔR that you want to know, due to T, but also an error in supply voltage ΔV, then the current discrepancy will be got from (I+ΔI)(R+ΔR)=V+ΔV, or IΔR=ΔV-RΔI.

But in a balanced bridge, there will be zero current, whatever V. And to the extent that ΔR causes imbalance, the current ΔI across the bridge (all resistors originally = R) is given by
ΔI = VΔR/(2R(R + ΔR)),
or, to first order, ΔR = ΔI(2R²/V).
There is no ΔV, to first order.
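The point is easy to check numerically. A minimal sketch, assuming an ideal high-impedance detector across the bridge (so node voltages are compared rather than galvanometer current) and illustrative component values:

    # Sketch: Wheatstone bridge behaviour near balance, with an ideal
    # high-impedance detector. Three arms are fixed at R; the fourth
    # (the RTD) changes by delta_r. All values are illustrative.
    def bridge_output(v_supply, r, delta_r):
        """Differential voltage between the two mid-nodes of the bridge."""
        v_ref = v_supply * r / (r + r)                        # fixed side: V/2
        v_rtd = v_supply * (r + delta_r) / (r + r + delta_r)  # RTD side
        return v_rtd - v_ref

    R = 100.0
    for v in (10.0, 10.1):  # nominal supply, then a +1% supply error
        balanced = bridge_output(v, R, 0.0)  # exactly zero, whatever V
        signal = bridge_output(v, R, 0.39)   # ~1 degC change for a PT100
        print(f"V={v}: balanced={balanced:.6f} V, signal={signal:.6f} V")
    # At balance the output is zero regardless of supply; near balance a
    # 1% supply error scales the small signal by only 1% (second order in
    # the inferred delta_r), unlike a direct Ohm's-law readout.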

Jim Gorman
Reply to  Nick Stokes
May 27, 2023 7:20 pm

It was you that said the balance is what is important. Constant voltage or current sources are needed to ensure the calibration curve is the appropriate one. You are out of your area of expertise, dude.

Nick Stokes
Reply to  Jim Gorman
May 27, 2023 9:28 pm

It’s just a circuit, calibrated or not. You can work out the response to varying R or V equally.

Jim Gorman
Reply to  Nick Stokes
May 28, 2023 10:09 am

You always dance around the issue. The problem is that the calibration curve (non-linear) is done under given conditions. If R1, R2, R3 change due to different voltage or current, depending on the configuration, then the programmed calibration curve is no longer valid.

Dude, I am old enough that much time in 1st-semester EE lab was spent using a Wheatstone Bridge. We learned how temperature affects resistors of various compositions. We used it to evaluate the resistance of inductors of various types. It was my first introduction to measurement uncertainty. You don’t know how easy it was to upset the bridge with just your body. Remember, small currents are the order of business, to prevent resistance changes due to heating. That’s why galvanometers were used, not just some old voltmeter.

Nick Stokes
Reply to  Jim Gorman
May 28, 2023 1:42 pm

“If R1, R2, R3 change due to different voltage or current, depending on the configuration, then the programmed calibration curve is no longer valid.”

Well, if they behave differently than during calibration. But that is the stupid thing about these discussions. All sorts of normal instrumentation issues become unsolvable if climate is involved.

The point of using a Wheatstone bridge is that keeping a simple resistor linear and stable is an easier task than, say, supplying a super accurate voltage.


Nick Stokes
May 26, 2023 3:25 pm

“Now, I’m informed, the Bureau are ditching the current system and looking to adopt an overseas model that it claims will be more reliable.”

Very little detail here. System for what? What exactly is changed? What does this have to do with Jennifer’s agitations?

Depending too, of course, on the reliability of Jennifer’s information.

Graeme4
Reply to  Nick Stokes
May 26, 2023 3:30 pm

Do you know if it’s possible to obtain more technical details of the BOM’s measuring stations, Nick, including circuit diagrams?

Nick Stokes
Reply to  Graeme4
May 26, 2023 5:13 pm

I haven’t seen circuit diagrams, which may be proprietary. But there is a detailed report here. More here.

Graeme4
Reply to  Nick Stokes
May 26, 2023 8:03 pm

Thanks for the info Nick.

Jim Gorman
Reply to  Nick Stokes
May 27, 2023 9:15 am

Guess what this document shows for a combined uncertainty including the screen?

6.2.4 In such extreme conditions, relative humidity readings can be off by as much as 50 per cent, i.e. several °C for the dew-point temperature. As for air temperature, uncertainties associated with the screen are generally significantly higher than uncertainties associated with the sensor (Pt100) and the acquisition system. However, the desired ±1°C accuracy is attainable with a well-designed screen.¹⁷ MA8a provides more detail on instrument screen siting.

±1°C accuracy is a pretty big uncertainty, and far outweighs even a ±0.10°C uncertainty from the RTD.

6.3.2 As platinum is a corrosion-proof metal, platinum wire probes have excellent stability over time, particularly when the platinum wire is well protected. It is therefore preferable to use a probe with proper mechanical protection.¹⁸

Range: −80˚C to +60˚C¹⁹

Resolution: 0.1˚C

Uncertainty: ±1.0˚C
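For what it is worth, if the screen and sensor contributions are treated as independent they combine in quadrature, which shows how thoroughly the screen dominates. A minimal sketch using the figures quoted above:

    # Sketch: combining independent uncertainty contributions in
    # quadrature (root-sum-of-squares), using the figures quoted above.
    screen_u = 1.0   # degC, the document's combined screen figure
    sensor_u = 0.1   # degC, an illustrative RTD/acquisition contribution

    combined = (screen_u ** 2 + sensor_u ** 2) ** 0.5
    print(round(combined, 3))  # 1.005 degC: the screen term dominates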

Jennifer Marohasy
Reply to  Jim Gorman
May 27, 2023 11:35 am

Thanks Jim Gorman for finding this, and Nick Stokes for the link. :-).

Jim Gorman
Reply to  Jennifer Marohasy
May 27, 2023 11:43 am

You are welcome. These values are similar to NOAA/NWS specs for ASOS stations and MMTS in general. NOAA’s CRN stations have a better uncertainty (0.3˚C), but they also use better siting, etc.

This document is significant to me.

NDST_Rounding_Advice.pdf (noaa.gov)

Makes you wonder how we get anomalies to the one-thousandths digit.

Jennifer Marohasy
Reply to  Nick Stokes
May 26, 2023 4:31 pm

Thanks Nick. With all your contacts within the BoM and CSIRO, perhaps you can tease something ‘official’ out of the institutions … one or the other. :-).

Nick Stokes
Reply to  Jennifer Marohasy
May 26, 2023 5:10 pm

Contacts? Not really. But where would I start?
I’ve heard a rumour that the BoM is upgrading something, somewhere? Could you tell me more?

SteveG
Reply to  Nick Stokes
May 26, 2023 5:23 pm

You heard a rumour from no contacts?

Nick Stokes
Reply to  SteveG
May 26, 2023 5:46 pm

The rumour (that is all it is) came from Jennifer.

Mr.
Reply to  Nick Stokes
May 26, 2023 4:37 pm

I’m given the firm impression that you have insider access to BoM nabobs, Nick?

Can you use your influence there to glean more details to share with us about the undisclosed changes to which Jennifer refers?

Ta.

Mr.
May 26, 2023 3:40 pm

Manmade global warming / climate change –

it’s all academic.

Literally!

Joseph Zorzin
Reply to  Mr.
May 26, 2023 4:29 pm

In many parts of the world, like here in New England, we spend most of the year praying it’ll warm up – so the idea that we should fear a bit more warming is maniacal.

Mr.
Reply to  Joseph Zorzin
May 26, 2023 6:34 pm

Yes, you would think that Canadians for example, would be delirious with excitement about the prospect of their country being hugely more habitable and arable, with a few more degrees of warmth throughout the year.

What’s up with that?

Jeff Alberts
Reply to  Mr.
May 27, 2023 8:51 am

Why would they be excited about that? They’re much more excited to have state-sponsored eugenics.

Dave Fair
Reply to  Mr.
May 26, 2023 8:00 pm

Academic, yes. But you have to throw in the alliance between paid-for Leftist politicians and their profiteering crony capitalist benefactors.

Tim Gorman
May 26, 2023 4:23 pm

“But the problem of reliable temperature measurements doesn’t begin or end with numerical averaging.”

The problem with averaging is that the Pt sensors will record higher temps than the mercury thermometers, causing any average to be biased if the temperature curve is not linear.

Suppose the temperature goes from 20C at time t0 to 21C at time (t0 + 1 minute) and back to 20C at time (t0 + 2 minutes). The mercury thermometer will only indicate part of that increase in the first minute due to its thermal inertia; let’s say, just for argument’s sake, that it will indicate half, or 20.5C. Will the average of the Pt sensor readings also equal 20.5C? If not, then what will the Pt sensor average be? In essence, this introduces an uncertainty into what is actually being recorded compared to the mercury thermometer. This will be added to the systematic uncertainty associated with any field measurement device. When climate science is trying to identify differences in temperature of 0.01C, adding any uncertainty just makes this endeavor more questionable than it already is.

You’ll get the same uncertainty if the temperature is falling, e.g. during the second minute.
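The question can be explored with a toy simulation: treat the mercury thermometer as a first-order lag with some time constant tau, feed both instruments the triangular one-degree spike described above, and compare what each records. The one-second timestep and the 60-second tau are invented for illustration; the real lag of a mercury-in-glass thermometer depends on its construction and on airflow.

    # Toy simulation of the example above: a triangular 1 degC spike over
    # two minutes, seen by an instant-response probe and by a mercury
    # thermometer modelled as a first-order lag (tau = 60 s is assumed).
    def air_temp(t):
        """20 degC rising to 21 degC at t=60 s, back to 20 degC at t=120 s."""
        if 0 <= t <= 60:
            return 20.0 + t / 60.0
        if 60 < t <= 120:
            return 21.0 - (t - 60.0) / 60.0
        return 20.0

    dt, tau = 1.0, 60.0
    mercury = 20.0
    probe_trace, mercury_trace = [], []
    for i in range(241):                    # four minutes at 1 s steps
        t = i * dt
        probe_trace.append(air_temp(t))     # one-second spot readings
        mercury += (air_temp(t) - mercury) * dt / tau  # lagged response
        mercury_trace.append(mercury)

    print(round(max(probe_trace), 2))              # 21.0: probe catches the peak
    print(round(max(mercury_trace), 2))            # ~20.5: mercury never does
    print(round(sum(probe_trace[:121]) / 121, 2))  # ~20.5: probe's 2-minute mean
    # The probe's two-minute mean and the damped mercury peak agree here
    # only by accident of the spike shape and the assumed tau.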

Bill Johnston
Reply to  Tim Gorman
May 26, 2023 6:14 pm

Tim,

The response is linearised via a quadratic calibration relationship determined in a metrology lab using fixed-T oil-baths. This is spelt out in several BoM publications that you (and JM) could find on the internet.

By the way, you and she could also find that the AWS dataloggers and the various settings are proprietary goods and services. While their serviceability was evaluated against Bureau specifications by its metrology lab, they were not built by the Bureau. In the same vein, while it uses thermometers, the BoM also doesn’t make thermometers.

It is also impossible to calibrate thermometers and PRT-probes in service. They can be checked for accuracy during a service call but they cannot be calibrated.

All the best,

Bill Johnston

http://www.bomwatch.com.au

Loren Wilson
Reply to  Bill Johnston
May 26, 2023 8:29 pm

Can they at least change the resistance value in the calibration to match the current reading while at 0.01°C (triple point of water)? My experience and impression is that if the ITS-90 methodology is followed, the resistance at the water triple point may increase as the PRT “ages” but the calibration is also shifted proportionally, hence their approach to basing everything on the resistance ratio.

Bill Johnston
Reply to  Loren Wilson
May 26, 2023 9:47 pm

Dear Loren,

I don’t know. In response to the Goulburn saga they set new specs, and as far as I can work out, the company produced new cards for the data loggers. When I worked with PRT-probes in the 1980s, the manufacturer tried to alter (lengthen) the calibration so the response would not overshoot and cause spikes on warm days. I don’t think spikes have anything to do with radars etc; however, if they get averaged into a dataset (versus being identified and removed), their influence is still embedded in the data.

My experience with commercial AWS deployed at field sites was that spiking was still an issue; and somewhere the Bureau has documented that it uses rules-based error trapping to remove spikes at source. (I can’t spend my life chasing around this stuff, but it is reasonable to assume that those who publish on it can do so with authority.)

In my view, having taken weather observations on and off for a decade from 1971 to about 1980, the data are too coarse to use to detect small changes in the climate. This is borne out by the numerous studies now published on http://www.bomwatch.com.au.

In response to arguments being made here, my most recent study, published yesterday, examines an overlap thermometer/AWS dataset for Townsville (under the tab Statistical Methods).

Kind regards,

Bill Johnston.

Jennifer Marohasy
May 26, 2023 4:45 pm

My colleagues at the IPA, and journalists at The Australian, and so many others have ‘gone to ground’ on this. They are hiding. At least for the moment.

No announcement of this small win in The Weekend Australian or in any of the three weekly IPA newsletters to members and others. Yet I was telling them last Wednesday. Time to get official confirmation, or not.

And so often these same publications speak of the need for us to be bold, and lead the way. Chin up.

They will hopefully catch up, eventually, including with Charles Rotter and Anthony Watts.

Not to mention that I am in awe of John Christy and Roy Spencer for their work for so long now, accurately measuring temperature by satellite.

And I am always inspired by Anthony’s motto here: “Walk toward the fire. Don’t worry about what they call you.” – Andrew Breitbart.

MOST IMPORTANTLY:

I have a member of the Australian Parliament also inspired, and wishing for me to provide some questions that can be tabled in the Australian Parliament, in an attempt to get some discussion going, lest the ‘new model’ end up less than reliable.

So, this request is especially to Tim Gorman and others who understand in great detail how temperature is measured electronically and potential pitfalls.

How best, through a series of questions, can we highlight the mistakes of the past with a view to avoiding them moving forward?

So much thanks, in advance.

RickWill
Reply to  Jennifer Marohasy
May 26, 2023 6:11 pm

“I have a member of the Australian Parliament also inspired, and wishing for me to provide some questions that can be tabled in the Australian Parliament”

The temperature in the Nino34 region is one of the key indicators of the Pacific Ocean oscillation between El Nino and La Nina phases. Many Australians appreciate that these phases have a dramatic influence over the weather in Australia. The average temperature in this region is remarkably stable over time, with accurate measurement, using a combination of satellites, moored buoys and drifting buoys, showing a slight cooling trend over the past 4 decades.

However, the CSIRO ACCESS climate model shows a steadily rising trend in this region. If the model cannot reproduce the temperature trend in a region which has such a significant bearing on Australia’s weather, why should there be any confidence in the model to replicate climate anywhere on any timescale?

The attached chart compares the ACCESS model using CMIP5 input with NOAA/NCEP temperature data. The chart was produced in 2021.

[Attached: Nino34_NCEP_CSIRO.png]
RickWill
Reply to  RickWill
May 26, 2023 6:32 pm

The attached chart here shows NCEP and the CSIRO CMIP3 model. This one reflects the CMIP3 baseline data such that the forecast temperature in the region is already above the actual.

The IPCC reports never go back and check their forecasts. They just reset to the new baseline with the same warming trend. You never get the modellers to agree their earlier model was wrong. They just say the latest one is better.

A good question is why did the model need to be upgraded? The science was settled long ago apparently.

[Attached: Nino34_CSIRO_CIMP3.png]
Bill Johnston
Reply to  Jennifer Marohasy
May 26, 2023 6:32 pm

Dear Jennifer,

Idea number 1. You could stop claiming differences between instruments at Brisbane airport and Mildura are significant when they are not.

I am frankly amazed that you could use the wrong test (the paired t-test) on data that are not paired (instead use the unpaired t-test), and that you could claim differences are real when in fact both tests are invalidated by autocorrelation.

By not understanding the difference between paired and un-paired t-tests, and the overarching issue of autocorrelation, you are making a total hash of this. I hope your politician does not discredit himself by taking your advice.

You should also put your data in the public domain like real scientists do.

Yours sincerely,

Bill Johnston

http://www.bomwatch.com.au

Jennifer Marohasy
Reply to  Bill Johnston
May 26, 2023 7:26 pm

Bill,

You don’t have access to either data set, yet you claim to know there is no statistical difference.

Last thread you were claiming I was suggesting the difference was not zero; at least now you have updated your accusation to ‘no difference’, which is the test I did.

You don’t have access to the data because the Bureau has not made it public.

And I won’t give it to you, because you ran a campaign stating in the first instance that the Mildura parallel data did not exist, and then that I was wasting BoM resources asking for the Brisbane data.

Worse, you wrote to management at the IPA falsely claiming you had evidence I was incompetent, and phoned various journalists, again falsely stating I was making stuff up.

That is what you do: make stuff up.

Bill Johnston
Reply to  Jennifer Marohasy
May 26, 2023 9:17 pm

Thanks Jennifer, you say that I:

“claim to know there is no statistical difference”.

The standard deviation of data that is in the public domain is such that, relative to the decimal-point differences you highlight, the likelihood of finding a difference between instruments is vanishingly small. You can also work out, using your numbers of observations and the available data, how large that difference would have to be in order to reach significance.

Further, your graphs show that all but a few of the 1,000 or so daily observations (3 years) at Brisbane, and of the 10,000 or so at Mildura, are within the uncertainty band for comparing two values (+/- 0.6 degC).

So, while I (and others) don’t need your data, it would be interesting to confirm those assessments. It also does not matter whether a percentage of numbers are above the re-scaled mean (which is zero) or below; they are still within the uncertainty envelope. If you want to look at proportions or numbers of instances above and below, you need to use a different test.

Re. using paired t-tests on Brisbane data, I pointed out that the paired t-test is a test that the mean of the differences (between instruments) is zero. The unpaired t-test is a test that the means of two groups (each representing an instrument) are the same. In both cases low P-levels reject the NULL hypothesis in favour of the alternative. Even if the test were appropriate, the words significant and highly significant do not indicate whether a difference is meaningful in the overall scheme of things.

As for the “campaign” and the talk of making things up, you cannot name one journalist that I contacted, because I don’t know any and did not contact any. I sent you a private email in an attempt to dissuade you from getting in over your head.

However, there being no response from you, I sent a private submission, in which I justified my case, to Scott Hargreaves at the IPA, because as a member and supporter of the IPA I was concerned where this BoM-warfare was leading. With your welfare in mind, and acknowledging your valuable contribution to exposing BS-science on the Great Barrier Reef, I was hoping your hype about weather stations would quietly subside.

Frankly, as things are I don’t think this will end well.

Yours sincerely,

Bill Johnston

karlomonte
Reply to  Bill Johnston
May 27, 2023 6:40 am

Better.

old cocky
Reply to  Bill Johnston
May 26, 2023 8:55 pm

“You could stop claiming differences between instruments at Brisbane airport and Mildura are significant when they are not.”

“I am frankly amazed that you could use the wrong test (the paired t-test) on data that are not paired (instead use the unpaired t-test), and that you could claim differences are real when in fact both tests are invalidated by autocorrelation.”

Something we see on a continuing basis is widely disparate views on statistical analyses and on what the “correct” measures and tests are. This seems to be tied to the commenters’ fields of study.

To try to remove just some of the ambiguity, it would be nice to see why Jennifer considers the paired t-test to be appropriate for this case, and why Bill considers it to be inappropriate.

Bill Johnston
Reply to  old cocky
May 26, 2023 10:51 pm

Dear old cocky,
 
In simple terms, the paired t-test is designed to attribute all the variation in the response to “subjects”, in this case to instruments. It does this by using the standard error of the differences (a standardized measure of variation) as the denominator in the t-test equation. A small denominator relative to the mean difference results in a larger t-value (essentially the ratio of signal to noise). A large t-value indicates a low probability that the NULL hypothesis (that the mean difference is zero) is supported. Thus the difference is significant or, depending on the significance level, highly significant.

Significant is usually taken to mean there is only a small probability (0.05, 5%, or 1 in 20) that the NULL is supported by the data. Highly significant reduces the odds to less than 0.01.

So, hold that idea.

An unpaired or two-sample t-test evaluates whether the mean responses of two subject-groups are the same. The unpaired test uses standard errors pooled across the two groups. Pooled SEs are larger than the SE used by the paired t-test; consequently, if the same data are analysed as groups, the likelihood of detecting a difference is considerably less.

In practical terms, the paired t-test controls for variation between subjects – typically the same subjects are measured repeatedly. The same people may be measured before and after an intervention; the same animals before and after a dietary supplement. Two instruments may be compared in a series of oil- or ice-baths under closely controlled conditions. The underlying assumption is that differences in response are predominantly due to the intervention, the diet, or the instruments being compared under controlled conditions in a lab.

Conditions inside a Stevenson screen are constantly changing. There is no control, and the instruments cannot measure the exact same parcel of air 100% of the time. Probes are invariably towards the rear of the screen, while the thermometers, which are accessed every day, are at the front, closest to the observer.

The instruments are doing their job, which is to show that the environment being measured is not constant in space or time. The paired t-test, which is much more sensitive to small differences, is inappropriate under the circumstances.
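To illustrate the two denominators at work, here is a minimal R sketch. The data are simulated (a shared weather signal plus independent instrument noise); none of the numbers come from the Bureau:

set.seed(42)
weather <- rnorm(500, mean = 25, sd = 5)       # shared day-to-day weather signal
A <- weather + rnorm(500, sd = 0.3)            # hypothetical instrument A
B <- weather + 0.1 + rnorm(500, sd = 0.3)      # hypothetical instrument B, offset by 0.1 degC
d <- A - B
sd(d) / sqrt(length(d))                        # paired denominator: SE of the differences
sqrt(var(A)/length(A) + var(B)/length(B))      # unpaired denominator: SEs pooled across groups

Because the two simulated instruments track the same weather, the SE of the differences is roughly an order of magnitude smaller than the pooled SE, which is why the paired test flags even tiny offsets as ‘significant’.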
 
(The fish to water ratio was very unfavorable last Monday, mainly due to sea level rise; however, all next week untouched! They also say you do better when sea level is falling.)
 
All the best,
 
Bill Johnston
  

old cocky
Reply to  Bill Johnston
May 27, 2023 12:06 am

With recreational fishing, it’s supposed to be more about the journey than the destination.

Jennifer Marohasy
Reply to  old cocky
May 26, 2023 10:56 pm

Old cocky, we’ve been through it over and over and over. The last time was with Cohenite. You can find that thread if you like. You couldn’t find a more perfect test than a paired t-test given the nature of this data.
Bill is on a loop with this one.
He is calling white black, over and over and over.

Bill Johnston
Reply to  Jennifer Marohasy
May 26, 2023 11:34 pm

Dear Jennifer,

In my reply to dear old cocky, I summarised the differences between the tests and gave reasons why paired t-tests are not appropriate for comparing instruments held in Stevenson screens. As of yesterday I have also provided an overview here: (https://www.bomwatch.com.au/bureau-of-meterology/why-statistical-tests-matter/).

The underlying report supporting that overview is here: http://www.bomwatch.com.au/wp-content/uploads/2023/05/Statistical-tests-Townsville-Case-Study.pdf , and the dataset is here: Statistical tests Townsville_DataPackage

So, rather than shower me with niceties and your usual warm accolades, what are your reasons for continuing to use an inappropriate test, and for disregarding the assumptions on which it is based?

Why don’t you do some internet searches or read some stats books; and, linked to that, why has this issue been going on for almost a decade?

Yours sincerely

Dr Bill Johnston

old cocky
Reply to  Bill Johnston
May 27, 2023 12:04 am

I was trying to uncover the differences in the assumptions underlying the ongoing argument, not stir it up again 🙁

Bill Johnston
Reply to  old cocky
May 27, 2023 12:25 am

Ummmm, sorry. Despite there being thousands of textbooks and posts about t-tests, technically there are only three or four assumptions underlying both tests.

With so much info sloshing around, and except for intransigence and a lack of statistical understanding, I’m at a loss to explain the assumptions underlying “the ongoing argument”. I guess she assumes I’m wrong; alternatively, perhaps you are asking the wrong person.

Anyway at this juncture, out to dinner!

Cheers,

Bill

old cocky
Reply to  Bill Johnston
May 27, 2023 12:41 am

At the risk of being burned from all sides, at a conceptual level it seems to boil down to “are these samples the same?” vs “do these samples come from the same population?”

Bill Johnston
Reply to  old cocky
May 27, 2023 3:30 pm

Dear old cocky, more or less.

Does the mean of their differences equal zero (paired), versus are the means (of two populations) the same (unpaired)?

Cheers,

Bill

sherro01
Reply to  old cocky
May 28, 2023 5:37 pm

Old Cocky,
In the last months of 2022, WUWT kindly published three articles by me on uncertainty, the last one jointly with Tom Berger. One of them attracted over 800 comments, very large for WUWT.
Many of the matters we covered are being raised again here (but not Bill’s t-tests).
Some BOM technical reports were discussed in the overall category of uncertainty, particularly measurement uncertainty.
(My post-mortem reflection is that people are overall quite reluctant to change personal views that have given them comfort for years. Probably includes me. So I am in a phase of back to basics, like reading Richard Feynman books cover to cover, and more. Personal philosophy drives motivation, a curious property of human minds that I struggle to understand.) Geoff S

old cocky
Reply to  sherro01
May 28, 2023 6:11 pm

Yes, Geoff, the same discussions/arguments do seem to recur fairly regularly.

My hobby horse is that these arise less from personal philosophy than from educational/professional background leading to different perceptions of statistical measures and the appropriateness thereof.

The current differences of opinion between Jennifer and Bill regarding t-tests seem to be the result of these differences in backgrounds.

It’s rather frustrating overall, because there is a lot of technical expertise talking at cross-purposes 🙁

Jim Gorman
Reply to  old cocky
May 29, 2023 5:47 am

This is one place where plotting the trends can give a visual cue to the differences between existing datasets.

Jim Gorman
Reply to  sherro01
May 29, 2023 6:08 am

Climate science refuses to address measurement uncertainty properly. You simply can’t “average” measurements that each carry a +/- 1.0 uncertainty and expect to reduce the measurement uncertainty.

The SEM is not an indication of measurement uncertainty. It merely indicates how closely a sample mean estimates the population mean.

Sooner or later these statistical misinterpretations will come to light.

old cocky
Reply to  Jennifer Marohasy
May 26, 2023 11:51 pm

That was the Cape Otway thread?

cohenite
Reply to  Jennifer Marohasy
May 27, 2023 4:37 am

Hi Jen; great work. To refresh, a t-test simply measures the difference between the means of two sets of data. Two different thermometers, each recording temperature data at the same site, are perfect for t-testing. The null is that there is no difference. If there is a difference, then the natural conclusion is that there is a difference in how the different thermometers are measuring the temperature.

To your critics: KISS.

Jim Gorman
Reply to  cohenite
May 27, 2023 9:52 am

I agree.

From Paired Samples t-test: Definition, Formula, and Example – Statology

“A paired samples t-test is used to compare the means of two samples when each observation in one sample can be paired with an observation in the other sample.”

A paired test basically is used when a single subject is measured before and after some change. In other words, the same subject in each sample.

What is Paired Data? (Explanation & Examples) – Statology

“And when we’re working with unpaired data, we use an independent samples t-test to determine if the difference between the sample means is statistically significant.”

Each measuring device is independent of the other. The results of the measurements are independent, so the unpaired t-test is appropriate.

Jennifer Marohasy
Reply to  Jim Gorman
May 27, 2023 1:48 pm

Thanks Jim and Cohenite.

I pair the observations by day.

Every day is different. Every day the Moon and the Sun have shifted, causing changes in pressure and radiation that affect weather and temperatures.

Considering just Tmax for Brisbane and/or Mildura and daily data: I have two temperature series, one measured using a mercury thermometer and one using a resistance probe.

It is the difference between each of these pairs that is relevant.

There is no point comparing the temperature from the probe yesterday with the mercury today. The value is that the measurements were taken by different instruments on the same day.

:-).

Jim Gorman
Reply to  Jennifer Marohasy
May 27, 2023 2:04 pm

You have the correct analysis.

Nick Stokes
Reply to  cohenite
May 27, 2023 1:35 pm

“The null is that there is no difference.”

And that is what a t-test might refute. But it is a straw man. No-one maintains it. In the spirit of KISS, I imagined a conversation between BoM and Jen re Brisbane:
BoM: We’ve estimated the difference between AWS and LiG is 0.02°C
Jennifer: But my t-tests show statistically significant difference from zero
BoM: Yes, it is 0.02
Jennifer: But I can prove it isn’t zero
BoM: Yes, it is 0.02

Jennifer Marohasy
Reply to  Nick Stokes
May 27, 2023 1:42 pm

Hey Nick

You are out by an order of magnitude, and a bit more.

Nick Stokes
Reply to  Jennifer Marohasy
May 27, 2023 2:22 pm

From your article:

“According to the article in today’s The Guardian, the Bureau claims that the mercury at Brisbane Airport was on average within 0.02 C of the automatic probe for a period of three years. And I get the same result across the three years.”


old cocky
Reply to  Nick Stokes
May 27, 2023 3:02 pm

That illustrates one of the limitations of summary statistics. The chart of daily differences Jennifer posted on the earlier thread shows them initially negative, then jumping to positive.
Splitting into subsets at that break point would have given a different result.

Nick Stokes
Reply to  old cocky
May 27, 2023 3:15 pm

It might. But the point is, no-one is claiming it is zero.

Jennifer’s t-tests are based on summary statistics.

old cocky
Reply to  Nick Stokes
May 27, 2023 3:50 pm

the point is, no-one is claiming it is zero.

This is digressing even further, but somebody should at least be formally checking whether it is zero or not.

0.02 is no different to 0 or 0.0, but it is different to 0.00.
Given the measurement resolution, is 0.02 within the uncertainty interval of “zero”?

Nick Stokes
Reply to  old cocky
May 27, 2023 6:09 pm

Jennifer says that it is significantly different from zero. So, yes.

old cocky
Reply to  Nick Stokes
May 27, 2023 6:30 pm

I think you meant ‘no’, but I understand your point 🙂

Bill Johnston
Reply to  old cocky
May 27, 2023 11:36 pm

Dear old cocky, yes indeed!

Jennifer may feel the desire to explain what is going on.

I have just analysed the AWS data for a step-change on 1 January 2020 which, in the absence of thermometer data and a more careful examination of the AWS data, is a first approximation.

Due to the strong seasonal cycle, I deducted day-of-year averages and analysed the resulting anomalies using R, with the single step-change scenario as a factor variable.

It turns out that AWS Tmax stepped DOWN in December, in contradiction of what the differences in JM’s difference graph suggest.

This points to a problem, and Jennifer should explain how an apparent down-step in the AWS data became an up-step in the AWS-minus-thermometer differences!!

To quell arguments, here is the output, and attached is a box-plot:

Call:
lm(formula = Anom ~ Step, data = Dataset)

Residuals:
   Min     1Q Median     3Q    Max
-6.1392 -1.0892 -0.0192 1.0608 8.5218

Coefficients:
           Estimate Std. Error t value     Pr(>|t|)   
(Intercept) 0.52822   0.08931  5.915 0.00000000408 ***
Step[T.2]  -0.68902   0.10200 -6.755 0.00000000002 ***

Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 1.706 on 1562 degrees of freedom
Multiple R-squared: 0.02839,   Adjusted R-squared: 0.02776
F-statistic: 45.63 on 1 and 1562 DF, p-value: 2.005e-11

Here is the AOV table

> Anova(LinearModel.2, type="II")
Anova Table (Type II tests)

Response: Anom
         Sum Sq  Df F value   Pr(>F)   
Step      132.8   1 45.635 2.005e-11 ***
Residuals 4547.0 1562

While the step is significant, there is a lot of unexplained variation in the data (R^2 = 0.03!).
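For anyone wanting to reproduce this kind of fit, here is a hedged sketch of the workflow described above; ‘df’, with columns Date (class Date) and Tmax, is a hypothetical data frame standing in for the AWS record:

df$doy  <- as.integer(format(df$Date, "%j"))                      # day of year
df$Anom <- df$Tmax - ave(df$Tmax, df$doy)                         # deduct day-of-year averages
df$Step <- factor(ifelse(df$Date < as.Date("2020-01-01"), 1, 2))  # single step-change as a factor
fit <- lm(Anom ~ Step, data = df)
summary(fit)  # the Step coefficient estimates the shift across the breakpoint
anova(fit)    # with a single factor, this matches the Type II table above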

While I have an open mind, the question is: how does this translate into Jennifer’s repeated claims that the AWS PRT-probes caused data to be biased high?

Just a simple explanation will do. As it is getting late in the day (4.30 pm local time) and I have invested considerable unpaid time, and put up with more than enough finger-pointing, soon would be nice.

(Perhaps armchair warrior karlomonte would have some idea also … Oh, silly me!)

Yours sincerely,

Bill Johnston

BrisBoxPlots.jpg
Bill Johnston
Reply to  Bill Johnston
May 27, 2023 11:55 pm

Correction: it turns out that AWS Tmax stepped DOWN in January (or the end of December), in contradiction of what the differences in JM’s difference graph suggest.

old cocky
Reply to  Bill Johnston
May 28, 2023 1:33 am

Thank you, Bill. Now I have to delve back to that long-ago Biometry course. Oh, well, mental stimulation is supposed to be good for us.

Was the ANOVA just being thorough, or perhaps stirring the pot just a little?

Bill Johnston
Reply to  old cocky
May 28, 2023 3:00 am

No. The ANOVA tests the linear regression model, not the outcome.

In the model summary, the “Intercept” is the mean of Factor 1, and “Step[T.2]” is the difference between Factor 1 and Factor 2.

While the numbers are repeated, as in “F-statistic: 45.63 on 1 and 1562 DF, p-value: 2.005e-11”, the ANOVA shows the significance of the test against the (linear) slope = zero hypothesis. A low P is good; very low is better.

Cheers,

Bill

Bill Johnston
Reply to  old cocky
May 28, 2023 5:59 pm

Dear old cocky,

I had another look at the step-change analysis I did yesterday. Based on a CuSum curve and iterative analysis using R, I found two step-changes in the AWS probe data: on 9 January 2021 and 20 January 2023.

Contrary to JM’s assertions that AWS were exaggerating Tmax, an apparent trend of -0.28 degC/yr was explained by the two step-changes. The hypothesised disturbance in December 2019 (which I modelled yesterday) was not significant. The dataset was also very noisy.

For the data to show an apparent shift when they did, I think the thermometer data (and hence the differences) must be suspect/unreliable.

Graphs are attached.

All the best,

Bill Johnston

BrisbaneAP analysis.JPG
Bill Johnston
Reply to  Jennifer Marohasy
May 27, 2023 10:23 pm

Dear Jennifer,
 
I used PAST from the University of Oslo (https://www.nhm.uio.no/english/research/resources/past/) to calculate two statistics for Mildura (May 2000 to May 2012).
 
Firstly, I used Summary (PAST manual v.4, p. 47) to calculate the daily dataset mean with 95% bootstrapped confidence intervals. The bootstrap used 9999 iterations. It showed that the mean of your Tmax thermometer data would need to exceed 24.57 degC in order to be outside the CI range (of 24.14 to 24.57 degC). In difference terms, the mean of the differences would need to exceed 0.214 degC. As you know, bootstrapping is a re-sampling, non-parametric method of calculating CIs. You can probably do the same exercise using Minitab.
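In base R the same kind of bootstrap can be sketched in a couple of lines; ‘tmax’ below is a hypothetical vector of daily Tmax observations:

boot_means <- replicate(9999, mean(sample(tmax, replace = TRUE)))  # 9999 bootstrap resamples of the mean
quantile(boot_means, c(0.025, 0.975))                              # 95% percentile CI for the mean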
 
Next, I used F and T test from parameters (PAST Manual p. 62).
 
I used the overall mean (24.358 degC), calculated the variance (which, as you know, is the square of the SD: 7.484^2 = 56.009) and the number of samples (4628). Holding those values constant for Sample 2, but varying Tmax to simulate possible thermometer data, I calculated when the difference became significant according to the t-test. Here are the results:

Sample2         Psame            Delta
24.6               0.1198            0.24198
24.65             0.06056          0.29198
24.67             0.0449            0.31198
24.7               0.0279            0.34198
 
As this is a parametric test, the results are more exact than bootstrapping.

So, provided the data are independent, and holding the other parameters the same, the mean of Sample 2 (thermometer data) would have to exceed 24.67 degC to be statistically significant; a difference from AWS Tmax of 0.312 degC.
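The parameter-based test can be sketched in R from the summary statistics alone (this is my reading of what PAST’s ‘F and T from parameters’ does, not its actual code):

t_from_params <- function(m1, v1, n1, m2, v2, n2) {
  se <- sqrt(v1/n1 + v2/n2)                          # SE built from the two groups
  t  <- (m1 - m2) / se
  df <- (v1/n1 + v2/n2)^2 /
        ((v1/n1)^2/(n1 - 1) + (v2/n2)^2/(n2 - 1))    # Welch-Satterthwaite degrees of freedom
  2 * pt(-abs(t), df)                                # two-sided P-value
}
t_from_params(24.358, 56.009, 4628, 24.67, 56.009, 4628)  # returns ~0.045, close to the Psame of 0.0449 above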
 
This is the overall mean of respective populations, not their range or their outliers.   
  
Comparing these results with your difference graph for Mildura (https://i0.wp.com/wattsupwiththat.com/wp-content/uploads/2023/03/Scatter-retitled-annotated-copy.webp?fit=1024%2C512&ssl=1), only a few of those dots exceed 0.312 degC (which is the calculated P = 0.05 significance level for the mean difference). Therefore, despite a few outliers, as the mean difference in your graph is mostly within +/- 0.1 degC, it could not be claimed that AWS Tmax is significantly different from thermometer Tmax.
 
Going to Brisbane:
 
Airport data from 01 January 2019 to 11 April 2023
 
Firstly, I used Summary (PAST manual v.4, p. 47) to calculate the daily dataset AWS mean (25.73 degC) with 95% bootstrapped confidence intervals. The bootstrap showed that the mean Tmax thermometer data would need to exceed 25.91 degC in order to be outside the AWS CI range (of 25.56 to 25.91 degC). In difference terms, the mean of the differences would need to exceed 0.176 degC.
 
Next, I used F and T test from parameters (PAST Manual p. 62).
 
I used the overall mean (25.73 degC), the variance from the summary table (12.355) and the number of samples (1564). Holding those values constant for Sample 2, but varying Tmax to simulate possible thermometer values, I calculated at what temperature the t-test became significant. Here are the results:

Sample2         Psame            Delta
25.8               0.5722           0.07001
25.9               0.1739           0.17001
25.95             0.0788           0.22001
26                  0.0311           0.27001
26.1               0.0032           0.37001
 
So, holding variance and sample number the same, for a difference to be significant the mean thermometer Tmax would have to exceed 25.95 degC, a difference of 0.27 degC. I.e., the average difference, not the range, would have to shift north by 0.27 degC.
 
Comparing this to your graph, even with more data this seems highly unlikely.
 
Furthermore, I analysed for a step change in daily AWS anomalies (data minus day of year cycle), and while there was a disturbance it seems the problem is with the thermometer data, not the AWS data. (I re-did the analysis on first-differenced data, with the same result.)

Yours sincerely,
 
Bill Johnston
 

old cocky
Reply to  Bill Johnston
May 28, 2023 1:51 am

Bill,

I’m afraid I’m going to mount my hobby horse here (well, one of them).

Everybody still entangled in this thread has quite a technical bent (with the possible exception of those with a sulphur crest and raucous squawk), but from varying fields.

You have detailed what you did, but in the interest of ensuring common understanding, would you be so kind as to employ Feynman’s “janitor” approach to elucidate why you took that particular approach?

Bill Johnston
Reply to  old cocky
May 28, 2023 4:11 am

Dear (by now) ancient old cocky,

I looked at JM’s propositions in the light of my experiences, then decided to lightly test her evidence. Finding problems, I delved deeper and using accumulated knowledge and public-domain statistical tools I found more problems. From there I went back to basics and commenced a thorough investigation from the ground-up.

I had already investigated Townsville (https://www.bomwatch.com.au/data-quality/climate-of-the-great-barrier-reef-queensland-climate-change-at-townsville-abstract-and-case-study/). There was an overlap dataset that I’d ignored before, so I started with that. I realised that a protocol was needed to guide others, and using the Townsville overlap dataset I developed that protocol with the public-domain application PAST. I also made an Excel .xlsx workbook that provides a way forward for deducting the day-of-year cycle.

My colleague (who was waiting to board a plane at Sydney airport) posted that yesterday almost while he was boarding. The collage is here: https://www.bomwatch.com.au/bureau-of-meterology/why-statistical-tests-matter/. Anyone can use it or develop it further with acknowledgement. They can also relay concerns to me via http://www.BomWatch.com.au, or leave comment there (which at my discretion I’ll scrub – no crap).

Closer to the question: I then spent yesterday and today working through the various datasets relevant to this conversation, and some of that is reported here. Off to the side, since Wednesday last I have also carefully analysed an existing overlap dataset for Brisbane airport. The report is almost finished. It is essentially a follow-up that, as a replicate, tests the protocols I advanced using the Townsville dataset.

My firm view is that we cannot continue to fight fire by stoking a bigger fire with less fuel. In that respect, unless Jennifer can justify her methods, her approach is foolish in the extreme.

However, this does not absolve the BoM and all the other climate-clowns of their part in destroying Australia’s energy network and the economic future and opportunities of future generations.

While it does not mean her heart is not in the right place, Jennifer is misguided in this case. Meanwhile, the institution of government is thoroughly corrupt. Infested by corporate interests and saprophytes, our democracy is rotting from within and in turmoil.

Simply stated, focusing on crap gets in the way of exposing malfeasance and the real issues. (I’m also sick of spending my time fighting idiots!)

We need to refocus, and for that to happen this argy-bargy and opinionism has to be brought to a stop. By seeking centre-stage, Jennifer is not helping.

All the best,

Dr Bill Johnston

http://www.bomwatch.com.au

cohenite
Reply to  Nick Stokes
May 27, 2023 4:57 pm

No Nick, I am the Lion and you are the Tinman! The BOM is the Strawman and Dr Bill is the Wizard. Jen is Dorothy.

Nick Stokes
Reply to  cohenite
May 27, 2023 6:10 pm

I think she really got carried away this time.

Bill Johnston
Reply to  cohenite
May 28, 2023 1:03 am

Dear cohenite,

You are too generous by far, especially the one about the Lion. And which is the deep-thinking karlomonte >>> scarecrow-straw in the wind? Micky’s mouse, who knows; perhaps mouse’s Micky.

A tribal elder may not know the answer either … too deep. Rise to the top checking spelling, commas and word-length.… The greasy pole syndrome >>> the higher one goes the less one knows and the more one thinks they know. Preach comes to mind, and being ruled by fools …

Don’t side with OZ Energy Minister Chris Bowen. He could not think his way out of a kiddies’ blow-up pool, except by blowing up the pool. Fool. That is the mess we are in, and this stuff with JM does not help!

Civilized debate based on factual analysis has lost its place to universal shouting, selfies and tribalism. If you want some factual analysis of weather station data, go to http://www.bomwatch.com.au.

Cheers,

Dr Bill

Bill Johnston
Reply to  cohenite
May 27, 2023 3:17 pm

No cohenite, that is not entirely true. Your “a T test simply measures the difference between the means of 2 sets of data” is implicitly an unpaired or two-sample t-test, where variation is attributed to both “sets of data”. The test is whether the means are the same, and it is based on a pooled standard error.

There are strict rules for pairing. If the medium being measured is sensed by each instrument 100% of the time, without interference from exogenous factors, variation is attributable entirely to the subjects being compared. Thus, the same person or animal or pot-plant being measured before and after an intervention controls for within-subject variation. Two instruments in the same oil-bath, ditto. In these cases the test is whether the mean of the differences is zero. The paired test uses the standard error of the paired differences, which is a smaller number.

Here is the inside of the Townsville Stevenson screen. Thermometers and probes are offset, with the probes nearer the rear wall of the screen, which faces north in the southern hemisphere, and the thermometers near the front, where they can be observed from the open door without disturbing them (wet- and dry-bulb thermometers closest).

While temperatures will often be the same, due to positional effects and turbulence often they will not be. The use of paired t-tests in this case cannot be justified experimentally or logically (simply by looking, you can see the instruments cannot sample the same parcels of air 100% of the time).

But this is not the only problem in Jennifer’s approach. The second and more important problem is that she disregards the fundamental assumption that data for one time are independent of data for other times. As the data are strongly autocorrelated across all lags by the seasonal cycle, the results of her interminable use of the wrong test are invalid anyway.

For either test to be valid, serial independence is an absolute requirement. No matter who does it, autocorrelation invalidates the test, period.
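Checking that assumption takes two lines in R; ‘d’ below is a hypothetical vector of daily paired differences:

r1 <- acf(d, plot = FALSE)$acf[2]          # lag-1 autocorrelation
n_eff <- length(d) * (1 - r1) / (1 + r1)   # effective sample size under an AR(1) approximation

If n_eff is much smaller than length(d), the t-test’s standard error is understated and its P-values are overstated.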

Type in “paired t-test assumptions” (I get 20,900,000 hits in 0.39 seconds), so the test assumptions are no secret. Jennifer has known this for at least eight years. Her continuing down this path, brushing the issues aside, is highly misleading, unprofessional and ultimately destructive.

As she is making the claims, it is incumbent on her to justify her use of the paired t-test, and more importantly, verify that the test assumptions are not violated.

All the best,

Dr Bill Johnston

http://www.bomwatch.com.au

Townsville Screen_650.jpg
cohenite
Reply to  Bill Johnston
May 27, 2023 5:07 pm

Oh Gawd; what the 2 different thermometers are measuring is the same: the temp at the site at the same time. You can’t get more paired than that! Differences in the screens where the 2 different thermometers are housed are also to the point, as are the different adjustments the BOM may be making to the data from their new thermometers. The POINT is: are they getting warmer temps with the new gear/screens/homogenisation methods?

Jen says yes. You’re tilting at windmills and Nick is polishing his tin.

Bill Johnston
Reply to  cohenite
May 27, 2023 6:26 pm

No cohenite, they are not measuring the same thing. Jennifer’s data for Brisbane show they are not. The probes are at least 100 mm closer to the rear of the screen, which faces the sun, than the Tmin and Tmax thermometers (which are horizontal).

Both instruments are calibrated under laboratory conditions. The data they produce is the temperature of the air measured at two different positions, and probably at two different times in the Stevenson screen.

The paired t-test has strict rules, and as I have said, under those rules it uses a smaller standard error value. There is also no homogenisation of incoming daily temperature data.

Ascribing variation in the medium being measured to ‘instruments’ is invalid and misleading to everyone.

If you want to get paired, you would use an oil- or ice-bath in a lab.

Autocorrelation is the more important problem and Jennifer has not mitigated its effects.

Yours sincerely,

Bill Johnston

cohenite
Reply to  Bill Johnston
May 27, 2023 7:52 pm

There’s your answer Jen: get in an ice bath with the 2 thermometers. Otherwise there’s no possible way to explain the higher temps the BOM is getting with the new stuff.

And don’t forget Jen if you froze in the ice bath today chances are you’re going to freeze tomorrow: that’s called auto-freezing.

Bill Johnston
Reply to  cohenite
May 27, 2023 8:20 pm

Unable to sustain a case, I see cohenite has now decided to resort to dull humor.

To be clear, I did not suggest that Jen should get into an ice-bath.

Kind regards,

Bill

cohenite
Reply to  Bill Johnston
May 27, 2023 9:20 pm

Well ok, you’re the one who brought up ice baths. If you don’t think she should get in an ice bath with her thermometers, what sort of bath do you think she should get into?

But all seriousness aside, let’s be blunt: do you think the new BOM thermometers have artificially increased temperatures?

Bill Johnston
Reply to  cohenite
May 27, 2023 10:01 pm

No I don’t.

In almost all cases where I could cross-corroborate using independent sources of information (aerials, docs & plans, etc.), Tmax increased due to station changes, including variously the change from 230-litre to 60-litre screens, undocumented site changes (moving the site at Townsville, for instance), spraying out the grass (see the pic for Amberley), etc. etc.

Exhaustive analyses of individual sites over several years are presented at http://www.bomwatch.com.au as front-stories, with detailed reports associated with each case and datapacks for those fake factcheckers out there.

My problem with what is happening here is that this stuff is small bikkies and unlikely to stick. I am also concerned that JM’s full-frontal on Andrew Johnson, Greg Ayers and Anne Warne will create blow-back and reputational damage. However, it is too late now.

On the other hand, the BoM has been loose with the truth and has deliberately fudged data to support the warming narrative. While this is provable, much of what JM is stirring-up is not.

This statistical thing is ultra-annoying. Most people don’t understand it and therefore, aside from tossing brick-bats, can’t add usefully to the discussion anyway.

Yours sincerely,

Bill Johnston

http://www.bomwatch.com.au


AmberleySite.jpg
cohenite
Reply to  Bill Johnston
May 28, 2023 2:47 am

Ok, so you think the BOM has artificially increased temperatures, but not for the same reason as Jen does; although Jen has noted that site moves and other physical factors have contributed to an artificial increase. So, what is your best guesstimate of the amount of artificial increase in temperature produced by these physical factors?

Jim Gorman
Reply to  cohenite
May 27, 2023 7:10 pm

LOL. You are on point.

karlomonte
Reply to  Bill Johnston
May 27, 2023 6:40 am

Thanks for the abbreviated BillRant this time, Bill.

Bill Johnston
Reply to  karlomonte
May 27, 2023 3:23 pm

No worries karlomonte. I try to make allowance for those who have trouble with long sentences and complex concepts.

Kind regards,

Bill Johnston

Nick Stokes
Reply to  Jennifer Marohasy
May 26, 2023 8:59 pm

“My colleagues at the IPA, and journalists at The Australian, and so many others have ‘gone to ground’ on this. They are hiding. At least for the moment.”

The likely reason is that, despite the ridiculously triumphalist headline here, it just isn’t newsworthy. All you have is that an unnamed person told you that the BoM was planning to upgrade something unknown, at some unspecified time in the future.

Peta of Newark
May 26, 2023 5:40 pm

It suddenly dawned, earlier today, that I’ve been running a neat little experiment without realising.
(It’s to do with my endless ravings about soils, plants and deserts)

How it happened came from my trying to trace some really peculiar night-time temperature graphs recorded by my local Wunderground stations, and it’s resulted in me setting up 2 identical dataloggers in my garden here on The Fen.
(One is my ‘normal’ climate logger, recording at 30-minute intervals, and the other is recording very fast, at 3-minute intervals.)

Attached is a crappy photo of my Stevenson-design datalogger, next to a 1.75-litre Coke bottle for scale.
The actual logger itself (Elitech RC51 or Lascar) fits snugly into the white tube sticking out the bottom.

They are identical twin loggers in identical housings/screens/enclosures, about 30 metres apart and both suspended 6 feet above a green grassy lawn.

It’s just that one is dangling from a branch of an evergreen ‘specimen’ tree ## and the other is dangling off a wooden post intended for hanging a washing line off of.
The washing pole one is more exposed than a really exposed thing = no available shade for miles around

## The tree is on its own, a moderately large bush shape (not a conventional Xmas-tree shape) – its trunk will be 12″ diameter, ish.
No branches near the ground, you can walk right up to the trunk without ducking

Anyway, as it was quite sunny, I thought I’d compare the instantaneous readings on the loggers – and I was gobsmacked/horrified.
(More research needed here or what!!!)

What I found, and just taken another reading that makes it worse, is the following:

Near solar noon, the thermometer under the tree reads 4°C cooler than the one on the washing-line pole (17°C vs 21°C)

At sunset, the one under the tree reads 1°C warmer than the washing-pole (13°C vs 12°C)

And now, at 01:00BST, the one under the tree is 3°C warmer than the one on the pole (8.6°C vs 5.7°C)

How do Green House Gases explain that?

By-the-by: there is now already soooo much dew on the grass you could nearly damn well go swimming, just on my lawn, right now.

And that is why, contrary to everything that everybody knows, it is why plants on fertile soil don’t need extra water or irrigation.
What they breathe out during daytime, they soak right back up at nighttime.
Add that to there being probably 10 or a dozen water molecules attached to every CO2 molecule they suck in, and they don’t need extra water. On Fertile Soil.

Petas Temp Logger.JPG
RickWill
Reply to  Peta of Newark
May 26, 2023 8:51 pm

How do Green House Gases explain that?

The Green House Gas can be whatever you like, just don’t confuse it with something that regulates Earth’s energy balance.

Your different gauges are mostly indicative of the ground temperature under them, provided air movement is low, which is likely the case if dew is present. The ground is usually the warmest location and the temperature drops off with altitude. A metre or two above ground is near enough to the ground.

By 0100, the ground beneath the pole has been collecting dew, which equalises the ground temperature with the atmosphere at the dew-point temperature. So the ground is now at the dew point and will gradually cool till the early morning under still conditions.

The ground under the tree is protected from dew and does not have a clear view of the sky, due to the presence of the tree, so it has a lower cooling rate. There is no dew to cool it, and low radiative loss, as its view is of the tree, which will be at a similar temperature to the ground below. The tree and its roots have thermal inertia.

Trees are like watered-down water. They have quite a lot of thermal mass and play a significant role in moderating ground temperature.

When conditions are calm, the ground temperature will be near constant once the dew point is reached. Heat is still being lost, but mostly as latent heat higher in the atmosphere, so the ground temperature declines slowly once the dew point is reached.

Jeff Alberts
Reply to  Peta of Newark
May 27, 2023 8:57 am

“It suddenly dawned, earlier today, that I’ve been running a neat little experiment without realising.”

And it involves you consuming mass quantities of double-glazed donuts, I’ll wager.

Mike
May 26, 2023 11:24 pm

“Meanwhile, the Bureau’s management, including Johnson, continues to lament the need for all Australians to work towards keeping temperatures below a 1.5 degrees Celsius tipping point.”

How on EARTH can Australians “work towards” keeping temps below 1.5? Because if all of the 1C over the last century is from human CO2, all we could possibly do is reduce temps by 0.01-0.02C sometime in the distant future, AND spend a trillion dollars trying to do it. What the hell is wrong with these people?? Of course it is HIGHLY unlikely that much more than a fraction of the 1C rise is anthropogenic, so once again we are talking about fairies on the head of a pin. Money well spent, reducing the planet’s temps by a thousandth of a degree!
If that is not the very definition of insanity, someone tell me what is.

Ben Vorlich
May 27, 2023 1:15 am

Out of curiosity, do LIG thermometer scales take into account thermal expansion/contraction of the glass?

Bill Johnston
Reply to  Ben Vorlich
May 27, 2023 3:25 pm

Of course. They are calibrated against the scale on the glass.

bill

SteveG
May 27, 2023 1:45 am

— Story Tip —

CSIRO is responsible for this latest piece of — computer simulation —

“Dangerous slowing of Antarctic ocean circulation sooner than expected” !!!
Excerpt from media release.. —

authors (of the study) used observational data gathered by hundreds of scientists over decades and then filled the gaps with computer modelling..

Dangerous slowing of Antarctic ocean circulation sooner than expected (msn.com)

Jeff Alberts
May 27, 2023 8:43 am

“satellites that are measuring temperatures at different depths within the atmosphere use these resistance probes.”

Satellites put physical probes into the atmosphere?

Jennifer Marohasy
Reply to  Jeff Alberts
May 27, 2023 11:48 am

Jeff, amongst other things there is equipment on satellites used to measure the Earth’s temperature, including platinum resistance probes. I sought clarification from John Christy yesterday:

“The satellite essentially measures the intensity of the microwave emissions from (1) the earth’s atmosphere, (2) a warm target on board and (3) cold space. 

The Earth views have intensity levels between those of the cold space reading and that of the warm target plate. 

The Platinum Resistance thermometers are embedded in the warm plate so we can know what the temperature of that target is. So, we know (a) the warm target temperature and its measured emission intensity, (b) the cold space temperature and its measured intensity, and (c) we know the measured intensity of the Earth views, so we can interpolate to get the brightness temperature of the Earth views.

So platinum resistance thermometers play a role.

I like to say that the satellites measure the temperature of the bulk troposphere (i.e. lowest 75 percent of the atmosphere). A simple time series should do.

Satellite and surface data measure different quantities, but should have very similar trends over long time periods – though in maritime areas this doesn’t quite hold. The thing about the tropospheric data is that this is the place where the greenhouse effect should be largest and clearest to detect. So, it is a critically important metric to consider. [end quote]
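As a toy illustration of the interpolation Christy describes (all numbers below are invented; the real processing is more involved):

t_cold <- 2.7;  t_warm <- 295    # kelvin: cold space, and the PRT-measured warm target
i_cold <- 0.05; i_warm <- 5.0    # measured intensities (arbitrary units)
i_earth <- 3.6                   # measured Earth-view intensity
t_cold + (i_earth - i_cold) * (t_warm - t_cold) / (i_warm - i_cold)  # interpolated brightness temperature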

Jeff Alberts
Reply to  Jennifer Marohasy
May 27, 2023 2:55 pm

Understood.

The way it was worded was telling me that direct atmospheric probes were somehow in play.

Thanks for the clarification.

Jennifer Marohasy
Reply to  Jeff Alberts
May 27, 2023 3:43 pm

Thanks. As I acknowledged in a comment to Rud earlier in this thread, I am not sure I got the detail correct exactly as I wrote it in the above article. I was writing from memory about satellites and platinum probes, and in something of a rush.

But the points I made are nevertheless important and valid:

Platinum resistance probes can be used in controlled environments, e.g. on satellites, to accurately measure temperature. These same devices may NOT be suitable in environments with a lot of electrical noise/potential interference. For an accurate measurement, the electrical current needs to be constant.

real bob boder
May 27, 2023 5:32 pm

As I have stated many times these probes all slowly fail in the same direction, and guess which direction that is?

Brigun2546
May 27, 2023 8:06 pm

There are published concurrent daily max and min temperature data for both glass and probe thermometers at Marble Bar (in NW Australia). I have compared the data for three years, 2003-2005, and found significant differences between the two sets of data. About 3% of days had differences exceeding 1 degC. The largest difference was 4.9 degC. https://briangunterblog.wordpress.com/2023/05/21/marble-bar-temperature-comparisons/

Bill Johnston
Reply to  Brigun2546
May 28, 2023 5:41 pm

Dear Brigun2546,

I tried to contact you a week or so ago re an email you sent, but the reply bounced. Try again if you would like.

Cheers,

Bill

Jennifer Marohasy
May 28, 2023 12:37 pm

Following are the key questions that I am looking to pass across to be asked in the Australian Parliament.

“Can the Australian Bureau of Meteorology please confirm the nature of the electrical problems causing artificial variations in daily temperatures across the automatic weather station network, in particular:

1.     Is it true, given the nature of the platinum resistance probes and how they are hooked up to the data loggers, that applying a 100 Hz frequency to a power circuit to extend the life of a battery – necessary with solar systems – can cause maximum temperatures to drift up by more than 1.5 degrees Celsius on sunny days? (To be clear: as the electrical current increased, the recorded temperature increased, in addition to any actual change in air temperature.)

2.     Can the Bureau confirm the number of remote locations where temperatures have dropped by more than 1.5 degrees Celsius on the hour, at the same time every hour through the night, as the battery is drained with each satellite upload of temperature data?

3.     Can the Bureau confirm that upgrading power supplies in 2012 caused a 0.3-to-0.5-degree Celsius increase across 30 percent of the Australian network?

4.     When is the Bureau going to inform university and CSIRO scientists of potential problems with the temperature data it has supplied following the 2012 upgrade? (The increase in temperatures was reported by David Karoly and Sophie Lewis, for example, as due to greenhouse gases, when it may, at least in part, have been due to changes in the power supply to the AWS network.)

5.     Which overseas model for measuring temperatures is the Bureau going to adopt as a replacement for the current AWS network that has proven unreliable? 

Ends. 

I am keen, in the first instance, to keep the focus on electrical issues.

There will be opportunity down the track, hopefully, to comment on the simulation models, including for El Niño, the compounding effects of homogenisation/remodelling, the need to move to a more scientific approach to measuring climate change beyond the current reliance on a single maximum and minimum for each day, etc.
