Niccolò Machiavelli

Fake Analysis by Greg Ayers and Jane Warne – Because the End Justifies the Means

From Jennifer Marohasy’s Blog

Jennifer Marohasy

I estimate the Bureau have about 200,000 parallel temperature records. These are handwritten records of temperatures measured at the same place and the same time from a traditional mercury thermometer and the newer resistance probes. I have been very critical of the Bureau for not making this data public, so we can see the extent to which the measurements match. The A8 reports (with both measurements) were the focus of a Freedom of Information request by John Abbot, in which the Bureau initially claimed the reports did not exist, before eventually releasing 1,094 of them for Brisbane Airport, representing just three of the 15 years of data that the Bureau holds for this site. And that was only after Abbot took the Bureau to the Administrative Appeals Tribunal.

Noble cause corruption is a term invented by the police to justify fitting up people they believe to be guilty, but for whom they can’t muster forensic evidence that would satisfy a jury. It is a crime, but it is very difficult to get a conviction when the person on trial is a police officer.

It is like this with the Australian Bureau of Meteorology. They have several methods that falsely suggest catastrophic global warming, but no one seems to want to undertake any sort of investigation.

It is not as though this is without consequence, given the high level of public interest in record temperatures as well as the significant financial stakes involved.

Contrast this with the fisherman who was recently convicted of stuffing fish with lead weights to win thousands of dollars in a fishing tournament. He was arrested by the police, convicted, lost his license and his boat worth US$100,000. He may go to jail.

A week ago, The Guardian reported on the fake experiment by Greg Ayers and Jane Warne published in the Journal of Southern Hemisphere Earth Systems Science – except Readfearn didn’t call it fake.

Graham Readfearn, writing in The Guardian, described it as a direct comparison between the Bureau’s method for recording daily maximum temperatures and the method recommended by the World Meteorological Organisation. But the values listed in Table 1 as the maximum temperature for Darwin are not the same values the Bureau has recorded for Darwin.

I call the experiment fake because it is an imitation. It confounds datasets – and if not deliberately, then why? I have previously written to Jane Warne about this, but she does not reply.

Instead of listing the highest of the one-second readings taken each day, in Table 1 Ayers and Warne have fudged, and listed the highest of the last one-second spot readings from each minute of that day. The difference is significant: in the case of 6th April 2018 at Darwin – their first listed measurement – the value is out by a whole degree Celsius!

In fact, it rather makes my point that the Bureau’s custom-designed resistance probes are all over the shop.

This is one way of describing the three years of recordings from the probe compared to the mercury thermometer at Brisbane Airport. My analysis of the limited data that Abbot was able to secure for Brisbane Airport shows the probe read higher than the mercury 41% of the time, and sometimes by up to 0.7 degrees Celsius. The difference is statistically significant and not randomly distributed. It will produce more record hot days for the same weather.
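A minimal sketch in Python of how such paired probe/mercury maxima can be summarised; the numbers below are invented stand-ins, since the A8 data itself is not reproduced here:

import random
from statistics import mean

random.seed(1)
# Hypothetical paired daily maxima (degrees C); invented, not the A8 data
mercury = [round(25 + random.uniform(-3, 3), 1) for _ in range(100)]
probe = [round(m + random.choice([-0.1, 0.0, 0.1, 0.2, 0.3]), 1) for m in mercury]

diffs = [round(p - m, 1) for p, m in zip(probe, mercury)]
higher = sum(d > 0 for d in diffs)

print(f"probe read higher on {higher}/{len(diffs)} days ({100 * higher / len(diffs):.0f}%)")
print(f"mean difference {mean(diffs):+.2f} C, largest {max(diffs):+.1f} C")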

That the Ayers and Warne paper claims to represent the Bureau’s method but does not is a point of fact easily checked.

Compare the very first value listed as the maximum temperature for Darwin, for the 6th April 2018, in the Bureau’s ADAM database (click here) with the value listed in Ayers and Warne (click here).

This difference illustrates the point that I have been making for some years: by taking the highest spot reading, and not averaging, the Bureau is overestimating maximum temperatures, and by some significant amount. In this case, one whole degree Celsius.

This, combined with setting a cold limit, something that I have also written about (click here), means that university researchers relying on Bureau data have been able to falsely claim that ‘record hot days are now 12 times more likely in Australia than days of record-breaking cold’. So wrote Peter Hannam from the Sydney Morning Herald quoting Sophie Lewis and Andrew King from the ARC Centre of Excellence for Climate System Science. This fits the human-caused global warming narrative that is a reliable source of funding for academics, catastrophe stories for mainstream media, and government subsidies that prop up renewable energy industries.

It is an absolute travesty.

Instead of journalists like Graham Readfearn from The Guardian accusing me of harassment, and more recently Kate Tran from the AFP undertaking a very fake ‘fact check’, someone should actually check my facts. My facts are the only ones that will withstand scrutiny. Of course, the Australian Broadcasting Corporation has jumped on Readfearn’s bandwagon, but not a single ABC journalist will put their name to the slander that also accuses me of being a conspiracy theorist.

In particular, Ayers and Warne cannot compare the last second in each minute with the readings from the mercury thermometer and claim their analysis to be a test of the Bureau’s method, because the Bureau log the highest one-second reading in each minute as the daily maximum. The much-cited Ayers and Warne peer-reviewed study is classic ‘bait and switch’.

To understand more about how science and public policy have been corrupted by the noble cause of environmentalism, consider reading an important book by Aynsley Kellow published in 2007 by Edward Elgar, click here.

The handwritten temperatures from the probe and the mercury thermometer as recorded on 14th October 2020 at Brisbane Airport, showing a difference of 0.7C, with the probe recording warmer.

I will continue this story as part of a series I’m calling ‘Jokers, Off-Topic Reviews and Drinking from the Alcohol Thermometer’. This post is Part 5. To read some of my previous scribbles:

The Guardian, Temperatures, Misinformation (Part 1) – sets up the query and anticipation.

https://jennifermarohasy.com/2023/05/the-guardian-temperatures-misinformation-part-1/

The Coronation & The Guardian, Temperatures, Misinformation (Part 2) – more information in an expanded context that is global.

https://jennifermarohasy.com/2023/05/the-coronation-the-guardian-temperatures-misinformation-part-2/

Jokers, and Temperature as Radio Chatter, Part 3 – airports in focus.

Joker, Killing Dissent While Calling it Debate, Part 4 – politics in focus.

https://jennifermarohasy.com/2023/05/jokers-killing-dissent-while-calling-it-debate/

And before I got to this fake comparison, my good friend Ken Stewart posted something very relevant here: Who’s Laughing Now – malfeasance by omission.

John Abbot and me outside the Administrative Appeals Tribunal on 3rd February 2023.
149 Comments
MarkW
May 13, 2023 6:55 pm

Since we already know what the right answer is, what’s the problem with adjusting the data to match it?
/sarc

Vlad the Impaler
Reply to  MarkW
May 13, 2023 7:57 pm

One of my professors once told us, and I swear he was being serious (poor bloke didn’t have a humerus bone in his whole body), “If the data do not fit the model, then obviously the data are wrong!”

I have witnesses who heard the exact same thing I did, those many decades ago. And I do not think that we were the first class he said that to. I recall some grad students who told us to keep lots of salt handy when taking this individual’s class.

Vlad

MCourtney
Reply to  Vlad the Impaler
May 14, 2023 12:20 am

He sounds ‘armless.

Vlad the Impaler
Reply to  MCourtney
May 14, 2023 8:50 am

I thought mine was good, but yours was better. Let’s try this:

‘We did our best to disarm him.’

Yes, he self-identified as a “he” before it was fashionable.

I also self-identify as a male, for today, that is. Tomorrow I’m going to be a gay, transgender sea anemone.

JLL3Sonex
Reply to  Vlad the Impaler
May 14, 2023 12:24 am

Sadly, I’ve seen a lot of that. If reality doesn’t match the model, it’s not the model that’s wrong because computers don’t make mistakes… as far as they’re concerned.

However, with a background in IT I know that the computer only does what it’s programmed to do. It doesn’t make judgements, it doesn’t think, it can’t look at data coming in or results going out and go “You know, that doesn’t look right at all.” Put a line in a program that randomly adds 0.1 to various data points before a value X is calculated from them – and it’ll do it without question. Want a curve to go in a certain direction? The computer will calculate it, and not care.

This is why engineering models are so very carefully vetted. Without proper data and computations that building won’t stand, that bridge will fall, that spacecraft won’t make orbit. The models are compared against reality, tweaked until they match – then tested, tested, and tested again.

CAGW models are run, the results compared against reality – and the data from the past they use is ‘adjusted’ just a little bit, and the computations are tweaked just a touch until the results are what they want.

Does it reflect reality? Well, that doesn’t matter. The model is the important thing after all – and who would dare disbelieve what comes out of a computer?
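To make that concrete, here is a toy Python sketch (entirely hypothetical, not any model’s actual code) of a program that silently biases its inputs before computing a result:

import random

random.seed(0)
readings = [20.0 + random.gauss(0, 0.5) for _ in range(1000)]

def adjusted_mean(data):
    # The hidden "adjustment": add 0.1 to roughly half the points before
    # averaging. The computer applies it without question.
    tweaked = [x + 0.1 if random.random() < 0.5 else x for x in data]
    return sum(tweaked) / len(tweaked)

print(f"raw mean:        {sum(readings) / len(readings):.3f}")
print(f"'adjusted' mean: {adjusted_mean(readings):.3f}")  # about 0.05 warmer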

Reply to  JLL3Sonex
May 14, 2023 6:22 am

The climate models have been around for years. You would normally expect them to start to converge into a narrow band of outputs and their “ensemble” (what a crappy description) would get closer and closer to reality. Yet year after year that never happens. In fact they get *further* from reality every year.

Hivemind
Reply to  JLL3Sonex
May 15, 2023 1:29 am

Computers aren’t intelligent. They only think they are.

Nick Stokes
May 13, 2023 8:47 pm

“This difference illustrates the point that I have been making for some years: by taking the highest spot reading, and not averaging, the Bureau is overestimating maximum temperatures, and by some significant amount. In this case, one whole degree Celsius.”

Anything to bash the BoM, but this is getting desperate. The link to the ADAM data doesn’t work, but it’s here. Here is an image:
[image]

The other numbers match. The replacement of 33.2 by 32.2 on 6 April is a typo.

old cocky
Reply to  Nick Stokes
May 13, 2023 9:02 pm

Would data have been manually transcribed? That seems extremely time-consuming and error prone.

Nick Stokes
Reply to  old cocky
May 13, 2023 9:49 pm

Evidently, yes. Ayers is filling out a table of an irregular sequence of dates for his paper.

old cocky
Reply to  Nick Stokes
May 13, 2023 10:47 pm

Scientists should never be allowed anywhere near a computer 🙁

Ben Vorlich
Reply to  Nick Stokes
May 13, 2023 11:35 pm

Nick Stokes
You’ve just confirmed poor science.

The saying “Measure twice, cut once” is a classic adage that has been passed down through the ages and is still relevant today. It is a reminder to take the time to double-check measurements and ensure accuracy before taking any irreversible actions.

Nick Stokes
Reply to  Ben Vorlich
May 14, 2023 1:44 am

Ayers is writing a paper. He is not taking any irreversible actions. No such actions will be taken because they thought Darwin on 6 April 2018 was 33.2 instead of 32.2.

Ben Vorlich
Reply to  Nick Stokes
May 14, 2023 2:18 am

So he/she made one transcription error and that’s fine because they’re not taking irreversible actions?

John McEnroe applies

How many other unchecked errors are there if they miss something as fundamental as that? How many politicians WILL take irreversible actions based on sloppy/deceitful work like this?

Reply to  Ben Vorlich
May 14, 2023 5:15 am

You nailed it!

harryfromsyd
Reply to  Nick Stokes
May 15, 2023 12:54 am

It’s pretty irreversible if the flaws in his paper justify the BoM’s current methods and it turns out to be wrong. What’s your plan for reversing the decades’ worth of incorrect data?

siliggy
Reply to  Nick Stokes
May 14, 2023 1:13 am

“The other numbers match.”
No they don’t. Check June 11 both min and max.

Nick Stokes
Reply to  siliggy
May 14, 2023 1:42 am

OK, after 24 exact matches of both max and min, another typo, writing 13.8 for 31.3. Maybe that means that Ayers is not a perfect proof-reader. What it does not mean is what Jennifer tries to make of it:
“Instead of listing the highest of the one-second readings taken each day, in Table 1 Ayers and Warne have fudged, and listed the highest of the last one-second spot readings from each minute of that day. The difference is significant: in the case of 6th April 2018 at Darwin – their first listed measurement – the value is out by a whole degree Celsius!”
All built on a typo!

siliggy
Reply to  Nick Stokes
May 14, 2023 1:52 am

“All built on a typo!” If it is typos, it would be 3 so far: 1 on April 6 + 2 on June 11 is three, not “a typo”.
You could at least try to count to the 3 proven differences so far instead of adjusting the count down.

Nick Stokes
Reply to  siliggy
May 14, 2023 2:00 am

“a typo”
Jennifer built it all on April 6.
But OK, there are 54 numbers there, with 51 exact matches, and three cases where just one digit is wrong. Does that sound like “Fake Analysis” or just fumble fingers?

siliggy
Reply to  Nick Stokes
May 14, 2023 2:11 am

What does finding so many contradictory numbers so fast say about the peer review process, and the oversight of how it got to be published? What can you tell me about the publisher?

Nick Stokes
Reply to  siliggy
May 14, 2023 2:39 am

Peer review isn’t expected to cover proof-reading numerical tables for typos.

Reply to  Nick Stokes
May 14, 2023 5:20 am

Of COURSE peer review is expected to cover proof-reading numerical tables when those tables are driving the conclusion!

It is the *data* that drives everything, including proving a hypothesis!

MarkW
Reply to  Nick Stokes
May 14, 2023 8:13 am

Of course not, the purpose of proof reading is to ensure the conclusion matches the officially acceptable position.

Ben Vorlich
Reply to  Nick Stokes
May 14, 2023 2:21 am

Nick Stokes
See my reply to your previous squirming about just one typo not being important.
Your PERSISTENCE is admirable, in a strange way; your credibility, zero.

Nick Stokes
Reply to  Ben Vorlich
May 14, 2023 3:42 am

Whether or not one typo is important, it doesn’t justify Jennifer’s claim of “fake analysis”.

Jim Gorman
Reply to  Nick Stokes
May 14, 2023 4:45 am

Nick,

Two authors who didn’t check each other? Two authors who didn’t graph their data to see if there were outliers to investigate?

I don’t know what you think scientific data analysis consists of, but quality control of what is being analyzed should be part of the process. Just punching data and having a statistics package analyze it is not doing science!

MarkW
Reply to  Jim Gorman
May 14, 2023 8:14 am

For “scientists” like Nick, quality control is only done on results, not data or methods.

Reply to  Nick Stokes
May 14, 2023 5:18 am

3/54 is about 6%. That’s a *terrible* record for transcription. Doesn’t matter what the reason is – it is just terrible no matter the reason.

MarkW
Reply to  Nick Stokes
May 14, 2023 8:11 am

Notice how quick he is to defend the inadequacies of his side’s “science”.
If they want to do science, they need to have mechanisms in place to validate the data that they use, before they use it.
Instead, like most climate scientists, they just assume that if the process produces the correct answer, then the process itself must be correct.

MarkW
Reply to  siliggy
May 14, 2023 8:07 am

Sounds like Nick is confirming that, before the era of automated data collection, the data is not good enough to use to determine what the temperature of the earth really is. Especially not to within a few hundredths of a degree.

MarkW
Reply to  Nick Stokes
May 14, 2023 7:59 am

I notice that Nick is much more upset that the BoM is being criticized than he is over why they are being criticized.

Nick Stokes
May 13, 2023 9:02 pm

“I have been very critical of the Bureau for not making this data public, so we can see the extent to which the measurements match.”

And yet Jennifer, having brought on much public expense in obtaining the data, will not herself make it public. Not even the numbers that she graphed.


Mr David Guy-Johnson
Reply to  Nick Stokes
May 13, 2023 9:57 pm

Liar

Nick Stokes
Reply to  Mr David Guy-Johnson
May 13, 2023 10:04 pm

Where are the numbers?

leefor
Reply to  Nick Stokes
May 13, 2023 10:12 pm

Or yours?
BTW you can download as PDF and convert to Excel.

Nick Stokes
Reply to  leefor
May 13, 2023 10:44 pm

Plenty of my numbers are here

There is no PDF linked with Jennifer’s numbers.

cilo
May 14, 2023 12:28 am

Zero point seven degrees. One degree, point five, one point one…
The fact we argue about trivialities could be a huge lark, if people like Nick Stokes did not go around wasting millions of our taxes on pretending this is actually relevant.
Tell you what, Nick: I’ll design an automated probe using mercury, for free, with comms to any recorder you send me the specs for, but in return I ask just one thing: You let us see the data real time.
After all the millions spent on modernising your equipment, why can we not see all that scary data real time on the internet?
That would certainly stop all these stupid arguments over half a degree, right? Geez, I’ll even accept your bloody platinum jewelry probes, but put those stations back on the air, where they once were, if I understand correctly?
And, Nick, before you cry about cost, please spare the crocodile tears; you have no problem spending billions enforcing your lies, you have no problem buying platinum wire when carbon would have done, now use some of those millions, to give us the unimproved truth.

Nick Stokes
Reply to  cilo
May 14, 2023 1:04 am

“why can we not see all that scary data real time on the internet?”

You can. Take my state, Victoria. On this page, you can see every AWS, and some others. The AWS data were updated every 30 min – looks like it is now every 10 min. You can switch to my city, Melbourne. Here you have data for every half hour in the last three days.

harryfromsyd
Reply to  Nick Stokes
May 14, 2023 2:18 am

Looking at the Melbourne Airport data the timings look pretty random, just like you’d expect from a bunch of amateurs trying to do “science”.
12/11:52pm 11.1 10.0 10.6 97 0.3 WSW 7 11 4 6 1031.9 1032.0 0.0
12/11:30pm 10.6 8.9 10.0 96 0.3 W 9 9 5 5 1031.9 1032.0 0.0
12/11:25pm 10.8 9.6 10.2 96 0.3 W 7 9 4 5 1031.9 1032.0 0.0
12/11:22pm 10.9 9.7 10.3 96 0.3 WNW 7 9 4 5 1031.9 1032.0 0.0
12/11:12pm 11.0 9.5 10.4 96 0.3 WNW 9 11 5 6 1031.9 1032.0 0.0
12/11:00pm 11.2 9.4 10.7 97 0.3 WNW 11 11 6 6 1032.0 1032.1 0.0

Nick Stokes
Reply to  harryfromsyd
May 14, 2023 2:35 am

They aim to present half-hour readings, and they have done that. Sometimes they insert extra readings if there is something to follow, like a cool change. I’m not sure why the extra data is here.

harryfromsyd
Reply to  Nick Stokes
May 14, 2023 2:56 am

“I’m not sure why the extra data is here.”

That’s comforting.

I can’t see the point: we have no idea what criteria they are using to insert an additional data record, we don’t know if they are doing it all the time, so it’s unreliable and adds no information, since we don’t know if similar events are happening during the large swathes where only 30-minute times are recorded.

It also contradicts your prior claim “looks like it is now every 10 min”.

And what on earth does each data line represent? An X minute average (where X appears to be selected using a roll of a dice)? The last measurement at the 30 minute slot? The last minute at the 30 minute slot? Who on earth knows.

Nick Stokes
Reply to  harryfromsyd
May 14, 2023 3:39 am

“It also contradicts your prior claim ‘looks like it is now every 10 min’.”
That referred to the whole state page. I said explicitly that the pages for each site are half hourly.

“it’s unreliable”
You don’t have to rely on it. They give the half-hour values, as promised. They have an algorithm that picks out intermediate values if it seems something interesting is happening in between. It looks like in this case the selection may have picked out some that aren’t of interest. Nothing is lost by doing that except bulking the page.

“An X minute average”
No, it is the most recent 1sec reading at the time specified. Jennifer has odd ideas about that, but the point of Ayers’ paper is to show that that reading is representative of at least the previous minute.

harryfromsyd
Reply to  Nick Stokes
May 14, 2023 4:52 am

Seriously? Is “the interesting thing” also a 1 second event?

Jim Gorman
Reply to  Nick Stokes
May 14, 2023 5:12 am

“Jennifer has odd ideas about that, but the point of Ayers’ paper is to show that that reading is representative of at least the previous minute.”

The point is that it is not similar to LiG thermometers. Attempting to correlate the two through changing past data by “homogenization” is not a valid scientific process. You simply can’t make one match the other.

Reply to  Nick Stokes
May 14, 2023 5:29 am

“They have an algorithm that picks out intermediate values if it seems something interesting is happening in between.”

What algorithm? Is it automated or is the algorithm someone just looking at the data and hitting a button if it looks interesting?

If it is automated what rule set is it following?

old cocky
Reply to  Nick Stokes
May 14, 2023 3:42 pm

“the point of Ayers’ paper is to show that that reading is representative of at least the previous minute.”

Ayers, or Ayers and Warne?

The objective of the Ayers and Warne paper seems to be to justify the approach used in the earlier Ayers paper, and both suffer from limited data.

A&W notes more spread in maxima than minima, and Fig 4 shows quite a few differences in the positive and negative 0.1 – 0.2 degree wings.

One would have hoped that extensive in-depth laboratory and field testing had been conducted when the instruments were commissioned and deployed, instead of rather cursory checks considerably after the fact.

Reply to  Nick Stokes
May 14, 2023 5:26 am

10 minute data would be at 11:00, 11:10, 11:20, 11:30, 11:40, 11:50, and 12:00

I see an 11:00 and 11:30 reading but no matches for the others. That *should* be a first clue that AUTOMATED processes are not being used here.

Remember, you said above that they have moved to 10 minute data and not 30min data. You, as usual, are trying to have your cake and eat it too!

1saveenergy
May 14, 2023 1:34 am

“accuses me of being a conspiracy theorist.”

What’s the difference between a conspiracy theory & a verified fact??
.
.
.
About 5 years !!!

Reply to  1saveenergy
May 14, 2023 7:44 am

Closer to 6 months these days.

siliggy
May 14, 2023 1:37 am

Selecting the highest and lowest of 60 noisy numbers with plus or minus 1 degree variations will produce different results to an average of the 60 readings but they compare the last of 60 readings.
An average of the 60 readings would be more representative of the true temperature than the highest and lowest readings. This is because the average takes into account all of the readings, including the noisy ones. The highest and lowest readings, on the other hand, are more affected by the noisy readings. The last of the 60 is randomly affected.
For example, if you have 60 noisy numbers with plus or minus 1 degree symmetrical variations around a true temperature of 0 degrees, the average of the readings will be about 0 degrees. The highest reading could be 1 degree, and the lowest reading could be -1 degree.
If you only select the highest and lowest readings, you will get a range of 2 degrees. If you only select the last, you get a random number between +1 and -1. This is a much wider range than the average of 0 degrees.
The average is a better way to represent the true temperature because it is less affected by the noisy readings. The highest and lowest readings can be misleading, especially if the noisy readings are large.
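A quick Python simulation of that arithmetic, using the hypothetical noise figures above (not BoM data):

import random

random.seed(42)

def one_minute():
    # 60 one-second readings of a true 0-degree signal with plus or minus
    # 1 degree symmetrical uniform noise
    return [random.uniform(-1, 1) for _ in range(60)]

trials = [one_minute() for _ in range(10000)]
n = len(trials)
print(f"mean of minute averages: {sum(sum(t) / 60 for t in trials) / n:+.3f}")  # near 0
print(f"mean of minute maxima:   {sum(max(t) for t in trials) / n:+.3f}")  # near +0.97
print(f"mean of minute minima:   {sum(min(t) for t in trials) / n:+.3f}")  # near -0.97
print(f"mean of last readings:   {sum(t[-1] for t in trials) / n:+.3f}")  # near 0 on average, but any single one is noisy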

siliggy
Reply to  siliggy
May 14, 2023 1:42 am

To understand what that max, min and last of 60 means a bit better, read Ken’s info about it here.
https://kenskingdom.wordpress.com/2017/03/21/how-temperature-is-measured-in-australia-part-2/

Nick Stokes
Reply to  siliggy
May 14, 2023 3:08 am

“Selecting the highest and lowest of 60 noisy numbers with plus or minus 1 degree variations will produce different results to an average of the 60 readings but they compare the last of 60 readings.”

As often, you are getting confused about what the complaint here is. The Bureau takes the last of 60. So why the highest or lowest? What Greg Ayers showed was that it doesn’t make any effective difference whether you take the last or the average. That is because the thermal inertia of the probe smooths the data, so the last value is representative. Here is one of Ayers’ plots showing that:

[image]
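The smoothing effect being described can be sketched as a first-order lag; a minimal Python model, assuming a 60 s time constant within the 40–80 s range the Bureau cites (this is not the BoM’s actual firmware):

import math
import random

random.seed(7)
TAU = 60.0  # assumed probe time constant in seconds
alpha = 1 - math.exp(-1.0 / TAU)  # per-second relaxation factor

# Raw 1 Hz air temperature: a slow ramp plus fast plus/minus 0.5 degree turbulence
raw = [20 + t / 600 + random.uniform(-0.5, 0.5) for t in range(600)]

smoothed = []
state = raw[0]
for x in raw:
    state += alpha * (x - state)  # probe relaxes toward the air temperature
    smoothed.append(state)

last_minute = smoothed[-60:]
print(f"last-second reading: {last_minute[-1]:.3f}")
print(f"one-minute average:  {sum(last_minute) / 60:.3f}")  # differs by only a few hundredths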

siliggy
Reply to  Nick Stokes
May 14, 2023 4:51 am

The value that goes into the climate data online record is neither the last nor an average. It is the most extreme single valid sample of the day. To see it happen, just watch pages like this for the times that are the same on the left and either of the right columns. For example, Horn Island at 9:30 shows two different temperatures on the left and right: 26.2 and 26.1 minimum for the same time. All that anyone needs to do to see that the Stokes/Johnston team are presenting nonsense is watch what happens.
http://www.bom.gov.au/qld/observations/qldall.shtml?ref=hdr

Reply to  Nick Stokes
May 14, 2023 5:43 am

Thermal inertia is *NOT* averaging. Averaging requires that minimum and maximum temps be known as well as all the data in between. Thermal inertia prevents sensors from actually recording true minimums and maximums so you can never get an accurate average.

Averaging data from thermal probes that have less thermal inertia than a mercury thermometer does *not* provide an equivalent reading to a mercury thermometer that has higher thermal inertia.

This is just one more problem with the entire climate science dogma that believes temperature is a good proxy for enthalpy. Thermal inertia determines how much heat can be transferred between mediums in a unit of time. That limits how much temperature change can be seen. A platinum probe has a much lower thermal inertia than a column of mercury. You simply cannot duplicate mercury readings using averages from the platinum probe. The platinum probe will *always* show a higher maximum or lower minimum average than a mercury thermometer reading.

Are there no actual physical scientists in climate science? Ones who understand the science of materials?

old cocky
Reply to  Tim Gorman
May 14, 2023 3:55 pm

Averaging data from thermal probes that have less thermal inertia than a mercury thermometer does *not* provide an equivalent reading to a mercury thermometer that has higher thermal inertia.

The BoM’s thermal probes do seem to be embedded in a material which provides thermal inertia similar to the LiG thermometers.
The question is “how similar?”

Reply to  old cocky
May 14, 2023 6:20 pm

The *real* question is: if all they are doing is duplicating the mercury thermometers, then WHY? What do the new sensors provide that the old mercury thermometers didn’t? Why all the expense to the taxpayers?

And if the thermal inertia is the same then why take 1 second readings? A mercury thermometer can’t change fast enough to keep up with 1 second readings. If the new sensors have the same thermal inertia as the mercury thermometers then reading intervals of 20 seconds would be more than sufficient!

The time constant for a mercury thermometer (i.e. to reach 67% of the new temperature) is on the order of 2 seconds. Reaching equilibrium takes on the order of 5 x the response time, or 10 seconds. So unless the temperature in the measuring unit remains constant for 10 seconds or longer, the mercury thermometer will never actually catch up with what is actually going on in the atmosphere.

And climate science wants to replace mercury thermometers with devices that are designed to have the same characteristics as the mercury thermometers? WHY?

What should be done if proper science and engineering protocols were to be followed is to start brand new measurement data sets using sensors that are designed to respond more quickly than what they are replacing! Yes, it might take twenty years to build a massive baseline but so what? Far better that than trying to extend “long records” using unreliable methods.

If the climate scientists were designing race cars we would be using modern sensors to do nothing more than duplicate fuel flow rates and exhaust back pressure measurements that we did back in the 60’s instead of measuring a multiplicity of different things in order to track engine performance!
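The lag described in this comment is the standard first-order step response; a minimal Python sketch using the 2-second figure above (note the conventional time-constant definition is the 1 − 1/e point, roughly 63%, rather than 67%):

import math

def step_fraction(t, tau=2.0):
    # Fraction of a sudden temperature step registered after t seconds
    # by a first-order sensor with time constant tau
    return 1 - math.exp(-t / tau)

for t in (2, 4, 10):
    print(f"after {t:2d} s: {100 * step_fraction(t):.0f}% of the change")
# after 2 s: 63% (one time constant); after 10 s (5 tau): 99%, effectively equilibrium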

old cocky
Reply to  Tim Gorman
May 14, 2023 8:24 pm

It would be fiddly to chart response curves in the lab for mercury and alcohol thermometers across the temperature and humidity ranges, but once it has been done it isn’t a major exercise to emulate that behaviour in software using the 1Hz data as input.

It’s probably not something you’d bother doing in the data loggers, but it shouldn’t be a big job in the data centre to calculate the LiG-equivalent temperatures for each site.

That gives the new “high” frequency data as well as emulated “old” data.

Of course, software engineers probably shouldn’t be allowed anywhere near experiments, either 🙂
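A minimal Python sketch of the software emulation suggested above, assuming a single first-order response stands in for the full LiG characterisation (the 45 s time constant is invented for illustration):

import math

def lig_emulate_daily_extremes(one_hz_temps, tau=45.0):
    # Filter 1 Hz probe readings through an assumed first-order LiG-like
    # response, then take the daily extremes of the filtered series.
    # tau (seconds) is invented; a real emulator would use lab-characterised
    # response curves as discussed above.
    alpha = 1 - math.exp(-1.0 / tau)
    state = one_hz_temps[0]
    filtered = []
    for x in one_hz_temps:
        state += alpha * (x - state)
        filtered.append(state)
    return min(filtered), max(filtered)

# Usage, given a day's worth (86,400) of 1 Hz probe readings:
# t_min, t_max = lig_emulate_daily_extremes(day_of_readings)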

Reply to  old cocky
May 15, 2023 4:37 am

Everything you mention has costs. Development costs. Maintenance costs. Etc.

And you would *still* wind up with an ESTIMATE of what the mercury thermometer would have shown.

Why not just automate the reading of the mercury thermometer using something like a trail camera and a cell phone? Off-the-shelf tech with no associated development costs and minimal maintenance costs. If you want to use newer computer tech then develop an AI that can read the transmitted picture to get the temp reading. Easy to train the AI and you get absolute consistency – and eliminate the human contribution to uncertainty in the reading.

old cocky
Reply to  Tim Gorman
May 15, 2023 1:30 pm

Yes, everything has costs, but it was your contention that low latency 1Hz readings allow collection of much higher resolution data.

The LiG emulation is an add-on, and would work better than the WMO’s averaging or the BoM’s adding thermal inertia with different materials.

Reply to  old cocky
May 15, 2023 2:30 pm

You can modify the sensor to mimic a mercury thermometer *or* you can use software to try and do it. We are being told they are doing both. That’s a crock from the word “go”. You can’t do *both* at the same time.

Averaging instantaneous readings won’t do a good job of emulating the mercury thermometer’s response time and equilibrium lag. The instantaneous readings will always have temperatures that are higher (or lower) than the mercury thermometer would show. Thus the average is going to be biased higher (or lower) by those higher (or lower) readings. How do you remove those?

You *could* take each change in reading and multiply it by .67 to emulate the response time of the mercury thermometer but that would be only a partial fix and would leave you with “adjusted” data with more uncertainty than the initial data.

If you want a long record then keep on using the mercury thermometers and bring the recording protocols into the 21st century. Make the Pt sensors as responsive and as accurate as possible and start a brand new data set from them. You *can* do both simultaneously no matter what climate science says.

old cocky
Reply to  Tim Gorman
May 15, 2023 3:14 pm

You *could* take each change in reading and multiply it by .67 to emulate the response time of the mercury thermometer but that would be only a partial fix and would leave you with “adjusted” data with more uncertainty than the initial data.

If the thermal response of an LiG thermometer can be characterised in sufficient detail, it can be emulated in software from higher sensitivity, higher frequency readings. That may be through some equations, or lookup tables. Without the characterisation of the LiG it’s probably counterproductive to second-guess the approach.

If you want a long record then keep on using the mercury thermometers and bring the recording protocols into the 21st century. Make the Pt sensors as responsive and as accurate as possible and start a brand new data set from them. You *can* do both simultaneously no matter what climate science says.

Yes, definitely in both regards.
Your Heath Robinson trail cameras may not have been practical 30 years ago, but may be now. Parallax may be an issue.

Reply to  old cocky
May 16, 2023 3:50 am

“If the thermal response of an LiG thermometer can be characterised in sufficient detail, it can be emulated in software from higher sensitivity, higher frequency readings.”

That’s true *if*, and only if, one solution fits all. Unless the software can be tailored to each individual measurement station (not just the sensor) all you do is introduce one more element of uncertainty in the readings. Uncertainty adds, it always adds.

“Parallax may be an issue”

Yeah, that’s true. Hopefully the AI could be trained to minimize it but it would still be a source of uncertainty. But it shouldn’t be more than it would be from a human (tall, short, medium) reading the scale.

old cocky
Reply to  Tim Gorman
May 16, 2023 4:28 am

That’s true *if*, and only if, one solution fits all. Unless the software can be tailored to each individual measurement station (not just the sensor) all you do is introduce one more element of uncertainty in the readings. Uncertainty adds, it always adds.

It may not be as much of a problem as you think. No two LiG thermometers will behave exactly the same in any case due to production variations, but they should pass their calibration tests.
Replacing one calibrated LiG thermometer with another will have the same issues, especially if they’re another batch or brand.

I think the best you could do is use something along the lines of factory QC techniques to get a range of responses for a sample of instruments of each model used, then probably use an average of those for the LiG emulator.

When it’s all said and done, the LiG emulator is a way of using the 1Hz fast response Pt data to simulate the daily min and max from the LiG thermometers in addition to using the native data.

It should be a closer match than attempting to physically increase the thermal response of glass by using steel, and the high frequency/sensitivity data is available as well.

Reply to  old cocky
May 16, 2023 7:43 am

“they should pass their calibration tests.”

No field instrument remains calibrated over time. That is what the uncertainty interval is used to indicate.

“Replacing one calibrated LiG thermometer with another will have the same issues, especially if they’re another batch or brand.”

Yep. But both should have their uncertainty interval specified, especially when considering calibration drift is a fact of life.

“I think the best you could do is use something along the lines of factory QC techniques to get a range of responses for a sample of instruments of each model used, then probably use an average of those for the LiG emulator.”

That doesn’t address the fact that calibration drifts once the unit is in the field. If you just use an “average” response from a number of units, you haven’t done anything to address the uncertainty interval once the unit is in the field. This is especially important when you are trying to determine temperature differences in the hundredths digit. What is the standard deviation of the data used to calculate the “average”? An average tells you little about a data set – even though the climate crowd thinks it is the be-all-end-all factor that is the only one that has to be looked at.

“When it’s all said and done, the LiG emulator is a way of using the 1Hz fast response Pt data to simulate the daily min and max from the LiG thermometers in addition to using the native data.”

Again, this does nothing to reduce the uncertainty associated with the readings from the unit.

I can do a dynamometer run on ten Pontiac GTO’s equipped with a 389cu engine, 3-deuces and a 4-speed, calculate an average HP curve, and write a data-fitting equation to match the average. That doesn’t mean the 11th GTO on the dyno will give a curve matching the average. It doesn’t mean that GTO number 1 will match the average curve after being run down the 1/4 mile track a dozen times. The differences from the average are the *uncertainty* associated with the units.

And that’s all you are really doing with the Pt unit, writing an algorithm that does data-fitting to the average value. The uncertainty will remain. And that uncertainty in temperature measuring stations is such that it overwhelms the ability to discern differences, even anomaly differences, in the hundredths digit.

“using the 1Hz fast response Pt data to simulate the daily min and max from the LiG thermometers”

The fact that the 1Hz response will give higher (lower) numbers than a mercury thermometer is a major problem. The higher the maximum number, the higher the “average” will be and the Pt unit will give higher (lower) numbers than the mercury unit. That means you have to diddle the highest temp from the Pt unit somehow to make it match what the mercury thermometer would have shown so the average of the Pt unit would be the same as the mercury unit. Any time you diddle the data you add uncertainty. No simulation can cure that. The uncertainty in the field units is already high enough to make them not fit for the purpose they are being used for. Adding to that uncertainty just makes it worse.

old cocky
Reply to  Tim Gorman
May 16, 2023 1:45 pm

The raw figures from the Pt instrument form a new data set.
The LiG emulator, uncertainties and all, is just a way to try to reconcile that to the earlier daily LiG readings.

Parallel running in the field helps with the comparison.

If nothing else, this LiG emulation approach will be closer to the mark than trying to physically reproduce the thermal response using materials with different properties, or straight averages.

Reply to  old cocky
May 16, 2023 2:58 pm

“The raw figures from the Pt instrument form a new data set.”

Yep. We’ve had the capability to collect that data for what? 40 years? Where is it?

“The LiG emulator, uncertainties and all, is just a way to try to reconcile that to the earlier daily LiG readings.”

Yep. But it seems like an awful waste of time and effort to do so.

“Parallel running in the field helps with the comparison.”

Yep. Where is this being done?

“If nothing else, this LiG emulation approach will be closer to the mark than trying to physically reproduce the thermal response using materials with different properties, or straight averages.”

*IF* you get the simulation right. But the only real way to do that is on a station-by-station basis. That’s what Hubbard and Liu found out in 2002 (about then, I forget the exact year). Regional adjustments (e.g. via simulation) simply do not account for the varying microclimates among stations or the varying calibration drift among stations. Such adjustments add more uncertainty than they eliminate.

old cocky
Reply to  Tim Gorman
May 16, 2023 3:17 pm

There are lots of things which seem eminently sensible which aren’t being done.

ITER level data collection and analysis would be a little over the top, but 1 second collection and storage from fast-response probes would seem to be relatively cheap.

Nick Stokes
Reply to  Tim Gorman
May 15, 2023 8:29 pm

“We are being told they are doing both.”
No you are not. You don’t listen.

Reply to  Nick Stokes
May 16, 2023 3:51 am

You say they don’t average. The BOM says it does.

Tell me again who isn’t listening?

Nick Stokes
Reply to  Tim Gorman
May 16, 2023 4:01 am

You have totally missed the argument. Jennifer says, based on a WMO recommendation, that they should average. BoM doesn’t, and says that it doesn’t need to, because of matching thermal inertia. Greg Ayers wrote a paper, showing that averaging, as recommended by Jennifer, gives exactly the same result as what they actually do. Jennifer found a typo, and declared Ayers’ careful analysis fake. That is how her style of “debate” goes.

old cocky
Reply to  Nick Stokes
May 16, 2023 5:21 am

“Greg Ayers wrote a paper, showing that averaging, as recommended by Jennifer, gives exactly the same result as what they actually do. Jennifer found a typo, and declared Ayers’ careful analysis fake.”

I think you might be a little over-enthusiastic in this case. While the results are close, they aren’t exactly the same.
The Ayers and Warne analysis is also quite limited spatially and temporally.

Unfortunately, everybody seems to be getting a bit too invested in this.

old cocky
Reply to  siliggy
May 14, 2023 3:51 pm

Selecting the highest and lowest of 60 noisy numbers with plus or minus 1 degree variations will produce different results to an average of the 60 readings but they compare the last of 60 readings.

From Fig 5 of the Ayers and Warne paper, the range seems to be +/- 0.2 degrees C

Reply to  old cocky
May 14, 2023 5:59 pm

Which, again, questions how accuracy in the hundredths digit can be provided.

Peta of Newark
May 14, 2023 1:50 am

I ran a little test with my Elitech RC4 datalogger this morning. Recording at 20 second intervals.
I dangled its probe out of ‘little truck’s’ window on my way to the coffee shop (25 min drive on a cool misty morning).
When I got to the carpark, I brought it inside the truck and let it warm up till it ‘settled’.

That is the big spike just after 07:34 – it went from just under 10°C to 25°C in exactly 2 minutes.
Then you see it drop on my walk into the shop, and it rises again to the temp inside here now = about 20°C.

The bit before the big spike is interesting, as it was overcast but clear where I set off from, but about half way it became very misty/foggy, with brief ‘mizzle’ on the windscreen.
I expected that would reduce the temp but it actually went up – by nearly 2°C.

I must have run into a bank of downwelling radiations. sigh. groan.
I’ll take a different route on the way home. Don’t want any more of that, I’m crazy enough as it is.
😀

But anyway, you get the point:
Sunlight is getting into the boring conventional old Stevenson screens and heating the air inside.
The solid state probes have the low inertia and response speed to see that.
Coming from: glint and glare from nearby buildings and cars on clear sunny days, possibly solar panels and large-ish bodies of water.
C’mon people, play the game – it’s the working principle of techno jokes like Ivanpah after all.

As you see, it only takes 2 minutes to utterly cook the temperature book

Never mind harassing the BOM – just ask them why they are required to take nearly 90,000 readings per day and then disregard 89,998 of them.

Olympian Grade Cherry Picking or what…

OMG: slow on the uptake today – and I just measured and said it:
Fog is getting into the weather stations and warming them up.

Do you get fog in Australia? I’ve never been there.

[Attached image: Temp Slew Rate Test.png]
siliggy
Reply to  Peta of Newark
May 14, 2023 2:30 am

Was that pulse real, or just electrical interference from some GPS tracking watching you via your mobile phone? Re fog: it is a big country. There are many places that do fog out badly. I sometimes have to drive through 100 km of it. Being Australia though, just big distances of fog is not dangerous enough, so we have hidden things in the fog like kangaroos, wombats, wild pigs, all-white school buses, flooded rivers, emus and some camels. It is also the best time to find a Bunyip.

Graeme4
Reply to  siliggy
May 14, 2023 3:16 am

And Drop Bears?

Jim Gorman
Reply to  Peta of Newark
May 14, 2023 5:18 am

“Never mind harassing the BOM – just ask them why they are required to take nearly 90,000 readings per day and then disregard 89,998 of them.”

Yep, if you want a real average temperature for the day, why not integrate all those 90,000 measurements? Is it time we move into the 21st century of automation and computers?

Reply to  Peta of Newark
May 14, 2023 5:49 am

“The solid state probes have the low inertia and response speed to see that.”

I replied above before I read this. I said the same thing.

Platinum probes have lower thermal inertia than a mercury thermometer. Therefore the platinum probe will *always* read higher maximums and lower minimums than the mercury thermometer. You simply can’t “average” the platinum probe readings to try and duplicate the thermal inertia of a mercury thermometer. The averages from the platinum probes will always be higher/lower than the mercury reading simply because you will see wider variation in the data from the platinum probe.

It’s like no one in climate science has a clue about the science of materials and their properties. Statistics (i.e. averaging) is not a solution to everything!

Bill Johnston
May 14, 2023 2:42 am

Dear everyone,

There is no public interest in being bored to death by little things that don’t matter. The hammer Jennifer thinks she is wielding also totally misses the nail! Get full-bottle; read what they did at Townsville for instance (https://www.bomwatch.com.au/data-quality/climate-of-the-great-barrier-reef-queensland-climate-change-at-townsville-abstract-and-case-study/) or what happened at Rutherglen (https://www.bomwatch.com.au/bureau-of-meteorology/rutherglen-victoria/).  
 
My assessment of the two papers is that one leads from the other (Ayers and Warne, after Ayers). Both papers clearly set out their objectives; they use high-frequency raw data that is unavailable to freely download (and data that simply could not be observed using thermometers). Methods and findings of both studies also align with stated objectives. However, as temperature in Australia is measured in degC, being an atmospheric scientist, Ayers clouded the issue by expressing units in Kelvin, which is his style.
 
It is impossible to take this seriously. Jennifer says: “This difference illustrates the point that I have been making for some years: by taking the highest spot reading, and not averaging, the Bureau is overestimating maximum temperatures, and by some significant amount. In this case, one whole degree Celsius”, which is not true (my emphasis). As someone who has never undertaken regular weather observations, just what would she know? In any event, the papers that are the focus of this ham-fisted post, and others, totally dispel the notion that end-of-minute readings are spot temperatures.
 
Marohasy does not understand the word attenuated and cannot read the explanations given by Ayers about the difference between WMO requirements (which are not requirements) and the approach taken by the Bureau in respect of their platinum resistance AWS probes. While she craves debating with ‘conservatives’, whatever they are, she ignores what she should read, then makes it up as she goes along. I have debated all this with her before, over and over. Again, unlike the papers she criticises, but aside from rattling the can, she has not clearly set out her objectives, and she won’t now either! I’m sure Readfearn’s chooks are lining him up for a sausage-sizzle. The question is, what does she hope to achieve from all this, and who will she blame when the wheels fall off?
 
She says “the Bureau log the highest one-second reading in each minute as the daily maximum”; but they don’t. Provided it is “clean” they log the highest attenuated end-of-minute reading for the day, as the maximum. Clean means that the value has not been deleted at-source as a spike, and rules for that have been published. If she wants to know those rules, she should do some homework.
 
While I re-read both papers this afternoon (OZ-time), just how many of the tribes waiting to pounce have done the same?
 
I’ll say again for the benefit of leaping karlomontes, I am no fan of the Bureau … etc. https://wattsupwiththat.com/2023/05/12/jokers-killing-dissent-while-calling-it-debate/#comment-3721118

Divide to rule is destructive, and while all this is grist for Jennifer’s mill, the bigger issue is that she has NOT considered the consequences of talking herself into a corner and taking others with her.
 
Yours sincerely,
 
Dr Bill Johnston

http://www.bomwatch.com.au  

siliggy
Reply to  Bill Johnston
May 14, 2023 3:08 am

Bill says: “She says “the Bureau log the highest one-second reading in each minute as the daily maximum”; but they don’t. Provided it is “clean” they log the highest attenuated end-of-minute reading for the day, as the maximum.”
Bill will not be able to prove this claim. It is yet another example of Bill just making stuff up.

Bill Johnston
Reply to  siliggy
May 14, 2023 3:38 am

Hello Lance, fancy meeting you here,

If you read the documentation instead of flapping around, you could “prove” it for your noble self.

Oh, look (from Ayers 2019): “The Bureau has explained this as reasonable because automatic weather station (AWS) temperature systems have a response time that means each measurement is not instantaneous, but an average smoothed over 40–80 s”.

In other words: The Bureau has explained this as reasonable because automatic weather station (AWS) temperature systems have a response time that means each measurement is not instantaneous, but an average smoothed over 40–80 s

and specifically in short-hand for Jennifer’s edification:

each measurement is not instantaneous, but an average smoothed over 40–80 s

Precisely which words don’t youse understand?

Allowing for seemingly-circular processing delays, next week will do.

Kind regards and good nite,

Dr Bill

(I run climate-clinics on days that don’t end in ‘y’).

Reply to  Bill Johnston
May 14, 2023 5:56 am

“The Bureau has explained this as reasonable because automatic weather station (AWS) temperature systems have a response time that means each measurement is not instantaneous, but an average smoothed over 40–80 s.”

Averaging platinum probe data over a period of time simply cannot duplicate the thermal inertia of a mercury thermometer. The platinum probe will *always* provide higher highs and lower lows because it has a faster response time due to its lower thermal inertia. That means that the averages will *always* be different from what the mercury thermometer readings will be.

The bias is built-in. It can’t be duplicated using statistics. The only way it can be done is to embed the platinum probes in a material with the same thermal inertia as a column of mercury.

It’s a matter of HEAT. How much heat can be transferred per unit time into different materials. AND an understanding of what that transferred heat does to the temperature of each material.

Nick Stokes
Reply to  siliggy
May 14, 2023 3:46 am

Bill is correct, and you and Jennifer seem totally confused even about what you are claiming. Greg Ayers used to be BoM chief. His abstract begins:
“Bureau of Meteorology automatic weather stations (AWS) are employed to record 1-min air temperature data in accord with World Meteorological Organization recommendations. These 1-min values are logged as the value measured for the last second in each minute.”

siliggy
Reply to  Nick Stokes
May 14, 2023 5:46 am

Notice for Walpeup that the 10:30 temp is different on the left and right.
The coolest reading makes the minimum, not the last.

[Attached image: Capture.PNG]
siliggy
Reply to  siliggy
May 14, 2023 5:51 am

From here.
http://www.bom.gov.au/vic/observations/vicall.shtml?ref=hdr

It can also be seen from the half hour page here that the last reading of the minute 14/10:30pm is higher than that accumulated low.
http://www.bom.gov.au/products/IDV60801/IDV60801.95831.shtml

Reply to  Nick Stokes
May 14, 2023 6:00 am

You and Bill are POOR representatives of the understanding of the science of materials. If you two are representative of the entire climate science cadre then it is understandable why so much is wrong in climate science.

All you have stated here is that it is impossible to compare “1-min values” with the readings of mercury thermometers in the past. Yet that is exactly what climate science does every minute of every day.

You have proved Jennifer’s claims a priori.

Bill Johnston
Reply to  Tim Gorman
May 14, 2023 4:48 pm

Not true, Tim, this has nothing to do with materials. Instruments are measuring the temperature of the air. If you want to measure materials, then by all means set up an experiment.

At least, read the papers that JM is so sweaty about.

Cheers,

Bill

Reply to  Bill Johnston
May 14, 2023 6:28 pm

Give me a break!

You are saying that the Pt sensors are engineered to have the exact same characteristics as a mercury thermometer. So why change?

A mercury thermometer takes at least 10 seconds of constant temperature after a change in order to reach equilibrium. So why design a Pt sensor that has the same? Unless the temperature in the enclosure stays the same for more than 10 seconds you will never catch up with actual temperature changes – either using the mercury thermometer or the Pt sensor with the same response time!

Exactly what did you gain from the change? Taking 1-second readings is worthless with a response time of 2 seconds (67% of the new value) and 10 seconds to reach equilibrium.

This is *all* based on the science of materials and heat transfer between those materials. Whether you realize it or not.

Bill Johnston
Reply to  Nick Stokes
May 14, 2023 4:44 pm

In any event, Lance, all those numbers are an average smoothed over 40–80 s and reported at the end of a single minute-cycle.
Having gone down this track with you and Jennifer dozens of times, what words do you STILL not understand; or are you stuck in a time-warp?

Cheers,

Bill

Jim Gorman
Reply to  Bill Johnston
May 14, 2023 5:28 am

The big question is why all these readings are not integrated into a true daily average – and, in fact, why the humidity readings are not integrated into an average so that a daily enthalpy can be calculated.

Twice-a-day temperatures are a poor proxy for the amount of heat contained in the atmosphere. They immediately make locations with low and high humidities appear to have similar climates. As I have stated before, it makes the Sahara Desert average temperature appear the same as that of a high-humidity location like Houston, Texas.
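A sketch of the sort of calculation being asked for, assuming evenly spaced minute samples, trapezoidal integration, and the standard moist-air enthalpy approximation (this is illustrative, not BoM code):

import math

def saturation_vapour_pressure(temp_c):
    # Magnus approximation; temp in degrees C, result in hPa
    return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

def specific_enthalpy(temp_c, rh_percent, pressure_hpa=1013.25):
    # Moist-air specific enthalpy in kJ per kg of dry air
    e = (rh_percent / 100.0) * saturation_vapour_pressure(temp_c)
    w = 0.622 * e / (pressure_hpa - e)  # mixing ratio, kg/kg
    return 1.006 * temp_c + w * (2501.0 + 1.86 * temp_c)

def daily_mean(samples):
    # Trapezoidal integration over evenly spaced samples, divided by the span
    return (sum(samples) - 0.5 * (samples[0] + samples[-1])) / (len(samples) - 1)

# Usage, given minute-by-minute temperatures and relative humidities for a day:
# h_bar = daily_mean([specific_enthalpy(t, rh) for t, rh in zip(temps, hums)])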

karlomonte
Reply to  Bill Johnston
May 14, 2023 10:01 am

Another unreadable BillRant.

observa
May 14, 2023 3:37 am

Down to one thousandth of a degree you say?
The BEST Climate Clip I’ve EVER seen – What do you think?? #SCIENCE – YouTube
I’m sure the BoM would agree with such fine digital probes. Err….no wait a minute…!

siliggy
May 14, 2023 4:12 am

For anyone who would like to understand the difference between the maximum temperature and the last second within each minute, here it is as explained by the BoM 2017 review of automatic weather stations.
“All valid one-second temperature values within the minute interval are assembled into an array. If there are more than nine valid one-second temperature values within the minute interval, then a range of one-minute statistics are generated from these values. These include:
– an instantaneous air temperature is the last valid one-second temperature value in the minute interval;
– one-minute maximum air temperature is the maximum valid one-second temperature value in the minute interval; and
– one-minute minimum air temperature is the minimum valid one-second temperature value in the minute interval.”
https://apo.org.au/node/106276
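The quoted logic, transcribed into a short Python sketch (the more-than-nine validity rule is from the quote; the data layout is assumed):

def one_minute_stats(valid_one_second_values):
    # valid_one_second_values: the valid 1-second temperatures within one
    # minute, in time order. Mirrors the quoted rule: more than nine valid
    # values are required before the statistics are generated.
    if len(valid_one_second_values) <= 9:
        return None
    return {
        "instantaneous": valid_one_second_values[-1],  # last valid 1-second value
        "max": max(valid_one_second_values),  # one-minute maximum
        "min": min(valid_one_second_values),  # one-minute minimum
    }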

siliggy
Reply to  siliggy
May 14, 2023 4:32 am

That the Stokes/Johnston team are, as usual, misleading people can easily be seen live on the BoM website. Every ten minutes, state pages like this one update. On the right, the accumulated maximum and minimum since being reset are updated if a new extreme has been recorded. On the left, the time shows the ten-minute update and the temperature of the last value recorded that minute. Often on the list the same time will appear on the left and the right but show different temperatures. It is easy to see which one updates the accumulated max and min.
http://www.bom.gov.au/nsw/observations/nswall.shtml?ref=hdr
It should be entertaining to see just how determined the effort to confuse the obvious will be.

Tim Gorman
Reply to  siliggy
May 14, 2023 6:03 am

You nailed it. Stokes/Johnston are trying to argue that you can equate average temperatures from platinum probes with mercury thermometer readings. It's simply not physically possible. The thermal response of each is different, and you can't repair that difference using "averaging". Statistics is simply not a universal solvent, even though climate science seems stubbornly fixated on believing that it is.

karlomonte
Reply to  Tim Gorman
May 14, 2023 11:57 am

As usual in climate pseudoscience, averaging removes any and all problems.

Bill Johnston
Reply to  karlomonte
May 14, 2023 10:20 pm

Tim and karlomonte,

Averaging is being promoted by Lance and JM. However, both of the papers being ridiculed here state unequivocally that the Bureau's PRT probes report an average smoothed over 40–80s, reported at the end of a single minute-cycle.

Cheers,

Bill

Tim Gorman
Reply to  Bill Johnston
May 15, 2023 4:39 am

You stated that the sensor itself has been designed to duplicate the response time and equilibrium lag of a mercury thermometer. So why the averaging? It would gain you nothing.

Something isn’t kosher here. Either the Pt sensor is *NOT* duplicating the mercury thermometer characteristics or the averaging is the actual attempt to duplicate it.

Which is it?

Nick Stokes
Reply to  siliggy
May 14, 2023 3:34 pm

“here it is as explained by the BoM 2017 review of automatic weather stations.”

OK, that does clarify. So they use the last second for the half-hour (periodic) values, and the within-minute max for the reported max. And these are different, sometimes sufficiently to move from one rounded decimal to another, i.e. an occasional difference of 0.1.

But they can’t be very different. People have this persistent notion of a bare Pt wire in the breeze. But it isn’t like that at all. They are totally within a steel shell, which the BoM carefully designs to have the same thermal inertia as a LiG thermometer. There is very little variation possible on the 1-second scale.


siliggy
Reply to  Nick Stokes
May 14, 2023 5:25 pm

Nick, just as you have acknowledged – after watching what really happens instead of believing confused and error-riddled papers – that they don't use the last reading of the minute but use the extremes, you will also see more if you just keep on watching what really happens.
For example, this false claim you just made up without evidence:
“But they can't be very different.”
Just keep watching: despite there being only a one-in-ten chance of readings being at the same time left to right, it does happen, and there are reasons the thermal time-constant filter cannot stop it.


TenPast.png
Tim Gorman
Reply to  Nick Stokes
May 14, 2023 5:55 pm

If they can’t be very different then why change from mercury thermometers to Pt sensors? The sensors need more electronics (read that as systematic error) which is more expensive. Is that merely in order to get remote readings instead of having to have local readings of the mercury thermometer? If that’s the case why not just put a digital camera in with the mercury thermometer and store the jpg files?

Every time you try to push this equivalency between mercury thermometers and Pt sensors you raise more questions than you provide answers!

Raspberry Pi’s (or something similar) with an attached digital camera would be FAR less expensive and far easier to maintain.

None of this makes any economic or engineering sense at all!

old cocky
Reply to  Tim Gorman
May 14, 2023 6:24 pm

Is that merely in order to get remote readings instead of having to have local readings of the mercury thermometer? If that’s the case why not just put a digital camera in with the mercury thermometer and store the jpg files?

Heath Robinson would be proud of that approach 🙂

There are good reasons for using a far less manual approach than LiG min and max thermometers and the other manually read/recorded instruments, but it does seem to have come at the cost of continuity and consistency of site records.

Bill Johnston
Reply to  old cocky
May 14, 2023 11:05 pm

Reality is that probes generate much more data which can be used for specific purposes, and they are less subject to operator error.

Anyone thinking that mercury met-thermometers are perfect instruments and that they provide infallible data has never undertaken standard 9am weather observations, filled in one of Marohasy’s A8 forms and sent off monthly returns etc. to the Bureau as I have.

The issue of rising trends has little to do with probes, but much to do with station changes. It seems to me from the flak I receive that everyone is an expert, except those who have had direct experience and who have undertaken deep dives into data, as I have published at http://www.bomwatch.com.au. Whose side are you on?

Lance has misread or misconstrued some information, without taking the whole story into account. All the values on the pages that he has thrown up are end-of-minute attenuated values (i.e., an average smoothed over 40–80s, reported at the end of a single minute-cycle).

They are measuring air temperature. They have a century or so of thermometer data of dubious to completely unknown quality – just two values per day (max & min). They don’t have 1-minute data for 1896 or 1949 to integrate over 1-day and that is the way it is. It is those data that PRT probes are designed to mimic or be comparable with.

The other day they opened a can of worms about Brisbane Regional, which is still unresolved, yet Marohasy has moved on and Lance has changed wavelength. Marohasy wants to debate a 'conservative'. What for? What is her depth of knowledge, what does she want to debate, and for what purpose? Why does she not debate here on WUWT? She is also mistaken that her meaningless differences actually matter.

Cheers,

Bill


Tim Gorman
Reply to  Bill Johnston
May 15, 2023 4:31 am

If the one-minute data is based on a sensor with the same response time and equilibrium lag as a mercury thermometer then exactly what have you gained in uncertainty reduction? You *still* won’t know what the actual temperature was at any point in time!

You’ve kind of hoisted yourself on your own petard by claiming the Pt sensors are designed to duplicate the response time and equilibrium lag of the mercury thermometer. All you’ve really gained is the elimination of a human having to be on-site to record readings. A trail camera used by hunters with bluetooth connection to a cell phone would do the exact same thing using off the shelf tech thus minimizing future maintenance costs.

Tim Gorman
Reply to  old cocky
May 15, 2023 4:26 am

If the non-manual approach only reproduces the manual approach then what good is it? You say there are good reasons but fail to list any.

If the Pt sensor has the same lag in response as a mercury thermometer, then its uncertainty interval for the actual temperature is exactly the same as for a mercury thermometer. It is claimed by climate science that the uncertainty component introduced by humans reading the scale of a mercury thermometer is random, Gaussian, and cancels out over the long term, so there is no precision-based benefit in using the Pt sensor instead of the mercury thermometer. Any claim of an increase in precision is a red herring as long as the thermal inertia of the sensor, be it Pt or mercury, is the overriding contribution to the actual uncertainty of the reading.
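That premise is easy to illustrate with a quick Monte Carlo sketch: purely random reading errors shrink as 1/sqrt(N) when averaged, while a shared systematic bias (say, from thermal lag) does not budge. All figures here are invented.

import random, statistics

random.seed(4)
true_temp, bias, sigma = 20.0, 0.1, 0.25   # assumed lag bias and reading scatter, deg C
n = 10_000
readings = [true_temp + bias + random.gauss(0.0, sigma) for _ in range(n)]
mean_err = statistics.mean(readings) - true_temp
print(f"error of the mean after {n} readings: {mean_err:+.3f} C")
# The random scatter has collapsed toward zero (sigma/sqrt(n) ~ 0.0025 C),
# but the systematic 0.1 C bias remains in full.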

Every data collection scheme I have been associated with has *always* gravitated to sensors with faster response times and higher sensitivity, e.g. maze running robots. Yet in climate science we see the exact opposite – a conscious decision to gravitate to slower response times and less sensitivity. What the H*LL?

From an engineering approach the use of a small computer and a digital camera is far more efficient than trying to design an electronic sensor such that it is the same as the non-electronic sensor! Think of the trail cameras used by hunters today. They take pictures, store them, and can even transmit them using bluetooth tech so you don’t even have to touch them! They don’t store a bunch of data from an IR sensor and transmit it so that you have to analyze the data off-site to determine what the IR sensor may have picked up! Stick one of those trail cameras in the temperature station and set it to take a picture once every minute! Use bluetooth to connect to an external transmitting unit (e.g. a cell phone) to send it where ever you want. You maintain your long record and eliminate the need for an on-site human to read and record what the mercury thermometer shows! At far less development expense and far easier maintenance from using off-the-shelf tech!

old cocky
Reply to  Tim Gorman
May 15, 2023 1:38 pm

If the non-manual approach only reproduces the manual approach then what good is it? You say there are good reasons but fail to list any.

Removing transcription errors is a good start 🙂
Not having some poor devil have to trudge out in freezing rain isn’t bad, either.

Tim Gorman
Reply to  old cocky
May 15, 2023 2:34 pm

You can do both while keeping the mercury thermometers. Like I said, put in a small computer with a camera (like a trail camera) and send the pictures to a server running a simple AI to do the recognition of the reading. It wouldn’t be nearly as hard as doing facial recognition or a self-driving car! In other words the technology already exists! It’s being used in industry all over.

No transcription errors and no human involvement.

old cocky
Reply to  Tim Gorman
May 15, 2023 3:18 pm

That would be feasible now, but wouldn’t have been in the early 1990s when the switch to Pt was in planning.

Sometimes, being on the bleeding edge is sub-optimal, and it’s better to wait for the technology to catch up.

Tim Gorman
Reply to  old cocky
May 16, 2023 3:59 am

“Sometimes, being on the bleeding edge is sub-optimal, and it's better to wait for the technology to catch up.”

Can’t argue with that. The problem is the continued use of the Argument from Tradition in climate science. We must continue trying to emulate the older equipment because that’s the *best* way. Climate scientists seem to have been trained by Teyve from Fiddler on the Roof. No new and better data sets *EVER*. No use of degree-days *EVER*. No recognition that daytime temps have a different distribution than nighttime temps *EVER*.

Meanwhile disciplines like ag science and HVAC engineering march on into the future leaving climate science in the dust.

old cocky
Reply to  Tim Gorman
May 16, 2023 5:30 am

It’s possible to do both, with suitable software.

Tim Gorman
Reply to  old cocky
May 16, 2023 8:06 am

I don’t agree. All the software does is add uncertainty.

old cocky
Reply to  Tim Gorman
May 16, 2023 1:47 pm

Less uncertainty than the WMO or BoM methods.

Bill Johnston
Reply to  Nick Stokes
May 15, 2023 1:11 am

No they don’t,

Ten-minute and half-hour temperatures are actual temperatures reported at the end of the minute at those times. Each measurement is an average smoothed over 40–80s, reported at the end of a single minute-cycle, i.e. at 10 minutes and 30 minutes. Join the dots and you have dry-bulb temperatures at those times.

Maximum and minimum temperatures are the single highest and lowest of the 1-minute values through the day (of the 60 by 24 observations or samples = 1440 readings). Each of those 1-minute observations is an average smoothed over 40–80s …. or have I said that?
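The arithmetic described here is simple to sketch. Here it is with an invented diurnal curve (the shape and noise level are assumptions, not Bureau data): the daily Max and Min are just the extremes of the 1440 end-of-minute values.

import math, random

random.seed(2)
one_minute = [20.0 + 8.0 * math.sin(math.pi * (m - 360) / 1080)  # crude diurnal cycle
              + random.gauss(0.0, 0.1)                           # screen-scale noise
              for m in range(1440)]                              # 60 x 24 = 1440 values
daily_max, daily_min = max(one_minute), min(one_minute)
print(f"Max {daily_max:.1f} C at minute {one_minute.index(daily_max)}")
print(f"Min {daily_min:.1f} C at minute {one_minute.index(daily_min)}")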

Lance and Jennifer both know this was debated furiously within a private email forum years ago, but like other perennial skeletons in the cupboard, every now and again they pull it out and dust it off hoping to impress Chris Kenny, or anyone, any joker at all. The fracas over the Goulburn AWS is a case in point (https://wattsupwiththat.com/2023/02/12/legacy-electronics-botch-temperature-recordings-across-australia-part-1/).

The within-minute Maxes and Mins, such as used by Ayers, don't enter the conversation unless you wish to specifically request and pay for the data, and even then I don't know in what form it would be supplied, e.g. max/min within a minute, or max/min with a timestamp within a minute. Maybe people who are interested could dip into their pockets, buy some data, analyse it (using Minitab, for instance) and report back.

To be clear, Jennifer is paid for this; I am not. In fact, as a member of the Institute of Public Affairs, I pay her, and I therefore have an interest in her producing quality goods, not junk science. I am also not interested in chasing rabbits down burrows because somebody else out there is unable, or not truly interested enough, to undertake analyses THEY could do for themselves.

To them I say, do your own analysis and bring that back to the discussion; or get Marohasy to do it, she has the dough and Minitab; or press her to put the data she argues about in the public domain so everyone can pick the same bone.

When everyone, including karlomonte, aligns into tribes that poke sticks to protect their leader, or starts yelling at each other ignorant of facts or data, everything falls to pieces.

Good nite,

Bill Johnston

http://www.bomwatch.com.au

old cocky
Reply to  Bill Johnston
May 15, 2023 1:49 am

Each of those 1-minute observations is an average smoothed over 40–80s …. or have I said that?

That doesn’t seem to be what the Ayers and Warne paper says.

That paper averaged the individual 1-second readings during each minute and compared that figure to the last reading for the minute.

According to the first paragraph of the Discussion section:
“The objective of this work was to explore the proposition that 1-s air temperature data measured at Bureau of Meteorology AWSs represent good measurement practice as required by WMO, via measurement system time constants that average the data over the previous 40–80 s, to meet the WMO recommendations for recording 1-min data.”
The “averaging” is the thermal response time of the BoM instruments, presumably because of the thermal mass of the probes Nick showed.
Calling that “averaging” seems a recipe for confusion, but it seemed to be in response to the WMO recommendation to average temperatures over such a period.

Bill Johnston
Reply to  old cocky
May 15, 2023 3:11 am

Dear old cocky,

At the risk of sounding grumpy at this time of the night, the second sentence of the Introduction says (direct quote):

The Bureau has explained this as reasonable because automatic weather station (AWS) temperature systems have a response time that means each measurement is not instantaneous, but an average smoothed over 40–80 s (Bureau of Meteorology 2017).

Is that unclear? Doesn’t it mean that end-of-minute data are an average smoothed over 40–80s?

Although we are on different pages, that statement encapsulated the purpose of the paper. Everything flowed from that.

The essential argument being put forward by Marohasy and Lance (again) is an attention-seeking rave.

They conveniently ignore that end-of-minute data are an average smoothed over 40–80s. I.e., within their little mindsets they only see spot values. They also fail to mention that this issue has been on-going for almost a decade.

Every now and again, Jennifer digs it out and waves it around, hoping to start tribal warfare so she can talk to Chris Kenny. Failing that, she gets miffed and blames Kenny. She wants a fisty with 'conservatives', she says.

She did the same over non-existent parallel data for Wilsons Promontory, Cape Otway, and Rutherglen. Fake trends at Amberley also come to mind, and more …

Much as with the question of whether Graham Readfearn and his chooks are right or wrong, eluding major issues is what Jennifer does.

She has a Faraday cage as impervious as Readfearn's. Her modus operandi is to say something, anything, loudly; move quickly; don't debate; don't look back; don't put data in the public domain like I do at http://www.bomwatch.com.au; and certainly don't mind the wreckage.

Not to mention the gender card … and … oh, that’s me, look at me.

This whole package is tiresome and surreal and if the consequences were not potentially serious I’d be snoring in front of the ABC, which would be turned off to prevent global warming messaging.

Good nite again,

Dr Bill

old cocky
Reply to  Bill Johnston
May 15, 2023 3:30 am

The Bureau has explained this as reasonable because automatic weather station (AWS) temperature systems have a response time that means each measurement is not instantaneous, but an average smoothed over 40–80 s (Bureau of Meteorology 2017).

Is that unclear? Doesn’t it mean that end-of-minute data are an average smoothed over 40–80s?

Yep, but it’s thermal response delay due to thermal mass rather than explicit averaging in the data logger. Analog averaging, if you will.

“they only see spot values.”

While there is spread in the tails of the differences from average, it certainly does appear to have a much longer than 1-second thermal response time.

Tim Gorman
Reply to  old cocky
May 15, 2023 4:47 am

You nailed it. Either we are being lied to about the design of the sensor or being lied to about the need for averaging in an attempt to duplicate the response time of the mercury thermometer.

Something isn’t kosher here. If the sensor duplicates the mercury thermometer then just take the reading, no need for averaging. If averaging is needed to duplicate the mercury thermometer then the sensor is *not* designed as claimed.

Which is it?

karlomonte
Reply to  Tim Gorman
May 15, 2023 6:27 am

Why does Bill need to scream “average smoothed over 40–80s” over and over and over?

Bill Johnston
Reply to  karlomonte
May 15, 2023 2:00 pm

Because Lance and JM don't seem to get the difference between spot readings and an “average smoothed over 40–80s”, over and over and over.

You may not be convinced yet either!

Cheers,

Bill

Bill Johnston
Reply to  Tim Gorman
May 15, 2023 1:57 pm

No one is lying, Tim, and you should take that back. It is just difficult to measure a constantly changing medium, out in the paddock, using rapid-sampling instruments.

I worked with a manufacturer in the 1980s comparing met-lawn and PRT temperatures trying to resolve the spike issue, and in retrospect, BoM seem to have struck a reasonable balance.

For their part, the Bureau should get off their horse of “record hot days”, sack their marketing branch, send Blair Trewin to re-establish the office at Oodnadatta, where he can’t do any real damage, then get back to reporting on the weather, not making it up and embellishing it to scare the kiddies for climate action.

All the best,

Bill Johnston

Tim Gorman
Reply to  Bill Johnston
May 15, 2023 2:44 pm

“It is just difficult to measure a constantly changing medium, out in the paddock, using rapid-sampling instruments.”

I am getting two different stories. One is a lie. It has to be. Either the sensor mimics the mercury thermometer or the emulation is being attempted using software, i.e. averaging over 40-80s.

Which one is it?

It’s not difficult to measure a constantly changing medium. It’s done in self-driving cars routinely. It’s done in industry with self-adjusting robots handling non-homogenous material.

I even worked on a maze-running robot back in the '90s that was being designed to run through a building to find where a fire might be located. As the robot progressed through a floor it encountered a constantly changing medium, and one of our main tasks was to make the sampling of the environment as rapid as possible so the robot wouldn't run right into the fire before recognizing it! Among other issues, such as recognizing stairwells and other hazards!

old cocky
Reply to  Tim Gorman
May 15, 2023 3:22 pm

It doesn’t have to be a lie. It may just be poor wording.

Tim Gorman
Reply to  old cocky
May 16, 2023 4:03 am

It’s not wording. One side says “yes, averaging” and the other says “no, no averaging”. They both can’t be right. It’s either ignorance or lying on one side. I’ve seen too many outright fraudulent claims (i.e. les) in climate science in the past to give any benefit of the doubt in this case.

old cocky
Reply to  Tim Gorman
May 16, 2023 4:36 am

The Ayers and Warne paper talks about a “40 – 80 second average”, but it’s really thermal response time “averaging” the readings. That seems to be a response to criticisms about not using computational averages of readings.

It’s as clear as mud.

Tim Gorman
Reply to  old cocky
May 16, 2023 8:05 am

Thermal responses don’t “average” anything. The thermal response of a mercury thermometer is an exponential curve. Since when do exponential curves “average” anything? The voltage across a capacitor rises exponentially but the “average” voltage depends on the time points you use to measure it. What time points do you use for a mercury thermometer in your simulation?

It’s the same problem with climate science assuming you can use Tmax (from a sinusoidal curve) with Tmin (from an exponential curve) and get some kind of an “average” temperature.

Nick Stokes
Reply to  Tim Gorman
May 16, 2023 11:25 am

“Since when do exponential curves "average" anything?”

It’s a very standard method. In fact, it is even built into Excel.

Tim Gorman
Reply to  Nick Stokes
May 16, 2023 2:52 pm

The *average* of an exponential requires integrating the curve over the appropriate time period and then dividing by that time period. It is *NOT* the sum of the 1-second values divided by 60!

How do you do that for a Pt sensor with an exponential response similar to a mercury thermometer while also taking accurate 1-second readings? Are you using the 1-second data to plot the exponential, determining a in e^(ax), and then integrating it to find the average?

If the exponential response is e^(2x) and x runs from 0 to 10 seconds, then the anti-derivative is (1/2)e^(2x), evaluated from 0 to 10 seconds.

That's (1/2)[e^20 - 1] divided by 10 seconds. I'm doing this on the fly in a hurry, so if my math is wrong then correct it. But if it is correct, then explain how you are doing this in a Pt measuring station. And explain how you are doing it when "a" is a time-varying value as well!

Bill Johnston
Reply to  Tim Gorman
May 15, 2023 3:46 pm

What is hard about this and where is the lie? Your language is irrationally aggressive under the circumstances.

Blind Freddy can see that if you want to monitor at high frequency (say 1 Hz) in a fire situation – if that is the data you want – you need a sensor with a wide calibration bandwidth and a short response time, the trade-off being that the data will be less accurate and more spiky (noisy). (At least that was my experience.) Time averaging may flatten the noise, but it does not discard erroneous spikes.

On the other hand, if you want to mimic a thermometer (specifically a dry-bulb), you need a narrow calibration bandwidth (the calibration being quadratic), and a similar response time (and also error-trapping at source). As manually observed met-thermometers are not particularly accurate, you don’t need second-decimal accuracy, which tends to convert variance into signal. Don’t you think that the Bureau would have considered such issues?

Having been repeatedly around the block with this, I cannot see a major issue in JM's data (that is, looking across the distribution). While she and Lance only push one story-line, as they seemingly have forever, all the variance could equally be due to the medium (the air) as to either of the instruments.

The graph for Bundaberg shows that the highest minute-to-minute variation coincides with the warmest part of the day (about 840 minutes, or 1400 hrs, or 2 pm). Plucking out the Max is the challenge, and if you pluck it out near the front of the screen, it's probably a bit different near the back of the screen. That is where the instrument-to-instrument variation happens. In the same screen does not mean sampling precisely the same 'air'.

I’ve given you a serious considered answer, which is not a lie, and now I want to get on with something else.

Have a nice day,

Bill Johnston

http://www.bomwatch.com.au

Tim Gorman
Reply to  Bill Johnston
May 16, 2023 4:15 am

“Time averaging may flatten the noise, but it does not discard erroneous spikes.”

We found it better to stop the robot and take at least two more readings. The outlier would be discarded. I guess you could call that a form of averaging but that really wasn’t the intent. Typical normal heat sources put out consistent heat readings (e.g. heaters, ovens, machines) which would be identified by consistent multiple readings. Actual fires put out varying fingerprints (usually).

We didn’t have the funding or the time to actually perfect anything. It turned out to be more of a proof of concept trial. The damn robot spent more time avoiding obstacles (think a roomba sweeper) than it did actually searching for a fire! Of course our robot just had two wheels and a skid and was only 12″ tall. Not like the robots of today.

“As manually observed met-thermometers are not particularly accurate, you don't need second-decimal accuracy, which tends to convert variance into signal. Don't you think that the Bureau would have considered such issues?”

Then why do they join in with the rest of climate science in trying to identify anomaly differences in the hundredths digit? That variance carries over into the anomalies, and when you subtract a value from the base value the variances associated with each ADD. Meaning trying to identify differences in the second decimal is a useless attempt; the variance (i.e. uncertainty) is larger than the difference!
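The quadrature step referred to here is a one-liner (the uncertainty values are invented for illustration, and independence of the errors is the usual GUM-style assumption):

import math

u_reading = 0.25    # assumed standard uncertainty of a single value, deg C
u_baseline = 0.15   # assumed standard uncertainty of the baseline, deg C
u_anomaly = math.sqrt(u_reading**2 + u_baseline**2)   # variances add on subtraction
print(f"u(anomaly) = {u_anomaly:.2f} C")              # ~0.29 C, dwarfing hundredths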

“While she and Lance only push one story-line, as they seemingly have forever, all the variance could equally be due to the medium (the air) as to either of the instruments.”

In other words the uncertainty associated with the readings is too broad to make the process fit for purpose. It doesn’t actually matter where the uncertainty comes from – it is still just too large regardless of the source.

Nick Stokes
Reply to  Tim Gorman
May 15, 2023 2:21 pm

“If the sensor duplicates the mercury thermometer then just take the reading, no need for averaging.”

Do you ever read? They don't average. Jennifer and co sometimes try to beat up something about a WMO recommendation which, they say, requires averaging. BoM says that is unnecessary because the thermal inertia is similar to LiG. So, as with LiG, they rely on a single reading (in each minute).

Ayers averaged only to show that the result of this is the same as the averaging the WMO is said to suggest.
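That comparison is easy to sketch: run invented 1 Hz air temperatures through an assumed first-order lag with tau = 60 s, then compare each end-of-minute reading with the arithmetic mean of that minute's 60 lagged samples (the drift and noise figures are made up):

import random

random.seed(3)
tau, dt = 60.0, 1.0
alpha = dt / tau
air, sensor = 20.0, 20.0
diffs = []
for minute in range(60):
    lagged = []
    for sec in range(60):
        air += random.gauss(0.002, 0.05)   # invented air-temperature wander
        sensor += alpha * (air - sensor)   # first-order lag response
        lagged.append(sensor)
    diffs.append(lagged[-1] - sum(lagged) / 60)
print(f"largest |last second - minute mean| = {max(abs(d) for d in diffs):.3f} C")
# With a ~60 s time constant the end-of-minute reading tracks the minute mean
# closely; exactly how closely depends on the assumed drift and noise.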

Tim Gorman
Reply to  Nick Stokes
May 15, 2023 2:37 pm

“Do you ever read? They don't average”

“The Bureau has explained this as reasonable because automatic weather station (AWS) temperature systems have a response time that means each measurement is not instantaneous, but an average smoothed over 40–80 s (Bureau of Meteorology 2017).”

Do you see the word “AVERAGE” in that statement somewhere? I do!

If the thermal response is the same as a mercury thermometer then what does the averaging over 40-80s do?

You are *still* trying to have your cake and eat it too!

Nick Stokes
Reply to  Tim Gorman
May 15, 2023 3:32 pm

“Do you see the word AVERAGE”
They do not perform a numerical average. That is simply a descriptor to say that thermal inertia smooths the response, just as it does with a LiG.

karlomonte
Reply to  Nick Stokes
May 15, 2023 6:24 pm

Typical Nitpick Stokes gaslighting…

And you ran away from Tim’s main point fast and far.

Tim Gorman
Reply to  Nick Stokes
May 16, 2023 4:17 am

“an average smoothed over 40–80 s”

Tell me again who can’t read?



karlomonte
Reply to  Tim Gorman
May 16, 2023 6:16 am

And spammed at least 20x.

Bill Johnston
Reply to  old cocky
May 15, 2023 1:41 pm

Good morning old cocky,

Although I quoted the second sentence of Ayers, his paper flows directly to Ayers and Warne and I don't see this as inconsistent. It does not matter, under the circumstances, whether 'averaged' or 'attenuated': the papers show the outcome is the same.

The air being monitored is also not the same from one minute to the next, and its variability is not smoothed either. The only real difference is that while both instruments (thermometers and PRT) are monitoring continuously, only the PRT is able to track air-temperature variation in real (attenuated) time. No one can observe mercury bumping up and down at 1-minute intervals, and a Max thermometer can only ratchet up. It does not go back down again until reset.

While we can conceptualise all sorts of things, until backed-up with data they are just hypotheses.

A question for JM is why she thinks outliers at Brisbane AP point to a fault with the probe. It could equally be the thermometer. More likely, the instruments cannot sample the same parcels of air 100% of the time, and the response is in the medium being monitored, not in either of the instruments. She is so adamant and half-cocked that she has not considered those possibilities. Furthermore, faced with those questions, she will try to degrade the messenger, not debate the message, which is a disgrace.

She does not have to reply, so she won't; she will move on, leaving the issue in limbo for another post on another day.

I have attached minute-to-minute differences at Bundaberg for 28 February 2017. While the raw data consist of an average smoothed over 40–80s, the temperature of the medium (the air circulating inside the screen) is instantaneously (and insanely) variable. Real attenuated data – and don't forget the probe is going through a warming-cooling cycle.

If they were there, thermometers would see this too, but they would not record it. Looked at closely, many minute-to-minute changes exceed JM's couple of ±0.7 degC outliers. So why would those outliers and differences necessarily reflect a problem with the probe?

Yours sincerely,

Bill Johnston

http://www.bomwatch.com.au

28FebBundaberg.JPG
old cocky
Reply to  Bill Johnston
May 15, 2023 3:36 pm

Although I quoted the second sentence of Ayers, his paper flows directly to Ayers and Warne and I don't see this as inconsistent. It does not matter, under the circumstances, whether 'averaged' or 'attenuated': the papers show the outcome is the same.

Ayers and Warne is either cross-validation or self-justification of the method used in Ayers, depending on one’s degree of cynicism.
There are differences between the results, albeit probably not statistically significant.

While we can conceptualise all sorts of things, until backed-up with data they are just hypotheses.

Aye, there lies the rub.

I have attached minute to minute differences at Bundaberg for 28 February 2017. 

That certainly demonstrates one of the advantages of higher sampling rates.

Bill Johnston
Reply to  old cocky
May 15, 2023 3:54 pm

They use two totally different separately curated datasets.

As I just said to Tim Gorman: "The graph for Bundaberg shows that the highest minute-to-minute variation coincides with the warmest part of the day (about 840 minutes, or 1400 hrs, or 2 pm). Plucking out the Max is the challenge, and if you pluck it out near the front of the screen, it's probably a bit different near the back of the screen. That is where the instrument-to-instrument variation happens. In the same screen does not mean sampling precisely the same 'air'."

This is becoming boring and repetitive and I’m going fishing!

Cheers,

Bill

old cocky
Reply to  Bill Johnston
May 15, 2023 4:36 pm

They use two totally different separately curated datasets.

and different methods, if you’re referring to the Ayers and Ayers & Warne papers.

if you pluck it out near the front of the screen, it's probably a bit different near the back of the screen.

Aren’t the screens designed to give excellent circulation? It’s been ages since we had them at high school, but that seemed to be one of their features. There would be a bit of a time delay, depending on the breeze. Would the location within the screen have an effect as far as the sun heating the screen as well? Wood is a good insulator, but it isn’t perfect.

I’m going fishing!

Good idea. It’s always good to get out and do real things. I hope you catch a couple of nice feeds.

Tim Gorman
Reply to  old cocky
May 15, 2023 4:43 am

Either the sensor is *NOT* duplicating the response time of a mercury thermometer or the averaging is a waste of time.

So which is it? Is the Pt sensor engineered to the response characteristics of the mercury thermometer or is the averaging an attempt to duplicate it in software?

Bill Johnston
Reply to  Tim Gorman
May 15, 2023 2:02 pm

Dear Tim, as most of this stuff is on the Bureau’s website in different places, instead of relying on JM, you could find all that out and report back.

b.

Tim Gorman
Reply to  Bill Johnston
May 15, 2023 2:46 pm

When I get two mutually exclusive stories on how things are done I don’t need to go find out which one is right in order to know one isn’t!

karlomonte
Reply to  Bill Johnston
May 15, 2023 6:23 am

“an average smoothed over 40–80s …. or have I said that?”

You’ve been screaming this over and over in bold, asserting without evidence that this proves the two techniques are identical.

Bill Johnston
Reply to  karlomonte
May 15, 2023 2:35 pm

Evidence is in the papers; you could start with them. Especially read the second sentence in Ayers. I don't have all day to be beaten around by people who ignore the evidence, then claim it does not exist. Hence my earlier frustrations. And I'm not screaming, I'm re-stating a fact.

All the best,

Bill

karlomonte
Reply to  Bill Johnston
May 15, 2023 6:26 pm

Now you are into clown territory.

Bill Johnston
Reply to  siliggy
May 14, 2023 10:16 pm

Yes Lance,

Array temperatures were used in the Ayers paper and you can probably purchase them if you want. However, temperatures reported to the database (the temperatures you reported earlier) are all an average smoothed over 40–80s and reported at the end of a single minute-cycle. This has all been discussed with you before and before that.

Why not read the Ayers paper, particularly where it states that they record an average smoothed over 40–80s, reported at the end of a single minute-cycle.

All the best,

Bill

siliggy
Reply to  Bill Johnston
May 15, 2023 8:12 am

Another vomit of meaningless prattle Bill. I do feel sorry for anyone who thinks you have a clue what is going on here.

Bill Johnston
Reply to  siliggy
May 15, 2023 2:09 pm

Bless your arguments with data Lance.

Why not also read the Ayers paper, particularly where it states that they record an average smoothed over 40–80s, reported at the end of a single minute-cycle.

Kind regards,

Bill
