Averaging Last Seconds Versus Bureau Peer-Review

From Jennifer Marohasy’s Blog

Jennifer Marohasy

Averaging by its very nature smooths: removing peaks and troughs. Temperature data tends to be cyclical, whether on a one-minute or thousand-year scale. The Australian Bureau of Meteorology has made a habit of smoothing when it is convenient and using extreme values otherwise. Take its one-minute temperature data from Canberra Airport: super-sensitive electronic equipment now records the highest, lowest, and last second of each minute, and reports the highest second as the daily maximum temperature. Back in 2019 I purchased some of this data to test the Bureau’s claim that averaging the data would make no difference. I found that averaging the last one-second of each minute always gave me a lower maximum temperature. This is because the difference between the highest and the last second could typically be 0.7 degrees Celsius, as shown in Figure 17, which is from a comprehensive report I co-authored in 2020. I have so far been unable to get this report published in a suitable peer-reviewed journal, perhaps because it contradicts the Bureau’s much-lauded Ayers and Warne (2020) analysis, which comes to the opposite conclusion.

From an unpublished report that I co-authored back in 2020, entitled ‘One Minute Surface Air Temperature Observations – Canberra, Melbourne and Adelaide’

The Bureau claims there is no need to average all 60 seconds in each minute, as recommended by the World Meteorological Organisation, when using resistance probes hooked up to data loggers. Ideally the Bureau would at least collect each of these seconds and test this claim, but it never does. It was after meeting with Carl Otto Weiss for a drink at the Sunshine Beach Surf Club back in 2017 that I decided to at least test the concept by averaging the last second of each minute. This data can be purchased from the Bureau, at some cost and with some delay.

In September 2017, I did meet with Carl Otto Weiss. He is an Advisor to the European Institute for Climate and Energy and a former President of the German Meteorological Institute, Braunschweig. He was not particularly interested in my work on how the Australian Bureau of Meteorology measures temperatures; he had come to Noosa to meet with me and John Abbot to discuss our research newly published in the journal GeoResJ on the application of artificial intelligence to evaluating anthropogenic versus natural climate change (GeoResJ, Vol. 14, Pgs 36-46, published in July 2017).

Our GeoResJ paper had been pilloried on Twitter, and we had been defamed by Graham Readfearn in The Guardian. So it was a relief that, contrary to everyone else in mainstream climate science at the time who wanted our GeoResJ paper retracted, destroyed or burnt, Otto Weiss praised it.

He thought it a most wonderful contribution to science, showing not only what many suspect, that natural climate cycles drive the more significant changes in temperature over hundreds and thousands of years, but most importantly how the latest advances in artificial intelligence could be used to quantify these effects.

I knew that Otto Weiss had a particular interest in measurement, after all, he had just attended the Australasian Measurement Conference (MSA2017) in Brisbane with Jane Warne from the Australian Bureau of Meteorology.

I wanted to know what he thought about the Bureau recording Australian temperatures as the highest, lowest and last second in every minute rather than taking the average of all the seconds over each minute.

The World Meteorological Organisation recommended that, with the transition to more sensitive resistance probes hooked up to data loggers, sampling is best averaged over at least one minute, to maintain some consistency with temperatures historically measured by mercury thermometers, which have more inertia.

At that time the Bureau had just finished and published its ‘Review of the Bureau of Meteorology’s Automatic Weather Stations’ in direct response to a front-page article by Graham Lloyd in The Australian newspaper on 1st August 2017. That article, with a photograph of Lance Pidgeon and me at the Goulburn airport, explained that the Bureau had been forced to admit it had been caught out setting a limit of minus 10 degrees Celsius on how cold temperatures could be recorded; the limit had been in place for some 15 years, since the transition to data loggers with their pre-programmable algorithms.

Side-stepping the issue of the cold limits, Otto Weiss queried whether it really was the case that the Bureau took spot-readings, rather than numerically averaging. I showed him the Bureau’s newly published AWS review and quoted from page 22, where it explains:

One-minute maximum air temperature is the maximum valid one-second temperature value in the minute interval.

I also explained that the Bureau takes the lowest one-second spot reading as the minimum, but that until recently the Bureau had set a limit of minus 10 degrees Celsius on how cold a temperature could actually be recorded.

I explained that the Bureau also records the last one-second temperature value in each minute interval. Otto Weiss suggested this was perhaps the most useful value. If the Bureau’s new resistance probes with data loggers had time constants that accurately mimicked mercury thermometers, as the Bureau claimed, then this could be tested by averaging the last second recorded in each minute.

Perhaps the highest of these would then be recorded as the maximum temperature for each day? This is the method since used to calculate the daily maximum temperatures in the much-quoted paper by Jane Warne and Greg Ayers published in the Journal of Southern Hemisphere Earth Systems Science (Vol 70, Pgs 160-165) in 2020.

Except there are three key problems with their method, never mind the dearth of data they actually compare:

1. Ayers and Warne claim to compare this last second with the average of all 60 seconds in each minute, except they compared the last second with just 5 one-second values from each one-minute interval, incorporating the highest and lowest.

2. They used data from Darwin Airport (Site No. 14015), which is one of the 38 sites that still has mercury thermometers recording temperatures. So why not compare the last second from the probes with the value recorded from the mercury thermometer? If the objective of the Ayers and Warne study is to determine whether the time constant of the resistance probes is equivalent to a mercury thermometer, why not make a direct comparison?

3. While Ayers and Warne conclude that it is appropriate for the Bureau to record the value at the last second of each minute as satisfying WMO requirements, the Bureau never actually uses this value. To reiterate, the Bureau uses the highest one-second and the lowest one-second. It is nonsense and dishonest for Ayers and Warne to suggest otherwise.

According to page 17 of the AWS review:

The Almos DAS can provide one-second, one-minute, and 10-minute messages, as well as various other standard format meteorological messages.

So, the probe at Darwin Airport could have been reprogrammed to record a true one-minute average of all 60 one-second measurements. Then the comparison would at least have been consistent with WMO guidelines. This average could then have been compared with the manual recordings from the mercury thermometer at Darwin Airport, at least as a check of the WMO guidelines. Alas, and to reiterate, to justify the method currently used by the Bureau, Greg Ayers and Jane Warne would also have needed to make the comparison with the highest and lowest one-second value in each 24-hour period. Ayers and Warne never did this.

Yet, the Ayers and Warne paper has been held up as proof that temperature measurements from the Bureau’s probes in automatic weather stations are equivalent to readings from traditional mercury thermometers. Further, for me to suggest otherwise has been labelled a conspiracy theory.

Meanwhile, I can only characterise the Ayers and Warne paper as a ‘fake’ because it uses a different method of recording temperatures (the highest last-second of all the last-seconds each day) while claiming to be using the Bureau’s method, which records the highest second within any minute each day as the maximum temperature. Detail can be tedious, but in this case it is important. So I reiterate.

Nevertheless, Ayers and Warne are cited in The Guardian, by the Australian Broadcasting Corporation and by Agence France-Presse, as reason to disregard my concerns about the Bureau hyping maximum temperatures.

It was two years after Otto Weiss visited, in 2019, and after purchasing batches of the one-minute data for Canberra, Adelaide and Melbourne from the Bureau, that I tested Otto Weiss’s hypothesis, which is essentially the Ayers and Warne methodology of using the last second in each minute.

I co-authored a 27-page report that sets out our method, results and conclusions. We tested many more data points than Ayers and Warne, and in different ways. We could not calculate a proper minute average because the Bureau never collects every second of each minute. And we were unable to compare against a mercury thermometer, because the Bureau will not provide us with the parallel data for Canberra Airport, or for any of the other locations.

Like the Ayers and Warne paper, our analysis was ready for publication in 2020. Entitled ‘One Minute Surface Air Temperature Observation – Adelaide, Canberra, Melbourne’, it nevertheless remains unpublished. Unlike Ayers and Warne, I no longer have any colleagues willing to risk publishing me in a mainstream climate science journal. The last editor who published me had his journal shut down: GeoResJ was discontinued in 2018.

My co-author of this report, testing the last one-second hypothesis as discussed with Otto Weiss all those years ago, cannot be named. My co-author also lives in Australia, purportedly a secular democracy, but he risks losing his day job for assisting me with the analysis and report, given there is no tolerance of dissent in Australia when it comes to issues of science and climate change.

Our unpublished manuscript begins:

Resistance temperature detectors (RTDs) in the Australian Bureau of Meteorology (BoM) automatic weather station (AWS) network provide temperature data at a rate of 1 Hz (sample per second). For every clock minute, three surface air temperature (SAT) observations are recorded:
• T, the last one-second reading (taken at 00 seconds of each minute)
• Tmax, the highest one-second reading over the last 60 seconds
• Tmin, the lowest one-second reading over the last 60 seconds

The BoM, however, only publishes the daily extreme values and associated statistics, e.g. the monthly and annual means. The one-minute data can be requested from the BoM for a given station, typically at a cost and processing delay.

The BoM has published statements indicating that their RTD and historical liquid-in-glass (LiG) measurements are equivalent, and specifically that the response times are similar. Every one-second reading is viewed as a time-averaged value (integrating over the past 40 to 80 seconds), effectively describing the moving-average temperature leading up to the given second, due to the design of the RTD. High-frequency temperature fluctuations should therefore not be seen from second to second in the data, and also not from minute to minute (although more fluctuation could be expected at longer time scales).

Evidence that high-frequency fluctuations are indeed present in the measurements is given in this report, questioning the equivalence between RTD and LiG data. This can be seen by evaluating the time series consisting of all the last-second observations (a temperature series with constant sample spacing of 60 seconds), and also the difference between the last-second and extreme measurements (Tmin and Tmax) for every minute, which indicates the measure of fluctuation possible, as measured with an RTD, within one minute. ENDS.

This is technical speak for: let’s compare the last-second reading from the resistance probe (RTD) with the highest and lowest readings each minute, and with the average.
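For readers who want to see what that comparison involves, here is a minimal sketch in Python. The data below are synthetic and the variable names are my own assumptions for illustration; the real inputs would be the one-minute files purchased from the Bureau.

```python
import numpy as np

# Synthetic one-minute records for one day (assumed data, for illustration).
# In the real files, each minute has a highest, lowest and last one-second value.
rng = np.random.default_rng(0)
n_minutes = 1440
t_last = 20 + 5 * np.sin(np.linspace(0, 2 * np.pi, n_minutes))  # smooth diurnal cycle
t_max = t_last + np.abs(rng.normal(0, 0.3, n_minutes))          # highest second in each minute
t_min = t_last - np.abs(rng.normal(0, 0.3, n_minutes))          # lowest second in each minute

# The within-minute spread discussed above: how far the highest one-second
# value sits above the last-second value.
spread = t_max - t_last
print(f"typical spread: {spread.mean():.2f} C, largest spread: {spread.max():.2f} C")
print(f"daily max from highest seconds: {t_max.max():.2f} C")
print(f"daily max from last seconds   : {t_last.max():.2f} C")
```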

When we did the analysis for Canberra Airport – the example I am using in this note – we found that within the one-minute interval the difference between the last-second reading and the highest-second reading (the maximum temperature archived by the Bureau) was often more than 0.5 degrees Celsius, and sometimes as high as 2.1 degrees Celsius, as shown in Figure 2.3, chart B and the table at bottom left.

Figure 2.3 is from page 6 of my unpublished report. I am keen to get this published, should a reputable journal editor be prepared to take it on.

We concluded our analysis of the Canberra, Melbourne and Adelaide one-minute data with this comment:

The approach of the BoM to measure SAT [surface air temperature] is to record the highest, lowest and last second of every minute, as discussed before. The last-second data with the daily extremes are published and updated every 10 minutes on the “Latest Weather Observations” page for a given AWS [automatic weather station]. The data from the last 72 hours are updated every 30 minutes. The one-minute Tmin and Tmax data are also used to determine the daily ADAM Tmin and Tmax.

The WMO recommends averaging RTD [resistance probe] data over one minute. However, the BoM does not average at all, which is the reason for the spikiness of the data analysed in this report. Another example is shown in Fig. 16, displaying the last-second data observed at Canberra Airport (70351) on 17 Jan 2019.

If the WMO recommendation were followed, the BoM would provide the mean of 60 values — instead of only the single last value — for each minute. This would smooth the time series, similarly to what the averaging process depicted in Fig. 16 would do.

For illustration purposes, the moving average (MA) series over the last 5 samples (or 5 minutes, with only 1 sample per minute) is shown over the spiky last-second data. Although this illustration is not perfect (more data is needed to smooth over every minute), it does show that the daily Tmax would likely be lower, as it would be based on an average and not an instantaneous observation. ENDS.

Numerical averaging will drop the daily maximum temperature by almost a full degree Celsius relative to taking the last second in each minute, and by more than a degree relative to recording the highest one-second in each minute.
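As a sketch of the arithmetic behind that claim, the snippet below (again with assumed synthetic data) computes a daily maximum three ways: from the highest second in each minute, from the raw last-second series, and from a 5-sample moving average of the last-second series, as in Figure 16. The averaged maximum can never exceed the raw one, since every averaged value is a mean of values no larger than the series maximum.

```python
import numpy as np

def daily_maxima(t_last, t_max, window=5):
    """Daily maximum via (a) highest second per minute, (b) the raw
    last-second series, and (c) a moving average of the last-second series."""
    kernel = np.ones(window) / window
    ma = np.convolve(t_last, kernel, mode="valid")  # 5-minute moving average
    return t_max.max(), t_last.max(), ma.max()

# Assumed noisy last-second series for one day, for illustration only.
rng = np.random.default_rng(1)
t_last = 25 + 0.3 * rng.standard_normal(1440) + 5 * np.sin(np.linspace(0, np.pi, 1440))
t_max = t_last + np.abs(rng.normal(0, 0.4, 1440))

hi, raw, smoothed = daily_maxima(t_last, t_max)
print(f"highest-second: {hi:.2f} C, last-second: {raw:.2f} C, 5-min MA: {smoothed:.2f} C")
```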

It is Bureau policy to record the highest one-second in each minute and the highest of these becomes the maximum temperature for that day for that location.

ADAM is the value archived by the Bureau as the maximum for that day. Data Tmax is the last one-second reading for those minutes, and the red line (MA) is the moving average over the last five one-second readings (one per minute). This is from page 22 of my report, which I would like published in the peer-reviewed climate science literature.

This last chart (Figure 16) from my unpublished report shows that, contrary to the hypothesis of Carl Otto Weiss, which is also a central thesis of the fake paper by Greg Ayers and Jane Warne, recording just the last second of the minute is not equivalent to the numerical average of even just the last five one-second readings. At least this was the situation at Canberra on 17th January 2019.

Lance Pidgeon, who often signs comments at blog threads as Siliggy, with me at the Goulburn Airport weather station late July 2017.

This is part 6 of ‘Jokers, Off-Topic Reviews and Drinking from the Alcohol Thermometer’. In part 7 I will explain why it is imperative that Greg Ayers and Jane Warne provide the A8 reports for Darwin Airport for April 2018 – that is the parallel data on which their analysis is based. The highest, lowest and last second records for each minute for the months of March, April and May of 2018 also need to be made public. You can read some of my criticism of Warne and Ayers at the popular climate blog WattsUpWithThat.com. I am grateful to Anthony Watts and Charles Rotter for republishing this series.

279 Comments
Ed Zuiderwijk
May 16, 2023 2:11 pm

How about shifting the boundaries of each minute by 30s as an internal check?

mleskovarsocalrrcom
Reply to  Ed Zuiderwijk
May 16, 2023 2:14 pm

They’ve probably tried every iteration and ended up with the one that suits them best.

ferdberple
Reply to  Ed Zuiderwijk
May 17, 2023 7:53 am

There is an interesting quirk with numbers. Take any series of numbers. Look at the first digit. In almost every series “1” is the most common first digit.

Now explain why 59 seconds was chosen as the sample point? How is that superior to 58?
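(The first-digit quirk ferdberple mentions is Benford’s law; a minimal check in Python, using a geometric series as an assumed test set:)

```python
from collections import Counter

# First digits of a geometric series (powers of 2) follow Benford's law:
# the digit 1 leads roughly 30% of the time.
first_digits = Counter(str(2 ** n)[0] for n in range(200))
for digit in sorted(first_digits):
    print(digit, first_digits[digit])
```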

Reply to  ferdberple
May 17, 2023 8:46 am

It isn’t superior. If the response time of the sensor can’t keep up with the measurand then just pick a point and record it – just like with an LIG. Then specify what the uncertainty interval is that surrounds that measurement. Climate science keeps wanting to assume there is no uncertainty in their measurements – thus they can calculate differences out to the hundredths digit when the sensors are only good to the tenths digit at best!

Reply to  ferdberple
May 17, 2023 10:10 am

“Now explain why 59 seconds was chosen as the sample point?”
That would be because the older automatic weather stations were taking a measurement just once per minute. Memory was expensive and telemetry more difficult in the 1970s. Unlike a glass thermometer, which never stops recording until it reaches the highest or lowest extreme, the electronics take a pause between readings to save on memory usage and telemetry usage. Binary numbers start at zero, so 59 is actually one minute gone by; that is, one minute between readings of signal and noise. The shift to reading more often became easy with cheaper memory and better comms. Reading the thermometer more often allows better differentiation between noise and real signal, if the readings are averaged or integrated over a similar time to what the old thermometer inherently did. Problem is, the BoM do not average to remove electrical noise.

Reply to  ferdberple
May 17, 2023 10:20 am

Why not 30″.

old cocky
Reply to  Jim Gorman
May 17, 2023 2:32 pm

It obviously should be 42.

bdgwx
Reply to  old cocky
May 17, 2023 2:56 pm

Pfft…everyone knows it should be 17.

old cocky
Reply to  bdgwx
May 17, 2023 3:27 pm

The white mice said they’re busy formulating the current question, but the dolphins may be able to help with the new one.

mleskovarsocalrrcom
May 16, 2023 2:11 pm

Anything/anybody that contradicts the AGW narrative is chastised, ostracized, then ignored. It makes no matter what the proof is.

May 16, 2023 2:42 pm

You simply cannot record 1 second data to find the maximum and minimum during a minute period and also claim that the Pt sensor has the same response time as a mercury thermometer.

A mercury thermometer typically takes 2 seconds to reach 67% of a step change in temperature. Reaching equilibrium, assuming no further change in the measured temperature, is usually assumed to take 5 times the 67% response time, or 10 seconds. That’s because the response is exponential and it takes a long time for the asymptotic curve to settle.

It is simply impossible for the Pt sensor to be physically designed to have the same physical response as a mercury thermometer while also providing fast-response times in the 1 second range.

Averaging won’t fix this. If the Pt sensor reports higher (or lower) temperatures at 1-second intervals than a mercury thermometer would, then the average of those 1-second Pt readings will *not* be what the mercury thermometer would display. It will be something different. There would be some kind of an offset due to the higher (or lower) temperatures recorded by the Pt sensor. If the Pt sensor is truly designed to physically replicate a mercury thermometer’s response time, then taking 1-second readings is meaningless.

You can have a fast response or a slow response from the same physical device, but you can’t have both at the same time.

Taking the last 1-second reading in a minute doesn’t help either. If the temperature changed in the 58th second then the mercury thermometer would not have enough time to fully respond. A Pt sensor with the same response wouldn’t either.

A slow response time doesn’t provide “averaging”. It just provides a lagging indicator of what is being measured. If you get a step change at time zero with no further change, you won’t get an equilibrium reading for 10 seconds. The temperature readings between 0 and 10 seconds aren’t an “average”. The average of an exponential curve isn’t the mid-value. You have to integrate e^(ax) over the time period and divide by the time to get the average.

Jeeshh folks, has anyone ever looked at the response of a capacitor to a step change in voltage on an oscilloscope? The response time of a mercury thermometer is very much the same!
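The exponential behaviour described above is easy to sketch numerically. Below is a minimal simulation of a first-order sensor responding to a step change, with an assumed 2-second time constant: the reading hits about 63% of the step after one time constant, about 99% after five, and the time-average over the settling period is nowhere near the midpoint.

```python
import numpy as np

tau = 2.0                        # assumed time constant, seconds
t = np.linspace(0, 5 * tau, 1001)
step = 1.0                       # unit step change in the true temperature

# First-order lag response to a step: y(t) = step * (1 - exp(-t / tau))
y = step * (1 - np.exp(-t / tau))

print(f"reading at t = tau   : {y[np.searchsorted(t, tau)]:.3f}")  # ~0.632
print(f"reading at t = 5*tau : {y[-1]:.3f}")                       # ~0.993

# The true time-average requires integrating the exponential,
# not halving the min-max span:
avg = np.trapz(y, t) / (t[-1] - t[0])
print(f"time-average 0..5*tau: {avg:.3f} (midpoint would be {step / 2:.2f})")
```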

fansome
Reply to  Tim Gorman
May 16, 2023 3:22 pm

What you described is called a first-order lag response. It can be mimicked with an R-C circuit with a 2-second time constant. As you said, 5 time periods are required to obtain a 99+% response. The rate of change of a first-order lag is proportional to the difference between the input and the output. A large time constant could be used to remove those multi-minute oscillations and get a smoother, if delayed, response.

These are used in airplane simulations and flight control systems.

Reply to  fansome
May 16, 2023 4:56 pm

dont confuse him with facts

michael hart
Reply to  Steven Mosher
May 16, 2023 6:03 pm

Mosher misunderstands two consecutive posts.

Reply to  Steven Mosher
May 16, 2023 6:28 pm

I can’t believe the number of people in climate science who believe the average of an exponential is the midpoint between the max and min.

And you speak of facts?

You don’t even know that a low-pass filter is not an averaging circuit!

Reply to  Tim Gorman
May 16, 2023 10:03 pm

Tim, your figure of 2 seconds may be possible in fast-moving water, but I doubt it. In air you have times that are much longer, like 40 to 150 seconds: 40 being for when the mercury thermometer is in wind moving at around 5 metres per second, 150 when the wind stops.

Reply to  siliggy
May 17, 2023 6:13 am

My figures are for a LIG thermometer dunked in a water bath. I figured that was the best the thermometer could do. It is based on the convection transfer rate, and the convection transfer rate from an air/glass interface would be totally different. I wouldn’t be surprised at all to see 40s to 150s for the response time. The actual equilibrium time would then be at least 5 times that long, or 200s to 750s (about 3 min to 12 min). If the air temp doesn’t stay constant for that length of time then the thermometer will *never* actually catch up with the temperature change. It would totally miss the actual Tmax value because the temp would already be on the way down before the thermometer could respond to the maximum!

If they have actually designed the Pt probes to the same specs then they are only fooling themselves with 1s, 10s, or even 1min data. I think Feynman said something about that but I don’t remember it exactly.

Reply to  Tim Gorman
May 17, 2023 10:20 am

“I wouldn’t be surprised at all to see 40s to 150s for the response time.”
“Response time” needs to be clearly defined, and it is usually taken as 99.3%, or five time constants.
A time constant is always 63.2%.
Only the range of thermometer time constants in air matters for the weather station. That is, air from 0 to 100 humidity and from 0 to whatever wind speed can happen inside the thermometer box.
The sampling rate needs to be able to take more than enough readings to join the dots correctly along all those plotted lines. Sampling rate should never be mixed up with response time. Two different things entirely. Whenever you think of “sampling rate”, just remember the game of join the dots. More samples = a clearer picture and a better average.

Reply to  siliggy
May 17, 2023 2:23 pm

The response time is the time to respond, Lance; it refers to particular instruments and it is in seconds, not % (https://www.thermoworks.com/thermometry101-basic-concepts-speed/, first-page pick out of 17,000,000 hits in 0.4 seconds). So there are all sorts of combinations, but only one kind of degC Tmax met-thermometer. It does not matter what it does every second, because it only produces one observation per day.

In collaboration with suppliers, the BoM has tried to match that response, and I don’t doubt you have instrument reports that show that. Expecting two values to absolutely agree 100% of the time is fantasy science. For example, JM could check reset values against the 1900 dry-bulb, remembering of course, as an experienced observer would, that the two values would not be simultaneous.

(My routine was DB, WB, Max then Min, check the chart recorders, close the screen, do the rain, then wind, cloud and all the rest, and finally pan Ep. About 10 minutes to do, then fill out the book etc; about 20 minutes to half an hour out of the day.)

I much doubt 40 to 150s. I have done the same as Tim with a met thermometer. But a screen does not behave like a water-bath or ice-bath.

Furthermore, an eyeball would be hard pressed to sample a dry-bulb thermometer accurately at the rate of a PRT probe. The met-thermometer is also only calibrated at 1/2-degree increments. So the theoretical uncertainty of a single observation is 0.25 degC. This rounds up to 0.3 because the second decimal is impossible to estimate, because met-thermometers are only calibrated at 1/2-degree increments (or did I say that!) …..

Add to that observer uncertainty (parallax etc) and the thermometer starts to seem a bit subjective. Then if it’s hissing down rain, or the observer is running late because the kids were crook … there is the human factor; writing numbers down on an A8 form inside a plastic bag and holding an umbrella in the wind with your third hand! I know, I did it on-and-off, weekends and all, for a decade.

So why is a met-thermometer the ‘perfect’ instrument?

Why not a PRT probe smoothed over 40–80s (Bureau of Meteorology 2017) so it mimics the thermometer?

What is the squabble about, in fact? Oh that’s right. JM is miffed that her paper was not published? Well, join the club. They try not to publish bad stuff and they won’t publish stuff that rocks the boat. That is why, with a colleague, I started http://www.bomwatch.com.au and why we also publish the data we use. JM could publish her data too, so we could all chew on the same bone!

Yours sincerely,

Bill Johnston

http://www.bomwatch.com.au

bdgwx
Reply to  Bill Johnston
May 17, 2023 2:53 pm

The response time is the time to respond, Lance; it refers to particular instruments and it is in seconds, not %

siliggy isn’t saying the time constant is a percent. What he is saying is that the time constant is the amount of time it takes to adjust to 63% of the change. I’ll call that T63. WMO defines response time as the amount of time it takes to adjust to 95% of the change. T95 = 3 * T63. siliggy defined it as T99 = 5 * T63. Both are equally valid. You just have to clearly define what you mean by “response time” like siliggy said. BTW…you can find more information about the concept of e-folding and how you can convert between specific time constant and adjustment percent values here.
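These conversions all fall out of the same exponential: the time to reach a fraction p of a step is t = −τ·ln(1 − p). A minimal check in Python:

```python
import math

def time_to_fraction(tau, p):
    """Time for a first-order sensor to reach fraction p of a step change."""
    return -tau * math.log(1.0 - p)

tau = 1.0  # work in units of one time constant
print(f"T63 = {time_to_fraction(tau, 0.632):.2f} tau")  # ~1
print(f"T95 = {time_to_fraction(tau, 0.950):.2f} tau")  # ~3
print(f"T99 = {time_to_fraction(tau, 0.993):.2f} tau")  # ~5
```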

Reply to  bdgwx
May 17, 2023 5:47 pm

Yes, I get that, bdgwx. It comes out of the [Burt & Podesta 2020] paper and other publications. However, if you are buying a thermometer, as here (https://www.thermoworks.com/thermometry101-basic-concepts-speed/), you specify seconds, not %.

The perennial problem is to balance the PRT probe so it behaves like a LiG (specifically a met-LiG) MINUS the odd spike. Averaging post-hoc is one way; the other is to choose a probe that attenuates over 40 to 80 seconds, or whatever it is.

The question is whether Ayers and Warne achieved that, and my assessment is that their verification experiments did meet those objectives.

To my mind, the myths in the alternative argument are that end-of-minute readings are instantaneous (and therefore should be averaged); and that the BoM selects within-minute max and min as the highest and lowest for the day.

For the max/min thing, while I’m not rock-solid (I can’t point to a definitive reference on the spot), the evidence I have points to them using the highest and lowest end-of-minute values, not within-minute values, as max and min. (Because they record it does not mean they use it.)

I don’t want to be chasing anymore rabbits right now, so I’ll leave e-folding for another day.

Thanks,

Bill

Reply to  bdgwx
May 18, 2023 6:20 am

Perhaps reading some Tektronix documents on oscilloscope bandwidth would help explain this.

Tek says that the bandwidth you need in an oscilloscope is 5x the highest frequency signal to be measured. And even then you will see a +/- 0.2% uncertainty in your reading.

That uncertainty interval grows larger as your scope bandwidth goes down. Things like RC low-pass filters limit bandwidth, thus trying to filter your data results in limiting the bandwidth (i.e. the signal details) of the data, thus increasing the uncertainty interval of your results.

This exact same thing applies to temperature measurements. Limiting bandwidth raises uncertainty. It’s why you simply should not try to emulate LIG measurement devices with sensors that are more capable.

But then, in climate science, all uncertainty cancels so I guess it isn’t any big deal.

Reply to  Tim Gorman
May 18, 2023 12:48 pm

It is one reason the expertise of experimental physicists and chemists like Pat Frank is so necessary. They have the knowledge and background in making proper measurements and have incorporated advancing technology into their work.

Climate scientists appear to not have the desire to move on to new techniques and analysis.

Is anyone aware of a council or organization in climate science that is dealing with new technology, techniques, and protocols necessary to move the physical part of the science forward? It seems like modeling has become the be all and end all of doing climate science.

Reply to  siliggy
May 17, 2023 4:40 pm

My take on response time is based on my electrical engineering training. The response time of an oscilloscope is the time it takes for a square wave to reach 67% of the amplitude of the square wave. The time to actually reach equilibrium is 5 times that value.

Sampling and response time are very much intertwined. Even digitally sampled oscilloscopes have a bandwidth limit on their sampling – it is based on the response time of the circuitry doing the measurements. Think of a faster processor in your computer: a faster processor lets you sample with a decreased time interval compared to a slower processor. If you try sampling faster than the circuitry has bandwidth to handle you get things like artifacts and aliasing. That’s what happens when you try to take 1-second readings from a measuring device whose response time is 2 seconds (or more). What you think you got isn’t really an accurate picture of reality.
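A minimal numerical illustration of that aliasing point, with assumed frequencies: sampling a 9 Hz signal at 10 Hz (below the Nyquist rate of 18 Hz) yields samples that trace out an apparent 1 Hz wave, an artifact rather than the real signal.

```python
import numpy as np

f_signal = 9.0    # Hz, the fast 'true' fluctuation (assumed)
f_sample = 10.0   # Hz, an inadequate sampling rate (assumed)

t_samples = np.arange(0, 2, 1 / f_sample)
sampled = np.sin(2 * np.pi * f_signal * t_samples)

# The samples exactly follow -sin(2*pi*1*t): a 1 Hz alias of the 9 Hz signal.
alias = -np.sin(2 * np.pi * 1.0 * t_samples)
print("max mismatch between samples and 1 Hz alias:",
      f"{np.max(np.abs(sampled - alias)):.2e}")
```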

Reply to  siliggy
May 17, 2023 1:22 pm

Reference, Lance? Or presumption?

Bill

bdgwx
Reply to  Bill Johnston
May 17, 2023 1:49 pm

siliggy’s estimates are consistent with [Burt & Podesta 2020]

Reply to  bdgwx
May 17, 2023 2:47 pm

True that, thanks. Sadly, also, as they explain, an associated wind speed is required, so junk numbers like the one below that give no clue to either time constant or wind speed are less than useless: “smoothed over 40–80s”. Every time you see that silly, meaningless 40–80s, realise that it is not enough to figure anything out at all. Rather it is proof of brain dead regurgitation.

Reply to  siliggy
May 17, 2023 4:38 pm

Rather it is proof of brain dead regurgitation.

Amen, this has become painfully obvious.

Reply to  siliggy
May 18, 2023 6:02 am

+100!

Reply to  bdgwx
May 17, 2023 5:03 pm

Good, so we have a reference; we also have Greg Ayers, and Ayers and Warne (Dr Jane Warne used to run the Bureau’s metrology lab, so she is pretty sharp about these issues).

I also have no doubt Lance has read the paper and could have even quoted his stats from it. I have seen it before too, so it is a handy refresher, even if they do things differently to what the BoM does.

The paper also makes the very important point that: “Of course, the pursuit of shorter and shorter time constants to enhance sensor responsiveness in meteorological measurements of air temperatures is desirable only up to a point. Unlike wind speeds, for example, there is little benefit in sampling air temperature every second outside of specific research applications, such as turbulence or eddy-correlation measurements. Very fast-reacting sensors could result in higher fluctuations, increased thermal noise and relatively greater impacts from other environmental factors, such as rapid changes in wind speed or solar radiation.” Touché.

Thanks, bdgwx, more grist!

Lance’s retort below, “less than useless … ‘smoothed over 40–80s’”, is actually what goes on in a screen. The instrument smooths over 40–80s so it mimics LiG thermometers exposed to the same turbulent-or-not conditions.

Bill

Reply to  Bill Johnston
May 18, 2023 9:45 am

“Very fast-reacting sensors could result in higher fluctuations, increased thermal noise and relatively greater impacts from other environmental factors, such as rapid changes in wind speed or solar radiation.”

These are exactly the things that should be captured!

“Thermal noise” needs defining, but noise is usually dealt with in the uncertainty calculations. Rapid changes in wind and solar are part of what define the temperature distribution throughout a period of time. In order to get any idea of how to model these variations, one must measure their impact first.

This is just a reason to maintain traditional LIG measurements and calculations by trying to emulate LIG thermometers.

Climate scientists should embrace new technology and develop new protocols that will maximize the usefulness of the newer technology. Restricting the newer technology to “old methods” is simply being “sticks in the mud”!

Reply to  Jim Gorman
May 18, 2023 4:17 pm

This is really funny. A time constant is always 63.2%. Just the same as there are 60 minutes in an hour.

Reply to  fansome
May 16, 2023 6:20 pm

Nope. You may emulate the exponential response but that is not averaging. The average of an exponential requires integration not just straight addition like you do for an average.

The issue is not trying to remove high-frequency noise to get a “smoother response”, which is what you are suggesting.

The issue is trying to duplicate a mercury thermometer.

Reply to  Tim Gorman
May 17, 2023 1:19 pm

A mercury thermometer typically takes 2 seconds to reach 67% of a step change in temperature. To reach equilibrium, assuming no further change in the measured temperature, is usually assumed to be 5 times the 67% response time or 10 seconds. That’s because the response is exponential and it takes a long time for the asymptote curve to settle.

I would be interested in a reference for this, Tim. Also, do you mean a meteorological thermometer or a thin laboratory thermometer?

While it would be logical to assume the response was exponential, the actual changes in T being measured are mostly in the tenths of a degC range.

Cheers,

Bill Johnston

Reply to  Bill Johnston
May 18, 2023 5:18 am

Do a search for “Response Time of a Thermometer” by Volker Thomsen, Spectro Analytical Instruments

The response time of a measuring device doesn’t determine the rate of change in what is being measured. It doesn’t matter what the rate of change in the measurand is; the measuring device must have a higher bandwidth than the measurand’s rate of change in order to be accurate. It’s why you don’t measure a 100 MHz signal with a scope that has a 20 MHz bandwidth (i.e. its response time).

Giving_Cat
May 16, 2023 2:42 pm

> “Back in 2019 I purchased some of this data to test the Bureau’s claim”

Needing to purchase data you’ve already paid for should have been the first red flag.

A mercury thermometer might notice jet exhaust transients, but a digital ’couple would absolutely notice. That alone makes any continuity in the temperature record suspect. And while we are at it, has anyone done an airport activity versus temperature analysis? Two weeks ago at Newark we were #22 in the line for departure; because of a “service issue” we were a couple of hours late leaving the gate. Because Jet Blue tried to save some money with a light fuel load, we were forced back to a gate to take on more kerosene, then re-entered the wait line. Any idea what the recorded tarmac air temperature was, with all those planes milling about for all those hours?

sherro01
Reply to  Giving_Cat
May 16, 2023 4:32 pm

Cat,
I have never seen a broad calculation that starts with the area of the airport, then a volume to a series of altitudes above it, heated by combustion of fuel for aircraft in that volume.
My hypothesis is that insufficient fuel is burned on a typical day at a typical airport to cause a significant change on average to the air temperature.
I once started this exercise, but was warned off in serious words as if I was a saboteur intent on damage to airport fuel supplies. So I did not get numbers for airport fuel consumption.
The next stage was to move from average heating around the thermometer to anomalous heating, as with jet wash hitting the screen. This would respond best to actual measurements, not general scenarios.
Has anyone ever seen a study of temperature changes with distance from the jet engine in operational settings?
One wonders why there is so much chatter instead of measurement.
Links? Geoff S

Reply to  sherro01
May 16, 2023 4:58 pm

the only measurements ive seen were a japenese study that showed NO heat contamination.

why is simple. turbulant mixing

Reply to  Steven Mosher
May 16, 2023 10:02 pm

Japanese?
Turbulent?

Reply to  sherro01
May 16, 2023 6:32 pm

Fairly recently, an airline here in the UK started using electric/diesel ‘tractors’ to move aircraft from the loading bay/gate out to the start-point on the runway.
(Those heavy, square, chunky vehicles used to drag dormant aircraft around anyway – shunting engines, as similarly used on railways.)

The figure they claimed was that this would save 2 tonnes of kerosene, on average, for every aircraft departure…

  • or = roundabout 2,500 litres of actual fuel
  • or = roundabout 25 megawatt hours of energy

Just for rolling around on the ground waiting for take-off.

As I recall, that was the ‘thing’ that killed Concorde at Paris – its fuel load.
For that particular flight, the aircraft was totally loaded to the hilt with passengers (overweight Germans on a supersonic jolly) and baggage weight – and it was also going to be a long flight.
So, they crammed as much fuel into it as they possibly could – totally brim full.

The ground crew knew this was naughty – they knew they were supposed to leave some ‘air space’ in the aircraft’s tanks.

BUT they were under instruction to fully load it, and ‘somebody’ assumed/hoped that the craft itself would create that space through the amount of fuel it would typically use taxiing out and waiting for clearance to take off.

But its taxi went sweet as a nut, and without even having to put the brakes on, slow down or even stop, it got instant clearance to go go go. Go Concorde go!

So when its tyre exploded and hit the underside of the fuel tank, there was no free air space inside the tank to absorb the shock.
The perfect storm all came together and happened.

And all the rest – the aircraft, the Concorde, the people inside, the houses on the ground – are history.

Writing Observer
Reply to  sherro01
May 16, 2023 9:30 pm

NOTE: One location, a quite small set of data points. So, definite showing of correlation, but NOT proof of causation.

With that out of the way…

I was taking a business statistics class about the time that I encountered Anthony’s weather station project, and I needed to make a presentation (in PowerPoint, of course, pity me, please…). I decided to look at the maximum temperatures recorded at Tucson International Airport, and compare them to the readings taken at the (then still existing) Tombstone reporting station. (For those who don’t know, the Tombstone station was in the same place for more than a century – in a bare dirt lot, with no paving nearer than 300′, and very little masonry construction around it.)

Well, I found Mikey’s hockey stick very quickly in the TIA data. In Tombstone, though, the curve was essentially flat (a very slight decline, actually – but not of anything that you could call significant). Quite interesting, and that ended up in my presentation, to show that your statistical analysis is, at best, only as good as your data, and that what data you use can make a huge difference.

VERY curiously, though, I had an interesting “blip” in the curve for TIA – in September 2001, the monthly average DECLINED by more than three degrees Fahrenheit, compared to the average for all Septembers in the TIA data set. (All measurements for the presentation were in those units, not Celsius – I was presenting to a business class, not a science class.) Tombstone showed no such “blip.”

Intrigued, I dug up the daily maximums for both stations. Tombstone – daily maximums chugged right through the month, no significant variation from the averages for all dates in that set. TIA – same thing. UNTIL September 11th. Then there was a difference from the average for each September date of nearly four degrees lower, which only disappeared once commercial air traffic returned to some semblance of “normal” towards the end of the month.

Hmm. What happened on that date?

Now, I didn’t (and haven’t) dug any further. MIGHT be that a cold front moved in and lingered for three weeks. MIGHT be that rattled airport personnel didn’t record the data correctly, or at the same TOB that the other measurements in the data set were recorded.

But still a VERY interesting coincidence, wouldn’t you say?

Reply to  Writing Observer
May 16, 2023 9:48 pm

Nice work — note that all the trendologists are arguing up and down in this very thread that aircraft exhaust can’t affect measured air temperature data.

harryfromsyd
Reply to  Writing Observer
May 16, 2023 10:50 pm

It would be interesting to see if the various Covid travel bans around the world had similar effects on airport temperature data.

Writing Observer
Reply to  harryfromsyd
May 17, 2023 4:56 pm

I suspect that (assuming causation here) I would find nothing. The thing about 9/11 was that ALL planes were grounded – even with the lower volume of the CoViDiocy there were still planes using the runways, and it only takes one hot wash to spike the maximum reading.

TIA also has an Air National Guard unit in one part of the facility. They were ALSO grounded after 9/11; only Air Force in some critical parts of the country were flying. They were running as normal all through the “pandemic.”

One could argue quite reasonably that the instruments at airports and airbases are not fit for ANY purpose: neither for climate research, nor for their original purpose of reporting the temperature over the runways (temperature affects density, which affects lift, which affects the thrust required to either get off the ground or to stop before you run out of runway). Too close for the first, too far away for the second.

old cocky
May 16, 2023 2:44 pm

Just a pedant point here:

1. Ayers and Warne claim to compare this last second with the average of all 60 seconds in each minute, except they compared the last second with just 5 one-second values from each one-minute interval, incorporating the highest and lowest.

The 5-point average was used in the earlier Ayers paper. Ayers and Warne did average the available set of 1s readings in each minute.

Nick Stokes
May 16, 2023 2:51 pm

“While Ayers and Warne conclude that it is appropriate for the Bureau to record the value at the last second of each minute as satisfying WMO requirements, the Bureau never actually uses this value. To reiterate, the Bureau uses the highest one-second and the lowest one-second. It is nonsense and dishonest for Ayers and Warne to suggest otherwise.”

No, they use the highest of 60 for max, the lowest of 60 for min, and the last second for the regular logged data. Ayers’ latest paper begins:
“Bureau of Meteorology automatic weather stations (AWS) are employed to record 1-min air temperature data in accord with World Meteorological Organization recommendations. These 1-min values are logged as the value measured for the last second in each minute.”

But as Ayers (retired head of BoM) carefully demonstrated, it doesn’t matter. Pt wires do not respond as hastily as people here think. This is what a probe looks like:

[image: photograph of the probe]

It is a wire totally enclosed in steel, and the thermal inertia of the steel is designed to match that of a LiG. No heat gets to the wire without first warming the steel.

Reply to  Nick Stokes
May 16, 2023 3:01 pm

Then what good are the 1-second readings? The probe will always provide a lagging reading of the actual temperature, especially when it is changing rapidly. You can’t have your cake and eat it too!

Nick Stokes
Reply to  Tim Gorman
May 16, 2023 3:58 pm

Of course it takes a lagging reading. So does a LiG thermometer. BoM aims to match the lag. Jennifer’s complaint is that the probes are too responsive.

Reply to  Nick Stokes
May 16, 2023 4:23 pm

Answer the question, Stokes.

Reply to  karlomonte
May 16, 2023 9:54 pm

Read what Nick said karlomonte.

Reply to  Bill Johnston
May 17, 2023 12:52 pm

Request DENIED, SpammerBill.

Reply to  Nick Stokes
May 16, 2023 6:31 pm

Why can’t you address the issue that you can have a fast response or a slow response but you can’t have both?

Nick Stokes
Reply to  Tim Gorman
May 16, 2023 7:30 pm

They are not trying for both. They are trying to match the response time of a LiG. Not more, not less.

Reply to  Nick Stokes
May 17, 2023 3:59 am

Then why all the emphasis on “averaging”? If the response time is the same, why not just take a reading, ONE reading, just like you would do with a mercury thermometer?

Trying to read a mercury thermometer at one-second intervals will tell you nothing. It can’t respond that fast. Even at 10-second intervals it’s probably impossible to actually identify a change because of the resolution that is possible. The same thing applies to 1-minute readings. Because of scale resolution it can be difficult to identify changes even at 5-minute intervals; the change is almost certain to be within the uncertainty interval of the instrument.

So why would a Pt-based measurement device with the same response time as a mercury thermometer be any different? 1-sec changes simply get lost in the response time. 10-sec changes are only truly identifiable if the temperature is in equilibrium for at least that long. If the temperature is changing rapidly, such as at sunrise/sunset or at the front/back of a storm front, you’ll never catch up with the actual temperature; the measurement device will always be in catch-up mode.

When I speak of lag time, I am not just speaking of the amount of time to reach equilibrium, but also the ability of the device to accurately keep up with a changing environment.

Reply to  Tim Gorman
May 17, 2023 12:43 pm

Taking just one reading is the problem not the solution.
Remember an in-glass thermometer never stops. There are no gaps between readings because it is always reading. If the observer resets the maximum glass thermometer at 9am then at 9am and 29 seconds it is a little warmer, the mercury thermometer will catch it but a system that only that only takes one sample per minute will not. The mercury thermometer has an infinite sampling rate. Some people who live in the inverted reality cannot see this. The BoM did see this problem but did not come up with a solution just another detour from reality. They record readings often enough to have seen that rise but instead of reliably choosing it their system chooses the highest reading of the 60 per minute. This reading could have been influenced by electrical noise. It could be way higher than the 9:00:29 am temperature. The last reading of the minute at 9:00:59 could also have been influenced by noise. Averaging 60 readings over the minute causes the noise values to cancel each other out. A fast sampling rate with an appropriate average from a sensor with an appropriately LONG enough time constant solves both problems. The total system time constant needs to match the inglass. Matching just the thermometer ignores the noise problem. Taking readings too far apart prevents noise from being removed.

bdgwx
Reply to  siliggy
May 17, 2023 1:38 pm

I think I see what you’re saying. While the higher time constant of the instrument may suffice to abate the noise and uncertainty of the environment the fact that only 1 sample is taken from the data logger does not allow for the same abatement of noise due to the electronics. In that sense BoM might be advised to use a lower instrumental time constant and implement averaging in the data logger.

Reply to  bdgwx
May 17, 2023 3:57 pm

The fact that you have a recommendation that is different from the one being used is a perfect example of why climate science should move into the present and use proper science to obtain temperature profiles. It is like trying to make an automobile do the same thing that horse-drawn carriages did, just because of tradition. New technology, new applications, new and more accurate measurements. Holy crap, why aren’t digital voltmeters designed to emulate analog meters? We can now measure gravity waves! Why not sub-second temperature sampling and integrated results?

Reply to  Jim Gorman
May 18, 2023 3:49 am

Jim,
Absolutely correct.
When the Pt probes first surfaced in the 1990s, the first thought in this mind was “Beaut, now each probe can report 100 obs each day, so that an area under the response curve can be used to measure temperatures and their time trends, instead of a couple of obs a day because that was the historical way.”
But we still do not have an integrated approach unless it is used internally and not conveyed to the interested public.
Geoff S

Reply to  Jim Gorman
May 18, 2023 5:46 am

You mean like having your horse pull the automobile like a fancy carriage?

Reply to  bdgwx
May 17, 2023 5:17 pm

Oops reply was meant for bdgwx.

Reply to  bdgwx
May 18, 2023 5:52 am

Why average it at all? Just do an integration of the data and get degree-day values. Averaging can only get rid of Gaussian (symmetric) noise. I’ve used some of the fanciest software-defined radios of today. They can’t eliminate non-Gaussian noise like lightning pulses. They basically do what the old analog limiters and such did, they just do it in software.

Degree-days remove the problems with temperature data being a time series. You just add ’em up!

Move into the 21st century!
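A minimal sketch of the degree-day idea, with an assumed base temperature and a synthetic once-per-minute series: integrate the temperature excess over time rather than halving Tmin and Tmax.

```python
import numpy as np

def degree_days(temps_c, base_c=18.0, sample_minutes=1.0):
    """Heating and cooling degree-days by integrating a sampled temperature
    series, instead of using (Tmin + Tmax) / 2."""
    dt_days = sample_minutes / (60 * 24)       # sample spacing in days
    excess = temps_c - base_c
    cooling = np.clip(excess, 0, None).sum() * dt_days
    heating = np.clip(-excess, 0, None).sum() * dt_days
    return heating, cooling

# Assumed diurnal cycle sampled once per minute, for illustration only.
minutes = np.arange(1440)
temps = 22 + 8 * np.sin(2 * np.pi * (minutes - 360) / 1440)
hdd, cdd = degree_days(temps)
print(f"heating degree-days: {hdd:.2f}, cooling degree-days: {cdd:.2f}")
```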

Reply to  siliggy
May 17, 2023 3:04 pm

Lance, you say: “They record readings often enough to have seen that rise but instead of reliably choosing it their system chooses the highest reading of the 60 per minute.”

BUT THEY DON’T. They only use end-of-minute samples which are an average smoothed over 40–80s. Either you are misleading JM or she is misleading you (or both). In any event, it is bending reality that you and she continue with the fantasy that end-of-minute readings are instantaneous point-data.

On average, the machine averages over 40–80s, which covers one-half to two full 1-minute sampling cycles. I first raised the spike issue within the email group years ago and it still could be an issue. However, the BoM uses error-trapping at-source to dismiss spikes, and they have published rules for that.

Have a nice day.

Bill Johnston

Reply to  Bill Johnston
May 17, 2023 4:35 pm

SpammerBill is stuck in an infinite loop.

Reply to  karlomonte
May 17, 2023 5:13 pm

“SpammerBill is stuck in an infinite loop.”
Yes, same easily disproven arguments over and over, week after week. All anyone needs to do to see that the current value is not the same sample of the minute as either of the accumulating extremes is watch the all-of-state observation pages coming out of the BoM.

[attached image: Rundle.png]
Reply to  siliggy
May 17, 2023 8:40 pm

How can they record three different numbers at precisely the same time, Lance (which is noted as a full-minute value, not a by-second value)? Besides, it does not exclude that the number pertinent to this discussion is the number under the Temp column.

Have you asked the BoM what the other two values are and what they are used for?

What I found just now is below (see the last three lines, bolded and underlined just for you). The data you show is an extract of the full dataset, in about the same order as listed:

The following [10-minute] data are recorded (my emphasis in bold):

wmoid (WMO index number, normally a unique id, but can be missing), bomid (Bureau of Meteorology site name used to identify the observing site), stnname (observation station name), stnaltname (observation station name in title case), lat, lon, timeutc (datetime, UTC), timeloc (datetime, local time), apptemp (apparent temperature, degrees C), airtemp (air temperature, degrees C), dewpoint (dew point, degree C), mslpres (Mean Sea Level Pressure, hectopascals), relhum (relative humidity, %), winddirdeg (wind direction, degrees from N), windspdkmh (wind speed, 10 minute average from standard height of 10m, kmh) windspd (wind speed, 10 minute average from standard height of 10m, knots), gustkmh (wind gust measured over 3 seconds from standard height of 10 m, kmh), gustspd (wind gust measured over 3 seconds from standard height of 10 m, knots), viskm (visibility, km), rain (rainfall since 9 am local time, mm), rain24hr (rainfall in the last 24 hours before 9 am local time, mm), maxairtemp (maximum air temperature, degrees C between 6 am and 9 pm local time), minairtemp (minimum air temperature between 6pm and 9am local time, degrees C).

i.e. daytime degC max (reset at 6 am), and nighttime degC min (apparently reset at 9 pm the previous night). You could check with them for clarification, or see:

Surface Weather Observations (latest 10 Minute reading) | Bureau Data Catalogue (bom.gov.au)

Cheers,

Bill

Reply to  Bill Johnston
May 18, 2023 1:23 am

I think Lance has ducked under a magic mushroom, which leaves his un-researched claim unsupported.

Goodnight all,

Bill

Reply to  siliggy
May 17, 2023 3:06 pm

You are assuming that the “noise” is in different random directions and has just as many positive values as negative. This is not necessarily what occurs. The noise could easily cause a bias in just one direction.

We are also not dealing with absolutely linear devices. This can cause bias. I have been deficient in not examining solid state temperature devices. As you say, they are more like digital multimeters than analog LIG devices.

All this just makes me more confident in my position that a new set of protocols is needed for measuring surface temperatures when using sampling techniques.

We are at a point in time where brand new protocols can be used to derive better temperature measurements but it will require cutting ties with the past.

Reply to  Jim Gorman
May 17, 2023 4:30 pm

In that sense BoM might be advised to use a lower instrumental time constant and implement averaging in the data logger.”
Yes, yes, and that is exactly what the latest WMO guidelines recommend. You are quite correct that the instrument time constant needs to be lowered. It need only be enough to allow the other filtering(s) to do their job. Interestingly, I find two different figures with two different wind speeds at the WMO: in older publications they have 30s @ 5 m/s for T63.2, but in the newer it is 20s for T63, with the @ 1 m/s shown having a question mark on it in the Stephen Burt and Michael de Podesta paper you quote.

ferdberple
Reply to  Jim Gorman
May 18, 2023 7:32 am

it will require cutting ties with the past.
=======
Past methods can be changed while still making use of the old data.

(Tmin+Tmax)/2 was no problem when no one worried about 1/2 a degree. But today, when 0.001 degree is a certain forecast of Armageddon, it for certain introduces spurious trends.

Reply to  ferdberple
May 18, 2023 9:03 am

The uncertainty of the temperature measurement data is still far more than 0.001 degree. It remains at +/- 0.2C at best. The problem is that the uncertainty is never propagated in climate science.

Reply to  siliggy
May 18, 2023 4:49 am

Noise is eliminated if it is Gaussian (symmetric) around the average. Is the noise on a temperature signal always Gaussian? UHI impacts at an airport wouldn’t be, yet they would affect the average.

“Some people who live in the inverted reality cannot see this. The BoM did see this problem but did not come up with a solution, just another detour from reality. They record readings often enough to have seen that rise, but instead of reliably choosing it their system chooses the highest reading of the 60 per minute.”

Yep. Tevye in Fiddler on the Roof: “TRADITION”

Reply to  Tim Gorman
May 18, 2023 4:27 pm

“Noise is eliminated if it is Gaussian (symmetric)”
True, but it is also reduced by filtering or averaging even if it is not symmetrical. There is no reason to keep high frequencies. There is good reason to keep high sampling rates.

Reply to  Nick Stokes
May 16, 2023 4:23 pm

Hey Stokes—you do realize the thermal conductivity of steel is quite a bit different from SiO2?

Nick Stokes
Reply to  karlomonte
May 16, 2023 4:48 pm

It’s actually the heat capacity – thermal inertia – that matters. But either way, the result is a product of the specific property with the thickness, which they adjust.

Reply to  Nick Stokes
May 16, 2023 4:52 pm

Do you see the threads in that photo?

Duh!

Reply to  karlomonte
May 16, 2023 9:56 pm

Did you see where the small length of platinum wire was?

Duh!

Reply to  Bill Johnston
May 17, 2023 7:32 am

Bill chimes in, trying to pull the IPCC shill (i.e. Nitpick Nick Stokes) out of the quicksand.

Hey Bill, you’re the expert, are those 2-, 3-, or 4-wire RTDs?

Reply to  karlomonte
May 17, 2023 12:54 pm

The BoM use a four-wire system, but there will actually be five conductors, the probe being grounded to the cable shield.
Rather than the “heat capacity” being the only thing that matters, anything that can affect how long the probe tip takes to warm up or cool down matters. The thermal conductivity between the platinum and the probe tip is good compared to the connection between the air and the probe tip. Changing the diameter of the probe changes the surface area in contact with the air.

Reply to  karlomonte
May 17, 2023 3:18 pm

I don’t need to know. I can ask Lance. He is the analogue engineer in the outfit. Trust an expert. He says they are 4-wire.

Want a branch lopped off a tree without damaging the neighbour’s house? Go and ask an arborist. Want to know about probes? Ask Lance. While he might know how to do branches as well, he is not an arborist.

Hey karlomonte, I don’t claim to be an electrical engineer, so do you know what 4-wire means?

Reply to  Bill Johnston
May 17, 2023 4:33 pm

You forgot to spam 40-80s in this post.

HTH

Reply to  Bill Johnston
May 18, 2023 7:52 am

Let me give a short explanation.

2 wire -> The source is a constant voltage. The lead resistance and contact resistance are in series with the Rt. That resistance is fairly constant and reduces the overall sensitivity to temperature change.

3 wire -> The source is a constant voltage. The 3rd wire is used to compensate for the resistance of the leads and contacts.

4 wire -> The source is a constant current source and not a constant voltage source. The lead configuration along with a constant source almost eliminates lead and contact interaction, so the Rt gives the most sensitive readings. Please note “almost” means not entirely, because nothing ever matches perfectly in the real world.
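As a rough numerical illustration of why the wiring matters (hypothetical lead resistance, assuming a standard PT100 with roughly 0.385 ohm/C sensitivity):

```python
# Hypothetical numbers only: the apparent error that lead resistance adds to a
# 2-wire PT100 reading, and why 4-wire (separate force/sense pairs) avoids it.
R0 = 100.0        # PT100 resistance at 0 C, ohms
SENS = 0.385      # approximate PT100 sensitivity, ohms per degree C
R_LEAD = 0.5      # assumed resistance of EACH lead, ohms

# 2-wire: both leads sit in series with the element and read as extra ohms.
err_ohms = 2 * R_LEAD
print(f"2-wire lead error: {err_ohms:.2f} ohm = {err_ohms / SENS:.2f} C")

# 4-wire: current is forced down one pair while voltage is sensed on the other;
# the sense leads carry (almost) no current, so their resistance drops out.
print("4-wire lead error: ~0 C (sense leads carry negligible current)")
```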

Reply to  Nick Stokes
May 17, 2023 4:04 am

Really? So the Pt probe would work just as well if they were surrounded by a good thermal insulator rather than steel?

Reply to  Nick Stokes
May 17, 2023 7:06 am

The issue is not lagging or averaging.

An LIG in essence truncates temperature readings when in a changing environment.

Lagging implies a state of equilibrium where it takes a period of time for the measuring device to respond.

Averaging assumes that a period of time represents an average measurement and that the mean is an accurate temperature or at least emulates an LIG.

Neither accurately describes what an LIG does.

Reply to  Jim Gorman
May 17, 2023 12:59 pm

“Neither accurately describes what an LIG does.”
True, but the averaging is to deal with internally generated electrical noise as well as externally induced electrical noise. Altering the probe’s thermal properties only filters air-temperature noise; it allows the electrical noise to do damage, because that noise is not filtered at all. This is why the WMO specify it. The BoM, like Bill and Stokes, seem to be just clueless about this.


Reply to  siliggy
May 18, 2023 5:03 am

Respectfully, the term “averaging” is not truly appropriate for describing the filtering. Averaging only works if the noise causes as many ups as downs, and of the same value, i.e. Gaussian. Not all noise is Gaussian. Examples are shot noise or atmospheric noise; each can cause peaks and valleys around the average value that you cannot simply “average” away. UHI is not a symmetric “noise” that can be averaged away. Storm fronts (e.g. lightning) can cause noise at the frontal boundaries that is not symmetrical.

It would be better to just stick to the term “filtering”.

Reply to  Nick Stokes
May 16, 2023 4:25 pm

Nick, they now have 4 mm probes; you are showing the original/older versions. They have moved on to something a bit thinner and more responsive.

Reply to  Jennifer Marohasy
May 16, 2023 4:52 pm

He’ll stick to gaslighting with what he thinks proves his “point”.

bdgwx
Reply to  Jennifer Marohasy
May 16, 2023 6:20 pm

When did they change to 4 mm probes? I ask because the two Ayers publications discussed here used data for 2016 and afterward. Follow-up question: what are the time constants for those pictured above vs the 4 mm versions?

Reply to  bdgwx
May 16, 2023 6:49 pm

bdgwx, I’ve never been able to get the manufacturer’s time constants for any of the Bureau’s probes. If you could find out the answers to the good questions that you are asking, that would be grand.

bdgwx
Reply to  Jennifer Marohasy
May 17, 2023 11:01 am

This is apparently included in the BoM document Instrument Test Report 714. I’ve never been able to find it on the internet though. However, it is said to give T63 = 95 s for no airflow and T63 = 35 s for 3 m/s.

Cyberdyne
Reply to  Jennifer Marohasy
May 16, 2023 6:36 pm

Thank you Jennifer for your tenacity.

I’ve been lurking here for years; my opinion of Stokes is low. He is a “seagull manager”: someone who flies in, makes a lot of noise, defecates all around, and flies away without adding any value.

Reply to  Cyberdyne
May 16, 2023 10:00 pm

Nothing to add, but what a nasty thing to say, you lurker you …

Reply to  Bill Johnston
May 17, 2023 7:34 am

Bill again reveals his true orientation.

Reply to  Cyberdyne
May 17, 2023 3:55 am

ROFL!! I love it, what a description. I’m going to have to try and remember this one!

Loren Wilson
Reply to  Nick Stokes
May 16, 2023 8:51 pm

Having used platinum resistance thermometers in the lab for about 30 years, I don’t think the time constant of those shown above can produce the graph shown in the article. In other words, the electronics are quite noisy. These probes (if these are representative) cannot respond fast enough to show a +0.5°C increase and then back down in a matter of seconds when warmed only by air. Maybe not even if they were in a non-homogeneous liquid bath. Therefore, if those are the data provided by the voltmeter and data logger, that data logger is junk.

I would like to conduct a comparison of the same model of mercury in glass thermometer used for these measurements next to the PRT used for these measurements, using both good electronics and the electronics used in the stations in Australia. My previous lab had the capability to do so but I don’t work in that kind of lab any more. Of course, having field data observed every minute and logged every second for several hours on several days would be most informative.

Reply to  Loren Wilson
May 17, 2023 3:56 pm

Loren Wilson occasionally pops-in to these discussions, with both a cool head and wide experience.

Putting aside all the mis-information, here is a graph for the Bundaberg AWS. The top panel shows end-of-minute attenuated data through the daily cycle for that day (1440 samples).

I parsed the data into 6-minute segments and determined the maximum and minimum within each 6-minute segment (i.e. 0.1-hour intervals); these are shown in the bottom panel.
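For anyone wanting to replicate that parsing, a minimal sketch (with stand-in random data in place of the Bundaberg file):

```python
# A minimal sketch of that parsing, with stand-in random data in place of the
# Bundaberg file: 1440 end-of-minute values -> max and min per 6-minute segment.
import numpy as np

temps = np.random.default_rng(0).normal(20, 2, 1440)  # placeholder for real data

segments = temps.reshape(-1, 6)      # 240 segments of 6 minutes each
seg_max = segments.max(axis=1)       # per-segment maxima (bottom-panel series)
seg_min = segments.min(axis=1)       # per-segment minima
print(seg_max[:5].round(2), seg_min[:5].round(2))
```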

Given that each value is attenuated (not an instantaneous spike) I have no reason to believe that the data DO NOT reflect the air moving through the screen. This is the temperature of the fluid as sensed by the PRT-probe, as turbulence increases through the day, then drops away, leaving a skewed distribution.

From the data in the top graph, the BoM places dry-bulb readings on the internet at a refresh rate of 10 minutes. Of those, the 60-minute readings are accessed by news services that tell you what the T is at your nearest weather station at that time.

Meanwhile back at the AWS, the same 1-min values are used to determine the highest and lowest values within the 24-hour period. These roll through (as Lance has noted) to become the max and min for the day. So from those 1440 samples, they picked the highest and lowest (two values from 1440 samples) and they went into the database as Max and Min.

The essential argument is whether those two values plucked from noisy data, are true values for the day. The second question flowing from that is whether small discrepancies make any material difference.

No-one can actually ‘feel’ a T-change at the scale the instrument is capable of reporting. Obviously this is a different situation to dunking a thermometer in a water bath, and nothing like a carefully controlled lab. However, the questions should guide the debate, not tribal warfare.

So what is the next question?

Thank you for your considered input, Loren Wilson.

All the best,

Bill Johnston

(I’ve got an arborist coming to scoot up the tree. So karlomonte, I’ll ask him about the wires!)

[Attachment: Bundaberg_28 Feb.JPG]
ferdberple
Reply to  Bill Johnston
May 18, 2023 7:48 am

These two graphs from Bill Johnston are very revealing.

My thought is that the device has noise, but it cannot be seen when the trace is vertical.

This says sampling is not enough. You need to process the noise, otherwise the readings will appear more extreme than with mercury thermometers.

But maybe that is the desired result.

Reply to  ferdberple
May 18, 2023 1:22 pm

ferdberple,

I see no evidence that the “device has noise”. The device is doing its job and picking up noise in the air being monitored. Because the air is constantly in a state of flux, and conditions inside the screen are turbulent vertically and horizontally at all time scales, increasing the sampling rate converts more noise, or variance, into signal. The problem then becomes moderating that noise into a usable signal.

This underpins JM’s thesis (she wants more averaging) and is the underlying message in the Ayers and the Ayers and Warne papers. The papers say the signal produced by the probes is sufficiently attenuated that the probes meet WMO guidelines.

I have read both papers and can find no real problems with the methods used and the conclusions they drew.

To be clear, I have never worked for the Bureau or CSIRO. However, what will probably happen is that Ayers or someone else will publish a rebuttal of what has been claimed by Marohasy and Lance (and some of the commentary here), and the Guardian, the lefty press and the ABC will have a field day. Her ‘science-cred’ can be called to account too.

She and others have gone too far out on a limb. I’ve privately warned her (and the IPA) of that risk, and if the BoM et al. respond, much progress in the overall debate will be lost.

There are too many inaccuracies and errors in what she and Lance claim, and as a consequence, she has jeopardised much more than she is capable of realising.

Yours sincerely,

Bill Johnston

Reply to  Bill Johnston
May 18, 2023 4:35 pm

I see no evidence that the “device has noise”.

Here is the start of your problem.

she has jeopardised much more than she is capable of realising.

More clown show.

Reply to  Bill Johnston
May 18, 2023 6:35 pm

I’m sorry you feel this way. What JM has brought to the forefront is that new technology has been installed without adequate thought of what and how it should be used. It appears the default purpose, at best, is to make it look exactly like an LIG. What a waste of money. What a lack of foresight for what could be done with correct protocols and procedures.

Reply to  Nick Stokes
May 16, 2023 9:44 pm

“Averaging, by its very nature smooths: removing peaks and troughs” says Jennifer Marohasy. She found for Canberra airport “that averaging the last one-second of each minute always gave me a lower maximum temperature”. Well of course it would. The end-of-minute 1-second sample is already smoothed: it is actually an average smoothed over 40–80s, as I have said before.

 
Just a small thing. But if Jennifer replaced every mention of “1-second reading” with “an average smoothed over 40–80s”, the whole flavour of this post would change, just like that.
 
Or even if she read and tried valiantly to understand the second sentence of the Ayers paper viz:“The Bureau has explained this as reasonable because automatic weather station (AWS) temperature systems have a response time that means each measurement is not instantaneous, but an average smoothed over 40–80s (Bureau of Meteorology 2017)“.
 
As dear old cocky said, “The 5-point average was used in the earlier Ayers paper.” Ayers and Warne did average the available set of 1 s readings in each minute, meaning they actually did the averaging. Bit of a spoiler, eh?

Bit like the missing Stevenson screen at Goulburn airport (nice picture, Lance looks so well and happy), the non-existent overlap data at Wilsons Prom, Cape Otway and Rutherglen, and the blooper about paired t‑tests … but we do have faith in your story-telling.  
 
Nevertheless, and it could be a spoiler for some of my earlier comments, I have not seen a reference for Nick Stokes’ assertion that “they use the highest of 60 for max, the lowest of 60 for min”. My reading, tracking of incoming data, and looking at flashcards used by Lance Pigeon, lead me to believe all Max and Min values are end-of-minute values and are only updated when another end-of-minute value is higher/lower. Sometimes some of the values get culled on QA as outliers; if so, they eventually show up as missing.
 
Being such a small thing, one she would probably ignore anyway: the mean is still the mean of values that have been smoothed over 40–80s, which means over between one-half and more than one full cycle. This also means that, as each measurement is not instantaneous but an average smoothed over 40–80s, a running mean intended to smooth out the peaks in line with guidance provided by the WMO (see the bottom paragraph on Ayers p. 172) is achieved de facto by the data being smoothed over 40–80s.
 
The second paper (Ayers and Warne) was about response times, and it used the full 1 Hz dataset for Darwin and Noarlunga, which was probably collected for the purpose. The JM dataset seems to be the same level of detail as the Ayers dataset, but she has not made clear whether she parsed the maxima, minima or end-of-minute values, all of which (for karlomonte’s information) are attenuated values smoothed over 40–80s anyway.

(No fish, too much water!)

All the best,
 

Bill Johnston

http://www.bomwatch.com.au
 

old cocky
Reply to  Bill Johnston
May 17, 2023 1:11 am

(No fish, too much water!)

I wondered how you’d got on.
I trust it was a pleasant break in the fresh air away from The Interwebs, anyway.

Reply to  Bill Johnston
May 17, 2023 7:35 am

Back to spamming your cherished 40-80s smoothed idol?

fansome
May 16, 2023 3:14 pm

Introduce an R-C circuit between the probe and the A/D converter. The RC time constant should be 1-minute. That will act as an averaging circuit and remove those oscillations. The R-C circuit will mimic the thermal inertia of a mercury thermometer.

Input —> R —+—> Output
            |
            C
            |
Grnd ———————+
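For what it's worth, a sketch of what such a filter amounts to (illustrative component values; the discrete part assumes the filter behaves as simple exponential smoothing):

```python
# A sketch under stated assumptions: illustrative component values for a 60 s
# RC time constant, plus the filter's discrete equivalent (exponential smoothing).
import math

R = 10e3               # 10 kohm, kept low-ish to limit Johnson noise
C = 60.0 / R           # tau = R*C = 60 s  ->  C = 6000 uF (a big capacitor!)
print(f"C = {C * 1e6:.0f} uF for tau = 60 s")

tau, dt = 60.0, 1.0    # time constant and 1 Hz sample interval, seconds
a = math.exp(-dt / tau)

def rc_filter(samples, y0=0.0):
    """Discrete first-order low-pass: y[n] = a*y[n-1] + (1-a)*x[n]."""
    y = y0
    for x in samples:
        y = a * y + (1 - a) * x
        yield y

# Response to a 1 C step: after one time constant (60 s) the output has
# covered ~63.2% of the step, which is the T63 figure quoted upthread.
out = list(rc_filter([1.0] * 120))
print(f"after 60 s: {out[59]:.3f} (expect ~0.632)")
```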

Eng_Ian
Reply to  fansome
May 16, 2023 3:36 pm

The capacitor should be on the other side of the resistor. The location is important.

If, for example the instrument had a very low source resistance, it could quickly load up or unload the voltage on the capacitor. By placing the capacitor on the other side of the resistor, the instrument can no longer quickly drain, nor charge, the capacitor.

fansome
Reply to  Eng_Ian
May 16, 2023 4:30 pm

WordPress removed all of the spaces that put the capacitor downstream of the resistor.

Reply to  fansome
May 16, 2023 7:38 pm

The RC filter before the ADC but after linearization is a good plan that can work well, but it needs to keep the resistor value low to keep the Johnson noise (√(4kTBR)) low. This means a very big capacitor, but it must be very low loss. The method is WMO-approved. One problem, however, is that the thermal time constant of the thermometer changes with wind speed and humidity; the RC filter will not. But it is still good for getting rid of high frequencies.
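For scale, a quick check of that Johnson-noise formula at a few resistor values (assuming room temperature and a 1 Hz measurement bandwidth):

```python
# A quick check of the Johnson-noise figure quoted above, v_n = sqrt(4*k*T*B*R),
# assuming room temperature and a 1 Hz measurement bandwidth.
import math

k = 1.380649e-23    # Boltzmann constant, J/K
T = 300.0           # assumed temperature, K
B = 1.0             # assumed bandwidth, Hz
for R in (100.0, 10e3, 1e6):
    v_n = math.sqrt(4 * k * T * B * R)
    print(f"R = {R:>9.0f} ohm -> {v_n * 1e9:7.2f} nV rms")
# Roughly 1.3 nV at 100 ohm up to ~129 nV at 1 Mohm: keeping R low keeps the
# noise floor small compared with the sensor signal.
```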

Reply to  siliggy
May 17, 2023 4:17 am

Why is getting rid of high frequencies good? High frequencies define rapidly changing things, such as temperature at sunrise and sunset. The temperature curve at night is approximately an exponential decay. I.e. it has numerous high frequency components. Why would you want to eliminate them?

A low-pass filter doesn’t do averaging. If averaging is what you are looking for you need a different method.

Reply to  Tim Gorman
May 17, 2023 1:16 pm

Getting rid of high frequencies is not just “good”; it is required to be even close to accurate.
Here are three reasons. There are more.
1) The in-glass thermometers could not measure high frequencies, so they need to be removed from the real air-temperature fluctuation to be compatible with old records from the in-glass thermometers.
2) Most high frequencies did not come from real air temperatures at all. Measuring them is measuring something else. Possibly some drunk in the car park mouthing off on a CB radio.
3) If you get rid of high frequencies you improve accuracy. It is not hard to see that if one reading was 35 and the next was 36, the real temperature is likely to have been 35.5, the first reading having a noise content of -0.5 and the second a noise content of +0.5.

Reply to  siliggy
May 18, 2023 5:13 am

“Getting rid of high frequencies is not just ‘good’ it is required to be even close to accurate.”

No, not all high frequencies are noise. High frequencies are what makes a square wave a square wave. High frequencies are what define an exponential or polynomial curve.

Think about a DC power supply that creates the DC from the standard AC power supply. With a good capacitor filter you only get the DC component of the signal, all the higher frequencies are lost – i.e. the frequency of the AC signal itself.

If the temperature decay at night is exponential it is imperative to capture the frequencies that create that exponential curve or your reading will not be accurate. The same for the polynomial curve during the daytime.

The faster you can sample the more accurate your representation will be when high frequencies exist. It’s why the bandwidth of oscilloscopes is so important.

Reply to  Tim Gorman
May 18, 2023 10:13 am

“The faster you can sample the more accurate your representation will be when high frequencies exist.”
Higher sampling rates are needed; the higher the better. Yes, faster sampling rates are good, you get a tick for that. You, however, mix up sampling rates and filtering again and again, like Bill and his 40–80s fetish. Slower sampling rates still record the components of unwanted high-frequency noise and of wanted high-frequency signal. Most of the unwanted noise is at high frequencies, but sampling fast or slow still records it; with slow sampling it appears as aliasing.
Look up aliasing, then think about what it does to the things listed here.
https://www.analog.com/en/technical-articles/managing-noise-in-the-signal-chain-part-1-annoying-semiconductor-noise-preventable-or-inescapable.html
Longer time constants are also required, and they are filtering. Not only can you have long time constants and high sampling rates, but things work better with that combination.
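A minimal demonstration of the aliasing point (synthetic tones, not temperature data): a component above the Nyquist frequency does not vanish at a slow sample rate; it folds down and masquerades as a lower frequency:

```python
# A minimal aliasing demonstration with synthetic tones: a 0.6 Hz component
# sampled at 1 Hz (Nyquist 0.5 Hz) does not vanish; its samples are identical
# to those of a phase-flipped 0.4 Hz tone, i.e. it folds down in frequency.
import numpy as np

fs = 1.0                              # sample rate, Hz
t = np.arange(0, 600, 1 / fs)         # 10 minutes of samples
x = np.sin(2 * np.pi * 0.6 * t)       # "noise" above the Nyquist frequency

alias = np.sin(2 * np.pi * (fs - 0.6) * t)                # a 0.4 Hz tone
print(f"max |x + alias| = {np.abs(x + alias).max():.1e}")  # ~0: x == -alias
```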

Reply to  fansome
May 16, 2023 4:25 pm

Blindly inserting an RC filter won’t work because of the way the resistance of a Pt RTD must be measured.

Reply to  karlomonte
May 16, 2023 7:47 pm

Nobody would do it blind of that. Read my comment above: it is a good way to do it, but the filter ends up being very big and expensive and can introduce more noise.

Reply to  siliggy
May 16, 2023 9:01 pm

You still have to apply an excitation directly to the resistor and measure the voltage across it. The extra load of a 1-pole network on the resistor will change its calibration—in other words, you are no longer measuring the RTD resistance.

Reply to  karlomonte
May 16, 2023 9:14 pm

The ADC input impedance is in series with the R of the RC filter and is very, very high. The value of the resistance is also fixed, so it can be allowed for by calibration. Much easier to deal with than the Callendar–Van Dusen equations and self-heating.
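For reference, the standard (IEC 60751) Callendar–Van Dusen relation mentioned here, inverted for temperature above 0 C:

```python
# For reference: the IEC 60751 Callendar-Van Dusen relation for a PT100 above
# 0 C, R(T) = R0*(1 + A*T + B*T^2), inverted for T via the quadratic formula.
import math

R0, A, B = 100.0, 3.9083e-3, -5.775e-7   # standard coefficients

def pt100_temp(R):
    """Temperature (C) of a PT100 with measured resistance R (ohms), T >= 0 C."""
    return (-R0 * A + math.sqrt((R0 * A) ** 2 - 4 * R0 * B * (R0 - R))) / (2 * R0 * B)

print(f"{pt100_temp(100.00):.2f} C")   # ~0.00
print(f"{pt100_temp(107.79):.2f} C")   # ~20.0
```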

Reply to  siliggy
May 16, 2023 9:51 pm

Why not just do it in software? A lot easier, and I still don’t believe you aren’t altering the resistance measurement.

Reply to  karlomonte
May 17, 2023 5:34 am

How do you average a sinusoidal or exponential signal using a low-pass filter?

Reply to  Tim Gorman
May 17, 2023 2:20 pm

Over time. It works like a running average. I can ask you the same back: how can you not average when using a low-pass filter? That is what they do. It is how they work. It is what they are for, although some may prefer to call it integration.
Time-constant filtering is common to both the thermometer and the RC circuit. Both will average like this in the piccy.
Yellow is the source temperature. For this, imagine you just picked up two thermometers from a room at 41.2 degrees and took them into a room at 40, closing the door behind you. One is rated at 20 s, the other at 40 s.

[Attachment: Bourke63.png]
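The crossing-time arithmetic behind that picture can be checked directly (assuming ideal first-order exponential sensor responses):

```python
# The crossing-time arithmetic behind that picture, assuming ideal first-order
# (exponential) sensor responses with 20 s and 40 s time constants.
import math

T0, T1 = 41.2, 40.0     # room you left, room you entered (degrees C)
for tau in (20.0, 40.0):
    # T(t) = T1 + (T0 - T1) * exp(-t / tau); solve for the 40.6 C crossing.
    t_cross = -tau * math.log((40.6 - T1) / (T0 - T1))
    print(f"tau = {tau:4.0f} s reads 40.6 C at t = {t_cross:5.1f} s")
# ~13.9 s vs ~27.7 s: the shorter "running average" reports the change earlier.
```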
Reply to  siliggy
May 17, 2023 4:30 pm

A low pass attenuates high frequency signals, it doesn’t average, it can’t. Those two curves are NOT averages of the yellow—how do you know when to take a reading from the red or blue and get 40.6? You can’t know this in advance.

You seem to have a different definition of the word “average”.

Reply to  karlomonte
May 17, 2023 6:06 pm

The red and blue lines are both moving at an exponential rate from the old average to the new average. As I said, it is like a running average. “How do you know when to take a reading from the red or blue and get 40.6?” Look at the chart. A 20 s time-constant average crosses the 40.6 line earlier and reports that value earlier, because its running average is shorter.

Reply to  siliggy
May 18, 2023 8:57 am

Remember you are dealing with equilibrium also. What does that curve look like when you walk from one to the other through a gradient from one to the other and then reverse the process? Nothing is easy in the physical world.

Reply to  siliggy
May 18, 2023 5:58 am

Why is the output of a capacitor-filtered AC signal equal to 0.707 of the max value, while the average value of an AC signal is 0.63 of the max value?

The RC filter removes the high frequencies, leaving only the DC component. The DC component is not necessarily the same as the average value. Nor is the average value the actual value of interest; the maximum value is.

A low-pass filter doesn’t average. 0.707 is not equal to 0.63.

Reply to  Tim Gorman
May 18, 2023 10:23 am

“Why is the output of a capacitor filtered AC signal equal to .707 of the max value while the average value of an AC signal is .63 of the max value?”
The question is wrong. The average of an AC sine wave signal is zero. You are mixing up RMS AC and rectified DC.

Reply to  karlomonte
May 17, 2023 1:52 pm

“Why not just do it in software?”
Why not do both analogue and digital filtering?
Doing it in software is much better than not doing it, but one of the reasons for analogue filtering is to stop the noise from causing ADC problems. Have a look at quantisation error.
“I still don’t believe you aren’t altering the resistance measurement.”
This is what calibration tests are for. Static, unchanging errors like that are detected by ice-bath and thermowell-oven tests etc. Here is a diagram from WMO No. 8; I added the red line. Things above the red line are inside the Stevenson screen. Things below are in the data logger.

[Attachment: flowdia.png]
Reply to  siliggy
May 17, 2023 7:47 am

Ultimately trying to emulate an LIG is a waste of time. Some time in the future, 20 – 30 years, there will HAVE to be a reckoning where old LIG data is declared not fit for purpose! Why not do it now?

Reply to  Jim Gorman
May 17, 2023 2:28 pm

For forecasting I think you are correct. Why wait? Proving that the planet has not warmed, however, would need to wait. Another couple of hundred years or so of being imprisoned by the green police. Being compatible with past records means that homogenisation can be shown to be wrong, now.

Reply to  siliggy
May 18, 2023 5:21 pm

To explain something I said above a little more clearly:
The root mean square of a sine wave is 0.707 of the peak.
The average of a sine wave is zero.
The average of either the positive half-cycle, the negative half-cycle, or both after rectification is 0.637 of the peak.
That is why I said the question “Why is the output of a capacitor filtered AC signal equal to .707 of the max value while the average value of an AC signal is .63 of the max value?” is wrong. It misses the reality that the RC filters described move values toward the average, typically for DC power supplies etc.
“The DC component is not necessarily the same as the average value.” Oh yes it is.
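Those figures are easy to verify numerically (unit-amplitude sine, one full cycle):

```python
# Numeric check of those figures for a unit-amplitude sine wave (one full cycle).
import numpy as np

t = np.linspace(0, 1, 1_000_000, endpoint=False)
s = np.sin(2 * np.pi * t)

print(f"mean over full cycle : {s.mean():+.4f}")                 # ~0
print(f"RMS                  : {np.sqrt((s ** 2).mean()):.4f}")  # ~0.7071
print(f"rectified mean       : {np.abs(s).mean():.4f}")          # ~0.6366 = 2/pi
```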

Reply to  siliggy
May 18, 2023 9:23 pm

“The DC component is not necessarily the same as the average value.” Oh yes it is

The average depends on the t0 and t1 values over which the summation is performed (the averaging period).

This is only true if the averaging period is exactly an integer multiple of the signal period.

Reply to  karlomonte
May 19, 2023 7:08 am

The DC component is ALWAYS the same as the average. What you are considering is AC component that has not been completely filtered out. A running average with a time constant that is too short will not attenuate enough to filter it out. If there is a large enough capacitor in the RC circuit the “Signal Period” is filtered out. Regardless of being an integer multiple or not. A low pass filter that is set to be much lower than that “signal period” is all you need to average it out.

Reply to  siliggy
May 19, 2023 7:21 am

You need to specify how this average is performed.

My comment had nothing to do with using analog low pass filters.

Reply to  siliggy
May 19, 2023 8:06 am

I’ll ask again. Where does this “filter” attach in a 4-wire system?

It is important to not upset the Wheatstone bridge if accurate measurements are to be made.

Reply to  Jim Gorman
May 19, 2023 8:12 am

They also need to specify exactly what the averaging is intended to accomplish.

Reply to  karlomonte
May 19, 2023 9:32 am

It can only be to approximate an LIG. What a waste of valuable information!

ADCs can operate in the tens of megahertz, so it is not an equipment problem.

The data should be read and stored such that an accurate integration can show the temperature distribution throughout the day.

People will just need to become comfortable using deg•sec, deg•min, deg•hours, or deg•days!

The real key is someone is going to have to decide on an appropriate baseline temperature. Won’t that be fun!

Reply to  Jim Gorman
May 19, 2023 11:31 am

What Bill J. has been ranting about over and over makes no sense at all—what is the purpose of picking off the max and min for each minute, but then only using the very last reading? Then this “40-80s smoothed averaged” stuff, does this mean the 1-minute max, min, and last values come out of a running 40-80s average process? I can’t even imagine what this is supposed to accomplish. When anyone tries to pin him down, he goes off and tells them to “read the papers”.

The title of JM’s post is “Averaging Last Seconds Versus Bureau Peer Review”; if this is what is done, why bother with the max and min?

Reply to  karlomonte
May 19, 2023 12:57 pm

From this Australian Government site.

http://www.bom.gov.au/climate/map/heating-cooling-degree-days/documentation.shtml

• “If cooling is being considered to a temperature BASE of 24 degrees, and if the average temperature for a day was 27 degrees, then cooling equivalent to 3 degrees or 3 CDDs would be required to maintain a temperature of 24 degrees for that day. ”

Think about why real engineers no longer want to use Tmax, Tmin, and Tavg.

You don’t know how many hours the temps exceeded the base! Only that there was a short period where the average exceeded the base. No idea if Tmax lasted 15 min or 6 hours. So CDD is not based on anything but ONE temperature measurement for Tmax and ONE temp for Tmin.

No idea what the temperature profile actually is! How do you engineer anything other than by the seat of your pants?
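A small numeric sketch of that point (hypothetical hourly temperatures): two days with the same daily mean, hence the same average-based CDD, but very different hours above the base:

```python
# Hypothetical temperatures: two days with the SAME daily mean (25 C), hence
# the same average-based CDD, but very different hours above a 24 C base.
import numpy as np

base = 24.0
day_mild = np.full(24, 25.0)                              # 25 C for all 24 hours
day_spiky = np.full(24, 23.0); day_spiky[12:16] = 35.0    # 4-hour 35 C spike

for name, day in (("mild", day_mild), ("spiky", day_spiky)):
    cdd_avg = max(day.mean() - base, 0.0)               # average-based daily CDD
    cdd_int = np.clip(day - base, 0, None).sum() / 24   # hour-by-hour CDD
    print(f"{name:5s}: mean={day.mean():.1f} C  CDD(avg)={cdd_avg:.2f}  "
          f"CDD(integrated)={cdd_int:.2f}  hours>base={(day > base).sum()}")
# Identical means and average-based CDDs, yet the cooling loads differ; that is
# why the hour-by-hour integration matters.
```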

Don’t even start about farmers choosing crop varieties based on Growing Degree Days.

Why do people like Judith Curry, Ryan Maue, and yes, The Farmers Almanac prosper by letting folks know this information? Climate Scientists in academia only want to forecast doom and gloom but nothing useful.

Reply to  karlomonte
May 17, 2023 7:41 am

Where would you put it in a four lead device anyway?

As Tim said, some of these methods would obviate the need for 1-second readings. You just can’t have both fast response AND averaged readings over a long period of seconds.

Folks need to give up on emulating LIG thermometers anyway. We are going on 20 or 30 years of digital measurements. Whatever their time resolution is, there is enough data to integrate it to find daytime and nighttime temps. (Notice I didn’t say daily average.)

I know the statisticians amongst us want to continue “long” records so they can justify millikelvins as you put it. Too bad!

Reply to  Jim Gorman
May 17, 2023 12:57 pm

Exactly what I tried and failed to point out.

I don’t think Stokes or anyone from the rest of the crowd knows the difference between 2-, 3-, and 4-wire RTDs.

Reply to  fansome
May 16, 2023 6:11 pm

An RC circuit like you show is a low pass filter, not an averaging circuit.

Reply to  Tim Gorman
May 16, 2023 7:51 pm

Filters do average, but it is an exponential-delay type of filter. It slows the rates of rising and falling edges, so it has most effect at the start of a change and little effect near the end of the integration.

Reply to  siliggy
May 17, 2023 5:57 am

A low-pass filter can’t show rates of rising or falling edges, it filters out the high frequencies which define those edges.

The average of an exponential signal e^at over the interval 0 to t is (1/t) times the integral of e^aτ from 0 to t, which works out to (e^at - 1)/(at).

How does a low-pass filter do this integration?

Response time for a thermometer is just like the response time of an oscilloscope. It’s why the oscilloscope can’t perfectly display the start and end of a square wave. The start and end of a square wave appear as rising and falling exponential curves. If that square wave is high enough in frequency the scope can’t even properly display it because the whole period falls within the response time of the scope. The voltage begins to fall off before the scope has even fully responded to the rising part of the wave.

The same thing happens in a thermometer with thermal inertia and a non-infinite response time. If the temperature is changing faster than the response time multiplied by 5 then the thermometer will never adequately display it. It can’t even average it because it can’t keep up with the rate of change!

Go read up on the Nyquist-Shannon sampling theorem. Your sampling rate must be at least twice the highest frequency in the signal in order to provide a complete description of the signal. The daytime temp is typically described as a sine wave but it really isn’t. It is better described as a higher-order polynomial. – meaning higher frequency components. If it was a pure, single frequency sine wave then Tmin and Tmax (i.e. two samples) would be all you would need to define the signal. But that’s just an easy approximation to make which doesn’t actually match reality.

No amount of low-pass filtering is going to give you the average value of an exponential or polynomial signal. It would help identify any DC component in the signal but I don’t know anyone that considers atmospheric temperature to be a DC signal with an AC component overlaid.

Reply to  Tim Gorman
May 18, 2023 10:40 am

Oh dear. You are arguing against your own poor reading.
“A low-pass filter can’t show rates of rising or falling edges, it filters out the high frequencies which define those edges.”
If you had read correctly, I typed “slows”, not “show”:
“Filters do average but it is an exponential delay type filter. It slows the rates of rising and falling edges.”
Quite familiar with Shannon–Nyquist, thanks.
See my comments here from 2019 posts on this.
https://joannenova.com.au/2019/02/adjusted-another-degree-shaved-off-darwins-history-at-this-rate-in-50-years-darwin-wont-even-be-tropical/#comment-2109998

Here is the 1948/9 paper.
https://fab.cba.mit.edu/classes/S62.12/docs/Shannon_noise.pdf

old cocky
May 16, 2023 3:29 pm

now records the highest, lowest, and last second of each minute and reports the highest second as the daily maximum temperature. Back in 2019 I purchased some of this data to test the Bureau’s claim that averaging the data would make no difference.

after purchasing batches of daily one-second data for Canberra, Adelaide and Melbourne from the Bureau,

Are these the same data sets, or different?
Highest, lowest & last are of limited utility. With readings every second, the analytical sky is the limit. Well, those plus parallel daily LiG readings.

Reply to  old cocky
May 16, 2023 4:37 pm

Old Cocky,

The only data sets that can be purchased, and that are the same data set used in Ayers and Warne, show highest, lowest and last second in each minute.

That is the beginning and end of it. :-(.

To repeat myself, this is the same formula used in/by Ayers and Warne.

The Bureau absolutely refuse to reprogram to collect each second in every minute, which is what should have at least been done if Ayers and Warne was to get through peer-review. What they actually published is a fake because it is not even a test/comparison of one minute averaging.

The every-second-in-each-minute dataset does NOT exist for the Bureau. And so there is no averaging, as recommended by the WMO.

I will attach an example of what the Bureau will provide for payment. Attached is for Mildura, but I didn’t actually pay for it, because the request came via Josh Frydenberg who was at the time the Minister.

What I really wanted was all 60 seconds in each minute and, most importantly, also the reading from the mercury thermometer at 9 am and 3 pm for that day. I didn’t get either of those.

They truly are Jokers, at the Bureau.

[Attachment: oneSecMildura.png]
old cocky
Reply to  Jennifer Marohasy
May 16, 2023 4:47 pm

The only data sets that can be purchased, and that are the same data set used in Ayers and Warne, show highest, lowest and last second in each minute. These are the same two referred to, that you quote above.

That is the beginning and end of it. :-(.

To repeat myself, this is the same formula used in/by Ayers and Warne.

The Ayers and Warne paper says that the earlier Ayers paper used the 5-point calculations, but the Ayers and Warne paper had 60 1-second figures for each site during the periods analysed. Well, apart from missing readings.

This may have been specially collected for the purposes of the paper, which is certainly an advantage of being part of the organisation.
Alternatively, I may have misunderstood what they did.

The rather limited spatial and temporal coverage of A&W tends to support this data being collected especially for the paper.

The lack of a large set of 1-second data severely limits the analysis which can be performed, which is extremely unfortunate.

Nick Stokes
Reply to  old cocky
May 16, 2023 6:18 pm

“Alternatively, I may have misunderstood what they did.”

No, you’re right. I think Jennifer encountered a typo in that paper, declared it fake, and stopped reading.

Here is an example from the paper, showing their 60s running mean from 1 Hz data vs the BoM practice of taking the last in each minute.

[Embedded image from the paper]
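For readers without the paper, here is roughly what such a comparison looks like on synthetic data (illustrative numbers only, not the paper's data):

```python
# Roughly what such a comparison looks like on synthetic data: a 60-sample
# running mean over 1 Hz readings vs keeping only the last second of each minute.
import numpy as np

rng = np.random.default_rng(1)
one_hz = 20 + 0.5 * np.sin(np.arange(3600) / 600) + rng.normal(0, 0.2, 3600)

running_mean = np.convolve(one_hz, np.ones(60) / 60, mode="valid")  # 60 s mean
last_of_minute = one_hz[59::60]                                     # spot samples

print(f"std of 1 Hz readings       : {one_hz.std():.3f}")
print(f"std of 60 s running mean   : {running_mean.std():.3f}")
print(f"first last-of-minute values: {last_of_minute[:5].round(2)}")
```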

Reply to  Nick Stokes
May 16, 2023 9:27 pm

Are we to believe that ragged line is accurately representing the temperature of a well-mixed parcel of gases? It looks to me like there is noise superimposed on the temperatures.

Has anyone compared the variance in the temperature on a calm (no wind) day and a day with steady wind and a day with gusty wind?

Reply to  Clyde Spencer
May 16, 2023 9:52 pm

Climate science don’t do variance.

Reply to  Clyde Spencer
May 17, 2023 8:08 am

Let me add that this graph demonstrates the need for integrating the readings to obtain a more accurate depiction of temperature. The variance here is so high that a Tmax can’t help but have a large standard deviation.

Reply to  Jim Gorman
May 17, 2023 8:50 am

Read that term “variance” as “uncertainty”, which means that trying to calculate differences down to the hundredths digit is a fool’s errand.

old cocky
Reply to  Nick Stokes
May 17, 2023 2:19 am

Do you know if the data for the Ayers and Warne paper is available online?
The paper doesn’t appear to have a link to archived data.

bdgwx
Reply to  old cocky
May 16, 2023 6:30 pm

That’s how I read the Ayers & Warne paper as well. They even say in the abstract: “To test this proposition in the field air temperature data were measured at 1-Hz at two Bureau AWS sites between April and June 2018.”

Reply to  Jennifer Marohasy
May 16, 2023 10:10 pm

“and that are the same data set used in Ayers and Warne”

No. Confused again. Ayers and Warne used 1-second data, each value being smoothed over 40–80s.

You still don’t get the importance of reading the paper and getting up close with the data.

Just helping,

Dr Bill

harryfromsyd
Reply to  Bill Johnston
May 16, 2023 11:03 pm

Given the second-by-second data varies by quite a bit, the real temperature must be going quite mad if the numbers being collected are actually after a 40–80 second smoothing. If the 80 s smoothed temperature is recorded as 30C at a point in time, and 30.4C one second later, how much would the air temperature have to have changed to cause that?
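Taking the question at face value for an idealised first-order smoother (an assumption; the actual BoM signal chain is more complex), the implied air-temperature step works out as follows:

```python
# harryfromsyd's question, worked for an idealised first-order smoother (an
# assumption; the real BoM chain is more complex): how big an air-temperature
# step moves an 80 s exponential average by 0.4 C in one second?
import math

tau, dt = 80.0, 1.0
frac = 1 - math.exp(-dt / tau)     # fraction of a step passed in one sample
print(f"one-second gain {frac:.4f} -> implied step ~{0.4 / frac:.0f} C")
# ~32 C in a single second, which is physically absurd; hence the question.
```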

Reply to  harryfromsyd
May 17, 2023 12:39 am

Dear harryfromsyd,

Temperatures measured in Stevenson screens during the day, or outside under a tree, change second by second. Except during the dead of night it is barely ever “still”. Rapid-sampling instruments are able to sample that, while thermometers can’t. (See the linked image.)

The engineering issue has been to make probes behave like thermometers. The papers referred to by JM show that by producing attenuated values smoothed over 40–80s, the Bureau has achieved that. Furthermore, their PRT probes are consistent with WMO guidelines.

Yours sincerely,

Dr Bill Johnston

http://www.bomwatch.com.au

harryfromsyd
Reply to  Bill Johnston
May 17, 2023 1:09 am

I thought the Ayers paper showed this level of variance from 1 s sampling from the probes you claim are smoothed over 80 s.
Your response seems to be “it’s ok because it’s ok”.

Reply to  harryfromsyd
May 17, 2023 2:59 am

Dear harryfromsyd,
 
Did I say that? Did he say that? Did I say that he said that?
 
Who said that?
 
Quick, look under the bed! Maybe it was Jennifer’s analogue-engineer whisperer-joker! Just joking!
 
I have the Ayers paper in front of me in real-time. Linking his findings with the objective of the study, Ayers said in the final sentence of the Abstract (in his poshie CSIRO style) that “thermal inertia in the AWS measurement systems ensured that its 1-s data represented averages over the prior 40–80 s, providing a 1-min average of air temperature in accord with World Meteorological Organization requirements”.
 
You can check, Jen can check. Even Readfearn and his chooks can check and then report some twisted meme of the same words in the Guardian. Perhaps they need a new reporter of everlasting indelible truth called The Truth. Woops there was one but they ran out of page-3 boobs.
 
Now there are the professors, the ABC (bin nite)…. However, of those, Readfearn at the Guardian is the least credible.
 
Marohasy should bless ‘er socks. As Readfearn is the worst of the worst and the Guardian is not even a useful bin-liner, the take-down could have been even worser!
 
No, I did not say that, neither did I say what you thought I said. Instead I read the paper.
 
Dear old cocky and others including Nick Stokes also apparently read the paper. While JM has a sweat, speaking for me, a scientist, it’s OK because it’s OK because the objective of the work was met. Data justified the conclusion.
 
Except as Chinese whispers by jokers; I don’t understand how my reply to you, comes back as something else. Ummm, lucky it had no teeth looking for a butt.
 
Posterior trigger-warning for this time of nite: look-out for post-hoc butt jokes.
 
 
Sleep well, snore loud, but as Shakespeare or Lord Byron might have said, don’t let the shingles shake.

 
Dr Bill
 
http://www.bomwatch.com.au

Reply to  Bill Johnston
May 17, 2023 6:29 am

The question, which you keep dancing around, is simple.

Is the Pt sensor designed to emulate an LIG thermometer?

One word answer please.

Yes.

or

No.

Reply to  Tim Gorman
May 17, 2023 9:12 pm

Wrong question Tim,

The question answered by the two papers is whether the PRT sensors (which were designed to provide comparable data) are in line with WMO recommendations, and they are.

Sensors and LIG are calibrated in the lab. They cannot be calibrated in a Stevenson screen any more than a medical thermometer can be calibrated in someone’s mouth or ear-canal.

The response variable cannot be juxtaposed as the control variable.

All the best,

Bill

Reply to  Bill Johnston
May 18, 2023 8:47 am

In other words:

“Go, I say go away boy, you bother me.” – Foghorn Leghorn.

old cocky
Reply to  Bill Johnston
May 17, 2023 2:30 pm

Dear old cocky and others

Oh, dear. I’ve become an old dear.

I’m not that old. It just feels like it some days 🙁

Real cockies (the ones with wings) can live to well over 100, and I aim to emulate them.

Reply to  old cocky
May 17, 2023 5:21 pm

As my arborist said, you can get out of the bed on the wrong side, or the smiley side, so long as it is the top side.

b.

old cocky
Reply to  Bill Johnston
May 17, 2023 11:54 pm

Every day above ground is a good day.

harryfromsyd
Reply to  Bill Johnston
May 17, 2023 4:10 pm

Thanks for the response Bill, it confirms everything I expected particularly eloquently.

Moron.

Reply to  Bill Johnston
May 17, 2023 6:33 am

“second by second”

If the temperature changes second-by-second and the response time of the probe is 2 sec then you are violating the Nyquist theorem of sampling. You must have at least two samples at the highest frequency of the signal to properly characterize it. That means to characterize 1 second changes in temperature you must have at least two samples per second.

This is no different than trying to look at a 40 MHz square wave with an oscilloscope with a 20 MHz response time. All you will see is garbage.

Reply to  Bill Johnston
May 17, 2023 1:53 pm

The question is whether the changes are real physical changes, or sensor/electronics noise.

Reply to  Clyde Spencer
May 17, 2023 5:16 pm

I agree, but you can’t take the ever-changing world that is the weather laboratory into a controlled laboratory.

All you can do is to produce an instrument that emulates historic instruments, while at the same time producing lots of other numbers that are useful for (as I quoted from Burt & Podesta 2020), “there is little benefit in sampling air temperature every second outside of specific research applications, such as turbulence or eddy-correlation measurements“.

While I don’t have an interest in that, an atmospheric physicist Greg Ayres probably has and Lance certainly has, at least as a bone to pick.

Yours sincerely,

Bill Johnston

Reply to  Bill Johnston
May 18, 2023 9:54 am

“All you can do is to produce an instrument that emulates historic instruments, while at the same time producing lots of other numbers that are useful”

These are incompatible goals for a measuring device. Software can do the estimation of what an LIG might provide. But the instrument device itself must be designed and built to meet the data gathering requirements.

Reply to  Jim Gorman
May 18, 2023 1:59 pm

“But the instrument device itself must be designed and built to meet the data gathering requirements.”

Which it has been. Both of the papers referred to by Marohasy at the start of this post show the probes are compatible with WMO guidelines.

Who are you to claim “these are incompatible goals”? Go and buy some data (an average smoothed over 40–80s) and look at the numbers.

Lance is also wrong. Daily max and min are calculated at 9am from the 1,440 end-of-minute samples, not within-minute max and min. (https://wattsupwiththat.com/2023/05/16/averaging-last-seconds-versus-bureau-peer-review/#comment-3722866). He just prefers what he makes up.

Some of the stuff you have written is interesting but irrelevant to the issue. You don’t observe the weather using an oscilloscope, for instance. Lance has instrument reports, I have some too, and it’s pretty clear that, in collaboration with their suppliers, the BoM have undertaken a lot of QA on their instruments.

Then of course you and others humiliate and degrade yourselves by shovelling vitriol and resorting to bullying and personal attacks on people you don’t know …. Truly pitiful.

Yours sincerely,

Dr Bill Johnston

Reply to  Bill Johnston
May 18, 2023 4:18 pm

You haven’t understood a word I have said. The WMO is as far behind as the rest of climate science. I couldn’t care less what they recommend for continuing to try to emulate the past. Forget emulating, estimating, whatever, LIG thermometers. Think what would be needed if they had never existed.

Given the opportunity to use technology that already exists to its fullest and prepare for even better technology, what design specification would you ask for?

Would you say we don’t need to do any better than what was available in 1790? I hope as a scientist that you will look forward and not backward. The data exists for calculating the REAL temperature profiles using automated digital thermometers. We should also be able to track insolation, humidity, wind, etc. on an equally fast sample basis.

Read this site and ask yourself why they denigrate climate science trying to remain the same as in the past.

Calculating Degree Days

“Some other sources (particularly generalist weather-data providers for whom degree days are not a focus) are still using outdated approximation methods that either simplistically pretend that within-day temperature variations don’t matter, or they try to estimate them from daily average/maximum/minimum temperatures.”

This is not a “climate sceptic” site. It is dedicated to bringing the best information available to HVAC engineers and to agriculture. Their snide remarks should make current climate scientists blush.

Reply to  Jim Gorman
May 18, 2023 10:18 pm

Dear Jim,
 
Don’t tell me what I understand and don’t understand; that is simply over-the-top arrogant. I know how to calculate degree days, but they are irrelevant to this conversation about measuring temperature in Stevenson screens (there is better information than your site reference, which I don’t need anyway).

And do you want C3 degree-days, or C4 degree-days, degree-days for coral bleaching, annual degree-days to flowering, ripening…? And how would HVAC engineers calculate degree-days in 1949, or 1960 for that matter, when it was just becoming trendy amongst plant breeders and agronomists? And how could they use 1-minute data when they need smoothed data? Oh, wait, by averaging of course!
 
Like Marohasy, I doubt you have ever undertaken weather observations, so what makes you an expert, and how could you possibly design an instrument for measuring something you don’t have a clue about?
 
If you know so much, why did you not pull JM up about her claim that end-of-minute probe readings are instantaneous point data, as she has been claiming for years? Because you don’t know. Even armchair warrior karlomonte doesn’t get the importance of Marohasy and Lance being knowledgeable and reliable about what they say.

karlomonte still does not get the difference between a 9am mercury dry-bulb estimate (which for all the reasons you have been hot about, is an attenuated value), and an attenuated PRT value, and why that might be important to the bloke flying 250 people into Canberra airport.  
 
And what about those numbers Lance has been flashing around for the 10-minute dataset, for at least 5 years. Do you know what they are for? Ummmm no, of course not.

They are for your local shock-jock to tell you what the hottest/coldest temperature has been to that time. They re-set each evening and morning and roll through the day keeping track of extremes. How do I know? Because while you were talking irrelevant stuff, I went and found out. WMO is also in the business of comparing the past with the present. Compatibility = comparability with the past, which is the focus of the papers Marohasy is sweaty about.
 
You say “Given the opportunity to use technology that already exists to its fullest and prepare for even better technology, what design specification would you ask for?” My answer is a PRT probe that emulates the performance of LIG gear, because that has been the standard for longer than either of us has been alive and if you want to measure climate change … look at reports on http://www.bomwatch.com.au, not at Lance’s flash-cards
 
You say “The data exists for calculating the REAL temperature profiles using automated digital thermometers. We should also be able to track insolation, humidity, wind, etc. on an equally fast sample basis.” Should your WE want to do that, you can buy much of that data from the BoM for specific sites or set-up your own gear – so why not do that if you are so keen about going down that track? I have no interest in it.
 
If you want lower boundary-layer temperature profiles, you need Bowen Ratio gear. If you want profiles to 7 km into the sky, you need wind-profiler arrays and weather balloons with instruments and GPS trackers. Oh wait, they have those. You think I am not curious about instrumentation at airports? I go and look all the time, take photographs, and if possible meet with staff and ask them (not tell them) about their job. Arrogant people would charge in and tell them what their job was or should be.
 
So, for at least five years, Lance, the high-flying electrical engineer, has been making claims that are not true; and you are telling me how to think and what to think while you sip his Kool-Aid but have not QA’d any of his stories?
 
karlomonte does not even understand that this hot little story by JM is about signal, not about variance; running averages, for instance, are a first approximation for extracting a signal from highly variable data. She wants more averaging of data that already is “an average smoothed over 40–80s”.

Joke’s on him. karlomonte does not get JM’s wonderment that averaging reduces the sharpness of a signal already smoothed over 40–80s, but then gets all petty about it anyway.
 
You also think you can run the WMO and design all sorts of gizmos when you can’t even ask basic questions or understand the significance of JM and Lance being completely off the track?
 
Frankly Jim, I’m pretty over this unedifying conversation.
 
Yours sincerely,
 
Bill Johnston

Reply to  Bill Johnston
May 19, 2023 2:59 pm

I have just read Jo Nova (https://joannenova.com.au/2023/05/what-if-the-airport-radar-was-generating-false-record-high-temperatures-through-random-electrical-noise/) and gone back over Ken Stewart’s earlier (2017) posts about spikes, which I contributed to at the time. I noticed that Ken received a reply from the BoM about those rolling Max & Min values.

I take back (retract) my comments that Tmax & Tmin are calculated from end of minute samples. Whether the source of the spikes is electrical noise or not is uncertain, however. While Tmax spikes may contribute to “record hot days” I have no evidence they affect trends.

I have also published evidence in various reports at http://www.bomwatch.com.au, including frequency and percentile analyses, that shows site changes (introduction of 60-litre screens, spraying out the site, stripping off the topsoil, etc.) are the main causes of increased extremes, not electrical interference.

Spikes were a problem in gear that I trialled in the 1980s, in the paddock, nowhere near an airport radar. Ken, Lance, Geoff, Jo Nova and others are aware of this, and I may have even posted on it once.

The people who designed the equipment thought either that the spikes were real – the gear was too sensitive, or the machine briefly exceeded the quadratic calibration. While time-averaging could reduce their effect, spikes remained “in the data”. Filtering or excluding them was a better option. The Bureau do employ an error-trapping-at-source rules-based filter for that purpose.

While we deployed four branded AWS of various types at field sites that used smaller than 60-litre screens, their data was pretty rough in the ‘tails’. There are also AWS networks in WA and Victoria (and NSW) that probably suffer from similar problems. However, I much doubt that all data everywhere suffers from electrical interference.

So apologies Lance,

All the best,

Bill Johnston

old cocky
Reply to  Bill Johnston
May 19, 2023 3:17 pm

I take back (retract) my comments that Tmax & Tmin are calculated from end of minute samples. 

Onya, Bill. It’s good to see somebody who is more interested in getting it right than being right.

Reply to  old cocky
May 19, 2023 4:24 pm

Dear old cocky,

It still bothers me that Ken’s post at https://kenskingdom.wordpress.com/2017/03/01/how-temperature-is-measured-in-australia-part-1/ is not completely consistent with what I found at Surface Weather Observations (latest 10 Minute reading) | Bureau Data Catalogue (bom.gov.au).

The rolling Tmax starts at 6am each day and runs to 9pm (i.e., it resets at 6am). Tmin resets at 6 pm, and runs until 9 am. So what is the precise running max value at 6am, when Tmax starts at the same time? Is it for the minute before the reset, or is the reset earlier?

Over at Kenskingdom, they (the BoM) say: “Firstly, we receive AWS data every minute. There are 3 temperature values:
1. Most recent one second measurement
2. Highest one second measurement (for the previous 60 secs)
3. Lowest one second measurement (for the previous 60 secs)

Relating this to the 30 minute observations page: For an observation taken at 0600, the values are for the one minute 0559-0600.

I’ve (the BoM) looked at the data for Hervey Bay at 0600 on the 22nd February: 25.3, 25.4, 23.2.

The temperature reported each half hour on the station Latest Observations page is the instantaneous temperature at that exact second, in this case 06:00:00, and the High Temp or Low Temp for the day is the highest or lowest one second temperature out of every minute for the whole day so far. There is no filtering or averaging” says BoM.

But the rolling-day max has not started yet; it starts exactly at 6 am. The rolling min does not reset until 9 am. While the rolling min can be cooler and may not be for that minute, the rolling max, which can’t be for the next minute, is a mystery at that point. What I’m saying is that there seems to be an inconsistency still unresolved.

There is also no reason why at 6.00 am, rolling-min should be the same as Tmin, so comparing the two values is misleading.

Nor, at some other times, should rolling-max equal Tmax. They are different numbers, likely to be the same near dawn, but unlikely to be so under turbulent hot conditions.

That PRT-probes go over-range is not disputed at all, never has been by me. I raised the issue with Lance et al. years ago anyway.

How they (the BoM) deal with it is the issue. If they average over it, it is still there. If they chuck it, well, they are in trouble from the data-police, aren’t they. Making comment about it causes banishment, but I still don’t believe it is entirely settled (and it probably has not much to do with radar, which is just a hypothesis anyway, the caveats being could be, might be, may be…). Some data about this would be good.

Cheers,

Bill

old cocky
Reply to  Bill Johnston
May 19, 2023 5:23 pm

As we say in IT, “the first 80% of the job takes 90% of the time. So does the last 20%”

There seem to be lots of little nooks and crannies there, few of which are adequately documented.

One of the bugbears of maintaining old code (even one’s own old code) is the inconsistent documentation. That, and the Band-Aids, chewing gum and baling wire which have been applied over the years.
Meteorological observations have been taken for a long time, and it appears that similar issues exist there.

Reply to  Bill Johnston
May 18, 2023 4:33 pm

(an average smoothed over 40–80s)

WTH does this even mean (that you spam over and over and over)? That all variance has been scrubbed away and washed down the drain?

Lance is also wrong. Daily max and min are calculated at 9am from the 1,440 end-of-minute samples, not within-minute max and min.

BFD. How does this prove it is “correct”?

Reply to  karlomonte
May 19, 2023 2:23 am

Come up for oxygen -> stop sitting on your brain.

Reply to  Bill Johnston
May 19, 2023 6:33 am

Hypocrite.

May 16, 2023 3:54 pm

Story Tip –

Ian & Jennifer discussing the Australian Minister for Energy Poverty Chris Bowen, nuclear energy, and other topics.

Chris Bowen is ‘not too bright’ and ‘doesn’t know very much’: Ian Plimer | Sky News Australia

AleaJactaEst
May 16, 2023 4:13 pm

In the post, is the author sure about the use of artificial intelligence (AI), or should this be repurposed as machine learning (ML)?

There are major differences.

Reply to  AleaJactaEst
May 16, 2023 4:48 pm

I used the word AI because it seems to be better understood, and for me machine learning is a form of AI. How do you see them as different?

rogercaiazza
May 16, 2023 4:42 pm

It has been a while since I was responsible for meteorological monitoring related to air-quality analyses for New York State and EPA. My recollection is that all the one-second data were used to calculate one-minute averages, and the one-minute averages were used to calculate longer-period averages (15-minute and one-hour). When reporting the maximum for the day we didn’t even report the one-minute average value; we used the hourly or 15-minute average. I cannot imagine any regulatory agency accepting anything other than using all the one-second data.

As other commenters have pointed out, Hg thermometers have physical response-time issues. If you are trying to determine a trend that uses both Hg and electronic thermometers, using anything less than 15-minute maxima is nuts, and I would argue that hourly average maxima are more appropriate for a representative trend.

Kudos for Marohasy’s efforts to address this egregious approach.

Reply to  rogercaiazza
May 16, 2023 5:00 pm

Thanks Roger.

I wrote to Australia’s Chief Scientist back in 2014 about this, https://jennifermarohasy.com/2014/05/corrupting-australias-temperature-record/

The Office of the Chief Scientist replied that they didn’t have time to investigate.

The head of the Bureau (Andrew Johnson) and various sceptics (including Ken Stewart) have since insisted that I am absolutely wrong to suggest that there was ever any averaging, including before 2011. What I have written in that letter is from the guy who tells me he installed the original system. But I have no proof. Just his phone call to me.

The Bureau will, on the one hand, insist that it has always been the case that the daily maximum is a one-second spot reading, while also insisting that Ayers and Warne is a test of this.

Jokers they are.

bdgwx
Reply to  rogercaiazza
May 16, 2023 5:56 pm

I’m assuming you used ASOS or equivalent data. The method is to form a 1-minute value computed as the average of the 6 values within the minute. Then, each minute, a 5-minute value is computed as the average of the previous five 1-minute values. There are 60 5-minute values in each hour. The daily Tmax or Tmin are selected from the set of 1,440 5-minute values in the day.
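
To make the scheme concrete, here is a minimal Python sketch under the assumptions above (10-second raw samples, six per minute, a trailing five-minute window). It is illustrative only; operational ASOS processing may differ in its details.

    import math

    # Sketch of the ASOS-style averaging described above. Synthetic data.
    raw = [20 + 5 * math.sin(2 * math.pi * i / 8640) for i in range(8640)]  # one day of 10 s samples

    def one_minute_means(samples):
        # average consecutive groups of six 10-second samples
        return [sum(samples[i:i + 6]) / 6 for i in range(0, len(samples), 6)]

    def rolling_five_minute(minute_means):
        # one 5-minute mean per minute, from the trailing five 1-minute means
        return [sum(minute_means[i - 4:i + 1]) / 5
                for i in range(4, len(minute_means))]

    mins = one_minute_means(raw)      # 1440 one-minute means
    five = rolling_five_minute(mins)  # rolling values, one per minute thereafter
    tmax, tmin = max(five), min(five)
    print(round(tmax, 2), round(tmin, 2))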

May 16, 2023 4:55 pm

 He was not particularly interested in my work on how the Australian Bureau of Meteorology measures temperatures

the reason is simple.

make an estimate of the CHANGE in temp between 1850 and today

you’ll get something like 1.5C.

now take your australian data. randomly corrupt it by adding 1C to every value.

make an estimate of the CHANGE in temp between 1850 and today

you’ll get something like 1.5C.

in other words it’s a difference that makes no difference
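
Mosher’s claim is easy to check numerically: a constant offset added to every reading drops out of any estimate of change. A minimal sketch, using a synthetic series with an assumed trend (all numbers invented for illustration):

    import random

    # A constant offset cancels out of any estimate of CHANGE.
    # Synthetic series with an assumed trend; illustration only.
    random.seed(1)
    years = list(range(1850, 2024))
    temps = [14 + 0.0104 * (y - 1850) + random.gauss(0, 0.2) for y in years]

    def change(series):
        # difference between the means of the last and first 30 years
        return sum(series[-30:]) / 30 - sum(series[:30]) / 30

    shifted = [t + 1.0 for t in temps]  # "corrupt" every value by +1 C
    print(round(change(temps), 3))      # roughly 1.5
    print(round(change(shifted), 3))    # identical: the +1 C cancels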

Reply to  Steven Mosher
May 16, 2023 5:56 pm

randomly corrupt it by adding 1C to every value.

This is what qualifies as “random” in mosh-world?

Reply to  Steven Mosher
May 17, 2023 8:20 am

Funny you quote to 1 decimal point, yet temps are shown to 3 decimal places.

May 16, 2023 5:01 pm

somebody is still stuck on the stupid of absolute temperature.

you can do this same experiment with FREE CRN DATA and find no difference whatsoever.

none

ClimateBear
May 16, 2023 5:37 pm

Jennifer, just how the heck can anyone be defamed by Graham Readfearn regarding ‘climate change’?

BTW, the need for your analysis of the BOM ‘data’ is so blindingly obvious that it really is a wonder to me (a) how they came up with the idea that they could use such obviously biased ‘data’, and (b) how the heck they have not been pilloried by the so-called ‘peer review’ system.

Follow the grant money, follow the travel points, follow the paid-for accommodation all round the world, follow the back-patting by the fellow-travelling pals.

Reply to  ClimateBear
May 16, 2023 6:52 pm

Bear, thanks for reassuring me that I have not gone insane.

Jokers they are.

ClimateBear
Reply to  Jennifer Marohasy
May 17, 2023 3:56 pm

Jennifer, I imagine the abuse might make you twitch from time to time, but can I thank you for much the same reassurance. I had been doubting myself, thinking that, being just an engineer with a bachelor’s degree, I lacked any expertise in understanding the vagaries of data quality in this area.

After all, these people are “experts”. The engineer’s definition (offered on day one of first year at UWA by the dean of engineering): ‘x’ is an unknown quantity and a ‘spurt’ is a drip under pressure. I wonder if climate scientists (climate scientossophists, imo) are given a comparable brief instruction in professional humility? Somehow I think not. In fact, I think it might be just the opposite, which is then trumpeted by the likes of Readfearn and the spoon-fed media, let alone Mikey and co.

Soldier on grrrl ! 🙂

Reply to  ClimateBear
May 16, 2023 7:21 pm

Don’t forget to check who paid for Scholarships.

May 16, 2023 5:50 pm

Keep at it, Jennifer. It took six years to publish ‘Propagation’ over the most incompetent reviews I’ve ever received, until luck gave me Carl Wunsch and Davide Zanchettin.

Even so, publication took a manuscript editor with integrity and the courage to stand up to negative comments from the Editor-in-Chief.

So keep pushing. Editors with integrity and courage are few these days. But find one and you’ll succeed. And best of luck!

If the BOM AWS sensors aren’t mechanically aspirated, they won’t produce accurate air temperatures in any case.

R.K.
May 16, 2023 6:21 pm

The temperature recorded at about 14.45 on the Canberra graph above would definitely be caused by aircraft, and that spike is not a normal temperature you would expect at that time. On a September day in 2019 the BOM claimed some sort of heat record; on that day the wind was about 25 knots from the west, meaning the air was coming from the hot inland, and when I checked, two B737s took off and one B737 landed in the space of a few minutes at about the same time of day as in the graph above.
The exhaust from the engines is over 600 C, and that Stevenson Screen would have been affected by the exhaust air from all three aircraft, as even on landing the reverse thrust causes significant power and heat dispersal. It would be correct to say that many temperature recordings at airports everywhere are similarly affected.
Comparison of temperature records is meaningless without taking into account area pressure, wind speed and direction, humidity and aerodrome elevation, and none of those are used by recording authorities.

Nick Stokes
Reply to  R.K.
May 16, 2023 6:45 pm

“The temperature recorded at about 14.45 on the Canberra graph above would definitely be caused by aircraft”

Here is a map from BoM metadata of the Canberra site. It is 200 m from the nearest runway.

[image]

bdgwx
Reply to  Nick Stokes
May 16, 2023 6:57 pm

And over 500 m from the runway handling the larger aircraft.

Nick Stokes
Reply to  bdgwx
May 16, 2023 7:43 pm

Indeed so. Runway 30 is for small planes:
“Runway 12/30 crosses the main 17/35 and services smaller aircraft from Dash-8s downward.”

Reply to  Nick Stokes
May 16, 2023 9:38 pm

Has anyone looked at the time series of temperatures for turbulence swirls coming from a high-temperature engine exhaust versus a low-temperature wind? I suspect that they will look different enough to distinguish one from the other. How far does the exhaust turbulence travel? It seems that you are implying that it will be damped out within a couple of hundred metres. Is there evidence to support that assumption?

Nick Stokes
Reply to  Nick Stokes
May 16, 2023 6:58 pm

Here is a google map of the site, with AWS in yellow ring

[image]

R.K.
Reply to  Nick Stokes
May 16, 2023 7:23 pm

Nick,
And that is precisely why temperatures recorded there would be affected by aircraft taking off on Runway 35 with a westerly wind. At that position, with take-off power applied, the hot air would affect that area.

Nick Stokes
Reply to  R.K.
May 16, 2023 7:26 pm

It is 200 m away.

R.K.
Reply to  Nick Stokes
May 16, 2023 7:36 pm

That is way close enough. Anyway what experience do you have of hot air from aircraft on tarmacs and runways?

Reply to  R.K.
May 17, 2023 3:38 am

I live about a mile from the end of one of the runways at an airfield handling KC-135 tanker aircraft. When they rev up their engines or apply reverse thrust it is so loud that you can’t hear much else going on outside. That sound is creating pressure waves in the atmosphere if nothing else, and that alone would affect a sensitive sensor in a temperature measuring station. It is simply unbelievable that the pressure wave would not carry heat turbulence with it for quite some distance, more than 200 m and probably even 500 m. On a sunny summer day, when the tarmac is hot, that pressure wave would blow the heated air a long way!

I often question just how much time outside most climate scientists spend during the day! And where they spend it!

R.K.
Reply to  Tim Gorman
May 17, 2023 4:19 am

Tim,
All you say is correct, and it reminds me of something else, from walking across tarmacs behind aircraft about to taxi in the days before air bridges: the smell of Jet A1. Around aerodromes, air laden with that could well affect measuring equipment. With aircraft both taking off and landing, very high-speed exhaust air could reach quite high above the runway at times. Landing both light aircraft and jets can be quite challenging on hot runways in windy conditions.

Reply to  R.K.
May 17, 2023 6:46 am

Yep! I hadn’t even thought about comparing jet fuel smell with heat distribution! If you can smell the jet fuel then there is probably some kind of heat distribution happening as well!

R.K.
Reply to  R.K.
May 16, 2023 6:52 pm

It could further be said that airport temperatures in summer, during the hotter times of the day, would be significantly affected by the extra heat rising over the runways and taxiways. That air could well be 10 C hotter than what is recorded and, depending on wind direction, would influence the measurements inside the Stevenson Screens.
That heat and rising air above the runway also cause significant changes to the air near the runway approaches, as aircraft can experience high sink rates as a result. In summer, landing at places like Mt Isa and Alice Springs in large jet aircraft, full power is sometimes needed to arrest those sink rates.
When these effects and the aircraft exhausts are taken into account, the only true aerodrome recordings would need to be taken well away from operational areas.

Nick Stokes
Reply to  R.K.
May 16, 2023 7:00 pm

It is 200 m from the nearest runway, with grass all around.

Reply to  Nick Stokes
May 16, 2023 9:40 pm

What are your unstated assumptions?

Nick Stokes
Reply to  Clyde Spencer
May 17, 2023 12:15 am

That Google Maps represents distance accurately.

Reply to  Nick Stokes
May 17, 2023 3:42 am

Typical evasion. Don’t you ever get embarrassed?

Reply to  Nick Stokes
May 17, 2023 1:47 pm

Sweets for the sweet. A trivial answer from a …

bdgwx
Reply to  Clyde Spencer
May 17, 2023 6:44 am

In google maps right click on the AWS in the southeast corner of the meteorological site. Then select measure distance and click on another location. It will tell you the distance between the two points.

Reply to  bdgwx
May 17, 2023 7:53 am

Why are you answering for Nitpick? He’s quite capable of running away from pertinent questions without your aid.

Reply to  bdgwx
May 17, 2023 1:56 pm

Even you missed the point. I wasn’t questioning the distance, but asking about evidence to support the unstated assumption that 200 m would be sufficient to dilute and cool the engine exhaust enough to be unmeasurable. Starting from 600 degrees, the exhaust only needs to remain a tenth of a degree above ambient to be detectable.

bdgwx
Reply to  Clyde Spencer
May 17, 2023 2:38 pm

Oh, got it. Consider a 1 m^3 parcel at 600 C. Its excess energy is about 1.2 kg * 1 kJ/kg.C * (600 C – 30 C) = 684 kJ. If we then mix this energy into a cone with a 10-degree dispersion angle out to 200 m, that is a volume of about 250,000 m^3, or 1.2 kg/m^3 * 250,000 m^3 = 300,000 kg of air. Therefore there is only enough energy to raise that mass by 684 kJ / (1 kJ/kg.C * 300,000 kg) = 0.002 C.

Obviously things would be dramatically different if the station were in line with RW 12/30 and receiving a continuous spray of direct exhaust. But as it is actually configured, only a westerly wind can get the exhaust over to the station. And remember, RW 12/30 cannot accommodate the larger commercial jets.
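
This first approximation is easy to parameterise by distance. A minimal sketch in Python, using only the assumptions stated above (1 m^3 parcel, 600 C exhaust, 30 C ambient, a 10-degree well-mixed cone); the exact cone formula gives about 260,000 m^3 at 200 m, consistent with the rounded 250,000 m^3 figure:

    import math

    # Well-mixed-cone estimate of exhaust warming, parameterised by distance.
    # All inputs are the assumptions stated above, not measured values.
    def cone_warming(distance_m, t_exhaust=600.0, t_ambient=30.0,
                     rho=1.2, cp=1.0, half_angle_deg=10.0):
        excess_kj = rho * 1.0 * cp * (t_exhaust - t_ambient)  # ~684 kJ in 1 m^3
        r = distance_m * math.tan(math.radians(half_angle_deg))
        volume = math.pi * r * r * distance_m / 3.0           # cone volume, m^3
        return excess_kj / (rho * volume * cp)                # temperature rise, C

    print(round(cone_warming(200), 4))  # ~0.002 C, matching the figure above
    print(round(cone_warming(500), 5))  # smaller still at the 17/35 distance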

Reply to  bdgwx
May 17, 2023 6:16 pm

It would seem that your assumptions only apply to gusts of wind a few minutes apart. The jet exhaust is continuous for as long as the aircraft is sitting on the runway. It also appears that you are assuming that the 1 m^3 parcel at 600 C is thoroughly mixed with the entire volume of the dispersion cone. That is the crux of the problem! If vortices constrain the hot air from mixing, packets may well be much warmer than your thoroughly-mixed assumption allows. I don’t think that your calculations rule out the possibility of hot packets reaching the Stevenson Screen without knowing more about the behavior of the mixing processes. That is why I asked Stokes about his assumptions.

bdgwx
Reply to  Clyde Spencer
May 17, 2023 7:08 pm

There are a lot of assumptions there. It is intended only as a first approximation. I’m certainly willing to accept that the effect upon the station is greater than 0.002 C. In fact, there are a few reasons I can think of why it would actually be higher. But try as I might, I can’t get it up to 0.2 C (100x higher), which could still be less than the environmental noise at the station.

Reply to  R.K.
May 17, 2023 3:41 am

That heat will not only create high sink rates but, for smaller aircraft, can create low sink rates. The rising air off the tarmac can make it hard for the smaller aircraft to actually get on the ground. We used to fly 4-seat high-wing Cessnas from Topeka to Kansas City. More than once when landing in Topeka it was a fight between stalling and landing because of the rising air. Good thing it was a runway long enough for B-52s!

Reply to  R.K.
May 17, 2023 1:36 pm

As George Carlin’s Hippy Dippy Weatherman, Al Sleet so eloquently put it, “The temperature is 68 degrees at the airport…which is stupid because nobody lives at the airport.”

Reply to  R.K.
May 16, 2023 6:53 pm

R.K. The Melbourne data is even more fun. And it is also worth looking at the Canberra data that they choose to not use, because it didn’t meet their claimed QA.

bdgwx
Reply to  R.K.
May 16, 2023 6:56 pm

If the hypothesis is that temperature spikes are caused by aircraft, then why don’t we see more of those spikes? Also, can you provide links to literature discussing the effects of aircraft takeoffs/landings, with plots of the effect vs distance?

R.K.
Reply to  bdgwx
May 16, 2023 7:32 pm

bdgwx
If the prevailing wind is such that the recording area is in the path of hot air from the effects of aircraft, or even from over the runway in summer, it will affect it. A wind from the west/north-west at 25 knots or more would affect that Canberra AWS.

Nick Stokes
Reply to  R.K.
May 16, 2023 7:45 pm

The AWS is 500 m from runway 17/35 which the big jets use.

R.K.
Reply to  Nick Stokes
May 16, 2023 7:54 pm

Nick,
The AWS is actually about 380 metres from that runway but the distance is unimportant if the air is very hot and the wind strength is above 20 knots. We are talking about air with a temperature above 600 C. What do you know about operating aircraft at Canberra?

bdgwx
Reply to  R.K.
May 16, 2023 8:28 pm

I get a maximum distance of 2.0 km and a minimum distance of 0.5 km for runway 17/35.

A parcel of air moving at 20 knots while dispersing at only a 10 degree angle due to turbulent mixing would mix into 4,000,000 m^3 by the time it made it to the station 50 seconds later. And that’s for the minimum distance between the runway and station and does not consider that there would be extreme buoyant lift to the parcel.

I’m not seeing how a 600 C parcel of air is going to have a measurable effect on that station. Can you provide evidence to the contrary?

R.K.
Reply to  bdgwx
May 17, 2023 1:38 am

bdgwx
The site is about 380 metres from R/W 35, and with a 25-knot wind blowing from the west/north-west and two B737s taking off and one landing within a few minutes of each other, you have six engines putting out air above 600 C at about 300 metres per second for about half the length of the runway, plus what is occurring in the air above the ground as they climb out. If you have ever stood behind jet aircraft even 100′ away whilst taxiing, you might have some idea what I am talking about.

bdgwx
Reply to  R.K.
May 17, 2023 6:37 am

The site is about 380 metres from R/W 35

[image]

two B737s taking off and one landing within a few minutes of each other you have six engines putting out air above 600 C at about 300 metres per second

Then you would expect two spikes in the temperature.

 If you have ever stood behind jet aircraft even 100′ away whilst taxiing you might have some idea what I am talking about.

We’re not talking about being behind a jet 100′ away.

Reply to  bdgwx
May 17, 2023 6:48 am

We are talking about UHI that can literally be carried for miles by the wind, let alone 500 m. When you are looking for differences in the hundredths digit, it doesn’t take much to affect the results.

R.K.
Reply to  bdgwx
May 17, 2023 3:36 pm

bdgwx
In talking about air 100′ behind a jet, I was simply using that as an example of how strong and hot the wind from jet exhausts is, even at idle power. The air from inland Australia was already hot on that day, carried by a wind from the north-west at somewhere around 25 knots, and with three aircraft operating on the same runway within minutes of each other, as they were, the air would have been very hot over that whole area to the east of the runway.
I can state from first-hand experience that the air would be hot and would affect the AWS. That experience stems from 54 years of flying, including a long airline career, and having flown jets into Canberra hundreds of times. What’s yours?

bdgwx
Reply to  R.K.
May 18, 2023 5:23 am

If you know that the AWS at Canberra is affected by aircraft takeoffs/landings then present data that shows it.

I have no experience flying aircraft and I’ve never been to Canberra.

Reply to  R.K.
May 17, 2023 12:46 am

It was actually the wind profiler array that changed Canberra airport’s temperature.

https://joannenova.com.au/2017/10/canberras-hottest-ever-september-record-due-to-thermometer-changes-and-a-wind-profiler/

Cheers

Bill Johnston

R.K.
Reply to  Bill Johnston
May 17, 2023 1:45 am

Bill,
I am aware of that link, and although site changes might be involved, at 14.36 on that day, when the record temperature was recorded, the wind was from the west/north-west at 25 knots plus, and three aircraft took off and landed right around that time. It would have been the jets that affected the temperature, and I am talking about that day only; at other times your comments about higher temperatures would be correct.
I speak from having operated jet aircraft into and out of Canberra on hundreds of occasions.

Reply to  R.K.
May 17, 2023 3:50 am

It’s not just the jet impacts but the impact hot tarmac can have on downwind measuring devices. Studies have shown that UHI effects can travel literally miles downwind. This impact is not easily identified as “spikes” but will definitely bias temperature readings over the entire day, daytime as well as night-time.

R.K.
Reply to  Tim Gorman
May 17, 2023 4:09 am

Tim,
I have spent a lot of time on hot tarmacs and I agree with your comments

Reply to  R.K.
May 17, 2023 6:17 pm

I have no doubt that you are correct, R.K. My concern in the post was the additive effect of the array on baseT. Anyway, the array effect was only decided on because I detected the step and looked for the cause post hoc. Other factors are not ruled out.

I have flown into Canberra many times too, and the approach up the valley from the south, over Hume, over the river, then the concrete dump/recyclers, can lead to some pretty brisk adjustments and ups and downs. I was on one flight that was all over the sky, and when the pilot finally crashed it down onto the runway, as it skewed around in the wind to point the right way, everyone cheered with relief!

Thanks for your insights.

Cheers,

Bill

Reply to  bdgwx
May 17, 2023 3:56 am

The term “response time” obviously has little physical meaning for you.

Have you ever been tapped on the shoulder, turned around, and found no one there? That’s response time!

ferdberple
May 17, 2023 3:41 am

My co-author of this report, testing the last one-second hypothesis as discussed with Otto Weiss all those years ago, cannot be named.
========
This limits those willing/able to supply articles, even on WUWT.

Pen names have been used throughout history to deal with tyranny such as we are witnessing. A secret is safe with 3 people, so long as 2 of them are dead.

ferdberple
May 17, 2023 3:58 am

What is your definition of average temperature?

(Min + Max)/2 is what the official services use, as I recall, at least historically, using min/max thermometers.

This is quite a bit different from a thermistor, for example, where you are sampling. You cannot duplicate the min/max thermometer, but you can come close.

The BOM is sampling once a minute. It makes no difference which second of the minute, so long as it is consistent. From Nyquist we know that this will allow us to detect a frequency of one cycle per 2 minutes or slower.

In other words, if a mercury min/max thermometer cannot detect a change quicker than one cycle per 2 minutes, the BOM is OK. If mercury thermometers have a quicker response, then the BOM needs to up its sample rate.
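
The Nyquist point is easy to demonstrate with synthetic signals: sampled once per minute, a 4-minute cycle is resolved, while a 1-minute cycle aliases away entirely. A minimal sketch (invented signals, for illustration only):

    import math

    # Once-per-minute sampling resolves cycles slower than two minutes;
    # a faster cycle aliases. Synthetic signals only.
    def sample_once_per_minute(period_min, n=8):
        return [round(math.sin(2 * math.pi * t / period_min), 3)
                for t in range(n)]  # t in whole minutes

    print(sample_once_per_minute(4))  # 4-minute cycle: the oscillation is visible
    print(sample_once_per_minute(1))  # 1-minute cycle: aliases to all zeros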

Reply to  ferdberple
May 17, 2023 6:40 am

It’s not obvious that climate scientists have ever heard of Nyquist.

Remember, the temperature profile is actually a higher-order polynomial during the day and an exponential decay at night. Both contain higher frequencies than just a single sine wave base. You are going to have to do faster sampling in order to capture adequate data to characterize the temperature profile. That’s especially true at the inflection points where the profile changes from a polynomial to an exponential.

It just looks like climate science is still stuck on the (Tmax+Tmin)/2 paradigm and it doesn’t really matter how Tmax and Tmin are measured.

ferdberple
Reply to  Tim Gorman
May 17, 2023 7:35 am

I agree with you in principle. See my reply below.

Reply to  ferdberple
May 17, 2023 8:39 am

If climate science would move forward to using degree-days and the like, then it could join the more advanced disciplines of agricultural science and HVAC engineering. Not likely to happen, based on what we see spouted on here.

old cocky
Reply to  Tim Gorman
May 17, 2023 2:52 pm

That’s where using relatively fast-responding instruments at 1 Hz or better allows us to have our cake and eat it too.
The data allow integration (e.g. degree-days) along with various analyses.

With a suitable characterisation of the behaviour of met LiG thermometers, the more granular data also allow reasonably granular LiG emulation in software, for consistency with the legacy data.
Both the WMO computational-averaging and BoM thermal-mass approaches seem to provide an approximation at best, rather than an emulation.

Reply to  old cocky
May 18, 2023 6:06 am

Yep! And the term “approximation” actually means uncertainty in the resulting data. But of course climate science always assumes uncertainty is random, Gaussian, and cancels. That way it doesn’t need to be propagated through all the averaging calculations!

old cocky
Reply to  Tim Gorman
May 18, 2023 5:01 pm

I can sort of see why they did what they did with the limitations at the time, but it wasn’t very forward thinking.

The first of the remote monitored met equipment pre-dates Linux and well and truly pre-dates the Raspberry Pi, and mobile phones were still for yuppies.

When the BoM would have been planning in the late 1980s or early 1990s, the meteorologists probably didn’t have much exposure to computers, especially the senior staff. Maybe some had a Mac Plus or SE, or an IBM compatible running MS-DOS or Windows 386.
At that stage, their IT people would have been thinking of all the wonderful things they could do with their new Cray.
There were still a bunch of proprietary UNIX[TM] versions, the UNIX Wars were still raging, and AT&T was monstering the few BSD distros out there.

It was a different world then. Most of the IT we take for granted now really only took off at the turn of the century.

Reply to  old cocky
May 19, 2023 5:15 pm

No, the first (Fielden) instruments were remotely sensed chart/display recorders, with sensors out near the runways relayed by underground cable to the control tower and the met office/Flight Services Unit office (1950s).

I’ve never seen one, and while I have searched around and asked the BoM (who installed them at all major airports), I’ve never been able to find any information. (Sydney Observatory also had a Fielden, probably reporting to the regional BoM office.)

b.

old cocky
Reply to  Bill Johnston
May 19, 2023 5:48 pm

I’d forgotten all about chart recorders, and didn’t know any were used for meteorology purposes.
Wouldn’t those old charts be a treasure trove?

bdgwx
Reply to  ferdberple
May 17, 2023 11:13 am

According to [Burt & Podesta 2020], T63 is 211 s and 69 s for unventilated and ventilated screens respectively for the Tmax pattern of a LiG. That means the response times (3*T63, or ~95%) are 633 s and 207 s respectively.

ferdberple
May 17, 2023 4:13 am

Nyquist says the actual second you choose is not of interest. What is important is how often you sample.

Change the temperature. Measure how long it takes the mercury thermometer to change by whatever your resolution is on the mercury thermometer. Say 0.5 degrees.

Whatever time that is, divide by 2, and that is how often you must sample. So if it takes 1 minute for the mercury thermometer to change 0.5 degrees you must sample twice a minute. If it takes 2 minutes, you are OK sampling once a minute as the BoM is doing. 3 minutes, they are in overkill.

Reply to  ferdberple
May 17, 2023 6:43 am

Again, this is for a pure sine wave signal. Since the temperature profile is not a true sine wave, you must sample more often and do it with a measuring device that has a response time faster than the sampling rate.

If the response time of the probe is engineered to be the same as the LIG thermometer, then sampling twice per minute is worthless. The samples won’t adequately represent the signal, just like a 20 MHz scope can’t properly display a 40 MHz signal.

ferdberple
Reply to  Tim Gorman
May 17, 2023 7:35 am

Nyquist does not require a sine wave. We are only interested in how rapidly we must sample to capture the performance of a min max thermometer.

Of course if we integrated the entire waveform over time we would capture a much more mathematically correct average, but that would not satisfy the rules created by the BoM.

Reply to  ferdberple
May 17, 2023 8:36 am

I know it doesn’t require “a” sine wave. But it *does* require knowing the frequency of the highest-frequency sine-wave component in the signal. A square wave is not a sine wave but is made up of a series of ever-higher-frequency sine waves, at least to a first approximation. If you cut off any of the higher frequencies, then what you get will be a distorted view of the original signal.

If the night-time temp profile is an exponential decay, then what is the Nyquist frequency needed to capture that signal using a sensor that itself has an exponential response?

If the temperature decay is faster than the response time of the sensor, then how can you ever get an accurate reading? Averaging won’t help, because all the readings will be higher than the actual temperature was! And that will be true whether you do a simple arithmetic average or an integral average.

I don’t think the BOM knows who Nyquist is or what his theorem says. What they have done is basically try to do the old (Tmax+Tmin)/2 as an “average” temperature. What seems to escape them is that the actual average is the average of the polynomial daytime temps combined with the average of the exponential night-time temps. And that will *never* come out to be the mid-range temp of (Tmax+Tmin)/2.

It’s “TRADITION” in climate science, *always* tradition. Time to join the 21st century.
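
The lag argument above can be illustrated with a first-order sensor model tracking an exponentially decaying night-time temperature: while the air cools, the sensor always reads high. The time constants below are assumptions for illustration, not measured values for any BoM probe.

    import math

    # First-order (exponential-response) sensor tracking an exponentially
    # decaying night-time temperature. Time constants are assumed values.
    tau_sensor = 60.0   # sensor time constant, seconds (assumed)
    tau_air = 1800.0    # air-temperature decay constant, seconds (assumed)
    dt = 1.0            # integration step, seconds

    def t_air(t):
        return 10.0 + 8.0 * math.exp(-t / tau_air)  # cooling toward 10 C

    reading = t_air(0.0)
    for step in range(1, 3601):
        t = step * dt
        # discrete first-order lag: the reading relaxes toward the true value
        reading += (t_air(t) - reading) * dt / tau_sensor
        if step % 1200 == 0:
            print(f"t={t:5.0f} s  air={t_air(t):6.3f}  sensor={reading:6.3f}")
    # While the air cools, the sensor reading stays above the true temperature.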

Reply to  Tim Gorman
May 17, 2023 6:25 pm

Remember we are sampling the weather here.

According to Burt and Podesta: “Of course, the pursuit of shorter and shorter time constants to enhance sensor responsiveness in meteorological measurements of air temperatures is desirable only up to a point. Unlike wind speeds, for example, there is little benefit in sampling air temperature every second outside of specific research applications, such as turbulence or eddy-correlation measurements…”, etc.

Cheers,

Bill

Reply to  Bill Johnston
May 18, 2023 11:39 am

“there is little benefit in sampling air temperature every second outside of specific research applications,”

Another stick-in-the-mud excuse for keeping the old ways.

There are all kinds of uses for better and more accurate temperature distributions throughout a day. They all require increased sampling rates. Is one second too fast? Why hasn’t it ever been addressed as to what one second sampling might provide?

Here is a question: does water vapor begin to reflect insolation prior to precipitating? In other words, could clouds be “surrounded” by water vapor that reduces surface insolation before the cloud shadow arrives and after the shadow passes, thereby increasing clouds’ effectiveness in reducing temperature?

ferdberple
Reply to  Tim Gorman
May 18, 2023 5:37 am

Agree. The mercury thermometer itself has inertia and lag, so from a data-science point of view you are ill-advised to try to recreate this as a means of improving accuracy.

Reply to  ferdberple
May 17, 2023 8:40 am

The problem is that you are assuming a step change to an equilibrium value. With a continuous change there is no equilibrium value. The LIG will never catch up to an ever-increasing change. This also occurs as temps fall: an LIG will never catch up until an equilibrium occurs with no change.

Reply to  ferdberple
May 17, 2023 3:35 pm

“So if it takes 1 minute for the mercury thermometer to change 0.5 degrees you must sample twice a minute. If it takes 2 minutes, you are OK sampling once a minute as the BoM is doing. 3 minutes, they are in overkill.”
No, this is a bit mixed up.
The magnitude of the step change that is being followed is important but not mentioned. Two samples per cycle means a change and a change back again: not a single change, but a cycle back to the beginning value. The Nyquist-Shannon theorem dictates two samples per cycle of the highest frequency present. And the change should be 63.2%, not 50%.

ferdberple
May 17, 2023 7:45 am

Imagine we modified a modern high resolution display to only display a single monochrome line of text.

That is what climate science is doing: adding bad data practices to modern thermistors to recreate min/max mercury thermometers.

It is idiotic make-work data corruption through ignorance. Reducing the accuracy of the present is not a remedy for anything honest. It shows a lack of best practices in data science.

Reply to  ferdberple
May 17, 2023 8:41 am

Nailed it!

They are only adding more and more uncertainty into the data stream instead of reducing it.

bdgwx
Reply to  ferdberple
May 17, 2023 9:08 am

ferdberple: Reducing the accuracy of the present is not a remedy for anything honest.

BoM isn’t reducing the accuracy though. Or, assuming you actually meant uncertainty, they aren’t increasing that either. In fact, BoM presents evidence that, due to the instrument’s higher time constant, the effective 40–80 s average is already compliant with the WMO guidance without doing the averaging at the data logger. Note that the WMO states that you should use averaging to reduce the uncertainty of reported data and recommends a 1–10 minute averaging window. See [Guide to Meteorological Instruments and Methods of Observation 2014] for more details.

Reply to  bdgwx
May 17, 2023 10:16 am

That isn’t the point. Your report is still trying to emulate LIG readings from over a century ago. That emulation DOES generate measurement error in determining the true temperature at any given second.

It is time to discuss how to use the better data that we currently have rather than find excuses to keep measurements looking like they always have.

Only the statisticians want to keep measurements the same in order to have “long” records for their averages. The scientists should say we are moving to a new method that is better than the old way. There are studies being done that show even four temps per day provide a better look and show considerable differences when compared to the max/min.

Reply to  bdgwx
May 17, 2023 4:26 pm

“Note that the WMO states that you should use averaging to reduce the uncertainty of reported data and recommends a 1–10 minute averaging window.”

Typical climate science. Averaging does *NOT* reduce uncertainty. The only way to reduce uncertainty is to use better measuring equipment with more accuracy and higher precision.

For some reason climate science ALWAYS assumes that all uncertainty is random, Gaussian, and cancels. Unfreakingbelievable.

ferdberple
Reply to  bdgwx
May 18, 2023 5:45 am

Replace accuracy with quality. Formally accuracy and uncertainty are attributes of data quality. For comprehension I was using the generic form of accuracy.

bdgwx
Reply to  ferdberple
May 18, 2023 7:59 am

Typically, quality refers to the legitimacy of a measured value. For example, if Canberra reported 100 C then quality control would reject the value as unreasonable. It is a different concept from uncertainty.

However, if you’re using “quality” to describe how well an RTD matches LiG behavior then, yeah, that’s the rub. If the RTD has an effective averaging period that is too long or too short, then the quality of comparisons between RTD and LiG is compromised. But that doesn’t mean the RTD or the LiG is itself providing bad-quality data.

Giving_Cat
Reply to  ferdberple
May 17, 2023 11:20 am

I suspect bringing best practices to climate “science” doesn’t produce the results they want.

bdgwx
May 17, 2023 1:34 pm

Here are some relevant publications.

[Burt & Podesta 2020] Response times of meteorological air temperature sensors
[Burt 2022] Measurements of natural airflow within a Stevenson screen and its influence on air temperature and humidity records
There is a lot of content to wade through, but two simple rules stuck out at me.

T63 = 5.6*D^1.5 / Vscreen^0.5

Vscreen = 0.07 * V10

D is the PRT diameter in mm, Vscreen is the airflow inside the screen, and V10 is the 10 m wind speed (both in m/s).

So, for a 10 m wind of 10 m/s and a 4 mm PRT, Vscreen = 0.07 * 10 = 0.7 m/s and T63 = 5.6 * 4^1.5 / 0.7^0.5 = 54 s. And that would be a response time of 3 * T63 = 3 * 54 = 162 s.
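
The two quoted rules are one-liners in code; a minimal sketch that reproduces the worked example above:

    # The two rules quoted above, as given:
    #   Vscreen = 0.07 * V10
    #   T63 = 5.6 * D^1.5 / Vscreen^0.5   (D in mm, V in m/s, T63 in s)
    def t63(d_mm, v10_ms):
        v_screen = 0.07 * v10_ms
        return 5.6 * d_mm ** 1.5 / v_screen ** 0.5

    tc = t63(4, 10)                  # 4 mm PRT, 10 m/s wind at 10 m
    print(round(tc), 3 * round(tc))  # 54 and 162, matching the worked example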

Reply to  bdgwx
May 18, 2023 5:24 am

Which means the temperature has to remain stable for about 3 minutes in order for the measuring device to get an accurate reading. Does this happen at Tmax/Tmin because of the thermal inertia of the atmosphere, or does it change more quickly even then? If I had the time I’d go look at my 5-minute data, but I don’t. Perhaps someone else can determine this.

May 17, 2023 1:42 pm

Tim Gorman,

So, so much thanks for persisting with your explanations, not only of time constants, but also sampling rates, Nyquist theorem and more.

One of the key points that you repeat is: design your sampling system for measuring temperature after you understand how it cycles … how it changes. This is so important.

You write:

“Remember, the temperature profile is actually a higher-order polynomial during the day and an exponential decay at night. Both contain higher frequencies than just a single sine wave base. You are going to have to do faster sampling in order to capture adequate data to characterize the temperature profile. That’s especially true at the inflection points where the profile changes from a polynomial to an exponential. [ends]

You also make mention of ‘integration’. I don’t understand what this refers to exactly in the context of engineers measuring temperatures.

I agree with you about the importance of climate science moving away from simple maximum and minimums. My training and first employment was as an entomologist. Measuring day degrees, back in the early 1980s, was central to everything I did, first working in cotton pest management on the Darling Downs in Queensland and later as a field biologist in Madagascar.

As a field biologist working in Madagascar, my ‘bible’ was T.R.E. Southwood’s ‘Ecological Methods’. It taught that if you want to properly and adequately sample an insect population you need to first have some idea about its distribution and abundance. While this may seem somewhat different from sampling temperatures it is not really. :-).

None of my statistics books include ‘integration’ in the index. How is this done, mathematically?

bdgwx
Reply to  Jennifer Marohasy
May 17, 2023 2:08 pm

I agree with you about the importance of climate science moving away from simple maximum and minimums.

I don’t think anyone is going to mount a serious challenge to the idea that there are better alternatives to (Tmax+Tmin)/2. Reanalysis, for example, does the full integration at sub-hourly timesteps. However, the problem has always been how we deal with historical data in which Tmax and Tmin are all that is available.

None of my statistics books include ‘integration’ in the index. How is this done, mathematically?

∫ f(x) dx

When numerically integrating, the smaller dx is, the smaller the error in the calculation. For example, the Tavg = (Tmax + Tmin) / 2 method is most closely representative of dx = 0.5 days (with caveats). On the other hand, Tavg = (T1 + T2 + … + T24) / 24 is equivalent to dx = 1/24 days. 3D-VAR/4D-VAR systems often calculate Tavg using a method equivalent to dx = 0.003 days or even lower. The smaller the time slice (dx), the more representative the calculation is of the true daily average.
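
A minimal sketch of the dx point, using an invented asymmetric daily profile with a fast ripple (the asymmetry biases the mid-range estimate, and the ripple aliases into coarse hourly slices):

    import math

    # The finer the time slice dx, the closer the numerical average is to
    # the true daily mean. Synthetic, deliberately asymmetric profile.
    def temp(t_days):
        return (20 + 8 * math.sin(2 * math.pi * t_days)           # daily cycle
                  + 3 * math.sin(4 * math.pi * t_days + 1.0)      # asymmetry
                  + 0.5 * math.sin(48 * math.pi * t_days + 1.0))  # fast ripple

    def mean_with_dx(n_slices):
        return sum(temp(i / n_slices) for i in range(n_slices)) / n_slices

    samples = [temp(i / 1440) for i in range(1440)]
    midrange = (max(samples) + min(samples)) / 2
    print(round(mean_with_dx(1440), 3))  # dx = 1 minute: 20.0, the true mean
    print(round(mean_with_dx(24), 3))    # dx = 1 hour: the ripple aliases in
    print(round(midrange, 3))            # (Tmax+Tmin)/2: biased by the asymmetry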

Of course, all of this is predicated on being able to convince people that numerical operations on temperature, including averages, or even just “spot” temperature measurements themselves (since they are actually averages), are meaningful and useful, despite what many on WUWT think.

Reply to  bdgwx
May 17, 2023 6:50 pm

Like it or not, an integration will provide a much better and more accurate determination of the temperature curve throughout a day and night. Once the area under the curve is determined a true midrange value will also be calculable.

Good grief, home decorators use laser tapes to get distances. Measurement technology has advanced so far that we can measure gravitational waves. Yet here are climate scientists saying new technology must emulate 1800s LIG technology and that we must continue with Tmax, Tmin, and (Tmax + Tmin)/2.

When was the last paper you saw that discussed the protocols for temperature going forward? I have never seen one. Everything is still “let’s do averages of unrepresentative measurements” so climate scientists can pretend they are doing technically significant work.

At least JM is trying to show that there are significant differences, which should be acknowledged.

Reply to  Jennifer Marohasy
May 17, 2023 3:38 pm

Numerical integration gives the area under the curve: periodic samples are collected, multiplied by the time between samples, and then added together.

Reply to  Jennifer Marohasy
May 18, 2023 5:42 am

Measuring day degrees, back in the early 1980s, was central to everything I did, first working in cotton pest management on the Darling Downs in Queensland and later as a field biologist in Madagascar.”

If you were doing “day-degrees” then you were doing some form of integration. Integration is merely finding the area under the temperature curve: ∫ T(t) dt, with t1 and t2 (in seconds) as the integration limits. The integral is temperature multiplied by time, i.e. “degree-days”.

Agricultural science and HVAC engineering have been using degree-days for a long time. The method for calculating them has changed drastically, from using (Tmax+Tmin)/2 as a crude measure to actually integrating finer data intervals (e.g. 2-minute data).

I would recommend perusing http://www.degreedays.net to see what they do. Many HVAC engineers have moved to using their data.

As I’m sure you are aware, the use of degree-days removes some of the problems with temperature being a time series. Using the area under the temperature curve rather minimises the importance of the details of the curve itself: you can compare areas without having to worry about what caused them. For example, you can add 30 degree-day values in a month, divide by 30, and come up with an average degree-day value. You don’t even have to worry about the seasonal differences in temperature between March and August; their degree-day values will indicate the differences.
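
A minimal sketch of the contrast between the crude (Tmax+Tmin)/2 degree-day method and the integration method, assuming an 18 C base (a common HVAC convention) and invented two-minute data:

    import math

    # Crude (Tmax+Tmin)/2 degree-days versus integration of fine samples.
    # Base temperature and sample data are assumptions for illustration.
    BASE = 18.0

    def degree_days_integrated(samples, interval_days):
        # sum max(T - base, 0) * dt over the day (cooling-degree-day style)
        return sum(max(t - BASE, 0.0) * interval_days for t in samples)

    def degree_days_crude(tmax, tmin):
        return max((tmax + tmin) / 2.0 - BASE, 0.0)

    samples = [18 + 7 * math.sin(2 * math.pi * i / 720) for i in range(720)]  # 2-min data
    print(round(degree_days_integrated(samples, 1 / 720), 2))      # ~2.23: warm hours counted
    print(round(degree_days_crude(max(samples), min(samples)), 2)) # 0.0: the crude method misses them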

Of course, this still doesn’t fix the problem that temperature is a poor proxy for climate. Enthalpy is the proper measurement: a combination of temperature, humidity, pressure, etc. It’s why the climates of Las Vegas and Miami are different even when the temperatures are similar. Trying to find a “global average degree-day” would have the same problems as the “global average temperature”.

Nor does any of this eliminate the problems of measurement uncertainty due to micro-climate differences among stations, systematic biases in the measurement devices, or the fact that uncertainties, like variances, don’t cancel; they always add. Something climate science seems unable to grasp in its statistical analyses.

Reply to  Tim Gorman
May 18, 2023 12:22 pm

Thank you. What a great website: https://www.degreedays.net/calculation

So much to learn. And yet there is nothing new under the sun. :-).

Reply to  Jennifer Marohasy
May 18, 2023 1:00 pm

The following is from the web site you referenced.

“Some other sources (particularly generalist weather-data providers for whom degree days are not a focus) are still using outdated approximation methods that either simplistically pretend that within-day temperature variations don’t matter, or they try to estimate them from daily average/maximum/minimum temperatures. We discuss these approximation methods in more detail further below, but, in a nutshell: the Integration Method is best because it uses detailed within-day temperature readings to account for within-day temperature variations accurately.”

“particularly generalist weather-data providers for whom degree days are not a focus) are still using outdated approximation methods”

That is the truth and some folks should take it to heart.

ferdberple
May 18, 2023 6:15 am

You also make mention of ‘integration’.
======
A method for finding the area under a curve (the sum of the samples multiplied by the sample spacing). Divided by the total interval, it yields the arithmetic average (mean).

I can still remember sitting in math class, seeing how Newton solved the problem and recognizing that the year we had spent learning graphing was wasted. Newton had come up with a much easier solution.

Much like mercury thermometers: climate science is trying to convert Newton’s calculus back to Pythagoras.

Reply to  ferdberple
May 18, 2023 9:00 am

This is why the “average” daily temp is not the mid-range value.

ferdberple
May 18, 2023 6:28 am

Recreating a mercury thermometer with modern equipment is, to my mind, highly dependent on the rate of change of external temperatures.

It seems a humbug that sampling at 1-minute intervals would coincidentally yield the correct solution universally, regardless of location.

The minute is simply a human invention, an artifact of the rotational speed of the earth and a numerical system handed down from the Sumerians/Babylonians. It seems hopelessly absurd to argue that it is also the correct solution to the equation of mercury thermometers.

Reply to  ferdberple
May 18, 2023 12:18 pm

Memory is not what it used to be; it’s pretty cheap. I would set up a protocol like this: store 1/10th-second samples for 5 years, then average to 1-second values and discard the 1/10th-second data. Ten years later, average to 1-minute values and discard the 1-second data. Ten years after that, average to 1-hour values and discard the 1-minute data. Store the hourly data forever.
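
The decimation step is the same operation at every stage, just with a different factor. A minimal sketch under the assumptions above (the factors come from the proposed protocol; the data are invented):

    # Progressive decimation: replace each block of `factor` samples with
    # its mean. Factors follow the protocol proposed above; data invented.
    def decimate(samples, factor):
        return [sum(samples[i:i + factor]) / factor
                for i in range(0, len(samples) - factor + 1, factor)]

    tenths = [20.0 + 0.01 * (i % 600) for i in range(36000)]  # one hour of 0.1 s data
    seconds = decimate(tenths, 10)   # after 5 years: keep 1 s means
    minutes = decimate(seconds, 60)  # 10 years later: keep 1 min means
    hours = decimate(minutes, 60)    # 10 years after that: hourly means, kept forever
    print(len(seconds), len(minutes), len(hours))  # 3600 60 1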

old cocky
Reply to  Jim Gorman
May 18, 2023 2:57 pm

In 5 years’ time, storage costs will have decreased enough that you’ll keep all the old data anyway.
In 10 years, you’ll be kicking yourself for not having recorded 10 ms instead of 100 ms.

Tape backups are far less common than they used to be because RAID arrays have come down in price so much, and mag disk has now been relegated to 3rd-tier storage behind SSD and 1st-level cache.

Reply to  old cocky
May 19, 2023 1:26 am

But, dear old cocky, AWS are not hard-wired in; they report in. They don’t relay 1-second data to a central DB. It appears that, except for special projects, AWS only send the within-minute max and min, and the end-of-minute values, all attenuated. (Who keeps their garbage these days anyway?)

(Fishing Monday. My woodies mate assures me low tide = less water; thus more fish, beer, conversation, something/anything better!)

All the best,

Bill

old cocky
Reply to  Bill Johnston
May 19, 2023 1:52 am

Ah, Bill, we can but lament lost opportunities in data collection.

Your upcoming fishing Monday should be very pleasant. Catching some fish may be somewhat of a bonus.