Hyping Daily Maximum Temperatures (Part 1)

From Jennifer Marohasy’s blog

Jennifer Marohasy

There is more than one way to ruin a perfectly good historical temperature record. The Australian Bureau of Meteorology achieves this in several ways: primarily through industrial-scale remodelling (also known as homogenisation, which strips away the natural warming and cooling cycles that correspond with periods of drought and flooding); by scratching historical hottest-day records; by setting limits on how cold a temperature can now be recorded; and by replacing mercury thermometers with temperature probes that are purpose-built, as far as I can tell, to record hotter for the same weather.

The Australian Bureau of Meteorology (BoM) regularly claims new record hot days, and Australian scientists report that heat records are now 12 times more likely than cold ones. But how reliable – how verifiable – are the new records?

I have been trying for five years to verify the claim that 23 September 2017 at Mildura was the hottest September day ever recorded in Victoria. According to media reporting at the time, it was the hottest September day all the way back to 1889, when records first began. Except that back then, in September 1889, maximum temperatures were recorded at Mildura with a mercury thermometer. Now they are recorded with a temperature probe that is more sensitive to fluctuations in temperature and can thus potentially record warmer for the same weather.

"In the absence of any other influences, an instrument with a faster response time [temperature probe] will tend to record higher maximum and lower minimum temperatures than an instrument with a slower response time [mercury thermometer]. This is most clearly manifested as an increase in the mean diurnal range. At most locations, particularly in arid regions, it will also result in a slight increase in mean temperatures, as short-term fluctuations of temperature are generally larger during the day than overnight." Research Report No. 032, by Blair Trewin, BoM, October 2018, page 21.

To standardise recordings from temperature probes with mercury thermometers, one-second readings from probes are normally averaged over one minute – or batches of ten-second readings are averaged and then averaged again over one minute. That is the world-wide standard to ensure recordings from temperature probes are comparable with recordings from mercury thermometers. But the Australian Bureau of Meteorology does not do this; instead it takes one-second instantaneous readings and enters the highest of these one-second spot readings for any given 24-hour period as the official maximum temperature for that day.
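
To see the difference, here is a minimal sketch (Python, with made-up numbers, not Bureau data) comparing a daily maximum computed as the highest one-minute average of one-second samples, per the standard described above, with the highest single one-second spot reading:

import math
import random

random.seed(42)

# Synthetic one-second temperatures for one day: a smooth diurnal
# curve plus short-lived random fluctuations (illustrative only).
def diurnal(t):
    return 22 + 8 * math.sin(math.pi * (t - 6 * 3600) / (12 * 3600))

readings = [diurnal(t) + random.gauss(0, 0.3) for t in range(24 * 3600)]

# Standard-style daily maximum: the highest one-minute average.
minute_means = [sum(readings[i:i + 60]) / 60
                for i in range(0, len(readings), 60)]
averaged_max = max(minute_means)

# Spot-reading maximum: the single highest one-second value of the day.
spot_max = max(readings)

print(f"max of one-minute averages: {averaged_max:.2f} C")
print(f"highest one-second reading: {spot_max:.2f} C")

With these made-up noise levels the spot maximum comes out roughly a degree higher than the averaged maximum, purely from short-lived fluctuations; by construction it can never be lower.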

There is an easy way to test this.

Many Australians check the 'Latest Weather Observations' for their local weather station online at the Bureau website, but few realise that the values they see there represent the last one-second recording for any given half-hour period.

For example, for the 23rd September 2017, the highest value for that day as shown on the observations page for Mildura was 37.2 °C, recorded at 12:00pm.

Yet 37.7 °C was entered into the data archive as the official maximum temperature for 23rd September 2017 at Mildura.

This represents a discrepancy of 0.5 °C.

This is because the Bureau uses the highest one-second reading as the maximum temperature for that day, while the last (not the highest or averaged) one-second reading for any 30-minute period is displayed on the 'Latest Weather Observations' page.

There is absolutely no averaging. None at all, in direct contravention of international norms and standards.

This is confusing, most unconventional, and in fact ridiculous.

Consider the temperatures as recorded at Sydney’s Observatory Hill automatic weather station just yesterday, as another example.

Australia is a land of droughts and flooding rains, and so relatively hot years like 2017 tend to be followed by cooler years, including the last three years. Until yesterday, 18th January 2023, Sydney apparently had its longest spell of days with temperatures below 30 °C in 140 years. I watched as Chris Kenny on Sky Television last night made reference to this and showed the 'Latest Weather Observations for Sydney – Observatory Hill' with the temperature showing 30.1 °C at 2.30pm.

I went online to the Bureau's data archive to see the maximum temperature officially recorded for this weather station for 18th January 2023. As of 9 am this morning, a different value, 30.2 °C, had been entered, giving a discrepancy of 0.1 °C.

There is a discrepancy because the value on the 'latest observations' page is the last one-second reading for that 30-minute period, while the value entered into the permanent archive is the highest one-second reading for the entire day.

The World Meteorological Organisation (WMO) provides a clear definition of daily maximum temperature. This temperature can be read directly from a mercury thermometer, but when using a temperature probe 'instantaneous' values must be averaged over one to ten minutes.
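
Written out for a one-minute averaging window (a ten-minute window works the same way), with T_{i,s} the one-second sample s within minute i of the day:

\bar{T}_i = \frac{1}{60}\sum_{s=1}^{60} T_{i,s}, \qquad T_{\max} = \max_i \bar{T}_i

A spot-reading maximum is instead \max_{i,s} T_{i,s}, which can never be lower than T_{\max} and, on a fluctuating signal, will generally be higher.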

Back to Mildura and in summary: neither the one-second value of 37.2 °C shown for 12:00 pm on 23rd September 2017, nor the different one-second value of 37.7 °C recorded as the daily maximum temperature at Mildura on that same day, is compliant with any international standard, and therefore neither is verifiable or standardisable against the temperatures officially recorded at Mildura using a mercury thermometer from January 1889 until 1 November 1996. It is thus disingenuous to claim a new record hot day back to 1889 for 23rd September 2017, because the temperature on that day was measured with a different type of recording instrument (a temperature probe) and in a non-compliant way (no averaging).

Theoretically it is possible to know how the values of 37.2 °C and 37.7 °C compare with a mercury thermometer for that location at that time of year, as Mildura is meant to be a location with parallel measurements. That is, measurements from both a probe and a mercury thermometer in the same automatic weather station, recorded on what are known as A8 Forms.

I have been seeking this information for 34 different locations as part of a Freedom of Information Request, so far denied by the Bureau. An appeal against this is being heard in the Administrative Appeals Tribunal on 3rd February 2023 in Brisbane.

The current probe, the third for Mildura, was installed on 27th June 2012. As I will show in a future blog post in this series, this probe has a very different recording profile relative to the previous probes and the mercury thermometer.

There is theoretically parallel data (temperatures recorded from both probes and mercury) for Mildura for the period from 1 January 1989 until 28 January 2015, and many scanned A8 Forms were provided to me following the intervention of Minister Josh Frydenberg in November 2017. But the Bureau has so far withheld the A8 Forms pertaining to the entire month of September 2012. This is the only September for which there are parallel recordings with the same probe used to record the 23rd September 2017 claimed record hot day and a mercury thermometer.

Mildura has one of the longest temperature records for anywhere in the Murray-Darling Basin region. The official data for this region shows an increase in the number of warmer years after the temperature probes became the official recording instrument, beginning on 1 November 1996.

******
Steve Case
January 19, 2023 6:11 pm

Cherry picking by any other name is still cherry picking.

AndyHce
Reply to  Steve Case
January 19, 2023 7:31 pm

Think of the pie possibilities.

Tom Halla
January 19, 2023 6:16 pm

Anyone not releasing records must be assumed to be cooking them. If they had a strong case, they would want as much evidence out as possible.

observa
Reply to  Tom Halla
January 19, 2023 8:15 pm

We the omniscient ones decide the timing and what’s classified docs and what’s not.

Mr.
January 19, 2023 7:04 pm

What's that I see on the edge of those scales?
Why, it’s Blair’s thumb!

Or maybe it’s Deidre Chambers?
What a coincidence!

SMS
January 19, 2023 8:20 pm

The temperature has to be exaggerated using one of two possibilities. Either use thermometers that are biased to show higher temperatures, or make adjustments. Put a one-second thermometer at an airport and wait for an instantaneous blast of turbine wash to pop the temperature up.

We are told that the last decade was the hottest on record, but when I look at the high temperature records by US state, I find that the 1930s were the hottest. How long will it be before the high temperature records for these states are excised?

michael hart
Reply to  SMS
January 20, 2023 6:20 pm

“The temperature has to be exaggerated using one of two possibilities.”

No. There are at least three. And probably more besides.

The point of the author’s article is highlighting another one.
They are not really “adjustments” in the normal sense of the word. And the thermometers do not need to be “biased”. The thermometers read what they read.

The fluctuations of a mercury thermometer are dampened by the thermal inertia of the instrument. The instrument is effectively averaging over a period of seconds or, more likely, minutes. Possibly many minutes. This is due to physical limitations.

A more modern temperature probe will have less thermal inertia and can be sampled more rapidly. So at the time of the day of peak heat, which may be controlled by a sudden gust of wind or lack thereof, the modern temperature probe will capture a sudden “record” high which the mercury thermometer would not capture.
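
A minimal first-order lag model (Python, with synthetic data and assumed time constants) makes this concrete: the same 30-second warm gust produces a clearly higher recorded maximum on the low-inertia sensor.

def first_order_lag(signal, tau, dt=1.0):
    # Discrete first-order response: each second the reading relaxes
    # toward the true air temperature with time constant tau (seconds).
    alpha = dt / (tau + dt)
    out, state = [], signal[0]
    for x in signal:
        state += alpha * (x - state)
        out.append(state)
    return out

# Steady 30 C air with a 30-second gust to 31.5 C (synthetic, 1 s steps).
air = [30.0] * 600
for t in range(300, 330):
    air[t] = 31.5

fast = first_order_lag(air, tau=5.0)   # low-inertia probe (assumed tau)
slow = first_order_lag(air, tau=60.0)  # mercury-like inertia (assumed tau)
print(f"fast sensor max: {max(fast):.2f} C")  # ~31.5 C
print(f"slow sensor max: {max(slow):.2f} C")  # ~30.6 C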

This encapsulates many of the failings with reports of "records". They often weren't recorded before because:
1) The equipment was not capable of recording them at such a high frequency, instead averaging them out.
2) The observers weren’t hunting for them.

Seek, and you shall find.

Chris Hanley
January 19, 2023 9:57 pm

That the BOM relishes announcing these maximum temperature records to convince people that the country must convert to 100% 'renewables' at enormous cost, so that your average stinking hot day might be 0.5 °C cooler as a result, is of course ridiculous, not least because the Chinese, Indians, Africans, South Americans etc. don't give a damn.
Besides, as Dr Marohasy says, it is impossible to compare electronic temperature readings with mercury or alcohol thermometer readings because: "these thermometers are reasonably accurate, to within a degree or two", whereas "today, for purposes of safety, accuracy and convenience, air temperature is most often measured using electronic thermometers, which are accurate down to a fraction of a degree" (National Institute of Standards and Technology, US Department of Commerce).

Scarecrow Repair
Reply to  Chris Hanley
January 19, 2023 10:16 pm

Thanks for the link — it answers a little of my question following your post.

Jennifer Marohasy
Reply to  Chris Hanley
January 19, 2023 10:31 pm

I’ve a good amount of parallel data from Mildura, that I haven’t published yet. But I have analysed it. The one mercury thermometer was in place from 1996 until 2013. Over that same period three different probes were used to record official temperatures at Mildura in the same Stevenson screen.

The first probe recorded a statistically significant 0.2 degrees too cool, that is, for the period 1 November 1996 to 2 May 2000.

The second probe started recording too warm but then drifted relative to the mercury, and by 27 June 2012 was recording too cool. So that second probe, while able to record to a fraction of a degree, drifted terribly over the course of those few years.

Then the third probe recorded too warm relative to the thermometer. But the person tasked with writing the temperature down from the mercury when it was really hot just didn't write anything down, and so this data is not normally distributed. Then they pulled the mercury, with no recordings at all by January 2015. Often the probe was recording 0.4 degrees warmer than the mercury for the same weather. I've written a bit about this here: https://jennifermarohasy.com/2018/02/bom-blast-dubious-record-hot-day/

Tim Gorman
Reply to  Chris Hanley
January 20, 2023 4:56 am

NIST: “Today, for purposes of safety, accuracy and convenience, air temperature is most often measured using electronic thermometers, which are accurate down to a fraction of a degree.”

A perfect example of the idiocy of government bureaucrats today, even at NIST!

They simply don’t understand the difference between ACCURACY and PRECISION!

Precision is *NOT* accuracy. Whoever wrote the quote above needs to write that out on a blackboard 10,000 times!

Hubbard/Lin found in a study in 2004 that the minimum uncertainty of both the MMTS and AWS measurement stations was +/- 0.2C. This was with annual replacement of the sensor and recalibration. The measurement stations in the USCRN network are worse.

The precision capability of the sensor is *NOT* the accuracy of the measuring station. It is not even the accuracy of the sensor itself.

Jim Gorman
Reply to  Tim Gorman
January 20, 2023 7:32 am

Some NWS/NOAA references.

[images attached]

Scarecrow Repair
January 19, 2023 10:12 pm

Time for another ignorant question. I assume digital probes are monitored by computers. How were temperatures recorded with mercury thermometers? Did someone go look at them once every half hour? Was any kind of automatic recording ever possible? How precise were these thermometers? And was the daily low/high just selected from these half hour readings? Or did they have min/max mercury thermometers?

My ignorant intuition says a human went out every half hour to look and write down the figure, and the thermometers were more amenable to recording tenths of a degree than the typical home thermometers of my childhood. I suspect the human didn't always record the temperature exactly on schedule.

Jennifer Marohasy
Reply to  Scarecrow Repair
January 19, 2023 10:20 pm

Hi Scarecrow, A mercury thermometer goes up and up until it is reset. So, what used to happen is the official recorder would write down the maximum temperature at 9am each morning and then reset the instrument. That 9am recording became the maximum temperature for the day before, because the highest temperature is always in the afternoon. I used to keep a weather station and records when I worked as a field biologist in Madagascar. :-).

Scarecrow Repair
Reply to  Jennifer Marohasy
January 19, 2023 10:38 pm

Thanks. Surprisingly simple 🙂 Did you record half hour readings too?

Jennifer Marohasy
Reply to  Scarecrow Repair
January 19, 2023 11:01 pm

I was often running experiments with insects, and then I would set up a thermo-hydrography and run that for some days. It recorded temperatures continuously and looked like this: https://www.wetec.com.sg/our-products/environment/product-listing/1646q-thermo-hygrograph

Bill Johnston
Reply to  Jennifer Marohasy
January 20, 2023 11:42 am

Hi Jen,

Probably a thermohygrograph. Quite inaccurate: it measures the diurnal pattern but is poor at estimating specific values such as max and min, the reason being that friction between the nib of the pen and the paper on the drum (usually one revolution per week) causes quite a long response time to small pulses of heat.

We operated a thermograph (which is more accurate because the chart was 'higher', thus the pen had a longer traverse) as well as a thermohygrograph, like the one in the picture. I also ran a separate thermograph in the glasshouse – quite useful for general monitoring but no use for specific values. In a paper or report one could say "temperature was maintained between 20 and 30 degC", for example.

All the best,

Bill Johnston
http://www.bomwatch.com.au

Jennifer Marohasy
Reply to  Bill Johnston
January 20, 2023 12:01 pm

Hi Bill,

Different instruments, different purposes.

I was using the max-min thermometer to record daily max and min temperatures. And I reset that each morning.

Scarecrow was asking if I recorded temperatures each half hour.

I didn't. But, as I replied, I did have a thermohygrograph that was running continuously to calculate approximations for 'degree days' as understood in entomology. It was accurate enough for that purpose.

Cheers,

Bill Johnston
Reply to  Jennifer Marohasy
January 20, 2023 2:44 pm

Thanks Jen,

It is important to consider uncertainty. I don't know what instrument you used, but a standard Australian degC met thermometer has an uncertainty of +/- 0.25 degC – half the interval scale (which is the 0.5 deg index between whole degree marks). Many other thermometers cannot be read to anything like an "accuracy" of 0.2 degC (here is an example: https://www.ebay.com.au/itm/163986867995?chn=ps&_ul=AU&norover=1&mkevt=1&mkrid=705-139619-5960-0&mkcid=2&mkscid=101&itemid=163986867995&targetid=&device=c&mktype=pla&googleloc=9071758&poi=&campaignid=15791083372&mkgroupid=&rlsatarget=&abcId=9300816&merchantid=116550989). Hover and get a close look, then try and read the index bar within the column; it is very difficult to be 'accurate'.

The 'accuracy' of comparing two values is the SUM of the uncertainties (for two met-thermometer observations, +/- 0.5 degC). The Bureau reckons the uncertainty of their PRT probes is 0.3 degC (0.25 rounds up to 0.3). So the decimal places can be misleading (i.e., "accuracy" that is implied rather than accuracy that is attainable). As I said above, 0.25 is also the instrument uncertainty of degC met-thermometers.
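
For anyone checking the arithmetic, here is a small sketch (Python) of the two common conventions for combining two instrument uncertainties: the straight sum used above, which is a worst-case bound, and the root-sum-of-squares often used when the errors are independent.

import math

u1 = 0.25  # instrument uncertainty of thermometer 1, degC (from above)
u2 = 0.25  # instrument uncertainty of thermometer 2, degC

worst_case = u1 + u2                     # straight sum: +/- 0.50 degC
independent = math.sqrt(u1**2 + u2**2)   # root-sum-square: +/- 0.35 degC

print(f"worst case (sum):  +/- {worst_case:.2f} degC")
print(f"independent (RSS): +/- {independent:.2f} degC")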

The accuracy of an observation may be less than the accuracy of the instrument – observers may record the wetted perimeter of the instrument, for example, and there is the problem of parallax error, which occurs with manually observed Tmax and Tmin. Some observers consistently round up (or down), some only observe to the nearest whole degree, etc., which makes this whole issue of metrology difficult to resolve on a dataset-by-dataset basis, especially for historic datasets like Marble Bar, for example. Interestingly, a Fahrenheit thermometer is more accurate than a Celsius one – there are more whole-degree indices between freezing and boiling!

Maybe I will do a small report on this and make it available for reference on http://www.bomwatch.com.au

Kind regards,

Bill

Loren Wilson
Reply to  Bill Johnston
January 20, 2023 6:32 pm

There is a difference between the readability of the scale and the accuracy. For example, the readability of a mercury thermometer may be 1/2 of the smallest increment of 0.5°C. However, if the mercury thermometer is reading high by 2°C, then your data are precise (relatively) but precisely wrong by 8 times the reading precision. As mentioned above, I was surprised by well-educated colleagues who thought precision and accuracy were the same thing.

I have used mostly thermocouples and platinum resistance thermometers in my career, along with the odd thermistor and mercury-in-glass thermometer. Thermocouples require expensive voltmeters to read them to better than ±1°C. Thermocouples drift, so frequent re-zero checks at ice-water temperature are required. By the way, you have to calibrate the entire system, not just the sensor. Your temperature sensor might be fine but the voltmeter could be well out of calibration. Thermistors are much more sensitive sensors but also drift significantly. Platinum resistance thermometers (PRT) are the workhorse of the modern lab but require expensive electronics to achieve ±0.1°C accuracy. We used inexpensive 4-wire PRTs for lab work hooked to nice Fluke-Hart 1502a electronics. PRTs do not like vibration and our equipment shook, so we used a less expensive PRT. We had a secondary reference PRT calibrated to ±0.005°C that served as our primary standard for calibrating the industrial-grade probes. It is challenging to achieve true accuracy of better than ±0.05°C for less than about $15,000 per set-up.

Bill Johnston
Reply to  Loren Wilson
January 20, 2023 7:37 pm

Thanks Loren,

I missed this earlier. Yes we used thermocouples attached to gradient germination plates to map the temperature of the surface under steady state, which is a bit of a dream. As I recall, they needed to be ‘read’ using an AC meter, otherwise they would develop polarity and produce useless random numbers.

Breaks in a mercury met thermometer caused by vibration can usually be seen by eye, and are usually only a problem for max thermometers, because they lie almost flat in the screen. Hardly anyone knows that standard practice each morning was to record the 'reset' temperature. This value usually agreed pretty well with the 9am dry-bulb, but I don't think the BoM uses the reset as a QA measure for the instrument.

All the best,

Bill

Gunga Din
Reply to  Jennifer Marohasy
January 20, 2023 10:45 am

Here’s some more info on liquid thermometers used to record the Max and Min temps for a time period (usually 24 hours) without taking frequent readings.
https://en.wikipedia.org/wiki/Six%27s_thermometer

siliggy
Reply to  Gunga Din
January 22, 2023 1:23 pm

I like your comment re electrical noise. From 2017 you will find many such comments from me on Jennifer's blog. Where Jennifer quoted me (Lance Pidgeon) above, I should have noted that the approx 30-second response time is the in-glass thermometer time constant stated as standard in the old WMO 315. "Response times" are sometimes given as 5, 6 or even 7 time constants.
Re the old thermometers used in Australia, this from 1914:
"Two thermometers are placed in the screen, a maximum, an instrument with a constriction in the bore so that although the temperature may rise in accordance with the heat of the day the mercury cannot run back into the bulb; the reading at any time therefore represents the highest point the temperature has reached since last set. The other instrument is a minimum thermometer and it registers the lowest point to which the temperature has fallen during the period since the instrument was last set. This is accomplished by placing a small indicator in the spirit column. When the temperature falls the needle is drawn down towards the bulb, and the indicator is left stranded at the point of lowest temperature."
https://trove.nla.gov.au/newspaper/article/187462958

Chris Nisbet
January 19, 2023 10:38 pm

Are there any valid reasons why temperature data shouldn't be made publicly available on the internet 100% of the time?

Right-Handed Shark
Reply to  Chris Nisbet
January 20, 2023 12:42 am

Possibly because that would expose an Inconvenient Truth.. ?

Leo Smith
Reply to  Chris Nisbet
January 20, 2023 1:21 am

Are there any valid reasons why electricity generation data, as well as the fossil fuel being burned to generate it, shouldn't be made publicly available on the internet 100% of the time?

Because that would expose an Inconvenient Truth.. ?

Eng_Ian
Reply to  Leo Smith
January 20, 2023 12:46 pm

Here’s one example.
https://www.aemo.com.au/energy-systems/electricity/national-electricity-market-nem/data-nem/data-dashboard-nem

Click on the fuel mix tab. I guess this isn’t too inconvenient.

Loren Wilson
Reply to  Leo Smith
January 22, 2023 7:19 pm

ERCOT (Grid operator for most of Texas) does. It shows the very low dependability of wind-powered generation and the quick response of natural gas generators to make up for it.

tygrus
Reply to  Chris Nisbet
January 20, 2023 2:14 am

Possible reasons to keep data only available for paying customers or selected academics:

  1. commercial value because other people can use it & charge for weather apps/service;
  2. large datasets made public greatly increases load on servers, storage & internet bandwidth used which all increase running costs & higher risk of DoS attacks;
  3. if others want to publish data, the bureau would like to check & re-run the analysis, otherwise people could make unnoticed errors or commit deliberate fraud but the bureau gets quoted & blamed;
  4. the bureau want to be the 1st to publish selected reports & articles as the source, authority & academic credit;
  5. need time to (legitimately) find problems, validate, write comments to attach & adjust data;
  6. hide dubious data processing, and/or using other sources to ‘adjust’ or fill missing data.
Eng_Ian
Reply to  tygrus
January 20, 2023 12:50 pm

Since the BOM is fully funded by the taxpayer, then the data that they measure should be freely available.

If it’s okay to take public money to do the measurements, then it’s okay to share the data with the people who pay for it.

Bill Johnston
Reply to  Chris Nisbet
January 20, 2023 12:23 pm

Handling high-frequency data can be difficult. Files can be very large (multiply up 1-second readings into, say, a 5-year dataset and you will see what I mean).

I process some very large datasets, tide-gauge data and sea-surface temperature data for example. I use R because it is very efficient at calculating meaningful numbers – averages, extremes and various daily statistics for instance, but tracing back to a particular value is quite tedious, and then working out if the value is ‘real’ or a spike can take a lot of time.

The Bureau does undertake post-acquisition processing of data, before posting it as “the” max or min for a day. Their processes have been explained in various publications available on their web-site.

In relation to the 1-second readings, I made a reply to Mr in my post last week
https://wattsupwiththat.com/2023/01/15/the-on-going-case-for-abandoning-homogenization-of-australian-temperature-data/ (Scroll-down to January 15, 11.29 PM.) Having studied the two cited publications, I tend to lean in the BoM’s favor in this case.

An additional problem with electronic instruments is that the more "accurate" they are (i.e., the more decimal places they register), the more 'noise' they convert into 'signal' – they sample more variance. This does not necessarily mean they will be biased high. However, some noise-reduction process may need to be applied before data can be regarded as 'sound'. I've seen this in tide-gauge data. Chart recorders don't sample ripples, but sonic and radar gauges do, for example.

Having undertaken regular weather observations, and also worked directly with temperature probes and automatic weather stations, I can assure you they suffer similar variance issues. Namely, that a passing ‘parcel’ of air measured very accurately by an instrument, may not be representative of the airmass being monitored.

All the best,

Dr Bill Johnston

http://www.bomwatch.com.au

Jennifer Marohasy
Reply to  Bill Johnston
January 20, 2023 1:32 pm

Hi Bill,

The BOM are unambiguous, they do not numerically average. To suggest otherwise is to mislead perhaps because you suffer from ‘oppositional defiance disorder’. 🙂

Can you explain what the values in the above two examples represent? One-second instantaneous readings, thus the discrepancy of 0.5 °C for Mildura for 23rd September 2017.

Further comment from their director Andrew Johnson in an email to me at that time:

“It is also not true that a change has occurred at the Bureau from averaging one-second readings over one minute. The Bureau uses purpose designed probes that ensure every reading is an integration of 40-80 seconds. The probes were specifically designed to have a long response time to mirror the behaviour of mercury in glass, making numerical averaging unnecessary.
Regards
Dr Andrew Johnson FTSE FAICD
CEO and Director of Meteorology
Bureau of Meteorology”.

Except that they were originally designed to integrate over 18 seconds, there are no published specifications, and they won't provide me with any.

Jennifer Marohasy
Reply to  Jennifer Marohasy
January 20, 2023 1:39 pm

PS To be clear, they do not numerically average the air temperature data; the BoM do numerically average the tidal data, but that is NOT the topic of this post.

Bill Johnston
Reply to  Jennifer Marohasy
January 20, 2023 3:24 pm

Thanks Jen,

I don’t think I’m defying anyone, and except that my knee is a bit wonky, last time I looked I saw no particular disorder. Do you do knees?

As I explained to Mr at the post I referred to, the instrument, that is the probe itself, has a time constant that exceeds the discrete sampling time, which is 1 minute. Therefore (they argue in the papers that I referred to) the 1-second reading at the end of each discrete minute is not an instantaneous value but the culmination/integration of the whole minute-long process. Bear in mind also that an 18-second integration is shorter (more spikey) than the 40 to 80 seconds referred to by Johnson.

Johnson says above that “The Bureau uses purpose designed probes that ensure every [i.e., 1-second end of minute] reading is an integration of 40-80 seconds”. I can understand that and he has two published papers to back him up, including the most recent where they collected every 1-second reading for every minute, at two weather stations. (I was not so impressed with the earlier paper; what were your thoughts?)

Maybe you can contact the authors, request the 1-second data, analyse them and see if you arrive at a different conclusion.

As for explaining the values: except that they use post-acquisition processing before they post data for the day, I don't know either. (It would be interesting to find out as well.)

But given the general uncertainty considerations, except for their fetish with 'records', it probably makes little difference. (Have you been there and seen the new site at Sydney Observatory?)

(The sampling routine for tide-gauge data is done by the logger and is quite different; mainly because modern sonic/radar sensors don’t have ‘time constants’.)

All the best,

Bill

siliggy
Reply to  Bill Johnston
January 21, 2023 9:30 am

Jennifer. You do seem to have attracted some strange people. This Dr Bill Johnston is so very loyal to the BoM that he is willing to claim that 40 seconds is longer than a minute for them.

Bill Johnston
Reply to  siliggy
January 21, 2023 9:32 pm

As you know siliggy, I am an experienced independent scientist with no “loyalty” as you put it to the BoM.

I also know you are a pretty smart guy. However, it is not hard to tell the difference between Jennifer's idea of "instantaneous", Johnson's explanation of a 30-80 second integration, and your silly notion that I am somehow confusing 40 seconds with 1 minute.

A good starting point for further sensible debate would be for you and Jennifer to read the papers I referred to that justify Johnson’s reply.

All the best,

Bill

siliggy
Reply to  Bill Johnston
January 22, 2023 12:36 pm

Bill. You dodged the issue by name-calling ("silly") and suggesting reading something you yourself have clearly not understood. You now say, without citation, that the integration can be as short as 30 seconds. Why don't you try, in your own words, to explain how an integration of just 30 seconds can possibly represent an average for a full minute.

Bear in mind that this discussion of what the last value for the minute does proves you also clearly did not understand Jennifer's post or her more direct question to you. Your arguments are off topic and neither disprove nor prove anything that matters.

I note also you did not answer her question.

Tim Gorman
Reply to  Bill Johnston
January 23, 2023 4:56 am

The use of the word "integration" is questionable here. Adding thermal mass just lengthens the response time to temperature changes; it changes the rate of heat transfer (amount of heat transferred per unit time).

Integration implies you are considering the entire temperature curve, including the actual maximum temperature. Just lowering response time doesn’t really “integrate” anything. Since the temp curve at maximum is very close to being a sine wave, an integration would be just a cosine curve evaluated from time1 to time2.

The addition of thermal mass may work to mimic the response time of a LIG thermometer but it isn’t an “integration” of the temperature curve.

A nitpick for sure, but it's important to accurately describe what is happening.

Loren Wilson
Reply to  Bill Johnston
January 22, 2023 8:06 pm

Bill, let me paraphrase to see if I understand correctly. The thermometer is read for a second once a minute. If the BOM is correct in their statement that the time constant of the thermometer is in the neighborhood of one minute, then that second represents a running average of a minute of temperature response, lagged by about 30 seconds. If that is the correct understanding of how these temperature sensors operate, then this is a reasonable approach. There is no point in measuring more often because the signal cannot change fast enough to make any material difference in the result.

Now my comments: I run a test lab for a large fossil fuel equipment and services company. My test ovens contain samples with a thermal inertia of 10-40 liters of water contained by a layer of reinforced polyethylene about 20 mm thick. Heat transfer is by conduction through the polyethylene and then free convection in the water. This is a very slow heat transfer rate. The temperature is measured by a thermocouple (I don’t need much accuracy) and electronics to convert the signal from microvolts to temperature. The data logger reads several times per second (we can’t dictate the rate) and computes an average over a minute which is logged. There would be no difference between this approach and just measuring the temperature once a minute because our temperature can’t physically change more than the measurement and calibration uncertainty in one minute.

old cocky
Reply to  Loren Wilson
January 22, 2023 10:02 pm

Loren,

That’s my understanding of what Bill wrote as well.

siliggy
Reply to  old cocky
January 23, 2023 6:37 am

"That's my understanding of what Bill wrote as well." Yes, but what Bill wrote is from a very confused "scientist" from the wrong field, not an electronics expert at all. Note that a "sample" is a value taken at a single point in time. Instantaneous.
The measured value of a signal at a single point in time or space.”
https://www.digitizationguidelines.gov/term.php?term=samplesignal

It would be matched to what a mercury thermometer would do, via the thermal time constant approximation of integration, if the time constant was longer than a minute, but it is not. The sampling rate of one per minute is too slow. But it gets worse. There are electrical reasons for the filtering which can't be satisfied by thermal time-constant filtering. The time-constant filter needs to be after the linearisation circuitry in the chain of events to produce a linear average, or be a numerical average calculated after the analogue-to-digital sampling. All sixty samples need to be present, not just one from an unmodified fast pt100 probe.

siliggy
Reply to  Loren Wilson
January 23, 2023 5:25 am

The time constant is the lag and should only be counted once. What Bill has said and what the BoM have said are two entirely different and unrelated things. Check for yourself.
Have a look at what a thermal time constant actually is. https://www.littelfuse.com/technical-resources/technical-centers/temperature-sensors/thermistor-info/thermistor-terminology/thermal-time-constant.aspx
It is the lag time, tweaked to match that of a mercury thermometer. But this thermal exponential-decay approximation of integration is not a suitable substitute for an anti-aliasing filter, and is shorter than one minute. The BoM's Dr Johnson said there that it can be 40 seconds. 40 seconds is shorter than a minute. Jennifer's point is that Bill's time-wasting nuisance distraction here has nothing to do with the daily minimum and maximum sample timing. They are timed and chosen differently to the last instantaneous sample of the minute. The timing that Bill has confused is not relevant to them.
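
To put a rough number on this (an illustrative calculation, not a Bureau figure): a first-order sensor with time constant \tau passes a fluctuation at frequency f with gain

|H(f)| = \frac{1}{\sqrt{1 + (2\pi f \tau)^2}}

Sampling once per minute puts the Nyquist frequency at 1/120 Hz. With \tau = 40 s the gain there is still about 0.43, and at 1/60 Hz it is about 0.23, so sub-minute fluctuations are damped but far from removed before the once-a-minute sample is taken, which is the aliasing concern raised above.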

old cocky
Reply to  siliggy
January 23, 2023 1:39 pm

It’s quite possible that everybody has expressed themselves unclearly, leading to misunderstandings.

A number of the posters on this thread have questioned various aspects of the BoM’s reporting; not necessarily in the same areas.

Eric Vieira
January 20, 2023 1:18 am

This means, for example: one has a cloudy day, but for one minute the sun came through
and shone on the weather station. The temperature goes up, and then this higher value is given as "representative" for the whole day? This isn't science, this is absolutely ridiculous …

Eric Vieira
January 20, 2023 1:24 am

Thanks to the author for such an explicit graph (the first one): it really says it all without having to look at any numbers. After one specific time point, the hot days are “hotter” and the cold days are practically gone …

SteveG
January 20, 2023 1:39 am

"I have been seeking this information for 34 different locations as part of a Freedom of Information Request, so far denied by the Bureau. An appeal against this is being heard in the Administrative Appeals Tribunal on 3rd February 2023 in Brisbane."

————

Now why would the government bureau not want that temperature data in the public space?

Tom Abbott
Reply to  SteveG
January 20, 2023 5:56 am

That would be my question, too.

Hiding temperature data? Whatever for? I’m being sarcastic.

Richard Greene
January 20, 2023 3:06 am

The author carefully explained why climate bureaucrats can’t be trusted

Unfortunately even if historical temperature data were 100% accurate, we’d still be hearing predictions of climate doom

The predictions of doom are barely related to historical climate change trends. Because extrapolation of past global warming trends from 1975 to 2015 would not scare anyone. A climate boogeyman has to scare people — that’s the goal.

Tom Abbott
Reply to  Richard Greene
January 20, 2023 6:09 am

“The predictions of doom are barely related to historical climate change trends.”

You must be kidding!

The first thing an alarmist points at to “prove” their case is a bogus, bastardized Hockey Stick global temperature chart, and its “hotter and hotter and hotter” temperature profile, that climbs in parallel to CO2 increases.

All the claims of gloom and doom go back to the bogus temperature profile. Without the bastardized temperature record, the alarmists would have nothing to talk about or scare people with.

So the temperature record is of supreme importance when it comes to dispelling the human-caused climate change narrative.

The Temperature Data Mannipulators have created a false temperature profile they want all of us to live in. The historical, written temperature records put the lie to the "hotter and hotter and hotter" climate change claims and show that CO2 is a minor player with regard to the Earth's atmosphere.

If it was just as warm in the recent past as it is today, and it was, and that’s what the historical, written temperature record says, and more CO2 is in the air today than there was in the past, then logic should tell us that CO2 has had little effect over those years.

The climate alarmist meme can’t survive without a bogus, bastardized temperature profile. Those who deliberately lied about the temperature records should go to jail for all the unnecessary harm they have caused over the years.

Richard Greene
Reply to  Tom Abbott
January 20, 2023 11:46 am

I originally wrote:
“The predictions of doom are barely related to historical climate change trends.”

You responded with:
“You must be kidding!”

Even when I am kidding, with satire, people still think I am serious

The Climate Howlers cherry-pick 1975 to 2015, when CO2 and global average temperature happened to be rising at the same time.

They ignore 1940 to 1975 and 2015 to 2023 when the “proper” CO2 correlation with average temperature did not exist.

They DO NOT claim that future climate change will be the same as in the 1975 to 2015 period, whether determined by surface or satellite measurements (which are similar).

If “more of the same” was the prediction, they would be predicting a mild rate of global warming, that obviously harmed no one from 1975 to 2015, and probably was not even noticed by most people

That would not be climate scaremongering

Instead, they predict 2x to 3x faster warming in the future, and have been making the same wrong prediction since the 1979 Charney Report.

Climate Howlers do not base their always wrong predictions on any past climate change trends. They ignore past trends when they are inconvenient, such as 1940 to 1975 and 2015 to 2023.

If the current temperature trend was important to the Climate Howlers, then the flat trend from 2015 to 2023 would have destroyed the Climate Howlers predictions of doom. In fact that flat trend from 2015 to 2023 had no effect on climate predictions at all. Proving that predictions of climate doom are barely related to historical climate change trends. Which is exactly what I said.

The hockey stick chart cannot be proven wrong except for the DEVIOUS switch from proxies to measurements in the 1800s, which was NOT IDENTIFIED ON THE CHART.

We know the average temperature has varied in the past few thousand years. But averaging local climate proxies can’t prove that.

Any global average temperature change in the past 5000 years is hidden within A REASONABLE MARGIN OF ERROR IN THE ESTIMATES.

You have to go back to the Holocene Climate Optimum from 5000 to 9000 years ago to prove there was a warmer period than in the past decade. Maybe only 1 to 2 degrees C. warmer, but that should be enough to exceed the margin of error in the local climate proxy estimates.

The climate alarmist meme HAS survived global cooling from 1940 to 1975. And it survived no warming from 2015 to 2023 in spite of the fastest rise of manmade CO2 in history. It is a myth that is like a zombie – it never dies.

The CAGW myth never dies because it is just a prediction, not reality. Predicted every year starting in 1979. CAGW never shows up in the historical temperature data, but the CAGW prediction never changes.

And never forget that the historical temperature record is controlled by the same people who have been predicting CAGW since 1979. The global average temperature will always be whatever the government bureaucrat scientists tell us it is. The actual average temperature could go down, as they claim it is going up. The climate change inmates are running the climate change asylums (NOAA and NASA-GISS).

Tom Abbott
Reply to  Richard Greene
January 22, 2023 5:03 am

Well, I can't prove it, but I suspect that there is a Hockey Stick chart in the heads of all these politicians and climate change activists, and also in the skeptic politicians' heads, which keeps them silent, because what can they say when the global temperatures are "obviously" going higher and higher every year in concert with CO2.

They see Al Gore’s Hockey Stick in their dreams.

It is a false reality, disputed by the written, historical temperature records from around the world, which show we are not experiencing unprecedented warming today, as it was just as warm in the recent past. No amount of computer mannipulation can change these facts.

Here are 600 graphs from around the world that do not show a “hotter and hotter and hotter” temperature profile.

https://notrickszone.com/600-non-warming-graphs-1/

They directly dispute the Hockey Stick “hotter and hotter” temperature profile. Just because these written temperature records don’t cover every inch of the Earth’s surface does not make them or their information invalid.

The Climate Change Data Mannipulators deliberately created a false reality with their computer-generated Hockey Stick chart. The Hockey Stick chart profile does not represent the regional temperature profile. It’s a blatant lie created for selfish/political purposes and is doing great damage to Western society right now.

Joe Crawford
January 20, 2023 8:13 am

Makes one wonder if anyone has tried to develop and test an electronic temperature probe encased in a blown glass container. I would think you might be able to simulate the response time of a mercury thermometer by controlling the type and thickness of the glass.

Gunga Din
Reply to  Joe Crawford
January 20, 2023 11:10 am

I suspect that one of the problems is in the “noise” in the line of any electronic transmission.
Another is “trusting” the input/output value more than it deserves to be trusted just because it’s “electronic” and/or a computer was involved.

Tim Gorman
Reply to  Gunga Din
January 20, 2023 12:26 pm

The tolerance of every electronic component in the measuring station, from resistors, capacitors, analog-digital converters, etc, must be considered when determining the measurement uncertainty of a field instrument. There isn’t a single component that doesn’t have some kind of drift over time. Even using ratio comparisons doesn’t eliminate measurement uncertainty.

Jennifer Marohasy
Reply to  Gunga Din
January 20, 2023 12:55 pm

Joe, Strictly speaking it is not an ‘electronic’ probe. Rather a platinum resistor. https://technology.matthey.com/article/24/3/104-112/

Gunga, The BoM seems to ignore the issue of ‘electrical noise’ in the transmission line.

Lance Pidgeon (8th Jan 2023) has previously emailed me:

"The BoM as you know claim to have put a sleeve on the platinum resistance thermometer to slow its thermal response time down to match the thermal response time of a mercury thermometer, about 30 seconds at 5 m/s wind speed.

"This does nothing about electrical noise. Nothing at all. It is electrical noise that could randomly synchronise with any one of the 60 spot readings to shift the temperature up by 0.3 for a single reading, and if it happens to increase at a rate of 0.3 per second and continue to synchronise for the right 60 samples then an 18-degree increase is theoretically possible."

Andrew Johnson, Director of the BOM (30th October 2017) has emailed me:

"It is also not true that a change has occurred at the Bureau from averaging one-second readings over one minute. The Bureau uses purpose designed probes that ensure every reading is an integration of 40-80 seconds. The probes were specifically designed to have a long response time to mirror the behaviour of mercury in glass, making numerical averaging unnecessary."

Jennifer Marohasy
Reply to  Jennifer Marohasy
January 20, 2023 2:55 pm

More information. The best I can determine is that:

1. An original IT system for averaging the one-second readings from the Bureau's electronic probes was put in place by Almos Pty Ltd, who had done similar work for the Indian, Kuwaiti, Swiss and other meteorological offices. The software in the Almos setup (running on the computer within the on-site shelter) computed the one-minute average (together with other statistics). This data was then sent to what was known as a MetConsole (the computer server software), which then displayed the data, and further processed it into 'Synop', 'Metar' and 'Climat' formats. This system was compliant with World Meteorological Organisation (WMO) and International Civil Aviation Organisation (ICAO) standards. The maximum daily temperature for each location was recorded as the highest one-minute average for that day. This was the situation until at least 2011, and may have been the situation until perhaps February 2013, when Sue Barrell from the Bureau wrote to a colleague of mine, Peter Cornish, explaining that the one-second readings from the automatic weather station at Sydney Botanical Gardens were "numerically averaged".

2. At some point over the last five years, however, this system has been disbanded. That the Bureau now only records one-second extrema was confirmed to me in an email from the Director of the Bureau, Andrew Johnson, dated October 30, 2017. He wrote:

“It is also not true that a change has occurred at the Bureau from averaging one-second readings over one minute. The Bureau uses purpose designed probes that ensure every reading is an integration of 40-80 seconds. The probes were specifically designed to have a long response time to mirror the behaviour of mercury in glass, making numerical averaging unnecessary.”

3. All, or most, of the automatic weather stations now stream data from the electronic probes directly to the Bureau's own software. This could be an acceptable situation, except that the Bureau no longer averages the one-second readings over a one-minute period. For example, at Rutherglen, it is likely that the Bureau averaged the one-second readings from January 1998 until sometime between 2011 and 2013. The maximum temperature as recorded each day at Rutherglen is now the highest one-second spot reading from the custom-built probe. That is correct – a spot reading. So, to reiterate, we now have a non-standard method of measuring (spot readings) from non-standard equipment (custom-built probes), making it impossible to establish the equivalence of recent temperatures from Rutherglen with historical data.

Bill Johnston
Reply to  Jennifer Marohasy
January 20, 2023 6:12 pm

Hi Jen,

Does it matter that Sydney Botanical Gardens does not have an AWS?

Although you may will it to be so, values reported by AWS are not instantaneous 'spot' values. As the time constant exceeds each discrete 1-minute sampling interval, it is, as Johnson points out, a time-integrated value.

Obtaining a spot value would require an instrument with a very short time constant and a much shorter sampling interval.

All the best,

Bill

ThinkingScientist
January 20, 2023 8:34 am

This is very similar to the support, or change of scale, problem in geostatistics. The readings of mercury thermometers are more averaged, i.e. taken over a longer time window, compared to the new readings. This means that the new digital readings (before averaging) will have higher variance than the smoother (more averaged) mercury readings.

If there is a set of parallel records over time then a correction factor based on the change of variance could be estimated. The correction is called the affine correction and simply requires a variance adjustment factor to be computed. Then the extreme values documented with the digital readings could potentially be scaled back to their mercury thermometer equivalent values, at least for comparison purposes.
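
A minimal sketch of that affine correction (Python; the parallel series below are made-up illustrative numbers, not Mildura data): readings are rescaled about their mean so the corrected series matches the variance of the smoother record.

import statistics

def affine_correct(values, target_variance):
    # Affine (variance) correction: rescale about the mean so the
    # corrected series has the target variance; the mean is preserved.
    m = statistics.fmean(values)
    k = (target_variance / statistics.pvariance(values)) ** 0.5
    return [m + k * (v - m) for v in values]

# Illustrative parallel records of the same weather, two instruments.
probe = [30.1, 33.8, 28.2, 36.0, 31.4, 29.0, 35.2, 32.3]    # higher variance
mercury = [30.5, 33.0, 29.0, 35.0, 31.5, 29.6, 34.4, 32.0]  # smoother

adjusted = affine_correct(probe, statistics.pvariance(mercury))
print(f"probe max {max(probe):.1f} C -> mercury-equivalent {max(adjusted):.1f} C")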

Eng_Ian
Reply to  ThinkingScientist
January 20, 2023 12:56 pm

A much simpler solution is to embed the digital thermometer into a block of aluminium. The thermal mass can be selected to provide a 1 minute averaging.

High school mathematics required to calculate the size of the aluminium.

Simples.
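
Roughly how much aluminium? A lumped-capacitance sketch (Python; the convective coefficient h is an assumed value, and real screen geometry would change the numbers):

# Lumped-capacitance estimate: tau = rho * c * V / (h * A).
# For a sphere V/A = r/3, so r = 3 * h * tau / (rho * c).
rho = 2700.0  # aluminium density, kg/m^3
c = 900.0     # aluminium specific heat, J/(kg K)
h = 25.0      # assumed convective coefficient inside a screen, W/(m^2 K)
tau = 60.0    # desired time constant, s

r = 3 * h * tau / (rho * c)
print(f"sphere radius for a {tau:.0f} s time constant: {r * 1000:.1f} mm")

A few millimetres of metal around the sensor would do it, which is broadly consistent with the probe 'sleeve' described elsewhere in this thread.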

Ben Vorlich
January 20, 2023 9:59 am

Even if the mercury thermometer is inaccurate, the only way to move to a new instrument is to run the two in parallel and check that whatever algorithm (in this case, averaging) is being used gives a result as close as possible to the original.
If that's not done you can't compare the two record sets. Simple as that.

Tim Gorman
Reply to  Ben Vorlich
January 20, 2023 12:22 pm

Running in parallel doesn't ensure anything if the new instrument has a built-in uncertainty. That in-built uncertainty just carries over to being unsure of the accuracy of the old instrument.

The only *real* solution is to start a new record for the new instrument. Comparing the old record to the new record, even for trending, must also consider the measurement uncertainty of both instruments. Either that or do a calibration lab process on both instruments at the same time in order to determine their actual differences.

Richard Greene
January 20, 2023 11:19 am

I trust the author far more than any government bureaucrat "scientists".

A side issue: Her blog used to have one of the best photos on the internet — Jennifer as a child with her parents, which I can’t find anymore. I once suggested in a comment there that it should be on the home page of her website but it seems to have disappeared. And I’m disappointed.

Eng_Ian
Reply to  Richard Greene
January 20, 2023 12:57 pm

Let the wayback machine be your friend.

Jennifer Marohasy
Reply to  Richard Greene
January 20, 2023 1:48 pm

Richard Greene, Thanks for remembering :-). The photograph I will try to post with this comment was once on my home page. I'm to the right of Mum. That photograph would have been taken about 1973, at Conondale in Queensland.

[photograph attached: Conondale-Mum-4Kids copy.jpeg]
Richard Greene
January 20, 2023 12:06 pm

All the temperature measuring, adjustments, readjustments, re-readjustments, homogenization, pasteurization and infilling is a waste of time and money.

Australia climate authorities should place one weather station outside their headquarters and use it to guess the average temperature of Australia.

Who cares about the average temperature of Australia anyway?
No one lives in that average temperature.

People might be interested in local temperature trends where they live and work. They can probably feel them without the help of Ph.D. scientists. What difference does it make what the temperature of Perth was in 1895, or 1934?
How is knowing that going to affect the current climate or the life of anyone living in Perth today?

I say historical temperature data should be outlawed.
They serve no purpose that benefits society.

We must ban climate history. Let's just pretend there was no climate in history and move on with our lives.

The climate is whatever it is, and it is always changing. It has changed every year of our lives and we survived.

Only a lower-intelligence species would spend so much time worrying about the climate and demonizing CO2, the staff of almost all life on our planet.

So the obvious solution is TO BAN CLIMATE HISTORY and put those pesky government bureaucrat scientists on the unemployment lines.

This is a serious comment, not satire
And not posted after 8 beers — I do not drink beer.

Jennifer Marohasy
Reply to  Richard Greene
January 20, 2023 1:42 pm

Richard Greene, Here's a best-bet historical temperature reconstruction for Australia, based on temperatures at Coonabarabran, which still has a mercury thermometer. I'm hoping the chart will post with this …

[chart attached: Coonabarabran.png]
sherro01
Reply to  Jennifer Marohasy
January 20, 2023 5:59 pm

Then, there is the UAH satellite data for monthly temperature anomalies version 6.0 lower troposphere. This is the usual data, nothing adjusted after the numbers left UAH in Huntsville.
http://www.geoffstuff.com/uahjan2023.jpg
There are imponderables when comparing UAH with BOM conventional daily data. It is noted that each measurement system has a gross uncertainty containing many factors, so uncertainty analysis has to be done.
For the BOM numbers, I wrote 3 WUWT articles late in 2022, the third co-authored with Tom Berger. The BOM were emailed an invitation to join in with comments, but that email was not even acknowledged as received. The comments to these three WUWT articles showed a large range of opinion, understanding and misunderstanding of the concept of uncertainty, so I cannot report any breakthroughs born from the confusion.
Uncertainty is a major factor for understanding what these temperature numbers mean. One important path to understanding uncertainty is to take measurements in parallel. The reluctance of BOM to supply some such data is reprehensible and hopefully will be shown by a Court as illegal.
Personally, I have been asking questions of the BOM for many years, but their general tone has been unfriendly and minimally cooperative. This is not good enough when their temperature numbers are fed into global estimates that we can now see being used to justify major industrial change: closing fossil fuels, mandating battery-powered cars, and calling for dangerous reductions in fertilizer use. These major changes are built on very uncertain temperature numbers, often with trends badly confused by noise of various types.
The science behind all this is not settled. Please continue to study and object to errors being downplayed. It really is quite important. Geoff S

Bill Johnston
Reply to  Jennifer Marohasy
January 20, 2023 7:54 pm

Coonabarabran ummmm

See attached from Simon Torok’s thesis

Cheers,

Bill

[image attached: Coonabarabran.jpg]
Bob
January 20, 2023 2:10 pm

The solution to this kind of tomfoolery is so obvious I'm surprised I have to point it out. No measuring device should be replaced. Instead new ones can be added. The results of both devices should be recorded side by side for a reasonable amount of time. If they give the same results then we can feel confident the old device is still in good working order. If the results are different then we need to record them both for a good long time. If the difference remains the same then we can be confident both devices are in good working order but differ. From here on out it doesn't matter whether we use the new or old device so long as we know and acknowledge that their results are different. Because one device measures hotter or cooler than the other is not proof of global warming or cooling; it is merely proof of the difference between the two devices.

Jennifer Marohasy
Reply to  Bob
January 20, 2023 2:52 pm

Agreed. And I’ve explained the policy versus the reality before, quoting myself:

1. Since 1996, the Bureau has been transitioning from manual recordings of daily temperatures from liquid-in-glass thermometers (mercury for maximum temperatures and alcohol for minimum temperatures) to an automated system using electronic probes, with the probes more responsive to fluctuations in temperatures and, therefore, likely to record both hotter and colder for the same weather. This is stated in the Bureau's Research Report No. 032, The Australian Climate Observations Reference Network – Surface Air Temperature (ACORN-SAT) Version 2 (October 2018), by Blair Trewin:
“In the absence of any other influences, an instrument with a faster response time will tend to record higher maximum and lower minimum temperatures than an instrument with a slower response time.  This is most clearly manifested as an increase in the mean diurnal range.  At most locations (particularly in arid regions), it will also result in a slight increase in mean temperatures, as short-term fluctuations of temperature are generally larger during the day than overnight.” (Page 21) End quote.  

2. Because of the difficulty in achieving consistency between temperature recordings from the newer electronic probes and traditional mercury thermometers, the Indonesian Bureau of Meteorology (BMKG) records and archives measurements from both devices, with a policy of having both types of equipment in the same Stevenson Screen at all its official weather recording stations.

3. The Australian Bureau has a policy of maintaining mercury thermometers with electronic probes in the same Stevenson Screen for a period of at least three years when there is a changeover. This policy, however, is not implemented and is mostly ignored. For example, the Rutherglen agricultural research station had a long, continuous temperature record, with minimum and maximum temperatures first recorded using standard and calibrated equipment in a Stevenson Screen at the beginning of November 1912. Considering the first 85 years of summer temperatures – unadjusted/not homogenized – the very hottest summer on record at Rutherglen is the summer of 1938/1939. At Rutherglen, the first significant equipment change happened on 29 January 1998. That is when the mercury and alcohol thermometers were replaced with an electronic probe – custom-built to the Australian Bureau of Meteorology's own standard, with the specifications yet to be made public. According to Bureau policy, when such a major equipment change occurs there should be at least three years (preferably five) of overlapping/parallel temperature recordings. However, the mercury and alcohol thermometers (used to measure maximum and minimum temperatures, respectively) were removed on the very same day the custom-built probe was placed into the Stevenson screen at Rutherglen, in direct contravention of this policy.
 
