Dr Jennifer Marohasy writes by email:
There is evidence to suggest that the last 20 years of temperature readings taken by the Australian Bureau of Meteorology, from hundreds of automatic weather stations spread across Australia, may be invalid. Why? Because they have not been recorded consistent with calibration.

Somewhat desperate to prove me wrong, late yesterday the Bureau issued a ‘Fast Facts’ document that really just confirms the institution has mucked up. You can read my line-by-line response to this document here: http://jennifermarohasy.com/2017/09/bureau-management-rewrites-rules/
This comes just a few days after the Bureau issued a 77-page report attempting to explain why, despite MSI1 cards having been installed in weather stations that prevent them from recording cold temperatures, there is no problem.
Earlier this year, specifically on Wednesday 26 July, I was interviewed by Alan Jones on radio 2GB. He has one of the highest-rated talkback radio programs in Australia. We discussed my concerns. And according to the new report, by coincidence, the very next day, some of these cards were removed…
You can read why my husband, John Abbot, thinks this 77-page report is “all about me” in an article published just yesterday at The Spectator: https://www.spectator.com.au/2017/09/not-really-fit-for-purpose-the-bureau-of-meteorology/
I’ve republished this, and much more at my blog.
I thought we were supposed to be interested in the SURFACE temperature of the earth (something to do with a black body and an emissivity of 1 and a T^4 function IIRC). Surely we should start putting some sensors on the ground by now?
oops – I initially misread the title of this piece as:
… Australian BoM climate readings may be invalid due to lack of Collaboration
surely they can fix this problem with more collaboration, we just need to stop doubting them.
A mostly unrelated question, except for how it relates to measuring aspects of climate …
Recall the recent article with the picture of Northern Lights reflecting off of still water. Have we been tracking still waters? (Is it possible to measure?) Is there a trend in the amount of still water? For those that believe that the Earth is warming due to CO2 emissions, what trend would you expect to see in the amount of still waters? If hurricanes are made more intense as they claim, does this imply that there would be less overall still waters occurring? But still waters happen because the weight of the atmosphere is more forceful than ambient air/water currents for that location. How frequently does this occur? If we’re adding carbon to the atmosphere, are we increasing the weight of the atmosphere and thereby increasing the amount of still water? Are we to fear increases in still water?
Perhaps while we track hurricanes with such intent, we should also make parallel measurements of still waters.
TH,
I don’t put any stock in your speculation. “Still water” is the result of not having enough wind to ripple the surface of the water. There are underwater waves in the oceans, under significantly greater pressure from the overlying water than any surface waves experience from an atmosphere whose pressure increases minutely from a slight increase in a trace gas! I suggest that you read this: https://en.wikipedia.org/wiki/Internal_wave
Clyde – thank you for the response. What I so awkwardly was attempting to speculate was that since we’ve been tracking climate disturbances, we should also be tracking climate tranquility. And, I used this as an opportunity for believers to apply climate science and predict the current tranquility trend. Maybe we could select several large lakes around the world and then define what constitutes tranquility, and monitor them.
I have often wondered out loud that if we are responsible for the bad weather who or what is responsible for the good weather. A bit tongue in cheek I know, but human attribution shouldn’t just be in one direction surely.
The 1st or 2nd weather station along I-80 in Colorado from the Nebraska state line sits 1/2 over the asphalt of the shoulder.
It has 50% of the same problem as the infamous Arizona State University parking lot weather station.
It sits on the south, eastbound lanes, of the interstate.
It is weather station pole, guard rail, shoulder.
Let me guess, all bad data will be kept in the climate data record as in the case of bad station data in the U.S. that was not cleaned up after closing the stations.
Or data that wasn’t cleaned up after fixing the station, e.g. a string of days of record warmth (apparently) at Honolulu. https://wattsupwiththat.com/2009/06/16/how-not-to-measure-temperature-part-88-honolulus-official-temperature-2/
“Cleaned up”?
Naw. It was laundered.
(Though I think the preferred term is “adjusted”.)
Laundered? Mangled. And pressed into shape with a hot iron.
Hey, weather is not climate, and local weather records are good enough because they aren’t climate. Right? 🙂
“Fast facts”; for when you want to play fast and loose.
Aside from thermometers, it looks like she cleaned their clock too.
“…the Bureau has explicitly stated, most recently in an internal report released just last Thursday, that for each one minute temperature it only records the highest one-second temperature, the lowest one-second temperature, and the last one-second temperature – in that one minute interval. The Bureau does not record every one-second value. In the UK, consistent with World Meteorological Organisation Guidelines, the average temperature for each minute is recorded.”
How is the average temperature for each minute determined in the UK? Do they average 60 one-second readings to get the average? Or do they average the highest and lowest readings during that minute? Would the difference really be all that significant when we’re talking about such a short time period? Temperature adjustments applied later would seem to me to affect the end result much more significantly than how the one-minute average is determined. You want to get it as accurate as possible, but this seems to be an example of straining at gnats and swallowing a camel. What am I missing?
“Would the difference really be all that significant when we’re talking about such a short time period?”
Depends on the response time of the sensor and environment. And that can vary. The PRT thermometer is quite good. You can get accuracy to .01°C if recently calibrated. The response time depends on the mass. Usually on the order of a few seconds for a change to 63% of the total change. No matter what, there is a delay.
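Since this thread keeps coming back to response times, here is a minimal Python sketch of a first-order sensor lag, assuming a simple exponential response; the starting temperature, step size, and the 20 s time constant are illustrative only (20 s happens to be the WMO-recommended figure quoted further down the thread).

```python
# A thermometer with time constant tau reaches 63.2% of a step change in air
# temperature after tau seconds: T(t) = T_air + (T0 - T_air) * exp(-t / tau).
import math

def indicated_temp(t_s, t0, t_air, tau_s):
    """Indicated temperature t_s seconds after a step from t0 to t_air (deg C)."""
    return t_air + (t0 - t_air) * math.exp(-t_s / tau_s)

# Step from 20.0 C to 21.0 C, tau = 20 s:
for t in (0, 20, 40, 60):
    print(f"t = {t:2d} s -> {indicated_temp(t, 20.0, 21.0, 20.0):.3f} C")
# After one time constant (t = 20 s) the sensor shows 20.632 C,
# i.e. 63.2% of the step -- hence "no matter what, there is a delay".
```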
He: So what is the real temperature?
She: How close do you need to know?
===
Which brings up another point: calibration. I’d like to see if there is a standard method – where the calibrator is placed vs the recording PRT – and whether they track calibration changes over time. Drift should slow down over time. If not, possible problem.
“You can get accuracy to .01°C if recently calibrated.” “track calibration changes over time”
Most people do not understand “calibration”. A calibration does not guarantee any subsequent measurement accuracy. It only confirms how accurately the instrument was measuring “as found”. It is then adjusted (if necessary) to display a correct measurement as compared to a known “standard” (usually a Measurement Standard calibrated at a National Metrology Laboratory like NIST, or an intrinsic standard such as a triple point of water standard).
After a “calibration interval” in the field the instrument is submitted for another calibration, at which point the instrument’s ability to hold its accuracy (drift) is assessed. Measurements made with the instrument during the calibration interval can then be analyzed, adjusted or discarded.
Based on the instrument’s accuracy history at the time of calibrations, the calibration interval may be shortened, lengthened or left the same to maintain the desired measurement reliability.
In short, an instrument may be “inaccurate” or “unreliable” as soon as it leaves the calibration station, but you won’t know until the next calibration.
M Simon
Thank you for bringing up the tracking of calibration adjustments. Platinum 100 Resistance Temperature Devices (Pt100 RTDs, meaning 100 ohms at 0 degrees C) can be purchased as a ‘matched pair’ and both installed in the same station. That is how the Argo floats work. The likelihood of them both drifting exactly the same amount is low, so one is used as a reference for the other.
The importance of routine calibration cannot be overstated for a weather station because there is literally no reference measurement of the same air parcel at the same time. It is one measurement of one ‘volume’ of air, once each. Taking a measurement per second and averaging them is a good idea in the expectation that ‘the temperature won’t change much in one minute’.
The error propagation from 60 readings with a measurement error of ±0.002 C (I have checked that myself) is small so that when averaging temperatures from hundreds of stations the final answer doesn’t have too large an uncertainty.
What is missing (from what I see) is the reporting of the uncertainty of the final regional or national average. Reports pretend that the initial values and the final average have the same uncertainty, which is not how error propagation works.
A record of drift is a good start for making corrections to the last data set to reduce the uncertainty, but the final numbers carry the intrinsic characteristics of the instrument through to all final results.
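For what it’s worth, the arithmetic behind the error-propagation claim above, as a minimal Python sketch; it assumes the per-reading errors are independent and random, which is exactly why drift or bias does not average away:

```python
# Standard error of a one-minute mean built from 60 one-second readings,
# assuming independent random measurement noise of +/-0.002 C per reading.
# Systematic errors (calibration drift, bias) are NOT reduced this way.
import math

sigma_single = 0.002               # C, per-reading random error (quoted above)
n = 60                             # one-second readings per minute
sem = sigma_single / math.sqrt(n)  # standard error of the mean
print(f"uncertainty of the one-minute mean: +/-{sem:.5f} C")  # ~ +/-0.00026 C
```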
M Simon,
Caution must be used when reporting the accuracy of any RTD. All such devices have a non-linear response to temperature, making the transfer function from ohms to degrees critical in regard to accuracy. Although platinum RTDs have a “nearly” linear response, it is still a curve. If you use a simple linear transfer function, you can get up to 0.38C of error. Using a quadratic equation to fit the curve, you can get that down to 0.10C in the climatic ranges we are interested in. Using a high-order polynomial you can probably get down to 0.01C, but that would require 64-bit calculations, and I’m not sure the hardware they are using is capable of that. Frankly I don’t know what kind of transfer function the automated sensors use, so unless you do, you might want to be careful about throwing accuracy numbers around. (I know you said “can get accuracy”, but many people will read that as “has accuracy”.)
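To put a number on that linearisation error, here is a Python sketch using the standard IEC 60751 (Callendar–Van Dusen) coefficients for a Pt100 above 0 °C; whether the BoM’s electronics use anything like this transfer function is, as noted above, unknown:

```python
# Convert a Pt100 resistance to temperature two ways: a naive linear transfer
# function vs. the quadratic (Callendar-Van Dusen) inverse. Valid for T >= 0 C.
import math

R0 = 100.0        # ohms at 0 C
A = 3.9083e-3     # IEC 60751 coefficient (1/C)
B = -5.775e-7     # IEC 60751 coefficient (1/C^2)
ALPHA = 3.85e-3   # mean slope over 0-100 C used by the naive conversion

def resistance(t_c):
    """Ideal Pt100 resistance at t_c C (quadratic CVD, t_c >= 0)."""
    return R0 * (1.0 + A * t_c + B * t_c ** 2)

def temp_linear(r):
    """Naive linear transfer function: assumes a constant slope."""
    return (r / R0 - 1.0) / ALPHA

def temp_quadratic(r):
    """Invert the quadratic CVD equation (take the physical root)."""
    return (-A + math.sqrt(A * A - 4.0 * B * (1.0 - r / R0))) / (2.0 * B)

for t_true in (0.0, 10.0, 25.0, 40.0, 55.0):
    r = resistance(t_true)
    print(f"true {t_true:5.1f} C  linear {temp_linear(r):7.3f}  "
          f"quadratic {temp_quadratic(r):7.3f}")
```

With these coefficients the naive linear conversion is off by roughly 0.4 C near 55 C, in line with the 0.38C figure above, while the quadratic inverse recovers the true value.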
There is a calibration; the instruments are not thermistors; they are PRTs, Platinum Resistance Thermometers. Most of the info about the Bureau’s methods is here:
http://www.bom.gov.au/climate/change/acorn-sat/documents/ACORN-SAT_Observation_practices_WEB.pdf
My take on the report is:
It is useful that issues relating to minimum-temperature cut-offs at several of the Bureau’s AWS were highlighted in the public domain and will be rectified; however, I agree the effect overall (on the network and ACORN-SAT) is of no great consequence. Some issues highlighted by the report deserve further comment: the Bureau should acknowledge the role of ‘citizen scientists’ in identifying the problem and in overviewing the Bureau’s sites and methods; SitesDB and the various reports referenced, which are not currently in the public domain, should be made publicly accessible; and the lack of regular site maintenance (mowing the grass, cleaning of instruments and screens, removing debris from tipping-bucket raingauges, etc.) is of concern.
The budget the Bureau allocates to systems, computers, flow-charts and internal controls is wasted if the weather stations it relies on to generate data are poorly sited and maintained, or if equipment, particularly rapid-sampling electronic probes in small Stevenson screens, is biased.
[For weather stations beside dusty tracks, or in the vicinity of airport tarmac, or screens over-exposed to the weather and sea-spray at lighthouses, one inspection and one maintenance visit per year (Table 1; p. 54) is insufficient to ensure instruments remain clean and dry and that screens are in good order and not dust-stained or weather-beaten.]
I fully support the call for an open public inquiry into the Bureau, focusing on site bias, data handling (temperature cutoffs and averaging), and most importantly, the way homogenisation is done.
Cheers,
Dr. Bill Johnston
One-minute readings – which liquid-in-glass thermometers respond too slowly to record – are what has allowed the UK Met Office to claim numerous ‘record temperatures’ where the average temperatures recorded either side of a one-minute spike have been 1 deg C or more lower.
Can’t trust those ‘record’ temperatures… certainly not against historic liquid-in-glass records, nor against a 10-minute average as recommended by the WMO.
So what is the content of this post? What calibration is she talking about? All I can see is a claim that Jennifer and Alan Jones have put their heads together and decided the BoM is wrong (of course). But how?
Well, without repeating the whole article let’s start at number 1:
You can read the rest for yourself.
The point is that the Aussie BOM has been caught breaching quality guidelines and is now spinning like crazy to justify its continued funding – having failed in its primary function.
More intriguing is that other Met Offices (like the UK and US) are not helping with independent audits…
Guess they don’t want their dirty linen being exposed in turn.
M,
No answer to the question – what calibration are we talking about? It’s up there in the headline etc. But what is it?
You say they were caught breaching quality guidelines – what guidelines, and who caught them?
It seems the BoM has given a perfectly reasonable account of their process. They use a platinum resistance thermometer which has similar thermal inertia to the old liquid in glass. I don’t see any attempt here to deal with that.
Dirty linen:
It’s about what smoothing filter you are, in fact, using. I think you know very well that a one-second smooth is not the same as a one-minute smooth or a ten-minute smooth. When the “max” and the “min” temperature are averaged together to calculate an “average” temperature, the smoothing filters that are, in fact, being used can lead to significantly different values. Once a particular smoothing filter is used, it isn’t always possible to go back later and correct the smoothing operation. It all depends on whether and how much of the unsmoothed data is recorded and preserved.
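As a toy illustration of that point, a Python sketch with a synthetic 1 Hz temperature series containing a brief warm spike; every number in it is invented, but it shows how the reported “maximum” depends on the smoothing window:

```python
# Maximum of a synthetic 1 Hz series under 1 s, 60 s and 600 s moving averages.
import random

random.seed(0)
series = [20.0 + random.gauss(0.0, 0.3) for _ in range(1200)]  # 20 min at 1 Hz
series[600:605] = [22.5] * 5  # a 5-second warm spike (gust, noise, whatever)

def max_of_moving_average(xs, window):
    """Highest value of a simple moving average with the given window (s)."""
    return max(sum(xs[i:i + window]) / window
               for i in range(len(xs) - window + 1))

for w in (1, 60, 600):
    print(f"{w:4d} s smooth: max = {max_of_moving_average(series, w):.2f} C")
# The 1 s "max" reports the spike in full; the longer smooths barely see it.
```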
The BoM says that because their platinum thermometers have thermal inertia comparable to LiG, there is no need for digital smoothing. The response time of the thermometers is, in their words, 40-80 seconds.
Nick Stokes
September 11, 2017 at 2:08 pm
The BoM says that because their platinum thermometers have thermal inertia comparable to LiG, there is no need for digital smoothing.
That would be easy to test. Did they?
“Did they?”
Platinum resistance thermometers have been used world-wide for many years. I’m sure their response times are well known, and on the manufacturers data sheets.
The thermometers do provide a 1 second data stream, which BoM condenses to a 1 minute summary for transmission. I’m sure they have looked at that data.
‘The thermometers do provide a 1 second data stream, which BoM condenses to a 1 minute summary for transmission. I’m sure they have looked at that data.’
Did you actually read the section reproduced by M Courtney? BoM state they RECORD the data every second, when according to JM what they actually do is record just three values every minute: min, max and last. So not only are they not doing what they say they do, what they actually do is not compliant with accepted guidelines (which even the UK MetOff manage to follow). Why should your being sure they have looked at that data give anyone any confidence?
DaveS,
“BoM state they RECORD the data every second”
They said the sensor records data every second. They transmit a one-minute summary.
“what they actually do is not compliant with accepted guidelines”
What no-one seems interested in is reading the actual guidelines. Here it is,
WMO sec 2.1.3.3
“For routine meteorological observations there is no advantage in using thermometers with a very small time-constant or lag coefficient, since the temperature of the air continually fluctuates up to one or two degrees within a few seconds. Thus, obtaining a representative reading with such a thermometer would require taking the mean of a number of readings, whereas a thermometer with a larger time-constant tends to smooth out the rapid fluctuations. Too long a time constant, however, may result in errors when long-period changes of temperature occur. It is recommended that the time constant, defined as the time required by the thermometer to register 63.2% of a step change in air temperature, should be 20 s.”
Just what the BoM says and does.
Relax Nick: your ‘analysis’ hasn’t caught, for 20 years, the fact that greenhouse gases stopping 20% of the total available warming firelight of the sun can’t make instruments detect more and more light arriving to warm the earth – as those greenhouse gases stop more and more of it from ever arriving to warm it.
You’re a sophist and fraud-barking con man who’s so evil you tried to foist off a violation of Conservation of Energy so blatant even school kids scoff in the faces of the hacks claiming it could be real.
We – meaning all civilization – don’t need your fraudulent ‘analyses’. You’re a fraud-barking scam peddler who’s not fit to talk about anything related to physics or the laws governing physics.
“Nick Stokes September 11, 2017 at 1:15 pm
So what is the content of this post? What calibration is she talking about? All I can see is a claim that Jennifer and Alan Jones have put their heads together and decided the BoM is wrong (of course). But how?”
“All I can see is a claim … ” Come off it. A 3 wise monkeys stunt. This latest discrepancy is explained in full, and you cannot possibly have missed all the other discrepancies, e.g. upward trends replacing downward trends, homogenisation over 1000 km and a mountain range, evidence of manual alteration of records, records disappearing due to breakages that occur after the event.
A simple question – what is the “lack of calibration” in the headline? No-one seems to know.
Nick Stokes September 11, 2017 at 2:54 pm
A simple question – what is the “lack of calibration” in the headline? No-one seems to know.
A simple answer: The instruments are calibrated to record temperatures as low as -60 C, but are deliberately limited to exclude anything reading below -10 C, by installing MSI1 cards to accomplish this end. Not only do they lack calibration for anything below -10 C, they deliberately exclude it.
The bigger point is the BOM lied about this. IIRC, first it was an equipment malfunction, then it was a data transmission problem, next it was a software algorithm that caused it. Now we find out the BOM installed these cards to purposely exclude these readings.
Why would any reasonable person trust them, especially when this field of science has a history of corruption, lack of ethics and scientific principles.
I would say it was this: “a report issued by the World Meteorological Organisation (WMO) in 1997 entitled ‘Instruments and Observing Methods’ (Report No. 65) that explained because the modern electronic probes being installed across Australia reacted more quickly to second by second temperature changes, measurements from these devices need to be averaged over a one to ten-minute period to provide some measure of comparability with the original thermometers.”
Certainly not explicitly stated very well, but she raises that same point elsewhere in her article. That sounds like a calibration issue to me.
James,
“that explained because the modern electronic probes being installed across Australia reacted more quickly to second by second temperature changes”
The WMO report did not say anything about the probes being installed across Australia. That is Jennifer’s interpolation. The BoM in their review report does describe those probes, and it denies Jennifer’s claim that they respond more quickly.
It seems the instruments deliberately exclude temperatures below -10C. That’s a calibration error, and an unbelievably stupid and unscientific one. If the thermometer is reliable, then it won’t show <-10C unless the temperature is <-10C. If the thermometer is unreliable, then you are introducing a bias by eliminating selected low temperatures while leaving in all the other measurements that might be wrong. Question: How can there ever be a new low temperature? Question: Are they looking at the recorded temperatures and seeing nothing under -10C because under -10C readings are excluded, and then concluding that it’s safe to exclude under -10C readings because there aren’t any in the record?
Where I lived until recently, -10C was not rare. We were near Goulburn but fully rural (no UHE). I'm pretty sure that Goulburn and Moss Vale would get UHE (higher lows), so their natural temperatures would have been just as low as ours.
“It seems the instruments deliberately exclude temperatures below -10C. That’s a calibration error”
Nonsense. Do you even know what calibration means? It is a manufacturer’s limit on the performance of the processing card (not even the measuring instrument).
Calibrate : to mark units of measurement on an instrument such so that it can measure accurately [sic]” – Cambridge Dictionary. Therefore, to deliberately change the (electronic) markings on a thermometer so that temperatures below -10C are recorded as equal to -10C is to introduce a calibration error. NB. That’s quite different to a thermometer with a range >= -10C.
I’m flying to Murmansk in a few hours time. I’ll bet their thermometers can handle < -10C!
Nick Stokes, can you point me to exactly where the report you link denies Jennifer’s claim that they respond more quickly?
I have tried reading the document around the areas where they discuss temperature probes (searching for “platinum” and/or “probe” in the text) and I can currently find no such denial of Jennifer’s claim.
Michael Hart,
“can you point me to exactly where the report you link denies Jennifer’s claim that they respond more quickly?”
The document that Jennifer linked says:
“Therefore, to deliberately change the (electronic) markings on a thermometer so that temperatures below -10C are recorded as equal to -10C is to introduce a calibration error.”
They aren’t doing anything like that. The transmission card (MS11) has a manufacturer’s limit of -10C. The BoM isn’t calibrating anything there. And the temperatures aren’t recorded by the instrument as being -10C. The device simply stops transmitting. The -10C is just the lowest temperature recorded (accurately) before it stopped.
Having spent a good part of last week calibrating instruments let me jump in here.
A measurement can be made between two calibrated limits (low limit and high limit) or the instrument can be calibrated over two points within a reporting range.
Often the low limit is zero, but in the case of temperature it is some lower temperature. The upper limit is often called the ‘span calibration’ and there may be additional values between them to get linearity if needed.
The BOM instruments were calibrated at the low end using -10 C and presumably at the upper end with some value above the expected maximum, such as 60 C. Because they set the card to reject as ‘outside the calibrated range’ all values below -10 C (and probably above +60 C), the low values were not recorded, returning a bad-signal report such as 9999.
If, and it is a big if, the warrantied performance of the instrument is that it only reports correctly values between, not outside, the span limits, AND it was agreed that all readings falling outside shall be ignored, then ‘the apparatus’ has not been properly calibrated for the measurement task at hand.
If their protocol had permitted the logging of temperatures outside the calibration limits, fine. But those measurements have a slightly higher uncertainty because the calibration points themselves have uncertainties, and the true reading is being projected from the calibration points outside the reporting range. But they didn’t do that.
So the screw up is a mismatch between the data quality protocol (only record data between the calibration points) and the range of measurements to be made. The direct cause of the dropping of values <-10 C is the calibration of the instruments on the low end at -10.
They have two choices: permit the logging of numbers outside the calibration range, or recalibrate the instrument at -15 C to +60 C, for example.
Calibration error?
Calibration issue, management error.
What would I do? With the low limit so close to expectable values, I would have permitted the logging of values down to 20% of the calibration range below the low limit, i.e.
-10 - (60 + 10)/5 = -24 C
It would be far better to have a value with a slightly higher uncertainty than no measurement at all.
Lastly, though briefly described in the discussion, the typical behaviour of a logging temperature device is to record the Min and Max and average all the one-second readings. The reported average is not the sum of the Min and Max divided by two. Devices like an OPUS 10 from Lufft produce exactly this type of output.
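A minimal Python sketch of the two behaviours just described; the -10/+60 C span, the 9999 bad-signal code, and the 20%-of-range extension all come from this comment, and the function names are mine:

```python
# Reject-outside-span logging vs. logging beyond the calibrated range with a
# higher-uncertainty flag.
LOW_CAL_C, HIGH_CAL_C = -10.0, 60.0
BAD_SIGNAL = 9999

def log_strict(temp_c):
    """Current protocol: any value outside the calibrated span is lost."""
    if temp_c < LOW_CAL_C or temp_c > HIGH_CAL_C:
        return BAD_SIGNAL
    return temp_c

# Permit values down to 20% of the calibration range below the low limit:
# -10 - (60 + 10)/5 = -24 C.
EXTENDED_LOW_C = LOW_CAL_C - (HIGH_CAL_C - LOW_CAL_C) / 5.0  # -24.0

def log_extended(temp_c):
    """Alternative: keep the reading, flagged as outside the calibrated span."""
    if EXTENDED_LOW_C <= temp_c < LOW_CAL_C:
        return (temp_c, "below calibrated span: higher uncertainty")
    return log_strict(temp_c)

print(log_strict(-10.4))    # 9999 -- a Goulburn-style morning is simply lost
print(log_extended(-10.4))  # (-10.4, 'below calibrated span: ...')
```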
Nick Stokes, I asked you for evidence of your claim where you said it was. You didn’t provide it, but you then linked to another document without apology for deceiving readers. Why should I take you seriously again when you waste people’s time like this?
(bold mine)
https://www.spectator.com.au/2017/09/not-really-fit-for-purpose-the-bureau-of-meteorology/
If the above is true (regardless of how they account for it), why would this be a “perfectly reasonable” process?
“If the above is true (regardless of how they account for it), why would this be a “perfectly reasonable” process?”
The bolded stuff about a lower limit is a nonsense. Temperatures below -10°C are extremely rare in Australia. The Bureau in its report lists just six stations (out of many hundreds) with this equipment that have gone below -8°C, at any time, ever. Goulburn’s -10.4 was an all-time record. These events may excite record aficionados, but they don’t distort climate averages.
It is perfectly reasonable because, as they say, the platinum resistance thermometers have similar inertia and response time to the old LiG thermometers. No-one put those readings through a digital smoothing process.
Nick Stokes, the Goulburn temperature is far from being an all time record low. As usual, climate alarmists have to rely on falsehoods… they just keep on being caught out, like you.
Thanks Nick.
“Temperatures below -10°C are extremely rare in Australia. “
By that logic shouldn’t they place limits on the upper register?
For example, if the highest ever recorded temp is 59 C, shouldn’t a limit be placed there? If not, why not?
I’m genuinely trying to learn, btw, not trap or otherwise ad hom you 🙂
“far from being an all time record low”
Well, “far from” is a falsehood. But yes, a temperature of -10.9 was recorded in August 1994. It is still true that these events are very rare, and have no significant effect on average climate.
“It is still true that these events are very rare, and have no significant effect on average climate.”
Until they aren’t rare, and until they do have an effect. Would you argue that a cap on the upper recorded temperature be placed as well for the same reasons?
It would seem to be rare that a temp higher than 59C would be recorded there as well, hence, why shouldn’t we cap the upper limit to 58.1C, since it would appear we’re willing to give at least .9 on the low end?
Nick: “It is still true that these events are very rare, and have no significant effect on average climate.”
Where would you say the line can safely be drawn to leave out data that would “have no significant effect”? If the temp has only fallen to -8 C six times in history, ever, could we leave everything below -7 out and hope it has no significant effect also? What if Australia suddenly cools and there’s a rash of -10 C temps that just get left out because, hey — it hardly ever gets that cold, so why should it ever?
Is global temperature now the equivalent of the judges’ scoring in figure skating events, where the high and low scores are thrown out and just the middle averaged? As a scientist, why would one EVER not want to record ALL the data ALL the time? If you have it and keep it, it can be analyzed; if you never record it, it’s gone forever.
I have never in my life seen a group so disdainful of actual, raw, data from the field as “climate scientists.”
“why shouldn’t we cap the upper limit to 58.1C”
They do. The BoM agrees that -10 is inadequate for high-altitude southern regions, and is replacing MS-11 with MS-12. From their report:
“The MSI2 can record temperatures over a broader range (nominally –25°C to +55°C) than the MSI1 (nominally –10°C to +55°C) “
I have never in my life seen a group so disdainful of actual, raw, data from the field as “climate scientists.”
There is no involvement of climate scientists here. BoM has for many decades recorded local temperatures for local information, as in Goulburn and Thredbo. This data is not entered into any climate databases. It is not used by climate scientists.
“As a scientist, why would one EVER not want to record ALL the data ALL the time? If you have it and keep it, it can be analyzed; if you never record it, it’s gone forever.”
Indeed, and this is exactly the problem a layman such as I has with this seemingly dismissive view of raw data. How could it be possible that exact data is less expressive of truth than inexact data?
Moreover, it would appear only the low end is dismissed. Why? Inconsistency breeds questions of contradiction and rightly so.
You don’t have to be a scientist to see the funny logic in this approach.
“They do. The BoM agrees that -10 is inadequate for high-altitude southern regions, and is replacing MS-11 with MS-12. From their report…”
Thanks Nick, if true, that would seem to be the most logical approach to gathering data.
Do you have a link to the report you’ve referenced please sir?
I found the Australian BoM but…I’m lazy. 😛
Btw, Nick:
Just for the record (since I didn’t actually correctly cite the highest temp ever recorded in Australia, which appears to be 50.7C at Oodnadatta Airport in 1960):
“The MSI2 can record temperatures over a broader range (nominally –25°C to +55°C) than the MSI1 (nominally –10°C to +55°C) “
Then would you have agreed to capping the highest record-able temp of the MS-11 or 12 at 49.8C, 0.9 below the highest temp ever recorded in Australia?
If not, why not?
Nick Stokes September 11, 2017 at 2:47 pm
Temperatures below -10°C are extremely rare in Australia
Is your argument that anything “extremely rare” should be excluded?
So any extreme weather event and any annual temperature anomaly that is claimed to be “unprecedented” should be discarded as well, because they are extremely rare.
The Earth is 4.5 billion years old, claiming something that has happened recently is “unprecedented” is frankly preposterous. Pretty sure you know this but obviously you don’t care.
“Then would you have agreed to capping the highest record-able temp of the MS-11 or 12 at 49.8C, 0.9 below the highest temp ever recorded in Australia?”
That is, 50.7 – .9 (that which is allowed on the low end) = 49.8C
If you would object to the upper cap, why would you?
sy
“Do you have a link to the report you’ve referenced”
It’s here. Page 46.
Nick – many thanks!
Ok, Nick’s “extremely rare” claim has been done to death. Trouble is, excluding them skews the averages. Data from the BoM is about as reliable as a Reliant Robin.
In other words, GARBAGE!
Nick: “There is no involvement of climate scientists here. BoM has for many decades recorded local temperatures for local information, as in Goulburn and Thredbo. This data is not entered into any climate databases. It is not used by climate scientists.”
I refer you to your post of Aug. 8, “BoM raw temperature data, GHCN, and global averages,” at https://wattsupwiththat.com/2017/08/08/bom-raw-temperature-data-ghcn-and-global-averages/.
You said, “in Australia, the raw station data is immediately posted on line, then aggregated by month, submitted via CLIMAT forms to WMO, then transferred to the GHCN monthly unadjusted global dataset. This can then be used directly in computing a global anomaly average.”
That directly contradicts what you’re saying now. Can you explain the discrepancy? Do only these “local” stations that don’t get sent to the WMO and GHCN use those cards with the cutoff temps?
James,
“That directly contradicts what you’re saying now.”
I said earlier there:
“I switched [to Melbourne from Goulburn] because I am now following a post from Moyhu here, and I want a GHCN station which I could follow through.”
It is GHCN stations that are submitted via CLIMAT forms. Goulburn was certainly not one of those. But yes, the BoM grades its stations into three tiers, of which the top are the ACORN stations, which include the GHCN stations. Of these, only Canberra has any likelihood of getting to -10°C, and hasn’t since 1972, although it did get to about -8 in July. BoM says ACORNs have been prioritised for upgrade to MS12, which go to -25C. Only top tier stations get into any climate database.
“only Canberra”
Oops, forgot Cabramurra, and maybe Butler’s Gorge. But they are few.
isn’t reading such a chore?
I so enjoy talking-point bulletins; they make life so much easier.
Nick, when you don’t understand the article, try re-reading it. Then you might not make such a fool of yourself… but a lack of comprehension and critical appreciation seems a hallmark of warmist religionists.
It seems he will defend BoM whatever they do. The usual fall back position being that the trends are not affected.
DaveS September 11, 2017 at 3:57 pm
It seems he will defend BoM whatever they do. The usual fall back position being that the trends are not affected.
Stokes and Mosh always trot out this ridiculous argument when the gatekeepers get caught red-handed. If the adjustments (manipulations) don’t truly matter, why make them at all?
Common sense says this is ridiculous BS.
What it does show is that confirmation and political bias has sadly corrupted this field of science.
Nick,
It’s very simple: contrary to what the numbskull Senator whatshisname said, there is heaps of empirical evidence regarding ‘global warming’. The real issue is that its quality is suspect, to say the least.
I think it is junk insofar as the level of precision required to properly assert the AGW case, or even the GW case, is concerned. Dr Marohasy is simply focussing on another aspect of the ‘unfitness for purpose’ of this junkformation.
The CAGWarmistas are the ones making the assertion. Where’s the substantive evidence? It is all so polluted by UHI, calibration, station siting stuffups etc etc that the land surface temp is just junk as far as the trend assertion is concerned.
It is up to BOM and CRU and NOAA etc al to firmly and unequivocally establish the accuracy and veracity of their data. They are the ones ‘prosecuting’ the case are they not? Or is this just what it seems, a religio-ideological persecution?
And the fact that this issue makes great media copy, with headlines of DEADLY GLOBAL WARMING on offer as every new paper is published, is straight out of Dr Goebbels’ playbook. The Nazis would be so proud of the CAGWarmist Einsatzgruppen.
Another excuse to do some after the fact adjustments.
It won’t be surprising when these new adjustments make the data better match the output of the models.
Jennifer’s last sentence sounds like a statement, but ends with a question mark.
Further, platinum thermistor instrument calibration was never discussed.
What is clear from JM’s remarks is that BoM has adopted methods with their AWS that would be consistent with a warm-the-present-relative-to-the-past bias. That has nothing, as I can see it, to do with equipment calibration, but intentional bias by BoM.
Pt100 RTDs are calibrated like any other temperature device: by setting an offset in the electronics that controls the readout, and checking it across a range using a calibrated Standard Device for comparison. The range given above is -10 to +50. Temperature reading devices are usually calibrated twice a year because they tend to remain ‘fixed’ in their behaviour for a long time. In the case of RTDs they typically drift less than 0.02 C per year.
The BOM temperature trend starts ~1910 because, they say, the prior records are unreliable.
This raises the question generally how much of the apparent historical temperature trend (Australia & global) is due to improved observational techniques, dodgy adjustments aside.
For instance clearly the global volcanism historical hockey-stick trend is an artefact of detection:
It’s analogous to the macro-scale where climate events are purported to be more frequent, powerful or extreme whereas that impression is an artefact of higher living standards — modern technology, reporting, capital investment in ‘weather-prone’ areas etc. — living standards ironically that are directly due overwhelmingly to the use of the fossil fuels that the hysterics want to ban.
Your point is well made. What is interesting is the series of articulations you can see in it. That curve correlates with the evolution of communications and communications technology, including speed, range, and volume of information, and the growth of population. You can see, for example, three trend shifts: two subtle ones, one beginning around 1150 and a second right about 1500, and a third, most dramatic one between 1750 and 1800. The first corresponds to the era of the crusades and to the Ottoman expansion. While this period is commonly discussed in relation to political and religious patterns, it also marks a period of “hybrid” invigoration of early, formative science as Muslim and European scientists began communicating more frequently and consistently. The exchange included a good deal of geographic information. The second change comes along with the beginning of the age of exploration, as Portuguese, Spanish, English, French, Dutch and German explorers pushed into the unknown for fun and profit; more importantly, it corresponds with the invention of the printing press, allowing those discoveries and their accompanying maps to be published more quickly and more widely. The third uptick matches the invention of the marine chronometer and the first truly effective means of ascertaining longitude, leading to an increased effort in exploration and mapping. It also matches the extreme period of European expansion. So, sadly, you are quite right. That curve is not likely to have much to do with geological events per se at all.
Something that troubles me about the BoM procedure is that they are only recording the highest and lowest of the 60 temperatures measured each minute. Electronic noise (such as from thunderstorms or commutators on motors) tends to be random positive and negative spikes (also commonly, a power line-frequency hum). Unless the noise source is known and characterized, one cannot assume that the spikes will be of equal amplitude or that there won’t be a bias associated with the noise. Worse still, one is then only measuring the noise. The recorded 1-minute temperatures should be averaged to eliminate any impulse noise, and then the diurnal highs and lows extracted from the daily collection of 1-minute temperature averages.
I think that some of you electrical engineers should weigh in on this.
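Not an electrical engineer, but the statistics of the worry above are easy to demonstrate; in this invented Python example a single one-second impulse passes straight into the transmitted “max” while a one-minute mean dilutes it sixty-fold:

```python
# Min/max of 60 one-second samples vs. their mean, with one impulse spike.
import random

random.seed(1)
true_temp = 15.0
samples = [true_temp + random.gauss(0.0, 0.05) for _ in range(60)]
samples[17] += 1.2  # a single positive noise spike (e.g. EMI)

print("max :", round(max(samples), 3))       # captures the spike in full
print("min :", round(min(samples), 3))
print("mean:", round(sum(samples) / 60, 3))  # spike diluted by a factor of 60
```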
Hard to think of any noise source coupled via EM that would have a DC component…
Hard to think of any noise source coupled via EM that would have a DC component…
Depends on how many samples you use for averaging.
Also – is the sampling synchronous with the source? Not likely – but possible. That will add a bias.
Leo,
While it isn’t strictly a DC component, if the sensors are clipping all temperatures below a given threshold, then for days with cold temperatures, there might well be a positive bias for the random noise.
There is also sensor and measurement noise – besides environmental noise. That is usually small. On the order of .001°C on a well designed unit. Probably no worse than .01°C on a cost constrained device.
>>
I think that some of you electrical engineers should weigh in on this.
<<
My expertise is more in line with digital circuits rather than linear ones. However, if they are using an op amp in the circuit, then the noise should be rejected at the input of the op amp. (Noise will usually appear as a common-mode signal on both the plus and minus inputs of the op amp. The input side of an op amp is a differential amp that rejects common-mode signals.) That doesn’t say anything about the rest of the circuit’s noise immunity.
Jim
With so much at stake, why aren’t meteorology organizations required to comply with quality control standards such as the ISO 9000 standards?
Because ISO 9000 has you labelling what is in cupboards, not building good instruments.
A decent voltage instrument can resolve 0.067 microvolts, which means an RTD can be read (and observed moving around) at 0.002 degrees. As that is both up and down around a central point, the total change is 0.004, which is less than half a digit change in 0.01. That is the root of the claim that a Pt100 RTD can read to 0.01 C. To get an extra significant digit requires making heroic efforts to shield things (etc).
A very good mass balance control head can resolve 2-12 mA with similar precision.
Jennifer, although you don’t really discuss a calibration issue in this post, the question of calibration runs all through the global temperature records. My favorite way to describe the problem is to ask how confident a person might be using high/low temperature readings collected by old men in bathrobes, wearing bi-focals, using mail order thermometers in South Dakota on a February day in blowing snow? Could you confidently resolve data collected by these folks to 0.1C? 1C? +/- 3C?
This is why I mostly ignore the data sets based on ground stations, even those that have been automated to some extent. I favor the satellite data sets simply because they self calibrate prior to every observation.
BTW, my example is usually set in 1890.
“… readings collected by old men in bathrobes, wearing bi-focals, using mail order thermometers in South Dakota on a February day in blowing snow?”.
=============================
LOL, “… readings collected by old white men in bathrobes, wearing bi-focals, using mail order thermometers in South Dakota on a February day in blowing snow?”.
Fixed.
Racist. 🙂
In the engineering world of manufacturing as well as in science generally there are very strict requirements for regular calibration of all measuring instruments. The procedures including frequency of calibration are well established and must be complied with to obtain or retain accreditation to ISO standards.
Without this accreditation businesses, laboratories, etc. cannot survive in the commercial world with any credibility or for very long because their customers insist on evidence of accreditation.
Calibration of all measuring instruments is vital, from electronic devices thru to micrometers and even measuring tapes.
To even think that the BOM has not been complying with calibration requirements in temperature measurement is mind blowing. Do they think they are “above the law”?
Do they think they are “above the law”?
Beyond it.
“To even think that the BOM has not been complying with calibration requirements in temperature measurement is mind blowing.”
Agreed. It defies reason that global economic policy might be based on data that has virtually no quality control. But that’s what’s happening.
“But that’s what’s happening.”
Still no-one can say what calibration requirement has been failed. Let alone how it affects global economic policy.
Nick, the point I made concerning the historic temperature record is there was no calibration done at all and that’s how it fails.
It’s not that calibration was done and done wrong; there was no calibration at all.
And clearly, global economic policy is being affected by it. See “IPCC” in Wikipedia for a discussion.
“concerning the historic temperature record is there was no calibration done at all”
How can you calibrate a historic temperature record? What would you do?
The post seems to be talking about calibrating measurement instruments. That at least is a recognised scientific activity. It just doesn’t identify the alleged failure.
“How can you calibrate a historic temperature record? What would you do?”
Nothing. I’d use it with estimated error bars that were very large; in essence, “take it with a grain of salt”.
There’s really nothing to do other than rely on contemporary, well calibrated sources. That’s what I’d do.
@Nick
Something along the following lines:
Have the new sensor and the LIG in the same enclosure for a minimum period of 10 years, during which time the LIG thermometer is being read and recorded using the same practice as that station used in the preceding 30 or if records exist preceding 60 years.
If there were differences in TOB in the past then during the overlap 10 year observing period, one would observe the LIG thermometer with each TOB used at that station, making the appropriate note of TOB on every entry in the record .
Not difficult. Just common sense, as one tries to do with any splice.
Richard,
That isn’t calibrating the record. It is calibrating the instruments. Do you have any evidence that it wasn’t done?
Nick
You know that you cannot calibrate a record. A record is simply the presentation of a collection of statements of fact. Of course, you can carry out a quality check, to ascertain that no transcription errors were made etc.
The original point being made was that there was not any attempt (or perhaps not a proper attempt) to ensure the continued integrity of the record (i.e., the time series measurements). Perhaps the record should have been completely cut when the LIG thermometer was replaced, and two records with no splicing produced: one covering the LIG measurements, the other covering the platinum resistance measurements, making the point that no comparison between the two records should be made.
If one wants to have a continuous record by splicing two different devices, then one needs to calibrate the new device against the old device, and that should have been done individually in each enclosure using a reasonable overlap time.
“The original point being made was that there was not any attempt”
It wasn’t the original point of the OP. It’s yours, and you have no evidence at all for saying so. In fact they did have overlap, and a lot of study on the change. If you look at the metadata for Sydney Observatory here, they installed the AWS probe 1 April 1990. They removed the mercury 31 May 1995. Cape Otway, installed probe 15 April 1994, removed mercury 5 Dec 2012.
No calibration is by definition a calibration fail.
Nick, as you already know, you can’t calibrate data that has already been taken.
The only thing you can do is to adjust the error bars to account for the new unknowns.
Absent correct, proper and verified calibration records for ALL instruments which are being used and have been used [a HUGE chore], the raw data needs to have a suitable error figure applied – to ALL of it. I suggest that plus or minus 2 deg C be that error until a tighter error bar is justified by independent analysis. That would imply that NO temperature in the record be considered more precise than 2 deg C. “Looking for a warming of 1 deg C — sorry, we cannot depend on our data for better than 2 deg accuracy”.
And precision in the reading (0.001 deg?) is NOT the same as accuracy. Accuracy is the actual measured deviation from a standard [accuracy of the standard being taken into account] at ANY temperature in the calibrated range. It represents the unknowable in all future use of the data. Only after that can you start averaging, calculating mean values, applying statistical tricks, etc.
This has gone off in too many side-halls. The question is not ‘calibration’ but ‘recalibration’ of instruments in use.
There is a claim that nothing has been recalibrated for 20 years after installation. True or not? Partly true?
There is a claim that the range over which the instruments have been calibrated is not up to snuff: the calibrated low limit is exceeded by actual temperatures and the electronics ignores such numbers because it has been programmed to do so. That is a separate issue and is apparently not in dispute.
So, were the instruments recalibrated after installation 20 years ago, or not?
If they were, when? According to which protocol? Where is the BOM protocol for the recalibration of weather station temperature measuring devices? Who is qualified to perform these recalibrations? Who certified them and who checked their work?
“Temperatures below -10°C are extremely rare in Australia.” This would be because the BOM in Australia does not record them! Go, Jennifer, you caught the BOM with its pants down yet again.
Records in Australia go back way beyond the introduction of AWS. Temperatures below -10 were very rare then too.
Charlotte Pass was replaced by Thredbo Top, which is somewhat higher.
Are the BOM’s AWS systems based on the AWOS or ASOS systems in use in the US? Both are primarily used at airports for pilots to make aeronautical decisions. I would expect for safety reasons they may have a slight warming bias built in, as planes lose performance in the heat. These systems were designed to replace aviation weather observers, not for monitoring long term climate trends.
James,
However, to be the Devil’s Advocate, pilots also have to worry about icing conditions. It is best if the thermometers are as accurate as possible.
The worry with icing is primarily flying through visible moisture (clouds) with air temperature below freezing. That is why IFR-certified planes are required to have an outside air thermometer. On the ground there are a number of de-icing agents and anti-icing agents that are sprayed on the plane prior to take-off during cold weather. I did read a description of the different chemicals, and it is complicated.
As far as take off performance is concerned, temperature plays a big factor in how much runway you will need. I have read about an ASOS being relocated to a windier spot as the pilots felt that it did not accurately reflect actual wind conditions. Tail winds also increase the runway length required to get airborne.
In effect, what I am saying is that weather stations that were installed to facilitate safe aviation may not be that good for accurate climate comparison over time.
James,
De-icing agents cost money. Airlines don’t want to delay flights and spend money if it isn’t necessary, so they need accurate ground and air temperatures.
You said, “In effect, what I am saying is that weather stations that were installed to facilitate safe aviation may not be that good for accurate climate comparison over time.” Anyone who is objective will agree that meteorological stations that were intended for monitoring weather leave a great deal to be desired when used for climatology. However, that’s all we have historically.
“The worry with icing is primarily flying through visible moisture (clouds) with air temperature below freezing”
Ground sub-zero conditions at Australian landing strips are associated with morning frost – clear sky.
@Nick Stokes, if you read the Association of European Airlines recommendation for de-icing, it recommends measuring the temperature of the airplane. This will be different from the outside air temp, as the plane may have very cold fuel in the tanks after flying, etc. I suggest you read paragraph 3.3.5: it states that wings can be cold-soaked in air temperatures up to 15 degrees C, and therefore need de-icing.
I think you will find that they would rather spend the money on de-icing agents when in doubt than crash the plane. Experience has shown repeat customers are not dead.
https://www.icao.int/safety/AirNavigation/OPS/Documents/aea_deicing_v23.pdf
Whether or not a plane is de-iced is the call of the captain and is made after a visual inspection of the wings.
They don’t rely on thermometers to tell them whether to de-ice or not.
The BOM’s own data tools provide interactive charts, and it’s clear that from 1999 the trend in diurnal temperature range changes abruptly after the year 2000, which correlates with sensor changes. This would suggest that the new sensors are faster-responding than the older LiGs. This definitely brings into question the integrity of the BOM’s data and its value as a reference for climate trend analysis. There’s also the issue of high temperature records.
The effect of a lighter instrument (less smoothed) is higher highs and lower lows. As long as the averaging period is adequate, it doesn’t make much difference. Using the older instruments and changing the averaging period would alone make a detectable (significant) change to the claimed highs and lows.
This is a business fraught with opportunities for misrepresentation, unfortunately for the consumers of the information.
The problem I have with this is the significance of a high and low without considering the enthalpy of the air at the time. Unless the heat capacity of the air measured is considered, the numbers don’t mean much. A ‘new high’ might mean the same as before with less humidity. What does that prove?
‘The problem I have with this is the significance of a high and low without considering the enthalpy of the air at the time. Unless the heat capacity of the air measured is considered, the numbers don’t mean much. A ‘new high’ might mean the same as before with less humidity.’
This is confounded by weather reports citing the high temperature of the day as setting some new record – today in Sydney it was ‘the hottest for 7 years’ – without measuring the humidity and relating that to the apparent temperature felt by the human body, which with windy weather and low humidity is in fact cooler.
As you point out, the temperature is not an accurate reflection of the heat content of the air; so today, with low humidity, is an example of the misdirection of temperature in expressing the heat content of the air in Sydney.
The crux of this issue is the thermal response time of the equipment used by BOM, and only BOM can provide full details of the equipment that they use. BOM should be pressed to provide this.
I frequently make the point that if we really want to know whether there has been any temperature change, then it is necessary to retrofit the stations with the same LIG thermometers used at each station in the past (say during the 1930s/1940s) and then observe today using the same practice and procedure as used by each station in the past (1930s/1940s). We need to get like-for-like RAW data that requires no adjustments whatsoever, so that a direct comparison can be made. Obviously, we also need to carefully consider whether there are any individual siting issues and/or changes in local environment. I would automatically disregard all stations where there is any form of siting issue or local environmental change, since it is difficult to assess what impact that has on the observed temperature. I would only use what I would call prime stations.
Nick Stokes commented:
I am not surprised by the suggestion that LIG thermometers have a thermal response time of around 40 to 80 seconds. I always allow a minute to let such thermometers settle, although my impression is that they generally respond within around 30 to 45 seconds. I am surprised by the assertion that platinum resistance thermometers have a similarly slow response time, since a very quick internet search suggests that the thermal response time for platinum resistance thermometers is just a few seconds, and I came across a NASA paper suggesting that it can be less than a second (but of course that was no doubt for space industry applications).
The ATMOS DAS equipment is described in this report, sec 4.1
What the BoM fast facts said was
This means that each one second temperature value is not an instantaneous measurement of the air temperature but an average of the previous 40 to 80 seconds. This process is comparable to the observation process of an observer using a “mercury-in-glass” thermometer.
NS,
Maybe we are reading different documents. The one that I read (your link, sec. 4.2.4.1) said:
“All valid one-second temperature values within the minute interval are assembled into an array. If there are more than nine valid one-second temperature values within the minute interval, then a range of one-minute statistics are generated from these values. These include:
– an instantaneous air temperature is the last valid one-second temperature value in the minute interval;
– one-minute maximum air temperature is the maximum valid one-second temperature value in the minute interval; and
– one-minute minimum air temperature is the minimum valid one-second temperature value in the minute interval.”
I think that they should be reporting just the statistics for the 1-minute interval, and not the instantaneous 1-second readings. They do a rate check on the 1-second readings, but they should probably also look for individual 1-second readings that are more than 2 or 3 standard deviations from the mean. The range test (–50°C to +70°C) leaves a lot of room for mischief when temps below −10°C are rare. Also, instead of using the last valid one-second temperature in the array, they should probably use a moving average of the last several seconds of valid temps.
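Something like the following is what I have in mind – a minimal sketch in Python, not BoM’s actual processing; the 2-sigma screen and the 5-second trailing window are my own assumptions:

import statistics

def one_minute_stats(samples, sigma_limit=2.0, tail_window=5):
    # Summarise up to 60 one-second readings (deg C) for one minute.
    # Screen out readings more than sigma_limit standard deviations from
    # the minute mean, then report min/max of the survivors and a trailing
    # moving average in place of a single 'instantaneous' reading.
    if len(samples) < 2:
        return None
    mean = statistics.mean(samples)
    sd = statistics.stdev(samples)
    valid = [t for t in samples if abs(t - mean) <= sigma_limit * sd]
    if len(valid) < 10:  # mirrors the 'more than nine valid values' rule
        return None
    tail = valid[-tail_window:]
    return {
        "minute_min": min(valid),
        "minute_max": max(valid),
        "instantaneous": sum(tail) / len(tail),
    }

Reporting the trailing average rather than the final one-second value would make the “instantaneous” figure far less sensitive to a single noise spike.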
Clyde,
“Maybe we are reading different documents”
Yes, we are. I’m quoting from the “fast facts” doc that Jennifer links, but I did read that one too. It’s actually describing the hard-wired protocol of the commercial MSI1 card; it isn’t BoM’s choice. But what they are saying in my quote is that it really doesn’t matter: the platinum thermometer doesn’t change much from second to second, so any reasonable summary statistic for the minute will do. I expect the MSI1 makers chose that scheme to suit their storage, or lack of it.
NS,
Sorry, I missed this the first time through.
You said, “The platinum thermometer doesn’t change much from second to second, so any reasonable summary statistic for the minute will do.” I don’t believe that for a minute! If there weren’t a concern about changes and noise, they wouldn’t have implemented range and rate tests to validate the data. You are just making an excuse to rationalise a poorly conceived data-gathering scheme. It also isn’t necessary to keep the 60 readings in an array: they could just do greater-than or less-than tests as each reading arrives and retain only the largest and smallest. The important thing, however, is that they have come up with a procedure that is inherently sensitive to noise and doesn’t really use all the information potentially present in the 60 seconds of data. It has the potential to report only outliers rather than a true estimate of the mean of the population.
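For what it’s worth, retaining only the extremes needs no array at all; a running comparison does it in constant memory (a sketch of the idea, not the actual card firmware):

def running_extremes(readings):
    # Track the min and max of a stream of one-second readings without
    # storing the stream itself.
    lo = hi = None
    for t in readings:
        if lo is None or t < lo:
            lo = t
        if hi is None or t > hi:
            hi = t
    return lo, hi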
Nick, usually your comments are constructive, but I can find no details in section 4.1 of the manufacturer of the probe, its model number, or the manufacturer’s specification/data sheet for the probe used. Nor can I find details of the Bureau’s tender specification. Essentially, all I found was:
As regards the section quoted by you, you are a very competent mathematician, so you well know that if the equipment has a response time of, say, 1 second, and you take 60 one-second measurements and average them, the result is not the same as that from a LIG thermometer with a response time of 60 seconds from which just one reading is taken (either the setting of the min or the max, as the case may be).
BOM’s contention that the one-second averaging “is comparable to the observation process of an observer using a ‘mercury-in-glass’ thermometer” is simply wrong, and the same result would only occur as a matter of fortuity.
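The point is easy to demonstrate numerically. Here is a toy simulation in Python (entirely my own sketch; the warming-plus-noise forcing and the exact time constants are assumptions). A fast probe averaged over the minute tracks the uniform mean of that minute, while a slow LIG-like instrument reports an exponentially weighted average of its recent past, and the two generally differ:

import random

def first_order_lag(air_temps, tau, dt=1.0):
    # Sensor obeying tau * dT/dt = (T_air - T): a first-order lag with
    # time constant tau (seconds), stepped forward one second at a time.
    t_sensor = air_temps[0]
    trace = []
    for t_air in air_temps:
        t_sensor += (dt / tau) * (t_air - t_sensor)
        trace.append(t_sensor)
    return trace

random.seed(1)
# Hypothetical forcing: air warming through the minute, with gusty noise.
air = [20.0 + 0.03 * s + random.gauss(0, 0.3) for s in range(60)]

fast = first_order_lag(air, tau=1.0)   # PRT-like probe, ~1 s response
slow = first_order_lag(air, tau=60.0)  # LIG-like thermometer, ~60 s response

print("mean of 60 fast samples:", sum(fast) / 60)
print("single slow reading at the end of the minute:", slow[-1])

Under any changing temperature the two numbers coincide only by accident, which is the fortuity referred to above.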
Richard,
“I can find no details in section 4.1 of the manufacturer of the probe”
Section 4.1.2 is all about the ATMOS DAS. You can read about it here. It also tells you about the associated MSI1 and MSI2 cards and their BoM history. Section 4.1.3 is about the Telmet system, which the BoM tried out somewhat unsuccessfully. If you go to the Goulburn metadata pages, p 15, it tells you the details of the probe, number and all, including installation date:
23/FEB/2012 REPLACE Temperature Probe – Dry Bulb (Now WIKA TR40 S/N – 107822-1) Surface Observations
I presume this is the probe that sits within the ATMOS DAS.
Nick
Thanks for your further comments; I appreciate them.
The BOM report does not itself provide the detail set out in the Goulburn metadata. That report does list the maker/model of the probe, and I have had a quick look for the manufacturer’s data sheet, but it in turn does not list the thermal response time, e.g., http://www.wika.us/upload/BR_TR40_en_us_18333.pdf
As you know, the AWS, the ATMOS DAS and the cards, whilst part of the system, are not the issue raised by the head article. The cards are part of a separate issue, namely the cold-temperature clipping (which you argue makes little difference to the record; if cold temperatures are truly rare events at those sites, I envisage you are probably right on that).
But materially, the official BOM report does not detail the thermal response time of the platinum resistance probe. It is the thermal response time of this probe that is at issue, and BOM are very coy about it. It ought not to be left to the reader to spend hours and hours seeking details of the central issue; that issue ought to have been addressed squarely, with full and complete particularity, in the report.
Of course there is software that collects, records and handles the measurements from the platinum resistance probe, and part of this system is BOM’s averaging of the one-second measurements, which BOM (incorrectly) claims makes the result comparable to a single LIG observation.
If BOM want to make that assertion, they should prove its correctness. You are an extremely competent mathematician, so you well know that the assertion is not correct: it would be mere fortuity if the average of one-second data collected from a probe with a response time of 1 second were the same as a single data point taken from a LIG thermometer with a response time in the order of 40 to 80 seconds.
It worries me that BOM could even make that assertion. It does not say much about the knowledge and quality of their staff.
“But materially, the official BOM report does not detail the thermal response time of the platinum resistance probe.”
And as you say, neither does WIKA. I think the reason is that the time depends on both the probe and the environment. The amount of metal determines the heat capacity, but the adjacent layer of air determines the thermal resistance, making a kind of RC product for the time constant. And the air-side resistance depends a lot on air movement, which comes down to the enclosure. So I think that when BoM quotes time constants, they are empirical measures of the device in situ.
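For what it’s worth, the standard lumped-capacitance model makes that RC analogy concrete (this is textbook heat transfer, not anything BoM has published): if the sensor has heat capacity C and the surrounding air film has thermal resistance R, then

C · dTs/dt = (Tair − Ts) / R

so the time constant is τ = R·C. Still air inside an enclosure raises R and therefore lengthens the response, even for an identical probe, which is exactly why a manufacturer’s bench figure and an in-situ figure would differ.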
The response time of an RTD is strongly dependent on the case it is put in. I frequently use a 6 mm diameter stainless-steel case, 50 mm long, because it will push into a 6 mm compressed-air fitting – the “push-in” type. I have zero expectation that it will respond within 10 seconds to a temperature change. The RTD element itself is a tiny square, perhaps 2 mm on a side, with 4 wires (if it is a 4-wire RTD). Without any casing it can respond very quickly to a change in air temperature. Are the BOM units exposed like that?
It would be useful to know if it is a 6-wire RTD, in which case it is a pair – typical for installations where precision is required over a period of time. It actually contains 2 x 3-wire RTDs in a single package.
NS,
You said, “So I think that when BoM quotes time constants, they are empirical measures of the device in situ.” That is an assumption you are making. Do you have any evidence to support that assumption?
Thanks Nick.
I am a little surprised that a quick search does not reveal the full product specification for the WIKA TR40 probe. Of course, I do not know whether this model is fitted throughout the network.
Thermal response is part of the IEC 60751 Ed. 2.0 (2008-07, ISBN 2-8318-9849-8) test regime. So either WIKA or BOM ought to have carried out such a test, and it should be in the product specification.
Personally, I would have thought that even if BOM bought WIKA products with a known specification, they ought to have undertaken an in-enclosure test, including thermal response time, as part of the calibration process.
Personally, I consider that a specific casing material should have been specified so that the platinum resistance probe, in that casing and housed in the enclosure, had the same thermal response time as the LIG thermometers being replaced. I consider that to be part of the calibration process, unless one simply truncates the record at the end of the LIG era and starts a new record from the date the platinum resistance probes came into use.
Perhaps BOM should have had 2 records, with no splicing.
Clyde,
You ask me for evidence. On this thread, no-one seems to have evidence of anything. I still haven’t been able to establish what omitted calibration the OP claims was invalidating BoM readings, let alone any evidence that it was actually omitted.
My suggestion there was based on physical reasoning, which I gave, and is supported by Crispin’s preceding comment:
“The response time of an RTD is strongly dependent on the case it is put in. “
He was referring to thermal mass, which is augmented by the casing. I was referring more to the surrounding thermal resistance; both are needed. It is futile to expect a device manufacturer to supply a meaningful time constant based on the sensor alone.
Stupidity on purpose. “History doesn’t repeat itself, but sometimes it rhymes.” We resist.
I’m still trying to get my head around the name of the device they use: first they call it a thermistor and later it becomes a thermometer. There are now arguments over whether there is even such a thing as a platinum thermistor, and over which one BOM use – and if they don’t know, who does?
In all realism, an edict is needed that anyone in the BOM found to be misrepresenting or manipulating data to fit their political beliefs, or for what they perceive to be a higher social cause, must be punished to the full extent of the law. If political pressure is applied to BOM employees to misrepresent the facts, employees must report it by a set date or be deemed complicit. Those applying pressure to misrepresent data, or threatening the jobs of those unwilling to comply, must simply be removed from society. This includes the most senior of politicians.
Nothing so sullies the integrity of humanity as the subversion of science for the servitude of politics.
Automatic weather stations were initially proposed for installation in remote areas where there is no possibility of keeping an observer. Later on, several manufacturing companies pressured UN agencies like the WMO, and bargained with local governments, to install poor-quality automatic weather stations everywhere. I myself looked into such instruments, raised doubts about the accuracy of the measurements, and in fact submitted my observations to the WMO. Money makes many things happen, as with GMOs.
Dr. S. Jeevananda Reddy
You would not be able to use platinum thermistors without calibrating them. They are very nonlinear and do not output in degrees Celsius or Fahrenheit. We use them in our borehole temperature probes. They are sensitive to about 1/10,000th of a degree and nonlinear by several degrees over 0–50 °C.
I imagine there is a problem in periodic recalibration, which would probably affect the electronics more than the sensor; the sensor itself is very stable (though nonlinear) over time.
Years ago we had to calibrate our sensors for Japanese hot springs. They needed accurate data (±0.5) up to 50 °C. We calibrated the voltage output at 24 °C using a linear conversion, and by the time we were at 50 °C the thermistor was out by over 4 degrees (we needed a 3rd-order polynomial). There was never a requirement to go below 0.0, as we assumed the borehole water would be frozen.
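The correction amounts to an ordinary least-squares polynomial fit against bath calibration points. A minimal sketch of the idea in Python (the voltage/temperature pairs below are invented for illustration; real ones come from a stirred bath against a reference thermometer):

import numpy as np

# Hypothetical bath calibration points: sensor voltage vs reference deg C.
volts = np.array([0.10, 0.35, 0.62, 0.91, 1.22, 1.55])
ref_c = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])

linear = np.polyfit(volts, ref_c, 1)  # adequate only near the fit centre
cubic = np.polyfit(volts, ref_c, 3)   # 3rd order captures the curvature

def to_celsius(v, coeffs):
    # Convert a raw sensor voltage to deg C with the fitted coefficients.
    return np.polyval(coeffs, v)

print(to_celsius(1.55, linear), to_celsius(1.55, cubic))

Towards the ends of the range the linear conversion drifts away from the reference while the cubic tracks it, which is the behaviour we saw at 50 °C.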
We bought a second calibrated thermometer to check our original calibration. The two thermometers in a water bath were often out by more than 0.5 deg relative to each other.
A scientist studying fresh groundwater asked us to build a thermistor array (up to 8 thermistors in a single probe). The lateral gradient (he argues) gives you the direction of water inflow (or outflow) into the borehole, and therefore the direction to the aquifer (assuming you have an onboard orientation system, which we do). This produced some real issues, as gradient offsets were present due to differences in thermistor calibration. Luckily it is the change in gradient that the scientist wanted, so we were safe.
Steve,
Contrary to the claims made in the article, the BoM link that Nick Stokes provided says that the sensors have to pass acceptance tests before being installed, and that they are periodically (interval not specified) checked for calibration and swapped out if they are out of tolerance. Inasmuch as the BoM seems to be operating in ‘CYA’ mode, I wouldn’t be surprised if the maintenance were only being done when it was obvious that something was seriously wrong with the readings.
“I wouldn’t be surprised if the maintenance was only being done when it was obvious that there was something seriously wrong with the readings.”
The BoM posts extensive metadata for each of its stations. I have an access system here. Here is Goulburn NSW. It tells you that the last inspection and testing was 10 July 2017. OK, that might be following its 15 minutes of fame. But if I look at another random station – say Ulladulla – I see on p 16 that the last temperature testing was 7 March 2017.
NS,
In your reading, did you run across anything that would lead you to believe that there is a regular schedule – weekly, monthly, or quarterly – for checking the instrumentation?
Now that I know you are reading the recent posts, I have to ask why you haven’t commented about my showing that your statement about how the readings are made was wrong. [September 11, 2017 at 6:00 pm]
Clyde,
“In your readings, did you run across anything that would lead you to believe that there is a regular schedule”
Yes. Section 4.4 of the long doc I linked talks a lot about inspections and maintenance protocols. It refers to documents that don’t seem to be online, such as:
7 Inspections handbook 2010. Bureau of Meteorology. Internal document 60/3317.
10 Programme of maintenance 2017–18.
12 Calibration of working reference SPRT and IPRT 2016. Document RIC-TP-WI-002, version 3.0.
13 Calculation of uncertainty for temperature working references 2016. Instrument test report 709.
14 Calibration of industrial platinum resistance thermometers and Agromet probes (IPRTs) 2003. Standard procedure number Pt100_02SCP01.
15 Verification of field IPRT (inspection and field) 2016. Document RIC-TP-WI-004, version 3.0.
The documents were available to the review panel, and they concluded:
“overall, the Bureau’s field practices are of a high standard, and reflect accepted practice for meteorological services worldwide”
NS,
You said, “It refers to documents that don’t seem to be online, such as:…” Do I see a pattern here? The author of this article basically claims that BoM is doing shoddy work. When people try to verify the claim, they discover that key documents aren’t available online and that BoM does not seem to be in compliance with the WMO sampling standards. Worse yet is a reporting procedure that favours impulse noise over any commonly practised measure of central tendency. You keep giving them the benefit of the doubt!
NS,
I asked about regularly scheduled maintenance and calibration and you responded, “Yes. Section 4.4 of the long doc I linked talks a lot about inspections and maintenance protocols.” However, you had previously remarked that the Ulladulla station was last checked six months ago. Is it your position that all stations are checked biannually (a lot can go wrong in half a year), or that they are only checked when they appear to be malfunctioning, as I suggested? Either way, it leaves a lot to be desired with respect to reliability.
Steve
Another point about these types of sensors: when they degrade over time, they all degrade in the same direction, so uncalibrated sensors will give a biased reading that trends in only one direction.
That is true. Three things to consider: 1. sensitivity, 2. relative accuracy, and 3. absolute accuracy.
These sensors have high sensitivity (0.0001 °C), good relative accuracy (±0.15) and moderate absolute accuracy (±0.5).
Relative accuracy is the reading after correction for sensor nonlinearity. The output is then almost linear, but not necessarily accurate across the temperature range.
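To make those terms concrete with the numbers above: the sensor can resolve a change of 0.0001 °C (sensitivity); two corrected readings from the same probe can be compared to within about ±0.15 °C (relative accuracy); but either reading may still sit up to ±0.5 °C away from the true temperature (absolute accuracy). For trends from a single instrument the relative figure is what matters; for comparing different stations, the absolute figure does.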
I sense it is starting to unravel for the BoM and similar bodies. They scuppered the last inquiry, but I doubt they will the next. The Minister at the time was Greg Hunt, a simple apologist for the “scientists”, who could not see them doing anything untoward.
That was not unlike me until about 15 years ago, and I am sure a vast array of others likewise believed institutions such as NASA, NOAA etc. were beyond reproach. Sadly, that appears not to be the case.
Now, thanks to Dr Jennifer and others, the pressure is mounting.
If, as I fear, serious inconsistencies are found, then I believe there are grounds for criminal action against the perpetrators.
No one is asking me, but here goes: the only reason to take one-second readings is to judge how good your recording device is and how well it is sited. The following procedure would work (a sketch in code follows the list):
– Take the 60 readings for the minute, exclude the highest and the lowest, sum the remaining 58 and divide by 58. This is the average for the minute.
– Record this average, plus the highest and lowest, and throw out the rest of the data, since it has no further use at this point.
– Use the highest and lowest only as a check: if they differ significantly from the average, the instrument or the siting should be questioned, especially if it keeps happening. These high and low temps should be used for nothing else; they are excluded from the average.
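A rough sketch of that procedure (my own code, not anything BoM runs; the 1 °C flag threshold is an arbitrary assumption):

def minute_summary(readings):
    # Trimmed-mean summary of 60 one-second readings: drop the single
    # highest and lowest, average the remaining 58, and keep the extremes
    # only as a quality-control flag.
    assert len(readings) == 60
    ordered = sorted(readings)
    lo, hi = ordered[0], ordered[-1]
    trimmed_mean = sum(ordered[1:-1]) / 58
    # Flag the minute if either extreme sits more than 1 deg C from the
    # trimmed mean (threshold chosen purely for illustration).
    suspect = (hi - trimmed_mean) > 1.0 or (trimmed_mean - lo) > 1.0
    return trimmed_mean, lo, hi, suspect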
I’m not sure I see this as much different from what the BoM is doing, except that somehow they are using these high/low intra-minute temperatures in calculations somewhere??
If you believe there could be erratic readings, it would make sense to sort the 60 readings and then take reading 50 as the high and reading 10 as the low.
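In code that is a small change from the trimmed-mean sketch above (the 10th and 50th order statistics are this suggestion, not anything BoM documents):

def robust_extremes(readings):
    # Order statistics instead of raw extremes: with 60 sorted readings,
    # take the 10th as the 'low' and the 50th as the 'high', discarding
    # the most extreme readings at each end.
    assert len(readings) == 60
    ordered = sorted(readings)
    return ordered[9], ordered[49]  # 10th and 50th values, 1-based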
The stupidity of the masses never ceases to amaze me. Just go to the nearest mirror, say out loud, “It’s all crap”, then get on with your life.