Ooops! Australian BoM climate readings may be invalid due to lack of calibration

Dr Jennifer Marohasy writes by email:

There is evidence to suggest that the last 20 years of temperature readings taken by the Australian Bureau of Meteorology, from hundreds of automatic weather stations spread across Australia, may be invalid. Why? Because they have not been recorded consistently with calibration.

BOM weather station 068151 – Point Perpendicular, NSW, Australia

Somewhat desperate to prove me wrong, late yesterday the Bureau issued a ‘Fast Facts’ that really just confirms the institution has mucked-up.   You can read my line-by-line response to this document here: http://jennifermarohasy.com/2017/09/bureau-management-rewrites-rules/

This comes just a few days after the Bureau issued a 77-page report attempting to explain why there is no problem, despite MSI1 cards having been installed in weather stations that prevented them from recording cold temperatures.

Earlier this year, specifically on Wednesday 26 July, I was interviewed by Alan Jones on radio 2GB. He has one of the highest rating talkback radio programs in Australia. We discussed my concerns.  And according to the new report, by coincidence, the very next day, some of these cards were removed…

You can read why my husband, John Abbot, thinks this 77-page report is “all about me” in an article published just yesterday at The Spectator: https://www.spectator.com.au/2017/09/not-really-fit-for-purpose-the-bureau-of-meteorology/

I’ve republished this, and much more at my blog.


230 thoughts on “Ooops! Australian BoM climate readings may be invalid due to lack of calibration”

  1. I thought we were supposed to be interested in the SURFACE temperature of the earth (something to do with a black body and an emissivity of 1 and a T^4 function IIRC). Surely we should start putting some sensors on the ground by now?

  2. oops – I initially misread the title of this piece as:

    … Australian BoM climate readings may be invalid due to lack of Collaboration

    surely they can fix this problem with more collaboration, we just need to stop doubting them.

    • For mostly unrelated questions except for how it relates to measuring aspects of climate …

      Recall the recent article with the picture of Northern Lights reflecting off of still water. Have we been tracking still waters? (Is it possible to measure?) Is there a trend in the amount of still water? For those that believe that the Earth is warming due to CO2 emissions, what trend would you expect to see in the amount of still waters? If hurricanes are made more intense as they claim, does this imply that there would be less overall still waters occurring? But still waters happen because the weight of the atmosphere is more forceful than ambient air/water currents for that location. How frequently does this occur? If we’re adding carbon to the atmosphere, are we increasing the weight of the atmosphere and thereby increasing the amount of still water? Are we to fear increases in still water?

      Perhaps while we track hurricanes with such intent, we should also make parallel measurements of still waters.

      • TH,
        I don’t put any stock in your speculation. “Still water” is the result of not having enough wind to ripple the surface of the water. There are underwater waves in the oceans, under significantly greater pressure from the overlying water than any surface waves experience from an atmosphere whose pressure increases minusculely from a slight rise in a trace gas! I suggest that you read this: https://en.wikipedia.org/wiki/Internal_wave

      • Clyde – thank you for the response, what I so awkwardly was attempting to speculate was that since we’ve been tracking climate disturbances, we should also be tracking climate tranquility. And, I used this as an opportunity for believers to apply climate science and predict the current tranquility trend. Maybe we could select several large lakes around the world and then define what constitutes tranquility, and monitor them.

      • I have often wondered out loud that, if we are responsible for the bad weather, who or what is responsible for the good weather. A bit tongue in cheek, I know, but human attribution shouldn’t just be in one direction, surely.

    • The 1st or 2nd weather station along I-80 in Colorado from the Nebraska state line sits 1/2 over the asphalt of the shoulder.

      It has 50% of the same problem as the infamous Arizona State University parking lot weather station.

      It sits on the south side of the interstate, along the eastbound lanes.

      From the road outward it is: weather station pole, guard rail, shoulder.

  3. Let me guess, all bad data will be kept in the climate data record as in the case of bad station data in the U.S. that was not cleaned up after closing the stations.

  4. “…the Bureau has explicitly stated, most recently in an internal report released just last Thursday, that for each one minute temperature it only records the highest one-second temperature, the lowest one-second temperature, and the last one-second temperature – in that one minute interval. The Bureau does not record every one-second value. In the UK, consistent with World Meteorological Organisation Guidelines, the average temperature for each minute is recorded.”

    How is the average temperature for each minute determined in the UK? Do they average 60 one-second readings to get the average? Or do they average the highest and lowest readings during that minute? Would the difference really be all that significant when we’re talking about such a short time period? Temperature adjustments applied later would seem to me to affect the end result much more significantly than how the one-minute average is determined. You want to get it as accurate as possible, but this seems to be an example of straining at gnats and swallowing a camel. What am I missing?

    • “Would the difference really be all that significant when we’re talking about such a short time period?”

      Depends on the response time of the sensor and environment. And that can vary. The PRT (platinum resistance thermometer) is quite good. You can get accuracy to 0.01°C if recently calibrated. The response time depends on the mass. Usually on the order of a few seconds for a change to 63% of the total change. No matter what, there is a delay.

      He: So what is the real temperature?

      She: How close do you need to know?

      ===

      Which brings up another point: calibration. I’d like to see if there is a standard method – where the calibrator is placed vs the recording PRT, and whether they track calibration changes over time. Drift should slow down over time. If not – possible problem.

      • “You can get accuracy to .01°C if recently calibrated.” ” track calibration changes over time”
        Most people do not understand “calibration”. A calibration does not guarantee any subsequent measurement accuracy. It only confirms how accurately the instrument was measuring “as found”. It is then adjusted (if necessary) to display a correct measurement as compared to a known “standard” (usually a Measurement Standard calibrated at a National Metrology Laboratory like NIST, or an intrinsic standard such as a triple-point-of-water standard).
        After a “calibration interval” in the field, the instrument is submitted for another calibration, at which point the instrument’s ability to hold its accuracy (drift) is assessed. Measurements made with the instrument during the calibration interval can then be analyzed – adjusted or discarded.
        Based on the instrument’s accuracy history at the time of calibrations, the calibration interval may be shortened, lengthened or left the same to maintain the desired measurement reliability.
        In short, an instrument may be “inaccurate” or “unreliable” as soon as it leaves the calibration station, but you won’t know until the next calibration.

      • M Simon

        Thank you for bringing up the tracking of calibration adjustments. Platinum 100 Resistance Temperature Devices (Pt100 RTDs, meaning 100 ohms at 0 degrees) can be purchased as a ‘matched pair’ and both installed in the same station. That is how the ARGO floats work. The likelihood of them both drifting exactly the same amount is low, so one is used as a reference for the other.

        The importance of routine calibration cannot be overstated for a weather station because there is literally no reference measurement of the same air parcel at the same time. It is one measurement of one ‘volume’ of air, once each. Taking a measurement per second and averaging them is a good idea in the expectation that ‘the temperature won’t change much in one minute’.

        The error propagation from 60 readings with a measurement error of ±0.002 C (I have checked that myself) is small so that when averaging temperatures from hundreds of stations the final answer doesn’t have too large an uncertainty.

        What is missing (from what I see) is the reporting of the uncertainty of the final regional or national average. Reports pretend that the initial values and the final average have the same uncertainty, which is not how error propagation works.

        A record of drift is a good start for making corrections to the last data set to reduce the uncertainty, but the final numbers carry the intrinsic characteristics of the instrument through to all final results.
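The error propagation described in this thread can be sketched numerically: averaging n independent one-second readings shrinks the random part of the measurement error by roughly √n, while any uncorrected calibration drift passes straight through to the average. A minimal Python sketch, assuming the ±0.002 C per-reading random error cited above and a hypothetical 0.05 C drift (both illustrative values, not Bureau figures):

```python
import random
import statistics

random.seed(42)

TRUE_TEMP = 20.0   # actual air temperature (deg C)
NOISE_SD = 0.002   # per-reading random error (deg C), as cited above
DRIFT = 0.05       # hypothetical uncorrected calibration drift (deg C)

# 60 one-second readings in one minute: the noise averages down,
# the drift does not.
readings = [TRUE_TEMP + DRIFT + random.gauss(0, NOISE_SD) for _ in range(60)]
minute_mean = statistics.mean(readings)

# The standard error of the mean falls as 1/sqrt(n)...
sem = NOISE_SD / 60 ** 0.5   # roughly 0.00026 C
# ...but the systematic drift passes straight through to the average.
bias = minute_mean - TRUE_TEMP
print(round(sem, 5), round(bias, 3))
```

This is why the comment distinguishes random uncertainty (reduced by averaging hundreds of stations) from instrument characteristics that survive into every final result.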

      • M Simon,
        Caution must be used when reporting the accuracy of any RTD. All such devices have a non-linear response to temperature, making the transfer function from ohms to degrees critical in regard to accuracy. Although platinum RTDs have a “nearly” linear response, it is still a curve. If you use a simple linear transfer function, you can get up to 0.38C error. Using a quadratic equation to fit the curve, you can get that down to 0.10 in the climatic ranges we are interested in. Using a high order polynomial you can probably get down to .01C, but that would require 64 bit calculations, and I’m not sure the hardware they are using is capable of that. Frankly I don’t know what kind of transfer function the automated sensors use, so unless you do, you might want to be careful about throwing accuracy numbers around. (I know you said “can get accuracy”, but many people will read that as “has accuracy”.)
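The non-linearity mentioned above is described by the standard Callendar–Van Dusen relation for platinum RTDs (IEC 60751). A sketch comparing a naive linear ohms-to-degrees conversion against inverting the quadratic form (valid for temperatures at or above 0 C); the roughly 0.37 C linear error at 50 C is in line with the figure quoted above:

```python
# IEC 60751 Callendar-Van Dusen coefficients for a Pt100 (T >= 0 C)
R0 = 100.0          # ohms at 0 C
A = 3.9083e-3
B = -5.775e-7
ALPHA = 3.851e-3    # mean slope over 0-100 C, used by the linear shortcut

def resistance(t_c):
    """True sensor resistance at t_c (deg C), per Callendar-Van Dusen."""
    return R0 * (1 + A * t_c + B * t_c ** 2)

def temp_linear(r):
    """Naive linear conversion: assumes a constant slope ALPHA."""
    return (r / R0 - 1) / ALPHA

def temp_quadratic(r):
    """Invert the quadratic CVD equation (valid for T >= 0 C)."""
    return (-A + (A ** 2 - 4 * B * (1 - r / R0)) ** 0.5) / (2 * B)

r50 = resistance(50.0)
print(round(temp_linear(r50) - 50.0, 2))       # linear error at 50 C: 0.37
print(abs(temp_quadratic(r50) - 50.0) < 1e-6)  # True: quadratic inverts it
```

As the comment notes, what matters in practice is which transfer function the automated logger actually implements, which is not stated in the material quoted here.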

      • There is a calibration; the instruments are not thermistors; they are PRT Platinum Resistance Thermometers; most of the info about the Bureaus methods is here:

        http://www.bom.gov.au/climate/change/acorn-sat/documents/ACORN-SAT_Observation_practices_WEB.pdf

        My take on the report is:

        It is useful that issues relating to minimum-temperature cut-offs at several of the Bureau’s AWS were highlighted in the public domain and will be rectified; however, I agree the overall effect (on the network and ACORN-SAT) is of no great consequence. Some issues highlighted by the report deserve further comment: the Bureau should acknowledge the role of ‘citizen scientists’ in identifying the problem and in overseeing the Bureau’s sites and methods; SitesDB and the various reports referenced, which are not currently in the public domain, should be made publicly accessible; and the lack of regular site maintenance (mowing the grass, cleaning instruments and screens, removing debris from tipping-bucket raingauges, etc.) is of concern.

        The budget the Bureau allocates to systems, computers, flow-charts and internal controls is wasted if the weather stations it relies on to generate data are poorly sited and maintained; or equipment, particularly rapid-sampling electronic probes in small Stevenson screens is biased.

        [For weather stations beside dusty tracks; or in the vicinity of airport tarmac; or screens over-exposed to the weather and sea-spray at lighthouses; one inspection and one maintenance visit per year (Table 1; p. 54) is insufficient to ensure instruments remain clean and dry and that screens are in good-order and not dust-stained or weather-beaten.]

        I fully support the call for an open public inquiry into the Bureau, focusing on site bias, data handling (temperature cut-offs and averaging), and most importantly, the way homogenisation is done.

        Cheers,

        Dr. Bill Johnston

    • One-minute readings – which liquid-in-glass thermometers respond too slowly to record – are what has allowed the UK Met Office to claim numerous ‘record temperatures’ where the average temperatures recorded either side of a one-minute spike have been 1 deg C or more lower.

      Can’t trust those ‘record’ temperatures … certainly not against historic liquid-in-glass records, nor against a 10-minute average as recommended by the WMO.

  5. So what is the content of this post? What calibration is she talking about? All I can see is a claim that Jennifer and Alan Jones have put their heads together and decided the BoM is wrong (of course). But how?

    • Well, without repeating the whole article let’s start at number 1:

      FAST FACTS: How does the Bureau measure temperature?

      1. The Bureau measures air temperature using an electronic sensor (a platinum resistance thermistor) placed within a Stevenson Screen, and temperature is recorded every second.

      JM: No. The temperature is measured every second, it is not recorded every second by the Bureau. Rather, the Bureau has explicitly stated, most recently in an internal report released just last Thursday, that for each one minute temperature it only records the highest one-second temperature, the lowest one-second temperature, and the last one-second temperature – in that one minute interval. The Bureau does not record every one-second value. In the UK, consistent with World Meteorological Organisation Guidelines, the average temperature for each minute is recorded.

      You can read the rest for yourself.

      The point is that the Aussie BOM has been caught breaching quality guidelines and is now spinning like crazy to justify its continued funding – having failed in its primary function.

      More intriguing is that other Met Offices (like the UK and US) are not helping with independent audits…
      Guess they don’t want their dirty linen being exposed in turn.

      • M,
        No answer to the question – what calibration are we talking about? It’s up there in the headline etc. But what is it?
        You say they were caught breaching quality guidelines – what guidelines, and who caught them?

        It seems the BoM has given a perfectly reasonable account of their process. They use a platinum resistance thermometer which has similar thermal inertia to the old liquid in glass. I don’t see any attempt here to deal with that.

    • It’s about what smoothing filter you are, in fact, using. I think you know very well that a one-second smooth is not the same as a one-minute smooth or a ten-minute smooth. When the “max” and the “min” temperature are averaged together to calculate an “average” temperature, the smoothing filters that are, in fact, being used can lead to significantly different values. Once a particular smoothing filter is used, it isn’t always possible to go back later and correct the smoothing operation. It all depends on whether and how much of the unsmoothed data is recorded and preserved.
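The point about smoothing filters is easy to demonstrate: when the one-second stream contains a brief spike, the midrange estimator (max + min)/2 diverges from the true one-minute mean, and once only the max and min are preserved the difference cannot be corrected later. A toy sketch with made-up numbers:

```python
import statistics

# One minute of one-second air temperatures: steady 20.0 C
# with a single 2-second gust spike to 21.5 C.
samples = [20.0] * 60
samples[30] = samples[31] = 21.5

true_mean = statistics.mean(samples)          # 20.05: spike barely matters
midrange = (max(samples) + min(samples)) / 2  # 20.75: spike dominates

print(round(true_mean, 2), round(midrange, 2))
```

If only the three values (max, min, last) are archived, the 20.05 figure is unrecoverable: this is what "it isn’t always possible to go back later and correct the smoothing operation" means in practice.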

      • The BoM says that because their platinum thermometers have thermal inertia comparable to LiG, there is no need for digital smoothing. The response time of the thermometers is, in their words, 40-80 seconds.

      • Nick Stokes
        September 11, 2017 at 2:08 pm

        The BoM says that because their platinum thermometers have thermal inertia comparable to LiG, there is no need for digital smoothing.

        That would be easy to test. Did they?

      • “Did they?”
        Platinum resistance thermometers have been used world-wide for many years. I’m sure their response times are well known, and on the manufacturers’ data sheets.

        The thermometers do provide a 1 second data stream, which BoM condenses to a 1 minute summary for transmission. I’m sure they have looked at that data.

      • ‘The thermometers do provide a 1 second data stream, which BoM condenses to a 1 minute summary for transmission. I’m sure they have looked at that data.’

        Did you actually read the section reproduced by M Courtney? BoM state they RECORD the data every second, when according to JM what they actually do is record just three values every minute: min, max and last. So not only are they not doing what they say they do, what they actually do is not compliant with accepted guidelines (which even the UK Met Office manages to comply with). Why should your being sure they have looked at that data give anyone any confidence?

      • DaveS,
        “BoM state they RECORD the data every second”
        They said the sensor records data every second. They transmit a one-minute summary.

        “what they actually do is not compliant with accepted guidelines”
        What no-one seems interested in is reading the actual guidelines. Here it is,
        WMO sec 2.1.3.3

        “For routine meteorological observations there is no advantage in using thermometers with a very small time-constant or lag coefficient, since the temperature of the air continually fluctuates up to one or two degrees within a few seconds. Thus, obtaining a representative reading with such a thermometer would require taking the mean of a number of readings, whereas a thermometer with a larger time-constant tends to smooth out the rapid fluctuations. Too long a time constant, however, may result in errors when long-period changes of temperature occur. It is recommended that the time constant, defined as the time required by the thermometer to register 63.2% of a step change in air temperature, should be 20 s.”

        Just what the BoM says and does.
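The WMO figure quoted above (63.2 % of a step change) is the defining property of a first-order lag. A minimal simulation, assuming an idealised sensor with the recommended 20 s time constant sampled once per second:

```python
import math

def first_order_response(step, tau, dt, n):
    """Readings of a first-order sensor after a step change in air temperature.

    step: size of the step (deg C); tau: sensor time constant (s);
    dt: sample interval (s); n: number of samples. Sensor starts at 0 deg C.
    """
    decay = 1.0 - math.exp(-dt / tau)  # exact per-sample relaxation factor
    readings, y = [], 0.0
    for _ in range(n):
        y += decay * (step - y)
        readings.append(y)
    return readings

# With the WMO-recommended 20 s time constant, one time constant after a
# 1 degree step (20 samples at 1 Hz) the sensor reads 63.2% of the change.
r = first_order_response(step=1.0, tau=20.0, dt=1.0, n=20)
print(round(r[-1], 3))  # 0.632
```

A longer time constant (the 40–80 s response the BoM cites) smooths second-by-second fluctuations even more strongly, which is the basis of the comparison with liquid-in-glass thermometers.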

    • Relax Nick: your ‘analysis’ hasn’t caught, for 20 years, the fact that greenhouse gases stopping 20% of the total available warming firelight of the sun can’t make instruments detect more and more light arriving to warm the earth – as those greenhouse gases stop more and more from ever arriving to warm it.

      You’re a sophist and fraud barking con man who’s so evil you tried to foist off violation of Conservation of Energy so blatant even school kids scoff in the faces of the hacks claiming it could be real.

      We – meaning all civilization – don’t need your fraudulent ‘analyses’. You’re a fraud barking scam peddler who’s not fit to talk about anything related to physics or the laws governing physics.

      “Nick Stokes September 11, 2017 at 1:15 pm
      So what is the content of this post? What calibration is she talking about? All I can see is a claim that Jennifer and Alan Jones have put their heads together and decided the BoM is wrong (of course). But how?”

    • “All I can see is a claim … ” Come off it. 3 wise monkeys stunt. This latest discrepancy is explained in full, and you cannot possibly have missed all the other discrepancies, eg upward trends replacing downward trends, homogenisation over 1000km and a mountain range, evidence of manual alteration of records, records disappearing due to breakages that occur after the event.

      • Nick Stokes September 11, 2017 at 2:54 pm
        A simple question – what is the “lack of calibration” in the headline? No-one seems to know.

        A simple answer: The instruments are calibrated to record temperatures as low as -60 C, but are deliberately limited to exclude anything reading below -10 C, by installing MSI1 cards to accomplish this end. Not only do they lack calibration for anything below -10 C, they deliberately exclude it.

        The bigger point is the BOM lied about this. IIRC, first it was an equipment malfunction, then it was a data transmission problem, next it was a software algorithm that caused it. Now we find out the BOM installed these cards to purposely exclude these readings.

        Why would any reasonable person trust them, especially when this field of science has a history of corruption and a lack of ethics and scientific principles?

      • I would say it was this: “a report issued by the World Meteorological Organisation (WMO) in 1997 entitled ‘Instruments and Observing Methods’ (Report No. 65) that explained because the modern electronic probes being installed across Australia reacted more quickly to second by second temperature changes, measurements from these devices need to be averaged over a one to ten-minute period to provide some measure of comparability with the original thermometers.”

        Certainly not explicitly stated very well, but she raises that same point elsewhere in her article. That sounds like a calibration issue to me.

      • James,
        “that explained because the modern electronic probes being installed across Australia reacted more quickly to second by second temperature changes”
        The WMO report did not say anything about the probes being installed across Australia. That is Jennifer’s interpolation. The BoM in their review report does describe those probes, and it denies Jennifer’s claim that they respond more quickly.

      • It seems the instruments deliberately exclude temperatures below -10C. That’s a calibration error, and an unbelievably stupid and unscientific one. If the thermometer is reliable, then it won’t show <-10C unless the temperature is <-10C. If the thermometer is unreliable, then you are introducing a bias by eliminating selected low temperatures while leaving in all the other measurements that might be wrong. Question: How can there ever be a new low temperature? Question: Are they looking at the recorded temperatures and seeing nothing under -10C because under -10C readings are excluded, and then concluding that it’s safe to exclude under -10C readings because there aren’t any in the record?

        Where I lived until recently, -10C was not rare. We were near Goulburn but fully rural (no UHE). I'm pretty sure that Goulburn and Moss Vale would get UHE (higher lows), so their natural temperatures would have been just as low as ours.

      • “It seems the instruments deliberately exclude temperatures below -10C. That’s a calibration error”
        Nonsense. Do you even know what calibration means? It is a manufacturer’s limit on the performance of the processing card (not even the measuring instrument).

      • Calibrate : to mark units of measurement on an instrument such so that it can measure accurately [sic]” – Cambridge Dictionary. Therefore, to deliberately change the (electronic) markings on a thermometer so that temperatures below -10C are recorded as equal to -10C is to introduce a calibration error. NB. That’s quite different to a thermometer with a range >= -10C.

        I’m flying to Murmansk in a few hours time. I’ll bet their thermometers can handle < -10C!

      • Nick Stokes
        September 11, 2017 at 4:26 pm

        The BoM in their review report does describe those probes, and it denies Jennifer’s claim that they respond more quickly.

        Nick Stokes, can you point me to exactly where the report you link denies Jennifer’s claim that they respond more quickly?
        I have tried reading the document around the areas where they discuss temperature probes (searching for “platinum” and/or “probe” in the text) and I can currently find no such denial of Jennifer’s claim.

      • Michael Hart,
        ” can you point me to exactly where the report you link denies Jennifer’s claim that they respond more quickly?”
        The document that Jennifer linked says:

        The response time of the sensor used in the Bureau AWSs is as long or longer than the changes in the temperature of the air it is measuring.

        This means that each one second temperature value is not an instantaneous measurement of the air temperature but an average of the previous 40 to 80 seconds. This process is comparable to the observation process of an observer using a “mercury-in-glass” thermometer.

      • “Therefore, to deliberately change the (electronic) markings on a thermometer so that temperatures below -10C are recorded as equal to -10C is to introduce a calibration error.”
        They aren’t doing anything like that. The transmission card (MSI1) has a manufacturer’s limit of -10C. The BoM isn’t calibrating anything there. And the temperatures aren’t recorded by the instrument as being -10C. The device simply stops transmitting. The -10C is just the lowest temperature recorded (accurately) before it stopped.

      • Having spent a good part of last week calibrating instruments let me jump in here.

        A measurement can be made between two calibrated limits (low limit and high limit) or the instrument can be calibrated over two points within a reporting range.

        Often the low limit is zero, but in the case of temperature it is some lower temperature. The upper limit is often called the ‘span calibration’ and there may be additional values between them to get linearity if needed.

        The BOM instruments were calibrated at the low end using -10 C and presumably at the upper end with some value above the expected maximum such as 60 C. Because they set the card to reject as ‘outside the calibrated range’ all values below -10 C (and probably above +60 C) the low values were not recorded, returning a bad signal report such as 9999.

        If, and it is a big if, the warrantied performance of the instrument is that it only reports correctly values between, not outside, the span limits, AND it was agreed that all readings falling outside shall be ignored, then ‘the apparatus’ has not been properly calibrated for the measurement task at hand.

        If their protocol had permitted the logging of temperatures outside the calibration limits, fine. But those measurements have a slightly higher uncertainty because the calibration points themselves have uncertainties, and the true reading is being projected from the calibration points outside the reporting range. But they didn’t do that.

        So the screw up is a mismatch between the data quality protocol (only record data between the calibration points) and the range of measurements to be made. The direct cause of the dropping of values <-10 C is the calibration of the instruments on the low end at -10.

        They have two choices: permit the logging of numbers outside the calibration range, or recalibrating the instrument at -15 C to +60 C, for example.

        Calibration error?

        Calibration issue, management error.

        What would I do? With the low limit so close to expectable values I would have permitted the logging of values 20% of the calibration range below the low limit, i.e.
        -10 - (60 + 10)/5 = -10 - 14 = -24 C

        It would be far better to have a value with a slightly higher uncertainty than no measurement at all.

        Lastly, though briefly described in the discussion, the typical behaviour of a logging temperature device is to record the Min and Max and average all the one-second readings. The reported average is not the sum of the Min and Max divided by two. Devices like an OPUS 10 from Lufft produce exactly this type of output.
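The calibration-range logic described above can be sketched as two logging policies: reject everything outside the calibrated span (returning a bad-value sentinel such as the 9999 mentioned earlier), or accept readings up to 20 % of the span below the low calibration point, flagged as carrying extra uncertainty. The function names and sentinel here are illustrative, not the Bureau’s actual firmware:

```python
BAD_VALUE = 9999           # sentinel for 'outside calibrated range'
CAL_LOW, CAL_HIGH = -10.0, 60.0
SPAN = CAL_HIGH - CAL_LOW  # 70 C

def log_strict(t):
    """Reject anything outside the calibrated span (the reported behaviour)."""
    return t if CAL_LOW <= t <= CAL_HIGH else BAD_VALUE

def log_extended(t):
    """Accept readings up to 20% of the span below the low calibration point,
    flagging them as extrapolated beyond calibration."""
    low = CAL_LOW - SPAN / 5   # -10 - 70/5 = -24 C
    if low <= t <= CAL_HIGH:
        return (t, t < CAL_LOW)  # (value, below-calibration flag)
    return (BAD_VALUE, False)

print(log_strict(-10.4))    # 9999: the Goulburn reading is lost
print(log_extended(-10.4))  # (-10.4, True): kept, but flagged
```

The second policy matches the comment’s preference: a value with slightly higher uncertainty rather than no measurement at all.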

      • Nick Stokes, I asked you for evidence of your claim where you said it was. You didn’t provide it, but you then linked to another document without apology for deceiving readers. Why should I take you seriously again when you waste people’s time like this?

    • Recording one-second extrema (rather than averaging) will bias the minima downwards, and the maxima upwards. Except that the Bureau has been placing limits on how cold an individual weather station can record a temperature, so most of the bias will have been upwards over the last few years – in accordance with Dr Jones’ favourite story about man-made warming.

      (bold mine)

      https://www.spectator.com.au/2017/09/not-really-fit-for-purpose-the-bureau-of-meteorology/

      If the above is true (regardless of how they account for it), why would this be a “perfectly reasonable” process?

      • “If the above is true (regardless of how they account for it), why would this be a “perfectly reasonable” process?”
        The bolded stuff about a lower limit is a nonsense. Temperatures below -10°C are extremely rare in Australia. The Bureau in its report lists just six stations (out of many hundreds) with this equipment that have gone below -8°C, at any time, ever. Goulburn’s -10.4 was an all-time record. These events may excite record aficionados, but they don’t distort climate averages.

        It is perfectly reasonable because, as they say, the platinum resistance thermometers have similar inertia and response time to the old LiG thermometers. No-one put those readings through a digital smoothing process.

      • @nick stokes: the Goulburn temperature is far from being an all-time record low. As usual, climate alarmists have to rely on falsehoods … they just keep on being caught out, like you.

      • Thanks Nick.

        “Temperatures below -10°C are extremely rare in Australia. “

        By that logic shouldn’t they place limits on the upper register?

        For example, if the highest ever recorded temp is 59 C, shouldn’t a limit be placed there? If not, why not?

        I’m genuinely trying to learn, btw, not trap or otherwise ad hom you :-)

      • “far from being an all time record low”
        Well, “far from” is a falsehood. But yes, a temperature of -10.9 was recorded in August 1994. It is still true that these events are very rare, and have no significant effect on average climate.

        “It is still true that these events are very rare, and have no significant effect on average climate.”

        Until they aren’t rare, and until they do have an effect. Would you argue that a cap on the upper recorded temperature be placed as well for the same reasons?

        It would seem to be rare that a temp higher than 59C would be recorded there as well, hence, why shouldn’t we cap the upper limit to 58.1C, since it would appear we’re willing to give at least .9 on the low end?

        Nick: “It is still true that these events are very rare, and have no significant effect on average climate.”

        Where would you say the line can safely be drawn to leave out data that would “have no significant effect”? If the temp has only fallen to -8 C six times in history, ever, could we leave everything below -7 out and hope it has no significant effect also? What if Australia suddenly cools and there’s a rash of -10 C temps that just get left out because, hey — it hardly ever gets that cold, so why should it ever?

        Is global temperature now the equivalent of the judges’ scoring in figure skating events, where the high and low scores are thrown out and just the middle averaged? As a scientist, why would one EVER not want to record ALL the data ALL the time? If you have it and keep it, it can be analyzed; if you never record it, it’s gone forever.

        I have never in my life seen a group so disdainful of actual, raw, data from the field as “climate scientists.”

      • ” why shouldn’t we cap the upper limit to 58.1C”
        They do. The BoM agrees that -10 is inadequate for high-altitude southern regions, and is replacing the MSI1 with the MSI2. From their report:

        “The MSI2 can record temperatures over a broader range (nominally –25°C to +55°C) than the MSI1 (nominally –10°C to +55°C) “

      • I have never in my life seen a group so disdainful of actual, raw, data from the field as “climate scientists.”
        There is no involvement of climate scientists here. BoM has for many decades recorded local temperatures for local information, as in Goulburn and Thredbo. This data is not entered into any climate databases. It is not used by climate scientists.

      • “As a scientist, why would one EVER not want to record ALL the data ALL the time? If you have it and keep it, it can be analyzed; if you never record it, it’s gone forever.”

        Indeed, and this is exactly the problem laymen such as I have with this seemingly dismissive view of raw data. How could it be possible that exact data is less expressive of truth than inexact data?

        Moreover, it would appear only the low end is dismissed. Why? Inconsistency breeds questions of contradiction and rightly so.

        You don’t have to be a scientist to see the funny logic in this approach.

      • “They do. The BoM agrees that -10 is inadequate for high-altitude southern regions, and is replacing MS-11 with MS-12. From their report…”

        Thanks Nick, if true, that would seem to be the most logical approach to gathering data.

        Do you have a link to the report you’ve referenced please sir?

        I found the Australian BoM but…I’m lazy. :-P

      • Btw, Nick:

        Just for the record (since I didn’t actually correctly cite the highest temp ever recorded in Australia, which appears to be 50.7C at Oodnadatta Airport in 1960):

        “The MSI2 can record temperatures over a broader range (nominally –25°C to +55°C) than the MSI1 (nominally –10°C to +55°C) “

        Then would you have agreed to capping the highest recordable temp of MS-11 or 12 at 48.8C because that was the highest temp ever recorded in Australia?

        If not, why not?

      • Nick Stokes September 11, 2017 at 2:47 pm
        Temperatures below -10°C are extremely rare in Australia

        Is your argument that anything “extremely rare” should be excluded?

        So any extreme weather event and any annual temperature anomaly that is claimed to be “unprecedented” should be discarded as well, because they are extremely rare.

        The Earth is 4.5 billion years old, claiming something that has happened recently is “unprecedented” is frankly preposterous. Pretty sure you know this but obviously you don’t care.

      • “Then would you have agreed to capping the highest record-able temp of MS-11 or 12 at 48.8C because that was the highest temp ever recorded in Australia?”

        That is, 50.7 – .9 (that which is allowed on the low end) = 49.8C

        If you would object to the upper cap, why would you?

      • Ok, Nick’s “extremely rare” claim has been done to death. Trouble is, excluding them skews the averages. Data from the BoM is about as reliable as a Reliant Robin.

        In other words, GARBAGE!

      • Nick: “There is no involvement of climate scientists here. BoM has for many decades recorded local temperatures for local information, as in Goulburn and Thredbo. This data is not entered into any climate databases. It is not used by climate scientists.”

        I refer you to your post of Aug. 8, “BoM raw temperature data, GHCN, and global averages,” at https://wattsupwiththat.com/2017/08/08/bom-raw-temperature-data-ghcn-and-global-averages/.

        You said, “in Australia, the raw station data is immediately posted on line, then aggregated by month, submitted via CLIMAT forms to WMO, then transferred to the GHCN monthly unadjusted global dataset. This can then be used directly in computing a global anomaly average.”

        That directly contradicts what you’re saying now. Can you explain the discrepancy? Do only these “local” stations that don’t get sent to the WMO and GHCN use those cards with the cutoff temps?

      • James,
        “That directly contradicts what you’re saying now.”
        I said earlier there:
        “I switched [to Melbourne from Goulburn] because I am now following a post from Moyhu here, and I want a GHCN station which I could follow through.”
        It is GHCN stations that are submitted via CLIMAT forms. Goulburn was certainly not one of those. But yes, the BoM grades its stations into three tiers, of which the top are the ACORN stations, which include the GHCN stations. Of these, only Canberra has any likelihood of getting to -10°C, and hasn’t since 1972, although it did get to about -8 in July. BoM says ACORNs have been prioritised for upgrade to MS12, which go to -25C. Only top tier stations get into any climate database.

    • Nick, when you don’t understand the article try re-reading it. Then you might not make such a fool of yourself ……. but a lack of comprehension and critical appreciation seems a hallmark of warmist religionists.

      • It seems he will defend BoM whatever they do. The usual fall back position being that the trends are not affected.

      • @ DaveS September 11, 2017 at 3:57 pm
        It seems he will defend BoM whatever they do. The usual fall back position being that the trends are not affected.

        Stokes and Mosh always trot out this ridiculous argument when the gatekeepers get caught red-handed. If the adjustments (manipulations) don’t truly matter, why make them at all?

        Common sense says this is ridiculous BS.

        What it does show is that confirmation and political bias has sadly corrupted this field of science.

    • Nick, the equipment in question is being used in a manner inconsistent with bureau policy. If used consistently with bureau policy then the temperature record would be different.

      Does that help? If not, read the linked articles again and again until you understand the problem. Perhaps look up the meaning of the word calibration as well. But then again you know what calibration is, don’t you?

      And then there is the BOM’s quick facts document which is nothing more than a failed attempt at propaganda, much the same as everything you write. The link is in the article.

    • Nick,
      it’s very simple: contrary to what the numbskull Senator whatshisname said, there is heaps of empirical evidence regarding ‘global warming’. The real issue is that its quality is suspect, to say the least.

      I think it is junk insofar as the level of precision required to properly assert the AGW case, or even the GW case. Dr Marohasy is simply focussing on another aspect of the ‘unfitness for purpose’ of this junkformation.

      The CAGWarmistas are the ones making the assertion. Where’s the substantive evidence? It is all so polluted by UHI, calibration, station siting stuffups etc etc that the land surface temp is just junk as far as the trend assertion is concerned.

      It is up to BOM and CRU and NOAA et al to firmly and unequivocally establish the accuracy and veracity of their data. They are the ones ‘prosecuting’ the case, are they not? Or is this just what it seems, a religio-ideological persecution?

      And the fact that this issue makes great media copy, with headlines of DEADLY GLOBAL WARMING on offer as every new paper is published, is straight out of Dr Goebbels’ playbook. The Nazis would be so proud of the CAGWarmist Einsatzgruppen.

  6. Another excuse to do some after the fact adjustments.
    It won’t be surprising when these new adjustments make the data better match the output of the models.

  7. Jennifer’s last sentence sounds like a statement, but ends with a question mark.

    Further, platinum thermistor instrument calibration was never discussed.

    What is clear from her remarks is that BoM has adopted methods with their AWS that would be consistent with a warm-the-present-relative-to-the-past bias. That has nothing, as far as I can see, to do with equipment calibration, but with intentional bias by BoM.

    • Pt100 RTDs are calibrated like any other temperature device: by setting an offset in the electronics that controls the readout, and checking it across a range using a calibrated standard device for comparison. The range given above is -10 to +55. Temperature reading devices are usually calibrated twice a year because they tend to remain ‘fixed’ in their behaviour for a long time. In the case of RTDs they typically drift less than 0.02 C per year.
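      The offset-and-gain correction described above can be sketched numerically. A minimal Python illustration; the reference points, readings, and function names are all invented for illustration and are not any BoM procedure:

```python
# Hypothetical sketch of a two-point calibration for a temperature readout:
# compare the instrument against a calibrated reference at two known points,
# fit a gain and offset, then correct subsequent raw readings.
def fit_two_point(ref_lo, raw_lo, ref_hi, raw_hi):
    """Return (gain, offset) such that corrected = gain * raw + offset."""
    gain = (ref_hi - ref_lo) / (raw_hi - raw_lo)
    offset = ref_lo - gain * raw_lo
    return gain, offset

def correct(raw, gain, offset):
    """Apply the fitted correction to a raw instrument reading."""
    return gain * raw + offset

# Invented example: instrument reads ~0.15 C high with a slight gain error.
gain, offset = fit_two_point(0.0, 0.15, 50.0, 50.35)
print(f"corrected 25.20 C reading: {correct(25.2, gain, offset):.3f} C")
```

      Checking for the slow drift mentioned above would then just mean repeating the fit at each half-yearly calibration and comparing the fitted offsets over time.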

  8. The BOM temperature trend starts ~1910 because, they say, the prior records are unreliable.
    This raises the general question of how much of the apparent historical temperature trend (Australian and global) is due to improved observational techniques, dodgy adjustments aside.
    For instance, the historical hockey-stick trend in recorded global volcanism is clearly an artefact of detection.

    It’s analogous to the macro-scale where climate events are purported to be more frequent, powerful or extreme whereas that impression is an artefact of higher living standards — modern technology, reporting, capital investment in ‘weather-prone’ areas etc. — living standards ironically that are directly due overwhelmingly to the use of the fossil fuels that the hysterics want to ban.

    • Your point is well made. What is interesting is the series of articulations you can see in it. That curve correlates with the evolution of communications and communications technology, including speed, range, and volume of information, and with the growth of population. You can see three trend shifts: two subtle ones, the first beginning around 1150 and the second right about 1500, and a third, most dramatic, between 1750 and 1800.

      The first corresponds to the era of the crusades and the Ottoman expansion. While this period is commonly discussed in relation to political and religious patterns, it also marks a period of “hybrid” invigoration of early, formative science, as Muslim and European scientists began communicating more frequently and consistently. The exchange included a good deal of geographic information.

      The second change comes with the beginning of the age of exploration, as Portuguese, Spanish, English, French, Dutch and German explorers pushed into the unknown for fun and profit, and, more importantly, with the invention of the printing press, which allowed those discoveries and their accompanying maps to be published more quickly and more widely.

      The third uptick matches the invention of the marine chronometer, the first truly effective means of ascertaining longitude, which led to an increased effort in exploration and mapping. It also matches the extreme period of European expansion. So, sadly, you are quite right: that curve is not likely to have much to do with geological events per se at all.

  9. Something that troubles me about the BoM procedure is that they are only recording the highest and lowest of the 60 temperatures measured each minute. Electronic noise (such as from thunderstorms or commutators on motors) tends to be random positive and negative spikes (also commonly, a power line-frequency hum). Unless the noise source is known and characterized, one cannot assume that the spikes will be of equal amplitude or that there won’t be a bias associated with the noise. Worse still, one is then only measuring the noise. The recorded 1-second temperatures should be averaged to eliminate any impulse noise, and the diurnal highs and lows then extracted from the daily collection of 1-minute temperature averages.

    I think that some of you electrical engineers should weigh in on this.
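    The effect of keeping only the extremes of noisy one-second samples can be simulated directly. A minimal Python sketch, with an invented 0.1 C noise level, showing that the per-minute maximum is biased warm while the per-minute average is essentially unbiased:

```python
# Sketch of the point above: taking the per-minute extreme of noisy
# one-second samples measures the noise, while averaging suppresses it.
# The noise level (0.1 C) is an assumption for illustration only.
import random

random.seed(42)
TRUE_TEMP = 20.0
NOISE_SD = 0.1

minutes_max, minutes_mean = [], []
for _ in range(1000):  # 1000 one-minute intervals
    samples = [TRUE_TEMP + random.gauss(0, NOISE_SD) for _ in range(60)]
    minutes_max.append(max(samples))
    minutes_mean.append(sum(samples) / len(samples))

bias_of_max = sum(minutes_max) / len(minutes_max) - TRUE_TEMP
bias_of_mean = sum(minutes_mean) / len(minutes_mean) - TRUE_TEMP
print(f"bias of per-minute max:  {bias_of_max:+.3f} C")
print(f"bias of per-minute mean: {bias_of_mean:+.3f} C")
```

    With Gaussian noise the maximum of 60 samples sits a couple of noise standard deviations above the true value on average, which is exactly the “only measuring the noise” concern.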

      • Leo,
        While it isn’t strictly a DC component, if the sensors are clipping all temperatures below a given threshold, then for days with cold temperatures, there might well be a positive bias for the random noise.
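        The warm bias from a low-end cutoff is easy to demonstrate. A small Python sketch with invented cold-morning readings and the nominal -10 C floor:

```python
# Sketch of the clipping bias described above: if a card discards readings
# below a floor (nominally -10 C for the MSI1), cold-day means are pulled
# warm. The synthetic cold-morning temperatures here are illustrative.
readings = [-12.4, -11.1, -10.6, -9.8, -8.2, -5.0, -1.3, 2.7]

true_mean = sum(readings) / len(readings)
clipped = [t for t in readings if t >= -10.0]   # values below -10 C lost
clipped_mean = sum(clipped) / len(clipped)

print(f"true mean:    {true_mean:.2f} C")
print(f"clipped mean: {clipped_mean:.2f} C")   # warmer than reality
```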

    • There is also sensor and measurement noise – besides environmental noise. That is usually small. On the order of .001°C on a well designed unit. Probably no worse than .01°C on a cost constrained device.

    • >>
      I think that some of you electrical engineers should weigh in on this.
      <<

      My expertise is more in line with digital circuits–rather than linear ones. However, if they are using an op amp in the circuit, then the noise should be rejected at the input of the op amp. (Noise will usually appear as a common-mode signal on both the plus and minus inputs of the op amp. The input side of an op amp is a differential amp that rejects common-mode signals.) That doesn’t say anything about the rest of the circuit’s noise immunity.

      Jim

  10. With so much at stake, why aren’t meteorology organizations required to comply with quality control standards such as ISO 9000?

    • Because ISO 9000 has you labelling what is in cupboards, not building good instruments.

      A decent voltage instrument can resolve 0.067 microvolts, which means an RTD can be read (and observed moving around) at 0.002 degrees. As that is both up and down around a central point, the total change is 0.004, which is less than half of a digit change in 0.01. That is the root of the claim that a Pt100 RTD can read to 0.01 C. To get an extra significant figure requires making heroic efforts to shield things (etc).

      A very good mass balance control head can resolve 2-12 mA with similar precision.
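      For reference, the resistance-to-temperature relationship behind those numbers is the standard IEC 60751 (Callendar-Van Dusen) equation for a Pt100. A short Python sketch of the 0 C-and-above branch, showing why resolving tiny resistance changes matters (the sensitivity is only about 0.39 ohm per degree):

```python
# IEC 60751 Callendar-Van Dusen conversion for a Pt100 element,
# valid for temperatures of 0 C and above.
A = 3.9083e-3   # standard coefficient, 1/C
B = -5.775e-7   # standard coefficient, 1/C^2
R0 = 100.0      # ohms at 0 C (hence "Pt100")

def pt100_resistance(t_c):
    """Resistance in ohms of a Pt100 at temperature t_c >= 0 C."""
    return R0 * (1 + A * t_c + B * t_c ** 2)

print(f"R at   0 C: {pt100_resistance(0.0):.3f} ohm")   # 100.000 exactly
print(f"R at  25 C: {pt100_resistance(25.0):.3f} ohm")
print(f"R at 100 C: {pt100_resistance(100.0):.3f} ohm")
```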

  11. Jennifer, although you don’t really discuss a calibration issue in this post, the question of calibration runs all through the global temperature records. My favorite way to describe the problem is to ask how confident a person might be using high/low temperature readings collected by old men in bathrobes, wearing bi-focals, using mail order thermometers in South Dakota on a February day in blowing snow? Could you confidently resolve data collected by these folks to 0.1C? 1C? +/- 3C?

    This is why I mostly ignore the data sets based on ground stations, even those that have been automated to some extent. I favor the satellite data sets simply because they self calibrate prior to every observation.

  12. In the engineering world of manufacturing as well as in science generally there are very strict requirements for regular calibration of all measuring instruments. The procedures including frequency of calibration are well established and must be complied with to obtain or retain accreditation to ISO standards.
    Without this accreditation businesses, laboratories, etc. cannot survive in the commercial world with any credibility or for very long because their customers insist on evidence of accreditation.
    Calibration of all measuring instruments is vital, from electronic devices thru to micrometers and even measuring tapes.

    To even think that the BOM has not been complying with calibration requirements in temperature measurement is mind blowing. Do they think they are “above the law”?

    • “To even think that the BOM has not been complying with calibration requirements in temperature measurement is mind blowing.”

      Agreed. It defies reason that global economic policy might be based on data that has virtually no quality control. But that’s what’s happening.

      • Nick, the point I made concerning the historic temperature record is there was no calibration done at all and that’s how it fails.

        It’s not that calibration has been done and it was done wrong, there is no calibration at all.

        And clearly, global economic policy is being affected by it. See “IPCC” in Wikipedia for a discussion.

      • “concerning the historic temperature record is there was no calibration done at all”
        How can you calibrate a historic temperature record? What would you do?

        The post seems to be talking about calibrating measurement instruments. That at least is a recognised scientific activity. It just doesn’t identify the alleged failure.

      • “How can you calibrate a historic temperature record? What would you do?”

        Nothing. I’d use it with estimated error bars that were very large; in essence, “take it with a grain of salt”.

        There’s really nothing to do other than rely on contemporary, well calibrated sources. That’s what I’d do.

      • @Nick

        You are grasping onto this straw as desperately as always. You must have done quite a bit of bureaucratic covering up in your day.

        For goodness sake look at the heading “lack of calibration”. Open your mind and focus on the word LACK. I repeat for the terminally slow, LACK OF CALIBRATION. As in THE CALIBRATION REQUIRED HAS NOT BEEN PERFORMED.

        To produce results fit for purpose the BOM needs to use the instruments in a particular way consistent with its own policies. It has not. The measurement policy is to use instruments to produce AVERAGE results over each time interval. The instruments have NOT been calibrated for use when only the highest or lowest values over each time interval are recorded.

        And of course the measurements other than the highest and lowest values have been DISCARDED. There is no way to recover the data the BOM has wilfully thrown away.

        The effect is similar to measuring tide height by recording the highest and lowest wave heights each minute.

        The only other way I can suggest for you to get this into your head is to print it out and eat the printed copy.

      • “concerning the historic temperature record is there was no calibration done at all”
        How can you calibrate a historic temperature record? What would you do?

        Something along the following lines:

        Have the new sensor and the LIG in the same enclosure for a minimum period of 10 years, during which time the LIG thermometer is being read and recorded using the same practice as that station used in the preceding 30 or if records exist preceding 60 years.

        If there were differences in TOB in the past then during the overlap 10 year observing period, one would observe the LIG thermometer with each TOB used at that station, making the appropriate note of TOB on every entry in the record.

        Not difficult. Just commonsense as one tries to do with any splice.
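        The overlap comparison described above reduces to simple arithmetic once the paired readings exist. A minimal Python sketch with invented paired daily readings, estimating the systematic offset of the new probe relative to the LIG:

```python
# Sketch of a side-by-side overlap comparison: run the new probe and the
# LIG thermometer in the same screen, then estimate the systematic offset
# between paired daily readings. All data here are invented.
import statistics

lig   = [15.2, 18.1, 12.4, 20.0, 16.7, 14.3, 19.5, 17.8]
probe = [15.5, 18.3, 12.9, 20.2, 17.1, 14.6, 19.9, 18.0]

diffs = [p - l for p, l in zip(probe, lig)]
offset = statistics.mean(diffs)
sd = statistics.stdev(diffs)
se = sd / len(diffs) ** 0.5   # standard error of the mean offset

print(f"probe reads {offset:+.2f} C relative to LIG (SE {se:.2f} C)")
```

        Over a real multi-year overlap one would also split the comparison by season and time of observation, in line with the TOB point above.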

      • Nick

        You know that you cannot calibrate a record. A record is simply the presentation of a collection of statements of fact. Of course, you can carry out a quality check, to ascertain that no transcription errors were made etc.

        The original point being made was that there was not any attempt (or perhaps not a proper attempt) to ensure the continued integrity of the record (i.e., the time series measurements). Perhaps the record should have been completely cut when the LIG thermometer was replaced, and two records with no splicing produced: one covering the LIG measurements, the other covering the platinum resistance measurements, with the point made that no comparison between the two records should be made.

        If one wants to have a continuous record by splicing two different devices then one needs to calibrate the new device against the old device, and that should have been done individually in each enclosure using a reasonable overlap time.

      • “The original point being made was that there was not any attempt”
        It wasn’t the original point of the OP. It’s yours, and you have no evidence at all for saying so. In fact they did have overlap, and a lot of study on the change. If you look at the metadata for Sydney Observatory here, they installed the AWS probe 1 April 1990. They removed the mercury 31 May 1995. Cape Otway, installed probe 15 April 1994, removed mercury 5 Dec 2012.

      • Nick, as you already know, you can’t calibrate data that has already been taken.
        The only thing you can do is to adjust the error bars to account for the new unknowns.

    • Absent correct, proper and verified calibration records for ALL instruments which are being used and have been used [a HUGE chore], the raw data needs to have a suitable error figure applied – to ALL of it. I suggest that plus or minus 2 deg C be that error until a tighter error bar is justified by independent analysis. That would mean that NO temperature in the record be considered more precise than 2 deg C. “Looking for a warming of 1 deg C — sorry, we cannot depend on our data for better than 2 deg accuracy”.
      And precision in the reading (0.001 deg?) is NOT the same as accuracy. Accuracy is the actual measured deviation from a standard [accuracy of the standard being taken into account] at ANY temperature in the calibrated range. It represents the unknowable in all future use of the data. Only after that can you start averaging, calculating mean values, applying statistical tricks, etc.
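      The precision-versus-accuracy distinction is easy to show numerically. A Python sketch with an invented 1.5 deg C calibration offset and 0.001 deg readout noise:

```python
# Sketch of precision vs accuracy: a sensor can report to 0.001 C
# (high precision) while sitting 1.5 C off the truth (poor accuracy).
# The offset and noise values are illustrative only.
import random

random.seed(1)
TRUE = 20.0
OFFSET = 1.5       # uncorrected calibration error -> accuracy problem
NOISE_SD = 0.001   # readout noise -> precision

readings = [TRUE + OFFSET + random.gauss(0, NOISE_SD) for _ in range(100)]
spread = max(readings) - min(readings)          # tiny: very "precise"
error = sum(readings) / len(readings) - TRUE    # large: not accurate

print(f"spread (precision):    {spread:.4f} C")
print(f"mean error (accuracy): {error:.3f} C")
```

      Averaging more readings shrinks the spread but never touches the 1.5 deg offset, which is exactly why calibration, not resolution, bounds what the record can claim.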

    • This has gone off in too many side-halls. The question is not ‘calibration’ but ‘recalibration’ of instruments in use.

      There is a claim that nothing has been recalibrated for 20 years after installation. True or not? Partly true?

      There is a claim that the range over which the instruments have been calibrated is not up to snuff: the calibrated low limit is exceeded by actual temperatures and the electronics ignores such numbers because it has been programmed to do so. That is a separate issue and is apparently not in dispute.

      So, were the instruments recalibrated after installation 20 years ago, or not?

      If they were, when? According to which protocol? Where is the BOM protocol for the recalibration of weather station temperature measuring devices? Who is qualified to perform these recalibrations? Who certified them and who checked their work?

  13. “Temperatures below -10°C are extremely rare in Australia.” This would be because the BOM in Australia does not record them! Go, Jennifer, you caught the BOM with its pants down yet again.

      • And I repeat Nick, who would ever know? The station has been REMOVED.

        Glad others did your thinking for you and cleared up the calibration question for you. How do I know? You stopped grasping at that particular straw.

        Somehow continuity of records and quality of data just don’t seem to matter to you. You even gave the old “it makes no difference” line. No scientist would ever say such a thing.

  14. Are the BOM’s AWS systems based on the AWOS or ASOS systems in use in the US? Both are primarily used at airports for pilots to make aeronautical decisions. I would expect for safety reasons they may have a slight warming bias built in, as planes lose performance in the heat. These systems were designed to replace aviation weather observers, not for monitoring long term climate trends.

    • James,
      However, to be the Devils Advocate, pilots also have to worry about icing conditions. It is best if the thermometers are as accurate as possible.

      • The worry with icing is primarily flying through visible moisture (clouds) with air temperature below freezing. That is why IFR certified planes are required to have an outside air thermometer. On the ground there are a number of deice agents and anti icing agents that are sprayed on the plane prior to take off during cold weather. I did read a description describing the different chemicals, and it is complicated.

        As far as take off performance is concerned, temperature plays a big factor in how much runway you will need. I have read about an ASOS being relocated to a windier spot as the pilots felt that it did not accurately reflect actual wind conditions. Tail winds also increase the runway length required to get airborne.

        In effect, what I am saying is that weather stations that were installed to facilitate safe aviation may not be that good for accurate climate comparison over time.

      • James,
        De-icing agents cost money. Airlines don’t want to delay flights and spend money if it isn’t necessary, so they need accurate ground and air temperatures.

        You said, “In effect, what I am saying is that weather stations that were installed to facilitate safe aviation may not be that good for accurate climate comparison over time.” Anyone who is objective will agree that meteorological stations that were intended for monitoring weather leave a great deal to be desired when used for climatology. However, that’s all we have historically.

      • “The worry with icing is primarily flying through visible moisture (clouds) with air temperature below freezing”
        Ground sub-zero conditions at Australian landing strips are associated with morning frost – clear sky.

      • @Nick Stokes, if you read the Association of European Airlines recommendation for de-icing, it recommends measuring the temperature of the airplane. This will be different to the outside air temp, as it may have very cold fuel in the tanks after flying etc. I suggest you read paragraph 3.3.5: it states that wings can be cold-soaked in air temperatures up to 15 degrees C, and therefore need de-icing.

        I think you will find that they would rather spend the money on de-icing agents when in doubt than crash the plane. Experience has shown repeat customers are not dead.

        https://www.icao.int/safety/AirNavigation/OPS/Documents/aea_deicing_v23.pdf

      • Whether or not a plane is de-iced is the call of the captain and is made after a visual inspection of the wings.
        They don’t rely on thermometers to tell them whether to de-ice or not.

  15. The BOM’s own data tools provide interactive charts, and it’s clear that the trend in diurnal temperature range changes abruptly after the year 2000, which correlates with sensor changes. This would suggest that the new sensors are faster-responding than the older LiGs. This definitely brings into question the integrity of the BOM’s data and its value as a reference for climate trend analysis. There’s also the issue of high temperature records.

    • The effect of a lighter instrument (less smoothed) is higher highs and lower lows. As long as the averaging period is adequate, it doesn’t make much difference. Using the older instruments and changing the averaging period would alone make a detectable (significant) change to the claimed highs and lows.

      This is a business fraught with opportunities for misrepresentation, unfortunately for the consumers of the information.

      The problem I have with this is the significance of a high and low without considering the enthalpy of the air at the time. Unless the heat capacity of the air measured is considered, the numbers don’t mean much. A ‘new high’ might mean the same as before with less humidity. What does that prove?
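      The smoothing effect of instrument response time mentioned above can be modelled as a first-order lag. A Python sketch with invented time constants, showing a fast probe catching a brief warm excursion that a slow, LIG-like sensor damps:

```python
# Sketch of the "lighter instrument" point: a sensor with a long time
# constant (modelled as a first-order lag) records a lower peak for a
# short gust of warm air than a fast sensor does. Values are illustrative.
def first_order_lag(signal, tau, dt=1.0):
    """Response of a first-order sensor with time constant tau (seconds)."""
    alpha = dt / (tau + dt)
    out, y = [], signal[0]
    for x in signal:
        y += alpha * (x - y)
        out.append(y)
    return out

# 20 C baseline with a 30-second excursion to 22 C
signal = [20.0] * 60 + [22.0] * 30 + [20.0] * 60

fast = first_order_lag(signal, tau=2.0)    # fast electronic probe
slow = first_order_lag(signal, tau=60.0)   # LIG-like thermal inertia

print(f"fast sensor max: {max(fast):.2f} C")
print(f"slow sensor max: {max(slow):.2f} C")
```

      The fast sensor reaches essentially the full 22 C; the slow one records a noticeably lower peak, which is why changing instruments without matching the effective averaging period changes the reported highs and lows.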

      • ‘The problem I have with this is the significance of a high and low without considering the enthalpy of the air at the time. Unless the heat capacity of the air measured is considered, the numbers don’t mean much. A ‘new high’ might mean the same as before with less humidity.’
        This is confounded by weather reports citing the high temperature of the day as setting some new record; today in Sydney it was ‘the hottest for 7 years’, without measuring the humidity and relating that to the apparent temperature felt by the human body, which with windy weather and low humidity is in fact cooler.
        As you point out, the temperature is not an accurate reflection of the heat content of the air, so today, with its low humidity, is an example of the misdirection involved in using temperature to express the heat content of the air in Sydney.

  16. The crux of this issue is the thermal response time of the equipment used by BOM, and only BOM can provide full details of the equipment that they use. BOM should be pressed to provide this.

    I frequently make the point that if we really want to know whether there has been any temperature change then it is necessary to retrofit the stations with the same LIG thermometers used at each station in the past (say during the 1930s/1940s) and then observe today using the same practice and procedure as used by each station in the past (1930s/1940s). We need to get like-for-like RAW data that requires no adjustments whatsoever, so that a direct comparison can be made. Obviously, we also need to carefully consider whether there are any individual siting issues and/or changes in local environment. I would automatically disregard all stations where there is any form of siting issue or local environmental change since it is difficult to assess what impact that has on the observed temperature. I would only use what I would call prime stations.

    Nick Stokes commented:

    The BoM says that because their platinum thermometers have thermal inertia comparable to LiG, there is no need for digital smoothing. The response time of the thermometers is, in their words, 40-80 seconds.

    I am not surprised by the suggestion that LIG thermometers have a thermal response time of around 40 to 80 seconds. I always allow a minute to let such thermometers settle although my impression is that they generally respond within around 30 to 45 seconds. I am surprised by the assertion that platinum resistance thermometers have a similarly slow response time since a very quick internet search suggests that the thermal response time for platinum resistance thermometers is just a few seconds, and I came across a NASA paper suggesting that it can be less than a second (but of course that was no doubt for the space industry applications).

    • The ATMOS DAS equipment is described in this report, sec 4.1

      What the BoM fast facts said was

      This means that each one second temperature value is not an instantaneous measurement of the air temperature but an average of the previous 40 to 80 seconds. This process is comparable to the observation process of an observer using a “mercury-in-glass” thermometer.

      • NS,

        Maybe we are reading different documents. The one that I read (your link, sec. 4.2.4.1) said:

        “All valid one-second temperature values within the minute interval are assembled into an array. If there are more than nine valid one-second temperature values within the minute interval, then a range of one-minute statistics are generated from these values. These include:
         an instantaneous air temperature is the last valid one-second temperature value in the minute interval;
         one-minute maximum air temperature is the maximum valid one-second temperature value in the minute interval; and
         one-minute minimum air temperature is the minimum valid one-second temperature value in the minute interval.”

        I think that they should be reporting just the statistics for the 1-minute interval, and not reporting the instantaneous 1-second readings. They do a rate check for the 1-second readings, but they should probably also look for individual 1-second readings that are more than 2 or 3 standard deviations from the mean. The range test (–50°C to +70°C) leaves a lot of room for mischief when temps below -10 are rare. Also, instead of using the last valid one-second temperature in the array, they should probably be using a moving average of the last several seconds of the last string of valid temps.
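        The quoted one-minute scheme is simple to state in code. A direct Python sketch of the three statistics and the more-than-nine-valid-values rule (the function name is invented):

```python
# Sketch of the one-minute statistics quoted from the report: given the
# valid one-second values in a minute, keep the last value ("instantaneous"),
# the maximum and the minimum. Fewer than ten valid values -> no statistics.
def one_minute_stats(valid_seconds):
    """valid_seconds: list of valid one-second temperatures in the minute."""
    if len(valid_seconds) <= 9:          # the report's validity threshold
        return None
    return {
        "instantaneous": valid_seconds[-1],  # last valid one-second value
        "maximum": max(valid_seconds),
        "minimum": min(valid_seconds),
    }

stats = one_minute_stats(
    [20.1, 20.3, 19.9, 20.0, 20.6, 20.2, 20.1, 20.4, 20.0, 20.2])
print(stats)
```

        Note that nothing in this scheme is an average: a single noisy one-second value can become the minute’s maximum or minimum, which is the concern raised above.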

      • Clyde,
        “Maybe we are reading different documents”
        Yes, we are. I’m quoting from the “fast facts” doc that Jennifer links. But I did read that one too. It’s actually describing the hard-wired protocol of the commercial MS11 card; it isn’t BoM’s choice. But what they are saying in my quote is that it really doesn’t matter. The platinum thermometer doesn’t change much from second to second, so any reasonable summary statistic for the minute will do. I expect the MS11 makers chose that scheme to suit their storage, or lack of it.

      • NS,

        Sorry, I missed this the first time through.

        You said, ” The platinum thermometer doesn’t change much from second to second, so any reasonable summary statistic for the minute will do.” I don’t believe that for a minute! If there wasn’t a concern about changes and noise then they wouldn’t have implemented a range and rate test to validate data. You are just making an excuse to rationalize a poorly conceived data gathering schema. It also isn’t necessary to grid the 60 readings. They could just do greater than or less than tests and stack or sort them and only retain the largest and smallest. However, the important thing is that they have come up with a procedure that is inherently sensitive to noise and doesn’t really utilize all the information that is potentially present in the 60 seconds of data. It has the potential for sending only outliers and not a true estimate of the mean of the population.

      • Nick, usually your comments are constructive, but I can find no details in section 4.1 of the manufacturer of the probe, its model number, or the manufacturer’s specification/data sheet for the probe used. Nor can I find details of the Bureau’s tender specification. Essentially, all I found was:

        The Bureau uses platinum resistance probes to measure surface air temperature at AWS (4.1.1)

        Any equipment the Bureau buys must meet its quality and accuracy performance specifications, which are set out in tender documents.

        As regards the section quoted by you, you are a very competent mathematician so you well know that if the equipment has a response time of say 1 second, and you take 60 one second measurements and average these, the result is not the same as a LIG thermometer that has a response time of 60 seconds and just 1 reading is taken (either the setting of min or max as the case may be).

        BOM’s contention that:

        This process is comparable to the observation process of an observer using a “mercury-in-glass” thermometer.

        is simply wrong, and the same result would only occur as a matter of fortuity.
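        A toy first-order-lag simulation illustrates why the two are not the same. The time constants and the 30-second warm gust below are invented for illustration; this is not the actual instrument physics or BoM's processing:

```python
def first_order(air, tau, dt=1.0, t0=20.0):
    # First-order sensor response: dT/dt = (T_air - T) / tau.
    out, t = [], t0
    for ta in air:
        t += (dt / tau) * (ta - t)
        out.append(t)
    return out

# Five minutes of 20 C air with a 30-second warm gust to 22 C:
air = [20.0] * 60 + [22.0] * 30 + [20.0] * 210

fast = first_order(air, tau=1.0)   # quick probe, ~1 s response
slow = first_order(air, tau=60.0)  # LIG-like, ~60 s response

# A daily-max style statistic differs by over a degree:
max_gap = max(fast) - max(slow)
```

        The fast probe reaches the full 22 C during the gust, while the slow LIG-like sensor never gets within a degree of it, so the recorded maxima diverge even though both sensors are "correct".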

      • Richard,
        “I can find no details in section 4.1 of the manufacturer of the probe”
        Section 4.1.2 is all about the ATMOS DAS. You can read about it here. It also tells you about the associated MS11 and MS12 cards and their BoM history. 4.1.3 is about the Telmet system, which the BoM tried out somewhat unsuccessfully. If you go to the Goulburn metadata pages, p 15, it tells you the details of the probe, number and all, including installation date:

        23/FEB/2012 REPLACE Temperature Probe – Dry Bulb (Now WIKA TR40 S/N – 107822-1) Surface Observations

        I presume this is the probe that sits within the ATMOS DAS.

      • Nick

        Thanks for your further comments; I appreciate them.

        The BOM report does not itself provide the detail set out in the Goulburn report. The Goulburn report does list the maker/model of the probe, and I have had a quick look for the data sheet, but it in turn does not list the thermal response time, e.g. http://www.wika.us/upload/BR_TR40_en_us_18333.pdf

        As you know the AWS, ATMOS DAS and the cards whilst part of the system are not the issue being raised by the head article. The cards are part of a separate issue, namely the cold temperature clipping (which you argue makes little difference to the record, and if cold temps are truly rare events, I envisage that you are probably right on that assertion).

        But materially the official BOM report does not detail the thermal response time of the platinum resistance probe. It is the thermal response time of this probe that is the issue, and BOM are very coy about that. It ought not to be left to the reader to spend hours and hours seeking to obtain details of the central issue, which issue ought to have been squarely addressed with full and complete particularity in the report.

        Of course there is software that collects, records and handles the measurements from the platinum resistance probe, and part of this system is BOM’s averaging of the one-second measurements, which BOM (incorrectly) claims makes the

        process …comparable to the observation process of an observer using a “mercury-in-glass” thermometer.

        If BOM want to make that assertion, they should prove its correctness. You are an extremely competent mathematician, so you well know that the assertion is not correct, and it would only be a fortuity that the average of 1 second data collected from a probe with a response time of 1 second, is the same as the collection of just 1 data point from a LIG thermometer that has a response time in the order of 40 to 80 seconds.

        It worries me that BOM could even make that assertion. It does not say much about the knowledge and quality of their staff.

      • “But materially the official BOM report does not detail the thermal response time of the platinum resistance probe.”
        And as you say, neither does WIKA. I think the reason is that the time depends on both the probe and the environment. The amount of metal determines the heat capacity, but the adjacent layer of air determines the resistance, making a kind of RC for the time constant. And the air resistance depends a lot on movement, which comes down to the enclosure. So I think that when BoM quotes time constants, they are empirical measures of the device in situ.
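        A rough lumped-capacitance estimate, tau ≈ m·c / (h·A), shows how strongly the in-situ time constant depends on air movement. Every number below is an illustrative guess, not a measured BoM value:

```python
m = 0.005      # sensor-plus-sheath mass, kg (illustrative)
c = 500.0      # specific heat of stainless steel, J/(kg*K)
area = 1.2e-3  # exposed surface area, m^2 (illustrative)

heat_capacity = m * c  # thermal capacity, J/K

taus = {}
# Convection coefficient h ranges roughly from still air to
# fan-aspirated flow, W/(m^2*K):
for h in (5.0, 25.0, 100.0):
    taus[h] = heat_capacity / (h * area)
    print(f"h = {h:5.1f} W/m2K -> tau = {taus[h]:6.1f} s")
```

        The same hardware gives time constants spanning an order of magnitude depending on airflow alone, which is why a manufacturer's bench figure says little about the screen-mounted behaviour.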

      • The response time of an RTD is strongly dependent on the case it is put in. I frequently use a 6mm diameter stainless steel case 50mm long because it will push into a 6mm compressed air fitting – the ‘push-in’ type. I have zero expectation that it will respond in 10 seconds to a temperature change. The RTD device is a tiny square perhaps 2mm on a side with 4 wires (if it is a 4-wire RTD). Without any casing, it can respond very quickly to a change in air temperature. Are the BOM units exposed like that?

        It would be useful to know if it is a 6-wire RTD in which case it is a pair – typical for installing where precision is required over a period of time. It actually contains 2 x 3-wire RTDs in a single package.

      • NS,
        You said, ” So I think that when BoM quotes time constants, they are empirical measures of the device in situ.” That is an assumption you are making. Do you have any evidence to support that assumption?

      • Thanks Nick.

        I am a little surprised that a quick search does not reveal the full product specification for the WIKA TR 40 probe. Of course, I do not know whether this model is fitted throughout the network.

        Thermal response is part of the ISO/IEC 60751 Ed. 2.0, 2008-07 test (ISBN 2-8318-9849-8). So either WIKA or BOM ought to have carried out such a test. It should be in the product specification.

        Personally, I would have thought that even if BOM bought WIKA products with a known specification, they ought to have undertaken an in-enclosure test, including thermal response time, as part of the calibration process.

        Personally, I consider that a specific casing material should have been specified so that the platinum resistance probe, when in this casing and housed in the enclosure, had the same thermal response time as the LIG thermometers being replaced. I consider that to be part of the calibration process, unless one wishes to simply truncate the record at the end of the LIG thermometers, with a new record starting from the date of first use of the platinum resistance probes.

        Perhaps BOM should have had 2 records, with no splicing.

      • Clyde,
        You ask me for evidence. On this thread, no-one seems to have evidence of anything. I still haven’t been able to establish what omitted calibration the OP claims was invalidating BoM readings, let alone any evidence that it was actually omitted.

        My suggestion there was based on physical reasoning, which I gave, and is supported by Crispin’s preceding comment:
        “The response time of an RTD is strongly dependent on the case it is put in. “
        He was referring to thermal mass, which is augmented by the casing. I was referring more to the surrounding thermal resistance; both are needed. It is futile to expect a device manufacturer to supply a meaningful time constant based on the sensor alone.

    • I’m still trying to get my head around the name of the device they use. First they call it a thermistor and later it becomes a thermometer. The argument now is over whether there is such a thing as a platinum thermistor, and which one BoM uses; if they don’t know, who does?

  17. In all realism, an edict is needed that anyone in the BOM who is found to be misrepresenting or manipulating data to fit their political beliefs, or what they perceive to be a higher social cause, must be punished to the full extent of the law. If political pressure is applied for BOM employees to misrepresent the facts, employees must report it by a set date or be deemed complicit. Those putting on the pressure to misrepresent data, or threatening the jobs of those unwilling to comply, must simply be removed from society. This includes the most senior of politicians.

    Nothing so sullies the integrity of humanity as the subversion of science for the servitude of politics.

  18. Automatic weather stations were initially proposed for installation in remote areas where there is no possibility of keeping an observer. Later on, several manufacturing companies pressurised UN agencies like the WMO and lobbied local governments to install poor-quality automatic weather stations everywhere. I myself looked into such instruments, raised doubts about the accuracy of the measurements, and in fact reported my observations to the WMO. Money makes many things happen, as with GMOs.

    Dr. S. Jeevananda Reddy

  19. You would not be able to use platinum thermistors without calibrating them. They are very nonlinear and do not output in degrees Celsius or Fahrenheit. We use them in our borehole temperature probes. They are sensitive to about 1/10,000th of a degree and non-linear by several degrees from 0-50 deg C.

    I imagine there is a problem in periodic recalibration, which would probably affect the electronics more than the sensor which is very stable (but nonlinear) over time.

    Years ago we had to calibrate our sensors for Japanese hot-springs. They needed accurate data (+/- 0.5) up to 50 deg C. We calibrated the voltage output to 24 deg C (using a linear conversion) and by the time we were at 50 deg the thermistor was out by over 4 deg (where we needed a 3rd order polynomial). There was never a requirement to go below 0.0 as we assumed the borehole water would be frozen.

    We bought a second calibrated thermometer to check our original calibration. The two thermometers in a water bath were often out by more than 0.5 deg relative to each other.

    A scientist studying fresh ground water asked us to build a thermistor array (up to 8 thermistors in a single probe). The lateral gradient (he argues) gives you the direction of water inflow (or outflow) into the borehole and therefore the direction to the aquifer (assuming you have an onboard orientation system, which we do). This produced some real issues as gradient offsets were present, due to differences in thermistor calibration. Luckily it is the change in gradient that the scientist wanted so we were safe.
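    The kind of non-linearity described here is commonly handled with the Steinhart–Hart equation rather than a straight line. The sketch below uses textbook coefficients for a generic 10 kΩ NTC bead (not the borehole sensor discussed above), and shows how far a single-segment linear calibration can drift by 50 °C; the exact error figure depends on the sensor and differs from the voltage-linear setup described in the comment:

```python
import math

# Steinhart-Hart: 1/T = A + B*ln(R) + C*ln(R)^3, with T in kelvin.
# Coefficients are textbook values for a generic 10 kOhm NTC bead.
A, B, C = 1.129e-3, 2.341e-4, 8.775e-8

def r_to_celsius(r_ohm):
    ln_r = math.log(r_ohm)
    return 1.0 / (A + B * ln_r + C * ln_r ** 3) - 273.15

def r_at(t_c):
    # Numeric inversion by geometric bisection: find the
    # resistance at which the sensor reads t_c degrees C.
    lo, hi = 100.0, 100000.0
    for _ in range(60):
        mid = (lo * hi) ** 0.5
        if r_to_celsius(mid) > t_c:
            lo = mid   # too little resistance -> reads too hot
        else:
            hi = mid
    return lo

# A straight line fitted through the 0 C and 24 C points, as in a
# single-segment linear calibration, is badly wrong by 50 C:
r0, r24 = r_at(0.0), r_at(24.0)
slope = 24.0 / (r24 - r0)
linear_error_at_50 = abs(slope * (r_at(50.0) - r0) - 50.0)
```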

    • Steve,
      Contrary to the claims made in the article, the BoM link that Nick Stokes provided claims that the sensors have to pass acceptance tests before being installed, and that they are periodically (interval not specified) checked for being in calibration and swapped out if they are out of tolerance. Inasmuch as the BoM seems to be operating in a ‘CYA’ mode, I wouldn’t be surprised if the maintenance was only being done when it was obvious that there was something seriously wrong with the readings.

      • “I wouldn’t be surprised if the maintenance was only being done when it was obvious that there was something seriously wrong with the readings.”
        The BoM posts extensive metadata for each of its stations. I have an access system here. Here is Goulburn NSW. It tells you that last inspection and testing was 10 July 2017. OK, that might be following its 15 minutes of fame. But if I look at another random station – say Ulladulla, I see on p 16, last temperature testing 7 March 2017.

      • NS,

        In your readings, did you run across anything that would lead you to believe that there is a regular schedule like weekly, monthly, or quarterly to check the instrumentation?

        Now that I know you are reading the recent posts, I have to ask why you haven’t commented about my showing that your statement about how the readings are made was wrong. [September 11, 2017 at 6:00 pm]

      • Clyde,
        “In your readings, did you run across anything that would lead you to believe that there is a regular schedule”
        Yes. Section 4.4 of the long doc I linked talks a lot about inspections and maintenance protocols. It refers to documents that don’t seem to be online, such as:
        7. Inspections handbook 2010. Bureau of Meteorology. Internal document 60/3317.
        10. Programme of maintenance 2017–18.
        12. Calibration of working reference SPRT and IPRT 2016. Document RIC-TP-WI-002, version 3.0.
        13. Calculation of uncertainty for temperature working references 2016. Instrument test report 709.
        14. Calibration of industrial platinum resistance thermometers and Agromet probes (IPRTs) 2003. Standard procedure number Pt100_02SCP01.
        15. Verification of field IPRT (inspection and field) 2016. Document RIC-TP-WI-004, version 3.0.

        The documents were available to the review panel, and they concluded:
        “overall, the Bureau’s field practices are of a high standard, and reflect accepted practice for meteorological services worldwide”

      • NS,
        You said, ” It refers to documents that don’t seem to be online, such as:..” Do I see a pattern here? The author of this article basically claims that BoM is doing shoddy work. When people try to verify the claim they discover that there are key documents that aren’t available online and that BoM does not seem to be in compliance with the WMO sampling standards. Worse yet, is a reporting procedure that favors impulse noise over any commonly practiced measure of central tendency. You keep giving them the benefit of the doubt!

      • NS,
        I asked about regularly scheduled maintenance and calibration and you responded, “Yes. Section 4.4 of the long doc I linked talks a lot about inspections and maintenance protocols.” However, you had previously remarked that the Ulladulla station was last checked six months ago. Is it your position that all stations are checked every six months (a lot can go wrong in half a year), or that they are only checked when they appear to be malfunctioning, as I suggested? Either way, it leaves a lot to be desired with respect to reliability.

    • Steve

      Another point about these types of sensors, when they degrade over time they all degrade in the same direction, so sensors not calibrated will give a biased reading that shows a trend in only one direction.

      • That is true. Three things to consider. 1-sensitivity, 2-relative accuracy and 3-absolute accuracy.

        These sensors have high sensitivity (0.0001 deg C), good relative accuracy (+/-0.15) and moderate absolute accuracy (+/-0.5).

        Relative accuracy is the reading after correction for sensor non-linearity. The output is now almost linear but not necessarily accurate across the temperature range.
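        One-directional drift is worth simulating, because it turns a calibration problem into a trend problem. The drift rate and noise level below are invented figures for illustration only:

```python
import random

random.seed(1)

# Uncalibrated sensors that all drift the same direction add a
# spurious warming trend even over a perfectly flat climate.
years = list(range(20))
drift = 0.01          # deg C per year, upward (invented figure)
measured = [15.0 + drift * y + random.gauss(0, 0.05) for y in years]

# Least-squares slope of the measured series, deg C per year:
y_mean = sum(years) / len(years)
trend = (sum(m * (y - y_mean) for y, m in zip(years, measured))
         / sum((y - y_mean) ** 2 for y in years))
```

        The recovered trend sits near the drift rate rather than near zero, which is why periodic recalibration (or sensor swap-out) matters for trend work.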

  20. I sense it is starting to unravel for the BoM and similar bodies. They scuppered the last inquiry, but I doubt they will the next. The Minister at the time was Greg Hunt, a simple apologist for the “scientists”, who could not see them doing anything untoward.
    Not unlike myself until about 15 years ago, and I am sure a vast array of others, who believed institutions such as NASA, NOAA etc. were beyond reproach. Sadly that appears not to be the case.
    Now, thanks to Dr Jennifer and others, the pressure is elevating.
    If, as I fear, serious inconsistencies are found, then I believe there are grounds for criminal action against the perpetrators.

  21. No one is asking me, but here goes: the only reason to take one-second readings is to judge how good your recording device is and how well it is sited. The following procedure would work:

    – Of the 60 readings per minute, exclude the highest and lowest, sum the remaining 58 and divide by 58. This is the average for the minute.
    – Record this average, the highest and the lowest, and throw out the other data since it really has no use at this point.
    – Use the highest and lowest only to find that if they differ significantly enough from the average then the instrument or the siting should be questioned, especially if it keeps happening. These high and low temps should be used for nothing else, they are discarded from the average.

    I’m not sure I see this as much different than what the BoM is doing, except somehow they are using these high/low intra-minute temperatures in calculations somewhere??

    • If you believe there could be erratic readings, it would make sense to sort the 60 readings, take the median, and then take reading 50 as the high and reading 10 as the low.
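    A minimal sketch of the trimmed-average idea suggested above; the 0.5 °C flag threshold and the function shape are illustrative assumptions, not a proposal BoM has adopted:

```python
def minute_summary(readings, flag=0.5):
    # Drop the single highest and lowest reading, average the rest,
    # and flag the minute if either extreme sits far from that average.
    s = sorted(readings)
    lo, hi, body = s[0], s[-1], s[1:-1]
    avg = sum(body) / len(body)
    suspect = (hi - avg > flag) or (avg - lo > flag)
    return avg, lo, hi, suspect

# One spike in an otherwise calm minute:
avg, lo, hi, suspect = minute_summary([20.0] * 59 + [23.0])
```

    The spike is excluded from the average but still flags the minute, so it can question the instrument or siting without contaminating the record.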

  22. The stupidity of the masses never ceases to amaze me. Just go to the nearest mirror and say out loud, “It’s all crap”, then get on with your life.

  23. ckb and Steve,
    I agree that a procedure similar to what you suggest is MUCH preferable to reporting only the high, low, and last good reading in the 1-second reading matrix! It reduces the probability of impulse noise (incidentally, BoM notes that one of the weak points in the system is the connections where the resistance can be increased), and gives a better estimate of what is essentially a sample of temperatures, rather than using three of the 60 samples to represent the whole time-series.

    • ” It reduces the probability of impulse noise”
      In fact, they have a guard against that. By “good reading”, they mean one that differs by no more than 0.4C from its predecessor. The min/max are over “good” readings. But the main guard is the thermal inertia, as recommended by the WMO. There isn’t much point in hassling about how to characterise the minute scale data when the response time is of order 40-80 secs.

      Incidentally, none of this is BoM’s design. It is hard-wired into the commercial MS11 card.
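      The rate check as described can be sketched as follows. The 0.4 °C step limit comes from the comment above; whether a rejected sample is compared against the last accepted reading or the last raw one is an assumption here:

```python
def valid_readings(samples, max_step=0.4):
    # Keep only samples within max_step degrees of the previous
    # accepted ("good") reading; min/max are then taken over these.
    good = []
    for s in samples:
        if not good or abs(s - good[-1]) <= max_step:
            good.append(s)
    return good

good = valid_readings([20.0, 20.1, 27.3, 20.2, 20.3])
# The implausible 27.3 jump is dropped; the rest pass.
```

      Note that, as the follow-up comment observes, hash whose step-to-step change stays under the limit passes this filter untouched.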

      • NS,
        So, if the system has high-frequency hash with a peak-to-peak value of 0.6 deg C, all temps get retained and one could see a range of +/- 0.3 deg C. The thermal inertia is irrelevant if the electronics are being impacted by external EM sources. Again, information is being lost when only 3 out of 60 readings are being reported. It may not be BoM’s design, but they bought the systems!

      • There isn’t much point in hassling about how to characterise the minute scale data when the response time is of order 40-80 secs.

        But Nick, the response time is not of the order of 40 to 80 seconds, as you yourself pointed out to me. If it were, each one-second temperature value would not be an instantaneous measurement of the air temperature but an average over the previous 40 to 80 seconds.

        BOM average the data collected every second. This implies that the response time might be as low as 1 second, but they do not expressly say that, or say what the response time is. BOM are unfortunately being rather coy about the fine detail.

        BOM then incorrectly conclude:

        This process is comparable to the observation process of an observer using a “mercury-in-glass” thermometer.

        We record twice per second. Typical “noise” between readings is less than 1/1000th of a degree. To measure a 0.40C change the sensor would have been subjected to a several degree change over a few seconds. In a water-filled borehole we never see such rapid changes unless we pass by an open fracture (with water entering or leaving the borehole). In the shop we see such changes, for example when the air conditioning comes on, or if the sunlight strikes the sensor directly. But such changes would be an indication that the sensor is not properly installed rather than sensor error. If you leave a probe in water you can actually see the water heating up due to the power supply of the probe (50 mW). For this reason, we log only in the downhole direction and then have to wait at least 24 hours for the hole to re-equilibrate. Thermistors have far more sensitivity than is required to measure earth temperature variation. But they are difficult to calibrate accurately.

        If BoM assumes the accuracy of their measurement is +/-0.5 deg C absolute accuracy then they should be fine with these sensors. The circuitry for thermistors is very simple and the product is very stable requiring periodic recalibration on an annual basis. Even recalibration can be a problem because determining new coefficients for the thermistor can shift the “accuracy” of the thermistor up or down by 0.25 deg C. Imagine recalibrating the thermistor and it now reads 0.25 deg C warmer just because that is the limit of calibration. That is what they are up against.

  24. From the Spectator article by Jennifer:
    “Indeed, in the years preceding the flooding of Brisbane the Bureau’s own David Jones, Head of Climate Analysis, was often penning opinion pieces, including for the Sydney Morning Herald, that explained drought was the new norm for Australia. In an email, back in September 2007 he went as far as to say that: “CLIMATE CHANGE HERE IN AUSTRALIA IS NOW RUNNING SO RAMPANT THAT WE DON’T NEED METEOROLOGICAL DATA TO SEE IT.”
    Dr Jones could be characterised as a ‘true believer’. He is now the Head of Climate Monitoring and Prediction at the Bureau. Perhaps not surprisingly the Bureau keeps telling us that next year will be hotter than the last and that this last winter was the warmest on record – never mind the record number of frosts being tallied up by farmers across the south east.”

    Yep, Dr. Jones of the BoM is suggesting we don’t need temperature data anymore: “the science is settled” and “even the rain that falls will not fill up the dams” (the last quote from Australia’s ex-Climate Commissioner, Tim Flannery).
    Probably explains why the BoM climate prediction models for Australia are consistently wrong.
    http://joannenova.com.au/2016/10/bom-september-failure-but-who-can-predict-the-climate-a-whole-month-ahead/

  25. BTW, Australia’s permanent drought is looking grim. Current Dam levels of our capital cities today:
    Sydney, 90%; Melbourne, 67%; Brisbane, 72%; Adelaide, 90% and Perth, 37%.
    Sunny days ahead!

    • These details do not appear to be in the BOM report.

      From a quick search, that I did late last night, it appears that platinum resistance probes have a response time measured in seconds (1 to 3 seconds). Some manufacturers listed the thermal response time as less than 2 seconds.

      If BOM are able to obtain 60 measurements per minute then that would suggest a response time in the region of 1 second.

      • “If BOM are able to obtain 60 measurements per minute then that would suggest a response time in the region of 1 second.”
        That has nothing to do with the thermal response time. It just says how often you choose to apply a small voltage and measure the current. You can do that faster than the temperature is changing if you want.

      • Nick

        Thanks. I am alive to that point, and that is why I used the woolly expression “suggests”.

        If the thermal response was very very slow, there would be no need to take 1 second measurements.

        It seems to me that BOM were alive to the thermal response issue, when they upgraded their system from LIG thermometers. It appears that BOM are very well aware that the platinum resistance probes have a very different response time to that of LIG thermometers.

        We know that BOM are so aware, since they have included software to try and deal with this issue. The question becomes how to deal with that difference. And this is where BOM have erred.

        BOM have introduced some averaging process. They collect data every second from equipment that has a response time in the region of 1 second, and then they average that data. BOM suggest that their averaging process

        is comparable to the observation process of an observer using a “mercury-in-glass” thermometer.

        Nick, I am one of the people on this site, who always read and consider every comment you make. On every article, I specifically look out for your comments, and I very much welcome your contribution to this site. This site would be far poorer without your contributions.

        Your comments are always well argued, but arguing well is sometimes a defensive ploy. Whatever side of the debate one is on, one should be able to be objective. I am not on any side as such; I am only after the truth, and that is why I am sceptical, of course, of both sides’ positions.

        It seems to me that there are three issues involved:
        First, one is the thermal response time of the equipment used.
        The BOM report is defective in that it does not address that issue head on. The BOM report should clearly state the thermal response time of the equipment that it uses.
        Second, what potential impact does the difference in thermal response time between that of LIG thermometers and platinum resistance probes have. Again, that is not clearly dealt with.
        Third, how should one properly deal with the problem created by a different thermal response time? In this regard, BOM state that they deal with it by creating an average of the previous 40 to 80 seconds of data, collected once a second, and then they assert that

        “This process is comparable to the observation process of an observer using a “mercury-in-glass” thermometer.”

        On any basis, the official BOM report is defective. On the last assertion, BOM are plain wrong, and you know that as an extremely competent mathematician. You are probably one of the most competent mathematicians who comment on this site, and you really ought to call out the BOM comment for what it is, i.e., conceptually misconceived.

        BOM are aware of a material issue, and they have made some attempt to deal with it, but the methodology employed does not properly deal with the issue. The question then becomes: what impact has the failure to properly deal with the difference in thermal response had on the resultant record, and what error bars should now be applied to the record?

        The supplementary question is: is there a better way to deal with the thermal lag issue, or are we left simply widening the error bounds on the record?

      • It is unlikely that the temperature response of the measurement system is that of the thermistor. For example, we epoxy a thermistor into (what is essentially) a hypodermic needle. This protects the thermistor but has the effect of increasing the time constant because you have to warm up the housing plus the thermistor, not just the thermistor itself. This leads to lower noise levels because the thermal volume being heated acts as a filter against temperature change over short time periods.

      • Steve

        Here is a picture of what the probe looks like:

        You are right that the response of an individual component when tested in laboratory conditions may be different to the response when it forms part of the system as a whole. The entire system might have a different response time, and the ascertaining/measurement of all of this ought to form part of the calibration process.

        You are also right that the probe could be placed in something. What it is placed in could have a different thermal inertia response, to that of the bare equipment.

        One way to mimic the timing of the thermal response of a LIG thermometer would be to place the platinum resistance probe in a casing which produced the same thermal response time as that of a LIG thermometer. A range of different materials could be tested in the lab, and then again tested in the enclosure alongside the LIG thermometer that the probe is replacing. All of this is part of the proper calibration process.

        This is what I was getting at when I rhetorically asked whether there is a better way of dealing with the thermal response than using software coding. Why not take a materials approach to this issue?

        There is an official standard dealing with the testing of thermal response.

  26. Just now, looking at the BOM’s Sydney area data: at Sydney Airport the 1:00pm temperature was 26.7, and the high temp of 27.2 was also recorded at 1:00pm. This means that within a 60-second interval there was a variation of 0.5 degrees C or greater.

    • “there was a variation of 0.5 degrees C”
      It’s probably real. You’ll see that they have put in an extra value of 21.4 at 1.16pm. That is what they do when there is a sudden cool change. And it says the wind changed in that time from NW to SW. That is then a drop of 5.3C in 16 minutes, so a drop of 0.5 in a minute or so is not implausible.

      • Of course it’s real. The question is whether it would have been possible with a LiG, and in my experience it would be very unlikely.

  27. The media in Australia are reporting record amounts of snow – the most for 17 years. Meanwhile, they assure us that we have just had the warmest winter evah!

    “More spring snow on the way”

    http://www.weatherzone.com.au/news/more-spring-snow-on-the-way/526849

    “Victoria on track to break snow record in spring, while rain persists”

    http://www.heraldsun.com.au/news/victoria/rain-wont-go-away-this-week-with-more-wintry-conditions-on-the-way/news-story/87299e5cf02ca41c3e88e83ce35c7a3e

    “BOM: Australia’s hottest winter on record, maximum temperatures up nearly 2C on the long-term average”

    http://www.abc.net.au/news/2017-09-01/australia-winter-2017-was-hot-dry-and-a-record/8862856

    A severe case of cognitive dissonance IMHO

  28. I live within sight of Buffalo and Hotham and Bulla, and it’s been the coldest winter we have had for ages.
    Mostly our news reports only on the warmest winter eveahhh, and pretty much ignores the coldest bit.

  29. In reality, it is quite obvious that there is all but no quality control by any of the worldwide meteorological societies. The lack of proper quality control means that none of the data sets are fit for scientific purpose.

    If there were quality control, the very starting point would be to undertake a very detailed on-site audit of all the stations in its compass, dealing individually, station by station, with siting, siting changes, local environment changes, equipment used, equipment changes, calibration record, maintenance, practices and procedures including record keeping, length and quality of record, etc.

    It amazes me, in fact I find it dumbfounding, that it would be left to citizen scientists (such as our host and followers, who conducted the surface stations audit) to carry out the most basic of all quality checks. This should have been done worldwide as the starting point, as soon as AGW took hold and prior to the IPCC being set up, or at the very latest it should have been the very first edict made by the IPCC, with the results thoroughly reviewed and examined in AR2.

    Science is a numbers game, and more so, in this particular field. Quality of data is paramount if it is to tell us anything of substance. We should have identified the cream, and disregarded the crud. We should have worked only with the cream, since one cannot make a silk purse out of a sow’s ear. We should obtain good quality RAW data that can be compared with earlier/historic RAW data without the need for any adjustment whatsoever. This would have required the keeping of, or the retrofitting of LIG thermometers in the most prime stations.

    B€ST was a missed opportunity. It should have adopted a different paradigm to the assessment of temperature records, rather than adopting the homogenisation/adjustment approach adopted by the usual suspects. It should have carried out the above audit, selected say 100 to 200 of the best stations and retrofitted these with the same type of LIG thermometers as used by each of those stations, and then observed using the same practice and procedure that was historically used at that station in the 1930s/1940s. Modern collected RAW data could then be compared with each station’s own historic RAW data without the need for any adjustment whatsoever. One would not try and produce a global data set, merely a table showing how many stations showed say 0.2 degC cooling, 0.1 deg C cooling, no warming, +0.1 degC warming, +0.2 deg C warming etc since each station’s historic high of the 1930s/1940s.

    AGW rests upon CO2 being a well mixed gas, such that one does not need thousands of stations to test the theory. One does not need global coverage to test the theory. 100 to 200 well sited prime stations would suffice to test the theory. An approach along the above lines would have given us a very good feeling as to whether there has been any significant warming since the historic highs of the 1930s/1940s and which covers the period when some 95% of all manmade emissions has taken place.

    • “B€ST was a missed opportunity. It should have adopted a different paradigm to the assessment of temperature records, rather than adopting the homogenisation/adjustment approach adopted by the usual suspects. It should have carried out the above audit, selected say 100 to 200 of the best stations and retrofitted these with the same type of LIG thermometers as used by each of those stations, and then observed using the same practice and procedure that was historically used at that station in the 1930s/1940s. Modern collected RAW data could then be compared with each station’s own historic RAW data without the need for any adjustment whatsoever. ”

      1. You fail to understand what we set out to disprove. Skeptics argued that the existing adjustment procedures were TAINTED and SUSPECT. So we devised a data-driven approach: rather than appealing to human judgement about which stations need adjusting, or about what is a “best station”, we let an algorithm anneal the surface. The one human judgement? The “error of prediction” should be minimized.

      2. The process actually allows us to identify the best stations in a mathematical way. Last I counted, out of the 40K stations around 15K of them have no adjustments.

      3. A record of the best stations (choose them any way you like) yields the same answer as adjusted stations.

      • Thanks Steven

        I think that you misunderstand the point that was being made.

        Skeptics asserted that the thermometer record had become unreliable/tainted for a variety of reasons (eg., station moves, station drop outs, station siting, biasing away from high latitude to mid latitude, biasing away from rural to urban, biasing towards airport stations and where airports have greatly developed, equipment changes, and homogenisation/adjustments etc).

        The only way to test whether the record has become tainted and unreliable is to carry out a field test. You can use a different algorithm to that adopted by HADCRU or GISS, and you may make different assumptions in the adjustment and homogenisation of the data; maybe that is, or then again maybe it is not, an improvement on what HADCRU and GISS do. But this in no way gets to the heart of the issue, namely: is the record unreliable and tainted?

        The only way to test the record is to get back to the data, and to conduct a field test and obtain like for like RAW data that requires no adjustment whatsoever.

        Science is about experimentation and observation. The proposition is that due to a variety of reasons/issues the thermometer record has become tainted, so to test the proposition actual experimentation (retrofit best sited stations) and observation should be conducted (observe using the same practices as that used in the past on a by station basis).

        As Lord Rutherford observed:

        “If your experiment needs statistics, you ought to have done a better experiment.”

        We can carry out an experiment, a field test along the lines that I suggest, that does not require any substantial use of statistics.

      • Oh Mossshhher the once Great and Powerful, you say you set out to disprove that existing adjustment procedures were tainted and suspect.

        And to do that you manufactured a single figure which you claim was representative of the entire earth. And to do that you chopped records into little pieces and sewed the pieces together in a way which appealed to you. And the data having been tortured confessed to anything.

        How telling that even you don’t use the word science for your unmitigated data fiddling.

  30. Clarification time.

    This is a report issued by Australia’s BOM a few days ago –
    http://www.bom.gov.au/inside/Review_of_Bureau_of_Meteorology_Automatic_Weather_Stations.pdf
    It is the report at issue with Dr Marohasy.

    Temperatures reported by Pt resistance thermometers show a lot of noise, often more in daylight hours than at night.
    Here is a time series from an Australian site, BOM data –
    http://www.geoffstuff.com/temperature noise thangool.jpg
    In that report it is claimed that natural lethargy of the Screen and Pt thermometer reduces noise to levels comparable to LIG thermometers, but this graph shows that is not so.
    The BOM claims that its procedures delete 1-second readings that are more than 0.4 deg C different to adjacent 1-second readings. This graph shows they do not.
    The BOM made several explanations of why readings colder than about -10C were being clipped. Clipping is not part of WMO best practice.
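    For what it’s worth, the quoted 0.4 deg C rule is easy to express in code. A minimal sketch, based only on my reading of the rule as stated above (not the Bureau’s actual implementation; the sample values are invented):

```python
# A despike filter per the rule as quoted above (this is my reading of it,
# not the Bureau's actual code; sample values are invented).
def despike(samples, threshold=0.4):
    """Drop any reading that differs from BOTH adjacent readings by
    more than `threshold` degrees C (an isolated one-second spike)."""
    kept = []
    for i, t in enumerate(samples):
        near_prev = i == 0 or abs(t - samples[i - 1]) <= threshold
        near_next = i == len(samples) - 1 or abs(t - samples[i + 1]) <= threshold
        if near_prev or near_next:  # keep unless isolated on both sides
            kept.append(t)
    return kept

print(despike([20.1, 20.2, 21.5, 20.2, 20.3]))  # the 21.5 spike is dropped
```

    A sample is dropped only when it is isolated from both neighbours, so genuine step changes survive while one-second spikes do not. If spikes survive in the published data, either such a filter was not applied or it was applied differently.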

    Dr Marohasy used the word ‘calibration’ and claimed the BOM failed to implement it to WMO recommendations or suggestions. Calibration has several meanings in this context as readers have already noted. The important calibration is Pt resistance thermometry against LIG thermometry, to ensure that older records merge seamlessly with newer ones and do not induce an artificial warming or cooling. To my knowledge, there has been inadequate calibration of one against the other.
    If there had been, there would not be graphs like I have shown above. There are many more graphs like that. For example, start here and read on –
    https://kenskingdom.wordpress.com/2017/08/07/garbage-in-garbage-out/
    Geoff

    • I find it hard to believe the variation in the graphs is “noise” from the Pt thermistor. It is likely variation due to installation (i.e. real changes). If you stretch a graph out to minutes rather than hours you can see the temperature variation naturally changing rather than simple sensor noise. If you want to blame BoM for something, blame them for over-sampling.

      • “If you want to blame BoM for something, blame them for over-sampling.”
        And yes, everyone does want to blame BoM for something. But what is wrong with oversampling?

      • SfR,
        Shorthand: I used “Pt thermometer noise” loosely. The noise is not there, AFAIK, with LIG thermometry. Much of the day, the old max/min thermometers recorded a single temperature, unmoving, until they were reset. The Pt resistance instrument has the noise shown and the LIG does not, so I called it “noise” from the Pt resistance thermometer. I do not know the origins of the noise. Still working on that. Geoff

    • Geoff,
      “This graph shows they do not.”
      I don’t think it shows what you claim. The purple annotation says that they are 1 second intervals. But the original black seems to make it clear that they were 1 minute intervals, and the graphs run for 1 hour and 14 hours.

      “To my knowledge, there has been inadequate calibration of one against the other.”
      You give no evidence. I don’t think it is true. But anyway, this isn’t calibrating an instrument for accuracy. It is testing for consistency. There is no reason to expect the LiG to be more accurate.

    • Nick,
      My apologies, you are correct. The lower graph with the purple annotation is 1-minute sampling, not the 1-second that I annotated. Ken had no descriptive label on his axes and I jumped in unthinking.
      However, it remains likely that much of my criticism (and that of Dr Marohasy and Ken Stewart) remains valid.

        We are trying to ensure that the old data from LIG thermometers flows smoothly into the newer Pt resistance instruments, so that true trends over time can be revealed, without a jump in the record when the later instruments started. It is proving very hard to find data to eliminate the jump. It is rather easy to find indirect evidence of a jump, when it is noted that some of the time the old max-min device showed a constant temperature, whereas the Pt devices show noise as depicted. The noise is like spikes on a steadier background. The fundamental question is: do you record a spike, or the steady background? Past signal-processing routines suggest spike removal in many cases.

        Nick, do you have any idea of the cause of the noise, which seems more intense in daylight? I can’t find it in the literature I have read, which offers some speculation but no firm conclusions. If the cause becomes known, correction becomes more targeted.

      You must agree that this noise/spike topic needs to be better understood. Geoff.

      • Geoff,
        “You must agree that this noise/spike topic needs to be better understood.”
        It’s always good to have better understanding. But at least with AWS we have the data. No-one knows about how LiG thermometers reached their max/min values. They may have been just as jumpy.

      • Also, they might not have been just as jumpy. The point is academic for LIG because the reading resolution would have been too coarse to see it. Have you ever watched a LIG thermometer during ordinary temperature change and observed the absence of visible jumps? Geoff.

      • What you term “noise” is actually high-frequency temperature fluctuations due to turbulent mixing of wind-borne air masses. They show up not only in measurements made by thermistors but also, sometimes quite coherently, in hot-wire anemometer records. LIG thermometers have response time-constants very much greater and tend to smooth them out as would any exponential low-pass filter.
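        The smoothing effect of the slow instrument is easy to demonstrate. A minimal sketch of the first-order (exponential) low-pass model described above, with invented time constants and synthetic one-sided “turbulence” (not any instrument’s actual response):

```python
import random

# A first-order exponential low-pass: the classic lumped model of a
# thermometer with time constant tau. Time constants and the synthetic
# "air temperature" below are invented for illustration.
def lowpass(samples, dt=1.0, tau=60.0):
    """Simulate a slow sensor: dT_sensor/dt = (T_air - T_sensor) / tau."""
    alpha = dt / (tau + dt)
    out = [samples[0]]
    for t in samples[1:]:
        out.append(out[-1] + alpha * (t - out[-1]))
    return out

random.seed(1)
# 20 degC background plus short-lived, one-sided warm gusts (turbulence)
air = [20.0 + max(0.0, random.gauss(0.0, 0.4)) for _ in range(600)]

slow = lowpass(air, tau=60.0)   # LIG-like long time constant
fast = lowpass(air, tau=5.0)    # faster resistance probe

# The fast sensor follows the gusts and so records a higher maximum.
print(round(max(fast), 2), round(max(slow), 2))
```

        With one-sided warm gusts, the fast sensor’s recorded maximum always exceeds the slow sensor’s, which is why the response time constant matters for daily maxima in particular.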

      • 1sky1,
        “in hot-wire anemometer records”
        Yes. Those instruments are designed to pick up turbulent fluctuations. But BoM say that their instrument is designed to have a time constant comparable to LiG.

  31. I guess when considering Australian temperature records, one should bear in mind the following:

    AND

    No doubt someone will explain why the temperatures prior to about 1910 should be viewed with caution (even though it is clear that many stations were fitted with Stevenson type screens), and why BOM were right to start their reconstruction around 1910 which happened to be rather cold.

    • “No doubt someone will explain why the temperatures prior to about 1910 should be viewed with caution”

      OK, I will, it took but a few seconds on Google.

      http://www.bom.gov.au/climate/change/acorn-sat/#tabs=Early-data

      “The national analysis begins in 1910

      While some temperature records for a number of locations stretch back into the mid-nineteenth century, the Bureau’s national analysis begins in 1910.
      There are two reasons why national analyses for temperature currently date back to 1910, which relate to the quality and availability of temperature data prior to this time.
      Prior to 1910, there was no national network of temperature observations. Temperature records were being maintained around settlements, but there was very little data for Western Australia, Tasmania and much of central Australia. This makes it difficult to construct a national average temperature that is comparable with the more modern network.
      The standardisation of instruments in many parts of the country did not occur until 1910, two years after the Bureau of Meteorology was formed. They were in place at most Queensland and South Australian sites by the mid-1890s, but in New South Wales and Victoria there were still many non-standard sites in place until 1906–08. While it is possible to retrospectively adjust temperature readings taken with non-standard instrumentation, this task is much harder when the network has very sparse coverage and descriptions of recording practices are patchy.
      These elements create very large uncertainties when calculating national temperatures before 1910, and preclude the construction of nation-wide temperature (gridded over the Australian continent) on which the Bureau’s annual temperature series is based.”

    • Richard,
      “one should bear in mind the following”
      No one should not. It’s the usual Goddard-style nonsense of showing the year by year average of a constantly changing mix of stations (and without area weighting). All it is telling you is that 70 years ago the mix included more warm places.

        There is a simple iron rule here – never average a bunch of different stations without first subtracting a mean of each, preferably over a fixed period.
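        A minimal sketch of that rule (the numbers are invented, not real station data):

```python
# Convert each station to anomalies from its own mean over a fixed base
# period before averaging anything. Station values are invented.
def anomalies(series, base_start, base_end):
    """Subtract the station's own mean over the base period."""
    base = series[base_start:base_end]
    mean = sum(base) / len(base)
    return [t - mean for t in series]

warm = [25.0, 25.1, 24.9, 25.0, 25.1, 25.0]   # a warm station
cool = [10.0, 10.1, 9.9, 10.0, 10.1, 10.0]    # a cool station

a_warm = anomalies(warm, 0, 3)
a_cool = anomalies(cool, 0, 3)

# Despite a 15 degC offset in absolute temperature, the two stations have
# identical anomaly series, so the average of anomalies is not hostage to
# which stations happen to report in a given year.
print([round(a, 2) for a in a_warm])
print([round(a, 2) for a in a_cool])
```

        Because each station is referenced to itself, a station dropping out of the mix shifts the average of absolute temperatures but not the average of anomalies.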

      • Nick,
        The parallel danger is that today the mix includes too many warm/cool days, whatever you wish for.
        The present disposition of BOM stations for ACORN-SAT was chosen by methods unknown to the public. It is unbalanced when you look at a map: there is a weighting around the southern capitals that seems to relate to availability of stations, when the criterion should be a distribution that covers station temperatures representatively. While the anomaly method might compensate for some imbalance, it is a dubious mathematical dodge that is no match for a proper ab initio station selection process. Why, for goodness sake, belatedly add Rabbit Flat to the national record? I have little doubt that station selection has favoured a narrative of global warming, and there seem to be no reports to contradict this.

        The uncertainty introduced by a small number of key stations in central Australia, and the huge weighting they carry, makes official error estimates look optimistic. These stations are not handled adequately, as the frequent appearance of single-station bullseyes on national contoured maps shows clearly. Put simply, the early temperature data, say up to the satellite era, is blatantly unfit for purpose, when the purpose is to inform major political and economic decisions about global warming. Geoff.

      • Nick

        It’s the usual Goddard-style nonsense of showing the year by year average of a constantly changing mix of stations (my emphasis)

        As I mentioned to you, only a couple of days ago, as soon as you change the sample set, the time series becomes meaningless.

        This is why no one knows whether the temperature today is any warmer than it was in 1940 or in 1880 etc.

        If one wishes to know whether there may have been any warming since say 1880, one identifies the stations that reported data in 1880 and which of those have continuous records through to 2017. It may be that in 1880 some 406 stations reported data, and of those 406 stations only 96 have a continuous record to date. So be it. One collects the data for the 96 extant stations, and then a time series plot can be presented, either of absolute temperatures or, if one prefers, of anomalies from any constructed reference period.

        This type of data set informs something of significance, because we have maintained at all times a like-for-like sample. We can say that at the 96 stations there has been a change in temperature, and how much that change is. One can then go on and start investigating why there has been a change in temperature (e.g., perhaps due to urbanisation/change in local land use, or equipment change etc.), whether changes have occurred at any specific time, the rate of change, and whether there has been a change in the rate of change.

        If one wants to know whether today is any warmer than say 1940, one conducts a similar exercise. One identifies the stations that were reporting in 1940 (say 6,000) and then identify which of these have a continuous record to date (say 1500). One then collects the data from the extant stations with continuous record (ie., the 1500) and then a time series presentation can be made. Such a plot will inform something of significance. It will enable us to see what changes have taken place in the area of the 1500 stations over the 77 year period.

        The present presentation of the data tells us nothing because of the constantly changing of mix of stations. At all times throughout the entirety of the time series, you must have precisely the same sample set and no changes whatsoever to that sample set.

        If I am tasked with considering whether the height of men has increased due to better diet over the past 70 years, I cannot go about that task by collecting the average height of Spanish men for the period 1951 to 1960, then the average height of Spanish and Portuguese men for 1961 to 1970, then of Spanish, Portuguese, Italian and German men for 1971 to 1980, then of Spanish, German and Dutch men for 1981 to 1990, then of German, Dutch and Norwegian men for 1991 to 2000, then of German, Dutch, Swedish and Norwegian men for 2000 to 2010, then of German, Dutch, Swedish, Finnish and Norwegian men for 2010 to 2020. If I make a time series plot, I will note that over time the average height of men has increased, but in practice all I am showing is that Southern European men are shorter than Northern European men; I am showing nothing but the resultant change of my sample set.

        The above is an extreme example, but in essence it demonstrates the problem with the present time series anomaly sets. The constantly changing sample set within the series invalidates it, and means that the present presentation of the data is not informative of anything at all. We simply do not know whether it is or is not warmer today than it was in 1940 or 1880, nor, if so, by how much.
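        The sampling artifact is easy to reproduce numerically. A toy sketch (all numbers invented) in which neither station warms at all, yet the average of the changing mix shows a jump:

```python
# Averaging a changing mix of stations manufactures a "trend" even though
# neither station warms. All numbers are invented for illustration.
warm_station = 25.0   # reports every year
cool_station = 10.0   # drops out after the first decade

decade1 = [(warm_station + cool_station) / 2] * 10   # both report: 17.5
decade2 = [warm_station] * 10                        # only the warm one: 25.0

apparent_warming = decade2[-1] - decade1[0]
print(apparent_warming)  # 7.5 degC of spurious "warming"
```

        The 7.5 degC step is entirely an artifact of the sample change, which is exactly the point of the height-of-men analogy.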

      • Richard,
        “as soon as you change the sample set, the time series becomes meaningless”
        But you post it here and say we should bear it in mind?

        In fact, “like for like” means from the same expected probability distribution. That is why you should always subtract the mean, since it is the main source of difference. There may still be a difference of variance, but that won’t bias the average, generally. The time series of anomalies is not meaningless.

        Incidentally, I can’t reproduce that plot even using dumb Goddard methods. Just plain averaging of the varying set gives an increasing trend.

      • Nick

        Thanks for your further comment.

        I take all these data sets with a pinch of salt. All the various data sets have significant issues, such that they are not fit for scientific purpose, the best of a bad bunch probably being Mauna Loa CO2. That said, whilst I accept that CO2 is a relatively well mixed gas at high altitudes, it is anything but well mixed at low altitude, and that is why the IPCC rejected the Beck restatement of the historic chemical CO2 analysis data. Whether the variability of CO2 at low altitudes is significant to AGW, I have yet to see any study on, so who knows.

        What we do not need is area weighting or kriging or expected probability distributions or some fancy statistical manipulation. Like for like simply means the very same stations at all times. No more, no less. If there is an issue with station moves, the station should be chucked out, since it is no longer the same station.

        What we need to accept is the limitation of our data. We could design a new network and obtain good quality data going forward in time, but if we wish to delve into the past then we are stuck with the historic station set-up, which is being used for a purpose for which it was never designed. We should not try to recreate hemispherical or global data sets. I do not consider we should even endeavour to make a country-wide data set, although perhaps the contiguous US may have sufficient sampling to allow a reasonable stab at that.

        There is no need to make such data sets since AGW relies upon CO2 being a well mixed gas such that it should be possible to detect its impact with just 100 to 200 well sited stations. I recall many years ago that Steven Mosher was of the view that 50 stations would suffice. I do not strongly disagree with that assessment, but I would prefer to use more.

        All we need do is to identify 200 good stations that have no siting issues whatsoever, and have good historic data with good known procedures and practices for data collection and record keeping and maintenance. Then we simply need to retrofit those stations with the same type of LIG thermometers used in the past (1930s/1940s) calibrated as they were (Fahrenheit or Centigrade as the case may be, and using the same historical methodology for calibration as used in the country in question), and then observe today using the same practice and procedure (eg., the same historic TOB) as used at each station. In that manner we obtain modern day RAW data that is directly comparable with historic RAW data with no adjusting.

        We only have to go back to 1930/1940, since this is the period from which some 95% of all manmade CO2 emissions have been made. Obviously it would be interesting to look at earlier periods going back to circa 1860, but we do not need to do so in order to examine the AGW theory. If you want to go back earlier, maybe you will need 2 different LIG thermometers, and you may need to use 2 different observation periods. It is simply a question of examining the historic record.

        Now obviously that approach is not perfect. A retrofit LIG thermometer will not be identical, but if the same manufacturing process is used and the same calibration techniques used, the error bounds will be small. We then simply say what has happened at each of the 200 locations. We do not seek to average them together or to form any spatial map, but we should note how many show 0.2 degC cooling, 0.1 degC cooling, no change, 0.1 degC warming, 0.2 deg C warming etc. Just simply tabulate the result without the use of any fancy statistics. This approach would tell us something of real substance.
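        The tabulation described above needs nothing fancier than counting. A sketch (the per-station changes are invented for illustration; "change" means modern raw minus 1930s/40s raw, per station):

```python
from collections import Counter

# Hypothetical per-station changes in degC (modern raw minus 1930s/40s raw);
# the station values are invented for illustration.
changes = [-0.2, -0.1, 0.0, 0.0, 0.0, 0.1, 0.1, 0.1, 0.2, 0.3]

# Count stations into 0.1 degC bins -- no averaging, no gridding.
bins = Counter(round(c, 1) for c in changes)
for delta in sorted(bins):
    print(f"{delta:+.1f} degC: {bins[delta]} station(s)")
```

        The output is simply a table of how many stations cooled, stayed flat, or warmed, and by roughly how much, with no spatial weighting applied.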

        Whereas presently we have no idea whether the temperatures today are warmer than they were in the 1930s/1940s or than they were in the 1880s. All we know is that there has been some movement since the LIA and that there is much yearly and multidecadal variation in temperatures.

      • Further to my above comment in which I suggest that we should accept the limitation of our data. Here is a good visual that shows the point.:

        AND

        The proposition is that CO2 is a well mixed gas and increasing levels of CO2 leads to warming.

        The further proposition being that CO2 levels increased significantly only after 1940.

        The point is that it is not necessary to make any global or hemisphere wide reconstruction. We can test the proposition(s) without such.

        Since CO2 is well mixed, we only need a reasonable number of well sited stations. Since CO2 increased significantly only after 1940, we need only consider what the temperature was for the period say 1934 to 1944 as measured at the well sited stations, and then take measurements today of the current temperatures at those well sited stations, using the same type of equipment, practice and procedure as used in the period 1934 to 1944.

        There is no shortage of money going into climate science, and this observational test could quickly be carried out, and we would know the result within a few years. Materially, we can simply compare RAW data with RAW data with no need for any adjustments, and there is no need to employ the use of any fancy statistical model or technique.

        As Lord Rutherford suggested: design an experiment that does not need statistics to explain the result.

      • For the sake of completion:

        Australia has a bit of data, but it is generally on the East coast. There is no spatial coverage, unlike the image of the contiguous US above. Even with the US, it is only the Eastern half that is really well sampled.

        If quality data does not exist, it simply does not exist, and I consider it unscientific to present data suggesting that we have insight into global temperatures on an historic basis.

      • Richard,
        “we should accept the limitation of our data”
        You have the limitation of getting your stuff from Steven Goddard. What you are showing are the stations with daily data in GHCN Daily. What is used for global indices are the stations for which we have monthly averages recorded, a different set. Here is a Google Maps page which lets you see which GHCN monthly stations were reporting at different times (and many other things). On that map you can click each station for details. Here is a shot of those whose reporting period included 1900 (there are 1733). It is very different from what you show:
        https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/09/1900.png

      • Nick

        Thanks for your further comments. I appreciate the time spent getting back to me.

        Probably most commentators on this site have a scientific background, and most of us are commenting on matters outside our direct expertise, where our knowledge may be limited. But then again, Feynman was insightful when he observed that “science is the belief in the ignorance of experts”. With expertise comes the risk of becoming blinkered, failing to see the wider picture, and being unable to see the wood for the trees.

        I have been on that website before, which looks interesting, but I have never had success with it. Tonight I sought to obtain a plot of the SH and waited 40 mins, but still no map loaded. But if you look at your screen shot, you will note that in the SH (this is just north of, say, Libreville in Gabon) there are relatively few stations.

        I say that if you want to make any meaningful comparison, one must always compare like for like, and this is a point that you have side-stepped. Returning to the thrust of the original article, I have suggested above that the platinum resistance probe should have been cased in material with the same thermal response time as a LIG thermometer, that this ought to be part of the calibration of the equipment, and that field tests (in-enclosure tests) should then have been conducted, for a lengthy period, with the in-enclosure LIG thermometer and the platinum resistance probe monitored alongside each other. This is part of the end-to-end system calibration. It appears that this type of calibration was not done. I consider that to be a serious failing of BOM, but of course they are not alone in that.

        A German scientist, Klaus Hager (Chair of Physical Geography and Quantitative Methods at the University of Augsburg), conducted a side-by-side comparison between a platinum resistance thermometer (PT 100) and a LIG thermometer, and found that the platinum resistance thermometer read high by about 0.9 degC:

        That is the sort of field test that BOM should have conducted at many different station sitings.

        The response characteristics of thermometers are a huge and important issue. See, for example, the guidance given in the 1982 Manual for Observers (https://archive.org/stream/instructionsforv00unitrich#page/18/mode/2up):

        When a maximum thermometer is not read for several hours after the highest temperature has occurred and the air in the meantime has cooled down 15 or 20, the highest temperature indicated by the top of the detached thread of mercury may be too low by half a degree from the contraction of the thread.
        When the fall of temperature from the highest point is very slow a little of the mercury may pass down before the thread breaks, especially when there is no wind to cause a slight jarring of the instrument…. (my emphasis)

        So this is a behavioral characteristic of a LIG thermometer, but how is this behavior replicated with the new replacement platinum resistance thermometer? This is a further issue over and above the thermal response issue.

        The thermal response issue becomes more of an issue with poorly sited stations, since the bias will tend exclusively, or very much, towards warming. If one has, say, a little swirling gust of wind bringing air from over a tarmac plot, or over a runway, or from a passing jet plane, one will see a short-lived warming spike. However, a passing jet plane, or air over tarmac, will never produce a cooling spike. The inherent bias is not equally distributed.

        This type of bias causes real issues. Only a couple of years ago the UK Met Office declared the highest-ever UK temperature. It was at Heathrow, and there was a short-lived spike of about 0.9 degC (if I recall correctly). Subsequent investigation established that a large jet plane was manoeuvring in the vicinity at the time in question. Was this the reason? Who knows, but it is probable that the old LIG thermometer would not have measured such a spike.

        Then there is another issue: the modern-day enclosures are very different to the old Stevenson screens, the size and ventilation are very different, and these different enclosures create their own different thermal response.

        We either have to accept that, due to a variety of accumulated issues, our historic land-based thermometer records are accurate to around 2 to 3 degC, or, should we wish to have a lower error bound and hence something more informative, we need to completely recompile the data and take a completely different approach to data acquisition and handling: reproduce, as best possible, like-for-like observation, including keeping at all times an identical sample set, using the same type of enclosure, painted with the same type of paint, using the same historic equipment, and utilising the same historic practices and procedures etc.

        Finally, I would point out that if our data sets were accurate to within a few tenths of a degree, then we ought to know Climate Sensitivity to CO2 within a few tenths of a degree. The very fact that the IPCC sets out such a wide bound for Climate Sensitivity in itself shows that our data sets have error bounds in the region of 2 to 3 degC. It is the wide error bounds of the temperature data sets that leave us unable to find the signal above the noise.

      • Richard,
        “Tonight, I sought to obtain a plot of the SH and waiting 40 mins but still no map loaded.”
        Sorry to hear that. I think it generally works with Chrome or Firefox, and mostly works with Safari. It needs Javascript enabled. If it’s going to load, it shouldn’t take more than a minute or so, at most.

        As a further check, here is the SW of WA in 1900. Your plot showed only one station. But there are 14:

        You can verify with the NOAA station sheets. The three in the bottom SW corner are Cape Leeuwin, Katanning, and Albany. Each month marked with a colored dot.

        On your notions about calibrations, remember that the objective of AWS is not to agree with LiG, but to get it right. If there is a disagreement with LiG, that is a matter for homogenisation, not the indefinite preservation of what may have been errors in LiG. Your Augsburg comparison simply shows a 0.9C discrepancy between two measurements. It doesn’t show which is right. It suggests one wasn’t calibrated properly.

        “It appears that this type of calibration was not done. I consider that to be a serious failing of BOM, but of course, they are not alone on that.”
        You’ve been pretty dogmatic about what BoM hasn’t done, but I don’t think you have made much effort to find out whether they have or not.

      • Your Augsburg comparison simply shows a 0.9C discrepancy between two measurements. It doesn’t show which is right. It suggests one wasn’t calibrated properly.

        Improper calibration would produce a consistent discrepancy, not a “jumpy” one. What’s apparent here is a marked difference in frequency-response severely affecting the detection of daily temperature maxima.

      • “Improper calibration would produce a consistent discrepancy”
        On its own, yes. But it would account for the bias, which seems to be the author’s preoccupation. It’s unclear whether the variation would be different with two identical instruments similarly placed. No control seems to have been performed.
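        The distinction being argued here (a constant calibration offset versus a frequency-response effect on daily maxima) can be made concrete with a toy simulation. Every number below is an assumption for illustration (the time constants, the turbulence model); it does not represent any actual BoM probe or screen.

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 1.0                  # seconds per sample
n = 4 * 3600              # four hours spanning the afternoon peak

def first_order(x, tau):
    """Discrete first-order (RC-style) low-pass with time constant tau seconds."""
    a = dt / (tau + dt)
    y = np.empty_like(x)
    y[0] = x[0]
    for i in range(1, len(x)):
        y[i] = y[i - 1] + a * (x[i] - y[i - 1])
    return y

daily_gaps = []
for day in range(10):
    t = np.arange(n) * dt
    base = 28.0 - 2.0 * ((t - n / 2) / (n / 2)) ** 2       # smooth hump, peak mid-window
    turb = np.convolve(rng.normal(0, 1.5, n), np.ones(30) / 30, mode="same")
    temp = base + turb                                     # "true" air temperature
    fast = first_order(temp, tau=5.0)    # quick probe -- assumed 5 s time constant
    slow = first_order(temp, tau=80.0)   # sluggish LiG-in-screen -- assumed 80 s
    daily_gaps.append(fast.max() - slow.max())

daily_gaps = np.asarray(daily_gaps)
print("fast-minus-slow recorded maxima (C):", np.round(daily_gaps, 2))
```

        Day to day the gap between the two recorded maxima jumps around with whatever turbulence happened to occur, which is the “jumpy” discrepancy described above; a pure calibration offset would instead shift every reading by the same constant amount.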

    • “I guess when considering Australian temperature records, one should bear in mind the following:”

      Yes we should and here it is …..

      Not the version you displayed and corrupted as Nick says.

      • BoM average anomalies suffer no less than Hadley CRU anomalies from shuffling stations in and out the set being averaged. The trick lies in the shuffle, just as with “mechanics” in Las Vegas. By insisting upon a FIXED set of stations throughout the entire time span and by avoiding UHI-corrupted urban stations, a set much more representative of the actual temperatures is obtained. Fortunately, Australia has a dozen or so small-town stations reasonably evenly scattered throughout its territory. Their average anomaly shows a great minimum in 1966 and a far gentler century-long trend than BoM’s graph above.

      • It’s at remote places such as Southern Cross, Marble Bar, Alice Springs, Cape Otway, etc. that one finds well-kept long records that manifest little UHI. It’s imperative, however, to use the data as originally recorded–not as spuriously adjusted decades later via ill-founded, blind comparisons with neighboring stations of much inferior reliability. Such ad hoc “homogenization” of indiscriminately located anomalies produces a trend 2 to 3 times higher! Alongside BoM’s attestations of consistent instrument response, its various claims of “improved” reliability simply don’t withstand scrutiny.

        BTW, my earlier reference to a “great minimum” should read 1976, in concordance with that observed globally, instead of 1966.

    • Obviously, the satellite data set has its own issues, but the most significant one is that it does not cover the 1930s/1940s.

      The start date of 1979 is a severe restriction on its usefulness.

  32. I still maintain that once science is used to justify a practical function, peer review should end and it should have to face a proper quality control department inspection with people paid just to knock holes in the work.
    It is not just the Australian measurements that fail to maintain even remotely adequate standards; the UK data would fail the tests applied to products sold for a pound in a major UK chain. As if this were not bad enough, they would fall into the rarely used category of “do not consider this supplier again until they have first produced evidence of a considerable level of improvement”.
    I have also seen the results, and some of the tests, proving that the Stevenson screen is not up to sub-degree accuracy under differing air-cleanliness conditions, but we have no data at all on this aspect of the measurements. The assumption that pre-industrial society had really clean air is readily provable to be fallacious.

  33. As an engineer who designs measurement systems, can anyone point me to the data sheet for the sensor in this controversy? It’s not just the sensor that matters, but also the amplifier circuit, the power supply, and the data-capture system. All of these have to have their error budgets calculated. It would be good to understand the end-to-end system, if there is a design spec out there.

    • While you can get data sheets readily from reputable manufacturers (e.g., Belfort Instruments), experience indicates that their reliability is very mixed. That’s why responsible geophysical researchers insist on doing their own calibration tests before taking a sensor into the field. Furthermore, in the case of weather station equipment, it’s not the response of the sensor per se, but the effect of the enclosure that is most difficult to nail down. The white-wash or paint on wooden Stevenson screens deteriorates unevenly over time, as do the plastic thermistor enclosures of AWS. The error budget of the “end-to-end system” thus is by no means easy to understand, especially since test information from long-term deployments is often jealously guarded.
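      For what it’s worth, the error budget asked about above is typically tallied both as a worst-case sum and as a root-sum-square of independent component uncertainties. The component values below are purely illustrative assumptions, not figures from any BoM or manufacturer specification:

```python
import math

# Illustrative 1-sigma component uncertainties (deg C) -- assumed values only,
# not taken from any BoM or manufacturer specification.
budget = {
    "sensor (PRT tolerance)":   0.10,
    "bridge/amplifier":         0.05,
    "ADC quantisation":         0.03,
    "self-heating":             0.02,
    "screen/enclosure effects": 0.20,   # often dominant and hardest to bound
}

worst_case = sum(budget.values())
rss = math.sqrt(sum(v ** 2 for v in budget.values()))
print(f"worst-case stack-up: {worst_case:.2f} C; RSS (independent errors): {rss:.2f} C")
```

      Note that the dominant enclosure term is exactly the one for which no reliable datasheet figure exists, which is why the end-to-end budget is so hard to pin down from paperwork alone.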

      • I’ve broken ranks with Marohasy. Although I’ve contributed on her blog and defended her in various forums in the past (including The Spectator and The (ridiculous) Conversation), some of what she now writes is extreme and smacks of unscientific vindictiveness; while I might watch from a distance, I’m no longer on her side. Although I once thought we were (as they say) on the same page, there is simply no victory to be had in the approach she advocates.

      • Standing back with a cool head there are only a few key issues involved and most can be investigated objectively using Bureau data.
        1. It was never envisaged that historic data would be used to benchmark climate trends. Consequently, the history and background of most historic datasets (earlier than 1960) is poorly known or not thoroughly researched.
        No sites in Australia (from Darwin to Hobart; Norfolk Island to Sydney Observatory; Alice Springs to Port Hedland and Broome) have remained the same since their records commenced. (Of the hundreds of sites I’ve researched, only Tennant Creek seems to have remained in the same place; and even its data are affected by nearby changes.)
        2. Changing from large (0.23 m^3) Stevenson screens in use since records commenced to small ones (0.06 m^3) at about the same time; and changing from eyeballed thermometers to electronic probes (and in many cases moving the screen somewhere else), also at about the same time caused a kink in Australia’s temperature record that had nothing to do with the climate.
        3. No sites are homogeneous and it’s a fantasy that faulty data can be homogenised using other data that are faulty. Selection of comparator stations based on their correlation with the target ACORN-SAT site is ludicrous. Homogenisation is done everywhere using comparators having parallel faults. The whole process is a fallacy.
        4. Unmanned sites beside dusty tracks, or at airports and lighthouses where gear is not maintained, are guaranteed to produce data biased high. Only idiots use data they can’t verify, and the climate-science world is overflowing with them.
        I disagree with Marohasy; I doubt she understands climate data; how it is collected; why in its raw form it is no use for detecting trends and how hard it is to get data that reflects just the climate. I also disagree with the Bureau, which although abbreviated as BOM is code for climate politics.
        Having taken turns with colleagues observing the weather for over a decade from 1971; set up weather stations at experimental sites; evaluated electronic probes with Stevenson screen data in collaboration with a manufacturer and analysed and used climate data for longer than I can remember; I can vouch that no Australian weather data are useful for measuring trends in the climate.
        Also, even though their network is hopelessly compromised by site and instrument changes; and practices such as herbicide use; gravel mulching; cultivation; poor exposure; sensitive high-frequency probes operating in screens that are too small; and sites that are demonstrably poorly maintained; it is a fantasy that anyone can do it any better.
        Cheers,
        Dr. Bill Johnston

      • I can vouch that no Australian weather data are useful for measuring trends in the climate.

        While I agree, in principle, with all your trenchant criticisms of BoM’s data products, the practical situation is not as dire as you paint it. What permits capable signal analysts to obtain quite useful time series of average anomalies is reliance upon the power of aggregate averaging of long, intact station records. The effects of most of the duly-noted vagaries of individual station records are greatly diminished thereby.

        What is not diminished, of course, are systematic biases, such as due to UHI effects. But these localized effects can be distinguished from bona fide long-term regional climatic variations by their incoherence between stations. Similar to the means by which signal is distinguished from noise in target acquisition schemes, a rigorous vetting procedure for records is mandatory. Sadly, “climate science” is woefully unsophisticated in signal analysis and is obsessed with utilizing all available snippets of data indiscriminately. Proper analysis and screening procedures can demonstrably do much better than that.
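        A minimal sketch of the aggregate-averaging point, with assumed numbers: incoherent station-level noise shrinks roughly as 1/sqrt(N) when N records are averaged, while a bias shared across the network passes through untouched.

```python
import numpy as np

rng = np.random.default_rng(2)
n_years, n_stations = 100, 25
climate = 0.6 * np.linspace(0.0, 1.0, n_years)        # shared regional signal, deg C
shared_bias = 0.3 * np.linspace(0.0, 1.0, n_years)    # e.g. a network-wide UHI drift

# Each station sees signal + shared bias + its own incoherent vagaries
station_noise = rng.normal(0.0, 0.5, (n_stations, n_years))
records = climate + shared_bias + station_noise

mean_series = records.mean(axis=0)
residual_sd = (mean_series - climate - shared_bias).std()
print(f"per-station noise sd 0.50 -> after averaging {n_stations} stations: {residual_sd:.2f}")
print(f"shared bias remaining in the average at end of record: {mean_series[-1] - climate[-1]:.2f} C")
```

        The incoherent noise is reduced about fivefold (1/sqrt(25)), but the shared drift survives in full; that is why incoherence between stations, not averaging alone, is the handle for separating localized bias from regional climate, as described above.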

  34. Don’t get me wrong.

    The Bureau’s data are and have always been useful for describing the weather; for deriving site statistics; for comparing Adelaide and Darwin, or Sydney and Low Head, or Townsville (which is a legitimate use); and for modelling such things as pasture growth and farming systems, which I’ve done. However, changeable site control (or not even knowing what went on) makes them useless for benchmarking or analysing trends. (Data are inherently coarse/noisy, and trends are inseparable from serial site changes.)

    Cheers,

    Bill

  35. I’m not referring to weather in my comments. My point is that there are rigorous means for identifying stations whose “changeable site control (or not even knowing what went on), makes them useless for benchmarking or analysing trends.” It’s widespread ignorance of those proven analytic methods (and the sheer convenience of passing off UHI as AGW) that inflates the trends of various published climate indices.

    • 1sky1, I base my views on independent post hoc corroboration of data changes with site changes, using aerial photographs, visits to museums and archives, local historical societies, etc. There is lots of information available; having the time and energy to find it is the problem.

      Changes occurred at many sites across Australia’s network that have been deliberately ignored by homogenisation; on the other hand, changes that had no impact on data are “adjusted”. The result is to engineer trends that don’t exist. Furthermore, faulty data embedding spurious trends are used to adjust other sites. I can give numerous examples but I’m just about to shut down and do something else.

      The primary reference for homogenisation (Peterson et al Int. J. Climatol 1998: pp. 1491 to 1517), has 21 Authors and all groups who do it are represented. It is a consensus paper; what did any of those Authors actually contribute? There is hardly anyone outside the tent able to give critical review anyway. However, it is possible with careful patient research to now provide rebuttal; the challenge then is to get a rebuttal paper published. Which “expert” would do fair peer review for example? Why has the theory not been tested by the numerous professors and PhD students who use homogenised data without checking their veracity from the bottom-up?

      It is almost too big a boat to rock. Just imagine the consequences for CSIRO and the Bureau, “The Conversation”, and everything (including electricity prices) when homogenisation comes apart at the seams.

      Cheers,

      Bill

      • “Just imagine the consequence for CSIRO and the Bureau; “The Conversation” and everything (including electricity prices), when homogenisation comes apart at the seams.”
        CSIRO does nothing with surface temperature analysis. But the effect of homogenisation can be very simply tested by doing the same analysis on unhomogenised data. And it makes very little difference. Electricity prices wouldn’t change.

      • Bill:

        The task of linking data changes to site changes is indeed arduous–and not entirely necessary if only relatively uncorrupted, long station records are being sought. My experience world-wide is that only a small percentage of available records pass stringent spectral tests for very significant coherence at the lowest frequencies. They are found primarily at small towns, reasonably abundant in the USA, Northern Europe, and Australia, but quite rare elsewhere.

        What defenders of ill-conceived “homogenization” schemes fail to realize is that the rest are simply not fit for the purpose and should be totally excluded from climate-change analyses. (The notion that truly reliable ad hoc adjustments can be made to all records is patently foolish.) Failing to do so, they cheerfully indulge in the casuistry of “testing” the effect of homogenization upon an egregiously corrupted, globally primarily urban, data base. Small wonder that their exercise leads to the conclusion that “it makes very little difference.” By eschewing such circular reasoning, factor-of-2-or-more reductions in actual century-long linear trends (not highly variable trends per decade expressed per century, as Nick Stokes paints it) readily emerge in the estimates of continentally averaged yearly anomalies.

      • “factor-of-2-or-more reductions in actual century-long linear trends”
        The graph I showed here is below. It shows, with adjusted (purple) and unadjusted (blue) global data, the trend you get for periods from the x-axis date to present. So it includes, on the left, century-long linear trends. The difference is less than 10%, not factor of 2. The breakdown by continent is done in detail here.
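        The “trend from the x-axis date to present” construction can be sketched as follows. Both series here are synthetic stand-ins (the 0.05 degC/century adjustment drift is an assumed number, not the actual GHCN adjusted-minus-unadjusted difference):

```python
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1900, 2018)
# Synthetic stand-ins for unadjusted and adjusted global annual means
unadj = 0.008 * (years - 1900) + rng.normal(0.0, 0.1, years.size)
adj = unadj + 0.0005 * (years - 1900)    # assumed net adjustment: +0.05 C/century

def trailing_trend(y, t, start):
    """OLS trend (deg C per century) from `start` through the end of the record."""
    m = t >= start
    return np.polyfit(t[m], y[m], 1)[0] * 100

for s in (1900, 1950, 1980):
    print(s, round(trailing_trend(unadj, years, s), 2),
          round(trailing_trend(adj, years, s), 2))
```

        With a fixed 0.05 degC/century difference, the two trailing-trend curves track each other closely at every start date, which is the shape of the comparison being described; whether the real adjustment difference is actually that small is, of course, exactly the point in dispute here.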

      • Because it is based upon indiscriminate use of data from all GHCN stations, instead of only properly vetted time-series, the above graph sheds no light upon the crucial issue at hand. Nick Stokes simply perpetuates the circular reasoning that mere numerical consistency of operations performed upon egregiously corrupted data tells us something meaningful about physical reality.
