Australia Weather Bureau Caught Tampering With Climate Numbers

Energy

Climate Change wooden sign with a desert background (By ESB Professional/Shutterstock)

From The Daily Caller

Chris White

9:57 PM 07/31/2017

Australian scientists at the Bureau of Meteorology (BOM) ordered a review of temperature recording instruments after the government agency was caught tampering with temperature logs in several locations.

Agency officials admit that the problem with instruments recording low temperatures likely occurred in several locations throughout Australia, but they refuse to admit to manipulating temperature readings. The BOM located missing logs in Goulburn and the Snowy Mountains, both in New South Wales.

Meteorologist Lance Pidgeon watched the July 2 Goulburn reading of minus 10.4 degrees Celsius (13.3 degrees Fahrenheit) fluctuate briefly and then disappear from the bureau’s website.

“The temperature dropped to minus 10 (14 degrees Fahrenheit), stayed there for some time and then it changed to minus 10.4 (13.3 degrees Fahrenheit) and then it disappeared,” Pidgeon said, adding that he notified scientist Jennifer Marohasy about the problem, who then brought the readings to the attention of the bureau.

The bureau later restored the original minus 10.4 degree Celsius (13.3 degrees Fahrenheit) reading after a brief question-and-answer session with Marohasy.

“The bureau’s quality control system, designed to filter out spurious low or high values, was set at a minus 10 minimum for Goulburn, which is why the record automatically adjusted,” a bureau spokeswoman told reporters Monday. BOM added that there are limits on how low temperatures can go in some very cold areas of the country.
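
The clipping behaviour the spokeswoman describes can be sketched in a few lines. This is a hypothetical illustration only: the limits, station, and readings below are assumptions for the sake of the example, not the bureau’s actual code or data.

```python
# Hypothetical sketch of a range-based quality-control filter like the one
# described: readings outside [lo, hi] are flagged and dropped from the
# published record. Limits and observations are illustrative assumptions.

def qc_filter(readings_c, lo=-10.0, hi=55.0):
    """Return (accepted, rejected) lists of (time, temperature-in-C) pairs."""
    accepted, rejected = [], []
    for t, temp in readings_c:
        if lo <= temp <= hi:
            accepted.append((t, temp))
        else:
            # silently discarded from the public record
            rejected.append((t, temp))
    return accepted, rejected

# A plausible cold morning at a station like Goulburn:
obs = [("05:00", -9.8), ("05:30", -10.0), ("06:00", -10.4), ("06:30", -9.6)]
ok, dropped = qc_filter(obs)
print(ok)       # the -10.4 reading is absent
print(dropped)  # [('06:00', -10.4)]
```

Note that a hard floor like this rejects a legitimately cold reading even when the instrument is working correctly, which is the crux of the dispute described in the article.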

Bureau’s Chief Executive Andrew Johnson told Australian Environment Minister Josh Frydenberg that the failure to record the low temperatures at Goulburn in early July was due to faulty equipment. A similar failure wiped out a reading of 13 degrees Fahrenheit at Thredbo Top on July 16, even though temperatures at that station have been recorded as low as 5.54 degrees Fahrenheit.

Failure to observe the low temperatures had “been interpreted by a member of the community in such a way as to imply the bureau sought to manipulate the data record,” Johnson said, according to The Australian. “I categorically reject this implication.”

Marohasy, for her part, told reporters that Johnson’s claims are nearly impossible to believe, given that screenshots show the very low temperatures before they were “quality assured” out. It could take several weeks before the equipment is tested, reviewed and returned to service, Johnson said.

“I have taken steps to ensure that the hardware at this location is replaced immediately,” he added. “To ensure that I have full assurance on these matters, I have actioned an internal review of our AWS network and associated data quality control processes for temperature observations.”

BOM has come under scrutiny before for similar adjustments. In August 2014 the agency was accused of tampering with the country’s temperature record to make it appear that temperatures had warmed over the decades.

Marohasy claimed at the time that BOM’s adjusted temperature records are “propaganda” and not science. She analyzed raw temperature data from sites across Australia, compared it to BOM data, and found the agency’s adjustments created an artificial warming trend.

Marohasy said BOM adjustments changed Aussie temperature records from a slight cooling trend to one of “dramatic warming” over the past century.

Follow Chris White on Facebook and Twitter

HT/MarkW and Joe Perth

103 thoughts on “Australia Weather Bureau Caught Tampering With Climate Numbers”

      • Hear, hear. Why limit it at all? If there is any trend, eventually your spurious limit will be broken. Take all the readings, then let the stats do the work. Setting these artificial limits is essentially picking and choosing your data points to suit a preconceived idea. Not science.

    • How does ‘replacing the hardware’ not lock in the fake trend? (I assume ‘hardware’ means equipment that automatically self-censors low temperatures, so as to not let such embarrassing matters get out in public again.)

      These people are not a scientist’s bootlace.

    • Geez, at least choose a unit of measurement, C or F, and stick with it. Don’t put the number 10 and then put another number and say it’s 14 F. I know the story and it was very distracting. Personally, I would use C like the rest of the world…

      • I agree. The unnecessary mixing of Celsius and Fahrenheit makes the article difficult to follow.

    • Tom Halla,
      I am astonished.
      The watermelons, as you rightly imply, will use every deceit in the book – it seems, being an outsider – to massage the Adjustocene records.

      Sad.
      Sad – but not science; advocacy; or taking thirty pieces of silver.

      Auto

  1. OH how the AGWers would SCREAM if there were a maximum temperature preset and measurements were automatically dropped when readings went above that threshold.

    • Many years ago the Thai Tourist Bureau decreed that temperatures at Bangkok airport would never be greater than 95F. Bad for tourism, donchano.

      • It might have been an issue once at an important station in my home town in Australia during the ’70s and early ’80s. A friend who lived nearby said they watered the lawns around the station in the middle of the afternoon on very hot days. January mean temperatures there show a negative trend until 1990.

  2. This is simply bogus. They’re hiding behind automated “outlier” rejection as an excuse.

    If you automatically adjust “outliers”, you’ve programmed the results. It isn’t science.

      • Indeed Latitude. As I wrote on Jennifer’s blog:

        It may be informative to search the records for Goulburn and Thredbo for minimums of exactly -10.0 and for blank minimums.

        The best case scenario is that the BOM has an automatic process to identify spurious temperatures and is slow in fixing them. The worst case scenario is that they don’t bother fixing them until somebody makes some noise.

        In either case they should not be displaying fake data and their filtering process is deficient. It would be helpful for the BOM to have a “we were wrong” section on their web site, but I expect to see pigs fly long before that happens.

      • From here,

        http://www.bom.gov.au/climate/change/acorn-sat/documents/ACORN-SAT_Observation_practices_WEB.pdf

        The range could be
        […Since 1938, for the meteorological range of –10 to +55°C…]

        And if that is indeed the range, this could easily be one of those “adjustments” whereby more legitimate cold data is rejected than hot data thus biasing the more modern automated readings to be warmer. One might wonder what ranges are in place in other countries.

      • Further down in the document is

        Unlike the MiG and AiG thermometers, automated processes developed in the late 1980s and the next two decades enabled all PRS purchased for use to be checked prior to use in the field; to ensure they met manufacturing standards and provided readings within measurement tolerances of ±0.08°C over a temperature range from –10 to +55°C. If a PRS failed to satisfy those tolerances, it was returned to the manufacturer for replacement. As they were also used for wet-bulb thermometry, the PRS were batch-tested to ensure the platinum-resistance element was located in the required position in the probe shaft.
        So the reading of -10.4 was removed because it was no longer in spec for guaranteed accuracy.

  3. Failure to observe the low temperatures had “been interpreted by a member of the community in such a way as to imply the bureau sought to manipulate the data record,” Johnson said, according to The Australian. “I categorically reject this ­implication.”

    It was just a lucky coincidence.
    Like the boss winning the company raffle.

    • When climate scientists resort to manipulating data, it tells me they are losing confidence that CO2 can warm temperatures by itself. They obviously feel the need to provide an assist. The political ends justify the means.

  4. Imagine if two thermometers were installed at each location. With some level of redundancy, independent bad readings would be more apparent (even within acceptable ranges) and outlier readings where both match would be more accepted.
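
The redundancy idea above can be sketched briefly. This is an illustrative sketch only: the tolerance, the pairing logic, and the readings are assumptions, not any agency’s actual procedure.

```python
# Sketch of a two-sensor cross-check: with two co-located sensors, a reading
# is treated as confirmed when both agree within a tolerance, even if it lies
# outside the climatological range; disagreement marks it suspect.
# Tolerance and data are illustrative assumptions.

def cross_check(a, b, tol=0.5):
    """Classify paired readings as 'confirmed' or 'suspect' with their mean."""
    results = []
    for x, y in zip(a, b):
        status = "confirmed" if abs(x - y) <= tol else "suspect"
        results.append((round((x + y) / 2, 2), status))
    return results

sensor_a = [-9.8, -10.4, -10.3, -3.0]
sensor_b = [-9.9, -10.5, -4.1, -3.1]
for mean, status in cross_check(sensor_a, sensor_b):
    print(mean, status)
# The -10.4/-10.5 pair averages -10.45 and is confirmed by both sensors,
# even though it is "too cold"; the -10.3/-4.1 pair is flagged suspect.
```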

    • A man with two watches never knows what time it is. We do have sites with multiple temp sensors; they can be up to 10 degrees different in any given hour. Slight differences in solar exposure make a big difference, plus differences of up to half a degree simply in sensor accuracy. I would rather just have one and say the temperature is ‘this’…

    • Thomas, we have a old saying among statisticians. It goes like this:

      “A man with a wristwatch always knows the time. A man with two wristwatches can never be sure…”

      • Another saying from when I did stats [long, long ago in a faraway country..]:
        statistics is like a bikini – what remains hidden is probably more interesting than what’s revealed.

  5. I can see how administrators could convince themselves that “quality control” was not “manipulation”.

    But still I would say that this is a soft form of negligence, because a reasonable person with reasonable intelligence in their field of expertise should know that the possibility of such temperatures exists, and to deny this possibility in the guise of “quality control” is a form of misrepresentation, hence fraud. It’s fraud used to make oneself complacent in one’s role of carrying out the rules. If the rules are based on fraud, then the actions of those who enforce those rules are based on fraud.

    Fraud, fraud, fraud. Have I said it enough now?

    • If it isn’t fraud, then what are the limits on the highs? What do lower temperature records actually mean? If the record is -10, then maybe that wasn’t really possible, so they should raise the limit to -9.9 C. Hmmmm… it’s fraud!

  6. Is this the AI threat we were warned about recently? I suppose we could miss the next ice age signals with this ‘hand on the dial’ approach.

  7. If you think the sun done it:
    the solar cycle 24 sunspot number for July 2017 in the old money (Wolf SSN) is fractionally down to 11 points, while the new Svalgaard reconstructed number is at 18.3.
    Composite graph is here
    SC24 is nearing what might be a prolonged minimum (with a late start of SC25), but a ‘dead cat bounce’ from these levels cannot be excluded.

    • The southern US is seeing temps and weather fronts that look a lot like the cool summer of 2009 during that solar minimum. Two or three years of that pattern in place of one will be more noticeable.

  8. Sorry, I don’t agree with the supposed ‘tampering’ issue here. We at SNOTEL use exactly the same process. Since we collect hourly data from 900 stations, we have to rely on computer programs as an initial filter to screen out howlers and screamers – bad data. We set an arbitrary limit that alerts us when a particular parameter is near or exceeding a record – high or low – because that’s stuff you want to know and be aware of. And if it’s outside a reasonable bound, the computer sets it to suspect so it’s not incorporated into various hydrologic models as ‘good’ data. So that value would be removed from ‘public consumption’ until there is a validity check via a neighboring site, an on-site visit, etc. This, to me, seems more like standard protocol in the data collection business than an intent to manipulate data.

    • that alerts us…randy, not the same thing….read their two excuses again..designed and automatically adjusted….faulty equipment…..can’t be both

    • “She analyzed raw temperature data from places across Australia, compared them to BOM data, and found the agency’s data created an artificial warming trend.

      Marohasy said BOM adjustments changed Aussie temperature records from a slight cooling trend to one of “dramatic warming” over the past century.”

      Does your method also turn cooling trends into warming trends?

      Think about this just a little:

      “The temperature dropped to minus 10 (14 degrees Fahrenheit), stayed there for some time and then it changed to minus 10.4 (13.3 degrees Fahrenheit) and then it disappeared,”

      “Bureau’s Chief Executive Andrew Johnson told Australian Environment Minister Josh Frydenberg that the failure to record the low temperatures at Goulburn in early July was due to faulty equipment. A similar failure wiped out a reading of 13 degrees Fahrenheit at Thredbo Top on July 16, even though temperatures at that station have been recorded as low as 5.54 degrees Fahrenheit.”

      If you’re too lazy to think about it, Marohasy has already thunk it out for you:
      “Marohasy, for her part, told reporters that Johnson’s claims are nearly impossible to believe given that there are screen shots that show the very low temperatures before being “quality assured” out.”

      What are we to make of this? What was faulty about equipment that was giving accurate data? I take it that the “faulty equipment” was actually purposely programmed equipment, designed to systematically remove inconvenient data.

      • Some people appear to be missing this very important point: there was nothing wrong with the equipment. The problem arose with the program processing the data. Someone from the BOM wrote that program to prevent temperatures below a threshold being recorded. Is this happening at other stations?

    • Ever heard of current measurements running ‘too high’ and needing to be adjusted down, an approach readily used for historic records?

    • we have to rely on computer programs as an initial filter to screen out howlers and screamers – bad data

      I appreciate your problem with the collection and recording from such a large number of stations but the term “computer programs” flags an alarm with me. The term suggests a set of rules for decision processes and I have to wonder what the bases for the rules are. Are they based on statistics? If so, what form of statistics? Gaussian? Normal? Skewed? Multimodal? Can the decisions determine that an outlier is an accident rather than a genuine reading?

      Are the rules based on physics? If so, what physics? What assumptions are involved? Have those assumptions been rigorously verified?

      In systems in which natural forces predominate I am reluctant to accept the rejection of outliers. To my mind the only acceptable form of modification is to correct demonstrable instrumentation problems such as, say, an inexplicable step change in the output from an instrument. In the shorter term nature does not display step changes (but please don’t challenge me with “earthquakes”—I live in New Zealand!)

      • Temperature data flagging is based on straight normal statistical bounds. We keep all original data. Period. Everything. We don’t estimate and replace any temperature data. If a data point is clearly bad – and all data collection systems based on thermistors and electronic transfer will have occasional clearly errant data (-69.4 or 2021) – we set that to null in our database (no data), which, as previously stated, maintains two traces: original and edited. So anyone can go in and see what the original was and what the edited is.
        When it comes to verification, most would be surprised to know how difficult testing in the field can be. Using a certified thermometer as close to the thermistor as possible and recording the two instruments over a few hours, one can observe differences of 0 to 10 degrees. The only way to accurately certify a thermistor is to remove it and put it in an environmental chamber and run it from bottom to top of scale – one of the reasons we don’t even try to estimate what a value might have been for any given observation.
        We run on the assumption that all data are innocent and valuable until proven guilty.
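
A minimal sketch of the two-trace scheme described above, assuming the simplest possible rule (this is illustrative, not SNOTEL’s actual code; the sanity bounds are made up): the original record is never altered, and the edited trace replaces clearly errant values with null rather than an estimate.

```python
# Two-trace archiving sketch: the original series is kept verbatim; the
# edited series nulls out physically impossible values (outside a sanity
# bound) instead of estimating replacements. Bounds are illustrative.

def make_edited_trace(original, lo=-60.0, hi=60.0):
    """Return an edited copy with impossible values set to None; the
    original list is left untouched."""
    return [v if lo <= v <= hi else None for v in original]

# Using the errant values mentioned above (-69.4 and 2021):
original = [-9.8, -69.4, -10.4, 2021.0, -9.6]
edited = make_edited_trace(original)
print(original)  # [-9.8, -69.4, -10.4, 2021.0, -9.6]
print(edited)    # [-9.8, None, -10.4, None, -9.6]
```

The key design point is that both traces survive: anyone can compare the edited series against the untouched original, which is exactly what critics say is missing from records where values are silently adjusted or removed.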

      • Randy,
        Your procedure sounds fine to me, but that’s not what the BOM was doing. And that’s the problem.

    • I don’t see a problem with red-tagging data that might be suspect. A bad platinum RTD might throw a reading of “999.99”, so clearly something must be done. The better approach, in my thinking, is to leave the data up, displayed in a color indicating that it is under review. After the QA/QC process, a note indicating the result could be added to the final reading. In the meantime, if other users are scraping the data from your site for their own use, maybe you return a “no record” result for that time slot until the final determination is made.

    • Randy

      Sorry, but this Aus citizen thinks it’s a fraud. Goulburn, where this occurred, is one of Australia’s coldest inland places, and I can certainly recall over many years times when the temperatures were below -10 C. Setting a lower limit there would meet the “this temperature’s dodgy” needs you mention in your post without artificially creating a warming trend by deleting low temperatures, even though they do occur. By the way, we have had a very cold winter here so far in my area in the ACT, with severe frost most mornings in July. No trumpeting of the “cold weather” because it doesn’t fit, although it is probably also due to a very dry winter so far. My non-meteorological observation is that dry weather usually means the winter is colder here in Canberra.

    • “alerts us when a particular parameter is near or exceeding”

      Alerts, yes. Flags data as suspect, ok. But changing data to something else? That’s unconscionable.

  9. Add it all up and the Earth is probably already in a slight cooling trend since at least 2003.

    On a side note, it appears there are signs of a slight La Nina forming:


    • RW Turner,

      what you offer is obviously spurious data from faulty instruments and really should have been taken off line as soon as the Director of Climate Control was made aware. S/he will doubtless be sent for counselling in any case, as El Nino/La Nina is a male-invented dichotomy used to finagle the truth about the relentless and almost uniform progress of the global warming disaster hitherto created by the male-dominated economic and political regime now crumbling before the reality of a feminist-influenced reality.

      The Director of Public Persecution will be in touch……meanwhile reflect on the truthiness of the Oz BoM data after it is purified.

  10. “The bureau’s quality control system, designed to filter out spurious low or high values was set at minus 10 minimum for Goulburn which is why the record automatically adjusted,” a bureau spokeswoman told reporters

    From a media flack, OK.
    Then there is this:

    Bureau’s Chief Executive Andrew Johnson told Australian Environment Minister Josh Frydenberg that the failure to record the low temperatures at Goulburn in early July was due to faulty equipment.

    Somebody is not telling the truth.

    What on earth is a “spurious” value in a linear measurement system, anyway? Either your equipment is functioning correctly or it is not. If your equipment is functioning correctly, you do not get to pick the data you like and discard the rest. If your equipment is broken, you fix it. Again, you do not get to pick and choose what data you like from a broken system.

    • Good point – faulty equipment would give an erroneous reading at all times and would be obvious at all times, not just when the temperature dropped below an inconvenient point.

      • RWturner,
        No, faulty equipment can have intermittent failures. I’ve seen it many times in electronic systems. All it takes is a cold solder joint that has thermally cycled enough to crack, and/or started to oxidize. Or a cracked diode. There are many things that will appear to work fine at certain temperature ranges, but outside that they begin to fail. And they might not be very consistent even then. Lots of non-linear effects going on.

      • I’ve never seen electronic hardware intermittently fail due to physical effects caused by external temperature. Electronics can overheat, then cool off, and work fine after that, sure. But solder, diodes, or capacitors fail intermittently due to cold? Never witnessed that. Once they fail, they fail in my experience. Perhaps there are complex problems that arise in extreme cold that I’m not familiar with, but certainly much colder than a few degrees below freezing.

    • Clearly you never worked in climate ‘science’, where a perk of the job is you do get to pick the data you like and discard the rest. Hell, you can even just make it up and still be fine, as long as you get the ‘right’ results; no one in that profession gives a damn how you got them.

      • clearly you never worked in climate ‘science’

        True, very true.
        I have always had real jobs, where what I built, hardware and software, had to work, and work correctly.
        It is harder to do things this way, but people do insist, and they can be fussy about it.

        I have been following the Climate Wars for a long time, yet I never cease to be amazed at how even the most basic precepts of data integrity are so flagrantly ignored.

      • @TonyL

        Ahhh, yes. To quote Dr. Ray Stantz: “I’ve worked in the private sector. They expect results.”

  11. “I have taken steps to ensure that the hardware at this location is replaced immediately,”
    What does that mean? Is it equipment failure that measured too low, or will the new equipment limit itself to -10?
    Before they change the equipment, it would be wise to check it and tell whether it actually failed.

    • I imagine the equipment has already been crushed into a tiny cube. These types of instances seem to be the only time that bureaucracies fail to follow proper procedures – like Clinton just accidentally destroying so many devices and hard drives after the FBI asked for them.

    • Yes Svend. One of the many missing pieces of information is whether the equipment self-checks and how it records and responds to self-check failures.

      The letter from the BOM to the minister provides no details whatsoever about how the failure was detected, other than that the equipment “stopped recording” – which raises the question of where the numbers on the screen capture came from in the first place.

      Bollocks and more bollocks, but hey it’s good enough for government work.

    • Anyone who takes official measurements with instrumentation has a certified calibration program in place that must keep periodic records on the performance of all instruments to specification and accuracy against independent standards. Any malfunction or suspected malfunction of an instrument is immediate cause for shutdown and return to the calibration agency where it is impounded and retested. Out Of Tolerance reports are generated and the performance trends of instrument populations are followed as well.

      This should be easy to check. If they don’t have these records available, then their data is not data anyway.

  12. Funny how these ‘mistakes’ always end up favoring ‘the cause’. With luck like that, they are wasting their time being climate ‘scientists’ when they could be hitting the tables in Vegas.

    • Casino owners employ a lot of statisticians to keep detailed data on winnings and losses, for two main purposes: to ensure that the odds stay in favor of the house (otherwise they’re losing money) and to find potential cheaters (who are winning more than they “should”).

      So someone with a climate scientist’s run of luck would quickly be interrogated by security and/or escorted off the grounds. ;]

  13. We the Government set the low limits for a temperature, and no damn instrument has the authority to override our decisions.

  14. This reminds me of a data set I was playing with recently.

    I was curious what the difference was between calculating a daily average as (H+L)/2 and taking the average of all observations in a day.

    I have to dig up the code, but I think the average difference for each day over a decade was roughly 0.3 ± 1.5 F between the two methods.

    In the case where the daily average is computed from all observations, ignoring a seemingly wrong value would have a much smaller effect on the result.

    • I would imagine that (H+L)/2 gives you only the midrange, not necessarily the average, which can fall above or below it.
      Like for this set:
      15.1
      16.3
      17.4
      18.1
      19.1
      (H+L)/2 would be (15.1 + 19.1)/2 = 34.2/2 = 17.1,
      while the average is 17.2.

      • Throwing out the low in this case would give an average of 17.725 –
        about half a degree warmer than the 17.2 average that includes the low reading.
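
The arithmetic in this sub-thread can be checked with a short script. Note that (H+L)/2 is the midrange, which for this data set differs from both the median and the mean (the five values below come from the example above):

```python
# Verify the worked example: midrange vs mean vs median, and the effect
# of discarding the lowest reading.

obs = [15.1, 16.3, 17.4, 18.1, 19.1]

midrange = round((min(obs) + max(obs)) / 2, 3)        # (H+L)/2
mean_all = round(sum(obs) / len(obs), 3)              # average of all readings
median = sorted(obs)[len(obs) // 2]                   # middle value
mean_no_low = round(sum(obs[1:]) / (len(obs) - 1), 3) # low reading discarded

print(midrange, mean_all, median, mean_no_low)
# 17.1 17.2 17.4 17.725
```

Discarding the low reading shifts the mean from 17.2 to 17.725, which is the asymmetric-rejection effect being discussed: dropping only cold values warms every derived statistic.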

      • Surely we are interested in heat flow, so the temperatures should be in absolute kelvin; you get a better perspective on it then, so as to comprehend the real physical processes. Arbitrary scales of C or F may hinder proper understanding. One can always convert back to C/F later for the non-scientists.

      • C is far less arbitrary than F, though. 1 C and 1 K are the same animal; the only difference is the zero-point location.

    • There are min-max thermometers, which record a min (L) and a max (H). Most historical data have been recorded with min-max thermometers, read once a day. If you switch to an average of 1,000 observations per day, you have a different animal.

      • True, but if you record 1,000 observations a day you can still pick out the min and max for a particular 24-hour period; it then is the same animal and can be compared to the historic data.

        I assume this is how it’s being done. You get the high-quality data for accurate weather forecasting but can still use it with historic data for long-term trends.

        I mean, it would be stupid to limit ourselves to the data quality levels of a hundred years ago, wouldn’t it?

        ~¿~

  15. Speaking from experience, this idea of readings being either acceptable or outliers reminds me of the “normal ranges” that pathology laboratories have established to assess blood test results. Results are deemed within the “normal range”, outside it, or borderline, but that range is only what the laboratory has found, through tests conducted over time, to be normal for the average person. For some people, “normal” may lie outside that range, and if on occasion their test falls within it, rather than indicating that all is well, it is in fact indicating that something is not right. The same applies to blood pressure readings.

    • Indeed Kalsel. What is being recorded by the BOM, however, is a quantity which changes over time. Assuming that properly functioning equipment would have recorded a low of -10.4, there would have been a sequence of recorded numbers going down through -9.0, -9.1, … , -10.3, -10.4. That makes the situation different to pathology where a result of -10.4 would be an outlier to a normal range of 4+/-1.

      The question is where the error occurred and how the BOM says it knows which numbers are spurious. I think the explanation by the BOM to the minister that the equipment “stopped recording” can only be explained by the explainer not understanding what he was explaining. In short, it is bollocks.

  16. Jennifer Marohasy deserves the hat/tip for this. She outed the BOM on this and has outed them on earlier occasions for “modifying” the past.
    She is a true heroine of this – rigorously devoted to the science and the truth, and paying in her professional life the price of being an honest and vocal skeptic.

    • Absolutely true re Jennifer. She has always called out the crap and paid a significant penalty for it, I believe. But eventually we will see through all this stuff and she will be recognised as a heroine. Meanwhile, we will get suitably adjusted data to reflect the world they want rather than the world we actually have.

  17. The essay ‘When Data Isn’t’ provides a number of examples from Australia, including, most famously, Rutherglen, also exposed first by Jen Marohasy. In the case of Darwin and Rutherglen, it appeared to be faulty homogenization, with the bad contaminating the good. Regional expectation is always logically flawed if comparison stations are of dissimilar quality or circumstances. For a concrete example, see footnote 26 on BEST 166900.

  18. Bureau’s Chief Executive Andrew Johnson told Australian Environment Minister Josh Frydenberg that the failure to record the low temperatures at Goulburn in early July was due to faulty equipment.

    “I have taken steps to ensure that the hardware at this location is replaced immediately,” he added. “To ensure that I have full assurance on these matters, I have actioned an internal review of our AWS network and associated data quality control processes for temperature observations.”

    Josh Frydenberg should be asking (in no particular order):
    1. how do you know which equipment is faulty before any testing? Was it the sensor, cabling, the recording equipment, ???
    2. if the temperature monitoring equipment is to be (arbitrarily?) replaced immediately, what steps are in place to reconcile readings between the old and new equipment? It is not possible to run dual readings to cater for slight differences between the new and the old, as the old is (apparently) faulty. The magical, secretive homogenising process rears its ugly head here.
    3. knowing that this problem is faulty equipment, why is it necessary to action an internal review of the entire system? This seems like overkill for a known hardware fault at one location.
    4. if the problem is faulty equipment, why did the reported data change? – “The temperature dropped to minus 10 (13 degrees Fahrenheit), stayed there for some time and then it changed to minus 10.4 (14 degrees Fahrenheit) and then it disappeared,”
    5. what temperature will be recorded into the ‘official’ records for the moment in question?
    6. how long has the equipment been faulty, how can you determine this with any certainty and how many records for this site will be adjusted or discarded due to the faulty equipment?

    Frydenberg should ensure that the answer to the next questions is a categorical YES:
    7. will the internal review be overseen by any external authority or expert panel to ensure no cover-ups?
    8. will the internal review be made public?
    9. will this spokeswoman lose her job, either for giving misinformation about this issue (as she is at odds with the bureau chief) or for letting the general public know (if the bureau chief is wrong) that there is a system in place to limit the recording of temperature data?

    “The bureau’s quality control system, designed to filter out spurious low or high values was set at minus 10 minimum for Goulburn which is why the record automatically adjusted,” a bureau spokeswoman told reporters Monday. BOM added that there are limits placed on how low temperatures could go in some very cold areas of the country.

    10. if (when) the internal review reveals that there is an arbitrary lower limit on temperature ‘data’, will the bureau chief be asked to resign for lying to the minister and the public, in that he has claimed, with some certainty, that the issue is faulty equipment?

    No doubt smarter people than me can think of many more questions, both about this incident and also about the entire process.
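Question 2 above alludes to the standard “parallel observation” practice for changing instruments: run the old and new sensors side by side for an overlap period and estimate the systematic offset between them before splicing the records. A minimal sketch with made-up numbers (not BOM data or BOM code):

```python
# Hypothetical overlap-period readings from an old and a new sensor
# taken at the same moments, in degrees Celsius.
old_c = [12.1, 8.4, 15.0, 9.9, 11.2]   # outgoing instrument
new_c = [12.4, 8.8, 15.3, 10.2, 11.6]  # replacement instrument

# Estimate the mean systematic offset of the new sensor relative to
# the old one; this offset would be used to reconcile the two records.
offsets = [n - o for n, o in zip(new_c, old_c)]
mean_offset = sum(offsets) / len(offsets)

print(round(mean_offset, 2))  # 0.34
```

The commenter’s point is that this only works when the old instrument is still trusted; if it is declared faulty, there is no clean overlap to calibrate against.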

    • All good questions John. I have one more. Given that the official explanation is that the equipment “stopped recording”, where did the numbers posted on the BOM web site come from before they were altered and then erased?

      • “The temperature dropped to minus 10 (13 degrees Fahrenheit), stayed there for some time and then it changed to minus 10.4 (14 degrees Fahrenheit) and then it disappeared,”

        “The bureau’s quality control system, designed to filter out spurious low or high values was set at minus 10 minimum for Goulburn which is why the record automatically adjusted,” a bureau spokeswoman told reporters Monday. BOM added that there are limits placed on how low temperatures could go in some very cold areas of the country.

        … Wait a minute.

        If I’m reading that correctly, they are saying that they have the system rigged so that if the temperature goes below -10C it only records it as -10C, and the fault they are replacing the equipment for is that it (briefly) showed a temp lower than that.

        They aren’t the least bit apologetic that they have their metaphoric thumb on the scales, they are only worried because for a moment it became visible.

        >¿<

      • Schiztree that was the first explanation. The second explanation was that the equipment stopped recording. The obvious next question is where the numbers on the BOM web site came from.

        Somebody at the BOM hasn’t thought their excuses out very well.

    • One more question. If the equipment is faulty, then why is any of the data from that equipment accepted? How long was it faulty and how much data from that site should be thrown down the memory hole?
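The quality-control behaviour debated in the replies above can be made concrete. A minimal sketch in Python (the function names and threshold handling are hypothetical, not BOM code), contrasting a filter that silently clamps a reading at the limit with one that keeps the raw value and merely flags it for review:

```python
QC_MIN_C = -10.0  # the lower plausibility limit described for Goulburn

def clamp_reading(raw_c: float) -> float:
    """What commenters suspect happened: values below the limit are raised to it."""
    return max(raw_c, QC_MIN_C)

def flag_reading(raw_c: float) -> tuple[float, bool]:
    """An alternative design: keep the raw value, mark it suspect for review."""
    return raw_c, raw_c < QC_MIN_C

print(clamp_reading(-10.4))  # -10.0  -- the raw low is lost
print(flag_reading(-10.4))   # (-10.4, True)  -- the raw low survives, flagged
```

The objection in the thread is precisely that the first design discards data irreversibly, while the second preserves it alongside the quality flag.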

  19. This is equivalent to the statements of national and provincial leaders in various countries saying they have no gay people there. And there are no freezing temps in Australia either.

  20. Another major difficulty with believing data from the BOM are ‘raw’ is this: all temperature data in Australia are ONE SECOND READINGS, so a spike due to instrument error is recorded as the day’s maximum, unlike the rest of the world where typically 5 minute averages are used.
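The sampling-interval point can be illustrated with made-up numbers (not BOM data): a single one-second instrument spike becomes the period’s maximum under instantaneous sampling, while a five-minute average barely registers it.

```python
# 300 one-second readings (five minutes) at a steady 20.0 C,
# with a single 35.0 C instrument spike injected.
readings = [20.0] * 300
readings[150] = 35.0

max_one_second = max(readings)                    # the spike IS the maximum
five_minute_mean = sum(readings) / len(readings)  # the spike barely moves the mean

print(max_one_second)               # 35.0
print(round(five_minute_mean, 2))   # 20.05
```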

  21. The Grand Project is to make 2017 appear ‘the hottest year ever’ for Global Warming, and the BOM is being paid to assist in this by limiting low temperatures and adjusting high temperatures even higher. Routinely the past temperatures are adjusted lower and current temperatures are reported higher. As a result of this BOM fraud’s success, the CSIRO is now emboldened to call for the recruiting of even more Climate Scientists to assist in this giant fraud under the guise of improving ‘Models’ for future planning. Their ‘Models’ have never worked and never will as they are based solely on the action of CO2, a trace gas.

    • Yes DJ. Surely Mossshhher the magnificent and Stokes the incredible will be along shortly to tell everybody to stay calm, believe every different version of the BOM’s explanations, and carry on.

  22. The temperature dropped to minus 10 (13 degrees Fahrenheit), stayed there for some time and then it changed to minus 10.4 (14 degrees Fahrenheit) and then it disappeared…

    That didn’t make sense to me, and no wonder. Those C-to-F conversions are wrong.
    -10.0° C = 14.0° F (not 13° F)
    -10.4° C = 13.3° F (not 14° F)
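For reference, the standard conversion, which confirms the corrected figures above:

```python
def c_to_f(c: float) -> float:
    """Convert degrees Celsius to degrees Fahrenheit."""
    return c * 9 / 5 + 32

print(c_to_f(-10.0))            # 14.0 (not 13)
print(round(c_to_f(-10.4), 1))  # 13.3 (not 14)
```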

  23. The BOM’s ENSO graph jumped down into La Nina territory at the same time they were caught trimming off cold temperatures, after sitting in a holding pattern at threshold El Nino for months. Coincidence? You get too many of these and trust is gone.

  24. Can we expect a new hockey stick graph? Too cold, you say? That cold is just science-fictionally impossible and must be ignored. Does “hide the decline” ring a bell?
    When Antarctica continues to grow, why is it surprising Australia doesn’t get side-swiped?
    How did a copy of the NOAA manual of temperature “adjustments” end up in Aussie hands?

  25. Min temp for Goulburn below -10 rings an alarm bell? I suppose that’s plausible, but for Thredbo, which is above the snow line, -10 as a lower limit is going to take some ’splaining.

    • Yes, and if you believe the current BOM mannsplaining the equipment is not fit for purpose. What dummy approved it then?

      Covering up the cover up might just be trickier than the BOM top bananas thought.

  26. “Marohasy, for her part, told reporters that Johnson’s claims are nearly impossible to believe given that there are screen shots that show the very low temperatures before being “quality assured” out.”

    Heck, one could even lose their job or university tenure for something like this.

    Ah just like the good Dr Marohasy did!!!

  27. Surely the solution to this, or fix, is to use fault-tolerant weather stations, e.g. with three thermometers in close proximity. When all are working, take the average. When one diverges wildly, take the average of the other two. If one thermometer consistently reads low, replace the instrument or station.
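That voting scheme can be sketched in a few lines of Python (the divergence threshold is an assumption, purely for illustration):

```python
DIVERGENCE_C = 2.0  # assumed threshold for "diverges wildly"

def fused_reading(a: float, b: float, c: float) -> float:
    """Average three co-located thermometers, rejecting a lone outlier.

    A sensor is treated as an outlier only if it is far from BOTH of
    the other two; in that case the other two are averaged instead.
    """
    temps = [a, b, c]
    for i, t in enumerate(temps):
        others = [x for j, x in enumerate(temps) if j != i]
        if all(abs(t - o) > DIVERGENCE_C for o in others):
            return sum(others) / 2
    return sum(temps) / 3

print(fused_reading(-10.2, -10.4, -10.3))  # all agree: mean of three
print(fused_reading(-10.2, -10.4, -3.0))   # -3.0 rejected: mean of other two
```

Detecting a sensor that drifts slowly rather than diverging wildly would need a longer-term comparison, as the comment notes.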

  28. BOM AND CSIRO SCIENTISTS ARE LUCKY THEY BELIEVE IN THEMSELVES, AS THE AVERAGE AUSTRALIAN DOESN’T BELIEVE THEM

Comments are closed.