The ‘trick’: How More Cooling Generates Global Warming

From the “we’ll fix that in post” department comes this from down under courtesy of Dr. Jennifer Marohasy.

COOLING the past relative to the present has the general effect of making the present appear hotter – it is a way of generating more global warming for the same weather.

The Bureau of Meteorology has rewritten Australia’s temperature history in this way for the second time in just six years – increasing the rate of warming by 23 percent between Version 1 and the new Version 2 of the official ACORN-SAT temperature record.

The Rutherglen research station in rural Victoria is one of the 112 weather stations that make up ACORN-SAT. Temperatures have been changed here by Blair Trewin, under the supervision of David Jones at the Bureau.

Dr Jones’s enthusiasm for the concept of human-caused global warming is documented in the notorious Climategate emails, including an email to Phil Jones at the University of East Anglia Climatic Research Unit, dated 7 September 2007, in which he wrote:

“Truth be known, climate change here is now running so rampant that we don’t need meteorological data to see it.”

We should not jump to any conclusion that support for human-caused global warming theory is the unstated reason for the Bureau’s most recent remodelling of Rutherglen. Dr Jones is an expert meteorologist and an honourable man. We must simply keep asking,

“What are the scientifically valid reasons for the changes that the Bureau has made to the temperature records?”

In 2014, Graham Lloyd, Environmental Reporter at The Australian, quoting me, explained how a cooling trend in the minimum temperature record at Rutherglen had been changed into a warming trend by progressively reducing temperatures from 1973 back to 1913. For the year 1913, there was a large difference of 1.7 degrees Celsius between the mean annual minimum temperature, as measured at Rutherglen using standard equipment at this official weather station, and the remodelled ACORN-SAT Version 1 temperature. The Bureau responded to Lloyd, claiming that the changes were necessary because the weather recording equipment had been moved between paddocks. This is not a logical explanation in the flat local terrain, and furthermore the official ACORN-SAT catalogue clearly states that there has never been a site move.

Australians might nevertheless want to give the Bureau the benefit of the doubt and let them make a single set of apparently necessary changes. But now, just six years later, the Bureau has again changed the temperature record for Rutherglen.

In Version 2 of ACORN-SAT for Rutherglen, the minimum temperatures recorded in the early 1900s have been further reduced, making the present appear even warmer relative to the past. The warming trend is now 1.9 degrees Celsius per century.

The Bureau has also variously claimed that they need to cool the past at Rutherglen to make the temperature trend more consistent with trends at neighbouring locations. But this claim is not supported by the evidence. For example, the raw data at the nearby towns of Deniliquin, Echuca and Benalla also show cooling. The consistent cooling in the minimum temperatures is associated with land-use change in this region: specifically, the staged introduction of irrigation.

Australians trust the Bureau of Meteorology as our official source of weather information, wisdom and advice. So, we are entitled to ask the Bureau to explain: If the statements provided to date do not justify changing historic temperature records, what are the scientifically valid reasons for doing so?

The changes made to ACORN-SAT Version 2 begin with changes to the daily temperatures. For example, on the first day of temperature recordings at Rutherglen, 8 November 1912, the measured minimum temperature was 10.6 degrees Celsius. This measurement was changed to 7.6 degrees Celsius in ACORN-SAT Version 1. In Version 2, the already remodelled value is changed again, to 7.4 degrees Celsius – a further cooling of 0.2 degrees Celsius.

Considering historically significant events, for example temperatures at Rutherglen during the January 1939 bushfires that devastated large areas of Victoria, the changes made to the historical record are even more significant. The minimum temperature on the hottest day was measured as 28.3 degrees Celsius at the Rutherglen Research Station. This value was changed to 27.8 degrees Celsius in ACORN Version 1, a reduction of 0.5 degrees Celsius. In Version 2, the value is reduced again, to 25.7 degrees Celsius – a full 2.6 degrees Celsius below the measured temperature.

This type of remodelling will potentially have implications for understanding the relationship between past temperatures and bushfire behaviour. Of course, changing the data in this way will also affect analysis of climate variability and change into the future. By reducing past temperatures, there is potential for new record hottest days for the same weather.

Figure: Annual average minimum temperatures at Rutherglen (1913 to 2017). Raw temperatures (green) show a mild cooling trend of 0.28 degrees Celsius per 100 years. This cooling trend has been changed to warming of 1.7 degrees Celsius per 100 years in ACORN-SAT Version 1 (orange). These temperatures have been further remodelled in ACORN-SAT Version 2 (red) to give even more dramatic warming, now 1.9 degrees Celsius per 100 years.

184 thoughts on “The ‘trick’: How More Cooling Generates Global Warming”

  1. Every time someone mucks with the data, they are lying.

    Always.

    The data are the data, and are not to be mucked with.

    • One of the great aviation engineer geniuses of the age is Burt Rutan. He’s used to analyzing flight data like people’s lives depend on it, which they do. He once observed that, if he saw someone over-analyzing the data, he knew the analysis was bunk.

      For decades, as a professional experimental test engineer, I have analyzed experimental data and watched others massage and present data. I became a cynic. My conclusion: “if someone is aggressively selling a technical product whose merits are dependent on complex experimental data, he is likely lying”. That is true whether the product is an airplane or a Carbon Credit.

      Actually adjusting the data is one worse. Like you say, the data is the data.

      The standard for lab log books hasn’t changed in my lifetime. If you want to change something, you neatly cross out the old version so it is still legible. Every first year science and engineering student learns that. In that regard, the BOM hasn’t risen to the standard of the average freshman.

      Rutan’s Engineer’s Critique of Global Warming ‘Science’ is impressive.

      • Just wait…
        They utilized a neighboring dataset to alter this dataset so it would agree with its neighbor…
        Next they will alter other datasets a little further away to agree with this one…
        And the alterations will forever creep outwards until Uluru indicates it should be snowing there in December 1915…

      • I suspect that they have been comparing the Rutherglen data to the Melbourne data, which is 200 km south.
        Melbourne’s minimum temperatures are warmer due to the warmer ocean temperatures, and of course the Urban Heat Island Effect.
        Melbourne has shown warming since 1958.

        • You don’t have to suspect, you can read from their document. 23 stations listed as comparative stations (Melbourne is not there).

          All the changes are public and original data is also available. Adjustments are documented.

      • Even worse. Modern airliners execute continuous data analysis of that many parameters and more, which is why airlines prioritize data-based drills and actions instead of what steam-gauge grumpy old captains called airmanship.

      • When I was at Uni the meteorology courses were taught by the geography dept within the Humanities faculty, and climate studies were an integral part of all geology courses in the Science faculty. So I’m not surprised BOM are a bit loose with the concept of what science is, as they aren’t scientists.

        • You aren’t suggesting that geologists aren’t scientists, are you?

    • Just another reason to present absolute temperatures, Kelvin I guess.
      Let the observer cherry pick the 30 year period to present the anomaly.

      Let’s use 1915 to 1945 for our baseline, huh?

      Gums sends…

    • I’d rephrase that a little bit. Every time someone “adjusts” data without some kind of a test to verify that the adjusted data is more accurate than the old data, they are guilty of fabricating data.

      If I’m processing a received noisy video signal and I theorize that I can improve the quality of the video by taking the incoming noisy data and applying a low pass filter through it, I can test the before-and-after effect of the low pass filter to verify that the adjustments improve the picture.

      But if all I’m doing is theorizing that the raw data has some bias to it, and implementing some purely theoretical procedure to remove the bias, without any objective means to test whether my procedure over-compensated for the bias, then I think this is plain old data fabrication. And what happens, for example, when the adjustment procedure, say, is designed to eliminate a theorized bias in a linear trend, but in the process introduces a spurious exponential component to a time series that exaggerates acceleration of the change?
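      A minimal sketch of that before-and-after test (Python, synthetic signal; the moving-average filter and noise level are my own assumptions, not anything from the comment):

import numpy as np

rng = np.random.default_rng(0)

# Synthetic "true" signal and a noisy received version of it
t = np.linspace(0, 1, 1000)
true_signal = np.sin(2 * np.pi * 5 * t)
noisy = true_signal + rng.normal(0, 0.5, t.size)

# Simple low-pass filter: moving average over a 21-sample window
window = 21
filtered = np.convolve(noisy, np.ones(window) / window, mode="same")

# Objective before/after test: error against the known true signal
def rmse(x):
    return np.sqrt(np.mean((x - true_signal) ** 2))

print(f"RMSE before filtering: {rmse(noisy):.3f}")
print(f"RMSE after filtering:  {rmse(filtered):.3f}")

      The point of the sketch is only that the filter can be scored against a known truth – which is exactly the test that is missing when historical temperatures are adjusted.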

      • “If I’m processing a received noisy video signal and I theorize that I can improve the quality of the video by taking the incoming noisy data and applying a low pass filter through it, I can test the before-and-after effect of the low pass filter to verify that the adjustments improve the picture.”

        Kurt, I’m not sure this analogy will hold. If you’re looking at a video picture, you often have a pretty good idea of what it “should” look like – you feel that you can correctly judge when the quality has been “improved.” In weather data, do we actually know this? Isn’t that one of the differences between the justifiers of the jiggery-pokery and the rest of us who ask, “Why should you expect the data to conform to your expectations?”

        • I’m not using the video processing example as an analogy to the adjustments performed on raw climate data. Just the opposite; I’m holding it out as a counterexample – a situation in which changing data would be appropriate and would not be data fabrication, and for exactly the reasons you mentioned.

          Conversely, I consider the adjustments to raw temperature data to be data fabrication because, unlike the video processing application, there is no way of testing the efficacy of the adjustments and also, as you note, the procedure is highly susceptible to confirmation bias where the adjuster just changes the data to conform to some preconceived idea of what it should look like.

      • I noticed an inadvertent test of BEST (and GISS v3?). Two isolated stations in Australia had the exact same data by mistake, and despite the records being identical, 200 km apart and with hundreds more from other stations, the algorithms adjusted the temps to create warming. I mentioned it here and that site’s data was removed rather than the whole process revised.

    • I keep reading that this type of jiggery pokery is going on. So why doesn’t somebody – or better yet, a group of people – go public with a direct accusation of fiddling the data?

      • Ref John Harmsworth 2:53pm: individuals and groups have gone public with direct accusations of fiddling the data. I have older climate records for specific sites collected for work purposes eg CRD (climate responsive design), and found values that have been changed, eg level and cooling trends converted to warming, wind speeds correctly recorded by automatic sensors that become retrospectively “broken”, etc. The perpetrators ignore the complaints. I have had threats from local sycophants but they never turn up to make my day, just advise people not to use my services. I can’t take action on that practice, because it has the opposite effect to what is intended. They don’t seem to realise that many people don’t have to have a detailed knowledge of the subject to know they are being lied to.

        • Wanting to prove AGW can lead to a condition known as echochamberia, sadly it can also lead to a windfall for some of its victims.

    • Do you wear glasses?

      Psst. Good thing Romer didn’t know this.
      Psst. Don’t tell Christy or Spencer, UAH is a heap of adjustments.
      Psst. Don’t tell Happer one of his claims to fame is finding ways to adjust. Check out his history.

      • Steve. As usual you allude to adjustments made in other types of systems that have no relevance to the topic under discussion. If you believe all of these adjustments to earlier temperatures were necessary, you would expect them not to all go in one direction. Here in Northern California there was a cooling trend for Santa Rosa and Ukiah until 2011, when the earlier years suddenly became much colder. Surprisingly the maximums (raw) were higher in the earlier years and still are. For Santa Rosa, the first year of the record, 1904, was 15.2C; the adjustment dropped it 1.1C to 14.1C. For Ukiah 15.2C became 14.5C, and the very warm 1925 to 1940 period was cooled about 1C throughout. Conversely for Ukiah, a strong cooling trend starting at 2005 became slight warming. I’m sure that you can explain all these changes, but I doubt such explanations in light of the patterns observed. But please explain away. Science is not to be taken for granted.

        • Steve actually believes that if one set of adjustments is justified, this proves that all adjustments are justified.

    • Precisely. (For Aussies)

      Fish and chip store guy: Global warming or cooling?

      Dorky dude: Both (snap*)

      Cue dancing

  2. Man, the guys at the Australian MET are real pros! NASA and NOAA will have to get busy!!

    • The BoM are unashamed in claiming they have world’s best practice temperature homogenisation.

      • You missed that it is homogenisation with no actual field checks, a fact they note in the methodology.

        • yeah… can’t leave the ivory tower and aircon, can they?
          and errata in the item…
          NO ONE TRUSTS the BoM
          you go look, get a vague idea of what might be happening,
          and then go outside and use your own rain gauge/thermometer for actual ;-)

      • “The Forum noted that the extent to which the development of the ACORN-SAT dataset from the raw data could be automated was likely to be limited, and that the process might better be described as a supervised process in which the roles of metadata and other information required some level of expertise and operator intervention. The Forum investigated the nature of the operator intervention required and the bases on which such decisions are made and concluded that very detailed instructions from the Bureau are likely to be necessary for an end-user who wishes to reproduce the ACORN-SAT findings. Some such details are provided in Centre for Australian Weather and Climate Research (CAWCR) technical reports (e.g. use of 40 best-correlated sites for adjustments, thresholds for adjustment, and so on); however, the Forum concluded that it is likely to remain the case that several choices within the adjustment process remain a matter of expert judgment and appropriate disciplinary knowledge.”

        http://www.bom.gov.au/climate/change/acorn-sat/documents/2015_TAF_report.pdf

        Expert judgement means it can’t be replicated. If it can’t be replicated it can’t be science.

        • “…concluded that very detailed instructions from the Bureau are likely to be necessary for an end-user who wishes to reproduce the ACORN-SAT findings.”

          In plain language: “We aren’t doing real science because our results can’t be reproduced.”

    • ACORN v2 is just an excuse for the next NOAA GHCN adjustment at those stations.
      A game of Leapfrog, to provide cover for the next round of adjustments to GHCN.
      GHCN version 3 was released in 2011. They now need to “update” it in preparation for the AR6 hustle. The BoM’s actions here on ACORN v2 lay the groundwork for NOAA/NCDC to adjust those Australian stations again in a new version 4, in time for everyone else (GISS-LOTI).

      So it’s certainly part of a coordinated effort to move regional temp records to more warming — first. That will then be followed by the global surface datasets, in order to not be so embarrassed when AR6 drafts have to be written in 18 months and the inevitable comparison to the CMIP6 ensemble is made. The imperative is for “observation” to stay within the CMIP6 ensemble’s 90% uncertainty range.

      • Damn it, “the CMIP6 90% ensemble uncertainty” is not statistical uncertainty. They take models with base average global temperatures that vary by over 3 C and combine them. Scientific fraud.

  3. We all know the people back in the 20’s, 30’s and 40’s couldn’t read a thermometer. After all, it is a very complicated device and took years of training to get it right.

    Maybe Steven Mosher can come in here and tell us all how he and his fellow scientists determined most readings were just too low?

    • Besides, the nature of mercury has changed. That is why they had to ban it in meteorological instruments.

    • Interesting point on reading thermometers. I have the old-style mercury thermometer. Even with good eyesight, the resolution isn’t high enough to read the temp closer than within 1 full degree F.

      You’ve got to be impressed by the climate scientists’ skills – that they can pinpoint the temp to within 0.1 of a degree, 80 years later.

      • To say nothing about how they also know the different calibration errors of each instrument. Heck, they could be changing everything to agree with a thermometer that is seriously out of whack. These old thermometers had varying tube sizes and irregularities in the tube that affected calibration. Who knows which one was most accurate and which ones were at the very edge of the accepted calibration.

      • Actually, even with temperatures recorded to the nearest whole number, you can determine the average temperature over a period of time to within 0.1 degrees. You have 365 data points. If you’re interested enough, you can prove it yourself using Excel. Create a column of data to represent true temperature readings, and in the next column round everything to the nearest whole number. Compute the average for each column.
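        A minimal version of that Excel exercise in Python (synthetic values, purely illustrative):

import numpy as np

rng = np.random.default_rng(42)

# 365 synthetic daily values standing in for "true" temperatures (numbers are invented)
true_temps = rng.normal(11.0, 5.0, 365)
rounded = np.round(true_temps)  # the same values recorded to the nearest whole degree

print(f"Mean of exact values:   {true_temps.mean():.3f}")
print(f"Mean of rounded values: {rounded.mean():.3f}")
print(f"Difference:             {abs(true_temps.mean() - rounded.mean()):.3f}")

        On a typical run the two means differ by only a few hundredths of a degree; the replies below dispute what that does and does not imply for real station records.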

        • That’s only valid if you have many thermometers in the same location to average. But in reality, single thermometers measure the temp for a huge area, so it follows that the temperature reading for such and such a place has a huge uncertainty in accuracy and 0.5 C in precision. In sum, you can’t measure the average temperature of something huge like a country or planet. It’s not like measuring the average temp inside a plane or boat, etc., where there are active systems trying to maintain a certain temp.

        • Not as close as you think. I ran the numbers from GHCN Daily for Greer, SC for the year 2011. I took the average of all 365 days in the year, the standard deviation, and the error in the mean calculated as the standard deviation / sqrt(365).

          Using the raw temps with one decimal point in the measurement, the results were

          Avg Temp Std Dev Err. in Mean
          ——– ——- ————
          +17.0 8.6 0.4

          It doesn’t change much by using integers, as you suggested. The 8.6 standard deviation rounds up to 9, but the error in the mean stays at 0.4. Not as precise as one might think.
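          For anyone who wants to repeat that calculation, a minimal sketch (Python; the file name and column are placeholders, and the daily values themselves are not reproduced here):

import numpy as np
import pandas as pd

# Placeholder: one station-year of daily mean temperatures, one value per row
daily = pd.read_csv("greer_sc_2011_daily.csv")["tavg"].to_numpy()  # hypothetical file/column

avg = daily.mean()
std = daily.std(ddof=1)            # sample standard deviation
sem = std / np.sqrt(daily.size)    # error in the mean, std dev / sqrt(N), as used above

print(f"Avg Temp {avg:+.1f}  Std Dev {std:.1f}  Err. in Mean {sem:.1f}")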

          • The average can never be more accurate than the readings, unless you can show that the reading errors are normally distributed and random. This is basic statistics, but obviously ignored to “adjust” the readings!

  4. But they have got it wrong!

    I read and commented just the other day about the fact that “Warming causes Cooling”!

    https://www.livescience.com/3751-global-warming-chill-planet.html
    https://www.livescience.com/3751-global-warming-chill-planet.html

    (I googled “global warming causes global cooling” and then “global cooling causes global warming” and got the same result above 🙂

    https://realclimatescience.com/2018/04/before-extreme-weather-was-caused-by-global-warming-it-was-caused-by-global-cooling/
    https://www.skepticalscience.com/global-cooling.htm

    We cant all be right! Can we?

    Cheers

    Roger

  5. Regarding the changes at Rutherglen, which Minister in the Liberal Government is responsible for the BOM, and does he or she really know what is going on? Or is the Minister a Greenie?

    MJE

    • And will they actually do anything? Going along with “scientists” one happens to agree with is all too common.

    • Hey Michael,

      The Minister is one Melissa Price who just yesterday announced that the recent bush fires here in Australia could be directly attributable to climate change. It provides some direction in terms of how this government is going to handle this issue of global warming going into the next federal election likely in May.

      That motivated me somewhat to get this blog post out … something about Rutherglen and bush fires, that I had in my back-pocket, so to speak.

      The Conservative Morrison government currently ruling Australia knows that the Bureau just makes stuff up, but is not sure about the extent of the deceit.

      Much THANKS TO WUWT for reposting me.

      • Thanks to Jennifer for writing the article in the first place.

        Price is a bit hard to read. On one hand she is/was a Turnbull supporter which isn’t a smart thing to have on your CV when talking with conservatives. She has, it seems, been trying to gain traction with her Environmental Policy in recent days, baiting Australian Labor (left wing) on Twit with it.

        On the other hand she was also the person who accused the President of one of those ‘victims of rising sea levels’ islands of only being at a conference to ask for money and the Guardian in the last few days has been trying to push the line that she is an invisible minister after several environmental groups have complained that she declines to meet with them.

        Not sure. Someone closer to her might like to confirm or deny that she has splinters up her bum, cause I think she is a fence sitter.

  6. Absolutely throws future analysis for a 6. I thought we owed our children and children’s children etc the truth!
    This has to be a breach of ethics?

  7. The old Mark1 eyeballs show a step-function in the adjusted temperature data around/after 1970 at Rutherglen. Has the BOM explained this obvious step up in temperatures at this location? The raw data shows no such step up in temperature recordings at Rutherglen.

  8. It’s garbage. One of the reasons for using averages for lots of data like this is to average out errors. Going through each individual record and changing it should not therefore change the average – unless there is a systematic problem with the data. If there is no systematic problem, you would expect as many upward revisions as downward, and thus the average would stay the same.

    If the average changes, and there is no systematic reason for changing the original data you are most likely ADDING BIAS.

    It is very unlikely that an unbiased process will produce significantly more adjustments that go one way in the absence of a systematic reason why they should go more one way than the other. Thus the changes in the average strongly suggest bias, unconscious or not.
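    A toy simulation of that expectation (Python, invented numbers; nothing here is real station data): zero-mean corrections leave the average essentially unchanged, while corrections drawn mostly from one side shift it.

import numpy as np

rng = np.random.default_rng(1)
record = rng.normal(14.0, 1.0, 1000)  # synthetic station record, arbitrary units

# Case 1: unbiased corrections – as many up as down, zero mean
symmetric = record + rng.normal(0.0, 0.3, record.size)

# Case 2: corrections that systematically cool the values
one_sided = record + rng.normal(-0.3, 0.3, record.size)

print(f"Original mean:           {record.mean():.2f}")
print(f"After symmetric changes: {symmetric.mean():.2f}")
print(f"After one-sided changes: {one_sided.mean():.2f}")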

    • Temperature measurements are not measurements “of the same thing” which can achieve a more accurate “true value” with a normal distribution of errors. A given temperature measurement can not be made “more accurate” by a subsequent reading from later in time. An error budget should be determined prior to any manipulation of a temperature data set. One of the components of a budget is the error in readings. As above, many old temperatures were recorded as integers with an error of +/- 0.5 degrees (at best). Averages of these readings carry forward the same errors as each individual reading. In most cases, this could be assumed to be +/- 0.5 degrees, but a better way is to determine the percent error and carry that forward.

      If you don’t believe me, take temperatures of 55 and 58 degrees F with an error of +/- 0.5. To calculate an accurate average, do you use 55.4 and 57.6 or perhaps 54.6 and 57.7? Another question is how many significant digits should the average have? Two or three?

      • There are hard and fast rules for error propagation and significant digits. I’ve found these sites very helpful:

        https://faraday.physics.utoronto.ca/PVB/Harrison/ErrorAnalysis/index.html

        http://chemistry.bd.psu.edu/jircitano/sigfigs.html

        In the example you gave above, the rules would say this:

        You can’t have more precision in your result than is in the least precise measurement. The average of 55 and 58 degrees, with no decimal part in either value, would be figured as (55+58)/2 = 56.5. Since neither 55 nor 58 have a decimal part, neither would the answer. In scientific rounding, even numbers followed by a five stay at the same number, while odd numbers followed by a five round up. So the answer to the first part would be AVG(55,58) = 56. If there were more values in the series you could use the error in the mean calculation of ΔX/√N, but I’ve never seen it used with only two values.

        Instead it’s done in a two-part process. First you add the two errors in quadrature, i.e., √(ΔX² + ΔY²) = √(0.5² + 0.5²) = √0.5 ≈ 0.7. Dividing by 2 is next.

        According to the rules, when multiplying a measurement with uncertainty you use the fractional, or relative, uncertainty to do the job. In this case the sum is 113 and the added uncertainty is 0.7. Get the relative uncertainty by dividing 0.7/113 = 0.006. That is the fractional uncertainty. Divide by two to get the 56 value from before, and then multiply the fractional uncertainty by the average value to get the new absolute uncertainty: 56*0.006 = 0.336, rounded to 0.3. The final answer, according to the scientific rules of significant digits and error propagation, would be 56±0.3°F.

        That’s how my cipherin’ goes. If I’ve made a mistake somewhere, I sure would appreciate it being pointed out and explained.
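        The same propagation, written out as a short Python sketch (the two readings and the ±0.5 uncertainty come from the example above; the quadrature rule is the standard one for independent errors):

import math

x, y = 55.0, 58.0   # the two readings, degrees F
dx = dy = 0.5       # +/- 0.5 degree reading uncertainty

avg = (x + y) / 2

# Uncertainty of the sum adds in quadrature; dividing by the constant 2
# divides the uncertainty by 2 as well.
d_sum = math.sqrt(dx**2 + dy**2)  # about 0.71
d_avg = d_sum / 2                 # about 0.35

print(f"Average: {avg:.1f} +/- {d_avg:.1f} F")

        Carried through without intermediate rounding this gives roughly 56.5 ± 0.4 °F, in the same ballpark as the figure worked out above.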

        • Here is where you go off track. You are measuring different things so your errors can’t add in quadrature. Take the measurements of diameter of ten apples and ten limes with twenty different instruments, find the average, and calculate the error of the mean. Does your error calculation mean anything?

          With two or three temps of +/- 0.5, what is the true value of the average? Can you make a choice of which actual temps to use for your average? That is, 55.5 or 54.5 for one of the numbers.

          • I’ve seen online articles using these techniques with populations of people to determine average height and such, but that’s not the point anymore. I think the point is how bad the statistics are that are generated.

            Take the daily averages from any month, average them, and determine the uncertainty — it will be in the 0.2–0.5 range, yet the monthly summaries will have two decimal places and no mention of any uncertainty.

            These are the issues that should be emphasized.

    • ” One of the reasons for using averages for lots of data like this is to average out errors.”

      that is not the purpose of “averaging” for temperatures.
      Plus, we don’t average.

      • So who does? Somewhere, somebody is putting out that some years are hotter than others. If this isn’t using averages, what are they using?

  9. If they keep this up much longer the only way they’ll be able to explain the Pilgrims reaching Plymouth will be if they walked across the Atlantic.

  10. Why should we give these data manipulators the benefit of the doubt?

    They just stumbled into this bastardization of the temperature data? You don’t think they know what they are doing? Have these people ever raised past temperatures? Even once? All the adjustments cool the past. None of them warm the past. Sounds like a plan to me. A plan to make it look like things have been getting hotter and hotter for decades. A diabolical plan to sell the CAGW narrative.

    • It IS obvious that all of the sensor movements have created an artificial warming signal in the present data so We’ll correct it by cooling the past…
      makes perfect sense to me
      /sarc

  11. As Mosh likes to tell us, some of the adjustments result in cooling.
    What he never tells us is that all of cooling occurs in the past, while all of the warming occurs in the present.

    There’s no way that could happen by pure chance.

    • What Mosh doesn’t tell us…..is how much they adjusted up….before they adjusted down

      When you adjust up 10…and adjust down 2…..

      yeah, you can say some of the adjustments result in cooling

      • You could just look at all the NCDC station plots. They are online.

        The code is also online on how to go from unadjusted data to adjusted.

        Once upon a time skeptics would kill to get the code used by climate science..

        or hack systems to get it.

        Now, although it’s posted, they generally avoid looking at the code

        much easier to accuse people of fraud when you avoid the best evidence

        here

        ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v3/software/52i/

    • “As Mosh likes to tell us, some of the adjustments result in cooling.
      What he never tells us is that all of cooling occurs in the past, while all of the warming occurs in the present.

      There’s no way that could happen by pure chance.”

      Err, because it’s not true.

      Look at individual series. You will see adjustments all over the place.

      I will link to the charts, but you will just change the topic.

      here is the top of the stack

      ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v3/products/stnplots/1/10160355000.gif

      grab another one

      ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v3/products/stnplots/1/10160461000.gif

      whole stack
      ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v3/products/stnplots/

      Now there are a few thousand charts showing you station by station for GHCN v3 where the adjustments UP and adjustments down are made

      all over the map, past cooled, present cooled, warming here, cooling there.

      your insinuation is that all the cooling is done to the past and this is categorically untrue

      AND you insinuate that this can’t be by chance, implying that folks like me are being dishonest.

      sorry no evidence of that.

      here is the software that NCDC uses

      ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v3/software/

      You can download it, run it and see for yourself

      • The last time I saw a FORTRAN listing was in college about 1974, so I wondered if I would recognize anything. Code comments do not inspire confidence:

        C WARNING – POTENTIAL FOR A MAJOR ERROR IN THE PRECIP CODE HAS BEEN
        C DISCOVERED. THE ALGORITHM MAY BE COMPROMISED DUE TO THE USE OF
        C OF A SHORTCUT FOR CALCULATIONS. IT SEEMS AS IF THE ESTIMATION OF
        C THE CONFIDENCE INTERVAL AND THE CORRECTION FACTOR ARE NOT
        C CALCULATED IN NATURAL LOG SPACE AS IN THE ORIGINAL SPERRY VERSION
        C AT PRESENT THE EFFECTS ARE UNKNOWN, BUT UNTIL FURTHER INVESTIGATION
        C !!!!!! CONSIDER THE CODE CORRUPTED !!!!!!
        C 07JUN01 CW

        [from the file filnet_subs.v4p.f]

  12. The Bureau of Misinformation has considered all criticisms.
    And dismissed them.
    The residents of Rutherglen have also been informed.

    All in all dead shit science !

    • And this minister has just attributed the latest bushfires in Victoria to climate change.
      Wonder how she explains Black Sunday 1926, Black Friday 1939, Ash Wednesday 1983?

      • She knows she is extremely unpopular as environment minister especially over the Adani coal port in Queensland. She is trying to save her seat.

        Unfortunately, it seems most Australians have fallen for the propaganda hook, line and sinker. They don’t see problems using 112 devices to calculate a national average temperature. They don’t see an issue with device siting. They turn a blind eye to the blatant corruption of the data by the data keepers. This list goes on and on…

  13. Data is data, I would tend to agree.

    What comes out of an adjustment of data is no longer data. Rather, it is synthetic output.

    If “adjusting” needs to be done, then it is in the EXPLANATION of the data. The explanation of the data should discuss the data’s possible limitations and shortcomings. But the data is the data. Period.

  14. Isn’t it obvious ? Homogenize the southern hemisphere temps and then show …. CAGW !
    Not all that many SH stations …. one temp to control them all ! Hockey stick !

  15. If their scientific rationale for “adjusting” these temperatures doesn’t stand up to even basic scrutiny then the only remaining reason is that the fix is in and approved by all involved if there were no arguments against. They should all be dragged into the sunlight and fired straight up!

  16. The thing is – as long as they CAN get away with it, they will keep on doing it.
    Is there no way WUWT can crowdfund a lawsuit to sue the BOM for damages, or
    is there not ONE politician that’s honest who can demand answers?

    • The problem here is that anyone – and especially politicians – perceived to have an opposing view is shouted down as a ”denier”, anti-science and a right-wing dinosaur. So politicians who clearly do not go along with this hysterical nonsense dare not admit it. It shows that most are basically either gutless or do not have the confidence to confront the warming claims head on. This in turn stems from the systematic suppression of pertinent information.
      It demonstrates the very pressing need to have someone in power to set up a small investigative committee to carefully check all contrary claims such as have been highlighted by the various sceptical scientists and, having found the inconsistencies (which they will do), set up a more potent inquiry such as a Royal Commission in which all relevant players are bound by law to front.
      The battle to convince the authorities will be difficult. Even just yesterday we had a QC (prominent lawyer) announce he has joined the Greens party, and in an interview on radio he claimed climate change to be the ”most urgent problem facing humanity”, or words to that effect. The response from the host – who enjoys a very large audience – was ”There is no doubt about that.”
      These people who think they are the enlightened and righteous ones are living in ignorance and darkness. Very sad.

      • “The problem here is that anyone – and especially politicians – perceived to have an opposing view is shouted down as a ”denier”, anti-science and a right-wing dinosaur. So politicians who clearly do not go along with this hysterical nonsense dare not admit it. It shows that most are basically either gutless or do not have the confidence to confront the warming claims head on.”

        I think the statement that politicians who are closet skeptics “do not have the confidence to confront the warming claims head on.” is the answer in most cases.

        You know that as soon as a skeptic tries to make a case against CAGW, the alarmists will roll out the “97 percent consensus” and all sorts of studies supporting their claim, and how is your average person, who doesn’t study this stuff every day, going to counter those arguments? They certainly won’t be able to counter all the claims in real time. Even a know-nothing can cite studies supporting CAGW. It takes someone with a little knowledge of the subject to counter those claims, and most politicians, or most citizens for that matter, don’t have that detailed knowledge of the subject.

        Therefore, most skeptic politicians keep their mouths shut.

        Fortunately, we have Trump to do the debunking of the CAGW narrative. 🙂

  17. The most socially alarming part is that they get away with it, rewriting history that is.

    • I can only speak to the methodology.

      There are, broadly speaking, two approaches to doing adjustments.

      A. Bottoms up
      B. Top down.

      Bottoms up works like this. A human, sometimes with the aid of an algorithm and field tests, looks at the data and tries to reconstruct what should have been measured. Let’s take a simple example.

      You have an old data series, say 1900 to current. You look at the metadata and find a record that they changed from sensor A to sensor B. You then run side-by-side field tests of sensor A type versus sensor B type. You find out that sensor A has a warm bias of .5C.

      studies like this

      https://www.researchgate.net/publication/252980478_Air_Temperature_Comparison_between_the_MMTS_and_the_USCRN_Temperature_Systems

      https://www.researchgate.net/publication/249604243_Comparison_of_Maximum_Minimum_Resistance_and_Liquid-in-Glass_Thermometer_Records

      So your field work shows you that when you replace sensor A with sensor B you can expect a .5C artificial cold shift, or whatever. This shift won’t always show up in the data, because if it was warming when you introduced the cold-biased sensor, the temperature may just look flat.

      Anyway, to remove this bias due to the sensor change you have to move one segment UP, or the other segment down. Generally people adjust the past. The reason is obvious.

      This is exactly the kind of work that Spencer and Christy do! It is also what Leif does. For UAH you have to stitch together multiple sensors, all having different properties and calibrations and drifts. So you note that when sensor A and sensor B were looking at the same patch of earth, one has a warm bias relative to the other. So you adjust one.. or the other. Doesn’t matter one whit which gets adjusted.

      So this is the bottoms up approach. You are working, typically with a human, going series by series, looking at the metadata and trying to take out the biases where you have good evidence of systematic changes in the methods of observation.

      A similar one is TOBS. Same basic approach, except the correction is a modelled answer – a model verified with field data, but a model nonetheless.

      NCDC used to work this way. You have discrete ‘reasons’ for making the adjustment: station move, TOB change, instrument change. And every decision is traceable.
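      A toy illustration of that kind of documented, traceable shift (Python, invented numbers; this is not the Bureau’s or NCDC’s actual code, just the logic described above):

import numpy as np

# Synthetic annual series, 1900-2020, with a sensor change in 1960 (all numbers invented)
rng = np.random.default_rng(2)
years = np.arange(1900, 2021)
temps = 14.0 + 0.005 * (years - 1900) + rng.normal(0, 0.2, years.size)

changeover_year = 1960
sensor_a_warm_bias = 0.5  # from a hypothetical side-by-side field comparison

# Bottoms-up adjustment: shift the older (sensor A) segment down by the measured bias,
# leaving the recent segment untouched
adjusted = temps.copy()
adjusted[years < changeover_year] -= sensor_a_warm_bias

print("Pre-1960 mean, raw vs adjusted:",
      round(temps[years < changeover_year].mean(), 2),
      round(adjusted[years < changeover_year].mean(), 2))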

      Now, of course there are several potential challenges here.

      1. Can you trust the metadata? Was the station REALLY moved? Was a TOB change missed? Was an instrument change properly recorded? Did the ACTUAL instrument in question have a cold bias or warm bias?
      2. Can you trust the researcher, or does he have his thumb on the scale?
      3. Did this bottoms up approach miss obvious flaws in the data?
      4. Did the researcher stop investigating because he got an answer he liked? Note this is different than #2.

      Some countries (maybe AUS) tend toward this bottoms up approach: local experts trying to build a record that accounts for changes in the method of observation – changes in time, changes in place, and instrument changes. They also use algorithms to assist in this, algorithms designed to find data that statistically stands out. Think of this as an expert-based, rule-governed approach. In a perfect world you get a discrete reason for every change.

      Also, note that with this approach people will almost always glom on to #2 and accuse the guys of cheating. In general, this ends all reasonable technical discussion. They made a change, I don’t get it, therefore they are frauds. Nice logic if you can sell it.

      The top down approach is algorithmically driven. There is no human looking at individual series and deciding THIS should go up or THAT should go down. Metadata is used, but not trusted. NCDC shifted to this approach with PHA. In these approaches an algorithm looks at the data. A typical algorithm is SNHT. Here is a sample of some work:
      https://cran.r-project.org/web/packages/snht/vignettes/snht.pdf

      The algorithm doesn’t need to know anything about metadata or instruments or TOB or station moves. It just looks at the data, detects oddities and then suggests an adjustment to remove them.

      The algorithm used by NCDC is an improvement to SNHT (standard normal homogeneity test) called pairwise SNHT. In pairwise SNHT every station is iteratively compared to its neighbors. The first step is to select the 50 closest well-correlated neighbors; then you iteratively compare them. Suppose 49 stations record declining temperatures and 1 records an upward drift: pairwise SNHT will signal that the oddball may need a correction. Or if 49 record a flat period and 1 station shows a .5C jump one month and thereafter, that one oddball gets flagged.

      At this point you may go and check. YUP, the metadata shows that station’s TOB was changed at that point, so you have a reason for the change: the data series says something changed, and the metadata supports it. At other times you may not find anything in the metadata – the station jumped 1C, but the metadata has no supporting reason. Like all data, metadata can have errors of omission and commission.
      In any case, the algorithm decides what changes need to be made to make the WHOLE dataset more consistent.

      That’s it.
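      A very stripped-down sketch of the neighbour-comparison idea (Python, synthetic data; real pairwise SNHT uses a proper test statistic and many more checks, so treat this only as an illustration of the logic): compare a station to the median of its neighbours and flag the largest step in the difference series.

import numpy as np

rng = np.random.default_rng(3)
n_years = 100

# Shared regional signal plus independent station noise (all synthetic)
regional = np.cumsum(rng.normal(0, 0.1, n_years))
neighbours = np.array([regional + rng.normal(0, 0.2, n_years) for _ in range(10)])

# Target station: same climate, but with an artificial +0.5 step introduced at year 60
target = regional + rng.normal(0, 0.2, n_years)
target[60:] += 0.5

# Differencing against the neighbour median removes the shared climate signal
diff = target - np.median(neighbours, axis=0)

# Crude changepoint score: difference of means before and after each candidate year
scores = [abs(diff[:k].mean() - diff[k:].mean()) for k in range(5, n_years - 5)]
break_year = 5 + int(np.argmax(scores))
shift = diff[break_year:].mean() - diff[:break_year].mean()

print(f"Flagged break near year {break_year}, estimated shift {shift:+.2f}")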

      Now, with this approach you lose something. Since no human had their hand in the individual corrections, you won’t always have a DISCRETE reason why the data was adjusted. The bot did it. Sometimes the metadata supports the decision and sometimes, well, sometimes the only metadata you have is location: no TOB data, no instrument type, no siting photos etc. So you are trusting the algorithm to correct the data. Note there are MULTIPLE statistical approaches to doing this, all with pros and cons.

      This young lady has a nice masters thesis, and some nice examples not too math heavy
      https://www.math.uzh.ch/li/index.php?file&key1=38056

      One benefit of the top down approach is that you eliminate the “cheating” charge. The guys at NCDC don’t go hunting through each series deciding what goes up and what goes down. There is no tie between CO2 and the adjustments the algorithm makes; it’s all standard, documented statistical adjustment.

      At Berkeley we developed our own top down approach. One reason we did this was to find out if a totally new top down, data-driven approach gives the same answer as NCDC’s top down approach, and the same answer as CRU. This was done to counter the charge of “cheating” or thumb on the scale. The answer? We get the same global answer after adjusting.

      What are the other benefits of the top down approach? Well, you can objectively test it.

      1. You can take the code and data, run it and get the same answer. Why is this important? Well, with the bottoms up approach a different human looking at the same data may make different decisions. Then what? Battle of the experts? With an algorithmic approach the code is open and the data is there; you can see exactly how it works. If you want to claim that the code adjusts data to match the CO2 rise, well, the code is there to show you this claim is bogus.

      2. You can run objective tests on the algorithms. This has been done for NCDC and other approaches.
      here is just one example
      https://www.clim-past.net/8/89/2012/cp-8-89-2012.pdf
      here is another
      https://rmets.onlinelibrary.wiley.com/doi/full/10.1002/joc.4265

      3. It serves as a CROSS CHECK on the bottoms up approach. There are a handful of papers comparing bottoms up “national approaches” with Berkeley’s top down approach, and more coming, because the bottoms up guys believe bottoms up is better. See? Debate, in the science.

      4. We can test that it works by comparing to reference stations.
      https://agupubs.onlinelibrary.wiley.com/doi/full/10.1002/2015GL067640

      5. Lastly, one weakness ( see below) SHOULD focus research on the only questions worth asking
      A) what is the effect of UHI
      B) what is the effect of microsite bias

      For online debates the top down approach helps me understand who is serious about actual technical discussion and who is just interested in smearing and false accusations of fraud. The various codes are public, the benchmarks are public, the bugs are documented, the changes and improvements are documented, and there is a rich pile of purely statistical literature on the general time-series topic. Anyone claiming fraud about top down methods doesn’t know what they are talking about; they instantly discredit themselves, because there are better arguments against top down methods than the false one of fraud. Folks who use the weakest argument just own-goal themselves.

      What are the problems with top down?

      1. People continually want to know the reason for every discrete adjustment (think of asking a neural net why it identified the image as a cat). But in the algorithmic approach the whole rationale is based on getting away from the human element, the human decision, the cheating human. The algorithm does not attempt to rationalize every decision; it attempts to REDUCE bias and fix obvious problems. Decisions can be confirmed by metadata, but not in every case, because metadata, like all data, is subject to uncertainty.

      2. The top down approach works best when a majority of stations have no issues. This is a problem especially for locations where urban sites outnumber rural sites. These areas exist. I like a top down approach because its weakness focuses attention on an area that needs more research. Focus drives improvement.

      3. Top down approaches reduce bias in regional metrics, sometimes at the expense of local fidelity. What does this mean? If you’re interested in ONE single record at one single location and the accuracy of that one single data stream, your best bet is to do both top down and bottoms up analysis (called a hybrid approach).

      In short, top down will eliminate the charges of human fraud, and it removes potential bias due to idiosyncratic human decisions, but at the expense of having adjustments you can explain in 100% of the cases. It trades a global reduction in bias for some potential of odd local results. It gives people cherries to pick. There are two kinds of cherry pickers: the due-diligence cherry picker who points out the issue and works to improve things, and the cherry picker who finds issues and screams fake or fraud. Guess which ones are from outside the country called science?

      Obviously I prefer the top down approach. One, because it lets me know who the dishonest cranks are (folks who insist the algorithms “cheat” or that NCDC cheats), and second, because we have good ways of testing the approach and improving it. So when my friend emails me and says hey, we found a problem in your estimates, I am super happy. I have 10 or so years invested in the work. I make the data available to you, especially when your aim is to try and find something wrong with it!

      https://rmets.onlinelibrary.wiley.com/doi/abs/10.1002/joc.4808

      https://sci-hub.tw/https://doi.org/10.1002/joc.4808

      Back To Australia!

      1. I have not read their approaches with a lot of due diligence. I see people claim fraud, but I haven’t seen any evidence of that in their publications.

      2. Our approach tends to show less warming than theirs. I started to look into this (mainly trying to help Geoff S). It’s a huge effort to look at an entire country. A while back I helped with a project on Labrador https://sci-hub.tw/https://doi.org/10.1139/cjes-2016-0034 and going through even a small region by hand is a months-long task.

      In general I would choose a top down approach over a bottoms up approach, BUT if you are really interested in Australia, really really interested in what the best answer is, then you’d better compare top down with bottoms up. You’d test a variety of algorithms, you’d hold out some data and see which series was best at predicting the held-out data. Huge job; hybrid approach.

      As far as global warming goes, I’m happy with the imperfect job our top down approach delivers. The algorithm scored well on objective tests. There is room for improvement and I spend most of my time looking for systematic flaws, flaws that affect hundreds or thousands of stations. In total about 35% of the stations receive no adjustments. Of the 65% that are adjusted, half are warmed and half are cooled, roughly speaking. Depending on the region, some regions are warmed and others are cooled. Nobody criticizes the regions our algorithm cools. Immigrant labor.

      In the end land adjustments are small, globally speaking. Land is 30% of the record; the other 70% is SST. SST records have their trend adjusted down by more than land is adjusted up. So if you use all raw data for SST and raw land, you get MORE WARMING than if you adjust SST and adjust SAT.

      Raw-data worshippers dislike this fact.

      How do I put this? Suppose Australia is warmed artificially by 1C. What does that do to the global picture?

      .015C bias to the global record.

      So, am I going to focus my time on AUS? Nope. There is more interesting work in UHI and microsite. Folks who want to do a proper audit of AUS – and by proper audit I mean a Steve McIntyre class of audit – well, you have your work cut out for you. I always look at the ROI. If I invest my time in UHI, which may have an effect across the globe, I will have more impact than if I focus on a single country that doesn’t matter in the grand scheme of things.. .015C. It’s like AUS CO2 cuts!! Too small globally to make a difference.

      Anyway. I have been working on a tutorial for WUWT folks, first on the global average, later one on adjustments. Maybe AUS would be a good example to work. But first let’s see if I can get the method tutorial done and published here.. it’s pretty frickin long for a blog post.

      Then maybe a post on adjustments, in the context of famous cases where data needed to be corrected to move science forward.

        • err no. Usually, since I am busy, I post from the taxi, train or plane. On the phone.
          I had some time to sit down and write, and the person asking the question seemed genuinely interested, perhaps willing to learn. Plus Australia has also been an interesting place when it comes to temperature. A cursory look says the data is ratty – not as bad as USA data, but pretty fricken ratty. A cool challenge. I had spent some time collating data/stations for Geoff S because he actually does fricking work and shares ideas, and though we may disagree, he is always respectful and never stoops to screaming fraud. He may disagree with the approach people take but it’s always in a technical context. In any case he contacted me recently about some technical issues. One thing he said forced me to re-look at some work I did, and it helped (I owe him a thank you, always best made public, so I’ll do that when the piece goes out). Anyway, somebody asked an honest question about AUS without prejudice or insinuation, so that deserved a fair and long answer.

      • Total gobbledygook garbage that leads to tangled web of bullshit.

        Nothing…that’s right, nothing will come out of it apart from useless mumbo jumbo.

        You don’t adjust data.
        You don’t adjust data.
        You don’t adjust data.
        You don’t adjust data.
        You don’t adjust data.
        You don’t adjust data.
        You don’t adjust data.
        You don’t adjust data.
        You don’t adjust data.

        Talk all you want but you don’t adjust data.

        • Ok, when your social security check comes tell them no CPI adjustment please.

          Psst, do you wear glasses?

        • Mike said:

          “Total gobbledygook garbage that leads to tangled web of bullshit.

          Nothing…that’s right, nothing will come out of it apart from useless mumbo jumbo.

          You don’t adjust data.
          You don’t adjust data.
          You don’t adjust data.
          You don’t adjust data.
          You don’t adjust data.
          You don’t adjust data.
          You don’t adjust data.
          You don’t adjust data.
          You don’t adjust data.

          Talk all you want but you don’t adjust data.”

          Wow. Truth by bald assertion AND repetition! How can you beat that?

          Let’s say you had a thermometer. You collect a bunch of readings with it.

          Then you find out that the markings are off by 2 degrees! Damn. The thermometer still reads consistently over its range of measurement, but it’s all shifted by 2 degrees.

          Do you throw away your data? Do you insist on keeping the results that are all 2 degrees out? Do you correct for the bias you discovered?

          Is it really true that data should never be adjusted?

            The thermometer still reads consistently over its range of measurement, but it’s all shifted by 2 degrees.

            Then why would you need to adjust? It’s consistent, meaning the anomalies will be the same and the trend will be the same regardless of which way it’s shifted. Adjusting it only opens opportunities to change the results to something you prefer.

            Is it really true that data should never be adjusted?

            The data is individual observation/measurements. The data is the data like it or not. Adjustments to the data is not data, it’s guesses (whether done by an individual or by an algorithm). The result of data + adjustments is not data. It may or may not be useful to adjust the data but the result is, by definition, no longer data because it is no longer an actual measurement.
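            A quick check of the “consistent offset” point (Python, invented series; just an illustration of why a fixed 2-degree shift leaves anomalies and trends untouched):

import numpy as np

rng = np.random.default_rng(4)
years = np.arange(1980, 2020)
temps = 15.0 + 0.02 * (years - 1980) + rng.normal(0, 0.3, years.size)

shifted = temps + 2.0  # the same record from a thermometer marked 2 degrees out

# Anomalies of each series relative to its own baseline period
base = slice(0, 30)
anom = temps - temps[base].mean()
anom_shifted = shifted - shifted[base].mean()

print("Max anomaly difference:   ", np.abs(anom - anom_shifted).max())  # ~0
print("Trend difference (deg/yr):",
      np.polyfit(years, temps, 1)[0] - np.polyfit(years, shifted, 1)[0])  # ~0

            Knowing the absolute temperature, as the reply below points out, is a different question.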

          • John Endicott said:

            “then why would you need to adjust, it’s consistent meaning the anomalies will be the same, the trend will be the same regardless of which way it’s shifted. Adjusting it only opens opportunities to change the results to something you prefer.”

            What if you want to know what the actual temperature was? You know that your thermometer is out by 2 degrees. The only way to determine the actual temperature is to allow for this.

            “The data is individual observation/measurements. The data is the data like it or not. Adjustments to the data is not data, it’s guesses (whether done by an individual or by an algorithm). The result of data + adjustments is not data. It may or may not be useful to adjust the data but the result is, by definition, no longer data because it is no longer an actual measurement.”

            If you know that your data is all inconsistent by 2 degrees with the temperature you now know you really observed, then adjusting your data by 2 degrees to match what you now know you actually really observed doesn’t make it not data.

          Let’s say I’m looking at my thermometer now. It says 20 degrees. I record that data.

            Then I remember that it’s out by 2 degrees. So I put a line through my 20 degrees figure and write 22. Is that no longer data?

          • Data is the *actual* recorded measurements. period. Anything else, no matter how much “better” you might “think” it to be, is not actual measurements and therefore, by definition, is most definitely *NOT* data. The data and *only* the data is the data, no matter how flawed you might consider it to be.

          And if you think you know that data is “incorrect” by “X” degrees, that’s what ERROR bars are for. You don’t change the data, you use error bars to indicate how accurate the data may or may not be.

          • John Endicott said:

            “And if you think you know that data is “incorrect” by “X” degrees, that’s what ERROR bars are for. You don’t change the data, you use error bars to indicate how accurate the data may or may not be.”

            In this case you know exactly what the error is. It isn’t a range. It’s a fixed offset. This cannot be represented by error bars.

          • Well for a start, you wouldn't use a thermometer that was out by 2 degrees, having first checked its accuracy. Or do you think this is beyond science?
            There is nothing wrong with mercury thermometers. They are very accurate and very consistent, decade after decade. I know this because I have one which is over 50 years old and is just as accurate today as it was when manufactured. It's easy enough to check for accuracy against 2 or 3 other newer thermometers. They also have more than enough resolution for measuring the temperature of the human habitat. Fandangled new digital stuff is either for use in the lab or for climate scientists and meteorologists who need to justify their existence by blinding people with meaningless 0.001 degree resolution data taken over ridiculously short time periods and claiming they can extract permanent trends after they adjust the data without real justification.

            Let's take your example. You believe (reason?) your measurements are out by 2 degrees, so you adjust the temps up or down by 2 degrees. The original measurements are now gone. Then someone comes along 10 years later and decides that these temps are wrong for some other unforeseen reason and adjusts the "data". You are now left with garbage.
            And what happened to the original data? Maybe the original adjustment was made for unscientific reasons? Who knows?
            What should happen is that your suggestions for change and your reasons should be annexed to the original unchanged data so that others may scrutinise it at a later date.
            A perfect example of the reasons NOT to adjust is that the BOM is now claiming – AND RECORDING forever – new record-breaking temperatures here almost daily. Given the original posting and many others, I cannot and do not believe them.

          • Mike said:

            "Well for a start, you wouldn't use a thermometer that was out by 2 degrees, having first checked its accuracy. Or do you think this is beyond science?"

            I’m not sure that you understand how thought experiments work.

            If your absolute statement is correct, then it shouldn’t be possible for me to propose a set of circumstances where it doesn’t apply.

            "There is nothing wrong with mercury thermometers. They are very accurate and very consistent, decade after decade. I know this because I have one which is over 50 years old and is just as accurate today as it was when manufactured. It's easy enough to check for accuracy against 2 or 3 other newer thermometers. They also have more than enough resolution for measuring the temperature of the human habitat. Fandangled new digital stuff is either for use in the lab or for climate scientists and meteorologists who need to justify their existence by blinding people with meaningless 0.001 degree resolution data taken over ridiculously short time periods and claiming they can extract permanent trends after they adjust the data without real justification."

            This is irrelevant to the thought experiment I have detailed.

            “Let’s take your example. You believe (reason?) your measurements are out by 2 degrees”

            The question about reason suggests to me that you have failed to understand the nature of the thought experiment. This is about what you do if you have verified that your thermometer is marked 2 degrees out.

            “so you adjust the temps up or down by 2 degrees.”

            So now your temperatures reflect the temperature that you actually observed, not the temperature that you wrongly thought you had observed.

            “The original measurements are now gone.”

            No, the true nature of the original measurements is now revealed.

            "Then someone comes along 10 years later and decides that these temps are wrong for some other unforeseen reason and adjusts the "data". You are now left with garbage.
            And what happened to the original data? Maybe the original adjustment was made for unscientific reasons? Who knows?
            What should happen is that your suggestions for change and your reasons should be annexed to the original unchanged data so that others may scrutinise it at a later date.
            A perfect example of the reasons NOT to adjust is that the BOM is now claiming – AND RECORDING forever – new record-breaking temperatures here almost daily. Given the original posting and many others, I cannot and do not believe them."

            I agree that the original observations should be preserved, but if someone asks you what temperature it was at a certain time, and your original observation was 20, and you know your thermometer reads cool by 2 degrees, the correct answer to tell them is that the temperature was 22.

          • This is irrelevant to the thought experiment I have detailed.

            Your thought(less) experiment is irrelevant. If you know in advance that the thermometer is off by 2 degrees, you don’t use that thermometer. Period. And if you don’t know in advance but only “figure it out later”, then you don’t know that the error was actually existent at the time the readings were taken or if it developed over time, you are only guessing and assuming. The data is the data. You can not change the data (at least not if you want to be taken seriously as a scientist. Clearly you do not).

            In this case you know exactly what the error is. It isn’t a range. It’s a fixed offset. This cannot be represented by error bars.

            Bzzt, wrong. With thermometer readings there's always a range; all this does is shift where the upper and lower bounds of the range should be marked. And no, you don't know exactly what the error is (unless you are omniscient); even what you *think* the error is has error bars of its own.

          • John Endicott said:

            “Your thought(less) experiment is irrelevant. If you know in advance that the thermometer is off by 2 degrees, you don’t use that thermometer. Period.”

            Technically, if you know it is off by exactly 2 degrees because you tested it and determined that the markings are off by 2 degrees, there is absolutely nothing to stop you using it, provided you check that the 2 degree offset is the only thing wrong with it.

            If you get a thermometer that you know to be good, and you re-mark it so that it reads out by 2 degrees, you still know what the true reading is. It doesn’t prevent you from knowing the temperature with the same accuracy you could before you re-marked it.

            Regardless, this is irrelevant to the thought experiment.

            “And if you don’t know in advance but only “figure it out later”, then you don’t know that the error was actually existent at the time the readings were taken or if it developed over time, you are only guessing and assuming.”

            This is not necessarily true.

            Take a glass thermometer that has engraved markings for F and C. Suppose you test it against a correctly calibrated source and you find that it is 2 degrees C out. Then you check the F scale. It reads correctly. The engraving can’t have moved, and the F checks out, so it logically follows that the engravings for C were marked incorrectly by 2 degrees. Now, in a “you can’t actually know anything” sense, this isn’t absolute, but if we’re going to get into arguments about how you can’t actually really know anything there is no point discussing it.

            Heck, what say that I made one deliberately to trick someone, and left the F scale correct so that it could be verified that the thermometer still performed as it had before I doctored the markings.

            It is simply not impossible.

            “The data is the data. You can not change the data (at least not if you want to be taken seriously as a scientist. Clearly you do not).”

            I would say that in the case of the thermometer with offset markings, that recording what it actually reads to the eye is data about what the thermometer represents the temperature as, and recording what real temperature that actually represents is data about the true temperature represented by the thermometer.

            Sure, you can record both at the same time, but does the latter become not data if you don't write them down at the same time and instead, knowing that the markings are off, allow for it later?

            "Bzzt, wrong. With thermometer readings there's always a range; all this does is shift where the upper and lower bounds of the range should be marked. And no, you don't know exactly what the error is (unless you are omniscient); even what you *think* the error is has error bars of its own."

            Yeah, but you can't represent the temperature that you know the thermometer is actually displaying with error bars around the offset reading. The real reading is 2 degrees away, with its error bars equally as wide.

          • Heck, what say that I made one deliberately to trick someone,

            Then you've just verified that you aren't interested in science or data or facts. Bottom line: in real science, you don't change the data. Period. You report the data you have, and you can report alongside the data any errors you think that data contains, but the data remains *AS IT IS*; anything else is not data by definition.

          • The question about reason suggests to me that you have failed to understand the nature of the thought experiment. This is about what you do if you have verified that your thermometer is marked 2 degrees out.

            You fix or replace it *before* using it to record data (you did test it beforehand, like a good scientist should, right? You weren't performing shoddy science by using untested, uncalibrated equipment and then going back and changing your data to cover up your slipshod scientific methods, right?).

          • I said:

            “Heck, what say that I made one deliberately to trick someone,”

            John Endicott said:

            “Then you’ve just verified that you aren’t interested in science or data or facts.”

            It's very scientific if, as in this case, what I am testing is the claim that you can't know afterwards that the thermometer has been reading 2 degrees out the whole time. That is demonstrably not true. This isn't about ethics.

            "Bottom line: in real science, you don't change the data. Period. You report the data you have, and you can report alongside the data any errors you think that data contains, but the data remains *AS IT IS*; anything else is not data by definition."

            If, as I have just established, it is possible to know afterwards that your previous readings were offset by 2 degrees, does it become not data because I write the temperature I know the thermometer is displaying rather than what the markings represent the temperature to be?

            "You fix or replace it *before* using it to record data (you did test it beforehand, like a good scientist should, right? You weren't performing shoddy science by using untested, uncalibrated equipment and then going back and changing your data to cover up your slipshod scientific methods, right?)."

            What in practical reality is the difference between writing a different number next to the markings on the thermometer and then writing down the reading, or doing the correction in your head?

          • I mean, if I’m sitting there looking at the thermometer with a pen in my hand knowing what I need to write next to the markings to correct them, and make an observation doing the correction in my head, is it not data, but suddenly becomes data when I write the new number on the thermometer?

          • “Bottom line in real science, you don’t change the data.”

            For what it’s worth, you have got me thinking about the definition of data vs information. But that also leads me to think that it’s actually impossible to change data. If it becomes information, then you haven’t changed the data.

            But in the case I'm describing, does that mean you're changing your observation and not the data? And how does that work with the example I give of a thermometer where you have the option to correct the markings with a pen and then write them down, vs doing the correction in your head before writing it down?

          • me: "You fix or replace it *before* using it to record data (you did test it beforehand, like a good scientist should, right? You weren't performing shoddy science by using untested, uncalibrated equipment and then going back and changing your data to cover up your slipshod scientific methods, right?)."

            Phil: What in practical reality is the difference between writing a different number next to the markings on the thermometer and then writing down the reading, or doing the correction in your head?

            The difference is *Science*. The first method (fixing a known “broken” instrument so that it works properly *before* use) is how one properly does science. After the fact rewriting of the data is not science and it is how charlatans get away with making the data fit their ideas.

          • I mean, if I’m sitting there looking at the thermometer with a pen in my hand knowing what I need to write next to the markings to correct them, and make an observation doing the correction in my head, is it not data, but suddenly becomes data when I write the new number on the thermometer?

            Remember what we are discussing: changing the data *after* the recording (i.e. adjusting). If you are mentally adjusting it *before* writing it down, that's a different issue – one of shoddy science, because you have to remember to mentally adjust it every time, and everyone else using that instrument needs to know to do the same mental adjustment. That you could forget, or that someone else on the team might not even know to do that calculation, opens up a whole lot of room for errors (from honest people) and for manipulation (from dishonest people).

          • John Endicott said:

            “The difference is *Science*. The first method (fixing a known “broken” instrument so that it works properly *before* use) is how one properly does science.”

            You can still do science properly in the thought experiment. You will get the exact same results.

            “After the fact rewriting of the data is not science and it is how charlatans get away with making the data fit their ideas.”

            I guess what I’m thinking about is the difference between adjusting data and adjusting an observation.

            "Remember what we are discussing: changing the data *after* the recording (i.e. adjusting). If you are mentally adjusting it *before* writing it down, that's a different issue – one of shoddy science"

            I find it hard to draw an exact line there. If I don't write it down but remember it with my brain, is it not data? Does it only become data when I put it on a piece of paper?

          • You can still do science properly in the thought experiment. You will get the exact same results.

            If your equipment is so faulty that you have to make mental adjustments when recording, you are doing it all wrong. Period.

            I guess what I'm thinking about is the difference between adjusting data and adjusting an observation.

            In real science you don't adjust data and you don't adjust observations. If you are, you are doing it all wrong.

            I find it hard to draw an exact line there. If I don't write it down but remember it with my brain, is it not data? Does it only become data when I put it on a piece of paper?

            If you are not writing your observations down (so that others can see what you did), you are doing it all wrong.

          • "If your equipment is so faulty that you have to make mental adjustments when recording, you are doing it all wrong. Period."

            So wrong and so terrible, yet you get the exact same results. This isn’t about best practices. It’s about the nature of data, information, observations, and adjustments.

            “In real science you don’t adjust data and you don’t adjust observation. If you are, you are doing it all wrong.”

            So what do you call allowing for the markings being off? If it isn’t an adjustment to the data or the observations, then what is it an adjustment to?

            “If you are not writing your observations down (so that others can see what you did), you are doing it all wrong.”

            The question wasn’t about best practices. What is the answer to the question? If you’re absolutely certain what is data and what isn’t, it should be easy to answer.

      • Here's the science problem with your two-sensor scenario: you know that sensor A has a 0.5C warm bias, but you don't really know that. It could be that sensor B has a cold bias. Unless you have a control thermometer to compare with both sensor A and sensor B, you can't tell if A is warm or B is cold.

        In any event, the proper adjustment is to adjust both toward the middle, unless you have absolute certainty that one side actually is warmer or cooler than the other. That will spread the uncertainty to both sides and prevent undue cooling or heating on one side only, as in the sketch below.
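
        A minimal sketch of that split-the-difference idea, with made-up readings (an illustration only, not any agency's method):

            # Two co-located sensors disagree; without a reference standard we cannot
            # say which one is biased, so split the estimated offset symmetrically.
            a = [20.4, 21.1, 19.8, 20.7]          # hypothetical sensor A readings
            b = [19.9, 20.6, 19.3, 20.2]          # hypothetical sensor B readings

            offset = sum(x - y for x, y in zip(a, b)) / len(a)   # mean disagreement, here +0.5

            a_adj = [x - offset / 2 for x in a]   # nudge A down by half the disagreement
            b_adj = [y + offset / 2 for y in b]   # nudge B up by the other half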

        • Yes.

          In some methods only one series is adjusted; this could lead to a higher error of prediction.
          In other methods (like the Berkeley method) all stations are simultaneously adjusted to reduce the error of prediction. So, in effect, if sensor B has a cold bias of 0.5C ± 0.25,
          every instance of change between A and B could be adjusted differently, such that disagreement with all other series is globally minimized. The amount of adjustment could vary series to series based on the overall statistics.

          As an example: you have 50 series, 40 of which are B type, and 10 are type
          A that switch to B. Not every A that switches to B will get the same adjustment.

          They will get adjusted to make them better predictors of the other 40 series.

          Top down, adjusted to minimize the error of prediction, rather than bottom
          up, adjusted by one discrete value. (A toy sketch of this kind of joint adjustment follows below.)

          Globally? It doesn't matter which approach you use: different algorithms, same answer,
          consistent regardless of the wetware in the works or software in the works.

          Locally? One method may adjust up, a different method down, independent of the humans
          involved in the process.
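
          For what it's worth, here is a toy Python sketch of "adjust to minimize prediction error" (synthetic data and a deliberately simple estimator; not the Berkeley Earth or ISTI code): the suggested offset for a suspect segment is whatever best reconciles it with the average of the other series.

              import numpy as np

              rng = np.random.default_rng(0)
              signal = rng.normal(0.0, 1.0, 120)                                # shared monthly climate signal
              stations = {f"S{i}": signal + rng.normal(0, 0.2, 120) for i in range(5)}
              stations["S0"][60:] += 0.5                                        # S0 switches sensor type mid-record

              def suggested_offset(name, start):
                  """Offset for stations[name][start:] that minimizes squared disagreement
                  with the mean of the other stations (least squares = mean difference)."""
                  others = np.mean([s for k, s in stations.items() if k != name], axis=0)
                  return float(np.mean(others[start:] - stations[name][start:]))

              print(round(suggested_offset("S0", 60), 2))                       # close to -0.5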

          • Question: How many times were the historical recorded temperature readings “adjusted” before 1988?

      • Steven, after going through that long list of reasons why temperatures are adjusted, I didn’t see any mention of why the data manipulators always cool the past and never warm it. Did you fail to mention that data manipulator rule?

      • Mosh,

        The extended reply is greatly appreciated. I know you give and take a lot of guff on here. My two cents: WUWT is no better than the Alarmist Blogs without objective and contrary viewpoints.

        I’m looking forward to a detailed breakdown from Dr. Jen M.

      • An honest scientist will alter no data and use error bars.

        The crook (and the non-scientist) will adjust data and pretend it's now more accurate.

      • One benefit of the top down approach is that you eliminate the “cheating” charge.

        Or rather, you open up more possibilities to cheat while claiming you are eliminating cheating. Algorithms are written by people. As we've seen with Facebook's and Google's algorithms, they can be written to give the biased results you want based on the biased assumptions the algorithm writers bake into them.

        • Yup! "Algorithms" are, like any computer "code", simply going to search for and find what they're told to search for and find. Bias, conscious or not, can easily be "baked in."

          I agree with the method described by WXcycles – NO “adjustments” should be done, period. You present the actual measurements and use error bars to indicate ranges of error which covers any issues or inaccuracies. In that case, you actually have DATA, not “guesswork,” and you ALSO have indications of the range of errors present in it.

          The reason THAT isn’t done is because it would reveal just how ridiculous the claims of “precision” regarding the Earth’s temperature history really are.

          • Exactly, the data is what it is. Showing anything other than the actual data is showing a fiction. No matter how much "better" someone thinks that fiction is than the real measurements, it's still a fiction.

          • John Endicott said:

            "Exactly, the data is what it is. Showing anything other than the actual data is showing a fiction. No matter how much "better" someone thinks that fiction is than the real measurements, it's still a fiction."

            Regardless of the definition of data, if someone asks you what the temperature of something was, and you know that the thermometer it was measured with was 2 degrees out, then fiction is insisting that the temperature actually was what was recorded.

            If the temperature recorded was 20, and you know that your thermometer was reading 2 degrees too cold, it is a fiction to insist that the temperature was 20.

          • If you know the thermometer is off by 2 degrees, you don't use that thermometer. PERIOD. You get one that works. Anything else is unscientific! If you didn't know that it was off at the time the recordings were made, then you are just guessing that it was – and that is a fiction.

            The correct answer to your example is to state “the temperature recorded is 20 degrees” and give the range of error bars – which would include the fact that you *think* the thermometer was off by 2 degrees at the time it was recorded along with all the other errors inherent in using such a device. Including any reasoning for those error bars being the range that they are is a bonus.

      • The algorithm doesnt need to know anything about metadata or instruments or TOB, or station moves
        It just looks at the data, detects oddities and then suggests an adjustment to remove the oddities.

        This “top down” approach, which baldly assumes that the creator of the algorithm thoroughly knows the analytical structure of the underlying signal, is the epitome of sheer academic hubris. In each case cited by Mosher, the developer is not a seasoned analytic geophysicist, but merely a student of mathematics or programming. Instead of proven, realistic conceptions of actual signal structure in situ, they bring all the precious preconceptions of an Ocasio-Cortez to the “noble” task of “detecting oddities” and transmogrifying data in their simplistic search for the holy grail of “data homogeneity.” No serious branch of science is as lax as “climate science” in that regard.

      • Mosh,
        Thank you for this effort to enlighten this shadowy subject. I still get concerned when algorithmic adjustments are made to data that exceed the possible margin of error of the original measurement without a known reason. Assuming someone in 1930 couldn't read a thermometer within half a degree is a bad assumption; assuming the weather station exhibits a UHI effect over 90 years is possibly a good assumption if one knows the location; assuming the Stevenson screens all get dusty at an average rate is dubious. In any case, the raw data needs to be kept pristine and available, since applying the same algorithm to the previous answers simply results in a larger adjustment.

  18. How can climatologists claim gold standard accuracy with their climate change claims when the underlying temperature measurements are so poor that they require constant adjustments and are basically unfit for purpose?

    • What's the purpose?

      There are two fundamental purposes; do you know what they are?

      Hint: scary headlines is not one of them.

      • Steve, when the measurements are so “poor” that they require constant adjustments, they’re not fit for *any* purpose (other than propaganda).

    • “when the underlying temperature measurements are so poor that they require constant adjustments”

      Who says the underlying temperatures require constant adjustments? The Data Manipulators, is who.

      I say, Let’s go with the raw data. That’s more accurate than anything the Data Manipulators come up with. And it is not tainted by agenda-driven human hands. The raw data show the 1930’s as being as warm as today. That’s why the Data Manipulators want to change things.

      Temperature Adjustments are a license to cheat and steal.

    • Indeed. It’s downright comical to assert they have “certainty” about future climate catastrophe when the data is so bad they can’t even agree about what HAS ALREADY HAPPENED.

  19. The adjustment was necessary for the early readings, as Australians (like many population groups) were shorter then (they did not have as good a diet as now). The error was a parallax error and is now corrected to standard-height people who can read the thermometer at eye level to the instrument.

    …and we all know that the current BOM Mets are ‘on the level’.

  20. I would also check rainfall records for Rutherglen, it was dry in the early 1900s meaning it should be warmer then.

  21. We all know 1930 was a period of world glaciation that reached the equator and needs to be accounted for in the record. Ultimately, the proper temperature record will show the subzero temperature summers in the Midwest that occurred at that time, and the mile of ice over Toronto.

  22. Seems that Acorn v2 went through a very strenuous review.

    Since there is no ISBN, rather than simply leaving it off page i, they went with:

    “ISBN: XXX-X-XXX-XXXXX-X”

    And then page ii has a real winner with an invalid email for the author:

    “b.trewin@bom.gov.au:”

    Does not look ready for prime-time. But I’m sure all of the data, methods, and results were sifted-through with a fine-toothed comb, lol.

    • Michael, it was all put together in a rush to meet the deadline for inclusion in the remodelling for AR6 … for the IPCC.

      "The IPCC is currently in its Sixth Assessment cycle. During this cycle, the Panel will produce three Special Reports, a Methodology Report on national greenhouse gas inventories and the Sixth Assessment Report (AR6). The AR6 will comprise three Working Group contributions and a Synthesis Report.
      The AR6 Synthesis Report will integrate and synthesize the contributions from the three Working Groups that will be rolled out in 2021 into a concise document suitable for policymakers and other stakeholders.
      It will be finalized in the first half of 2022 in time for the first global stocktake under the Paris Agreement." https://www.ipcc.ch/report/sixth-assessment-report-cycle/

      ****
      Much thanks to WUWT for reposting here.

      • UN IPCC AR6 WG1 report: Modelturbation all the way down. But there are multiple lines of evidence (speculation) to support such modelturbation. Doancha know?

        The political summary will ignore the science.

        • “The political summary will ignore the science”

          Yeah, if history is any guide, it will.

          Like the time Ben Santer changed the IPCC’s position that they could not attribute human causes to climate change, to just the opposite, that humans were definitely responsible for the climate changing. Santer completely changed the meaning of the IPCC report in his effort to promote the CAGW narrative. And the IPCC let him get away with it.

          Instead of Santer being fired for being a politician posing as a real scientist, he is still working and putting out more lies about CAGW. Now he claims to have discovered absolute proof of human involvement in changing the climate. Same old lies dressed up in a new suit.

          I hope one of these days these guys get just what they deserve. Exposing them for the liars they are is a pretty good start.

      • Dr. Marohasy,

        In light of Steve Mosher's comments on a lack of insight into the methodology leading to the adjustments, do you plan on a follow-up post?

        Obviously, you posit nefarious ends but a sound critique of the methods and purported reasons for adjustments would strengthen your case.

  23. Actually, by changing the data to fit the global warming meme they are simply being honest – making the data truthful: since the data is now man-made, the global warming trend it shows using the man-made data is also man-made! Who could argue with that?
    The data never existed before in the real world, therefore it is man-made. It certainly didn't exist at the time the records were recorded.

  24. Once again, there are five significant digits and four decimal places from data barely resolved to a tenth of a degree. This doesn't seem possible if one propagates uncertainty throughout the calculations.

    I've been going over Nick Stokes's TempLS R program for the past few days, and it gave me cause to look at the GHCN monthly summary data it uses. This data has two digits right of the decimal point, but it starts with GHCN daily data in tenths of a degree. No result derived from it should have more than one place to the right of the decimal. NOAA also provides no uncertainty for the series, so I tried my own evaluation.

    I picked a month at random from a site with good numbers for the year. The name of the station is Dumont D’Urville, in the Antarctic. It has two different IDs in the two data sets: AYM00089642 in the daily data and 70089642000 in the monthly summaries, but the latitude and longitude agree, so it’s the same place.

    Right off the bat I saw the numbers don’t agree between data sets. The monthly average in the summary is given as 0.3C for January 2011, but the daily TAVG, averaged for the month, gives 0.5C. An interesting aspect of the daily data set is that it has TMAX, TMIN, and TAVG, but the TAVG is averaged from hourly measurements, not the average of TMAX and TMIN. The monthly data set says only that it is in hundredths of a degree.

    When I ran the numbers on the dailies for the month of January 2011, the error in the mean (ΔX/√N) was 0.2°C. I can't see how in good faith values in the hundredths place can be published when the uncertainty in the mean is an order of magnitude larger. The monthly average for January 2011 was 0.5±0.2°C. That's pretty astounding. Am I wrong somewhere? (A sketch of the calculation follows below.)
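
    For reference, a minimal Python sketch of that standard-error calculation, with a made-up list of daily TAVG values rather than the actual Dumont D'Urville record:

        import math

        # hypothetical daily TAVG values for one month, in degrees C
        daily_tavg = [0.8, 0.1, -0.4, 1.2, 0.6, 0.3, 0.9, -0.1, 0.7, 0.4,
                      0.2, 1.0, 0.5, -0.2, 0.6, 0.8, 0.3, 0.1, 0.9, 0.4,
                      0.7, 0.2, 0.6, 0.0, 0.5, 0.8, 0.3, 0.6, 0.4, 0.7, 0.5]

        n = len(daily_tavg)
        mean = sum(daily_tavg) / n
        sd = math.sqrt(sum((x - mean) ** 2 for x in daily_tavg) / (n - 1))  # sample standard deviation
        sem = sd / math.sqrt(n)                                             # standard error of the mean

        print(f"monthly mean = {mean:.1f} +/- {sem:.1f} C")  # report only the precision the data supports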

    • “It has two different IDs in the two data sets: AYM00089642 in the daily data and 70089642000 in the monthly summaries, but the latitude and longitude agree, so it’s the same place.”

      Nope! Not necessarily.

      Having the same lat/lon is no assurance it is the same station.
      You have no assurance that the same location reported in the metadata
      MEANS that the stations are the same or, more importantly, that the sensor is the same.

      WHY?
      Because some locations (airports, research centers) have TWO sensors located roughly in the same place.

      One sensor may report hourly.
      One sensor may report daily.

      Further, just because the metadata is different (site name and site location) doesn't MEAN that the
      stations are DIFFERENT!

      WHY?

      Well, because just as data can be wrong, so can metadata be wrong.

      Potential mistake: believing the metadata without question.

      So: there is a way to tell, but you need to do more work.

      For that station, 70089642000, there are 6 alternative sources and at least one controlling daily:
      GSOD.

      From the looks of it you went hunting for a cooling station:

      There are at least two candidate stations there, with very slightly different naming and slightly different
      lat/lon. The data series are different and the sources for the data are different.

      http://berkeleyearth.lbl.gov/stations/151563

      http://berkeleyearth.lbl.gov/stations/3022

      Welcome to Hell! Are these the same station, reported in different databases? Two sensors at one location? Two different records of the same station?

      The approach folks take (see ISTI and Berkeley) is to apply tests to both the metadata and the time-series data to deconflict multiple sources. You have to check all sources, check all metadata,
      and check whether the time series match, and then you can deconflict. The deconfliction process
      is probabilistic. (A toy sketch of the time-series side of such a test follows below.)
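
      For illustration only, a minimal Python sketch of the kind of overlap test involved (thresholds and data are assumptions, not the ISTI or Berkeley Earth algorithm):

          from statistics import correlation, mean   # correlation() needs Python 3.10+

          def likely_duplicates(series_a, series_b):
              """Compare two daily records (dicts keyed by date) over their common dates."""
              overlap = sorted(set(series_a).intersection(series_b))
              if len(overlap) < 30:
                  return False                        # not enough common data to decide
              a = [series_a[d] for d in overlap]
              b = [series_b[d] for d in overlap]
              r = correlation(a, b)
              mad = mean(abs(x - y) for x, y in zip(a, b))
              return r > 0.99 and mad < 0.3           # near-identical values: probably one station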

      • The stations have the same name and check to within 10.8″ in latitude and longitude (about 880 feet difference), but I shouldn’t trust that because having the same name, latitude, and longitude in an official US Government publication is no reason to trust that it’s the same station. However, your BEST location data doesn’t match either of the NOAA station location data, so maybe there’s actually FOUR stations running down there.

        Do you realize how that sounds?

        What "all sources" and "all metadata" take precedence over the officially published station IDs and metadata in official NOAA data sets? Why do they have different IDs? It's the Federal government, and one data set is daily records and the other is monthly summaries, so why should they have the same ID in two different data sets? They don't explain it; they just tell you "these are the station IDs." It actually says, in the daily files readme.txt, "The name of the file corresponds to a station's identification code. For example, "USC00026481.dly" contains the data for the station with the identification code USC00026481."

        Do you believe THAT, or do you need independent verification from a third party for that as well?

        Anyway, it’s obvious they ARE the same station, unless one is of the paranoiac type. That isn’t the point, anyway. The point is that if you take the daily temps and get the average, and actually pay attention to the uncertainty, instead of treating scientific measurements like a calculator game, the mean for the month of January 2011 at that station is 0.5±0.2°C. You realize how large that uncertainty is, right? That was using the ΔX/√N formula to get the error in the mean.

        The figure for that month from the monthly summary is only different by the uncertainty from the daily measurements, so I suppose that’s a win. The other months have similar uncertainties, so using results with two digits to the right of the decimal is definitely not supported by the raw data.

        I’m curious; how do you justify using such precision? I see you have some uncertainties on the pages you linked, but they look overprecise for the starting data as well. What’s the method for taking daily data in tenths of a degree and averaging them monthly into hundredths of degrees? Would you take a moment and explain the process?

        • “The stations have the same name and check to within 10.8″ in latitude and longitude (about 880 feet difference), but I shouldn’t trust that because having the same name, latitude, and longitude in an official US Government publication is no reason to trust that it’s the same station. However, your BEST location data doesn’t match either of the NOAA station location data, so maybe there’s actually FOUR stations running down there.

          Do you realize how that sounds?"

          Sounds like you have never looked at metadata.

          What “all sources” and “all metadata” take precedence over the officially published station IDs and metadata in official NOAA data sets? Why do they have different IDs?

          1. NOAA/NCDC sources are not "official".
          2. Both GHCN D and GHCN M are aggregated from other sources.
          3. There are a multitude of IDs, all assigned by different agencies.

          Your mistake is thinking that NCDC is a source of data from that part of the world.
          They are not. People send them data. They repost it.
          At the same time people send them data, they also send the data to other folks,
          like the WMO and USAF.

          That's right: other governments and other scientific agencies report their data to
          NCDC. They also report it to the WMO.

          NCDC just collects the data.

          But let's do a test.

          Go to GHCN M.

          Check AGM00060355 SKIKDA.

          Find the location?

          Let me help:

          Lat 36.9330
          Longitude 6.9500

          Do a Google map of that.
          Find something?

          Yup, it's in the water.

          Now go to WMO oscar metadata.

          https://www.wmo-sat.info/oscar/

          Look up the same site. It's correctly located.

          Why does NCDC have the station in the water while the WMO has it correctly placed?

          Because the WMO ran a whole program to update the metadata of sites and got people to update
          their positions. The program required people to report positions in degrees, minutes and seconds,
          so WMO metadata is usually pretty good; they did an update.

          This new data gets reported to the WMO.
          This data doesn't always get reported to NCDC! DOH!

          So, sometimes NCDC metadata is out of date.

          It goes like this.

          Country X has a station.
          1. They report some data to the WMO.
          2. They report some data to NCDC.
          3. They report some data to the USAF.
          4. They keep and publish their own local records.
          5. They don't always update metadata to all sources (1, 2, 3, 4).

          So just because NCDC puts the SKIKDA station in the middle of the ocean doesn't mean
          it's in the middle of the ocean because you think NCDC is "official".
          You need to check:
          What country owns the site?
          Do they keep a record?
          Do they send records to the WMO?
          Do they send records to NCDC?
          Do they send records to the USAF, METAR?

          Then, when you look at all the data, you can deconflict.

          So remember:

          NCDC is a repository. Other people own the data and send it to them.
          That "same" data is also sent to other repositories.
          The repositories don't always agree.
          The first job is deconflicting.
          Lat/lon and name are not enough to deconflict, because two sensors can be at the same location reporting different measures.

        • “What “all sources” and “all metadata” take precedence over the officially published station IDs and metadata in official NOAA data sets?

          For GHCN D (daily),
          they list the ORIGINAL sources for the daily data with every station.

          See this:
          ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/daily/readme.txt

          So for the station in question you go back to the original sources before it gets into GHCN D.

          Sometimes this will be GSOD, or FSOD, or any number of other original sources.

          “I’m curious; how do you justify using such precision? I see you have some uncertainties on the pages you linked, but they look overprecise for the starting data as well. What’s the method for taking daily data in tenths of a degree and averaging them monthly into hundredths of degrees? Would you take a moment and explain the process?”

          Sure. First, you can read our published paper and appendix.

          Next, I am putting a little tutorial together. It should be done in a couple of weeks.

          Mostly, people don't understand what a global spatial average is. Hint:
          it's not an average and not a measurement.

          Second, try this: add as much error as you like to the measures,
          then compute a global average and take the anomaly.
          What do you notice? (A small sketch of this exercise follows below.)
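
          A minimal Python sketch of that exercise, using synthetic station data (an illustration of the point, not the Berkeley Earth pipeline): a fixed bias added to each station drops out once anomalies are taken against each station's own baseline.

              import numpy as np

              rng = np.random.default_rng(1)
              temps = rng.normal(15.0, 1.0, size=(10, 360))     # 10 stations x 360 months of synthetic temps
              biases = rng.uniform(-3.0, 3.0, size=(10, 1))     # an arbitrary fixed error per station

              def global_anomaly(series):
                  baselines = series.mean(axis=1, keepdims=True)   # each station's own baseline
                  return (series - baselines).mean(axis=0)         # simple unweighted "global" average

              print(np.allclose(global_anomaly(temps), global_anomaly(temps + biases)))   # True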

        • ""these are the station IDs." It actually says, in the daily files readme.txt,"

          So you read the readme but missed the part where they tell you what the original sources are.

          Then you ask me why these files (GHCN D) are not the only sources.

          Weird.

  25. Welcome to the Adjustocene. I am a denier. And the first thing I deny is that the “climate scientists” are honest or well intentioned.

    If the question is whether it is warmer now than it was 150 years ago, the answer is maybe. Show me honest unadjusted evidence with plausible error bars.

      "If the question is whether it is warmer now than it was 150 years ago, the answer is maybe."

      The real question to ask is whether it was as warm in the 1930s as it is today. The answer is yes. Unmodified charts from all over the world show this.

      So if it was as warm in the 1930’s as it is today, then that means there is no unprecedented amount of heat that has been added to the Earth’s atmosphere between then and now, due to CO2 or anything else. The 1930’s warmth rose to a certain level, then cooled off for a few decades, and now, today, the warmth has risen back to the same level as in the 1930’s. This means CO2 is not a player in this game, it is an also-ran.

      It was just as warm in the 1930s as now. No unprecedented warmth is required. There is no CAGW!

      In fact, current temperatures are about 1C cooler than the high temperatures of the 1930’s. The year 2016 was designated by NASA/NOAA as the “hottest year evah!”. According to the Hansen 1999 US surface temperature chart, 1934 was 0.5C warmer than 1998, and was 0.4C warmer than 2016. And now we have cooled about 0.6C from the 2016 highpoint. “Hotter and hotter” is no longer applicable.

      There is no unprecedented heat in the Earth’s atmosphere. There is no CAGW.

      • ‘Scientific’ prediction: 2020, 2021 and 2022 will be the hotterest years ever in BOM land and new unprecedented all-time high max and min records will be set every other day. And it will require drastic collective actions and the outright rejection of all debate and vilification of all dissent in order to ‘fix’ it.

  26. For anybody interested in the ACORN 2 rewriting of Australia’s temperature history, I’ve uploaded a state-by-state breakdown of all ACORN stations, starting with a national analysis of the 57 locations that have a start year of 1910 at http://www.waclimate.net/acorn2/index.html

    Actually, only 111 of the 112 network stations could be analysed because the bureau’s ACORN 2 daily temp download files provide Halls Creek temperatures in the Kalumburu maximum file. Pretty sloppy for a world class dataset.

    The above linked national page links to NSW, the Northern Territory, South Australia, Tasmania, Victoria and WA with min/max charts and data summaries for each site, as well as Excel downloads containing the daily temps in all three datasets (ACORN 1, ACORN 2 and RAW) and tabulated averages of their annual weekday temps, annual monthly temps and annual temps (averaged to be compliant with ACORN procedures).

    Since there's been some comment above on decimal precision by observers, the tables also show the annual x.0F/C rounding percentages and the averages before and since 1972 metrication. The bureau has conceded that metrication caused a +0.1C breakpoint (probably because a bit over 60% of all Fahrenheit temps before 1972 were recorded as x.0F, with a downward observer bias) but chose not to adjust for this in the homogenised ACORN, because the extra warmth may have been caused by major La Ninas in the early to mid 1970s accompanied by the wettest years and highest cloud cover ever recorded across Australia. Good luck figuring that one out. (A small sketch of how that x.0F rounding percentage can be computed follows below.)
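
    For illustration, a minimal Python sketch of that whole-degree check, using a made-up list of pre-1972 Fahrenheit readings rather than the actual ACORN/RAW files:

        # hypothetical pre-1972 Fahrenheit maxima
        fahrenheit_max = [86.0, 87.0, 86.7, 85.0, 88.0, 84.3, 86.0, 85.9, 87.0, 83.0]

        whole = sum(1 for t in fahrenheit_max if t == int(t))
        pct_whole = 100.0 * whole / len(fahrenheit_max)
        print(f"{pct_whole:.1f}% of readings recorded as whole degrees F")

        # If whole-degree values were truncated (e.g. 86.7F written as 86F) rather than rounded,
        # the average shortfall on those observations alone is about 0.5F, roughly 0.28C.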

    • Loved this part:

      The rounding relationship

      An average 60.25% of all Fahrenheit temperatures recorded from 1910 to 1971 were x.0 rounded numbers without a decimal, with a likely but unproven cooling influence due to a greater number of observers (many farmers or post office staff rather than weather bureau employees) writing, for example, 86.7F as 86 rather than 87 because they considered it more accurate and honest.

      Now they're mind-readers, and can tell that apparently soft-hearted "farmers or post office employees" would be more likely to cheat the temps down than would steely-eyed weather bureau employees.

      Is there no excuse to modify data that’s too embarrassing to use?

      • “Is there no excuse to modify data that’s too embarrassing to use?”

        Any excuse will do.

  27. Former Prime Minister Tony Abbott actually did call for an investigation into BoM shenanigans. Not long after, he was goooone and … nothing happened. The great and powerful Tony Jones, highest paid "journo" in Australia's government broadcaster, the ABC, politely but firmly ordered the lily-livered Minister of the Environment and Energy, Greg Hunt, to cease and desist with anything of the sort and … nothing happened.
    What we need in Australia is an investigation into the BoM and a Royal Commission into the wind and solar sector before they turn a first-world nation into a third-world nation faster than you can say "renewable energy". Neither will happen. The Coalition won't do it; they had their chance and wilted under mild heat from Jones and, ironically, Malcolm Turnbull. Understandable: yet another former PM, Malcolm Turnbull, would be revealed as corrupt, placing the financial interests of his son Alex, a big investor in Infigen Energy, above those of the nation he led. Watch the Turnbull fortune grow now, maybe not quite at Al Gore speed but it'll be impressive – the most dangerous place to be in Australia is between a Turnbull and a bucket of money.
    The Labor party won't either. They'll be preoccupied overseeing the tsunami-style collapse of the Australian economy: swift, broad, deep and complete.

    • With Reserve Bank interest rates on hold (again) and house prices and the economy tanking, I think the scare campaigns will grow in strength; people will vote on how they feel rather than on any real sense of the matter.

      Either way, coal will be blamed.

  28. The Bureau has also variously claimed that they need to cool that past at Rutherglen to make the temperature trend more consistent with trends at neighbouring locations. But this claim is not supported by the evidence.

    It wouldn't be right even if the claim was "supported by evidence". Why should the trend at one station match the trend at neighbouring stations, which could be hundreds of miles away? Planets have been discovered because of small perturbations in the data. Take the discovery of Neptune, for example (quoting from the astronomy site below):

    https://cseligman.com/text/history/discoveryneptune.htm

    The Discovery of Neptune
    The first accurate predictions of Uranus’ motion were published in 1792. Within a few years it was obvious that there was something wrong with the motion of the planet, as it did not follow the predictions. Alexis Bouvard, the director of the Paris Observatory, attempted to calculate improved tables using the latest mathematical techniques, but was unable to fit all the observations to a single orbit, and finally decided to rely only on the most modern observations, while suggesting, in his 1821 publication of his results, that perhaps there was some unknown factor that prevented better agreement with the older observations.

    That’s Old School science, the kind that got things done, like discovering the Law of Gravity and unseen planetary bodies. This is how Post Modern Climate Science would have handled this pesky perturbation:

    How Neptune Didn’t Fit the Model
    The first accurate predictions of Uranus’ motion were published in 1792. Within a few years it was obvious that there was something wrong with the motion of the planet, as it did not follow the predictions. Michelmas Mannus, the director of The Team Observatory, attempted to calculate modifications to the observations using the latest statistical techniques that he had thought up that morning, but was unable to fit all the observations to a single orbit, and finally decided to rely only on the observations that fit the existing model, while suggesting, in his 1821 publication of his results, that perhaps the Deniez had prevented better agreement with the approved observations.

  29. How the Snip can those who doctor raw temperature data call themselves scientists??

    Profanity is profanity no matter how many ‘*’ you use – MOD

  30. The UHI of the Dallas-Fort Worth Metroplex is massive. In the middle sits DFW Airport, where all of the media's temps (for at least 5 million people) are recorded. This morning, the low temp was just 1 degree from the absolute 120-year record low for that date in the area.

    Of course, just 15 miles outside of the DFW UHI, temps were 5-6 degrees colder (and would have SMASHED the 120-year record). 120 years ago there was no UHI at DFW airport. Even 40 years ago, there was no MASSIVE UHI at DFW airport… so why, in 2019, would ANYONE be adjusting temps from the past upward? If anything, they should be adjusting modern temps downward.

  31. The problem really lies with the fact that ALL the data sources are anecdotal in nature. It doesn't matter how sensitive and accurate a thermometer is if its data is being averaged over a distance starting at its location. Every single smoothed datapoint is falsified information and wrong, which means that the entire dataset is abjectly wrong when used. The density of the thermometers isn't even sufficient to be used in forecasting anymore. I've constantly seen both a cold AND a hot bias from my local airport only 6 miles from me, while my two outside thermometers (by the house and on a post 30′ away) agree all the time on the local temp but not with the airport.

    Until they improve the methodology by measuring the actual bias curves and increase the SENSOR resolution and then start actually monitoring atmospheric DEPTH with thermometers as well we will continue to have shit-worthless “science” done by consensus and bad headlines.

    There simply isn’t enough resolution of weather data to prove them wrong.

  32. The idea that you can make scientifically meaningful changes (and it is science they claim to be doing) when you cannot correctly define what needs changing and to what degree is a load of dingo's droppings. You have to ask, given this approach amongst many others: does climate 'science' have any standards at all beyond supporting 'the cause' and thinking 'headlines first'?

    • "does climate 'science' have any standards at all beyond supporting 'the cause' and thinking 'headlines first'?"

      The answer is no.

  33. Excellent article from Jennifer here on WUWT.

    Australia’s BoM wasn’t forthcoming with an ACORN-SAT version 2 media release, though after it was published, there was some media coverage.

    Around the same time, what I found interestingly honest from the BoM was a Special Climate Statement on the recent Queensland floods. I’m not sure why they didn’t put out a Special Weather Statement, but even I am a bit confused about the difference between climate and weather.
    This Statement was released on social media, and I took up the opportunity to converse. Without putting up all the details here (@BOM_au retweeted @BOM_Qld 15 Feb), someone called David Grimes tweeted “More than a years annual average precipitation in days… incredible”. So I just had to reply “David, you think that is incredible? I think it is incredible that the BoM didn’t see it coming and didn’t record the rain in their own gauges (as I mentioned on the BoM Facebook page, attached).” I conversed with the BoM on Facebook as indicated, where I pointed out “So what the BoM can’t report on in this Statement is either a forecast or rainfall totals in their own gauges. I thought they were meant to be a Bureau of Meteorology.”

    Hopefully, more people are commenting on the BoM social media feeds from the viewpoint of denying anthropogenic global warming, as opposed to the belief system of the BoM.

    I realise this post of mine is just my bit of a say, but in the context of what the BoM does here in Australia, they have been on the back foot this year, without sensible staff to make them look reasonable in a lot of people's eyes. The more pushback, the better.

      • WXcycles, that is good documentation, and thanks for pointing me to windy.com.
        It is poor form from the BoM not to be able to forecast better. Maybe a lot of staff were still on holidays.

        As far as a build up to the monsoon, I noticed from mid-January till the monsoon arrived in the north, then again when it took hold in Queensland, a strong constant flow, I guess hot overland trade winds, from southern Queensland to northern West Australia. Looking back, that must have been a bit novel to BoM as it was to me.
        Weather records need to be understood in the context of cycles such as solar minima and maxima.

        • You’re welcome Doug.

          Windy is an incredible tool; I couldn't recommend it highly enough. I use it to plan daily. It's incredibly accurate over 24 to 48 hour periods and will get even more accurate by mid-year, as the ECMWF model is getting a major expansion to its continuous data inputs and automated processing. Windy now even contains global MET radar data feeds, such as from BOM with lightning overlaid, and has recently implemented local observations versus model for direct comparisons with the models.

          I don’t need to use BOM any more and increasingly rarely do – ENSO data is about all I need from BOM now. I can get an order of magnitude better cyclone forecasts from a combination of Windy and US Navy cyclone forecasts and its aggregated Sat imagery. BOM is staggeringly rubbish at cyclone forecasts, a joke really, and that’s a pretty serious core-forecasting function to be so terrible at.

          The one thing BOM did OK during the recent NQ rain event was to report river rises and flood level data fairly well, but that's also nowcasting and aftercasting warnings. The 5 to 7 days of regional warning that they could have provided well in advance (and which the extremely impressive ECMWF model in Windy provided, with very impressive accuracy for areas, timing, intensity and scale) they didn't provide to the areas about to be affected. The models showed 7 to 10 days in advance that north Queensland was in for record-setting rainfall and floods over a vast land area.

          Re solar interpretations of WX cycles, I find the inter-regional ocean temperature effect on seasonal wind and pressure trends much more enlightening, predictive and practical. It doesn't take long to see what the season is likely to bring by examining those and the related trends, for which Windy is an ideal tool. Frankly, examining that makes something of a joke of BOM's regional medium- and long-range 'forecast' maps, which are notoriously unreliable.

          BOM will just become less and less relevant or credible to the public, as the ABC has been doing over the past 20 years, and both will get axed. Both of those organizations have aligned themselves with a bunch of radical left-wing hysterical fools who are sounding more and more bonkers every day. No soup for such deadwood deceivers.

          The allure of Windy is that it came from ordinary everyday people who simply needed more accurate and detailed forecasts, anywhere, anytime, so they could do whatever they needed to do better, with more efficiency, more safety.

    • I'd have looked for some historical events along the same lines. There are usually multiple examples to defuse that type of suggestion that anything happening with the weather is "unprecedented." Then point out that "average" precipitation is nothing more than a midpoint of extremes; it is NOT "normal" or "expected" precipitation, any departure from which is to be viewed as an "anomaly."

      Finish up with one of THEIR favorites, whenever weather which seems to conflict with their belief system occurs – “It’s WEATHER – Not “climate.”

    • “Australia’s BoM wasn’t forthcoming with an ACORN-SAT version 2 media release, though after it was published, there was some media coverage.”

      To clarify … the only media coverage was the front page of The Australian newspaper several days after the ACORN 2 story was first revealed anywhere by WUWT, via a post of mine. The Australian followed it a couple of days later with an excellent editorial.

      Apart from that front page on Australia’s national newspaper, there has been no mainstream media coverage at all. The BoM has still not issued a press release that might prompt other media to inform the public that Australia’s temperature history has been rewritten with a 23% increase in estimated warming since 1910.

      ACORN 2 also has not yet been propagated through the BoM website, assuming it will be to make the whole exercise worthwhile, and with a federal election expected in two months the timing of that propagation and media release will be interesting.

      Visit http://www.waclimate.net if you want a full analysis of ACORN 2, including a state-by-state breakdown.

      • Chris, your initial post to WUWT brought this ACORN 2 story to the attention of many, including me; thanks.
        At least Sky News Australia was able to have Jennifer interviewed live. I realise the mainstream media will give it little coverage unless more journalists like Graham Lloyd take up the story.
        Good analysis on http://www.waclimate.net, though I won’t be getting into the detail.
        I have had an interest in weather since my teenage years, even spending a week of high-school work experience with cloud-seeding scientists in the mid-1970s. It would be great if some well-trained semi-retired scientists became more vocal about their concerns. I guess some are posting on sites like this.

    • He is probably reading the scientific report.

      That would be a good move for folks here, too.

      • That would be fine, except that he’d then come here and put forward some predictably lame/inexplicable excuse justifying it (in his mind).

  34. Being unaware that CO2 is not a pollutant and is required for all life on earth is science ignorance.
    Failure to discover that CO2 has no significant effect on climate but water vapor does is science incompetence.
    Changing measured data to corroborate an agenda is science malpractice.

  35. The disconnect is startling between the minuscule tenths-of-a-degree increases (within the error margin) that warmists use to justify their claim of global warming and the adjustments of up to 3 degrees applied to individual stations’ daily readings from up to a century ago.

  36. Water vapor (WV) is a greenhouse gas. It increased by about 7% between 1960 and 2002. During that time, atmospheric WV increased by 5 molecules for each CO2 molecule added to the atmosphere. The level of WV in the atmosphere is self-limiting. Except for the aberration of El Niños, WV appears to have stopped increasing in about 2002-2005.

  37. Deniliquin had a new irrigation area opened in 1939 and another in 1955, so we are comparing apples to oranges. http://www.irrigationhistory.net.au/history/continuing-growth.asp
    Water allocations began in the late 1960s, and severe restrictions on water use during droughts should mean an artificial warming since 1967.

    Other stations also have reasons why the site has changed dramatically (tourist towns watering surrounding lawns to keep maxima down and avoid scaring off tourists). But adjusting the data and then fitting a linear trend to it, talking about records broken by a fraction of a degree, using the ratio of heat records to cold records broken as evidence of climate change, or (my pet hate) a heatwave index that exaggerates any fraction-of-a-degree change near the 90th percentile to look like a 30% increase in extreme heat, is so blatantly unscientific that it amounts to extreme incompetence. (A sketch of that percentile effect follows.)
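
    To see how that 90th-percentile arithmetic can play out, here is a minimal Python sketch using made-up numbers rather than any Australian station data; the 32 °C mean, 3.5 °C spread and 0.5 °C shift are illustrative assumptions only.

      # Illustrative only: a small shift in daily maxima inflates a
      # "days above the 90th percentile" count by a much larger percentage.
      import numpy as np

      rng = np.random.default_rng(0)
      baseline = rng.normal(32.0, 3.5, 100_000)   # synthetic daily maxima, deg C
      threshold = np.percentile(baseline, 90)     # fixed 90th-percentile threshold

      shifted = baseline + 0.5                    # half-degree shift of the whole distribution
      p_base = np.mean(baseline > threshold)      # ~0.10 by construction
      p_shift = np.mean(shifted > threshold)      # ~0.13 with these assumed numbers

      print(f"exceedance rate: {p_base:.3f} -> {p_shift:.3f} "
            f"({100 * (p_shift / p_base - 1):.0f}% more 'extreme heat' days)")

    With these assumptions, a half-degree shift shows up as roughly a quarter-to-a-third jump in days above the old threshold, which is the kind of exaggeration the comment describes.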

  38. If the data adjustments create a warming trend that isn’t there, then models being created today based on the adjusted data will run hot. Future temperature trends will disappoint the forecasters and will need to be adjusted downward. (A rough numerical illustration follows.)
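
    As a rough illustration of that mechanism, here is a sketch on purely synthetic data; the series, the 1973 cut-off and the size of the reductions are assumptions for illustration, not the actual ACORN adjustments.

      # Illustrative only: progressively cooling the early part of a record
      # turns a flat or slightly cooling series into a warming trend.
      import numpy as np

      years = np.arange(1913, 2019)
      rng = np.random.default_rng(1)
      raw = 14.0 - 0.002 * (years - 1913) + rng.normal(0, 0.4, years.size)

      adjusted = raw.copy()
      early = years < 1973
      adjusted[early] -= np.linspace(1.0, 0.0, early.sum())   # larger cuts further back in time

      trend_raw = np.polyfit(years, raw, 1)[0] * 100           # deg C per century
      trend_adj = np.polyfit(years, adjusted, 1)[0] * 100

      print(f"raw trend: {trend_raw:+.2f} C/century, adjusted trend: {trend_adj:+.2f} C/century")

    Any model tuned to reproduce the steeper adjusted trend will, if the raw series is closer to reality, tend to run hot against future observations.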

  39. I have to commend S. Mosher for making a lengthy posting on this thread.
    He explains the case for CAGW very well.
    The problem that a skeptic like myself has with the whole CAGW theory boils down to my own personal observation of local weather conditions.
    Where I live in Canada, we have long cold winters and short hot summers.
    Like most farmers, I have extensive records of historical rainfall, temperature and general weather conditions going back over the 50-plus years I’ve farmed in this area, and before that my father kept an interest in the weather.
    The weather in 2018/2019 seems very similar to the weather we had in 1968/1969.
    Where is the catastrophic warming for our area?
    Why haven’t we experienced the man-made warming that should have brought us milder winters and hotter summers by now?

    • Where is the catastrophic warming for our area?

      There isn’t any, and there never will be, not from CO2 (cyclical/natural climate changes have occurred and will continue to occur). CO2 slightly warms the coldest areas and has little effect on the already-warm areas. That’s a good/beneficial effect, and then there’s the additional fertilization effect on the biosphere’s plants.

      • If by ‘slightly’ you mean less than a measurable amount, well, maybe. I have never found, in a decade-plus of searching, ANY warming that could not be completely explained by factors other than CO2.

  40. The Bureau has also variously claimed that they need to cool that past at Rutherglen to make the temperature trend more consistent with trends at neighbouring locations.

    That may be the most unscientific pronouncement I’ve seen in years. So much wrong in such a small number of words.

    First: raw data is not changed to be “more consistent with trends at neighboring stations.” One doesn’t know that the neighboring stations are any more trustworthy than the station whose data is being changed.

    Second: scientists have to “Know when to hold ’em, know when to fold ’em.” I’ve seen it stated many times that the data “must be adjusted” so that the statistics they want and need for their purposes can be calculated. That is not scientific methodology; that is fabrication.

    If a station has a move, or a sensor change causes a difference in readings, then that station’s record should be terminated and a new one begun. One does the science with the data one has, not the data one wishes one had. (A sketch contrasting the two approaches follows.)
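
    The split-the-record approach proposed above can be illustrated on synthetic data; the 1985 move year and the 0.6-degree step are made-up numbers, not any real station’s history.

      # Illustrative only: a step change from a site move creates a spurious
      # trend if the two segments are spliced into one continuous record,
      # while each segment analysed on its own shows little trend beyond noise.
      import numpy as np

      years = np.arange(1950, 2019)
      rng = np.random.default_rng(2)
      temps = 15.0 + rng.normal(0, 0.2, years.size)
      temps[years >= 1985] += 0.6                    # hypothetical site move in 1985

      pre, post = years < 1985, years >= 1985
      trend_pre = np.polyfit(years[pre], temps[pre], 1)[0] * 100     # deg C per century
      trend_post = np.polyfit(years[post], temps[post], 1)[0] * 100
      trend_spliced = np.polyfit(years, temps, 1)[0] * 100

      print(f"pre-move: {trend_pre:+.2f}, post-move: {trend_post:+.2f}, "
            f"spliced: {trend_spliced:+.2f} C/century")

    Treating the spliced series as one record turns a pure step change into an apparent warming trend, which is exactly the risk the comment is pointing at.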

  41. The Bureau of Misinformation has a Facebook page.

    I put a short post up about BOM’s ACORN 2 with a link to this article on WUWT.

    It never appeared.

    BOM is not interested in hearing criticisms.

    From which I conclude it’s time to shut it down.

Comments are closed.