The ‘trick’: How More Cooling Generates Global Warming

From the “we’ll fix that in post” department comes this from down under courtesy of Dr. Jennifer Marohasy.

COOLING the past relative to the present has the general effect of making the present appear hotter – it is a way of generating more global warming for the same weather.

The Bureau of Meteorology has rewritten Australia’s temperature in this way for the second time in just six years – increasing the rate of warming by 23 percent between Version 1 and the new Version 2 of the official ACORN-SAT temperature record.

The Rutherglen research station in rural Victoria is one of the 112 weather stations that make up ACORN-SAT. Temperatures here have been changed by Blair Trewin, under the supervision of David Jones at the Bureau.

Dr Jones's enthusiasm for the concept of human-caused global warming is documented in the notorious Climategate emails; he wrote in an email to Phil Jones at the University of East Anglia Climatic Research Unit on 7 September 2007 that:

“Truth be known, climate change here is now running so rampant that we don’t need meteorological data to see it.”

We should not jump to any conclusion that support for human-caused global warming theory is the unstated reason for the Bureau’s most recent remodelling of Rutherglen. Dr Jones is an expert meteorologist and an honourable man. We must simply keep asking,

“What are the scientifically valid reasons for the changes that the Bureau has made to the temperature records?”

In 2014, Graham Lloyd, Environmental Reporter at The Australian, quoting me, explained how a cooling trend in the minimum temperature record at Rutherglen had been changed into a warming trend by progressively reducing temperatures from 1973 back to 1913. For the year 1913, there was a large difference of 1.7 degrees Celsius between the mean annual minimum temperature, as measured at Rutherglen using standard equipment at this official weather station, and the remodelled ACORN-SAT Version 1 temperature. The Bureau responded to Lloyd, claiming that the changes were necessary because the weather recording equipment had been moved between paddocks. This is not a logical explanation in the flat local terrain, and furthermore the official ACORN-SAT catalogue clearly states that there has never been a site move.

Australians might nevertheless want to give the Bureau the benefit of the doubt and let them make a single set of apparently necessary changes. But now, just six years later, the Bureau has again changed the temperature record for Rutherglen.

In Version 2 of ACORN-SAT for Rutherglen, the minimum temperatures recorded in the early 1900s have been further reduced, making the present appear even warmer relative to the past. The warming trend is now 1.9 degrees Celsius per century.

The Bureau has also variously claimed that they need to cool the past at Rutherglen to make the temperature trend more consistent with trends at neighbouring locations. But this claim is not supported by the evidence. For example, the raw data at the nearby towns of Deniliquin, Echuca and Benalla also show cooling. The consistent cooling in the minimum temperatures is associated with land-use change in this region: specifically, the staged introduction of irrigation.

Australians trust the Bureau of Meteorology as our official source of weather information, wisdom and advice. So, we are entitled to ask the Bureau to explain: If the statements provided to date do not justify changing historic temperature records, what are the scientifically valid reasons for doing so?

The changes made to ACORN-SAT Version 2 begin with changes to the daily temperatures. For example, on the first day of temperature recordings at Rutherglen, 8 November 1912, the measured minimum temperature is 10.6 degrees Celsius. This measurement is changed to 7.6 degrees Celsius in ACORN-SAT Version 1. In Version 2, the already remodelled value is changed again, to 7.4 degrees Celsius – applying a further cooling of 0.2 degrees Celsius.

Considering historically significant events, for example temperatures at Rutherglen during the January 1939 bushfires that devastated large areas of Victoria, the changes made to the historical record are even more significant. The minimum temperature on the hottest day was measured as 28.3 degrees Celsius at the Rutherglen Research Station. This value was changed to 27.8 degrees Celsius in ACORN Version 1, a reduction of 0.5 degrees Celsius. In Version 2, the temperature is reduced further, to 25.7 degrees Celsius, a total reduction of 2.6 degrees Celsius from the measured value.

This type of remodelling will potentially have implications for understanding the relationship between past temperatures and bushfire behaviour. Of course, changing the data in this way will also affect analysis of climate variability and change into the future. By reducing past temperatures, there is potential for new record hottest days for the same weather.

Annual average minimum temperatures at Rutherglen (1913 to 2017). Raw temperatures (green) show a mild cooling trend of 0.28 degrees Celsius per 100 years. This cooling trend has been changed to warming of 1.7 degrees Celsius per 100 years in ACORN-SAT Version 1 (orange). These temperatures have been further remodelled in ACORN-SAT Version 2 (red) to give even more dramatic warming, which is now 1.9 degrees Celsius per 100 years.

184 Comments
Duane
March 5, 2019 1:08 pm

Every time someone mucks with the data, they are lying.

Always.

The data are the data, and are not to be mucked with.

commieBob
Reply to  Duane
March 5, 2019 2:10 pm

One of the great aviation engineer geniuses of the age is Burt Rutan. He's used to analyzing flight data like people's lives depend on it, which they do. He once observed that, if he saw someone over-analyzing the data, he knew the analysis was bunk.

For decades, as a professional experimental test engineer, I have analyzed experimental data and watched others massage and present data. I became a cynic. My conclusion – "if someone is aggressively selling a technical product whose merits are dependent on complex experimental data, he is likely lying". That is true whether the product is an airplane or a Carbon Credit.

Actually adjusting the data is one worse. Like you say, the data is the data.

The standard for lab log books hasn’t changed in my lifetime. If you want to change something, you neatly cross out the old version so it is still legible. Every first year science and engineering student learns that. In that regard, the BOM hasn’t risen to the standard of the average freshman.

Rutan’s Engineer’s Critique of Global Warming ‘Science’ is impressive.

Bryan A
Reply to  commieBob
March 5, 2019 2:55 pm

Just wait…
They utilized a neighboring dataset to alter this dataset so it would agree with its neighbor…
Next they will alter other datasets a little further away to agree with this one…
And the alterations will forever creep outwards until Uluru indicates it should be snowing there in December 1915…

Ian Bryce
Reply to  commieBob
March 5, 2019 8:02 pm

I suspect that they have been comparing the Rutherglen data to the Melbourne data, which is 200 km south.
Melbourne’s minimum temperatures are warmer due to the warmer ocean temperatures, and of course the Urban Heat Island Effect.
Melbourne has shown warming since 1958.

Niksu
Reply to  Ian Bryce
March 6, 2019 12:08 am

You don't have to suspect; you can read it in their documentation. 23 stations are listed as comparison stations (Melbourne is not among them).

All the changes are public and original data is also available. Adjustments are documented.

Flight Level
Reply to  commieBob
March 6, 2019 1:40 am

Even worse. Modern airliners run continuous data analysis on that many parameters and more. Which is why airlines prioritize data-based drills and actions instead of what grumpy old steam-gauge captains called airmanship.

WXcycles
Reply to  commieBob
March 6, 2019 6:48 am

When I was at Uni the meteorology courses were taught by the geography dept within the Humanities faculty and climate studies were an integral part of all geology courses in the Science faculty. So I'm not surprised BOM are a bit loose with the concept of what science is, as they aren't scientists.

Horst Graben (aka James Schrumpf)
Reply to  WXcycles
March 7, 2019 6:59 pm

You aren’t suggesting that geologists aren’t scientists, are you?

Gums
Reply to  Duane
March 5, 2019 2:23 pm

Just another reason to present absolute temperatures, Kelvin I guess.
Let the observer cherry pick the 30 year period to present the anomaly.

Let’s use 1915 to 1945 for our baseline, huh?

Gums sends…

Kurt
Reply to  Duane
March 5, 2019 2:52 pm

I’d rephrase that a little bit. Every time someone “adjusts” data without some kind of a test to verify that the adjusted data is more accurate than the old data, they are guilty of fabricating data.

If I'm processing a received noisy video signal and I theorize that I can improve the quality of the video by taking the incoming noisy data and applying a low-pass filter to it, I can test the before-and-after effect of the low-pass filter to verify that the adjustments improve the picture.

But if all I'm doing is theorizing that the raw data has some bias to it, and implementing some purely theoretical procedure to remove the bias, without any objective means to test whether my procedure over-compensated for the bias, then I think this is plain old data fabrication. And what happens, for example, when the adjustment procedure, say, is designed to eliminate a theorized bias in a linear trend, but in the process introduces a spurious exponential component to a time series that makes it exaggerate acceleration of the change?
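
As a rough illustration of the testability point Kurt is making, here is a minimal sketch in Python with made-up numbers (the signal, noise level and filter window are all assumptions, not anything from a real system): because the "true" signal is known here, the effect of the adjustment can be checked objectively.

import numpy as np

# Hypothetical noisy signal with a known ground truth, so the filter can be tested.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 1000)
truth = np.sin(t)                           # what "should" have been received
noisy = truth + rng.normal(0, 0.5, t.size)  # received signal plus noise

def moving_average(x, window=25):
    # Simple low-pass filter: boxcar moving average.
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

filtered = moving_average(noisy)

# The before-and-after test: does the adjustment bring us closer to the truth?
print("RMS error before filtering:", np.sqrt(np.mean((noisy - truth) ** 2)))
print("RMS error after filtering: ", np.sqrt(np.mean((filtered - truth) ** 2)))

The contrast with raw temperature adjustments is exactly the one drawn in the surrounding comments: there is no independent "truth" series to run this kind of check against.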

damp
Reply to  Kurt
March 5, 2019 3:30 pm

“If I'm processing a received noisy video signal and I theorize that I can improve the quality of the video by taking the incoming noisy data and applying a low-pass filter to it, I can test the before-and-after effect of the low-pass filter to verify that the adjustments improve the picture.”

Kurt, I’m not sure this analogy will hold. If you’re looking at a video picture, you often have a pretty good idea of what it “should” look like – you feel that you can correctly judge when the quality has been “improved.” In weather data, do we actually know this? Isn’t that one of the differences between the justifiers of the jiggery-pokery and the rest of us who ask, “Why should you expect the data to conform to your expectations?”

Kurt
Reply to  damp
March 5, 2019 3:53 pm

I’m not using the video processing example as an analogy to the adjustments performed on raw climate data. Just the opposite; I’m holding it out as a counterexample – a situation in which changing data would be appropriate and would not be data fabrication, and for exactly the reasons you mentioned.

Conversely, I consider the adjustments to raw temperature data to be data fabrication because, unlike the video processing application, there is no way of testing the efficacy of the adjustments and also, as you note, the procedure is highly susceptible to confirmation bias where the adjuster just changes the data to conform to some preconceived idea of what it should look like.

Robert B
Reply to  Kurt
March 6, 2019 1:51 pm

I noticed an inadvertent test of BEST (and GISS v3?). Two isolated stations in Australia had the exact same data by mistake, and despite the data being identical, the stations 200 km apart, and hundreds more readings from others, the algorithms adjusted the temps to create warming. I mentioned it here and that site's data was removed rather than the whole process being revised.

john harmsworth
Reply to  Duane
March 5, 2019 2:53 pm

I keep reading that this type of jiggery-pokery is going on. So why doesn't somebody (or better yet a group of people) go public with a direct accusation of fiddling the data?

Martin Clark
Reply to  john harmsworth
March 5, 2019 5:09 pm

Ref John Harmsworth 2:53pm: individuals and groups have gone public with direct accusations of fiddling the data. I have older climate records for specific sites collected for work purposes eg CRD (climate responsive design), and found values that have been changed, eg level and cooling trends converted to warming, wind speeds correctly recorded by automatic sensors that become retrospectively “broken”, etc. The perpetrators ignore the complaints. I have had threats from local sycophants but they never turn up to make my day, just advise people not to use my services. I can’t take action on that practice, because it has the opposite effect to what is intended. They don’t seem to realise that many people don’t have to have a detailed knowledge of the subject to know they are being lied to.

Joe Spectr
Reply to  Martin Clark
March 5, 2019 10:41 pm

Wanting to prove AGW can lead to a condition known as echochamberia, sadly it can also lead to a windfall for some of its victims.

Steven Mosher
Reply to  Duane
March 5, 2019 3:57 pm

Do you wear glasses?

Psst. good thing Romer didnt know this.
Psst. dont tell Christy or Spencer, UAH is a heap of adjustments
Psst. dont tell Happer one of his claims to fame is finding ways to adjust. check out his history

Michael Combs
Reply to  Steven Mosher
March 5, 2019 10:39 pm

Steve. As usual you allude to adjustments made in other types of systems that have no relevance to the topic under discussion. If you believe all of these adjustments to change earlier temperatures were necessary it would seem you would expect that they shouldn't all be in one direction. Here in Northern California there was a cooling trend for Santa Rosa and Ukiah until 2011 when the earlier years suddenly became much colder. Surprisingly the maximums (raw) were higher in the earlier years and still are. For Santa Rosa, the first year of the record in 1904 was 15.2C; the adjustment dropped it 1.1C to 14.1C. For Ukiah 15.2C became 14.5C, and the very warm 1925 to 1940 period was cooled about 1C throughout. Conversely for Ukiah, a strong cooling trend starting at 2005 became slight warming. I'm sure that you can explain all these changes, but I doubt such explanations in light of the patterns observed. But please explain away. Science is not to be taken for granted.

MarkW
Reply to  Michael Combs
March 6, 2019 10:05 am

Steve actually believes that if one set of adjustments is justified, this proves that all adjustments are justified.

Big T
Reply to  Duane
March 5, 2019 5:40 pm

I just HATE mucking!!

Independent_George
Reply to  Duane
March 6, 2019 12:57 am

Precisely. (For Aussies)

Fish and chip store guy: Global warming or cooling?

Dorky dude: Both (snap*)

Cue dancing

StephenP
Reply to  Duane
March 6, 2019 4:38 am

We used to call it “cooking the books”! A real no-no.

Kerry Eubanks
March 5, 2019 1:13 pm

Man, the guys at the Australian MET are real pros! NASA and NOAA will have to get busy!!

RickWill
Reply to  Kerry Eubanks
March 5, 2019 2:01 pm

The BoM are unashamed in claiming they have world’s best practice temperature homogenisation.

LdB
Reply to  RickWill
March 5, 2019 5:47 pm

You missed that it is homogenisation with no actual field checks, a fact they note in the methodology.

ozspeaksup
Reply to  LdB
March 6, 2019 4:14 am

yeah… can't leave the ivory tower and aircon, can they?
and errata in the item…
NO ONE TRUSTS the BoM.
You go look to get a vague idea of what might be happening,
and then go outside and use your own rain gauge / thermometer for the actual ;-)

lee
Reply to  RickWill
March 5, 2019 7:52 pm

“The Forum noted that the extent to which the development of the ACORN-SAT dataset from the raw data could be automated was likely to be limited, and that the process might better be described as a supervised process in which the roles of metadata and other information required some level of expertise and operator intervention. The Forum investigated the nature of the operator intervention required and the bases on which such decisions are made and concluded that very detailed instructions from the Bureau are likely to be necessary for an end-user who wishes to reproduce the ACORN-SAT findings. Some such details are provided in Centre for Australian Weather and Climate Research (CAWCR) technical reports (e.g. use of 40 best-correlated sites for adjustments, thresholds for adjustment, and so on); however, the Forum concluded that it is likely to remain the case that several choices within the adjustment process remain a matter of expert judgment and appropriate disciplinary knowledge.”

http://www.bom.gov.au/climate/change/acorn-sat/documents/2015_TAF_report.pdf

Expert judgement means it can’t be replicated. If it can’t be replicated it can’t be science.

Hivemind
Reply to  lee
March 7, 2019 2:10 am

“…concluded that very detailed instructions from the Bureau are likely to be necessary for an end-user who wishes to reproduce the ACORN-SAT findings.”

In plain language: “We aren’t doing real science because our results can’t be reproduced.”

Joel O'Bryan
Reply to  Kerry Eubanks
March 5, 2019 6:37 pm

ACORN v2 is just an excuse for the next NOAA GHCN adjustment at those stations.
A game of Leapfrog, to provide cover for the next round of adjustments to GHCN.
GHCN version 3 was released in 2011. They now need to "update" it in preparation for the AR6 hustle. BoM actions here on ACORN v2 lay the groundwork for NOAA/NCDC to adjust those Australian stations again in a new version 4, in time for everyone else (GISS-LOTI).

So it's certainly part of a coordinated effort to move regional temp records to more warming — first. That will then be followed by the global surface datasets, so as not to look embarrassing when AR6 drafts have to be written in 18 months and the inevitable comparison to the CMIP6 ensemble is made. The imperative is for "observations" to stay within the CMIP6 ensemble's 90% uncertainty range.

Dave Fair
Reply to  Joel O'Bryan
March 6, 2019 12:00 am

Damn it, “the CMIP6 90% ensemble uncertainty” is not statistical uncertainty. They take models with base average global temperatures that vary by over 3 C and combine them. Scientific fraud.

rbabcock
March 5, 2019 1:19 pm

We all know the people back in the 20’s, 30’s and 40’s couldn’t read a thermometer. After all, it is a very complicated device and took years of training to get it right.

Maybe Steven Mosher can come in here and tell us all how he and his fellow scientists determined most readings were just too low?

shrnfr
Reply to  rbabcock
March 5, 2019 1:39 pm

Besides, the nature of mercury has changed. That is why they had to ban it in meteorological instruments.

joe- the non climate scientist
Reply to  rbabcock
March 5, 2019 2:04 pm

Interesting point on reading thermometers. I have the old-style mercury thermometer. Even with good eyesight, the resolution isn't high enough to read the temp closer than within 1 full degree F.

You've got to be impressed by the climate scientists' skills, that they can pinpoint the temp to within 0.1 of a degree, 80 years later.

Reply to  joe- the non climate scientist
March 5, 2019 5:26 pm

To say nothing about how they also know the different calibration errors of each instrument. Heck, they could be changing everything to agree with a thermometer that is seriously out of whack. These old thermometers had varying tube sizes and irregularities in the tube that affected calibration. Who knows which one was most accurate and which ones were at the very edge of the accepted calibration.

Steve O
Reply to  joe- the non climate scientist
March 6, 2019 3:23 pm

Actually, even with temperatures recorded to the nearest whole number, you can determine the average temperature over a period of time to within 0.1 degrees. You have 365 data points. If you’re interested enough, you can prove it yourself using Excel. Create a column of data to represent true temperature readings, and in the next column round everything to the nearest whole number. Compute the average for each column.
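
A quick sketch of the exercise Steve O describes, in Python rather than Excel (the daily values are simulated, purely for illustration):

import numpy as np

rng = np.random.default_rng(42)

# Simulate 365 "true" daily temperatures, then round each to the nearest whole
# degree, mimicking a record kept only to integer precision.
true_temps = rng.normal(15.0, 8.0, 365)   # hypothetical daily values, deg C
rounded = np.round(true_temps)            # what gets written in the log book

print("Mean of true values:   ", round(float(true_temps.mean()), 3))
print("Mean of rounded values:", round(float(rounded.mean()), 3))
# The rounding errors are roughly uniform on +/- 0.5 and largely cancel in the
# average, so the two means typically agree to well within 0.1 degrees.

Note the built-in assumption: the rounding errors are treated as independent and unbiased, which is precisely what some of the replies below dispute.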

Reply to  Steve O
March 6, 2019 5:01 pm

That's only valid if you have many thermometers in the same location to average out. But the reality is, single thermometers measure the temp for a huge area, so it would then follow that the temperature reading for such and such a place would have a huge uncertainty in accuracy and 0.5 C in precision. In sum, you can't measure the average temperature of something huge like a country or planet. It's not like measuring the average temp inside a plane or boat, etc., where there are active systems trying to maintain a certain temp.

James Schrumpf
Reply to  Steve O
March 7, 2019 5:17 pm

Not as close as you think. I ran the numbers from GHCN Daily for Greer, SC for the year 2011. I took the average of all 365 days in the year, the standard deviation, and the error in the mean calculated as the standard deviation / sqrt(365).

Using the raw temps with one decimal point in the measurement, the results were

Avg Temp Std Dev Err. in Mean
——– ——- ————
+17.0 8.6 0.4

It doesn't change much by using integers, as you suggested. The 8.6 standard deviation rounds up to 9, but the error in the mean stays at 0.4. Not as precise as one might think.
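
For anyone who wants to reproduce this sort of calculation, here is a minimal sketch of the arithmetic James describes; the daily series below is a placeholder, not the actual Greer, SC record, so only the method carries over.

import numpy as np

# Placeholder standing in for 365 daily mean temperatures (deg C); substitute
# the real GHCN Daily values for Greer, SC to reproduce the figures above.
daily = np.random.default_rng(1).normal(17.0, 8.6, 365)

mean = daily.mean()
std = daily.std(ddof=1)              # sample standard deviation
sem = std / np.sqrt(daily.size)      # error in the mean = std dev / sqrt(N)
print(f"Avg Temp {mean:+.1f}   Std Dev {std:.1f}   Err. in Mean {sem:.1f}")

# Rounding the daily values to whole degrees first barely changes the result,
# as noted above: the spread of the data dominates the error in the mean.
rounded = np.round(daily)
sem_r = rounded.std(ddof=1) / np.sqrt(rounded.size)
print(f"Rounded: Avg {rounded.mean():+.1f}   Err. in Mean {sem_r:.1f}")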

David Stone
Reply to  James Schrumpf
March 8, 2019 3:39 am

The average can never be more accurate than the readings, unless you can show that the reading errors are normally distributed and random. This is basic statistics, but obviously ignored to “adjust” the readings!

March 5, 2019 1:24 pm

But they have got it wrong!

I read and commented just the other day about the fact that “Warming causes Cooling”!

https://www.livescience.com/3751-global-warming-chill-planet.html

(I googled “global warming causes global cooling” and then “global cooling causes global warming” and got the same result above 🙂)

https://realclimatescience.com/2018/04/before-extreme-weather-was-caused-by-global-warming-it-was-caused-by-global-cooling/
https://www.skepticalscience.com/global-cooling.htm

We can't all be right! Can we?

Cheers

Roger

Michael
March 5, 2019 1:26 pm

Regarding the changes at Rutherglen, which Minister in the Liberal Government is responsible for the BOM, and does he or she really know what is going on? Or is the Minister a Greenie?

MJE

Tom Halla
Reply to  Michael
March 5, 2019 1:43 pm

And will they actually do anything? Going along with “scientists” one happens to agree with is all too common.

C. Paul Barreira
Reply to  Michael
March 5, 2019 1:47 pm

The inaction speaks for itself.

Jennifer Marohasy
Reply to  Michael
March 5, 2019 5:00 pm

Hey Michael,

The Minister is one Melissa Price who just yesterday announced that the recent bush fires here in Australia could be directly attributable to climate change. It provides some direction in terms of how this government is going to handle this issue of global warming going into the next federal election likely in May.

That motivated me somewhat to get this blog post out … something about Rutherglen and bush fires, that I had in my back-pocket, so to speak.

The Conservative Morrison government currently ruling Australia know that the Bureau just makes stuff up, but they are not sure about the extent of the deceit.

Much THANKS TO WUWT for reposting me.

Craig from Oz
Reply to  Jennifer Marohasy
March 5, 2019 6:27 pm

Thanks to Jennifer for writing the article in the first place.

Price is a bit hard to read. On one hand she is/was a Turnbull supporter which isn’t a smart thing to have on your CV when talking with conservatives. She has, it seems, been trying to gain traction with her Environmental Policy in recent days, baiting Australian Labor (left wing) on Twit with it.

On the other hand she was also the person who accused the President of one of those ‘victims of rising sea levels’ islands of only being at a conference to ask for money and the Guardian in the last few days has been trying to push the line that she is an invisible minister after several environmental groups have complained that she declines to meet with them.

Not sure. Someone closer to her might like to confirm or deny that she has splinters up her bum, cause I think she is a fence sitter.

nankerphelge
March 5, 2019 1:29 pm

Absolutely throws future analysis for a 6. I thought we owed our children and children’s children etc the truth!
This has to be a breach of ethics?

Dave Fair
March 5, 2019 1:45 pm

The old Mark1 eyeballs show a step-function in the adjusted temperature data around/after 1970 at Rutherglen. Has the BOM explained this obvious step up in temperatures at this location? The raw data shows no such step up in temperature recordings at Rutherglen.

Phoenix44
March 5, 2019 1:47 pm

It's garbage. One of the reasons for using averages for lots of data like this is to average out errors. Going through each individual record and changing it should not therefore change the average – unless there is a systematic problem with the data. If there is no systematic problem, you would expect as many upward revisions as downward, and thus the average would stay the same.

If the average changes, and there is no systematic reason for changing the original data you are most likely ADDING BIAS.

It is very unlikely that an unbiased process will discover significantly more adjustments that go one way in the absence of a systematic reason why they should go more one way than the other. Thus the changes in the average strongly suggest bias, unconscious or not.
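
The argument can be illustrated with a small simulation (all numbers hypothetical): symmetric, zero-mean adjustments leave the average essentially where it was, while adjustments that only go one way shift it.

import numpy as np

rng = np.random.default_rng(7)
raw = rng.normal(15.0, 2.0, 10_000)   # hypothetical station-mean temperatures

# Unbiased case: as many adjustments go up as down, so the average barely moves.
unbiased = raw + rng.normal(0.0, 0.3, raw.size)

# One-sided case: every adjustment cools, so the average shifts systematically.
one_sided = raw - np.abs(rng.normal(0.0, 0.3, raw.size))

print("Raw average:        ", round(float(raw.mean()), 3))
print("Unbiased adjusted:  ", round(float(unbiased.mean()), 3))
print("One-sided adjusted: ", round(float(one_sided.mean()), 3))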

Jim Gorman
Reply to  Phoenix44
March 5, 2019 5:50 pm

Temperature measurements are not measurements “of the same thing” which can achieve a more accurate “true value” with a normal distribution of errors. A given temperature measurement can not be made “more accurate” by a subsequent reading from later in time. An error budget should be determined prior to any manipulation of a temperature data set. One of the components of a budget is the error in readings. As above, many old temperatures were recorded as integers with an error of +/- 0.5 degrees (at best). Averages of these readings carry forward the same errors as each individual reading. In most cases, this could be assumed to be +/- 0.5 degrees, but a better way is to determine the percent error and carry that forward.

If you don’t believe me, take temperatures of 55 and 58 degrees F with an error of +/- 0.5. To calculate an accurate average, do you use 55.4 and 57.6 or perhaps 54.6 and 57.7? Another question is how many significant digits should the average have? Two or three?

James Schrumpf
Reply to  Jim Gorman
March 5, 2019 8:52 pm

There are hard and fast rules for error propagation and significant digits. I’ve found these sites very helpful:

https://faraday.physics.utoronto.ca/PVB/Harrison/ErrorAnalysis/index.html

http://chemistry.bd.psu.edu/jircitano/sigfigs.html

In the example you gave above, the rules would say this:

You can't have more precision in your result than is in the least precise measurement. The average of 55 and 58 degrees, with no decimal part in either value, would be figured as (55+58)/2 = 56.5. Since neither 55 nor 58 has a decimal part, neither would the answer. In scientific rounding, even numbers followed by a five stay at the same number, while odd numbers followed by a five round up. So the answer to the first part would be AVG(55,58) = 56. If there were more values in the series you could use the error in the mean calculation of ΔX/√N, but I've never seen it used with only two values.

Instead it's done in a two-part process. First you add the two errors in quadrature, i.e., √(ΔX² + ΔY²) = √(0.5² + 0.5²) = √(0.5) ≈ 0.7. Dividing by 2 is next.

According to the rules, when multiplying a measurement with an uncertainty you use the fractional, or relative, uncertainty to do the job. In this case the sum is 113 and the added uncertainty is 0.7. Get the relative uncertainty by dividing 0.7/113 = 0.006. That is the fractional uncertainty. Divide by two to get the 56 value from before, and then multiply the fractional uncertainty by the average value to get the new absolute uncertainty: 56*0.006 = 0.336, rounded to 0.3. The final answer, according to the scientific rules of significant digits and error propagation, would be 56±0.3°F.

That’s how my cipherin’ goes. If I’ve made a mistake somewhere, I sure would appreciate it being pointed out and explained.
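
A short sketch of the same calculation, following the quadrature and significant-figure rules cited above (treat it as an illustration of those rules, nothing more):

import math

# Two readings (deg F), each with a reading uncertainty of +/- 0.5
x, y = 55.0, 58.0
ux = uy = 0.5

total = x + y                          # 113
u_total = math.sqrt(ux**2 + uy**2)     # add in quadrature: sqrt(0.5) ~ 0.7

mean = total / 2                       # dividing by an exact constant...
u_mean = u_total / 2                   # ...divides the absolute uncertainty too

print(f"average = {mean:.1f} +/- {u_mean:.1f} deg F")   # 56.5 +/- 0.4

Reported to whole-degree significant figures this becomes 56 +/- 0.4, essentially the 56 +/- 0.3 worked out above; the small difference comes from rounding intermediate values (0.7/113 -> 0.006) in the hand calculation.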

Jim Gorman
Reply to  James Schrumpf
March 6, 2019 10:28 am

Here is where you go off track. You are measuring different things so your errors can’t add in quadrature. Take the measurements of diameter of ten apples and ten limes with twenty different instruments, find the average, and calculate the error of the mean. Does your error calculation mean anything?

With two or three temps of +/- 0.5 what is the true value of the average? Can you make a choice of what actual temps to use for your average? That is, 55.5 or 54.5 for one of the numbers.

Reply to  Jim Gorman
March 7, 2019 2:34 am

I've seen online articles using these techniques with populations of people to determine the average height and such, but that's not the point anymore. I think the point is how bad the statistics are that are generated.

Take the daily averages from any month, average them, and determine the uncertainty — it will be in the 0.2-0.5 range, yet the monthly summaries will have two decimal places and no mention of any uncertainty.

These are the issues that should be emphasized.

Steven Mosher
Reply to  Phoenix44
March 5, 2019 11:11 pm

” One of the reasons for using averages for lots of data like this is to average out errors.”

that is not the purpose of 'averaging' for temperatures.
plus we dont average.

MatthewDrobnick
Reply to  Steven Mosher
March 6, 2019 7:52 am

This is true.

You lie and obfuscate.

Reply to  Steven Mosher
March 6, 2019 11:25 am

So who does? Somewhere, somebody is putting out that some years are hotter than others. If this isn’t using averages, what are they using?

ScienceABC123
March 5, 2019 1:48 pm

If they keep this up much longer the only way they’ll be able to explain the Pilgrims reaching Plymouth will be if they walked across the Atlantic.

David Chappell
Reply to  ScienceABC123
March 5, 2019 5:26 pm

That’s easy, it was a simple site move, Plymouth to Plymouth.

Tom Abbott
March 5, 2019 1:53 pm

Why should we give these data manipulators the benefit of the doubt?

They just stumbled into this bastardization of the temperature data? You don't think they know what they are doing? Have these people ever raised past temperatures? Even once? All the adjustments cool the past. None of them warm the past. Sounds like a plan to me. A plan to make it look like things have been getting hotter and hotter for decades. A diabolical plan to sell the CAGW narrative.

Bryan A
Reply to  Tom Abbott
March 5, 2019 2:57 pm

It IS obvious that all of the sensor movements have created an artificial warming signal in the present data so We’ll correct it by cooling the past…
makes perfect sense to me
/sarc

MarkW
March 5, 2019 1:56 pm

As Mosh likes to tell us, some of the adjustments result in cooling.
What he never tells us is that all of cooling occurs in the past, while all of the warming occurs in the present.

There’s no way that could happen by pure chance.

Latitude
Reply to  MarkW
March 5, 2019 2:34 pm

What Mosh doesn't tell us…..is how much they adjusted up….before they adjusted down

When you adjust up 10…and adjust down 2…..

yeah, you can say some of the adjustments result in cooling

Steven Mosher
Reply to  Latitude
March 5, 2019 11:56 pm

You could just look at all the NCDC station plots. they are online

The code is also online on how to go from unadjusted data to adjusted.

Once upon a time skeptics would kill to get the code used by climate science..

or hack systems to get it.

Now, although its posted, they generally avoid looking at the code

much easier to accuse people of fraud when you avoid the best evidence

here

ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v3/software/52i/

Steven Mosher
Reply to  MarkW
March 5, 2019 11:46 pm

“As Mosh likes to tell us, some of the adjustments result in cooling.
What he never tells us is that all of cooling occurs in the past, while all of the warming occurs in the present.

There’s no way that could happen by pure chance.”

Err because its not true.

Look at individual series. You will see adjustments all over the place.

I will link to the charts, but you will just change the topic.

here is the top of the stack

ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v3/products/stnplots/1/10160355000.gif

grab another one

ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v3/products/stnplots/1/10160461000.gif

whole stack
ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v3/products/stnplots/

Now there are a few thousand charts showing you station by station for GHCN v3 where the adjustments UP and adjustments down are made

all over the map, past cooled, present cooled, warming here, cooling there.

your insinuation is that all the cooling is done to the past and this is categorically untrue

AND you insinuate that this cant be by chance, implying that folks like me are being dishonest.

sorry no evidence of that.

here is the software that NCDC uses

ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v3/software/

You can download it, run it and see for yourself

Trailer Trash
Reply to  Steven Mosher
March 6, 2019 10:45 am

The last time I saw a FORTRAN listing was in college about 1974, so I wondered if I would recognize anything. Code comments do not inspire confidence:

C WARNING – POTENTIAL FOR A MAJOR ERROR IN THE PRECIP CODE HAS BEEN
C DISCOVERED. THE ALGORITHM MAY BE COMPROMISED DUE TO THE USE OF
C OF A SHORTCUT FOR CALCULATIONS. IT SEEMS AS IF THE ESTIMATION OF
C THE CONFIDENCE INTERVAL AND THE CORRECTION FACTOR ARE NOT
C CALCULATED IN NATURAL LOG SPACE AS IN THE ORIGINAL SPERRY VERSION
C AT PRESENT THE EFFECTS ARE UNKNOWN, BUT UNTIL FURTHER INVESTIGATION
C !!!!!! CONSIDER THE CODE CORRUPTED !!!!!!
C 07JUN01 CW

[from the file filnet_subs.v4p.f]

Reply to  Trailer Trash
March 6, 2019 11:39 am

Does this inspire confidence or what?

Bill in Oz
March 5, 2019 2:08 pm

The Bureau of Misinformation has considered all criticisms.
And dismissed them.
The residents of Rutherglen have also been informed.

All in all dead shit science !

Nicholas William Tesdorf
March 5, 2019 2:22 pm

The Minister responsible for the Australian BOM is the Hon Melissa Price MP, Minister for the Environment and Energy. She is also the Minister for Silly Walks.

Robber
Reply to  Nicholas William Tesdorf
March 5, 2019 2:48 pm

And this minister has just attributed the latest bushfires in Victoria to climate change.
Wonder how she explains Black Sunday 1926, Black Friday 1939, Ash Wednesday 1983?

Patrick MJD
Reply to  Robber
March 5, 2019 9:01 pm

She knows she is extremely unpopular as environment minister especially over the Adani coal port in Queensland. She is trying to save her seat.

Unfortunately, it seems most Australians have fallen for the propaganda hook, line and sinker. They don’t see problems using 112 devices to calculate a national average temperature. They don’t see an issue with device siting. They turn a blind eye to the blatant corruption of the data by the data keepers. This list goes on and on…

Bryan A
Reply to  Nicholas William Tesdorf
March 5, 2019 3:00 pm

MEE…
ME2…
Me Too

March 5, 2019 2:27 pm

Data is data, I would tend to agree.

What comes out of an adjustment of data is no longer data. Rather, it is synthetic output.

If “adjusting” needs to be done, then it is in the EXPLANATION of the data. The explanation of the data should discuss the data’s possible limitations and shortcomings. But the data is the data. Period.

Sweet Old Bob
March 5, 2019 2:36 pm

Isn’t it obvious ? Homogenize the southern hemisphere temps and then show …. CAGW !
Not all that many SH stations …. one temp to control them all ! Hockey stick !

Jerry
March 5, 2019 2:51 pm

They own the data, sheep…..

john
March 5, 2019 2:56 pm

If their scientific rationale for “adjusting” these temperatures doesn’t stand up to even basic scrutiny then the only remaining reason is that the fix is in and approved by all involved if there were no arguments against. They should all be dragged into the sunlight and fired straight up!

Kiwibok
March 5, 2019 3:07 pm

The thing is – as long as they CAN get away with it, they will keep on doing it.
Is there no way WUWT can crowdfund a lawsuit to sue the BOM for damages, or
is there not ONE politician that's honest who can demand answers?

Mike
Reply to  Kiwibok
March 5, 2019 3:59 pm

The problem here is that anyone – and especially politicians – perceived to have an opposing view is shouted down as a "denier", anti-science and a right-wing dinosaur. So politicians who clearly do not go along with this hysterical nonsense dare not admit it. It shows that most are basically either gutless or do not have the confidence to confront the warming claims head on. This in turn stems from the systematic suppression of pertinent information.
It demonstrates the very pressing need to have someone in power to set up a small investigative committee to carefully check all contrary claims such as have been highlighted by the various sceptical scientists and, having found the inconsistencies (which they will do), set up a more potent inquiry such as a Royal Commission in which all relevant players are bound by law to front.
The battle to convince the authorities will be difficult. Even just yesterday we had a QC (prominent lawyer) announce he has joined the greens party and in an interview on radio claimed climate change to be the "most urgent problem facing humanity" or words to that effect. The response from the host – who enjoys a very large audience – was "There is no doubt about that."
These people who think they are the enlightened and righteous ones are living in ignorance and darkness. Very sad.

Tom Abbott
Reply to  Mike
March 6, 2019 3:57 am

“The problem here is that anyone – and especially politicians – perceived to have an opposing view is shouted down as a "denier", anti-science and a right-wing dinosaur. So politicians who clearly do not go along with this hysterical nonsense dare not admit it. It shows that most are basically either gutless or do not have the confidence to confront the warming claims head on.”

I think the statement that politicians who are closet skeptics "do not have the confidence to confront the warming claims head on" is the answer in most cases.

You know that as soon as a skeptic tries to make a case against CAGW, the alarmists will roll out the "97 percent consensus" and all sorts of studies supporting their claim, and how is your average person, who doesn't study this stuff every day, going to counter those arguments? They certainly won't be able to counter all the claims in real time. Even a know-nothing can cite studies supporting CAGW. It takes someone with a little knowledge of the subject to counter those claims, and most politicians, or most citizens for that matter, don't have that detailed knowledge of the subject.

Therefore, most skeptic politicians keep their mouths shut.

Fortunately, we have Trump to do the debunking of the CAGW narrative. 🙂

Flight Level
March 5, 2019 3:10 pm

The most socially alarming part is that they get away with it, rewriting history that is.

Dave Fair
Reply to  Flight Level
March 5, 2019 11:43 pm

Those who control history, control the present.

Flight Level
Reply to  Dave Fair
March 6, 2019 1:25 am

Damn right indeed. Not a good deal to be part of that kind of future though.

RobR
March 5, 2019 3:11 pm

Mosh, any thoughts on these adjustments? It don’t look good from where I’m sitting.

Steven Mosher
Reply to  RobR
March 5, 2019 7:23 pm

I can only speak to the methodology.

There are, broadly speaking, two approaches to doing adjustments.

A. Bottoms up
B. Top down.

Bottoms up works like this. A human, sometimes with the aid of an algorithm and field tests, looks at the data and tries to reconstruct what should have been measured. Let's take a simple example.

You have an old data series, say 1900 to current. You look at the metadata and find a record that they
changed from sensor A to sensor B. You then run side by side field tests of sensor A type
versus sensor B type. You find out that sensor A has a warm bias of .5C.

studies like this

https://www.researchgate.net/publication/252980478_Air_Temperature_Comparison_between_the_MMTS_and_the_USCRN_Temperature_Systems

https://www.researchgate.net/publication/249604243_Comparison_of_Maximum_Minimum_Resistance_and_Liquid-in-Glass_Thermometer_Records

So your field work shows you that when you replace sensor A with sensor B you can expect a .5C artificial cold shift or whatever. So this shift wont always show up in the data because if it was warming when you introduced the cold biased sensor, the temperature may just look flat.

Any way, to remove this Bias due to sensor change you have to Move one segment UP, or the other segment down. Generally people adjust the past. The reason is obvious.

This is exactly the kind of work that spencer and christy do! it is also what leif does. For UAH you have to stitch together multiple sensors all having different properties and calibrations and drifts.
So you note that when sensor A and Sensor B were looking at the same patch of earth, one has a warm bias relative to the other. So you adjust one.. or the other. doesnt matter one whit which gets adjusted.
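
A minimal sketch of the segment shift just described, assuming a documented changeover at a known index and a bias estimated from the side-by-side field tests mentioned above (every number here is hypothetical):

import numpy as np

# Hypothetical monthly series with a documented instrument change at change_idx.
# Field comparisons (hypothetically) showed the old sensor read 0.5 C warm
# relative to its replacement.
series = np.random.default_rng(3).normal(0.0, 1.0, 1200)   # e.g. monthly anomalies
change_idx = 600
old_sensor_warm_bias = 0.5   # from the field comparison, not from this series

adjusted = series.copy()
adjusted[:change_idx] -= old_sensor_warm_bias   # shift the earlier segment down
# Equivalently, the later segment could be shifted up by the same amount;
# the trend across the changeover comes out the same either way.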

So this is the bottoms up approach. You are working, typically with a human, looking at series by series
looking at the metadata and trying to take out the biases where you have good evidence of systematic changes in the methods of observation.

A similar one is TOBS. same basic approach. except the correction is a modelled answer. Verified model with field data, but a model nonetheless.

NCDC used to work this way. You have discrete ‘reasons’ for making the adjustment: station move, tob change, instrument change. and every decision is traceable.

Now, of course there are several potential challenges here.

1. can you trust the metadata, was the station REALLY moved, was a TOB missed?
was an instrument change properly recorded? Did the ACTUAL instrument in question
have a cold bias or warm bias?
2. Can you trust the researcher or does he have his thumb on the scale?
3. Did this bottoms up approach miss obvious flaws in the data?
4. Did the researcher stop investigating because he got an answer he liked. Note this is
different than #2

Some countries ( maybe AUS) tend toward this bottoms up approach. Local experts trying to build a record that accounts for changes in the method of observation: changes in time, changes in place, and
instrument changes. They also use algorithms to assist in this. Algorithms designed to find
data that statistically stands out.. think of this as expert based rule governed approach.
In a perfect world you get a discrete reason for every change.

Also, note with this approach that people will almost always glom on to #2. Accuse the guys of cheating. In general, this ends all reasonable technical discussions. They made a change, I dont get it, therefore, they are frauds. Nice logic if you can sell it.

The top down approach is algorithmically driven. There is no human looking at individual series
and deciding, THIS should go up or THAT should go down. Metadata is used, but not trusted.
NCDC shifted to this approach with PHA. in these approaches an algorithm looks at the data.
A typical algorithm is SNHT.
Here is a sample of some work
https://cran.r-project.org/web/packages/snht/vignettes/snht.pdf

The algorithm doesnt need to know anything about metadata or instruments or TOB, or station moves
It just looks at the data, detects oddities and then suggests an adjustment to remove the oddities.

The algorithm used by NCDC is an improvement to SNHT ( standard normal homogeneity test)
Its called pairwiseSNHT. in pairwise SNHT every station is iteratively compared to its neighbors.
First step is to select the 50 closest well correlated neighbors. Then you iteratively compare the neighbors. Suppose 49 stations record declining temperatures and 1 records an upward drift. pairwiseSNHT will signal that the oddball may need a correction. or if 49 record a flat period and 1 station shows
a .5C jump one month and thereafter, that one oddball gets flagged.

At this point you may go and check? YUP, the metadata shows that the station's TOB was changed at that point, so you have a reason for the change. the dataseries says something changed, the metadata support it. At other times you may not find anything in the metadata.
the station jumped 1C, but metadata has no supporting reason. Like all data, metadata can have errors of omission and commission.
In any case, the algorithm decides what changes need to be made to make the WHOLE dataset more consistent.

That’s it.
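
As a rough illustration of the pairwise idea just described (a toy version only, not the actual NCDC PHA or SNHT code linked elsewhere in this comment and thread), the sketch below compares a target station against the median of its neighbours and flags a step in the difference series:

import numpy as np

def flag_step(target, neighbours, threshold=0.4):
    # Toy homogeneity check: find the most likely breakpoint in the
    # target-minus-neighbour-median difference series and report it if the
    # mean shift across it exceeds `threshold`. Illustrative only.
    diff = target - np.median(neighbours, axis=0)
    best_k, best_shift = None, 0.0
    for k in range(12, len(diff) - 12):          # skip the very ends
        shift = diff[k:].mean() - diff[:k].mean()
        if abs(shift) > abs(best_shift):
            best_k, best_shift = k, shift
    if abs(best_shift) > threshold:
        return best_k, round(float(best_shift), 2)
    return None

# Hypothetical example: 49 well-behaved neighbours, one target with an
# undocumented 0.5 C step halfway through the record.
rng = np.random.default_rng(0)
neighbours = rng.normal(0.0, 0.3, (49, 240))
target = rng.normal(0.0, 0.3, 240)
target[120:] += 0.5

print(flag_step(target, neighbours))   # roughly (120, 0.5)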

Now, with this approach you lose something. Since no human had their hand in the individual corrections, you wont always have a DISCRETE reason why the data was adjusted. Bot done it.
Sometimes the metadata supports the decision and sometimes, well sometimes the only metadata you have is location: no TOB data, no instrument type, no siting photos etc. So you are trusting the algorithm to correct the data. Note there are MULTIPLE statistical approaches to doing this. all with pros and cons.

This young lady has a nice masters thesis, and some nice examples not too math heavy
https://www.math.uzh.ch/li/index.php?file&key1=38056

One benefit of the top down approach is that you eliminate the "cheating" charge. The guys at NCDC dont go hunting through each series deciding what goes up and what goes down. There is no tie between CO2 and the adjustments the algorithm makes. its all standard documented statistical adjustment.

At berkeley we developed our own top down approach. One reason we did this was to find out if a totally new top down data driven approach gives the same answer as NCDCs top down approach. and gives the same answer as CRU. This was done to counter the charge of “Cheating” or thumb on the scale. The answer? We get the same global answer after adjusting.

What are the other benefits of the top down approach? well you can objectively test it.

1. You can take the code and data, run it and get the same answer. Why is this important?
Well with the bottoms up approach a different human looking at the same data may make different
decisions. then what? battle of the experts? With an algorithmic approach the code is open
and the data is there, you can see exactly how it works. if you want to claim that the code adjusts
data to match CO2 rise, well then the code is there to show you this claim is bogus.

2. You can run objective tests on the algorithms. This has been done for NCDC and other approaches.
here is just one example
https://www.clim-past.net/8/89/2012/cp-8-89-2012.pdf
here is another
https://rmets.onlinelibrary.wiley.com/doi/full/10.1002/joc.4265

3. It serves as a CROSS CHECK on the bottoms up approach. There are handful of papers comparing
bottoms up “National approaches” with Berkeleys Top down approach. more coming because bottoms up guys believe bottoms up is better. see? debate, in the science.

4. We can test that it works by comparing to reference stations.
https://agupubs.onlinelibrary.wiley.com/doi/full/10.1002/2015GL067640

5. Lastly, one weakness ( see below) SHOULD focus research on the only questions worth asking
A) what is the effect of UHI
B) what is the effect of microsite bias

For online debates the top down approach helps me understand who is serious about actual technical
discussions and who is just interested in smearing and false accusations of fraud. The various codes
are public. The benchmarks are public, the bugs are documented, the changes and improvements are
documented, there is a rich pile of purely statistical literature on the general time series topic, and anyone claiming fraud about top down methods, doesnt know what they are talking about. they instantly discredit themselves because there are better arguments against top down methods than the false one of fraud. Folks who use the weakest argument just own goal themselves.

What are the problems with top down?

1. People continually want to know the reason for every discrete adjustment. ( think of asking a neural net why it identified the image as a cat ) But in the algorithmic approach the whole rationale is based on getting away from the human element, the human decision, the cheating human. the algorithm
does not attempt to rationalize every decision, it attempts to REDUCE bias, and fix obvious problems.
decisions can be confirmed by metadata but not in every case. because metadata, like all data is subject to uncertainty.

2. The top down approach works best when a majority of stations have no issues. This is a problem especially for locations where urban sites outnumber rural sites. These areas exist. I like a top down approach because its weakness focuses on an area that needs more research. Focus drives
improvement.

3. Top down approaches reduces bias in regional metrics sometimes at the expense of local
fidelity. What’s this mean? if you’re interested in ONE single record at one single location
and the accuracy of that one single data stream, your best bet is to do top down and bottoms
up analysis. ( called a hybrid approach)

In short, top down will eliminate the charges of human fraud, it removes potential bias due to idiosyncratic human decisions, but at the expense of not having adjustments you can explain in 100% of the cases. It trades global reduction in bias for some potential of odd local results. It gives people cherries to pick. There are two kinds of cherry pickers: Your due-diligence cherry picker who points out
the issue and works to improve things, and the cherry picker who finds issues and screams fake or fraud.
guess which ones are from outside the country called science?

Obviously I prefer the top down approach. One because it lets me know who the dishonest cranks are ( folks who insist the algorithms “cheat” or that NCDC cheats) and second because we have good ways of testing the approach and improving it. So when my friend emails me and says hey we found a problem in your estimates, I am super happy. I have 10 or so years invested in the work. I make the data available to you, especially when your aim is to try and find something wrong with it. !

https://rmets.onlinelibrary.wiley.com/doi/abs/10.1002/joc.4808

https://sci-hub.tw/https://doi.org/10.1002/joc.4808

Back To Australia!

1. I have not read their approaches with a lot of due diligence. I see people claim fraud, but
havent seen any evidence of that in their publications.

2. Our approach tends to show less warming than theirs. I started to look into this ( mainly
trying to help Geoff S) It’s a huge effort to look at an entire country. A while back I helped
with a project on Labrador https://sci-hub.tw/https://doi.org/10.1139/cjes-2016-0034 and
going through even a small region by hand is a months long task.

In general I would choose a top down approach over a bottoms up approach, BUT if you are really interested in australia, really really interested in what the best answer is, then you’d better compare
top down with bottoms up. You’d test a variety of algorithms, youd hold out some data and see which
series was best at predicting held out data. huge job, hybrid approach.

As far global warming goes, I’m happy with the imperfect job our top down approach delivers.
Algorithm scored well on objective tests. There is room for improvement and I spend most of my time
looking for systematic flaws, flaws that affect 100s or thousands of stations. In total about 35% of the stations receive no adjustments. of the 65% that are adjusted, half are warmed, half are cooled, roughly speaking. Depending on the region, some regions are warmed and others are cooled.
nobody criticizes the regions our algorithm cools. immigrant labor.

In the end land adjustments are small, globally speaking. land is 30% of the record. The other 70% is SST. SST records have their trend adjusted down more than land is adjusted up. So if you use
all raw data for SST and raw land you get MORE WARMING than if you adjust SST and adjust SAT.

raw data worshippers dislike this fact.

How do I put this. Suppose australia is warmed artificially by 1C. Whats that do to the global picture?

.015C bias to the global record.

So, am I going to focus my time on AUS? nope. There is more interesting work in UHI and microsite.
Folks who want to do a proper audit of AUS, and by proper audit I mean a steve mcintyre class of audit, well you have your work cut out for you. I always look at the ROI. If I invest my time in UHI which may have an effect across the globe I will have more impact than if I focus on a single country that doesnt matter in the grand scheme of things.. .015C. Its like AUS CO2 cuts!! too small globally to make a difference.

Anyway. I have been working on a tutorial for WUWT folks, first on the global average, later one
on adjustments. maybe aus would be a good example to work. But first lets see if I can get the method tutorial done and published here.. its pretty frickin long for a blog post

Then maybe a post on adjustments, in the context of famous cases where data needed to be corrected
to move science forward.

Patrick MJD
Reply to  Steven Mosher
March 5, 2019 9:58 pm

Did you cut and paste that? Because it just does not look like your usual style.

Steven Mosher
Reply to  Patrick MJD
March 6, 2019 12:09 am

err no. Usually, since I am busy, I post from the taxi, train or plane. On the phone.
I had some time to sit down and write and the person asking the question seemed genuinely interested, perhaps willing to learn. plus australia has also been an interesting place when it comes to temperature. A cursory look says the data is ratty, not as bad as USA data, but pretty fricken ratty. a cool challenge. I had spent some time collating data/stations for geoff S because he actually does fricking work and shares ideas, and though we may disagree, he is always respectful and never stoops to screaming fraud. He may disagree with the approach people take but its always in a technical context. In any case he contacted me recently about some technical issues.. One thing he said forced me to re look at some work I did, and it helped ( I owe him a thank you, always best made public, so I'll do that when the piece goes out) anyway, somebody asked an honest question about AUS without prejudice or insinuation so that deserved a fair and long answer

Mike
Reply to  Steven Mosher
March 5, 2019 10:49 pm

Total gobbledygook garbage that leads to tangled web of bullshit.

Nothing…that’s right, nothing will come out of it apart from useless mumbo jumbo.

You don’t adjust data.
You don’t adjust data.
You don’t adjust data.
You don’t adjust data.
You don’t adjust data.
You don’t adjust data.
You don’t adjust data.
You don’t adjust data.
You don’t adjust data.

Talk all you want but you don’t adjust data.

Steven Mosher
Reply to  Mike
March 6, 2019 12:11 am

Ok, when your social security check comes tell them no CPI adjustment please.

Psst, do you wear glasses?

John Endicott
Reply to  Steven Mosher
March 6, 2019 7:23 am

Psst, nobody cares about your glasses fetish.

Philip Schaeffer
Reply to  Mike
March 6, 2019 4:03 am

Mike said:

“Total gobbledygook garbage that leads to tangled web of bullshit.

Nothing…that’s right, nothing will come out of it apart from useless mumbo jumbo.

You don’t adjust data.
You don’t adjust data.
You don’t adjust data.
You don’t adjust data.
You don’t adjust data.
You don’t adjust data.
You don’t adjust data.
You don’t adjust data.
You don’t adjust data.

Talk all you want but you don’t adjust data.”

Wow. Truth by bald assertion AND repetition! How can you beat that?

Let's say you had a thermometer. You collect a bunch of readings with it.

Then you find out that the markings are off by 2 degrees! Damn. The thermometer still reads consistently over its range of measurement, but it's all shifted by 2 degrees.

Do you throw away your data? Do you insist on keeping the results that are all 2 degrees out? Do you correct for the bias you discovered?

Is it really true that data should never be adjusted?

John Endicott
Reply to  Philip Schaeffer
March 6, 2019 7:22 am

The thermometer still reads consistently over its range of measurement, but it's all shifted by 2 degrees.

Then why would you need to adjust? It's consistent, meaning the anomalies will be the same and the trend will be the same regardless of which way it's shifted. Adjusting it only opens opportunities to change the results to something you prefer.

Is it really true that data should never be adjusted?

The data is the individual observations/measurements. The data is the data, like it or not. Adjustments to the data are not data; they are guesses (whether done by an individual or by an algorithm). The result of data + adjustments is not data. It may or may not be useful to adjust the data, but the result is, by definition, no longer data because it is no longer an actual measurement.

Philip Schaeffer
Reply to  Philip Schaeffer
March 6, 2019 8:10 am

John Endicott said:

“Then why would you need to adjust? It’s consistent, meaning the anomalies will be the same and the trend will be the same regardless of which way it’s shifted. Adjusting it only opens opportunities to change the results to something you prefer.”

What if you want to know what the actual temperature was? You know that your thermometer is out by 2 degrees. The only way to determine the actual temperature is to allow for this.

“The data is the individual observations/measurements. The data is the data, like it or not. Adjustments to the data are not data; they are guesses (whether done by an individual or by an algorithm). The result of data + adjustments is not data. It may or may not be useful to adjust the data, but the result is, by definition, no longer data because it is no longer an actual measurement.”

If you know that your data is all inconsistent by 2 degrees with the temperature you now know you really observed, then adjusting your data by 2 degrees to match what you now know you actually really observed doesn’t make it not data.

Philip Schaeffer
Reply to  Philip Schaeffer
March 6, 2019 8:17 am

Let’s say I’m looking at my thermometer now. It says 20 degrees. I record that data.

Then I remember that it’s out by 2 degrees. So I put a line through my 20 degrees figure and write 22. Is that no longer data?

John Endicott
Reply to  Philip Schaeffer
March 6, 2019 8:42 am

Data is the *actual* recorded measurements. period. Anything else, no matter how much “better” you might “think” it to be, is not actual measurements and therefore, by definition, is most definitely *NOT* data. The data and *only* the data is the data, no matter how flawed you might consider it to be.

John Endicott
Reply to  Philip Schaeffer
March 6, 2019 8:53 am

And if you think you know that data is “incorrect” by “X” degrees, that’s what ERROR bars are for. You don’t change the data; you use error bars to indicate how accurate the data may or may not be.

Philip Schaeffer
Reply to  Philip Schaeffer
March 6, 2019 9:16 am

John Endicott said:

“And if you think you know that data is “incorrect” by “X” degrees, that’s what ERROR bars are for. You don’t change the data; you use error bars to indicate how accurate the data may or may not be.”

In this case you know exactly what the error is. It isn’t a range. It’s a fixed offset. This cannot be represented by error bars.

Mike
Reply to  Philip Schaeffer
March 6, 2019 5:07 pm

Well for a start you wouldn’t use a thermometer that was out by 2 degrees, having first checked its accuracy. Or do you think this is beyond science?
There is nothing wrong with mercury thermometers. They are very accurate and very consistent. Decade after decade. I know this because I have one which is over 50 years old and is just as accurate today as it was when manufactured. It’s easy enough to check for accuracy against 2 or 3 other newer thermometers. They also have more than enough resolution for measuring the temperature of the human habitat. Fandangled new digital stuff is either for use in the lab or for climate scientists and meteorologists who need to justify their existence by blinding people with meaningless 0.001 degree resolution data taken over ridiculously short time periods and claiming they can extract permanent trends after they adjust the data without real justification.

Let’s take your example. You believe (reason?) your measurements are out by 2 degrees so you adjust the temps up or down by 2 degrees. The original measurements are now gone. Then someone comes along 10 years later and decides that these temps are wrong for some other unforeseen reason and adjusts the “data”. You are now left with garbage.
And what happened to the original data? Maybe the original adjustment was made for unscientific reasons? Who knows?
What should happen is that your suggestions for change and your reasons should be annexed to the original unchanged data so that others may scrutinise at a later date.
A perfect example of the reasons NOT to adjust is that the BOM is now claiming – AND RECORDING for ever – new record breaking temperatures here almost daily. Given the original posting and many others, I cannot and do not believe them.
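The annex-rather-than-overwrite approach suggested above can be sketched as a simple record structure: the original observation is kept immutable, and each proposed change is appended with its reason so that others can audit or reverse it later. The names and values below are hypothetical, not any bureau’s actual schema.

```python
# Sketch: keep raw observations immutable and annex adjustments with their reasons.
# Hypothetical structure and values; not any agency's actual schema.
from dataclasses import dataclass, field
from typing import List

@dataclass(frozen=True)
class RawObservation:
    station: str
    date: str
    value_c: float              # the value as originally recorded, never modified

@dataclass
class Adjustment:
    delta_c: float
    reason: str
    made_by: str
    made_on: str

@dataclass
class ObservationRecord:
    raw: RawObservation
    adjustments: List[Adjustment] = field(default_factory=list)

    def adjusted_value_c(self) -> float:
        # Derived on demand; the raw value itself is untouched.
        return self.raw.value_c + sum(a.delta_c for a in self.adjustments)

record = ObservationRecord(RawObservation("Rutherglen", "1913-07-01", 1.5))
record.adjustments.append(
    Adjustment(-0.8, "suspected site change (hypothetical reason)", "analyst", "2019-03-06")
)
print(record.raw.value_c, record.adjusted_value_c())   # 1.5 0.7
```

Whether the adjusted value is ever a better estimate than the raw one is exactly what the rest of this thread disputes; the point of the structure is only that the dispute stays auditable.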

Philip Schaeffer
Reply to  Philip Schaeffer
March 7, 2019 3:13 am

Mike said:

“Well for a start you wouldn’t use a thermometer that was out by 2 degrees, having first checked its accuracy. Or do you think this is beyond science?”

I’m not sure that you understand how thought experiments work.

If your absolute statement is correct, then it shouldn’t be possible for me to propose a set of circumstances where it doesn’t apply.

“There is nothing wrong with mercury thermometers. They are very accurate and very consistent. Decade after decade. I know this because I have one which is over 50 years old and is just as accurate today as it was when manufactured. It’s easy enough to check for accuracy against 2 or 3 other newer thermometers. They also have more than enough resolution for measuring the temperature of the human habitat. Fandangled new digital stuff is either for use in the lab or for climate scientists and meteorologists who need to justify their existence by blinding people with meaningless 0.001 degree resolution data taken over ridiculously short time periods and claiming they can extract permanent trends after they adjust the data without real justification.”

This is irrelevant to the thought experiment I have detailed.

“Let’s take your example. You believe (reason?) your measurements are out by 2 degrees”

The question about reason suggests to me that you have failed to understand the nature of the thought experiment. This is about what you do if you have verified that your thermometer is marked 2 degrees out.

“so you adjust the temps up or down by 2 degrees.”

So now your temperatures reflect the temperature that you actually observed, not the temperature that you wrongly thought you had observed.

“The original measurements are now gone.”

No, the true nature of the original measurements is now revealed.

“Then someone comes along 10 years later and decides that these temps are wrong for some other unforeseen reason and adjusts the “data”. You are now left with garbage.
And what happened to the original data? Maybe the original adjustment was made for unscientific reasons? Who knows?
What should happen is that your suggestions for change and your reasons should be annexed to the original unchanged data so that others may scrutinise at a later date.
A perfect example of the reasons NOT to adjust is that the BOM is now claiming – AND RECORDING for ever – new record breaking temperatures here almost daily. Given the original posting and many others, I cannot and do not believe them.”

I agree that the original observations should be preserved, but if someone asks you what temperature it was at a certain time, and your original observation was 20, and you know your thermometer reads cool by 2 degrees, the correct answer to tell them is that the temperature was 22.

John Endicott
Reply to  Philip Schaeffer
March 7, 2019 5:18 am

This is irrelevant to the thought experiment I have detailed.

Your thought(less) experiment is irrelevant. If you know in advance that the thermometer is off by 2 degrees, you don’t use that thermometer. Period. And if you don’t know in advance but only “figure it out later”, then you don’t know that the error was actually existent at the time the readings were taken or if it developed over time, you are only guessing and assuming. The data is the data. You can not change the data (at least not if you want to be taken seriously as a scientist. Clearly you do not).

In this case you know exactly what the error is. It isn’t a range. It’s a fixed offset. This cannot be represented by error bars.

Bzzt wrong. With thermometer readings, there’s always a range; all this does is shift where the upper and lower bounds of the range should be marked. And no, you don’t know exactly what the error is (unless you are omniscient); even what you *think* the error is has its own error bars.

Philip Schaeffer
Reply to  Philip Schaeffer
March 7, 2019 7:50 am

John Endicott said:

“Your thought(less) experiment is irrelevant. If you know in advance that the thermometer is off by 2 degrees, you don’t use that thermometer. Period.”

Technically, if you know it is off by exactly 2 degrees because you tested it and determined that the markings are off by 2 degrees, there is absolutely nothing to stop you using it, provided you check that the 2 degree offset is the only thing wrong with it.

If you get a thermometer that you know to be good, and you re-mark it so that it reads out by 2 degrees, you still know what the true reading is. It doesn’t prevent you from knowing the temperature with the same accuracy you could before you re-marked it.

Regardless, this is irrelevant to the thought experiment.

“And if you don’t know in advance but only “figure it out later”, then you don’t know that the error was actually existent at the time the readings were taken or if it developed over time, you are only guessing and assuming.”

This is not necessarily true.

Take a glass thermometer that has engraved markings for F and C. Suppose you test it against a correctly calibrated source and you find that it is 2 degrees C out. Then you check the F scale. It reads correctly. The engraving can’t have moved, and the F checks out, so it logically follows that the engravings for C were marked incorrectly by 2 degrees. Now, in a “you can’t actually know anything” sense, this isn’t absolute, but if we’re going to get into arguments about how you can’t actually really know anything there is no point discussing it.

Heck, what say that I made one deliberately to trick someone, and left the F scale correct so that it could be verified that the thermometer still performed as it had before I doctored the markings.

It is simply not impossible.

“The data is the data. You can not change the data (at least not if you want to be taken seriously as a scientist. Clearly you do not).”

I would say that in the case of the thermometer with offset markings, recording what it actually reads to the eye is data about what the thermometer represents the temperature as, and recording what real temperature that actually represents is data about the true temperature represented by the thermometer.

Sure, you can record both at the same time, but does the latter become not data if you don’t write them down at the same time, and instead, knowing that the markings are off, allow for it later?

“Bzzt wrong. With thermometer readings, there’s always a range; all this does is shift where the upper and lower bounds of the range should be marked. And no, you don’t know exactly what the error is (unless you are omniscient); even what you *think* the error is has its own error bars.”

Yeah, but you can’t represent the temperature that you know the thermometer is actually displaying with error bars around the offset reading. The real reading is 2 degrees away, with its error bars equally wide.

John Endicott
Reply to  Philip Schaeffer
March 7, 2019 8:41 am

Heck, what say that I made one deliberately to trick someone,

Then you’ve just verified that you aren’t interested in science or data or facts. Bottom line in real science, you don’t change the data. period. You report the data you have and you can report alongside the data any errors you think that data contains, but the data remains *AS IT IS* anything else is not data by definition.

John Endicott
Reply to  Philip Schaeffer
March 7, 2019 8:46 am

The question about reason suggests to me that you have failed to understand the nature of the thought experiment. This is about what you do if you have verified that your thermometer is marked 2 degrees out.

you fix or replace it *before* using it to record data (you did test it beforehand, like a good scientist should, right? you weren’t performing shoddy science by using untested, uncalibrated equipment and then going back and changing your data to cover up your slipshod scientific methods, right?).

Philip Schaeffer
Reply to  Philip Schaeffer
March 7, 2019 9:08 am

I said:

“Heck, what say that I made one deliberately to trick someone,”

John Endicott said:

“Then you’ve just verified that you aren’t interested in science or data or facts.”

It’s very scientific if, as in this case, what I am testing is the claim that you can’t know afterwards that the thermometer has been reading 2 degrees out the whole time. That claim is demonstrably not true. This isn’t about ethics.

“Bottom line in real science, you don’t change the data. period. You report the data you have and you can report alongside the data any errors you think that data contains, but the data remains *AS IT IS* anything else is not data by definition.”

If, as I have just established, it is possible to know afterwards that your previous readings were offset by 2 degrees, does it become not data because I write the temperature I know the thermometer is displaying rather than what the markings represent the temperature to be?

“you fix or replace it *before* using it to record data (you did test it beforehand, like a good scientist should, right? you weren’t performing shoddy science by using untested, uncalibrated equipment and then going back and changing your data to cover up your slipshod scientific methods, right?).”

What in practical reality is the difference between writing a different number next to the markings on the thermometer and then writing down the reading, or doing the correction in your head?

Philip Schaeffer
Reply to  Philip Schaeffer
March 7, 2019 9:14 am

I mean, if I’m sitting there looking at the thermometer with a pen in my hand knowing what I need to write next to the markings to correct them, and make an observation doing the correction in my head, is it not data, but suddenly becomes data when I write the new number on the thermometer?

Philip Schaeffer
Reply to  Philip Schaeffer
March 7, 2019 9:30 am

“Bottom line in real science, you don’t change the data.”

For what it’s worth, you have got me thinking about the definition of data vs information. But that also leads me to think that it’s actually impossible to change data. If it becomes information, then you haven’t changed the data.

But in the case I’m describing, does that mean you’re changing your observation and not the data? And how does that work with the example I give of a thermometer where you have the option to correct the markings with a pen and then write them down, vs doing the correction in your head before writing it down?

John Endicott
Reply to  Philip Schaeffer
March 7, 2019 11:46 am

me: “you fix or replace it *before* using it to record data (you did test it beforehand, like a good scientist should, right? you weren’t performing shoddy science by using untested, uncalibrated equipment and then going back and changing your data to cover up your slipshod scientific methods, right?).”

Phil: What in practical reality is the difference between writing a different number next to the markings on the thermometer and then writing down the reading, or doing the correction in your head?

The difference is *Science*. The first method (fixing a known “broken” instrument so that it works properly *before* use) is how one properly does science. After the fact rewriting of the data is not science and it is how charlatans get away with making the data fit their ideas.

John Endicott
Reply to  Philip Schaeffer
March 7, 2019 11:52 am

I mean, if I’m sitting there looking at the thermometer with a pen in my hand knowing what I need to write next to the markings to correct them, and make an observation doing the correction in my head, is it not data, but suddenly becomes data when I write the new number on the thermometer?

remember what we are discussing, changing the data *after* the recording (ie adjusting). If you are mentally adjusting it *before* writing it down, that’s a different issue – one of shoddy science, because you have to remember to mentally adjust it every time, and everyone else using that instrument needs to know about and do the same mental adjustment. That you could forget, or that someone else on the team might not even know to do that calculation, opens up a whole lot of room for errors (from honest people) and for manipulation (from dishonest people).

Philip Schaeffer
Reply to  Philip Schaeffer
March 8, 2019 12:53 am

John Endicott said:

“The difference is *Science*. The first method (fixing a known “broken” instrument so that it works properly *before* use) is how one properly does science.”

You can still do science properly in the thought experiment. You will get the exact same results.

“After the fact rewriting of the data is not science and it is how charlatans get away with making the data fit their ideas.”

I guess what I’m thinking about is the difference between adjusting data and adjusting an observation.

“remember what we are discussing, changing the data *after* the recording (ie adjusting). If you are mentally adjusting it *before* writing it down, that’s a different issue – one of shoddy science”

I find it hard to draw an exact line there. If I don’t write it down, but remember it with my brain, is it not data? Does it only become data when I put it on a piece of paper?

John Endicott
Reply to  Philip Schaeffer
March 14, 2019 10:11 am

You can still do science properly in the thought experiment. You will get the exact same results.

If your equipment is so faulty that you have to make mental adjustments when recording, you are doing it all wrong. Period.

I guess what I’m thinking about is the difference between adjusting data and adjusting an observation.

In real science you don’t adjust data and you don’t adjust observation. If you are, you are doing it all wrong.


I find it hard to draw an exact line there. If I don’t write it down, but remember it with my brain, is it not data? Does it only become data when I put it on a piece of paper?

If you are not writing your observations down (so that others can see what you did), you are doing it all wrong.

Philip Schaeffer
Reply to  Philip Schaeffer
March 17, 2019 5:41 am

“If your equipment is so faulty that you have to make mental adjustments when recording, you are doing it all wrong. Period.”

So wrong and so terrible, yet you get the exact same results. This isn’t about best practices. It’s about the nature of data, information, observations, and adjustments.

“In real science you don’t adjust data and you don’t adjust observation. If you are, you are doing it all wrong.”

So what do you call allowing for the markings being off? If it isn’t an adjustment to the data or the observations, then what is it an adjustment to?

“If you are not writing your observations down (so that others can see what you did), you are doing it all wrong.”

The question wasn’t about best practices. What is the answer to the question? If you’re absolutely certain what is data and what isn’t, it should be easy to answer.

James Schrumpf
Reply to  Steven Mosher
March 5, 2019 10:58 pm

Here’s the science problem with your two-sensor scenario: you know that sensor A has a 0.5 C warm bias, but you don’t really know that. It could be that sensor B has a cold bias. Unless you have a control thermometer to compare with both sensor A and sensor B, you can’t tell if A is warm or B is cold.

In any event, the proper adjustment is to adjust both toward the middle, unless you have absolute certainty that one side actually is warmer or cooler than the other. That will spread the uncertainty to both sides and prevent undue cooling or heating on one side only.
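As a rough sketch of the ‘adjust both toward the middle’ idea: if the only thing known is that sensor A reads 0.5 C higher than sensor B, one hedged option is to split the disagreement between them rather than pin the whole bias on either instrument. The values below are hypothetical.

```python
# Sketch: split a known relative offset between two sensors when it is
# unknown which one carries the bias. Hypothetical values.

relative_offset_c = 0.5           # sensor A reads 0.5 C warmer than sensor B
a_readings = [20.5, 21.0, 19.8]   # sensor A
b_readings = [20.0, 20.5, 19.3]   # sensor B, same observation times

# Pull A down by half the disagreement and push B up by half.
a_toward_middle = [x - relative_offset_c / 2 for x in a_readings]
b_toward_middle = [x + relative_offset_c / 2 for x in b_readings]

print([round(v, 2) for v in a_toward_middle])   # [20.25, 20.75, 19.55]
print([round(v, 2) for v in b_toward_middle])   # [20.25, 20.75, 19.55]
```

With a control or reference instrument, the split would of course be replaced by a correction to whichever sensor is actually biased, which is the condition the comment above attaches.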

Steven Mosher
Reply to  James Schrumpf
March 6, 2019 12:38 am

Yes.

In some methods only one series is adjusted; this could lead to a higher error of prediction.
In other methods (like the Berkeley method) all stations are simultaneously adjusted to reduce the error of prediction. So in effect, if sensor B has a cold bias of 0.5 C ± 0.25, every instance of a change between A and B could be adjusted differently, such that disagreement with all other series is globally minimized. The amount of adjustment could vary series to series based on the overall statistics.

As an example: you have 50 series, 40 of which are B type, and 10 are type A that switch to B. Not every A that switches to B will get the same adjustment.

They will get adjusted to make them better predictors of the other 40 series.

Top-down, adjusted to minimize the error of prediction, rather than bottom-up, adjusted by one discrete value.

Globally? It doesn’t matter which approach you use: different algorithms, same answer, consistent regardless of the wetware in the works or the software in the works.

Locally? One method may adjust up, a different method down, independent of the humans involved in the process.
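A loose, unofficial illustration of the ‘adjust everything simultaneously to minimize disagreement’ idea (not the Berkeley Earth code, just a toy fit with hypothetical numbers): each series gets an offset nudged toward the all-series consensus until the residual disagreement stops shrinking.

```python
# Toy illustration of fitting per-series offsets so that all series agree as
# closely as possible. Not the Berkeley Earth algorithm; hypothetical data.

series = {
    "A": [10.0, 11.0, 12.0, 13.0],
    "B": [10.5, 11.5, 12.5, 13.5],   # runs about 0.5 high
    "C": [ 9.8, 10.8, 11.8, 12.8],   # runs about 0.2 low
}

offsets = {name: 0.0 for name in series}
n_times = len(next(iter(series.values())))

# Iteratively nudge each series toward the current all-series mean at each time step.
for _ in range(50):
    consensus = [
        sum(series[name][t] + offsets[name] for name in series) / len(series)
        for t in range(n_times)
    ]
    for name, values in series.items():
        residual = sum(consensus[t] - (values[t] + offsets[name]) for t in range(n_times)) / n_times
        offsets[name] += residual

print({name: round(off, 2) for name, off in offsets.items()})
# Approximately {'A': 0.1, 'B': -0.4, 'C': 0.3}: each series is pulled toward the
# consensus by a different amount, rather than one series being corrected by a
# single fixed value.
```

The offsets here are only determined relative to one another; a real method also has to decide what anchors the absolute level, which is part of what the arguments above are about.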

AGW is not Science
Reply to  Steven Mosher
March 6, 2019 9:30 am

Question: How many times were the historical recorded temperature readings “adjusted” before 1988?

Reply to  James Schrumpf
March 6, 2019 5:11 pm

But you cannot do that with decades old data because you just don’t know.

Tom Abbott
Reply to  Steven Mosher
March 6, 2019 4:02 am

Steven, after going through that long list of reasons why temperatures are adjusted, I didn’t see any mention of why the data manipulators always cool the past and never warm it. Did you fail to mention that data manipulator rule?

RobR
Reply to  Steven Mosher
March 6, 2019 6:32 am

Mosh,

The extended reply is greatly appreciated. I know you give and take a lot of guff on here. My two cents: WUWT is no better than the alarmist blogs without objective and contrary viewpoints.

I’m looking forward to a detailed breakdown from Dr. Jen M.

WXcycles
Reply to  Steven Mosher
March 6, 2019 6:57 am

An honest scientist will alter no data and will use error bars.

The crook (and non-scientists) will adjust data and pretend it’s now more accurate.

John Endicott
Reply to  Steven Mosher
March 6, 2019 7:04 am

One benefit of the top down approach is that you eliminate the “cheating” charge.

Or rather you open up more possibilities to cheat while claiming you are eliminating cheating. Algorithms are written by people. As we’ve seen with Facebook’s and Google’s algorithms, they can be written to give the biased results you want, based on the biased assumptions the algorithm writers bake into them.

AGW is not Science
Reply to  John Endicott
March 6, 2019 7:53 am

Yup! “Algorithms” are, like any computer “code”, simply going to search for and find what they’re told to search for and find. Bias, conscious or not, can easily be “baked in.”

I agree with the method described by WXcycles – NO “adjustments” should be done, period. You present the actual measurements and use error bars to indicate ranges of error which covers any issues or inaccuracies. In that case, you actually have DATA, not “guesswork,” and you ALSO have indications of the range of errors present in it.

The reason THAT isn’t done is because it would reveal just how ridiculous the claims of “precision” regarding the Earth’s temperature history really are.

John Endicott
Reply to  AGW is not Science
March 6, 2019 8:47 am

Exactly, the data is what it is. Showing anything other than the actual data is showing a fiction. No matter how much “better” someone thinks that fiction is than the real measurements, it’s still a fiction.

Philip Schaeffer
Reply to  AGW is not Science
March 6, 2019 9:23 am

John Endicott said:

“Exactly, the data is what it is. Showing anything other than the actual data is showing a fiction. No matter how much “better” someone thinks that fiction is than the real measurements, it’s still a fiction.”

Regardless of the definition of data, if someone asks you what the temperature of something was, and you know that the thermometer it was measured with was 2 degrees out, then fiction is insisting that the temperature actually was what was recorded.

If the temperature recorded was 20, and you know that your thermometer was reading 2 degrees too cold, it is a fiction to insist that the temperature was 20.

John Endicott
Reply to  AGW is not Science
March 7, 2019 5:30 am

If you know the thermometer is off by 2 degrees, you don’t use that thermometer. PERIOD. You get one that works. Anything else is unscientific! If you didn’t know that it was off at the time the recordings were made, then you are just guessing that it was – and that is a fiction.

The correct answer to your example is to state “the temperature recorded is 20 degrees” and give the range of error bars – which would include the fact that you *think* the thermometer was off by 2 degrees at the time it was recorded along with all the other errors inherent in using such a device. Including any reasoning for those error bars being the range that they are is a bonus.
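As a small sketch of the ‘raw value plus error bars’ alternative described above (hypothetical figures): the recorded value is reported unchanged, and the uncertainty range is made wide enough to cover both the instrument’s ordinary reading error and the suspected, unverified offset.

```python
# Sketch: report the recorded value as-is, with an uncertainty range wide enough
# to cover the ordinary reading error plus a suspected (unverified) offset.
# Hypothetical numbers.

recorded_c = 20.0
reading_uncertainty_c = 0.5     # ordinary instrument/reading uncertainty
suspected_offset_c = 2.0        # suspected cold bias, not verified, so folded into the range

low = recorded_c - reading_uncertainty_c
high = recorded_c + reading_uncertainty_c + suspected_offset_c

print(f"recorded: {recorded_c:.1f} C, plausible range: {low:.1f} to {high:.1f} C")
# recorded: 20.0 C, plausible range: 19.5 to 22.5 C
```

The recorded value stays untouched; the suspicion about the instrument shows up only in the width and asymmetry of the stated range.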

1sky1
Reply to  Steven Mosher
March 6, 2019 1:48 pm

The algorithm doesn’t need to know anything about metadata or instruments or TOB, or station moves
It just looks at the data, detects oddities and then suggests an adjustment to remove the oddities.

This “top down” approach, which baldly assumes that the creator of the algorithm thoroughly knows the analytical structure of the underlying signal, is the epitome of sheer academic hubris. In each case cited by Mosher, the developer is not a seasoned analytic geophysicist, but merely a student of mathematics or programming. Instead of proven, realistic conceptions of actual signal structure in situ, they bring all the precious preconceptions of an Ocasio-Cortez to the “noble” task of “detecting oddities” and transmogrifying data in their simplistic search for the holy grail of “data homogeneity.” No serious branch of science is as lax as “climate science” in that regard.

Reply to  Steven Mosher
March 10, 2019 7:34 am

Mosh,
Thank you for this effort to enlighten this shadowy subject. I still get concerned when algorithmic adjustments are made to data that exceed the possible margin of error for the original measurement without a known reason. Assuming someone in 1930 couldn’t read a thermometer within 1/2 a degree is a bad assumption; assuming the weather station exhibits a UHI effect over 90 years is possibly a good assumption if one knows the location. Assuming the Stevenson screens all get dusty at an average rate is dubious. In any case, the raw data needs to be kept pristine and available, since applying the same algorithm to the previous answers simply results in a larger adjustment.

Azeeman
March 5, 2019 3:47 pm

How can climatologists claim gold standard accuracy with their climate change claims when the underlying temperature measurements are so poor that they require constant adjustments and are basically unfit for purpose?

Steven Mosher
Reply to  Azeeman
March 6, 2019 12:40 am

What’s the purpose?

There are two fundamental purposes. Do you know what they are?

Hint: scary headlines is not one of them.

WXcycles
Reply to  Steven Mosher
March 6, 2019 7:05 am

Who knew all thermometers in the past ran hot?

dats ‘mazing!

John Endicott
Reply to  Steven Mosher
March 6, 2019 7:25 am

Steve, when the measurements are so “poor” that they require constant adjustments, they’re not fit for *any* purpose (other than propaganda).

Tom Abbott
Reply to  Azeeman
March 6, 2019 4:09 am

“when the underlying temperature measurements are so poor that they require constant adjustments”

Who says the underlying temperatures require constant adjustments? The Data Manipulators, is who.

I say, Let’s go with the raw data. That’s more accurate than anything the Data Manipulators come up with. And it is not tainted by agenda-driven human hands. The raw data show the 1930’s as being as warm as today. That’s why the Data Manipulators want to change things.

Temperature Adjustments are a license to cheat and steal.

AGW is not Science
Reply to  Azeeman
March 6, 2019 7:44 am

Indeed. It’s downright comical to assert they have “certainty” about future climate catastrophe when the data is so bad they can’t even agree about what HAS ALREADY HAPPENED.

Richard from Brooklyn (south)
March 5, 2019 4:09 pm

The adjustment was necessary for the early readings, as Australians (as with many population groups) were shorter then (they did not have as good a diet as now). The error was a parallax error and is now corrected for standard-height people who can read the thermometer at eye level to the instrument.

…and we all know that the current BOM Mets are ‘on the level’.

Thingadonta
March 5, 2019 4:17 pm

I would also check rainfall records for Rutherglen; it was dry in the early 1900s, meaning it should have been warmer then.

Donald Kasper
March 5, 2019 4:38 pm

We all know 1930 was a period of world glaciation that reached the equator and needs to be accounted for in the record. Ultimately, the proper temperature record will show the subzero temperature summers in the Midwest that occurred at that time, and the mile of ice over Toronto.

Michael Jankowski
March 5, 2019 4:42 pm

Seems that Acorn v2 went through a very strenuous review.

Since there is no ISBN, instead of simply leaving it off page i, they just went with:

“ISBN: XXX-X-XXX-XXXXX-X”

And then page ii has a real winner with an invalid email for the author:

“b.trewin@bom.gov.au:”

Does not look ready for prime time. But I’m sure all of the data, methods, and results were sifted through with a fine-toothed comb, lol.

Jennifer Marohasy
Reply to  Michael Jankowski
March 5, 2019 5:03 pm

Michael, it was all put together in a rush to meet the deadline for inclusion in the remodelling for AR6 … for the IPCC.

“The IPCC is currently in its Sixth Assessment cycle. During this cycle, the Panel will produce three Special Reports, a Methodology Report on national greenhouse gas inventories and the Sixth Assessment Report (AR6). The AR6 will comprise three Working Group contributions and a Synthesis Report.
The AR6 Synthesis Report will integrate and synthesize the contributions from the three Working Groups that will be rolled out in 2021 into a concise document suitable for policymakers and other stakeholders.
It will be finalized in the first half of 2022 in time for the first global stocktake under the Paris Agreement.” https://www.ipcc.ch/report/sixth-assessment-report-cycle/

****
Much thanks to WUWT for reposting here.

Reply to  Jennifer Marohasy
March 5, 2019 6:40 pm

crime meets motive.

Dave Fair
Reply to  Jennifer Marohasy
March 5, 2019 11:51 pm

UN IPCC AR6 WG1 report: Modelturbation all the way down. But there are multiple lines of evidence (speculation) to support such modelturbation. Doancha know?

The political summary will ignore the science.

Tom Abbott
Reply to  Dave Fair
March 6, 2019 4:22 am

“The political summary will ignore the science”

Yeah, if history is any guide, it will.

Like the time Ben Santer changed the IPCC’s position that they could not attribute human causes to climate change, to just the opposite, that humans were definitely responsible for the climate changing. Santer completely changed the meaning of the IPCC report in his effort to promote the CAGW narrative. And the IPCC let him get away with it.

Instead of Santer being fired for being a politician posing as a real scientist, he is still working and putting out more lies about CAGW. Now he claims to have discovered absolute proof of human involvement in changing the climate. Same old lies dressed up in a new suit.

I hope one of these days these guys get just what they deserve. Exposing them for the liars they are is a pretty good start.

Steven Mosher
Reply to  Jennifer Marohasy
March 6, 2019 12:50 am

Huh. Now that’s downright funny.

RobR
Reply to  Jennifer Marohasy
March 6, 2019 11:36 am

Dr. Marohasy,

In light of Steven Mosher’s comments on a lack of insight into the methodology leading to the adjustments, do you plan on a follow-up post?

Obviously, you posit nefarious ends but a sound critique of the methods and purported reasons for adjustments would strengthen your case.
