Progress on the problems with Australia's ACORN-SAT surface air temperature records

ACORN-SAT overview

Dr David R.B. Stockwell writes on progress on the surface temperature front as follows:

Parliamentary Secretary to the Minister for the Environment Bob Baldwin established a Technical Advisory Forum of leading scientists and statisticians in January, following recommendations by an independent peer review, to review and provide advice on Australia’s official temperature data set.

The Forum’s report has been published at

http://www.bom.gov.au/climate/change/acorn-sat/documents/2015_TAF_report.pdf.

Here are the links to the press articles from The Australian included below:

http://www.theaustralian.com.au/national-affairs/climate/bureau-of-meteorology-told-to-improve-data-handling-analysis

http://www.theaustralian.com.au/national-affairs/climate/questions-remain-on-bom-records

****

Questions remain on BoM records, The Australian, June 20, 2015

Graham Lloyd

The results of an independent review of the Bureau of Meteorology’s national temperature records should “ring alarm bells” for those who had believed the bureau’s methods were transparent, says a key critic, Jennifer Marohasy.

Dr Marohasy said the review panel, which recommended that better statistical methods and data handling be adopted, justified many of the concerns raised.

However, the failure to address specific issues, such as the exaggerated warming trend at Rutherglen in northeast Victoria after homogenisation, had left important questions unresolved, she said.

The review panel report said it had stayed strictly within its terms of reference.

Given the limited time available, the panel had focused on big-picture issues, chairman Ron Sandland said.

The panel was confident that “by addressing our recommendations, most of the issues raised in the submissions would be addressed”, Dr Sandland said.

The panel is scheduled to meet again early next year.

Dr Sandland said that, overall, the panel had found the Australian Climate Observations Reference Network — Surface Air Temperature was a “complex and well-maintained data set that has some scope for further improvements”.

It had made five recommendations that would boost transparency of the data set. Although the panel reviewed 20 public submissions, Dr Marohasy said it had failed to address specific concerns.

“While the general tone of the report suggests everything is fine, many of the recommendations (are) repeat requests made by myself and others over the last few years,” Dr Marohasy said.

“Indeed, while on the one hand the (bureau’s technical advisory) forum report claims that the bureau is using world’s best practice, on the other hand its many and specific recommendations evidence the absence of most basic quality controls in the many adjustments made to the raw data in the development of the homogenised temperature series.”

BoM said it welcomed the conclusion that homogenisation played an essential role in eliminating artificial non-climate systematic errors in temperature observations, so that a meaningful and consistent set of records could be maintained over time.

*********

Bureau of Meteorology told to improve data handling, analysis, Graham Lloyd, The Australian, June 19, 2015

Better data handling and statistical methods and the use of pre-1910 temperature records would improve the Bureau of Meteorology’s national temperature data set ACORN-SAT, an independent review has found.

A technical advisory panel, brought forward following public concerns that the bureau’s homogenisation process was exaggerating a warming trend, said it was “generally satisfied” with BoM’s performance.

But it said there was “scope for improvements that can boost the transparency of the data set”.

Scientists who queried BoM’s management of the national temperature data said they had been vindicated by the report.

The review panel made five recommendations and said it was “not currently possible to determine whether the improvements recommended by the forum will result in an increased or decreased warming trend as reflected in the ACORN-SAT dataset”.

The independent review panel was recommended by a peer review of the Australian Climate Observations Reference Network — Surface Air Temperature, but it was not acted upon until public concerns were raised.

BoM’s technical advisory forum said ACORN-SAT was a complex and well-maintained data set. Public submissions about BoM’s work “do not provide evidence or offer a justification for contesting the overall need for homogenisation and the scientific integrity of the bureau’s climate records.”

Nonetheless, the review report said it considered its recommendations for improving the bureau’s communications, statistical methods and data handling, and further regional analysis based on the pre-1910 data, would address the most important concerns.

David Stockwell, who raised concerns, said he was “very pleased with the recommendations”.

“They largely identify and address all of the concerns that I have had with the past BoM work,” Dr Stockwell said. “When implemented, it should lead to considerable improvements.

“The panel recommended strongly that the BoM communicate the limitations and it agreed that errors in the data need to be corrected and homogenisation is necessary, as I do, although it must be communicated clearly that the ACORN result is a relative index of change and not an observational series.”

The forum received 20 public submissions, which questioned:

• The 1910 starting date (although some pre-1910 records are available) and its potential effect on reported climate trends;

• The treatment of claimed cyclical warming and cooling periods in the adjustment process and its effect on reported warming trends;

• The potential effects of site selection and the later inclusion of stations in warmer regions;

• The treatment of statistical uncertainty associated with both raw and homogenised data sets;

• The ability of individuals to replicate or verify the data set; and

• The justification for adjusting historic temperature records.

Bob Baldwin, the parliamentary secretary responsible for BoM, said the bureau would work to adopt the recommendations.

“We believe the forum’s recommendations for improving the bureau’s overall communications, statistical methods and data handling, and further regional analysis based on the pre-1910 data, will help address the main concerns surrounding the dataset,” he said.

Mr Baldwin said the report was an important part of ensuring the bureau continued to provide world-class information on the climate trends affecting Australia.

The review panel said its recommendations predominantly addressed two key aspects of ACORN-SAT.

They were: to improve the clarity and accessibility of information provided, in particular explaining the uncertainty inherent to both raw and homogenised datasets; and refining some data-handling and statistical methods through appropriate statistical standardisation procedures, sensitivity analysis and alternative data filling approaches.


Readers might find this essay from Willis Eschenbach enlightening:

Out of the 112 ACORN-SAT stations, no less than 69 of them have at least one day in the record with a minimum temperature greater than the maximum temperature for the same day. In the entire dataset, there are 917 days where the min exceeds the max temperature …

Australia and ACORN-SAT
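A check of that kind is easy to run against the published daily files. Here is a minimal sketch (hypothetical file and column names, not the ACORN-SAT or GHCN quality-control code):

```python
import pandas as pd

# Hypothetical per-station daily file with columns: date, tmin, tmax
df = pd.read_csv("acorn_sat_station.csv", parse_dates=["date"])

# Flag days where the recorded minimum exceeds the recorded maximum.
bad = df[df["tmin"] > df["tmax"]]
print(len(bad), "days with tmin > tmax")
print(bad[["date", "tmin", "tmax"]].head())
```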

Dudley Horscroft
June 20, 2015 8:16 am

Still seems to leave the question, at every data point changed, what was the reason for this?

Jquip
Reply to  Dudley Horscroft
June 20, 2015 8:25 am

They’ll tell you the reason for each one, after they identify which data points fail to support their theory.

Reply to  Jquip
June 20, 2015 12:59 pm

+100

Gary in Erko
Reply to  Jquip
June 20, 2015 4:33 pm

And they’ll charge you a fee for each question for which they can be bothered inventing an answer.

ICU
Reply to  Dudley Horscroft
June 20, 2015 3:17 pm

“The Forum also received over 20 unsolicited submissions from some members of the public about the dataset. Further, the Forum is also aware that there have been an unspecified number of written correspondences from members of the public sent to the Minister for the Environment the Hon Greg Hunt MP, the Parliamentary Secretary the Hon Bob Baldwin MP and the Bureau concerning the ACORN-SAT dataset.”
…
“Nevertheless, in the opinion of the Forum members, the unsolicited submissions received from the public do not provide evidence or offer a justification for contesting the overall need for homogenisation and the scientific integrity of the Bureau’s climate records.”
From pp. 4-5 of the aforementioned Forum report. So, homogenization and scientific integrity will continue on unabated. Which seems rather important, given the nature of certain members of the public.

Warren Latham
June 20, 2015 8:21 am

The Australian tax-payers should demand their money back.
It seems they too have been conned.
Perhaps a barbeque should be arranged with a few public drownings on the weekend: just a bit of fun for the children …if the weather is nice.

Tim
June 20, 2015 8:33 am

Exaggerated warming? Nothing to see here, folks. Move along now.

Ian Wilson
June 20, 2015 8:46 am
Reply to  Ian Wilson
June 20, 2015 9:42 am

The Reviewers were appointed by their Masters, the BOM.

RD
Reply to  Ian Wilson
June 20, 2015 9:59 am

An invisible hand in a black box has its thumb on the scales. Well done above….Jo does a great job on this over at her site, too.

Eliza
June 20, 2015 8:46 am

Lukewarmers duped again….LOL

Ralph Kramden
June 20, 2015 8:57 am

I hope they don’t ask NOAA for ways to adjust temperature data.

Reply to  Ralph Kramden
June 20, 2015 12:58 pm

The NOAA method is tested in double blind studies and validated.

steveastrouk
Reply to  Steven Mosher
June 20, 2015 1:09 pm

Ah, the blind leading the blind ?

scarletmacaw
Reply to  Steven Mosher
June 20, 2015 1:16 pm

The blind leading the blind?
Pick one example. Tell us exactly how NOAA adjusted the data at that specific site, with the reasoning involved.

scarletmacaw
Reply to  Steven Mosher
June 20, 2015 1:37 pm

Oops, now that I think about it Mosh your post must have been intended as sarcasm. No scientist could actually be defending the adjustments made by Ptolemy Karl.

climatereason
Editor
Reply to  Steven Mosher
June 20, 2015 1:51 pm

Mosh
I thought that in Australia there were questions about the validity of using pre-1910 records, inasmuch as you would not be comparing like for like, as pre-1910 instruments were unlikely to use Stevenson screens. In addition the number of max/min thermometers was relatively small, thereby calling into question the time of observation validity.
There were some very notable temperatures recorded pre-1910, so it would be useful to find a way to properly incorporate them.
Tonyb

Reply to  Steven Mosher
June 20, 2015 2:02 pm

“Oops, now that I think about it Mosh your post must have been intended as sarcasm. No scientist could actually be defending the adjustments made by Ptolemy Karl.”
Stupid. The recent Karl paper adjusted the SST series. The methods for SST adjustments are an open question!!!
This post is about a LAND RECORD.
Land records are adjusted by Menne’s code. That code has been validated under double blind studies.
This much I know: it doesn’t matter what homogenization approach you use, they all will warm the land record by small amounts (10, 15, maybe 20%, depending on the time period).
In short, skeptics have worried about land adjustments which warm 30% of the answer by a small amount, while they IGNORE the SST adjustments which cool 70% of the data.

Reply to  Steven Mosher
June 20, 2015 2:05 pm

“I thought that in Australia there were questions about the validity of using pre-1910 records, inasmuch as you would not be comparing like for like, as pre-1910 instruments were unlikely to use Stevenson screens. In addition the number of max/min thermometers was relatively small, thereby calling into question the time of observation validity. There were some very notable temperatures recorded pre-1910, so it would be useful to find a way to properly incorporate them.”
We use them no problem. Siting errors and instrument exposure error and sparsity all become a part of the uncertainty.
Our monthly error is pretty damn big compared to Jones.

Leonard Lane
Reply to  Steven Mosher
June 20, 2015 2:53 pm

Why don’t you explain the method, and then demonstrate on one site the pre and post adjustment data?
Just saying it is so does not necessarily make it so.

Robert B
Reply to  Steven Mosher
June 20, 2015 5:03 pm

while they IGNORE the SST adjustments which cool 70% of the data

to make it look more like a linear warming trend?
I also never got an answer for how you could get a measure of the local temperature in 1840 in Northern Victoria with only a degree of uncertainty, when BOM think that the record before 1910 is unreliable, the data for nearby (>400 km) capital cities only starts in 1855, and there is not a single temperature measurement within 100 km until 1968 (and the data within 30 km is not used).
Just noticed this graph that goes with the page. http://berkeleyearth.lbl.gov/auto/Local/TAVG/Figures/34.56S-142.05E-TAVG-Counts.png

lee
Reply to  Steven Mosher
June 20, 2015 7:45 pm

The Stevenson Screens were largely in place by the 1880s.

ironicman
Reply to  Steven Mosher
June 21, 2015 12:27 am

‘The Stevenson Screens were largely in place by the 1880s.’
Not sure about that, it took a few years to get out to every nook and cranny.
It’s a pity they don’t recognize the huge temperature spike of 1879; it’s a game changer.

climatereason
Editor
Reply to  Steven Mosher
June 21, 2015 1:34 am

BOM suggests the Stevenson screens came into general use in 1907. Warwick Hughes suggests some 20 years earlier.
http://www.warwickhughes.com/papers/ozstev.htm
tonyb

hunter
June 20, 2015 9:01 am

The Forum’s refusal to actually meet with those whose complaints led to the creation of the Forum makes their report much less than credible.

noaaprogrammer
June 20, 2015 9:03 am

Except for temperatures that are obviously extreme outliers (like 40 C in the shade in Anchorage, Alaska on Jan. 1), why don’t climate scientists leave the actual reported temperature for each site untouched, and if they feel that it should be slightly higher or lower, just adjust the error bars accordingly?

Reply to  noaaprogrammer
June 20, 2015 10:05 am

You got it, noaaprogrammer. Adjusting data is completely forbidden in the physical sciences. If data need adjusting, they’re ipso facto bad. The only out is error bars.
But instead one gets adjustments, and no error bars; the very opposite of standard valid practice. The fact that scientific societies endorse this bad practice in climate science is to their everlasting shame. Everlasting shame.

Reply to  Pat Frank
June 20, 2015 2:10 pm

Lack of error bars is my second complaint.
First:
Why are we not using the entire interglacial period for our average temperature?
Currently, we are not sure what forces us into full blown glacial conditions nor what lifts us out.
Our biggest concern should be to measure current temperatures to see how we are doing compared to what we know about previous interglacial climate patterns.
I think since we popped into this interglacial, temperatures have trended towards getting cooler.
It could get very cold sooner than they think…and that would be catastrophic.

Reply to  Pat Frank
June 20, 2015 2:30 pm

mikerestin
Please wash your mouth out with very hot water & Carbolic soap!!
You write
“Currently, we are not sure what forces us into full blown glacial conditions nor what lifts us out.”
But – surely, you must know – ‘the science(*) is settled’ – don’t you?
(*) science – whatever fits our warmunist, unelected super-suckers on the teat of public spending – cause. . . .
Auto writing
Again fully conscious that MODS may need guidance as to /sarc or /notsarcatall, but – as a small government guy – quite willing for Mods to decide on their own.
Smiles,
Auto

K. Kilty
Reply to  Pat Frank
June 20, 2015 3:15 pm

Geophysics is a physical science, and in geophysics we do make adjustments to data. For example, gravity data have to be adjusted for elevation differences among observation stations, latitude, nearby terrain and so forth. Gravity data are not “bad” but affected by well understood biases. The difference with climatology is that adjustments to gravity data are extremely easy to justify and theoretically precise. Even so, there are ways to make these adjustments in a manner that produces spurious signals. The adjustments to climatological data are not the same at all. One has suspicions that the data are corrupted by a particular bias, but justifying what one does to estimate and eliminate the bias is not at all easy to do.

Reply to  Pat Frank
June 21, 2015 8:09 am

K. Kilty, I agree with you. We do similar things in Chemistry. However what you describe and what is done with the temperature record are not the same process. “Adjustment,” the same word across the two usages — yours and surface temperature manipulations — takes on different meanings. You clearly noted that.
Your meaning involves extracting a single signal by removing well-understood biasing physical influences. The surface temperature meaning involves removal of dissimilarities and perceived errors and lacunae among data sets by means of statistical criteria, almost without reference to measurement theory at all.

Reply to  noaaprogrammer
June 20, 2015 1:14 pm

because then the error bars are incorrect.

Reply to  noaaprogrammer
June 20, 2015 1:56 pm

“You got it, noaaprogrammer. Adjusting data is completely forbidden in the physical sciences.”
Actually not.
Look at the sun spot record.
In observational science adjusting data is often required because you get the wrong answer otherwise.
Especially when instruments change. Take UAH as an example.

catweazle666
Reply to  Steven Mosher
June 20, 2015 3:31 pm

“in observational science adjusting data is often required because you get the wrong answer otherwise.”
Yes, such as when the data doesn’t agree with the computer games climate models and the political imperatives, you mean?

Reply to  noaaprogrammer
June 20, 2015 2:24 pm

1. The records are left untouched.
2. The goal is to build a NEW series.
3. The new series is a prediction.
Let’s take a simple example.
You have two scales in your house.
You weigh yourself every day, some days with both scales, some days with one scale; after a bit you stop using scale 1.
Scale 1: 200, 200, 200, 200, 200, NA, NA, NA
Scale 2: NA, NA, NA, 202, 202, 202, 202, 202
Now this is historical data. There is no point in bitching about the fact that instruments changed.
There is no point in bitching about the lack of calibration. You have what you have.
This is an observational science question, NOT a laboratory science question. In the lab we would just redo the measurements.
Second, this is applied science. That means we have a user who wants an answer. They have a question and a use for that answer. You don’t get to pitch a fit and whine about the state of the data.
I face this in business every day. Here is the history; make your best argument for what really happened.
Back to our example.
Question 1. What is the best estimate of your weight gain or loss?
Well, if you just use the raw data you will infer a weight gain. So, you have side-by-side data, and you note that scale 1 reads 2 lb lighter.
So you create a THIRD series:
Scale 1a: 200+2, 200+2, etc.
This is an adjusted series. NOTE: nobody changed the raw data; the history is STILL THERE.
Now you use Scale 1a and Scale 2 to answer the question: what is the best estimate of weight gain?
And you answer zero weight gain.
Note: nobody asked what your precise weight was.
In essence the historical record is still intact. The raw data is still there. The raw data sucks.
The question is: can you build a NEW SERIES that is an estimate of what the record would have been if known biases are removed? To remove biases you don’t change raw data. You create a new series which represents the raw data + estimated corrections.
There are simple ways to test if adjustments improve a record. They do.
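For what it’s worth, the offset-and-splice idea described above can be sketched in a few lines (hypothetical numbers from the comment, not BEST’s or the BoM’s actual code):

```python
import numpy as np

# Two overlapping series; the raw arrays are never modified.
scale1 = np.array([200, 200, 200, 200, 200, np.nan, np.nan, np.nan])
scale2 = np.array([np.nan, np.nan, np.nan, 202, 202, 202, 202, 202])

# Estimate the offset only from days where both scales were read.
overlap = ~np.isnan(scale1) & ~np.isnan(scale2)
offset = np.mean(scale2[overlap] - scale1[overlap])   # +2.0 here

# Third, adjusted series: scale 1 shifted onto scale 2's basis, then scale 2 fills the rest.
scale1a = scale1 + offset
combined = np.where(np.isnan(scale1a), scale2, scale1a)

print(offset)                      # 2.0
print(combined)                    # 202 everywhere
print(combined[-1] - combined[0])  # 0.0 -> zero inferred weight change
```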

Leonard Lane
Reply to  Steven Mosher
June 20, 2015 2:56 pm

Why don’t you use the data from one real NOAA data site, explain the reason for the adjustments, and then compare the raw data and the adjusted data? Why not be specific and put the speculations to rest?

Reply to  Steven Mosher
June 20, 2015 4:07 pm

“They do.”
No, you use the raw data, and it shows no weight change at all.
If you get anything else, your process is crap.

Reply to  Steven Mosher
June 20, 2015 4:36 pm

Oh, and I didn’t have to splice the two scales together creating a new series.

Reply to  Steven Mosher
June 20, 2015 4:59 pm

Steven,
If a difference of 2 units in your weighing analogy is important for the outcome, then you cannot and should not use this data set as it is not fit for purpose.
When there is a difference of 2 units in the overlap period, then (roughly speaking) one has to assume that both of the instruments – we do not know which one – are able to be in error by 2 units.
This then allows a valid assumption that these next two series are equally as good:
198 198 198 198 198 NA NA NA
NA NA NA 200 200 200 200 200
Uncertainty is present because at least
a. there is no data on calibration of the scales against an absolute or high quality reference measure, such as the standard kilogram in France.
b. there are no error estimates, uncertainties, sigmas, whatever jargon you wish, provided for each of the scales.
c. there is no indication of the distribution of values that can be created by repetition of measurements. It is commonly and wrongly assumed that the normal distribution and its associated suite of canned applications will apply. However, in this case, we would not seem to be dealing with a simple distribution, but a bounded one. Show me a person weighing 100 times the average. I’m not talking statistically improbable, I’m talking physically impossible.
One of the main tests of the poor quality of much climate science is the repeated demonstration of an inability to use and understand errors. I suspect you would agree with me that you should not write your draft paper, then regress the main Y against the main X on an Excel spreadsheet that calculates your error term as a goodness of fit or least squares calculation of the scatter of points.
But, I fear that many cli sci types have little idea why that Excel procedure might not be the complete story.

Reply to  Geoff Sherrington
June 20, 2015 5:19 pm

But Geoff, the question is how much does your weight change, and you have a calibration source used within the period where it is likely unchanged. It is true you don’t know the accuracy or precision of the measurements, and the uncertainty is more likely a sum instead of an average of the uncertainty. And if the measurements could be replaced with good measurements it would be better, but in this case that is not an option.

Reply to  Steven Mosher
June 20, 2015 5:01 pm

The problem, Steven, is you can’t combine measurements from more than one station like you did; there is no common calibration source like in your example. With my process, as long as you only use a complete cycle of measurements you don’t need one, and I surely don’t use a random scale on the other side of the country to somehow make it “better”.

Reply to  Steven Mosher
June 20, 2015 9:11 pm

Was the scale used prior to a pint or after going to the bathroom? Hey a pint can influence the scale (before or after).

Reply to  Steven Mosher
June 21, 2015 12:04 am

Actually this is a good explanation of my process.
Let’s say twice a day you get on the same scale, once in the morning, once in the evening. Most days you eat in the afternoon, and use the bathroom in the morning. Most days you get on the scale second to eating or the bathroom.
And you record the difference between measurements. Every day, big meals, small meals, every day it’s the difference between the morning reading, the evening reading, and min to max to min. And you want to monitor the change in weight, same scale, one person. After a while you get a good idea if your weight is changing and how it changes each year.
Then you decide you want to see how other people’s weight changes, so you can see if you’re the only one packing on a few pounds, so they get their own scale and measure the same day-to-day difference; you get 10-20,000 people with scales all over the world doing the same thing, and you all use a basic ounce scale that reads to a tenth of an ounce.
You then average a year’s worth of readings for the people in various geographical areas, because you want to see if it is only Americans getting fat, and sometimes you average the whole world.
I think with 69 million daily samples from 1940 to 2013 you can get a pretty good idea how people’s weight changes.
That’s what I do with station records.
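A rough sketch of that first-difference approach (hypothetical file and column names, not the commenter’s actual code):

```python
import pandas as pd

# Hypothetical daily file with columns: station_id, date, tmin, tmax
df = pd.read_csv("gsod_daily.csv", parse_dates=["date"])
df = df.sort_values(["station_id", "date"])

# Day-to-day change of each station against itself (first differences).
df["dmin"] = df.groupby("station_id")["tmin"].diff()
df["dmax"] = df.groupby("station_id")["tmax"].diff()

# Keep only station-years with near-complete coverage (>= 360 daily samples).
df["year"] = df["date"].dt.year
counts = df.groupby(["station_id", "year"])["tmin"].transform("count")
full = df[counts >= 360]

# Average the day-to-day changes across stations, by year.
annual = full.groupby("year")[["dmin", "dmax"]].mean()
print(annual.head())
```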

Reply to  Steven Mosher
June 21, 2015 8:16 am

You’ve gone through your example with no reference whatever to the accuracy of the two scales, or whether they both exhibit the same accuracy. As such, your answer is physically meaningless, Steve, and the person who needs data would be foolish to use yours.
In short, your answer is a perfect exemplar of much of what’s wrong with the consensus methodology applied to the surface temperature record.

Reply to  Pat Frank
June 21, 2015 8:48 am

The NCDC read me notes for the data I use say the values are +/- 0.1F, something that is more likely possible for newer data, less likely IMO for the older data.
But, it’s what all of the temperature series have to work with, SST data has to have huge uncertainty due to undersampling, even with good measurements and we know the measurements are questionable at best.

Reply to  Steven Mosher
June 22, 2015 1:14 am

micro6500 wrote on June 20 at 5:19 pm
“But Geoff, the question is how much does your weight change”
Quite so. It is easy to do some numbers work and come up with a figure that you claim to be the mass change. It might be wrong. For example, if your scales consistently read 10% low, the figure you derive will be wrong. It will be wrong because it is not related to an absolute standard.
Many climate people consistently show ignorance about even the most fundamental aspects of errors in measurement. There is ample high quality material – though never once have I seen the following referenced (in this case for weighing) –
http://www.bipm.org/metrology/mass/
Geoff.

Reply to  Geoff Sherrington
June 22, 2015 5:17 am

Quite so. It is easy to do some numbers work and come up with a figure that you claim to be the mass change. It might be wrong. For example, if your scales consistently read 10% low, the figure you derive will be wrong. It will be wrong because it is not related to an absolute standard.

While true, the consensus says other stations will randomly have different errors, so they will cancel. Not unreasonable.
But again, complaining about the calibration is not going to stop them from telling the world we’re all going to burn up, or that it’s the hottest year ever! I want to show that temps have fluctuated, but there’s zero sign of a loss of cooling, nor is there any heat accumulating year over year, in their data, and that makes their argument fallacious.
What it shows is that it’s their processing of the data, this making up of data for about 80% of the planet, that is the cause of the warming trend.
IMO it’s the only way

ferdberple
Reply to  Steven Mosher
June 22, 2015 6:35 am

198 198 198 198 198 NA NA NA
NA NA NA 200 200 200 200 200
=========================
What this data is telling you loud and clear is that there is uncertainty in your result. You cannot say if the result should be 198, 199, or 200.
The problem comes when you combine these two records via some mathematical adjustment and end up with for example:
199,199,199,199,199,199,199,199.
What you have done is to falsely eliminate the uncertainty in your result, because you are now saying that the correct answer is 199 at each data point, when you know nothing of the sort.
In effect your numbers at this point become a mathematical lie. They are falsely implying certainty where none exists. And when people then use this mathematical lie to make decisions, they end up making poor decisions.
In reality, a whole range of answers is the “correct” answer, and you have no information available to tell you which one is “more correct” than the other.
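That point can be made concrete with a small sketch (hypothetical numbers): several reconstructions are equally consistent with the data, and the spread between them is the uncertainty that a single spliced series hides.

```python
import numpy as np

s1 = np.array([198, 198, 198, 198, 198, np.nan, np.nan, np.nan], dtype=float)
s2 = np.array([np.nan, np.nan, np.nan, 200, 200, 200, 200, 200], dtype=float)

trust_s2 = np.where(np.isnan(s1), s2, s1 + 2)   # assume scale 2 is right: shift scale 1 up
trust_s1 = np.where(np.isnan(s1), s2 - 2, s1)   # assume scale 1 is right: shift scale 2 down
split    = (trust_s1 + trust_s2) / 2            # the "199 everywhere" compromise

for name, series in [("trust scale 2", trust_s2), ("trust scale 1", trust_s1), ("split", split)]:
    print(name, series, "change:", series[-1] - series[0])
# All three reconstructions show zero change, but the level sits anywhere from 198 to 200;
# that spread is genuine uncertainty that belongs in the error bars of any combined series.
```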

Reply to  ferdberple
June 22, 2015 6:59 am

What this data is telling you loud and clear is that there is uncertainty in your result. You cannot say if the result should be 198, 199, or 200.
The problem comes when you combine these two records via some mathematical adjustment and end up with for example:
199,199,199,199,199,199,199,199.
What you have done is to falsely eliminate the uncertainty in your result, because you are now saying that the correct answer is 199 at each data point, when you know nothing of the sort.
In effect your numbers at this point become a mathematical lie. They are falsely implying certainty where none exists. And when people then use this mathematical lie to make decisions, they end up making poor decisions.
In reality, a whole range of answers is the “correct” answer, and you have no information available to tell you which one is “more correct” than the other.

Fred, this is why I look at the day-to-day change for each individual scale (surface station); while it doesn’t overcome the accuracy or precision of the measurement itself, it doesn’t embellish the anomaly, and it has the least amount of uncertainty possible. Each scale records 0 change in weight, and in surface data it’s the trend that is important, not some made-up global average temp.
The surface data sucks, but IMO real useful data can be obtained from it, without infilling and homogenization from stations >1,000 km away. What did each surface station record as a trend, and what does a collection of trends average out as? I think the data is there to do this.

ferdberple
Reply to  Steven Mosher
June 22, 2015 6:38 am

Consider that the weather forecast for tomorrow is 30% chance of rain, 30% chance of clouds, 40% chance of sun.
What the temperature adjustments do is eliminate the two 30% results in favor of the 40% result, and claim to have improved the record.
In reality, choosing the 40% result is only a statistical improvement, not an actual improvement, because it hides the 60% that says your 40% result is likely wrong.

Stuart Jones
Reply to  noaaprogrammer
June 22, 2015 5:22 pm

BOM are not willing (or able) to reproduce their own calculations, as they admit that the process involves some human decision making using experience and “opinions” of the personnel, some of whom are no longer at the organisation. The black box is secret and the guesstimations they apply to the results are unreproducible, what a crock of………..

Richard M
June 20, 2015 9:23 am

One way to homogenize the data is to use satellites instead of other ground stations. Satellites would provide a set of data to correct all the measurements after 1979. Why do scientists continue to ignore this resource?

Reply to  Richard M
June 20, 2015 9:36 am

Because it doesn’t agree with their agenda.

Reply to  Richard M
June 20, 2015 1:12 pm

1. Cowtan and Way did this to correct the Arctic and people bitched.
2. Satellites don’t measure temperature.
3. Satellites INFER temperature at various altitudes above the earth.
4. Only one satellite data product I know of (AIRS) provides an estimate of the air temp at 2 meters,
a) the period is short (after 2000)
b) the first guess to seed the estimating algorithm is a weather forecast model.
5. Satellites don’t measure tmin and tmax; for AMSU there are two equator crossings and you get measures at different times of the day depending on your location.
6. Satellite resolution is not good enough to compare at the station level. The UAH product is 2.5 degrees and input data is at various resolutions.
7. Satellite data series are MORE heavily adjusted than land surface.
8. One satellite series uses a GCM to homogenize its series.
The bottom line is that for the LAND, adjustments warm the record by 10-15%.
The adjustments for SST COOL the record by more than 15%.
To repeat: the ocean is 70% of the record. It is cooled.
The land is 30% of the record: it is warmed by say 15%.
15% * 30%, what is that?
That is the amount of warming in the GLOBAL record that results from LAND adjustments.
In short, adjustments to the land record DON’T change anything important in the science.
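(Taking those figures at face value, the arithmetic is 0.30 × 0.15 = 0.045, so on this argument land adjustments would account for roughly 4.5% of the warming in the combined global record.)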

Kev-in-Uk
Reply to  Steven Mosher
June 20, 2015 3:53 pm

Steve – if the land based data are so unimportant – how come the alarmists like to use it so much? Warmest year ever and all that…? Are you dissing the land temps? As for the supposed ‘cooling’ via the SST values, that is a travesty in itself being as it is the worst data/volume sampling ratio ever! Gazzillions of tonnes of water simply unable to be spatially measured with any degree of accuracy…..and you think it should still be used?
Bottom line is simple – instead of fudging with data – the so called ‘best we have’ and creating new series, with all the inherent (and potentially magnified errors) and even worse, creating so called ‘Global’ temperature series – just use what we have (minus the known erroneous data) and presenting it all as separate series, stepped, as necessary. Thereafter, if say, we think the past thermometers were a ‘bit out’ we can simply put a note or line on the graph (different colour?) saying so. No fudging, no hiding, no further explanation would be necessary! By presenting such data in tandem with other ‘series’, e.g. satellite data, the oddities should stand out like a sore thumb? If you then want to combine data series, you do so from the aforementioned (and shown) separate series data only, and show your working! I’m actually curious if there has ever been such a presentation of any such data series?
What has happened appears to be a continual production of ‘new series’ as you call them, with no apparent back reference to previous ‘versions’ or the raw data ever being presented ALL AT THE SAME TIME. It always seems to be like a bank statement, i.e. here is your current balance, with a list of figures – in and out, but if anyone wants to actually check the numbers, they have to dig out the check book stubs, receipts, bills , etc to really ‘see’ what has been going on with their money! Of course, 99% of folk just have to ‘accept’ the given bank balance as correct as they don’t have the receipts, etc! Now add in the fact that some of this data goes back many decades and its quite clear to me that few people (if any) truly have access to the raw data anymore, and even less have any frigging idea of who, how, when or why it was ever adjusted before…..just my view

scarletmacaw
Reply to  Richard M
June 20, 2015 1:20 pm

Most of the epicycles are applied to the pre-1979 measurements.

Roy Jones
June 20, 2015 9:50 am

” the absence of most basic quality controls in the many adjustments made to the raw data in the development of the homogenised temperature series.”
In meteorology does “homogenised” mean “heated up”, or am I confusing it with “pasteurised”?

Toneb
Reply to  Roy Jones
June 21, 2015 11:44 am

No, it’s the same meaning as you would find in the dictionary and the same procedure that any data is subject to when instruments/methodology change … not to mention, in this case, differing regional and worldwide practices. For those without conspiracy ideation it’s just common sense.

Bob Fernley-Jones
Reply to  Toneb
June 21, 2015 10:59 pm

@Toneb
Homogenization as a principle is appropriate IF the nature of change at a site is well known and calculable. The trouble is that various researchers have identified changes that hold no justifications in the site history or are even contradictory.
For instance there is absolutely no doubt that the BoM has increased the temperatures in inexplicable sharp step-changes towards the ACORN start of time in 1910 in State capital city sites having recognised UHI effect, whereas it is generally seen to be opposite in rural sites.
Are you able to analyse this methodology?

cnxtim
June 20, 2015 9:55 am

Let’s be hopeful this is the thin edge of the “truth wedge” and not the attempted whitewash it appears to be. For sure, after so many datasets have been “massaged” and outright smug lies have been told, credibility, egos and reputations are going to be hurt… Keep up with the search.
This malodorous carbuncle must be lanced.

cnxtim
June 20, 2015 10:00 am

Poem by Rudyard Kipling:
I keep six honest serving-men
They taught me all I knew,
Their names are:
What?
and Why?
and When?
and How?
and Where?
and Who?

DD More
Reply to  cnxtim
June 22, 2015 2:06 pm

don’t forget the later part of the poem.
But different folk have different views;
I know a person small
She keeps ten million serving-men,
Who get no rest at all!
She sends ’em abroad on her own affairs,
From the second she opens her eyes
One million Hows, Two million Wheres,
And seven million Whys!

June 20, 2015 10:01 am

As I see it, there’s only one way to solve the problem. Three third-party engineering firms should be contracted. None of them should know the identity of the other two. A-team, B-team, C-team.
Their remit should include an engineering-quality validation-verification of the entire temperature record. Start with the instruments themselves. Proceed to frequency of calibration, field-resolution, and site moves. Include estimates of operator error. They will report after two years.
They will undoubtedly, one-and-all, point out that adjusting one temperature record to match another removes any independent physical or statistical meaning (a common-place understanding of data in all of the scientific world except climatology).
None of them will have any investment in a given answer. One suspects their independent reports will be thoroughly damning of the entire enterprise, all written in dispassionate engineering language.
I’d strongly expect that the entire surface air temperature record would be found climatologically useless.
Something like this should have been done 20 years ago, not just with the surface temperature record, but with climate models. Had this been done, the entire alarm-thing would have been stillborn. The only sound being the grinding of green teeth.
The fact that independent engineering studies have not been done can be laid right at the feet of political incompetence. One of the three incompetences playing into the whole AGW enterprise; the other two being scientific and journalistic.

Reply to  Pat Frank
June 20, 2015 4:28 pm

Yes, Pat, only registered professional engineers (as under the Professional Engineers Act Qld or similar in other countries) are capable of doing that. Under the Act it is a criminal offense to supply an engineering service when not registered. The Act has a requirement to comply with a code of conduct which mentions competence, honesty and integrity. No climate scientist could comply with the code of conduct, firstly because they are not competent (i.e. understand measurements, instrumentation, control, heat transfer, thermodynamics, statistics, dimensional analysis, mathematics etc), but also for lack of honesty and transparency.

Ken Gray
June 20, 2015 10:04 am

An undeniable way to improve transparency is to make available the raw data for public scrutiny by other climatologists and interested parties. This should be SOP for all databases. Is Australia doing this? I’m new to this issue and not a scientist so it may be that my concern has already been addressed. thanks

Reply to  Ken Gray
June 20, 2015 10:51 am

Welcome to the dark side.
May I suggest to you, and anyone wanting to know the truth, that you read the book “The Real Global Warming Disaster”.

Mariwarcwm
Reply to  Ken Gray
June 20, 2015 10:56 am

The powers that be aren’t interested in getting it right. One look at the length of the Holocene Interglacial compared with the previous 20 Interglacials will tell you that there is no hope of dangerous warming towards the end of an Interglacial. Like getting to the end of the day – the sun will go down and it’s nothing to do with us.
I was told recently that the IPCC doesn’t mention Milankovitch cycles. Is that true?

cnxtim
Reply to  Mariwarcwm
June 20, 2015 2:33 pm

The real “powers that be” are those approving the funding of this monstrous hoax. The fact that an official enquiry into the accuracy and veracity of the temperature record is in progress is the first step.
Right now it is being hidden from the likes of mere mortal shareholders/taxpayers. That has to stop…

Stuart Jones
Reply to  Ken Gray
June 22, 2015 5:27 pm

The forum recommends that the raw data is displayed ALONGSIDE the homogenised data; at the moment the raw data is hidden within the depths of the BOM site and you can never be sure how accurate it is, as they are the ones who maintain it. Another useful source of data is the original log books of the observers, but they seem to have disappeared; rumour has it that they were shredded instead of being archived. There are state government records (gazetted) that were hand checked, typeset and printed within a year of the observations. BOM can’t get their hands on them; someone needs to before they “disappear”.

June 20, 2015 10:53 am

Disaster

knr
June 20, 2015 10:53 am

Its terms of reference, that of course is the key trick: ensure that these terms mean the review gives you what you want but keeps well away from anything ‘difficult’.
The other classic trick is to impose time limits, knowing you can always claim ‘we would have, but we ran out of time’. The aim here is to spend all your time covering meaningless details (meetings are always a great way to burn time off), meaning you never get around to those ‘difficult areas’.

June 20, 2015 12:20 pm

They must be required to provide the raw temperatures. This agency has been found to have altered and even attempted to destroy raw data. It is corrupt, influenced by money and left wing administrators who see climate as a way to permanent power.

June 20, 2015 1:00 pm

“Out of the 112 ACORN-SAT stations, no less than 69 of them have at least one day in the record with a minimum temperature greater than the maximum temperature for the same day. In the entire dataset, there are 917 days where the min exceeds the max temperature ”
yes this is why raw data SUCKS
when the ausssie data is loaded to GHCN Daily, QC proceedures check for this and the data is marked as bad.
Only skeptics who demand raw data be used, are arguing for using bad data.
Scientists of course change history by deleting the bad data.

scarletmacaw
Reply to  Steven Mosher
June 20, 2015 1:25 pm

If there are two measurements per day, one at 7 am called “minimum” and one at 5pm called “maximum”, then it’s possible that there will be days when the minimum and maximum are reversed. That doesn’t mean the data is bad, but the poorly constructed computer code will flag it as bad.
How is a revision process that makes maybe as many mistakes as there are in the original data an improvement?

Billy Liar
Reply to  scarletmacaw
June 20, 2015 1:36 pm

How dare you question a computer program. They’re all the BEST.

FrankKarrv
Reply to  Steven Mosher
June 20, 2015 1:35 pm

So why was the data modified only relatively recently, when the global warming scare was in full swing, and not before, Mosher? You’re dreaming again, my friend. What SUCKS is the blatant modification to fit in with the warmaholics’ narrative.

Kev-in-Uk
Reply to  Steven Mosher
June 20, 2015 1:47 pm

Come on steve. The ‘use’ or availability of raw data must be shown to be able to validate and replicate adjustments. Of course raw data has errors and requires QC – that is not the issue. The issue is – are unnecessary adjustments being made? And how do we know without both the raw data and adjustment reasons being publicly available? Of course, therein lies the very failure of this so called ‘scientific’ process.

Reply to  Steven Mosher
June 20, 2015 2:13 pm

Unfortunately for your argument Mosh the instances of min > max are ALL in the adjusted temperatures, not the raw data. Check for yourself, as I have. Most are the result of minima being adjusted too high so that they occasionally (212 times at Cabramurra) go above the maxima. And nobody bothered to check, and Blair Trewin has known about it since June 2013 after it was pointed out and has still not “fixed” it.

Bob Fernley-Jones
Reply to  kenskingdom
June 22, 2015 12:13 am

@kenskingdom,
I’ve seen Mosh do some good stuff in past years, mixed with some opinions that surprised me. I’ve not been over here for a while, but I’m surprised again at his misunderstanding.
I wonder if he will apologise to you.
See also: http://wattsupwiththat.com/2015/06/20/progress-on-the-problems-with-australias-acorn-sat-surface-air-temperature-records/#comment-1969499

Latitude
Reply to  Steven Mosher
June 20, 2015 2:17 pm

yes this is why raw data SUCKS…
===
well that’s a relief….
So we really can’t even measure temperature.

Reply to  Latitude
June 20, 2015 3:28 pm

Thanks Latitude – when you say “So we really can’t even measure temperature,” you hit the nail exactly on the head. Take the case of ACORN station Cobar – a mining town in outback New South Wales – population now ~4,000. In the early 1960s the BoM established a purpose-built Met Office on the northern outskirts of town which is staffed by BoM professionals – unlike many Australian weather stations, which have utilised postal employees, council staff, other public servants etc. In spite of the fact that Cobar MO was purpose built, is professionally staffed and, we can assume, has state-of-the-art equipment, Cobar MO data is adjusted in ACORN by +0.4 on 1 Jan 1995 (and all earlier) on the basis of computer-driven comparisons with amateur-run stations hundreds of km away – yet amazingly ignoring Cobar Airport data only ~7 km away. I have a post on this from Nov 2014.
http://www.warwickhughes.com/blog/?p=3433
I am planning another attempt to explain the issues.

Reply to  Steven Mosher
June 20, 2015 2:20 pm

The question is, does the Raw data contain fewer errors than the homogenized, adjusted, quality controlled data? And it almost certainly does, because we can see the systematic artificial warming errors in just about every single adjusted station data series there is.
The Raw data will have random errors. Not +0.3C of errors up to 1940 and then -0.3C of errors afterward, and even more errors in the last 5 years. It’s almost like the current crop of meteorologists screw up their temperature records every single day now.
The adjustments are TOO systematic to be error fixing.
The Raw data is more accurate than the homogenized adjusted temperatures.
Even this committee said they had no idea how the homogenization process of the BOM was carried out and could not replicate it. They called it a “supervised” process whereby some pro-AGW scientist like Tom Karl or Tom Peterson leans over the shoulder of a technician and suggests that adjustments need to be added.

David A
Reply to  Bill Illis
June 21, 2015 3:42 am

Bill, has anyone from the climate community answered your questions concerning the continuing .01 degree adjustments to the past?

Leonard Lane
Reply to  Steven Mosher
June 20, 2015 3:00 pm

How about one real world example of the method of adjustment, the raw data and the adjusted data. Make it simple and let others try the method?

Reply to  Steven Mosher
June 20, 2015 3:09 pm

kenskingdom is correct, Steven Mosher – the min higher than max errors are in ACORN, not the raw data. We were promised these ~1000 errors would be repaired by end 2014 – never happened. Could not be fixed. The entire ACORN concept / project is only fit for the rubbish bin.

K. Kilty
Reply to  Steven Mosher
June 20, 2015 3:23 pm

I generally agree with what you say, Mosher, but nothing you say is a justification for the Australian BoM refusing to make clear how they collect and adjust data, and claiming that there is no adequate expertise outside the BoM that could possibly do the job as well.

K. Kilty
Reply to  K. Kilty
June 20, 2015 3:45 pm

By the way, Mosher, your claim that raw data sucks and adjustments have to be made is something I agree with in general, but is not entirely accurate in particular instances. For example, last time I made a careful study of adjustments to USHCN, from NOAA’s official page, what I read stated plainly that homogenization of the data took place before correcting for missing and censored data and before corrections for UHI. This is quite obviously done out of order. One should not homogenize data that yet contains known biases, as it spreads the bias to unaffected observations.

catweazle666
Reply to  Steven Mosher
June 20, 2015 3:34 pm

“yes this is why raw data SUCKS”
“The data doesn’t matter. We’re not basing our recommendations on the data. We’re basing them on the climate models.”
~ Prof. Chris Folland ~ (Hadley Centre for Climate Prediction and Research)

knr
Reply to  Steven Mosher
June 20, 2015 4:21 pm

Mosher
You cannot make data that ‘sucks’ into good data by sucking on it harder.
Or in other words, all that piling shit on top of shit gives you is a higher pile of shit, not gold.

Chris Hanley
Reply to  Steven Mosher
June 20, 2015 4:22 pm

“Scientists of course change history by deleting the bad data …”.
=================================
They are not deleting “bad data”, they are altering “bad data”, changing it into fake data to fit a predetermined narrative.

GlenM
Reply to  Steven Mosher
June 20, 2015 4:28 pm

Not RAW data, but ADJUSTED .Don’t be mischievous.

nutso fasst
Reply to  Steven Mosher
June 20, 2015 7:09 pm

…this is why raw data SUCKS”
Well, some of it does. For an extreme example of how bad “raw” can be, look at Workman Creek, Arizona, extreme high/low. There are lows of -147°F and highs up to 1,473°F.
How do these errors get into the “raw” data record? Surely the volunteers recording the temperatures didn’t record these.
I’ve compared written logs at a Coop station with “raw data” online and found discrepancies. How can this be? Could it have anything to do with so many NCDC workers being contract workers?
I wonder if the raw data doesn’t suck much more by being processed into digital form incompetently.

harrytwinotter
Reply to  Steven Mosher
June 20, 2015 9:46 pm

Steven Mosher.
A minimum temp greater than the maximum temp for a particular day may actually be correct. That is the issue with the daily reading of minimum maximum thermometers. It is possible I suspect. Cold snaps are often preceded by very mild or even warm weather in Australia.
Still, someone complaining about some bad readings is taking cherry-picking to a new extreme.

nutso fasst
Reply to  harrytwinotter
June 21, 2015 7:15 am

“That is the issue with the daily reading of minimum maximum thermometers.”
Yes, most often when the readings are in the morning. So why aren’t the MAX and MIN assigned to the same day they actually occurred in the online “raw” temperature data?
When TObs is in the morning, the MAX is almost invariably the high temperature for the previous day. I noticed this with U.S. Coop stations close to one another, one that takes readings AM and the other PM. On a day with a record MAX, the AM TObs station data shows the record MAX on the day after it actually occurred.
Do the error checking algorithms take this into account?

Bob Fernley-Jones
Reply to  harrytwinotter
June 21, 2015 11:42 pm

@Mosh and nutso fasst,
I have an email from the BoM Help Desk which advises that since 1964, the maxima and minima are spread over two days and prior to 1964 the methodology varies considerably. (paraphrasing)
A fellow researcher has strong evidence that the Oz Automatic Weather Stations (AWS) generally introduced in 1996 give excessively high readings.
I also have an email advising that AWS are not necessarily more reliable than the earlier methodology. For instance at Bourke, the AWS data was shockingly bad in a variety of ways for a decade, and the previous data was good in completeness.
The AWS’s are also famous for blocks of thousands of days of data in whole degrees and long periods of down time.

harrytwinotter
Reply to  harrytwinotter
June 22, 2015 6:22 am

nutso fast.
Any TOB bias occurs if observer practices change to a large degree eg if the guidelines change from taking observations in the morning to the evening or vice versa. If the guidelines don’t change, then there won’t be a bias. The reason for this is long-term temperature trends are based on anomalies compared to a baseline; if the average is consistently warmer or consistently cooler it won’t appear in the anomaly.
When they read a min max thermometer daily, the date used is the date the person recorded what was shown on the thermometer (they then reset the thermometer after the observation). It is not possible to tell if the max and min reading shown is from the day of the observation, or from the previous day. So depending on the weather, sometimes a max is double-counted or a min is double counted.
I can’t speak for the error detection algorithms as I don’t know the details. But I do know they can detect if a change in the measurement guidelines introduced a corresponding change to the average, the average will rise or drop from the point the change to the guideline occurred.
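The baseline point can be illustrated with a small sketch (made-up numbers, not any agency’s algorithm): a constant reading bias drops out of the anomalies, while a mid-record change of practice shows up as a step that has to be detected and handled.

```python
import numpy as np

true_temps = np.full(30, 20.0)           # a flat "true" climate over 30 years

constant_bias = true_temps + 0.5         # always read 0.5 warm: same practice throughout
changed_bias  = true_temps.copy()
changed_bias[15:] += 0.5                 # observing practice changes halfway through

def anomaly(series, baseline=slice(0, 10)):
    # Anomaly relative to the mean of the baseline years.
    return series - series[baseline].mean()

print(anomaly(constant_bias))   # all zeros: a constant offset never appears in the anomalies
print(anomaly(changed_bias))    # zeros, then a spurious +0.5 step from year 15 onward
```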

nutso fasst
Reply to  harrytwinotter
June 22, 2015 4:45 pm

harrytwinotter, change of a station’s observation time is a rare enough occurrence that, in the worst case, the day on which it occurred could be discarded with less negative effect than lumping one day’s high with the next day’s low. My studies suggest that extreme high-low discrepancies that raise error flags are more likely when stations use the AM observation time (as suggested by NCDC), and that misleading daily records are also a problem.
For example, an Arizona Coop station switched to AM observations in 1990. NCDC shows the station to have matched a record high MAX of 111°F on Aug 19, 2014. But 111°F is really the MAX for Aug 18. The MAX on Aug 19 was 88°F. When temps are assigned to the days they actually occurred and compared with pre-1990 records, you find that high temps at this location matched a previous highest MAX on Aug 18 and matched a previous lowest MAX on Aug 19.
There is no reasonable justification for assigning “official” high temps to the following day.
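Reassigning a morning-read maximum to the day it actually occurred is a simple correction in principle; a minimal sketch (hypothetical column names, not NCDC’s processing) follows.

```python
import pandas as pd

# Hypothetical columns: obs_date, obs_time ("AM"/"PM"), tmax
df = pd.read_csv("coop_station.csv", parse_dates=["obs_date"])

am = df["obs_time"] == "AM"
df["tmax_date"] = df["obs_date"]                                          # PM readings stay on the same day
df.loc[am, "tmax_date"] = df.loc[am, "obs_date"] - pd.Timedelta(days=1)  # AM-read max belongs to the previous day

print(df[["obs_date", "obs_time", "tmax", "tmax_date"]].head())
```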

Patrick
Reply to  Steven Mosher
June 21, 2015 5:52 am

The raw data is, the raw data. So there must be a fault at the measuring site and NOT the data!

nutso fasst
Reply to  Patrick
June 21, 2015 7:52 am

The older the data, the more likely it was handwritten and submitted by mail. Which suggests that its accuracy may depend on the quality of an OCR scanner.

Patrick
Reply to  Patrick
June 21, 2015 8:46 am

And OCR scanners are not that good. Thus, given we have the ability to scan the actual image of the document, just scan the document at 300dpi! No need for OCR.

Venter
Reply to  Steven Mosher
June 21, 2015 8:39 am

Your adjusted data sucks a lot more than raw data, as you screw it up beyond belief with kitchen-cooked bogus statistical methods. So the moment you lay your hands on raw data and make any adjustments, it is totally untrustworthy, as the bias lies with you, the adjuster, and you are untrustworthy. That’s the simple truth. And that applies to your buddies at NOAA / NCDC and GISS.

Reply to  Steven Mosher
June 21, 2015 9:02 pm

Steven, we agree.
“Scientists of course change history by deleting the bad data “.
……………………………………….
There is a large source of error in BEST and other simulations pre-satellite. Here in Oz, we have done many man-hours of work looking at official records compiled pre-1960. One summary of some of this work is at
http://joannenova.com.au/2015/04/two-thirds-of-australias-warming-due-to-adjustments-according-to-84-historic-stations/
We compared, station by station, pre-1960 temperatures with those from (say) 2000-2014. We cannot see the official 0.9 deg C change that is claimed for Australia over the 20th century. We can see a ‘best’ change of about 0.4 deg C warming from the late 1800s-early 1900s to now. That is what we would accept as Australian global warming, with no way to tell if it is natural or manmade. That is why we good scientists, hard scientists, are cynical about other scientists who change history by deleting good data.
The error arises because the data submitted to BEST and other places has already been homogenised. Pre 1960, the science of weather recording was of high importance in Australia and some of the better scientists of the day worked on it. It is entirely plausible that, before the Official Australian Year Books data went to print, experts had pored over it in great detail to delete or adjust the bad data.
So there is no need to do an Acorn on an Acorn exercise unless a new wrinkle is discovered on the old Acorn. Your automated break point detection might sometimes be doing no more than adjusting or reversing deliberate steps inserted for good reason by competent early scientists who had the feel of the data and who were there at the time.
Unfortunately, since the metadata for this work is not adequate for reconstruction, the whole pre-1970 Australian temperature record is, as I claimed before, unfit for purpose.
Unfortunately again, nobody has retained information that can refute my assertions here.
I suggest you start a series of acronym changes that progressively go “BEST”, “BETTER” and “NEARING REALITY”.

Reply to  Geoff Sherrington
June 21, 2015 10:36 pm

Geoff, here’s the day-to-day change (anomaly) in min and max, averaged by year for Australian stations with at least 360 samples per year; this is the data in the NCDC Global Summary of the Day.
Think of min as the difference between how much temps went up during the day and how much it cooled the following night. [chart image]
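For readers who want to reproduce that kind of summary, a minimal sketch follows, using synthetic data in place of the NCDC Global Summary of the Day files; the column names and toy values are assumptions, not the real GSOD schema.

```python
# Minimal sketch (synthetic data standing in for GSOD): average the
# day-to-day change in MIN and MAX by year, keeping only station-years
# with at least 360 daily samples.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
dates = pd.date_range("2010-01-01", "2012-12-31", freq="D")
df = pd.DataFrame({
    "station": "TOY001",
    "date": dates,
    "tmin": 10 + 8 * np.sin(2 * np.pi * dates.dayofyear / 365) + rng.normal(0, 1, len(dates)),
    "tmax": 22 + 8 * np.sin(2 * np.pi * dates.dayofyear / 365) + rng.normal(0, 1, len(dates)),
})

df = df.sort_values(["station", "date"])
df["dmin"] = df.groupby("station")["tmin"].diff()   # day-to-day change in MIN
df["dmax"] = df.groupby("station")["tmax"].diff()   # day-to-day change in MAX
df["year"] = df["date"].dt.year

# keep station-years with at least 360 samples, then average by year
counts = df.groupby(["station", "year"])["tmin"].transform("count")
annual = df[counts >= 360].groupby("year")[["dmin", "dmax"]].mean()
print(annual)
```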

Phill
Reply to  Steven Mosher
June 22, 2015 8:12 am

Steve, you have it completely in reverse. The raw data has no days where the minimum exceeds the maximum, as quality control ensures this doesn’t happen. The 917 days are in the new homogenised data set. Minimums are homogenised separately from maximums and the original quality control is overridden.
Phill

FrankKarrv
June 20, 2015 2:36 pm

I think that the review panel are well aware of the problems but needed to come down softly on the BOM to avoid major embarrassment. This is quite evident from the whole tone of its findings.

bobl
Reply to  FrankKarrv
June 22, 2015 2:05 am

I agree Frank, the whole tone was to play down the problems, but the way I read the report it says (paraphrasing): “We acknowledge the need for homogenisation, but think the way you have done it is totally incompetent.” “Opportunity to improve it” being governmentese for “You really screwed that one up, sunshine”.
My only real problem is that this set should be maintained as: raw value, transformation (correction offset or function), reason, output value. I never destroy or overwrite data; I take the raw values and process them through a filter containing the corrections to produce a final result. This is pretty much self-documenting: if errors are found in corrections they become trivial to fix, because no individual data points ever get changed, just the transformation applied. Supervised or not, this is how the data should have been represented; anything short of this is effectively unmaintainable. If it is not this way, then they should flush ACORN down the toilet and start again.
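A minimal sketch of the record-keeping bobl describes might look like the following; the class names, station, date and offset are illustrative placeholders, not anything ACORN actually uses.

```python
# Minimal sketch: the raw observation is never overwritten; each adjustment
# is stored alongside it with its reason, and the output value is recomputed
# by running the raw value through the stored corrections.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Adjustment:
    reason: str
    apply: Callable[[float], float]   # offset or more general function

@dataclass
class Observation:
    station: str
    date: str
    raw: float
    adjustments: List[Adjustment] = field(default_factory=list)

    def output(self) -> float:
        value = self.raw
        for adj in self.adjustments:
            value = adj.apply(value)
        return value

obs = Observation("Example Town", "1951-07-14", raw=2.1)
obs.adjustments.append(Adjustment("site move to airport", lambda t: t - 0.6))
print(obs.raw, "->", round(obs.output(), 2))
# Fixing a bad correction means editing one Adjustment; the raw value is untouched.
```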

Chris Hanley
June 20, 2015 5:01 pm

The Australian BOM temperature record may be unique in that it so neatly fits the IPCC narrative:
http://www.bom.gov.au/state-of-the-climate/images/fig3.png
(BOM).

harrytwinotter
Reply to  Chris Hanley
June 20, 2015 9:53 pm

Chris Hanley.
“The Australian BOM temperature record may be unique in that it so neatly fits the IPCC narrative:”
Can you explain this claim? The graph you show does not appear to contain any IPCC data or narrative.
The Australian surface temperature does follow an upward trend similar to that found for the globe by other teams, but this has nothing to do with the IPCC.

Chris Hanley
Reply to  harrytwinotter
June 20, 2015 10:27 pm

Can you explain this claim?
======================
I might have if you had used your real name.

Patrick
Reply to  harrytwinotter
June 21, 2015 8:51 am

So just ignore Chris.

kim
Reply to  harrytwinotter
June 22, 2015 2:25 am

The cool thing is that it was warped to fit the narrative, yet there is no need for any individual region’s temp record to fit the overall narrative.
Given away by a guilty conscience? No, but something related.
=======================

kim
Reply to  harrytwinotter
June 22, 2015 2:29 am

Though the Ozites are hardly parochial, it may have something to do with insularity. Perhaps there was a need to demonstrate to an insulated population that they weren’t so insulated after all.
I dunno, the warp may have been completely accidental, purely a function of the algorithm. But one must suspend critique to accept that easily.
===========

toorightmate
Reply to  Chris Hanley
June 21, 2015 4:29 am

I am with harrytwinotter.
Chris Hanley, please answer the query or we simply assume you are unable to.
Accusing past BOM personnel and others of collecting data which understates pre-1960s temperatures in Australia is pure unadulterated bunkum.
In my working life, I only had one experience of a highly qualified scientist who doctored data. That person disgusts me (then and now). In 2015 we have a multitude of “scientists” who are doctoring data. I am also disgusted with them.
Shame.

kim
Reply to  toorightmate
June 22, 2015 2:32 am

Nope, not tasty at all. Early surfeit, yes indeed.
================

Phil B
June 20, 2015 5:17 pm

The solution to this is to install a 5km by 5km grid, with weather stations at the corners, all running 24/7 and uploaded to a freely accessible central database.
Then no homogenization is needed and no data manipulation need ever be undertaken.
Until then we have no temperature record, just a model of what some people think it should have been.
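For scale, a quick back-of-envelope count of what such a grid would require, taking Australia's land area as roughly 7.69 million square kilometres (corner stations are shared between cells, so the cell count is a fair approximation):

```python
# Rough count of stations for a 5 km x 5 km national grid (assumed area figure)
area_km2 = 7_690_000
cell_km2 = 5 * 5
print(round(area_km2 / cell_km2))   # roughly 307,600 stations
```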

Reply to  Phil B
June 20, 2015 8:42 pm

“Until then we have no temperature record, just a model of what some people think it should have been.”
While that may be, it’s not going to stop them from claiming they do.
My approach uses their own data, does no infilling or homogenization, and shows no sign that CO2 has reduced nighttime or year-to-year cooling.
That signal is hidden when daily average temps are used instead of separate min and max temp trends.
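The distinction being drawn, trends in min and max fitted separately rather than a single trend in the daily average, can be illustrated with a short sketch on synthetic numbers (the slopes are invented, not a claim about any real station):

```python
# Minimal sketch: fit separate linear trends to annual-mean MIN and MAX
# rather than to the daily average (min+max)/2.
import numpy as np

years = np.arange(1960, 2015)
rng = np.random.default_rng(2)
tmax = 25 + 0.010 * (years - 1960) + rng.normal(0, 0.3, years.size)  # MAX trend
tmin = 12 + 0.002 * (years - 1960) + rng.normal(0, 0.3, years.size)  # flatter MIN trend

for name, series in [("max", tmax), ("min", tmin), ("avg", (tmax + tmin) / 2)]:
    slope = np.polyfit(years, series, 1)[0]
    print(f"{name}: {slope * 10:.3f} C per decade")
```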

1sky1
June 20, 2015 5:37 pm

Emblematic of how governments approach the climate debate is the absence of anyone expert in geophysical data acquisition and analysis on the Aussie review panel.
Emblematic of how unfit for determining the actual climate change the extant station records are is the uncritical acceptance of the “need” for ad hoc homogenization by both sides of the debate.
That there is no adequate means of recognizing systematic errors in most cases, let alone reliable means of compensating for them, seems to escape the bureaucratic mind. Sow’s ears will continue to be made into silk purses, with the ceremonial blessings of irrelevant academics.

EternalOptimist
June 20, 2015 5:48 pm

Mosher says the raw data sucks and the adjusted series are a prediction. It’s a good job nobody’s life depends upon this confection.

Siberian Husky
June 20, 2015 9:13 pm

BoM’s technical advisory forum said ACORN-SAT was a complex and well-maintained data set. Public submissions about BoM’s work “do not provide evidence or offer a justification for contesting the overall need for homogenisation and the scientific integrity of the bureau’s climate records.”
But that won’t stop the tin foil hat brigade here.

harrytwinotter
June 20, 2015 9:32 pm

It is a good report. Overall the forum were happy with the work done by the Australian BOM.
“The Forum concludes that ACORN-SAT is a complex and well-maintained dataset. In fulfilling its role of providing advice on the ongoing development and operation of ACORN-SAT, the Forum also concludes that there is scope for improvements that can boost the transparency of the dataset and increase its usefulness as a decision-making tool.”

Patrick
June 21, 2015 5:42 am

As I have said before, 112 stations to “calculate” the average surface temperature of a continent the size of Aus, really? That is, if we “average” (Heh!) them. That’s 1 device for every ~68,500 square kilometers of land! Ridiculous!
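The arithmetic behind the ~68,500 km² figure is simply Australia's land area divided by the station count (area rounded to 7.69 million square kilometres):

```python
# Quick check of the one-station-per-area figure quoted above
area_km2 = 7_690_000
stations = 112
print(round(area_km2 / stations))   # ~68,660 km2 per station
```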

richard verney
Reply to  Patrick
June 21, 2015 12:06 pm

And what about the globe!
Clearly there is a lack of spatial coverage to enable one to draw any firm conclusions.

Reply to  richard verney
June 21, 2015 1:23 pm

You can’t draw a conclusion about the globe; you can draw a conclusion about what is measured by surface stations. The rest, not being measured, is IMO unknowable.

Patrick
Reply to  richard verney
June 21, 2015 7:52 pm

Yes, exactly and it’s totally meaningless!

1sky1
Reply to  Patrick
June 22, 2015 5:18 pm

Actually, because there is substantial spatial homogeneity of temperature variations at scales of several hundred kilometers, 112 stations would be more than adequate. The rub is that they are unevenly distributed geographically, are of very different record lengths, and vary immensely in terms of UHI effects. Separating the wheat from the chaff in extracting the regional signals is the real challenge.

Reply to  1sky1
June 23, 2015 5:58 am

Actually, because there is substantial spatial homogeneity of temperature variations at scales of several hundred kilometers,

Well, that’s what’s claimed, but it isn’t true. Where I live there’s a 10-20F swing based on where the jet stream is running today; the temperature field is not linear, and it isn’t just on the coasts.

Pamela Gray
June 21, 2015 6:42 am

Australia has a boarding house full of biased investigators. The overall error here is using temperature to infer a biased cause. The proper order is to understand what would naturally produce a rise in temperature and then see if it does. When the oceans are calm, it is well understood that warm water evaporates and sends heat into the atmosphere. This heat would show up as warmer, more humid conditions. Does it? Why yes it does. I have experienced the normally cold Oregon Pacific coastline when the wind has died down. It gets hot and humid. Fortunately it doesn’t happen very often. I would rather leave all the heat and humidity where it normally occurs, i.e. Indonesia and places like that. Since it is well understood why Indonesia has a hot and humid climate, if the conditions that cause this heat and humidity occur elsewhere, it is reasonable to assume that temperatures will rise. And sure enough, that happens.
What amazes me is the sheer number of climate scientists who disregard the large body of research confirming this naturally occurring relationship between oceanic-atmospheric conditions and land temperatures, and instead blindly beat the drum of human-sourced CO2. That they do so blindly can easily be seen in the research record. Many times, and even now, biased researchers try to show that human-sourced CO2 is causing an increase in the occurrence of this oceanic-atmospheric heat-producing setup, only to have observations kill those notions one by one. The desperation is so rampant they even try to blame human-sourced CO2 when it gets cold.
Just fricken amazing.

Toneb
June 21, 2015 11:53 am

Pamela,
Actually, I find it “fricken amazing” that you imagine that “the sheer number of climate scientists who disregard this large body of research that has confirmed this naturally occurring relationship between oceanic-atmospheric conditions to produce land temperatures to instead blindly beat the drum of human-sourced CO2.” …….. haven’t !
And have found it inadequate to explain continued warming, not least in the oceans themselves. Now, do you think the heat emerging from the oceans has come from anywhere other than the Sun? And given you agree that it obviously has, how do you explain the fact that overall TSI has been slowly ramping down over the last 50 years? The conclusion must be that more solar energy is being retained than used to be … and guess what? That matches empirical science’s known conclusions from ~150 years of investigation (minus internal natural cycles).

Bob Fernley-Jones
Reply to  Toneb
June 21, 2015 11:12 pm

@Toneb,
One problem, of course, is that the oceans are chaotically dynamic: how can you possibly determine such a massive thermal sink’s average temperature to a hundredth of a degree when it won’t sit still? I think we’re up to 13 hypotheses as to why it is conjectured that heat is entering the oceans and staying there longer than before in recent decades.
Do you support one of these 13 or so, or do you have your own hypothesis?

Toneb
Reply to  Bob Fernley-Jones
June 22, 2015 8:33 am

Bob:
IMO there is no conjecture on the fact that the heat entering the oceans comes from the Sun. All the heat in the climate system comes from the Sun. The only other sources could be geothermal (which would be detectable via deep convection and would need to be massive anyhow) or the air (nope, not that either, as the oceans overwhelmingly heat the air and not vice versa). The oceans are a storage radiator (containing ~93% of climate heat) and move it around, sometimes hiding it, sometimes releasing it.
That is my hypothesis and the only one that makes logical sense. Some heat is also being lost via LH uptake to melt Arctic ice.

Bob Fernley-Jones
Reply to  Bob Fernley-Jones
June 22, 2015 5:19 pm

@Toneb,
I think you misunderstood the question. No one disputes that all those naughty ergs in the oceans originate almost entirely from the sun. However, some doubters of a pause in warming seem to believe that, under various hypotheses (13?), the rate at which heat enters and leaves the oceans has mysteriously changed.
Are you believing in one of the 13, or do you have your own hypothesis?

getitright
June 21, 2015 12:09 pm

We may feel suitably removed from superstition and biases, but remember, just up the road a ways from Oz there are still people who believe getting naked angers the volcano gods.

Bob Fernley-Jones
June 21, 2015 4:26 pm


The terms of reference (1) appear to have been hurriedly drafted and lack the normal things like publication date, signatures of authority, or departmental logos. ‘Document properties’ for this PDF reveal that they were authored by a government staffer and accessed once in early March 2015 some two weeks before the joke of a one-day review between the “forum” and the BoM (2).
(1) http://www.environment.gov.au/minister/baldwin/2015/pubs/technical-advisory-forum-tor.pdf
(2) http://www.environment.gov.au/minister/baldwin/2015/pubs/communique-20150326.pdf
The “independent forum” panel was announced in January 2015 and is planned to meet with the BoM for one day each year.

June 21, 2015 11:49 pm

Pre-1910 temperatures were a key part of my submission among the 20 to the ACORN Technical Advisory Forum.
My submission was essentially an invitation for panel members to look at the detail of http://www.waclimate.net/year-book-csir.html, where I’ve analysed historic records published in 1933 by Australia’s CSIR of annual average minima and maxima from 1855 to 1931 at 226 weather stations across the country, as well as 1911-1940 averages at 44 stations published in Australia’s 1953 Year Book.
The 226 CSIR stations suggest 0.5C unadjusted mean warming across Australia from <1931 to 2000-14, while the 44 Year Book stations show 0.4C mean warming since 1911-40. The 1954 Year Book provides more raw station records, totalling 84 weather stations that had a mean temperature increase of 0.3C from 1911-40 to 2000-14.
The historic CSIR and Year Book records encompass a network more than twice as large as ACORN and suggest homogenisation has doubled, possibly tripled Australia's mean temperature increase over the past century to 0.9-1.0C. Many of the temperature records analysed from the documents pre-date what is publicly available in the BoM's raw or ACORN datasets.
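The comparison described above is, in essence, a station-by-station differencing of period means; a minimal sketch with placeholder numbers (not the actual CSIR or Year Book values) looks like this:

```python
# Minimal sketch: difference each station's early-period mean from its
# 2000-2014 mean, then average the differences across the network.
early = {"Station A": 18.2, "Station B": 21.5, "Station C": 15.9}   # e.g. 1911-40 means
recent = {"Station A": 18.6, "Station B": 21.8, "Station C": 16.3}  # e.g. 2000-14 means

diffs = [recent[s] - early[s] for s in early if s in recent]
print(f"network mean change: {sum(diffs) / len(diffs):.2f} C over {len(diffs)} stations")
```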
The page linked above considers the shelter screen question, with most evidence suggesting a 0.0-0.2C difference between annual means recorded in Glaisher and Stevenson screens, although many Australian locations used neither shelter in the 1800s. The 1911-40 Year Book temperatures were all from Stevenson screens, and the influence of non-Stevensons in a small proportion of total averages from 1855 to 1931 is unlikely to have caused even a 0.1C influence. Conversely, the page considers UHI, which has had an unknown influence since the late 1800s (ditto Airport Heat Island).
Station relocations, mostly from town centres to outlying airports, are not considered as each has a unique warming or cooling influence that may or may not be worthy of adjustment. The analysis also doesn't reference the use at most locations since 2000 of Automatic Weather Stations, which evidence suggests have contributed to warming trends. Also ignored is the whole .0 rounding of more than 50% of all Fahrenheit temperatures recorded before 1972 metrication, which the BoM concedes may be responsible for a 0.1C warming at that time but won't adjust because the early 1970s were also Australia's wettest and cloudiest period on record (?).
In my opinion, these numerous competing artificial influences result in averages that are no more or less reliable than ACORN, bearing in mind the latter dataset calculates Australia's hottest ever day was 51.7C in the cool south coastal town of Albany in 1933 (that day was 44.8C in raw). ACORN has a lot more problems than minima warmer than maxima and it's unlikely to deserve the review's endorsement as a "well maintained" dataset.
If adopted, several of the review recommendations will allow more accessible and transparent research. I hope the BoM takes action to expand the ACORN timeline back to the 1800s, digitises its mountain of Fahrenheit temp observations before 1957, and properly considers all of the many artificial variables that are or are not included within its adjustments.

Kev-in-Uk
Reply to  waclimate
June 22, 2015 12:14 am


An adjustment from 44.8 to 51.7C?? That is over a 10% adjustment!? WTF? I’ve just seen your post and will have a look at your link later, but I have to ask whether that adjustment is reasoned/explained anywhere?

Reply to  Kev-in-Uk
June 22, 2015 11:40 pm

Kev-in-Uk … yes, there is some explanation by ACORN architect Blair Trewin on p64 of Techniques Involved in Developing the ACORN-SAT Dataset (http://cawcr.gov.au/publications/technicalreports/CTR_049.pdf) …
“At a small number of locations, the PM95 method could not adequately homogenise some extremes. All such locations were coastal locations where the observation site moved from a highly exposed coastal site to a site further inland. The issue is most acute for summer maximum temperatures; on coasts with a strong land-sea temperature contrast, the coastal-inland temperature difference tends to increase with increasing temperature, before collapsing to near zero on the very hottest days as offshore winds override any marine influence (Fig. 14). The 95th or 99th percentiles are insufficient to resolve this behaviour at some locations (where extreme heat with strong offshore flow occurs on much less than 1% of days), leading in some cases to highly unrealistic adjustments for the most extreme values. Figure 20 shows an example of this at Albany – the differences between the airport and town sites for the 95th and 99th percentiles of summer maximum temperatures were between 6-8°C (Figure 15), and hence pre-1965 extreme high temperatures at the town were typically adjusted upwards by this amount, but on the very hottest days the difference between the two sites is in reality near zero (Fig. 14), resulting in a few very unrealistic values (e.g., a value from the town site of 44.8°C in 1933 was adjusted to a clearly unrealistic 52.5°C). The method produces a realistic time series for mean temperatures but a highly unrealistic one for extremes.”
So the 51.7C has seemingly been corrected from 52.5C originally calculated by ACORN.
If you compare ACORN daily and RAW daily for the southern town of Albany via the BoM website, you’ll find that just about everything above 30C is grossly exaggerated, starting with …
2 Feb 1940 – raw 39.7C / ACORN 46.1C
6 Feb 1940 – raw 40.8C / ACORN 47.2C
12 Jan 1958 – raw 41.7C / ACORN 47.9C
12 Feb 1915 – raw 42.8C / ACORN 46.8C
14 Mar 1922 – raw 40.8C / ACORN 46.0C
20 Jan 1930 – raw 41.1C / ACORN 46.8C
24 Jan 1948 – raw 40.6C / ACORN 46.8C
26 Feb 1934 – raw 38.8C / ACORN 45.2C
23 Dec 1945 – raw 41.1C / ACORN 45.1C
02 Jan 1954 – raw 39.5C / ACORN 45.7C
11 Jan 1954 – raw 38.9C / ACORN 45.1C
22 Jan 1956 – raw 39.3C / ACORN 45.5C
05 Jan 1969 – raw 45.6C / ACORN 45.7C
31 Jan 1991 – raw 45.3C / ACORN 45.4C
18 Feb 1910 – raw 40.1C / ACORN 44.1C
27 Jan 1931 – raw 39.1C / ACORN 44.8C
25 Feb 1932 – raw 37.8C / ACORN 44.2C
07 Feb 1943 – raw 38.3C / ACORN 44.3C
02 Feb 1945 – raw 38.1C / ACORN 44.1C
16 Jan 1953 – raw 38.3C / ACORN 44.5C
26 Jan 1957 – raw 38.0C / ACORN 44.2C
25 Feb 1957 – raw 38.4C / ACORN 44.4C
05 Jan 1965 – raw 38.0C / ACORN 44.2C
12 Feb 1914 – raw 39.7C / ACORN 43.7C
etc, etc down to about raw 30C. Each of these single-day errors increases Albany’s relevant monthly mean max by about 0.2C and is carried through all ACORN calculations to estimate Australia’s official temperature trend since 1910. Because of an observation shift from Albany’s town centre 11 km north to the airport in 1965, ACORN adjusts pre-1965 temperatures upward to counter the warmer inland daytime climate. However, raw mean monthly summer max is only about 2C warmer at the airport than in Albany itself, compared with the average 5.25C difference in the examples listed above.
This is one of those rare situations where an inbuilt ACORN error actually increases rather than decreases historic temps, and this is apparent if you view the RAW and ACORN timeline chart temperature trends for Albany. Nevertheless, the joke of 51.7C at Albany as the hottest day ever recorded anywhere in Australia has been referenced on various skeptical websites and other forums for almost a year, but the BoM doesn’t seem interested in doing a few hours’ work to correct the Albany database.
As in my earlier comment, this is the ACORN Technical Advisory Forum’s definition of “well maintained”.
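To see how a percentile-matched transfer can overshoot on the very hottest days, as Trewin's explanation describes, here is a minimal sketch with synthetic values; it is emphatically not the BoM's actual PM95 code, and the 7C offset is an assumption standing in for the Albany figures.

```python
# Minimal sketch: if the town/airport difference is ~6-8 C at the 95th/99th
# percentiles but collapses to near zero on the very hottest days, applying
# the percentile-derived offset to those hottest days overshoots badly.
import numpy as np

rng = np.random.default_rng(3)
town_summer_max = np.concatenate([rng.normal(26, 4, 2000),   # typical summer days
                                  rng.normal(44, 1, 5)])      # rare extreme heat days

percentile_offset = 7.0   # assumed offset derived from the overlap period's upper percentiles

adjusted = town_summer_max + percentile_offset   # naive: same offset applied to every day
print("raw hottest day:      ", round(town_summer_max.max(), 1))
print("adjusted hottest day: ", round(adjusted.max(), 1))    # overshoots by ~7 C
```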

Bob Fernley-Jones
Reply to  waclimate
June 22, 2015 12:31 am

@waclimate,
I made a substantial submission too, and one of the things that annoys me is that, after all the hard work detailing inconvenient FACTS, and although the Hon MP Mr Bob Baldwin’s office confirmed that it had been forwarded to the “forum” (presumably to chair Ron Sandland, ex CSIRO), I’ve had no advice since and don’t actually know if it was ever given a tracking identity or even if it now sits in some file somewhere.
I’m aware of several others who are perplexed with somewhat similar experiences.

kim
Reply to  Bob Fernley-Jones
June 22, 2015 2:39 am

I’ve long wondered how they expect to maintain the charade. Do they really believe an end point, or are they just hoping against hope their impossible visions will come true?
=================