Where the hell is Taralga? (Latitude -34.4048, Longitude 149.8197; BoM ID 94735)

Dr. Bill Johnston.

(Former NSW natural resources research scientist.)

Once a staging post on the track to Oberon and Bathurst, the delightful little village of Taralga is 44 km (27 miles) north of Goulburn, New South Wales, and 135 km (84 miles) from Canberra. The post office (Figure 1) has been in the news recently because it is one of several weather stations where the Bureau of Meteorology was caught out deleting low minimum temperature (Tmin) values for “quality control” reasons. Stories have been published in The Australian newspaper about deletions at Goulburn airport and Thredbo; there may be others.


Figure 1. The Taralga post office (left) and Court House in 1910 (National Archives of Australia (NAA)).

Because it is used to homogenise a 1964 Tmin time-of-observation change at Canberra airport, Nowra RAN, Richmond RAAF and Sydney Observatory, which are ACORN sites (Australian Climate Observations Reference Network – Surface Air Temperature) used to calculate Australia’s warming, it’s worth sleuthing Taralga’s data, especially since the site’s metadata are sparse and incomplete (the earliest site plan is from 1998, and there is no mention of when the current 0.06 m³ small screen replaced the former 0.23 m³ large one).

Faulty ACORN data won’t be properly adjusted using other faulty data.

Analysis of daily data available from 1957 shows average annual maxima (Tmax) stepped up a hefty 0.96°C in 1965 and 0.95°C in 2004 (0.71°C and 0.91°C rainfall-adjusted) (Figure 2). Two step-changes result in three data segments whose relationship with rainfall is linear and statistically parallel. Accounting for step-changes and rainfall explains 60.2% of Tmax variation, and although residuals are variable in the time domain (due to synoptic, site and missing-data effects) there is no additional hidden trend suggestive of climate warming. (Years having appreciable missing data (1970 to 1986) don’t make much difference overall, so are treated as valid data.)
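The parallel-lines model described here (one rainfall slope shared across the three segments, plus a separate intercept per segment) can be sketched in a few lines of Python. The data below are synthetic and purely illustrative; the segment sizes, step magnitudes and slope are assumptions echoing the numbers quoted above, not the Taralga record.

```python
import random

random.seed(1)

# Synthetic annual (rainfall, Tmax) pairs for three segments
# (pre-1965, 1965-2003, 2004 on), built with a common rainfall slope.
def simulate(n_years, intercept, slope=-0.0021):
    data = []
    for _ in range(n_years):
        rain = random.gauss(700.0, 150.0)              # mm/year
        tmax = intercept + slope * rain + random.gauss(0.0, 0.2)
        data.append((rain, tmax))
    return data

segments = [simulate(8, 21.0),     # baseline
            simulate(39, 21.7),    # ~0.7 C up-step in 1965
            simulate(10, 22.6)]    # further ~0.9 C up-step in 2004

# Common (parallel) slope from pooled within-segment deviations.
sxy = sxx = 0.0
for seg in segments:
    mean_r = sum(r for r, _ in seg) / len(seg)
    mean_t = sum(t for _, t in seg) / len(seg)
    for r, t in seg:
        sxy += (r - mean_r) * (t - mean_t)
        sxx += (r - mean_r) ** 2
slope = sxy / sxx

# Each segment's intercept, then rainfall-adjusted step sizes.
intercepts = [sum(t for _, t in seg) / len(seg)
              - slope * sum(r for r, _ in seg) / len(seg)
              for seg in segments]
steps = [intercepts[k + 1] - intercepts[k] for k in range(2)]

print(f"Tmax change per 100 mm rain: {100 * slope:+.2f} C")
print(f"rainfall-adjusted steps: {[round(s, 2) for s in steps]}")
```

With rainfall held constant, the intercept differences are the rainfall-adjusted step sizes; on real data one would also test that the segment slopes really are statistically parallel before pooling them.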

Local evaporation, which removes heat as latent heat (2.45 MJ per kg of water evaporated), cannot exceed rainfall (1 mm = 1 kg/m²); thus, provided the yard is not watered, a robust negative relationship is expected between Tmax and rainfall. This is confirmed statistically: rainfall reduces annual average Tmax by 0.21°C per 100 mm of annual rainfall.
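The energy bookkeeping behind that statement is easy to check: 1 mm of rain on 1 m² is 1 kg of water, and evaporating each kilogram consumes about 2.45 MJ, so annual rainfall caps the heat evaporation can remove. A minimal sketch (the 700 mm annual total is an assumed round figure, not a measured value):

```python
LATENT_HEAT_MJ_PER_KG = 2.45            # latent heat of vaporisation
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def max_evaporative_cooling(annual_rain_mm):
    """Upper bound on the mean evaporative heat flux (W/m^2) if
    every millimetre of rain evaporates on site: 1 mm = 1 kg/m^2."""
    energy_j_per_m2 = annual_rain_mm * LATENT_HEAT_MJ_PER_KG * 1e6
    return energy_j_per_m2 / SECONDS_PER_YEAR

flux = max_evaporative_cooling(700.0)   # assumed annual rainfall
print(f"{flux:.1f} W/m^2")              # ~54 W/m^2 ceiling
```

Runoff, soil storage and export of moisture all reduce the actual flux below this ceiling, which is why the relationship appears as a negative slope against rainfall rather than a fixed offset.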

A Tmin step-change in 1973 (0.47°C) aligns with metrication, and its magnitude may be affected by missing data (Figure 2). Perhaps the Fahrenheit thermometer was faulty; protocols were also tightened up and many sites were visited and made more compliant (as there is not much point in changing the thermometer if sites are in a poor state, the screen may have been repaired, painted or moved to improve exposure).

Rainy years are cloudy, which reduces outgoing nighttime long-wave emission, causing Tmin to be warmer. However, as cloudy days don’t always bring rain, the effect is often not significant for particular sites. At Taralga it is. Cloudiness associated with rainfall causes average Tmin to increase 0.07°C per 100 mm; the step-change and rainfall together explain 20.0% of Tmin variation, and there is no residual hidden trend.


Aerial photographs from 1944, 1952 and 1989 at the National Library of Australia may throw some light on the problem but have not been accessed. However, as part of another study, a visit in April 2016 found a small screen, well exposed and maintained (Figure 3). The serial number shows it was made in 2002 and thus probably installed in 2003, just before the Tmax up-step in 2004. So a link is established between a site change and the 2004 Tmax up-step.

Figure 2. A step-change in Tmax in 1965 indicates exposure of the Stevenson screen changed. The step-change in 2004 aligns with replacement of a large (0.23 m³) Stevenson screen with a small one (0.06 m³). The 1973 Tmin step-change aligns with metrication. Segmented regressions (right) are free-fit to show the relationships between T and rainfall are robust and not coerced by the analysis. Despite variation due to missing data, lines are statistically parallel and median-rainfall-adjusted differences are statistically significant. Accounting for step-changes and rainfall leaves no unexplained residual trend. (Dotted lines indicate average T and median rainfall.)

Mysteries remain. At NAA a 1942 post office plan possibly relates to replacing the verandah to accommodate a telephone exchange. In 1946 a timber-framed lavatory was erected in the yard, which isn’t there anymore. There are other notes up to 1985; and who knows when the yard was fenced-in on two sides with steel cladding?


Figure 3. The small Stevenson screen’s serial number indicates it was made in 2002 and probably installed in 2003. The previous large screen was likely to have been at the end of the concrete path on the right, which leads from the post office. Interestingly, there is a 5-inch (127 mm) raingauge but a standard 8-inch (203 mm) Rimco tipping-bucket gauge for rainfall intensity. (The small concrete pad was for a previous pluviometer.)

Changing the screen size affects the data.

Data are split into decade-long segments each side of the step-change (about 3600 data-days/tranche) and daily temperature distributions are compared. Sounds complicated … but it’s actually simple. We know a Tmax step-change happened and what caused it; distributions provide insights into the shape of the change.


Thought of as smoothed histograms, frequency is often visualised as probability density plots (Figure 4). But wait … we can also do exploratory statistical tests. The Kolmogorov-Smirnov test for equal distributions found pre- and post-2004 Tmax distributions are different (Psame < 0.05) while those for Tmin are the same; likewise the Mann-Whitney test for equal medians finds Tmax medians are different [17.5°C (pre) vs. 18.5°C (post)], while Tmin medians are not (6.0°C vs. 5.8°C). However, as will be apparent, density plots and statistical tests don’t visualise how post-2004 data differ from those measured before 2004 in the large screen.
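For readers wanting to reproduce this style of test: the two-sample Kolmogorov-Smirnov statistic is simply the largest gap between two empirical CDFs. A pure-Python sketch on synthetic stand-in decades (normal samples with an assumed 1.0°C shift, not the Taralga data):

```python
import random

def ks_statistic(a, b):
    """Largest vertical gap between the two empirical CDFs."""
    a, b = sorted(a), sorted(b)
    i = j = 0
    d = 0.0
    while i < len(a) and j < len(b):
        x = min(a[i], b[j])
        while i < len(a) and a[i] == x:   # step past ties together
            i += 1
        while j < len(b) and b[j] == x:
            j += 1
        d = max(d, abs(i / len(a) - j / len(b)))
    return d

random.seed(0)
n = 3600                                   # ~one decade of daily values
pre = [random.gauss(17.5, 6.0) for _ in range(n)]
post = [t + 1.0 for t in pre]              # assumed 1.0 C up-step

crit = 1.36 * (2 / n) ** 0.5               # approx. 5% critical D, equal n
d = ks_statistic(pre, post)
print(d > crit)                            # the shifted decade tests "different"
```

In practice `scipy.stats.ks_2samp` and `scipy.stats.mannwhitneyu` do this with proper p-values; the sketch just shows what the statistic measures.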

Figure 4. Probability density plots of Tmax and Tmin, each calculated over identical temperature ranges for the decade before 2004 (black line) and the decade after the small screen was installed (red dashed line). The Kolmogorov-Smirnov test for equal distributions indicates the density distributions for Tmax are not the same; Tmin distributions are not different at the P < 0.05 level of significance.

What are percentiles?

Percentile temperatures are daily values ranked by 1%-frequency increments: 1% of observations are less than the 1st percentile, 2% are less than the 2nd percentile, and so on. At each end of the data range, upper and lower extremes are usually calculated as values greater than the 95th percentile and less than the 5th. There are other convenient breakpoints: 25% of daily temperatures are less than the lower quartile (25th percentile); 25% exceed the upper quartile (75th percentile); and the mid-point temperature (which may be different to the mean) is the median (50th percentile).
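This bookkeeping can be done with the Python standard library alone: `statistics.quantiles(data, n=100)` returns the 99 cut points (the 1st to 99th percentiles). A toy sample of 200 “daily temperatures”:

```python
from statistics import median, quantiles

data = list(range(1, 201))        # 200 toy "daily temperatures"
pct = quantiles(data, n=100)      # pct[0] = 1st percentile ... pct[98] = 99th

lower_quartile = pct[24]          # 25th percentile
mid = pct[49]                     # 50th percentile, i.e. the median
upper_quartile = pct[74]          # 75th percentile

# By construction, ~5% of observations fall below the 5th percentile.
below_p5 = sum(1 for x in data if x < pct[4]) / len(data)
print(mid, below_p5)              # 100.5 0.05
```

The same calls applied to each decade of daily Tmax or Tmin give the percentile series compared in Figure 5.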

Percentile differences throw light on the nature of the disturbance: did data step-up uniformly across the percentile range; randomly; or did some sections of the data distribution change systematically?

Percentiles calculated for the decade before 2004 (the reference) are subtracted from those for the decade after (Figure 5). The expectation is that Tmax differences will be random around the up-step value of 0.95°C; and, as there was no up-step in Tmin, random each side of zero. Differences are appraised graphically (Figure 5).
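The subtraction itself is a one-liner once both decades are reduced to percentiles. In this synthetic sketch the post decade is the pre decade shifted by exactly +0.95°C, so the difference curve comes out flat; the point of Figure 5 is that the real Tmax curve does not:

```python
import random
from statistics import quantiles

random.seed(2)
pre = [random.gauss(17.5, 6.0) for _ in range(3600)]   # reference decade
post = [t + 0.95 for t in pre]                         # uniform up-step

diff = [q_post - q_pre
        for q_pre, q_post in zip(quantiles(pre, n=100),
                                 quantiles(post, n=100))]

# A uniform step gives a flat difference curve at +0.95; a bias that
# grows with the temperature being measured gives a rising curve.
print(round(min(diff), 2), round(max(diff), 2))        # 0.95 0.95
```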

Small-screen Tmax is biased systematically: bias increases with the temperature being measured up to the median (17.5°C), then levels off at the 60th percentile (19.8°C) with an asymptote 1.2°C warmer than equivalent pre-2004 percentiles. So the up-step is caused by higher temperatures being measured generally, combined with upper-range bias.


Tmin percentile differences contradict the equivalence suggested by the Tmin probability density plots and the Kolmogorov-Smirnov test for equal distributions. Although median Tmin (6.0°C) lies close to zero percentile-difference (hence the medians of the respective distributions are the same), upper-quartile small-screen Tmin is skewed persistently higher by 0.2°C; and between the median and lower quartile, lower by around -0.3°C to as much as -0.6°C (Figure 5). Relative to pre-2004 percentiles, the range of the difference is about 0.5°C. Even though there is no apparent change in average (median) Tmin, the tails of the distribution are different.

Figure 5. Percentile differences [percentiles calculated for the decade 1 January 2004 to 31 December 2013, minus percentiles for the preceding decade (1 January 1994 to 31 December 2003)]. The LOWESS curve provides a visual reference. The behaviour of extremes [data greater than the 95th percentile (30.2°C for Tmax, 15.0°C for Tmin) and less than the 5th percentile (8.5°C and -2.5°C)] is highlighted.

An additional point illustrated by Figure 5 is that Tmax and Tmin temperatures less than the respective 5th percentiles (8.5°C and -2.5°C) depart from the general trajectory abruptly and systematically, adding weight to the likelihood they are adjusted up; between January 2004 and December 2013 some 140 individual values may be affected. At the warm end of the spectrum, except for the highest (100th-percentile) values, which could be random outliers, Tmax and Tmin greater than the 95th percentile (about 180 individual values) are dispersed randomly around the trajectory indicated by the LOWESS curve.
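The LOWESS reference curve used in Figure 5 is a locally weighted regression: at each point, a weighted straight line is fitted using only nearby data, with tricube weights. A minimal single-pass sketch (no robustness iterations; the linear test data are synthetic, just to show the smoother working):

```python
def lowess(xs, ys, frac=0.3):
    """Locally weighted linear smoother (single pass, tricube weights)."""
    n = len(xs)
    k = max(2, int(frac * n))          # neighbourhood size
    smoothed = []
    for x0 in xs:
        h = sorted(abs(x - x0) for x in xs)[k - 1] or 1.0  # local bandwidth
        w = [max(0.0, 1 - (abs(x - x0) / h) ** 3) ** 3 for x in xs]
        sw = sum(w)
        swx = sum(wi * x for wi, x in zip(w, xs))
        swy = sum(wi * y for wi, y in zip(w, ys))
        swxx = sum(wi * x * x for wi, x in zip(w, xs))
        swxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
        denom = sw * swxx - swx * swx
        if abs(denom) < 1e-12:                 # degenerate: weighted mean
            smoothed.append(swy / sw)
        else:
            b = (sw * swxy - swx * swy) / denom
            a = (swy - b * swx) / sw
            smoothed.append(a + b * x0)
    return smoothed

# On exactly linear data the smoother reproduces the line.
xs = [float(i) for i in range(1, 100)]     # percentile ranks 1..99
ys = [0.5 + 0.01 * x for x in xs]          # a linear "difference curve"
fit = lowess(xs, ys)
print(max(abs(f - y) for f, y in zip(fit, ys)) < 1e-6)
```

For real work `statsmodels.nonparametric.lowess` provides the full robust version; the point here is only that points far off the smooth curve (like the sub-5th-percentile values above) stand out against the local fit.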

Conclusions.

  • Taralga is one of hundreds of cases from across Australia (many of them ACORN sites) where the change from large Stevenson screens, in use since the late 1800s, to small screens caused Tmax (and in some cases Tmin) to step up abruptly. In a similar way to that shown in Figure 5, analysis of many individual sites shows bias increases with the temperature being measured. At sites where the mean or median each side of a screen change are statistically the same, small screens cause distributions to be skewed, spuriously implying that daily extremes have increased due to the climate.
  • There is evidence that temperatures at Taralga less than the 5th percentile are adjusted-up and that for the decade since 2004, some 140 individual data may be affected.
  • No Tmin step-change is detected at Canberra airport, Nowra RAN, Richmond RAAF or Sydney Observatory attributable to a time of observation change in 1964. Adjusting imaginary faults in ACORN data using other faulty data is unscientific and disingenuous. An open public inquiry into Australia’s Bureau of Meteorology is urgently needed to clear the air.

[Update 8/25/17 8:30 pm PDT.  Changed second conclusion bullet from 95th percentile to 5th percentile per author’s instructions~ctm]

Bill Johnston
August 25, 2017 6:07 pm

Correction:
The second conclusion dot-point should read:
· There is evidence that temperatures at Taralga less than the 5th (NOT 95th) percentile are adjusted-up and that for the decade since 2004, some 140 individual data may be affected.
Sorry!
Bill

Tom Halla
August 25, 2017 6:27 pm

More “adjustments”?

Nick Stokes
Reply to  Tom Halla
August 25, 2017 8:40 pm

No. No-one adjusts data for a place like Taralga, for any reason.

Evan Jones
Editor
Reply to  Nick Stokes
August 25, 2017 9:22 pm

Well it is a Stevenson Screen, isn’t it? And we all know how biased Tmax is on those obsolete monstrosities, don’t we?
No data from a Stevenson Screen is accurate. Just ain’t. They warm (or cool) way faster than any other equipment, and the devil is in Tmax. Every CRS back to 18-frigging-50 needs to be adjusted. The dang things carry a wooden heat sink around on their shoulders, so what else would you expect?

Reply to  Nick Stokes
August 25, 2017 9:25 pm

proof by assertion?

Greg
Reply to  Nick Stokes
August 26, 2017 1:45 am

Well it is established that they removed an inconvenient Tmin from the record. That in itself disproves your assertion. The record has been altered for alleged QA reasons: it has been adjusted.

Greg
Reply to  Nick Stokes
August 26, 2017 1:51 am

Well it is a Stevenson Screen, isn’t it? And we all know how biased Tmax is on those obsolete monstrosities, don’t we?

They are known to be inaccurate, especially if not maintained by regular repainting with lime.
They are intended for meteo data collection, not multidecadal climate analysis down to the nearest 0.1 deg/decade accuracy.
They are not fit for the purpose of climate, but they are all we have. The problem is the fictitious uncertainty claims that are made for all these non-scientific “average temps”.

Nick Stokes
Reply to  Nick Stokes
August 26, 2017 2:50 am

“Well it is established that they removed an inconvenient Tmin from the record. “
Not at Taralga. And it isn’t established that there was permanent removal anywhere.
The only stations BoM homogenises are the ACORN stations. Stations are adjusted for use as a regional representative in some larger calculation – global or regional average. It requires effort, and is not done for data that will not be used for larger purposes.
In fact the complaint here is that Taralga was not adjusted.

Reply to  Nick Stokes
August 26, 2017 9:23 am

“They are known to be inaccurate, especially if not maintained by regular repainting with lime.”
Anthony actually tried to test this.
Never finished or published.

HotScot
Reply to  Nick Stokes
August 26, 2017 9:45 am

Greg
Not to mention the inaccuracies caused by lazy scientists sending the tea boy out to take readings, in the dark/rain/snow etc. with a torch (or even a candle in the 1850’s). Or data not being read for a week because the only guy available was sick/on holiday/couldn’t be bothered, so he made up the data for the week, based on his best guess when he returned. And I’ll bet the short guy on the team consistently read the temperatures higher than the guy at 6’2″.
Even to a layman like me, considerable doubt about any records taken from Stevenson screens remains. Quite apart from their historical misplacement, there are also questions over maintenance, as you pointed out with their painting regime. We had one prominently placed in our school in the 70’s which was in a grassed area, less than fifteen feet from a wall of glass windows. It was also painted a nice gloss white because the headmaster didn’t like the matt colour of the lime. Official site, and guess who was sent out to record the data? Yep, us kids between the ages of 11 and 16, most of whom couldn’t be bothered, and we frequently spiked the data just for a laugh.

Reply to  Nick Stokes
August 26, 2017 5:01 pm

Nick Stokes says “Not at Taralga”. Here they know how the mistake was made and so could correct the value rather than delete it. Bet they don’t.
The Australian newspaper says “In a new twist, missing records of low temperatures have spread past automatic weather stations to those collected by hand in ­regional areas.
Taralga Post Office, north of Goulburn in NSW, is the latest unseasonal hotspot in an investigation in which several automatic weather stations have been declared ‘unfit for purpose’.”
http://www.theaustralian.com.au/national-affairs/climate/fresh-doubts-over-bom-records-after-thermometer-read-at-wrong-end/news-story/8b7fd6ae4fc27b2b6f2429b016be21ef

August 25, 2017 6:46 pm

The true awkwardness of this is what it implies for limits to accuracy (not precision). How much really is and will remain the +/-? How much warming bias has been introduced through homogenization or error cutoffs (a lot)? What then is the error bar on the anomalies and is it fair to look at the center of the bar as “the” number for IPCC et al calculations?
And what about the ocean temperatures? How real are they and how much are they model adjusted to become sorta real?
We’re told they’re counting fleas on the back of a big dog. Or are they really just noting how much the dog scratches, and saying that’s a reflection of the number of fleas the dog has? Maybe the dog just scratches itself a lot – with or without fleas.
I see more scratching studies than flea counting.

Evan Jones
Editor
Reply to  douglasproctor
August 25, 2017 9:26 pm

How much really is and will remain the +/-?
Without hordes of stations, it’s real dang hard to tell. I am doing the best I can with what I have. The only thing to beat the ol’ error bars is oversampling.
I see more scratching studies than flea counting.
Counting them is what I do.

August 25, 2017 6:50 pm

“who knows when the yard was fenced-in on two sides with steel cladding?”
Bill, in your photo a bright sun reflection spot can be seen in the fence rails, just back from the shadow of the Stevenson screen toward the small tree. Each rail has the bright spot but it seems brighter in the highest rails. The rails seem to have a concave shape facing the screen. If so they would reflect like a parabolic dish, but along a blurry line, not to a point. Seems like a great way to warm the box.

John V. Wright
August 25, 2017 7:02 pm

Wear the fox hat.

Reply to  John V. Wright
August 26, 2017 3:36 am

Near the Crystal Brook lavender farm. 😉
https://goo.gl/maps/3hYNQw1TM1J2

Roger Knights
August 25, 2017 7:03 pm

Please keep us informed if this scandal gets traction down under.

AP
August 25, 2017 7:24 pm

I haven’t looked at weather station siting guidelines for a while, but I am pretty sure those trees are way too close.

Thor Shammer
August 25, 2017 7:49 pm

“The step-change in 2004 aligns with replacement of a large (0.23m3) Stevenson screen with a small one (0.6m3). ”
Is this correct? The larges screen is smaller than the small one?

Bill Johnston
Reply to  Thor Shammer
August 25, 2017 8:08 pm

Another typo; have to sack the checker. Should be 0.06 m^3
Thanks,
Bill

August 25, 2017 7:54 pm

I looked at three stations and found no evidence for AGW even with all that data tampering
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2968352

Nick Stokes
August 25, 2017 8:39 pm

“it’s worth sleuthing Taralga’s data”
It isn’t, really. Taralga’s data is not used in any index or regional calculation. It’s mainly of interest just around Taralga. The talk of it being used to adjust Canberra is misconceived. For a TOBS adjustment, you basically just need the diurnal cycle through the year. I would have thought Canberra’s data was good enough for that, but they may supplement with other data nearby. A change of screen or whatever is not likely to affect a measure of the diurnal cycle.
Taralga might be used to help resolve inhomogeneities in ACORN stations. What happens there is that an apparent inhomogeneity must first be detected at that site, and then other sites are checked to see if they register the same. If so, it argues against making an adjustment. So an inhomogeneity at Taralga won’t cause an adjustment to be made elsewhere. If by a considerable coincidence there was a simultaneous inhomogeneity at a nearby ACORN station, it might prevent an adjustment that should have been made. But it’s unlikely; that’s why several other stations are consulted.
I actually remember Taralga from about 60 years ago. My mother’s folks were in Oberon, and Taralga was on a very rough short cut road to the south. It was a long way from the bitumen. But indeed, a charming village.

Bill Johnston
Reply to  Nick Stokes
August 25, 2017 10:02 pm

Well yes it is Nick.
Taralga is used to adjust ACORN sites; and yes, its data are not homogeneous; and yes, there is poor site control; and anyway, if correctly adjusted for site/instrument changes and rainfall, there is no climate warming there since 1957.
It’s also incorrect that “an apparent inhomogeneity must first be detected” …. there is no detectable change in the ACORN site data due to the alleged time of observation inhomogeneity in 1964.
In Sydney Observatory’s case, there are influential site changes which are not documented by ACORN (a move in 1947/48; opening of the Cahill Expressway in 1958; building of a wall in 1972/73; another move to the centre of the cottage yard; then installation of the AWS in 1990; then a small screen in 2000).
Homogenisation arbitrarily ignores changes that are verified by aerial photographs and historic accounts, and applies adjustments for changes that made no difference, using other data like Taralga: Port Kembla, Canberra Forestry, Point Perpendicular, Moruya Pilot Station, Gunnedah Resource Centre, Orange post office, Tamworth airport and Jerrys Plains! Can you believe that? Science at its most bizarre.
And now the road to Oberon is sealed; even the notorious dip into the Abercrombie gorge. Great drive. I recommend it.
Cheers,
Bill

Nick Stokes
Reply to  Bill Johnston
August 25, 2017 10:22 pm

Bill,
“Its also incorrect that “an apparent inhomogeneity must first be detected” …. there is no detectable change in the ACORN site data due to the alleged time of observation inhomogeneity in 1964.”
If they are making a TOBS adjustment, it will be because of a recorded change in time of observation. I think 1964 was about the time responsibility would have been transferred from RAAF Fairbairn to the civilian airport.

bill johnston
Reply to  Nick Stokes
August 25, 2017 11:14 pm

No Nick. However, I did not quite get the wording right.
Time of observation adjustments were made at Sydney Observatory and some lighthouses too, and not at all former RAAF airports. The issue is mentioned in an ACORN bulletin (I don’t have it to hand). TOBS adjustments were only made to Tmin; the effect was fairly incidental and I never found that it translated into a significant step-change. It is interesting that no adjustments are made for daylight saving, which, because data are observed 1 hour earlier, could be expected to impact on Tmin, which usually occurs in the early morning. (There are of course odd cases, when T falls dramatically during the day and is warmer at night.)
The point is that adjusting for a latent change (one that doesn’t impact on the data-stream) is not justified.
The bigger problem for most sites that I’ve examined is ignoring changes that happened – changes that can be documented by aerial photographs, historical accounts and archived documents and maps. Telephone exchanges being built in post office yards is a classic example. At Orange post office they built two brick extensions (one was a machine-room for a generator) each side of the yard, which are still there! (Classic bonded brick; built in 1947 and 1952.) This created an inhomogeneity in post office data, which was not corrected before the data were used to homogenise Sydney Observatory, Bathurst Ag, Canberra AP, Dubbo AP and Nowra RAN. (Any of those names ring a bell?)
Errors are compounded across the network by the homogenisation process. In fact faulty Sydney Observatory data can be tracked as far away as Alice Springs (via. Tibooburra)!
Cheers,
Bill

Reply to  Nick Stokes
August 26, 2017 2:22 pm

Nick,
No uncertainty in this comment:

No. No-one adjusts data for a place like Taralga, for any reason.

Then you say:
– I would have thought
– but they may
– is not likely to affect
– might be used
– an apparent
– it might prevent
– But it’s unlikely
For us mere readers of these comments from persons (apparently) more knowledgeable than us, commentators wavering between certainty and questioning does little to convince us as to their verisimilitude.

August 25, 2017 8:45 pm

At least I looked up before posting… Anthony, can we add a “Cancel Reply” to the site?

Nick Stokes
August 25, 2017 8:52 pm

” Adjusting imaginary faults in ACORN data using other faulty data is unscientific and disingenuous. An open public inquiry into Australia’s Bureau of Meteorology is urgently needed to clear the air.”
There has been a recent inquiry; Technical Advisory Forum on Australia’s climate records. It includes three of the top statisticians in Australia, and other very notable (non-climate) scientists. They said:

The Forum recognises that homogenisation plays an essential role in eliminating artificial non-climate systematic errors in temperature observations so that a meaningful and consistent set of records can be maintained over time. There is a need to adjust the historical temperature record to account for site changes, changes in measurement practices and identifiable errors in measurement. The Forum considers that the analyses conducted by the Bureau reflect good practice in addressing the problem of how to adjust the raw temperature series for systematic errors. To this end, the Forum supports the need for the Bureau’s homogenisation process to incorporate both metadata-based adjustments and adjustments based on the statistical detection of atypical observations. In the opinion of the Forum members, unsolicited submissions received from the public did not offer a justification for contesting the overall need for homogenisation or the scientific integrity of the Bureau’s climate records.

They don’t think it is “unscientific and disingenuous”.

Bill Johnston
Reply to  Nick Stokes
August 25, 2017 10:26 pm

The Technical Forum did not investigate any sites by looking at data or doing any research.
If they had Nick, they would have seen glaring problems right across the network. They missed for instance that the Stevenson screen at Laverton was originally on the roof of the RAAF Meteorological Section Building; that at Alice Springs there was a site-change in 1954 due to the Aeradio building being up-graded; that at Townsville there were at least three site moves before 1994; that the Onslow Aeradio site was cooled by watering for dust suppression; that at Low Head (Tas) the site is almost in the sea; at Bruny it’s the scrub; that the met-lawn at Woomera is gravel-mulched; that the enclosure at Bourke and the AWS at Badgerys Creek and Cunderdin (and other places) are ploughed around; that at Devonport, Adelaide, Tennant Creek, Canberra and Ceduna the screen is too close to wind-profiler arrays … and wait, there is more. For instance, metadata for most historical post office sites does not report when telephone exchanges were built close to screens in post office yards; vegie gardens planted and concrete paths laid; sheds built (and pulled down).
Nick, even without going to individual sites like Taralga, you could grab Google Earth (pro) and do some research using time-lapse images that the Technical Forum neglected to do! Then report back.
The Forum were paid to tick the box, that is all.
Homogenisation is a joke. While useful for describing the weather, no Australian sites are useful for detecting trends in the climate.
Name one, and I’ll check!
Cheers,
Bill

Nick Stokes
Reply to  Bill Johnston
August 25, 2017 10:56 pm

Bill,
“If they had Nick, they would have seen glaring problems right across the network. They missed for instance that the Stevenson screen at Laverton was originally on the roof of the RAAF Meteorological Section Building”
The review panel wasn’t reviewing station housekeeping. They were reviewing the way the records are handled, and in particular the use of homogenisation. As in the excerpt quoted, they thought it was fine, as practising scientists and statisticians usually do. There is a pattern here. The locally bally-hooed GWPF panel took one look at the submissions and decided not to report. You’re calling for yet another inquiry – why do you think it would think any differently?
As to Australian trends, there are plenty. I maintain a gadget here which lets you display trends direct from GHCN unadjusted, over various time intervals. The excerpt here is the last 50 years. It shades by trend, in °C/year. The shading is correct for each station. On the gadget, you can click to show the trend. In this region, Wagga and Canberra are both 2.48 °C/century, Sydney Airport 3.55 °C/century.

HotScot
Reply to  Bill Johnston
August 26, 2017 10:07 am

Nick stokes
How the hell can an inquiry be considered credible if it doesn’t examine the process from A – Z.
Data analysis can be conducted well on good or bad data.
“Your car’s running fine sir, the engine’s in perfect nick. The tyres are shot, the wipers don’t work, the speedometer reads 10 mph slow and the brakes are useless. But other than that, the car’s perfect, it starts and drives doesn’t it? What are you complaining about?”.

Alan
Reply to  Bill Johnston
August 27, 2017 5:56 pm

Gotta love that map Nick. The wider the data spacing, the greater the trend. In the area of resource estimation in which I work that would be called bovine scatology: go mine where you have the least data. The way to lose your money.

lee
Reply to  Nick Stokes
August 26, 2017 1:09 am

Nick, “The Forum noted that the extent to which the development of the ACORN-SAT dataset from the raw data could be automated was likely to be limited, and that the process might better be described as a supervised process in which the roles of metadata and other information required some level of expertise and operator intervention. The Forum investigated the nature of the operator intervention required and the bases on which such decisions are made and concluded that very detailed instructions from the Bureau are likely to be necessary for an end-user who wishes to reproduce the ACORN-SAT findings. Some such details are provided in Centre for Australian Weather and Climate Research (CAWCR) technical reports (e.g. use of 40 best-correlated sites for adjustments, thresholds for adjustment, and so on); however, the Forum concluded that it is likely to remain the case that several choices within the adjustment process remain a matter of expert judgment and appropriate disciplinary knowledge.”
http://www.bom.gov.au/climate/change/acorn-sat/documents/2015_TAF_report.pdf
Expert judgement can’t be replicated. If it can’t be replicated it ain’t science.
That forum?

bill johnston
Reply to  lee
August 26, 2017 2:33 pm

You hit a few nails on the head there lee. “Expert knowledge” can also be imaginary – adjusting for changes that don’t impact on data, as though they should; “experts” can also deliberately ignore changes that happened. Use of correlated series is a form of selection bias: selecting comparators that have faults parallel to those of the target (ACORN) site.
There are many examples of parallel changes – telephone exchanges were not built in post office yards randomly in time; most were built as part of a network up-grade program, most in the late 1950s & early 1960s. (Telephone exchanges were later built on separate blocks of land away from post offices.) Aeradio was split between DCA and the Bureau; buildings were up-graded to accommodate more staff, mostly around the early to mid 1950s; small screens were specified in 1973 just after metrication; most stations are now equipped with small screens; most of those were deployed around the time AWS became primary instruments on 1 November 1996 (which, like metrication, was time-coordinated). Of the ACORN sites, to my knowledge only Moruya Heads, Gunnedah and Hobart airport (which is not an ACORN site) still operate large screens. (Large screens are still in use at some agricultural research stations also.)
Most stations in the so-called AWAP network also use small screens; and most small screen/AWS combinations that I’ve looked at (some 150 sites) exhibit biases similar to those I’ve highlighted at Taralga.
If one averages (anomalies) across all these parallel changes, while they may appear to be statistically significant, trends emerge that have nothing to do with the climate.
Cheers,
Bill

BruceC
August 25, 2017 9:53 pm

“it’s worth sleuthing Taralga’s data”

Taralga Post Office is included in NOAA’s GHCN-D (daily) dataset. To the best of my knowledge, it is as recorded and unadjusted.
ASN00070080 -34.4048 149.8197 845.0 TARALGA POST OFFICE

M Seward
Reply to  BruceC
August 25, 2017 10:43 pm

You mean…… Nick Stokes is a DENIER??! He certainly sounds like one to me.
NS quotes some report:-
“The Forum recognises that homogenisation plays an essential role in eliminating artificial non-climate systematic errors in temperature observations so that a meaningful and consistent set of records can be maintained over time. There is a need to adjust the historical temperature record to account for site changes, changes in measurement practices and identifiable errors in measurement.”
What is at issue is not whether “homogenisation” or “adjustment” is theoretically justifiable, but whether the basis and methodology for such actions are rational and deliver an improved quality of information (as distinct from corrupt disinformation).
The debate is about how the methodology is not just flawed but clearly appears to be deliberately so. In turn this goes to the credibility and integrity of those involved, and that is the real issue. This has all the hallmarks of the “hockey stick” and “Nature trick”, let alone “hiding the decline”.

Nick Stokes
Reply to  M Seward
August 25, 2017 11:07 pm

“What is at issue is not that “homogenisation” or “adjustment” is theoretically justifiable but that the basis and methodology for such actions be rational and deliver an improved quality of information”
You left out the bit I quoted:
“The Forum considers that the analyses conducted by the Bureau reflect good practice in addressing the problem of how to adjust the raw temperature series for systematic errors. To this end, the Forum supports the need for the Bureau’s homogenisation process to incorporate both metadata-based adjustments and adjustments based on the statistical detection of atypical observations.”
That is a firm commendation of their basis and methodology.

M Seward
Reply to  M Seward
August 26, 2017 4:22 am

Yeah I read that Nick, but it sounds just like the sort of ‘expert’ waffle I have read and rebutted in civil litigation matters. The game is to rent an ‘expert’ to waffle on that their client (s/he who pays the bill, that is) is doing a really great job, in their bald but otherwise unexplained opinion. That is why I did not include that bit in the quote, cos it’s just meaningless, worthless ‘expert, texpert’ shite, imo.

bitchilly
Reply to  M Seward
August 26, 2017 4:23 pm

nick, the forum can offer commendation for anything they like. it is no more than opinion. bill johnston has shown that opinion not to be worth the paper it is written on.

Nick Stokes
Reply to  M Seward
August 26, 2017 11:42 pm

“it is no more than opinion”
That’s how it goes. Folks like Bill set up a clamor for an inquiry. The Gov’t sets one up, with top statisticians and other eminent scientists. They report. But what they say doesn’t agree with the sages at WUWT, so it’s just an opinion.
So what use is it demanding an inquiry?

bill johnston
Reply to  Nick Stokes
August 27, 2017 1:11 am

Looks like you had a busy afternoon Nick. However, you bypassed the problem, which is that there was no need to adjust Sydney Observatory for the Tmin TOBS change; it was not influential on the trajectory of the data. It was a spurious changepoint.
INSTEAD, they should have adjusted for the change in exposure in 1947/48, the building of the wall in 1972/73 and the installation of the small screen in 2000. By wiping corporate memory (but not aerial photographs), they left those real changes in place as “climate change” and instead adjusted for something that made no difference. (For Tmax they should also have adjusted for the opening of the Cahill Expressway, but didn’t. I went there too in the mid-afternoon; and yes Nick, hot air from the Cahill Expressway rises!)
In Taralga’s case; step-changes in the mean were detected first; then attributed post hoc; however, early Google-Earth satellite images are too grainy to pick-up what happened; so like every good scientist should, I went to have a squiz and took some photographs. On that same day (27 April 2016) I also visited Goulburn airport and took a photo. There was no screen near the AWS at Goulburn airport on that day and no one I could find knew where it was.
Presuming you know your way around data: I do a three-stage analysis (which results in Figure 2). Data step-changes are detected first; then I deduct the overall linear rainfall signal using regression; test for step-changes in the rainfall residuals and compare them with step-changes in the data. If they are different, I recursively fit the residual step-changes as factors and compare the recursive fit with the original step-change model (also factored), using rainfall as the control or co-variable (technically the covariate). Concentrate Nick! If there are no differences I accept the simplest (most easily explained) model. Then after that I look for reasons for the changes; calculate segmented density plots; investigate percentiles; look for evidence and sometimes go for a drive with a camera.
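The first two stages can be sketched in Python (an illustrative stand-in only – Bill mentions the STARS Excel add-in and R code, neither of which is reproduced here; the function names and the simple maximum-t changepoint statistic are my assumptions):

```python
from statistics import mean, variance

def detect_step(y, min_seg=5):
    # Locate the single changepoint that maximises the two-sample t statistic.
    # (A toy stand-in for sequential regime-shift testing, not the STARS add-in.)
    best_k, best_t = None, 0.0
    for k in range(min_seg, len(y) - min_seg + 1):
        a, b = y[:k], y[k:]
        se = (variance(a) / len(a) + variance(b) / len(b)) ** 0.5
        t = abs(mean(a) - mean(b)) / se
        if t > best_t:
            best_k, best_t = k, t
    return best_k, best_t

def rainfall_residuals(tmax, rain):
    # Deduct the overall linear rainfall signal by ordinary least squares.
    mr, mt = mean(rain), mean(tmax)
    slope = (sum((r - mr) * (t - mt) for r, t in zip(rain, tmax))
             / sum((r - mr) ** 2 for r in rain))
    intercept = mt - slope * mr
    return [t - (slope * r + intercept) for t, r in zip(tmax, rain)]
```

Step-changes found in the raw series would then be compared with step-changes in `rainfall_residuals(tmax, rain)`, with the changepoints fitted as factors and rainfall as the covariate.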
Nick, it’s not about fitting some diurnal rhythm; it’s about explaining a step-change in data and trying to work out if it is real, or if it mattered.
Why not put a month or so aside and check out all those stations for Wagga Wagga. As most no longer exist, good luck with that! (Allow three months.)
I operate in a no-tricks zone; the data and all statistical tools I use are in the public domain.
(Happy to share R-code too.)
Cheers,
Bill

Nick Stokes
Reply to  M Seward
August 27, 2017 3:10 am

Bill
“However, you bypassed the problem, which is that there was no need to adjust Sydney Observatory for the Tmin TOBS change; it was not influential on the trajectory of the data. It was a spurious changepoint.”
It wasn’t a changepoint; TOBS is based on metadata. At least 14 stations, including Sydney, Richmond, Nowra, Wagga and Williamtown, changed on 1/1/1964. The note says:
“obs time – indicates a change in observation time (most often the 1964 change at some stations from a midnight to 9am observation time),”

bill johnston
Reply to  Nick Stokes
August 27, 2017 3:47 am

So why adjust for it?
And why not adjust for changes that made a difference?
Did you check-out those stations used to “adjust” Wagga data? Do you want a list for Port Hedland? What about Charleville, where they ignored that the Aeradio site was at the dispersal area; or Darwin, where it was near the RAAF operations centre (I found pictures); or Meekatharra, Mardie, Mildura and Marble Bar? Woomera, Wilsons Prom and Wangaratta?
Nick, I truly admire your Excel trend lines; however, it seems you prefer that real research be done by others.
What is it that you don’t understand about Taralga’s data (or Sydney Observatory …. Canberra; Orange PO, Geraldton, Alice Springs … Wagga Wagga .. Low Head(Tas), Oodnadatta, Rabbit Flat or Nowra RAN)?
Here in OZ it’s snooze time!
Cheers,
Bill

Nick Stokes
Reply to  M Seward
August 27, 2017 4:20 am

Bill,
“So why adjust for it?”
They have to. They know there was a relevant change. You don’t know the effect till you try. And if there is any effect, it should be included. Why not?
For Richmond they wrote down the TOBS adjustment as 0. That is the right thing to do. They record that they calculated it and found it was 0. That is science at work.

Reply to  BruceC
August 27, 2017 1:42 pm

Here’s a plot of the Taralga data from GHCN’s daily files. I calculated the monthly baselines for Jan–Dec using the 1981–2010 data, then got the monthly anomalies by calculating the monthly averages over the entire period and subtracting the baseline for each calendar month. The standard deviation of the anomalies over the entire period is 1.18C. From Jan 2004 to Mar 2017 the trend is practically zero.
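The baseline/anomaly procedure James describes can be sketched as follows (a minimal Python illustration; the record layout is an assumption, not the actual GHCN-D `.dly` format or his Oracle schema):

```python
from statistics import mean

def monthly_anomalies(records, base=(1981, 2010)):
    # records: list of (year, month, monthly_mean_temp) tuples.
    # Baseline = the 1981-2010 average for each calendar month;
    # anomaly = monthly mean minus that calendar month's baseline.
    baseline = {
        m: mean(t for (y, mo, t) in records
                if mo == m and base[0] <= y <= base[1])
        for m in range(1, 13)
    }
    return [(y, m, t - baseline[m]) for (y, m, t) in records]
```

By construction the anomalies average to roughly zero over the base period, so the seasonal cycle alone produces no trend.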

bill johnston
Reply to  James Schrumpf
August 27, 2017 2:54 pm

Thanks James.
I guess the graph is of mean T ((Tmax+Tmin)/2), which is how it’s calculated in OZ. For a single site, where no comparisons are involved, I don’t worry about baselines; for monthly data I’d decycle by deducting monthly grand-means from the respective monthly data (or create a factor for each month and use multiple linear regression to account for the cycle; or sin/cos). But there is still a problem. On the one hand data are not homogeneous; on the other, annual rainfall alone is highly correlated with annual T (overall R^2 = 0.31!), so it could bias the regression if there is a wet or dry regime near the start/end of the dataset. (There is also missing data, which can be filtered by ignoring years where N<320 (say), or for monthly data where (say) N<10% of total days/month; cutoffs can be experimented with.)
Because the data are not homogeneous, residuals won’t be either; and ignoring rainfall is likely to cause residual autocorrelation. While annual data tend to be better behaved, a simple regression on annual Tmax looks significant (P(yr) <0.001), but due to embedded problems it is not (inferences are wrong). The data actually consist of non-trending segments separated by step-changes; and because factors are additive, those that are unexplained remain embedded in the residuals. (I think it’s best to check for step-changes first, then go from there; I routinely use sequential t-test analysis of regime shifts, which is an Excel add-in; it just takes a bit of practice.)
I always think it’s best to fully explain the data (or the process), then check for trend significance.
(If you analyse as monthly data it becomes quite messy (spaghetti-like).) This is because there are a whole bunch of lags. Because moisture is stored in the soil and evaporation is low, rain that falls in June, for instance, can still affect temperature months later.
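A hedged sketch of the decycling and missing-data filtering Bill describes (the N<320 cutoff comes from the comment; the data layout and function names are my own illustration):

```python
from statistics import mean

def filter_years(daily, min_days=320):
    # daily: list of (year, value). Drop years with too few observations.
    counts = {}
    for y, _ in daily:
        counts[y] = counts.get(y, 0) + 1
    return [(y, v) for y, v in daily if counts[y] >= min_days]

def decycle(monthly):
    # monthly: list of (month, value). Deduct each calendar month's grand
    # mean, leaving the deseasonalised residual.
    grand = {m: mean(v for mo, v in monthly if mo == m) for m in range(1, 13)}
    return [(m, v - grand[m]) for m, v in monthly]
```

Either approach (grand-mean deduction, or monthly factors in a multiple regression) removes the annual cycle before step-changes and rainfall are examined.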
Cheers,
Bill.

Reply to  BruceC
August 27, 2017 5:05 pm

bill johnston
August 27, 2017 at 2:54 pm : I calculated the anomaly as Nick explained in a previous post: got a Jan, Feb, Mar… Dec average for all the years of 1981-2010, then used that against the Jan…Dec averages for the whole period. The station had very good data over its period. I loaded it into my Oracle database from the GHCN .dly file, and checked the counts that were used in each month’s average. Only one month had less than 10% of the available days.
I compared the monthly averages I generated against the BoM averages from the ghcnm.tavg.v3.3.0.20170808.qcu.dat file, and they agreed very well, with only some rounding differences; the BoM figures use two places to the right of the decimal, and I only used one.
I don’t see how rainfall comes into the calculation; I’m using raw temperatures, and the temperature of the day is the temperature of the day, regardless of precipitation. It almost sounds as though you’re trying to use precip records as a further adjustment to the raw temperatures.

bill johnston
Reply to  James Schrumpf
August 27, 2017 6:12 pm

Hi James,
I don’t disagree that the temperature of the day is what it is. However, in time, the temperature of the day also accumulates irreversible effects due to site changes and stochastic effects due to rainfall. Both are influential on trend. A small screen measuring “warm” is not comparable with historic data, which were measured in a large screen. Linear regression presumes factors like those don’t underlie the T signal. (Missing data can be a problem; 1983 had only 170 days of data, and 8 years had fewer than 302 days; I just make that note as a caution.)
Why did you calculate anomalies based on 1981 to 2010? There is an instrument-related step-change in 2004 – what happens to it in your anomaly calculation? The earlier step-change, in 1965, will show as a dip in your calculation, which then contributes to a “trend”. Also, presuming you are interested in the temperature trend (unadulterated by step-changes and the external stochastic effect of rainfall), those factors should be accounted for. (In Taralga’s neck of the woods, rainfall also picks up (proxies for) ENSO etc.)
This kind of analysis is quite straightforward. If you want a confounded signal of T & rainfall (& step-changes), leave everything in; but acknowledge that is what your trend includes. (The Tmax jump in 1965 (0.71 degC) plus the small-screen jump in 2004 (0.91 degC (median-rainfall adjusted)) = 1.62 degC of data warming (in 59 yrs), which won’t be the same as the OLS trend.)
Your regression residuals are likely to show residual step-changes or other evidence that all is not well with straight regression that ignores underlying factors contributing to the relationship.
Cheers,
Bill

bill johnston
August 25, 2017 11:43 pm

Thanks Nick; but all the trends are wrong. At Wagga for example the enclosure was originally on the RAAF side of the AP; the one in town (Kooringal) was not in Kooringal; coordinates put it in Macleay St, Turvey Park, near where I used to live; its actual location was at the post office, then the Court House, Police station and possibly the TAFE. Canberra AP also moved around; I tracked the Aeradio office to the rear of the original (initial) pre-war hangar; I can’t find the enclosure on 1940s aerial photos, but it’s there somewhere.
Like at Taralga, the “trend” at Wagga and Canberra (and Sydney Observatory) is related to installation of the AWS in a small screen (oh … and did I mention the wind-profiling radar array at Canberra; there is one at Sydney too – check it out on Google Earth (Pro)?)
So you see, without locating the various moves and adjusting for site- and instrument-related inhomogeneities, trends are meaningless. Using aerial photographs and Google Earth I tracked all the changes at Sydney AP. The screen there is just 35m from the southbound exit of the General Holmes Drive traffic tunnel (and up-hill from the roadway). The tunnel was widened in 2000, and guess what – T increased! However, the temperature of increased traffic is not a climate signal, is it?
And anyway, you can’t average data over 100 years, if you only have 20 or 50 years of data with a bump at the end because the screen was replaced or the road was widened. Reality check please!
It doesn’t matter if data are adjusted or not; I can’t find any that are NOT faulty; while ACORN applies seemingly random/arbitrary adjustments that are not justified.
Cheers,
Bill

Nick Stokes
Reply to  bill johnston
August 26, 2017 2:11 am

Bill
“So you see without locating various moves and adjusting for site and instrument related inhomogeneties, trends are meaningless.”
The picture I showed of unadjusted trends tells a different story. It shows a whole area of Southern Australia with a fairly consistent positive trend, and North Australia with a smaller but also consistent trend. It’s hard to see that happening as a succession of accidents. And you see the same in the rest of the world, and over different periods.

Reply to  Nick Stokes
August 26, 2017 6:28 am

Re: Nick Stokes August 26, 2017 at 2:11 am
Nick, that is just straight up disingenuous, you know full well that the development of the station data set includes homogenisation techniques (Where many stations are adjusted and some omitted)! And you would also know, that these “adjustments” are made, based on the assumption of spatial coherence for nearby series. And yet, you continue to use the words “unadjusted” and “raw” to reference this kind of “data”!
Anomalies are not a cure-all for the real problem of accurately representing the spatial variability of climate data. And I can’t imagine that you would not be aware of this fact, given that it is well known in the literature of Climate Science.

HotScot
Reply to  Nick Stokes
August 26, 2017 10:10 am

Nick Stokes
“And you see the same in the rest of the world, and over different periods.”
No you don’t. More of the world doesn’t have measurement facilities than does.

bitchilly
Reply to  Nick Stokes
August 26, 2017 4:26 pm

of course it is a series of accidents nick, the same bloody accidents appear to be happening across the board.

Nick Stokes
Reply to  bill johnston
August 27, 2017 3:18 am

Bill,
“Using aerial photographs and Google Earth I tracked all the changes at Sydney AP.”
You don’t need to do that. There is very complete metadata, including a progression of maps, here (BoM).

Tom Harley
August 26, 2017 12:56 am

Didn’t “HarryReadMe” of climategate fame call the Aussie data ‘crap’?

Editor
Reply to  Tom Harley
August 26, 2017 2:55 pm

It doesn’t appear so. It’s a little hard to tell just what he’s looking at when he calls things crap.

Charles Nelson
August 26, 2017 2:19 am

Leave poor Nick Stokes alone.
I hate to see people humiliated like that.

bill johnston
Reply to  Charles Nelson
August 26, 2017 3:00 am

I agree Charles, let’s move-on. How about to a conversation with all those homogenisers at the Bureau of Meteorology?
Come on Blair Trewin, emeritus professor Neville Nicholls and WWF climate-witness professor David Karoly flash your biases. You have all done conversations on “The Conversation”; but when push comes to shove you melt.
How is it possible that faulty data from small Stevenson screens like at Taralga, are used to “prove unequivocally” the climate has warmed at Sydney Observatory?
The concept is bizarre.
So, just how?
Cheers,
Bill

Nick Stokes
Reply to  bill johnston
August 26, 2017 4:37 am

Bill
“How is it possible that faulty data from small Stevenson screens like at Taralga, are used to “prove unequivocally” the climate has warmed at Sydney Observatory?”
Just not true. I don’t believe that Taralga has any influence on Sydney. But if it does, then adjustment at Sydney reduces the trend. Unadjusted has warmed rapidly.
You need to get actual facts right.

Reply to  bill johnston
August 26, 2017 2:33 pm

To Nick:
Make up your mind:
“Just not true” (definitive) then “I don’t believe” (opinion) then “But if it does” (doubt)

Reply to  bill johnston
August 26, 2017 4:41 pm

Which sites have been adjusted by others can be seen here below. There is no need to theorise. Bill has checked each claim he has made, but if you do not accept that or think he has made a mistake, check at this link. http://www.bom.gov.au/climate/change/acorn-sat/documents/ACORN-SAT-Station-adjustment-summary.pdf

bill johnston
Reply to  siliggy
August 26, 2017 5:17 pm

Indeed siliggy;
However, it takes a bit of work to wrap a spreadsheet around it; get a table of station names (available from the Bureau), then build a look-up table which cross-references station numbers and names. Then it’s a snack to get blocks of station names used to homogenise ACORN sites. However, that is just the beginning.
Nick, to check for bias it’s necessary to go the next step and check whether all the comparator series are homogeneous, useful and unbiased; which is why I went to Taralga when I was actually checking out Sydney Observatory.
If you are interested in Wagga airport for instance Nick, you should also check-out
WAGGA WAGGA RESEARCH CENTRE, WAGGA WAGGA AGRICULTURAL INSTITUTE, COOTAMUNDRA POST OFFICE, NARRANDERA POST OFFICE, GRIFFITH CSIRO, LEETON CARAVAN PARK, ALBURY GRAMMAR SCHOOL, WODONGA, KHANCOBAN SMHEA, HILLSTON AIRPORT,
JUNEE TREATMENT WORKS, HUME RESERVOIR, ADELONG (TUMUT ST), COROWA AIRPORT, BEECHWORTH COMPOSITE, CONDOBOLIN RETIREMENT VILLAGE, QUANDIALLA POST OFFICE, WYALONG POST OFFICE, WANGARATTA, PARKES (MACARTHUR STREET) (at least).
Homogenisation is like alphabet soup; good-fun, but really messy if you spill it!
There is no indication that the Forum people investigated the next-down layer of stations.
Cheers,
Bill

Nick Stokes
Reply to  bill johnston
August 26, 2017 11:38 pm

Thanks Siliggy, that’s a useful doc. It gives perspective. So now we see
1. Taralga was used once to adjust Sydney, in 1964. It was one of 10, for a TOBS adjustment, and the total adjustment to Sydney was 0.2°C. TOBS is a straightforward adjustment – you just need an estimate for the diurnal cycle. Minor vagaries in minima at Taralga would have no effect here.
2. It was used for the same adjustment, same date, for Richmond and for Nowra. Again 10 stations, total adjustment 0.2C. Looks like they just used a collection of stations to get the diurnal.
3. Taralga was used twice, in 1963 and 2006, for adjustments in Canberra. Again it was one of 10, and the adjustments to the min were about 0.45C.
And that’s it. That’s the whole contribution of Taralga to any dataset beyond Taralga.

A C Osborn
August 26, 2017 2:52 am

Typical argument between NS (sits at a computer and uses algorithms) and BJ, who looks at the real data, which does not mean just the values recorded.
I know who I believe.

Nick Stokes
Reply to  A C Osborn
August 26, 2017 4:45 am

“I know who I believe.”
The chap who knows all about the values that weren’t recorded?

lewispbuckingham
Reply to  Nick Stokes
August 26, 2017 2:07 pm

No, the chap who found out how the values were recorded.

toorightmate
Reply to  Nick Stokes
August 26, 2017 3:46 pm

Nick,
If I was your GP, I would recommend a strong prescription of “homogenisation” – for your good self.

August 26, 2017 5:06 am

Dr. Johnston:
Thank you for the post. Most of all, thank you for presenting a non-parametric analysis of data with different underlying distributions. Wherever Taralga went, it has provided something of value for the possibly greater cause of proper analysis of data.

steverichards1984
August 26, 2017 5:06 am

When Volkswagen were first accused/caught out fiddling the emissions on many of their cars, their first instinct was to say – not me.
As it was investigated and the ‘various’ truths came out, it showed that the company had been cheating to make money.
They are a corporation that needs to make money; you could understand that the many people involved would be covering their backs and finger-pointing.
Now, the Australian BOM is a publicly funded body whose role/job is to use the best of science and engineering to measure environmental matters that affect temperature and rain fall across the continent.
My naive view is – if the BOM was a scientifically driven body, when confronted with the ‘missing Tmin temperature gaffe’ the people in charge of BOM would have instigated an urgent review, got the heads of departments together and, after lunch, put a preliminary report out stating what had happened, why it was done and who did it.
Obviously the BOM have as much scientific integrity as I have in my little finger.
They caused a government minister to speak falsehoods (faulty equipment etc etc).
With all of the BOM measurement estate now suspect, the only answer that I can see is to add error bars to any output from the BOM, sufficient to make any measurements worthless, only then will the measurements be considered reliable.

August 26, 2017 5:19 am

This is the most ridiculous discussion I have ever had the misfortune to witness. You look at the thermometer, you write down the temperature, end of story.
“Homogenization” isn’t.
“Adjusted data” don’t exist.
Anyone lucky enough to be entrusted with temperature records and asked to improve or massage them in any way will obviously do this to support a political agenda, which is why it should
NEVER EVER
be done.

dudleyhorscroft
August 26, 2017 8:05 am

I do recall a story about a remote weather station where the rainfall, pressure, wind speed and direction, and temperature statistics were after some months noted as being wildly discrepant from those of the surrounding weather stations. Eventually someone was sent out to investigate. Report was that this was at a country post office where the husband was the one who got the stipend to read the data. However, he had died, and as the wife wanted to still receive the stipend for reading the data, she continued to send in the data. However, as she had no idea what to do, she sent in the data for the previous year. Homogenization – or something – desperately needed!

Reply to  dudleyhorscroft
August 26, 2017 2:34 pm

That is not data, obviously.

Ferdberple
August 26, 2017 8:41 am

Why adjust anything? Sampling errors are random. Over time, the error term will aggregate to zero.
Adjustments on the other hand are clearly not random. They do not aggregate out to zero over time.
Thus, by adjusting the temperature data, you are reducing data quality, not improving it.
Adjusted data has GREATER UNCERTAINTY than does raw data, while giving the false impression of reduced uncertainty.
For example, removing outliers. This is a common adjustment. It reduces the variance and the standard error, making the data appear reliable. Gone are the outliers that tell you the data is not reliable.
This is the legacy of climate science. Faulty conclusions based on faulty data processing.

Duster
Reply to  Ferdberple
August 26, 2017 10:40 pm

The “adjustments” are supposed to correct systematic errors like errors due to TOBS. Sampling errors can be handled statistically, just as you say. But a bias is a real problem. The issue is where the bias is actually located, in the data, or in the heads of analysts. A bias in data can be corrected for if it can be measured or reasonably estimated. An analyst’s bias is entirely different and can be as entrenched and resistant to reason as a religious conviction.

Nick Stokes
Reply to  Duster
August 26, 2017 11:56 pm

“But a bias is a real problem.”
Yes, it is. And that is the point of homogenisation. It increases noise, but that washes out. But it picks up errors that might have had a bias. Like the drift of TOBS changes from afternoon to morning. You can check the noise that it introduces on synthetic data to ensure bias is not introduced. That is the tradeoff – more noise but less bias.

Ken Seton
August 26, 2017 9:33 pm

I am a Taralga property owner and will not comment on homogenizations etc. It is indisputable however that data that has been entered into the log books “as read” by the good folk at the Taralga Post Office, and initially published online that same day – has been subsequently (3-6 days later) “updated” to null. I have the before / after screen shots to prove this and it was also discussed in an article in the Australian newspaper 18-Aug P3. Temperatures that were -10C and -8C on the 10th and 16th May 2017 respectively were republished as null a week later. The claim is that the technician “read the max/min thermometer up-side down”. The fact that these would have been record lows for May and were clearly only removed after someone somewhere (BOM HQ ?) decided it was “impossible” is highly suspicious. But short of confession, we will never know. Meanwhile, even if the actual temperature (say on the 10th May) was -4C (a more likely expectation from other corroborating data) we are left with a bad taste in the mouth, that yet more cold days have been removed from the official record.

bill johnston
Reply to  Ken Seton
August 26, 2017 11:41 pm

Thanks Ken.
I pitched the paper at homogenisation, because that’s the reason I visited the site last year. I had detected the 2004 up-step and was keen to understand what caused it. However, I don’t understand how new-screen data below the lower quartile (2degC) kick up and become positive.
Unfortunately there is no parallel data (and volunteers are unlikely to be keen to read thermometers from two screens for a year or so anyway). So the type of analysis I’ve presented here is all that is possible.
A back-of-the-envelope calculation of the number of days potentially affected is 4/100 times the data-sample (3610) which is around 144; around 14 days/yr. You say perhaps -4degC; for the large screen in the place it used to be, you could be right. (Based on old screen percentiles; data suggests that around 7 days/yr would experience temperature less than -4degC.)
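The back-of-the-envelope figures can be checked directly (all inputs taken from the comment itself):

```python
# Inputs as stated above.
n_days = 3610            # new-screen daily sample (~10 years of record)
frac_affected = 4 / 100  # assumed fraction of days potentially affected

affected_days = n_days * frac_affected   # ~144 days in total
years = n_days / 365.25                  # ~9.9 years
per_year = affected_days / years         # ~14.6, i.e. "around 14 days/yr"
print(round(affected_days), round(per_year, 1))
```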
Cheers,
Bill

Nick Stokes
Reply to  Ken Seton
August 26, 2017 11:50 pm

Ken,
“Meanwhile, even if the actual temperature (say on the 10th May) was -4C (a more likely expectation from other corroborating data) we are left with a bad taste in the mouth”
So -10 would have beaten the record by 3.5C. And it sounds as if even you don’t think it was that cold. Goulburn had -1.6 and -2.3C on those days. May 8 was cold, Goulburn had -7.5, but -4.8 in Taralga. So if you don’t believe it, and BoM doesn’t believe it, why do you think it should be retained?

Ken Seton
Reply to  Nick Stokes
August 27, 2017 1:03 am

Hi Nick
I do believe it was probably -10C or -8C on those days, because (1) that’s how they were originally read and recorded, and (2) one of my sons was at our property on the 10th May AM in question and told me quite independently and prior that he wrapped himself in several doonas and was frozen all night. But the property is 10km from Taralga (too far to consider equal) and anecdotes may not necessarily say a lot.
But you have misconstrued the meaning of my comment, “Meanwhile, even if the actual temperature (say on the 10th May) was -4C”, as meaning I don’t believe the recorded value. Sceptical minds can give the benefit of the doubt and not jump to partisan conclusions. Basically I was saying that my understanding of the area and history and other surrounding temperatures and measures on the day would suggest somewhere in negative territory. But what values? I would back the recorded values. But to give the benefit of the doubt to the folk at the PO who a week later said they made a mistake, and “by way of thought experiment” for the possible scenario that errors really were made on those days, I said EVEN IF … EVEN IF … then even then (what we call) very cold days have been replaced by null, pushing the mean for that month up. I think it’s a worry.

Nick Stokes
Reply to  Nick Stokes
August 27, 2017 2:27 am

Ken,
I think this one is between you and the folks at Taralga PO. They say (I presume) that they read it wrong. And it’s way out of line with Goulburn. As said above, the manual Taralga readings are basically for the benefit of locals. They don’t go anywhere else that matters. Even if the mean minimum for May 2017 is biased low by a fraction of a degree, it’s really only of interest (if any) to locals.

Ken Seton
August 27, 2017 2:49 am

Yep, and there goes the media opportunity of saying “record cold for Taralga May 2017”. But you can bet your boots that this avoidance of a “record” statement would be a lot less likely for a Tmax. Gotta keep the public perception on-message after all. The thing is, I would be a lot more receptive to an actual increase in Tmax over time (whatever the cause, not talking AGW here) if I didn’t have this sneaking suspicion that there are shenanigans and peer pressure operating throughout the system. But truth will out. Keep the faith Nick.

August 28, 2017 3:06 am

See how rainfall reduces Tmax but increases Tmin.
GH gases reduce extremes of temperature.

August 28, 2017 4:21 pm

When there’s a step change from whatever cause, how is it determined which record is “correct”? Since there’s no independent, objective “true” measure of the temperature at the site, isn’t it just as possible that the new equipment reads high as the old reads low, or vice versa?

bill johnston
Reply to  James Schrumpf
August 28, 2017 5:27 pm

James,
If you measure T on, say, the southern side of your house, then after a few decades shift the thermometer to the western side; or build a shed, or concrete the yard, or put in a garden and keep it watered, or a pool – would you have changed the weather?
Measured T is an anomaly relative to the long-term mean, which is the site benchmark (the balance of heat sources and sinks). Shifts in the mean track changes in that balance. If the place T is measured is chaotic (not a consistent background heat balance), although there will still be a mean (daily mean, monthly mean, annual mean); data will be chaotic and useless for describing the weather. So reliable temperature measurements are only achieved if the conditions under which they are observed are consistent. Hence the value of statistical tests (and visualisations) for determining if data are fit-for-purpose.
(There are many examples of where a site moved somewhere cooler for a few decades; then warmer again. There are also many examples of where the data obviously changed, but no one remembers why and it was not written down, or the metadata is lost or ignored.) I don’t know what happened in 1965 at Taralga for instance; and I don’t have much interest in investing the time and effort into finding out. However, something did change that affected exposure of the screen.
Thanks for your interest,
Cheers,
Bill