From Jo Nova: BOM finally explains! Cooling changed to warming trends because stations “might” have moved!
It’s the news you’ve been waiting years to hear! Finally we find out the exact details of why the BOM changed two of their best long-term sites from cooling trends to warming trends. Massive, inexplicable adjustments like these have been discussed on blogs for years. But it was only when Graham Lloyd advised the BOM he would be reporting on this that they finally found time to write three paragraphs on specific stations.
Who knew it would be so hard to get answers? We put in a Senate request for an audit of the BOM datasets in 2011. Ken Stewart, Geoff Sherrington, Des Moore, Bill Johnston, and Jennifer Marohasy have also separately been asking the BOM for details about adjustments on specific BOM sites. (I bet Warwick Hughes has too.)
The BOM has ignored or circumvented all these requests, refusing to explain in detail why individual stations were adjusted. The two provocative articles Lloyd put together last week were Heat is on over weather bureau and Bureau of Meteorology ‘altering climate figures’, which I covered here. This is the power of the press at its best.
more here: http://joannenova.com.au/2014/08/bom-finally-explains-cooling-changed-to-warming-trends-because-stations-might-have-moved/
An independent expert panel reviewed the BOM ACORN-SAT dataset of 112 selected stations and states:
“Before public release of the ACORN-SAT dataset the Bureau should determine and document the reasons why the new data-set shows a lower average temperature in the period prior to 1940 than is shown by data derived from the whole network, and by previous international analyses of Australian temperature data.”
Says it all really.
Not only does the ‘homogenisation’ process applied to the selected 112 stations make the period before 1940 colder than the ‘whole network’ shows, it also manages to make it cooler than any previous international analysis of the data.
This is obviously what the process is specifically designed to do: increase the warming trend by making it cooler before 1940 after each iteration, and warmer thereafter. Apparently this has increased the warming since 1910 from about 0.6 degrees C to about 1 degree C in Australia, an increase of about 60%, yet elsewhere in the report it states that homogenisation has not greatly affected the warming trend.
The only useful thing is that these propaganda documents will be available for all to see in future; the hubris and chicanery are breathtaking.
But why didn’t any of the expert review panel have the guts to stand up and say the methodology is flawed and only increases any warming trend on each iteration? It’s just another hockey stick.
Please excuse me for attempting to interject common sense into a Nick Stokeian discussion, but:
how is it that a station that is moved can be considered “the same station”? If it is taking readings from a different location, then shouldn’t it be a new station?
“Nick Stokes August 26, 2014 at 5:55 am
Have you looked at the graphs that I showed? The need for the adjustment is obvious”
Why is the need for adjustment obvious?
The data is what it is.
Just because it’s slightly different to other stations doesn’t make it wrong.
There might be something about the station’s microclimate, or anything.
Data is just that: data. When you adjust it, you introduce your own perceived bias into “what the data should be”.
Otherwise known as cheating in experiments. Or making the data up.
Just because YOU think it should be adjusted doesn’t make the adjustments valid or necessary.
@John from Au… How do I load your data into Excel?
Cut and paste the text strings into Excel. Highlight them. Go to the Data group and find “Text to Columns”. Select the “delimiter” and set it to spaces (could be tabs, or even commas, but probably spaces). Ratchet that over until it’s separated into a nice set of columns.
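If you’d rather script it than click through the wizard, here is a minimal Python sketch that does the same split-on-whitespace step (the filename stations.txt is just a placeholder for whatever you saved the pasted text as):

```python
import pandas as pd

# Read a whitespace-delimited text file into columns; sep=r"\s+" splits on
# any run of spaces or tabs, matching Excel's "Text to Columns" with spaces.
df = pd.read_csv("stations.txt", sep=r"\s+", header=None)

print(df.head())                         # check the columns look right
df.to_csv("stations.csv", index=False)   # a CSV opens directly in Excel
```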
Statistically, the adjustments should not introduce a trend. Some will be adjusted higher, some lower, but over a large number of stations the net effect should be zero.
Thus, if the adjustments are introducing a trend, a correction is required to remove the overall trend. It would appear that the current adjustment algorithms do not include a correction for induced trend.
This can be seen in the comparison of raw to adjusted data, where the net adjustments very much mimic the existing trend. For example, the adjustments themselves add a net increase during the 1980s and 1990s, then level off during the 2000s. Just like the raw data.
This suggests that the adjustment mechanism has an unrecognized bias.
A similar thing happens in models. For example, in a GCM, small errors in total energy accumulate such that the model gains or loses energy independently of external forcings. This residual needs to be apportioned back into the model to prevent introduced bias.
It would appear, however, that the temperature homogenization routines currently in use do not recognize or deal with introduced bias: for as-yet-unrecognized reasons, the current algorithms introduce a bias that has a high correlation with the underlying trend in the raw data.
Overall, it could be argued that there is no need for adjustments, unless you are interested in individual stations.
Because the adjustments should not introduce a trend, when you are dealing with averages it makes no sense to correct individual stations beforehand, as the corrections should on average balance out to net zero. Which means that the adjustments should have zero effect on the averages, and thus are not required.
Only when you are dealing with an individual station is corrected data required. Otherwise, when an average is calculated over all stations, the raw data should statistically produce an average that is at a minimum as accurate as the adjusted data.
The adjusted data, on the other hand, can produce an average that is less accurate than the raw data, due to the possibility of introduced and unrecognized bias. Thus, when one is interested in overall averages, the raw data is most likely to provide the correct answer.
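To see the statistical claim in miniature, here is a toy simulation (all numbers invented for illustration, not real station data): unbiased per-station offsets leave the network-average trend alone, while offsets that grow with time manufacture one.

```python
import numpy as np

rng = np.random.default_rng(42)
n_stations, n_years = 500, 100
years = np.arange(n_years)

# Synthetic raw data: a flat climate plus station noise (true trend = 0).
raw = rng.normal(0.0, 0.5, size=(n_stations, n_years))

# Case 1: unbiased adjustments -- a random constant offset per station.
unbiased = raw + rng.normal(0.0, 0.3, size=(n_stations, 1))

# Case 2: biased adjustments -- offsets that grow with time.
biased = raw + 0.005 * years * rng.uniform(0.5, 1.5, size=(n_stations, 1))

for name, data in [("raw", raw), ("unbiased adj", unbiased), ("biased adj", biased)]:
    slope = np.polyfit(years, data.mean(axis=0), 1)[0]
    print(f"{name:>12}: network-average trend = {slope:+.4f} deg/yr")
# raw and unbiased come out near zero; biased shows about +0.005 deg/yr.
```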
Ralph Kramden, you are 100% correct. I live 3 miles from where I work (I work in town and live outside of town). The town is small, just 6,000 people, yet UHI is easily noticeable: it is always 2+ degrees warmer in town. This past winter I ran into many instances where it was -5 degrees in town and -12 degrees just outside of town. Again, this is a puny town of 6,000. Nick Stokes would tell us UHI is negligible. No need to adjust 2014 temps down for UHI. Hey Stokes, I see a step change in the global temps in 1998. I guess we need to adjust all temps after 1998 down, because that step change is very noticeable. If all these adjustments are as negligible as Stokes and the boys say, then why even make them? Just use the raw data and stop wasting your time adjusting if the adjusting is so negligible. Hmmmmmmm
All ground-based sites are biased, and all adjustments build warming trends. It is NOT science but political purpose that drives the adjusters. Perhaps it’s time to scrap land-based sites that only cover 30% of the world (and poorly distributed at that) and take this propaganda tool out of the hands of dishonest, agenda-based individuals. I personally have never seen adjustments that do not create a steeper slope that benefits alarmist ideology. I do not use or believe ANY ground-based data and rely on satellite data despite its own problems. There simply is no need to have records that we can’t trust to be honest. I have a personal system that has been in place for nearly 30 years that correlates very nicely with satellite data. I haven’t made any adjustments to further my agenda.
I show a slow decline of about 0.5C here in California’s Central Valley over the last 14 years. I trust that far more than any government site.
How do these adjustments take into account any micro-climate differences? It would seem these differences are lost when the data is homogenised.
Nick Stokes
August 26, 2014 at 5:55 am
Have you looked at the graphs that I showed? The need for the adjustment is obvious.
==============
Nick, are you saying the majority of stations were moved to a warmer location?
…or the majority of stations were moved to a cooler location?
Using your logic… it should be evenly split… so no adjustments are necessary at all.
JustAnotherPoster August 26, 2014 at 6:45 am
“The data is what it is.
Just because it’s slightly different to other stations doesn’t make it wrong.
There might be something about the station’s microclimate, or anything.”
The data is what it is for that particular combination of micro-sites.
Adjustments are made for spatial averaging. We aren’t interested in the microclimate. We want to use Amberley as representative of a large area around it. Amberley experienced a 1.4°C drop in August 1980, but the region didn’t, as shown by the other three stations. So you don’t want to project that drop onto the whole area.
don penman August 26, 2014 at 6:21 am
“The claim is that the station was moved in 1980 and resulted in a lower minimum temperature being recorded since then, would it not be more accurate to add a constant term to the minimum temperatures since 1980 and leave the past temperatures alone,”
This point is related. It actually doesn’t matter which you do, so the convention is to leave the present unchanged. The reason it doesn’t matter is that we aren’t trying to capture the microclimate. That’s why anomalies are used; they take that factor out. We want Amberley to tell us about that region, not about which spot on the airbase it occupies.
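For anyone who wants to see the mechanics, here is a minimal sketch of the kind of step-change correction being described (synthetic numbers and a deliberately simplified method, not the BOM’s actual algorithm): estimate the break from the target-minus-neighbours difference series, then offset the earlier segment so the present is left unchanged.

```python
import numpy as np

def adjust_step(target, neighbours_mean, break_idx):
    """Offset the record before break_idx so the series is homogeneous,
    leaving the most recent segment unchanged (the usual convention)."""
    diff = target - neighbours_mean        # removes the shared regional signal
    step = diff[break_idx:].mean() - diff[:break_idx].mean()
    adjusted = target.copy()
    adjusted[:break_idx] += step           # shift the past, keep the present
    return adjusted, step

# Toy series: a station whose record drops 1.4 C at index 70 (a simulated
# site move), while the neighbour average carries only the regional signal.
rng = np.random.default_rng(0)
regional = rng.normal(20.0, 0.3, 100)
station = regional + rng.normal(0.0, 0.2, 100)
station[70:] -= 1.4

adjusted, step = adjust_step(station, regional, 70)
print(f"estimated step: {step:+.2f} C")    # close to -1.40
```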
The arrogance that drives these adjustments, and the fact that they thought they could do this and have it be accepted, shows how highly they hold themselves: so much smarter than the rest of us. Did they think no one would notice? Compounding this is the “loss”, in some cases, of the raw data, obviously in an attempt to cover their asp. Ignore these false records; they are a distraction and unusable for real research.
willnitschke August 26, 2014 at 3:24 am
In a nutshell, the BOM claims that a possible thermometer shift of only metres can cause an incorrect reading of 2C. They can determine this by consulting thermometer trends hundreds of kilometres away.
The changes are in many cases necessary. For example, if you look at the raw data for the Darwin weather station in Australia, you’ll see that it exhibits a cooling trend. Closer analysis reveals that there was a substantial drop in 1939-42. Prior to that date the station was based at the Darwin Post Office and didn’t have a Stevenson screen; the postmaster had to move the thermometer so that direct sunlight didn’t shine on it, and a tree grew such that by the 1930s the site was shaded. In 1941 the station was moved to Darwin airport, hence the sudden change. Just as well, because the Darwin PO was destroyed in the Japanese bombing raid of 1942.
JohnWho August 26, 2014 at 6:41 am
“how is it that a station that is moved can be considered “the same station”. If it is taking readings from a different location than shouldn’t it be a new station?”
Mosh would applaud. That’s what BEST does (the scalpel). But it loses information. Because the two stations are very close, you expect them to have very close correlation. But if you throw that away and regard them as separate, information in the data has to be spent establishing the relation between them, losing degrees of freedom. Long records are very valuable for trends.
This example shows one facet of homogenisation. The three comparison stations aren’t in ACORN, not because of quality but because of short duration. But there is enough data to sort out what happened in 1980. That means we have a homogeneous 70-year series for that part of Qld, using stations that individually couldn’t provide it.
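For contrast, a minimal sketch of the “scalpel” idea attributed to BEST above (a simplification, not BEST’s actual code): rather than adjusting across a suspected move, cut the record there and treat the pieces as separate stations.

```python
def scalpel(series, break_indices):
    """Split one station record into independent segments at suspected
    discontinuities, instead of applying an offset across them."""
    cuts = [0, *sorted(break_indices), len(series)]
    return [series[a:b] for a, b in zip(cuts, cuts[1:]) if b > a]

# A 100-point record with a suspected move at index 70 becomes two "stations".
segments = scalpel(list(range(100)), [70])
print([len(s) for s in segments])  # [70, 30]
```

The trade-off described above is visible here: each segment must then be related to its neighbours from scratch, which costs degrees of freedom that a single long, adjusted record would retain.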
Nick, why is Amberley “wrong”?
Then don’t. Spatial averaging of temperatures (intensive properties) gives you nothing physically meaningful in return.
Define “very close”. The distance between where I work and where I live is about 13 miles as the crow flies. I’ve seen temperature differences between the two vary by as much as 27F. Which one should be adjusted and why?
Anthony, have you considered starting a reference page on “temperature adjustments” ? Please?
MarkW
August 26, 2014 at 5:39 am
Nick,
I love the way you assume that if a station doesn’t show what you believe it should show, it must be adjusted.
Make no effort to determine which station is accurate and which are being polluted. Just find the one station that doesn’t show what you want it to show, assume that it, not the others, is in error, and blindly adjust.
It’s better than actually studying the situation and determining the cause, isn’t it?
—
I looked at Nick’s analysis and also thought that his adjustment was quite arbitrary (no “calculation” involved – just guess a number and plot; hey, it looks “good”). In my opinion, UNLESS you have photographic and eyewitness confirmation of a station move, the data should NEVER be changed or “homogenized” in any way whatsoever! If you suspect the data is corrupt for some reason, you can choose not to use it and clearly state the reason for doing so when you present your temperature analysis. Any time trends are flipped by adjustments such as these (either warming or cooling), alarm bells should start ringing loudly…
Nick Stokes, without a written justification for the adjustment and without a clear method statement the adjustment is wrong.
It doesn’t matter whether the actual temperature rose, fell or stayed the same.
The data has been corrupted. It is now meaningless.
Good work in trying to replicate the adjustment and so find meaning… but you can’t tell if that is what happened.
And you can’t tell if that is why the adjustment was made.
So the data is still corrupted.
Adjustments are made for spatial averaging. We aren’t interested in the microclimate.
“We want to use Amberley as representative of a large area around it.” <——
you can't do this….
A thermometer records the temperature at its own location. Nothing more, nothing less.
You are just assuming “something” was odd. You can’t prove it. You’re basically guessing that “something” was wrong with the data.
There is absolutely no justification for changing or adjusting the raw temperature.
“The data is what it is for that particular combination of micro-sites.
Adjustments are made for spatial averaging. We aren’t interested in the microclimate. We want to use Amberley as representative of a large area around it.”
So when we look at a graph of monthly minimum temps, ‘deseasonalised’, from a particular combination of micro-sites that happen to be all over the joint where the ancestors decided to plonk themselves, we can see some squiggly coloured lines that deviate quite a bit from each other and yet move somewhat in unison. Yeah, I get that this is not the equator and Antarctica we’re talking about here, but Mount Glorious isn’t in any of those places either, so obviously we need to give it the flick and stick with our evenly gridded temp stations to get a proper spatial average for adjusting the odd outlier or problem thermometer.
Err, no, wait a bit: the temp stations haven’t been chosen on a nice a priori grid like a surveyor would use taking levels for a contour map. Just a few levels hundreds of kilometres apart, and he’s missed Mt Glorious. Not to worry, our intrepid surveyor Nick reckons he’s got it all sorted with some clever averaging, and there are no mountains to climb or deep ravines to fall into here, folks, just a nice gentle incline as far as his eye can see.
The global temperature is a poor way to determine whether the temperatures we measure at individual stations are trending upwards as predicted by AGW, because it is too easy. If all the stations were to increase over time (which we don’t observe), or if we always observed a high temperature record when conditions are ideal for this to happen (which we don’t see), then we would have evidence that measured temperatures were trending upwards.
They are called undocumented station moves. Happens all the time.
Some of the most unreliable data folks have is the metadata.
So you are faced with a situation.
You have the time series of multiple stations.
You have incomplete and often unverifiable station metadata that may or may not accurately record the location and the instrument.
You compare the stations and find that one sticks out like a sore thumb.
What you DON’T HAVE is answers. You have choices:
1. Assume the metadata is correct and that somehow, over decades, one region has a cooling trend while surrounding it you have warming trends. Or the opposite: you see one site with a skyrocketing trend while all around it the world cools. You try to make thermodynamic sense of this? Huh? How could one little pocket of the world warm by 2C while all around it cooled? Or how could one place cool while trends around it warmed? Hard to make thermodynamic sense of that.
2. Assume that the metadata is incorrect or incomplete and create a field based on that assumption.
How do you decide between these choices?
Well, for number 1, the first thing you do is update the metadata so you don’t repeat the problem in the future. And you watch the sites that exhibited this weird behavior. You also would try to develop a physical theory that explains how a patch of earth can cool for decades while, a few km away, things warmed.
This is not done by “hand waving”. I’ve spent a considerable amount of time looking at “cooling” stations.
I can say this:
1. The phenomenon occurs in two places: the US and Australia.
2. There is no unique geography that all of these sites share.
3. In the case of the US, the cooling is associated with station moves (if one trusts the metadata).
For number 2, you can simply compute the field under two cases. Case number 1: no adjustment. Case number 2: adjusted. Then look at your global result, which is all that matters. What you find is that the global answer doesn’t change. You might have the local detail wrong, but in the big picture the global answer doesn’t change.
Bottom line: you get the same global answer whether you include cooling stations or not, whether you adjust them or not. Given the absence of any physical mechanism to explain how one patch of earth can cool while the rest warms, given that the metadata record is not God’s word, and given that the global answer doesn’t change, it is reasonable and justifiable to apply Occam’s razor and assume that the metadata missed a station move. It’s the simplest explanation.
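A toy version of the “compute the field both ways” check described above (synthetic data, not BEST’s or anyone’s actual pipeline): average a network with and without repairing one station that has an undocumented step, and compare the global trends.

```python
import numpy as np

rng = np.random.default_rng(7)
n_stations, n_years = 200, 60
years = np.arange(n_years)

# Synthetic network sharing a 0.01 deg/yr warming signal plus noise.
field = 0.01 * years + rng.normal(0.0, 0.3, size=(n_stations, n_years))

# One station suffers an undocumented move: a -2 C step halfway through.
raw = field.copy()
raw[0, n_years // 2:] -= 2.0

# Adjusted case: the step in that one station is removed.
adjusted = raw.copy()
adjusted[0, n_years // 2:] += 2.0

for name, data in [("raw", raw), ("adjusted", adjusted)]:
    slope = np.polyfit(years, data.mean(axis=0), 1)[0]
    print(f"{name:>8}: global trend = {slope:+.4f} deg/yr")
# With 200 stations, one corrupted record barely moves the global trend.
```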
Reminds me of the tale the now-retired civil engineer bro could tell about the not-very-old rural home he got to urgently assess that was suddenly collapsing in the middle. Nice rural land you have here, folks, but sorry, no-one is to enter the home again, because it’s built smack bang over an old abandoned copper mine and, oops, the footings core boys seem to have missed that at the time.
It was bulldozed into your cute average hole in the ground, in case you were wondering, Mr Stokes, et al.