BoM's bomb on station temperature trend fiddling

From Jo Nova: BOM finally explains! Cooling changed to warming trends because stations “might” have moved!

It’s the news you’ve been waiting years to hear! Finally we find out the exact details of why the BOM changed two of their best long-term sites from cooling trends to warming trends. Massive, inexplicable adjustments like these have been discussed on blogs for years. But it was only when Graham Lloyd advised the BOM he would be reporting on this that they finally found time to write three paragraphs on specific stations.

 

Who knew it would be so hard to get answers? We put in a Senate request for an audit of the BOM datasets in 2011. Ken Stewart, Geoff Sherrington, Des Moore, Bill Johnston, and Jennifer Marohasy have also separately been asking the BOM for details about adjustments on specific BOM sites. (I bet Warwick Hughes has too.)

The BOM has ignored or circumvented all these, refusing to explain in detail why individual stations were adjusted. The two provocative articles Lloyd put together last week were “Heat is on over weather bureau” and “Bureau of Meteorology ‘altering climate figures’”, which I covered here. This is the power of the press at its best.

more here: http://joannenova.com.au/2014/08/bom-finally-explains-cooling-changed-to-warming-trends-because-stations-might-have-moved/

TedM
August 26, 2014 1:42 pm

It is not a matter of the station possibly having been moved. Former BOM employees confirm that the station has not been moved; repeat, not been moved.
http://joannenova.com.au/2014/08/bom-claims-rutherglen-data-was-adjusted-because-of-site-move-but-it-didnt-happen/

rgbatduke
August 26, 2014 2:44 pm

“Have you looked at the graphs that I showed? The need for the adjustment is obvious.”

Did you read anything at all that I wrote?
And clearly, we have very different meanings for the word “obvious”.
There are three issues here: a) identifying a “rejectable” data outlier on the basis of some objective statistical criterion; and b) instead of rejecting it as an outlier, claiming that you can fix it; and c) once you’ve fixed it, including it in the data averages and error estimates with the same weight.
a) is basically impossible. As the wikipedia page on data outliers points out, identification of outliers is an essentially subjective process (by which they mean that one cannot justify it with the a priori application of statistical principles, one has to at some point make a subjective choice as an implicit Bayesian prior). If one makes the usual assumption of a smooth unimodal normal distribution, one basically cannot possibly argue that given four samples as in your graphs above, one of those samples qualifies as an “outlier” simply because the trends do not agree. You don’t a priori know what the correct trend is, or what the correct variance should be, for the four sites you select. Consequently, you cannot make a quantitative statement for how likely it is that your correction is correct, or what the distribution of reasonable corrections might be.
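A toy illustration of how weak any outlier call is with only four stations (made-up numbers, not the BoM’s method or the calculation behind the graphs): give four stations the identical true trend, add independent noise, and see how far the most extreme fitted slope can sit from the other three.

import numpy as np

# Toy simulation: four stations share one true trend; how extreme can the most
# deviant fitted slope look purely by chance?  All numbers here are made up.
rng = np.random.default_rng(0)
years = np.arange(60)                         # 60-year records
true_trend, noise_sd = 0.01, 0.5              # degC/yr and degC (assumed values)
max_dev = []
for _ in range(10000):
    data = true_trend * years + rng.normal(0.0, noise_sd, (4, years.size))
    slopes = np.array([np.polyfit(years, s, 1)[0] for s in data])
    # studentized deviation of the most extreme of the four slopes
    max_dev.append(np.max(np.abs(slopes - slopes.mean())) / slopes.std(ddof=1))
max_dev = np.array(max_dev)
print("median deviation of the most extreme slope:", round(float(np.median(max_dev)), 2))
print("fraction of trials exceeding 1.4 sample SDs:", round(float((max_dev > 1.4).mean()), 3))

With four values the most extreme one can never sit more than (n-1)/sqrt(n) = 1.5 sample standard deviations from their mean, so no conventional threshold test can flag it with any force; the call is subjective, exactly as argued above.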
b) OK, so according to your subjective beliefs, it is an outlier even with only four samples. So reject it. Don’t claim that you can fix it, and acknowledge that in doing so, you cannot really shrink the variance as much as you would like. The existence of the outlier and the lack of metadata means that you cannot be certain that you understand either why it is different or why it appears perfectly reasonable and well-formed and yet cannot be right. Your evidence that it isn’t right is weak at best and can equally easily be interpreted as evidence that the other three sites are deviating the other way systematically from a “true” behavior somewhere in between. In this case just going from four samples to three is going to substantially lower N-1 (from 3 to 2) but again, the resulting sample standard error is going to be too large as it fails to account for the Bayesian prior probability that your rejection is in fact justified. It might not be, and you cannot be certain that it is.
c) But whatever you do, don’t try to fix it! This adds several more degrees of freedom — the “fit parameters” of your fix, in this case two independent numbers. You now have several Bayesian (subjective) assumptions — that the rejected data is in fact an outlier, that the other data is in fact accurate, that the rejected data is an outlier for a specific, modellable cause (less likely than it is an outlier for any possible cause) and that the rejected data can be “fixed” by optimizing the model-adjusted data against the remaining unrejected data. This process by definition is going to (on average) preserve the mean trend of the unrejected data, since that (or something extremely similar) is the criterion you optimize against. If you then use the fixed data to form the mean/determine the trend it is a self-fulfilling prophecy from the unrejected data — you merely affirm your subjective beliefs by optimally fitting the rejected data to conform to them. This is fine, but you can hardly blame people for not agreeing with your subjective beliefs.
The problem arises when one tries to form the standard error from the data including the fixed data. That data came with a triple price tag of Bayesian assumptions, each one of which should have reduced your certainty that your fix was correct. It certainly no longer counts as an “independent and identically distributed” sample from the point of view of evaluating standard error. You have gained nothing in the way of certainty along the way. You’ve simply substituted a subjective decision to deliberately reduce the variance of your samples around the original mean, the one that conforms to your subjective expectations, for the objective variance of the actual samples around an intermediate mean, or the objective variance of the reduced number of unfixed samples around their even less reliable mean.
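A toy comparison makes the variance point concrete (made-up numbers, purely illustrative): take four hypothetical neighbouring trends and compute the mean and standard error three ways: keeping all four as measured, dropping the supposed outlier, and “fixing” it to conform to the other three.

import numpy as np

# Hypothetical decadal trends (degC/decade) for four neighbouring stations;
# the last one is the series someone wants to call an outlier.
slopes = np.array([0.12, 0.15, 0.10, -0.08])

def mean_and_se(x):
    return x.mean(), x.std(ddof=1) / np.sqrt(len(x))

cases = {
    "keep all four as measured": slopes,
    "reject the outlier": slopes[:3],
    "'fix' it to the other three": np.append(slopes[:3], slopes[:3].mean()),
}
for name, values in cases.items():
    m, se = mean_and_se(values)
    print(f"{name:28s} mean = {m:+.3f}   standard error = {se:.3f}")

The “fixed” case reports the smallest standard error of the three even though it rests on the most layers of assumption, which is the self-fulfilling shrinkage described above.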
The point being that your corrections could be correct. It might even be the case that they are probably correct, although I think you’d be hard pressed indeed to make that a quantitatively defensible assertion (which all by itself should give you pause, by the way — perhaps one can, as Mosh asserts, use global metadata from many other sites to justify a posterior model used to “fix” the data, but that seems very dicey to me as it makes a lot of assumptions about local spatial homogeneity of temperature trends that I’d be very skeptical about just based on looking at temperature measurements sampled at different sites in my own back yard over time, let alone sampled in different yards tens of miles or more apart).
rgb

Reply to  rgbatduke
August 26, 2014 3:15 pm

rgbatduke commented

The point being that your corrections could be correct. It might even be the case that they are probably correct, although I think you’d be hard pressed indeed to make that a quantitatively defensible assertion (which all by itself should give you pause, by the way — perhaps one can, as Mosh asserts, use global metadata from many other sites to justify a posterior model used to “fix” the data, but that seems very dicey to me as it makes a lot of assumptions about local spatial homogeneity of temperature trends that I’d be very skeptical about just based on looking at temperature measurements sampled at different sites in my own back yard over time, let alone sampled in different yards tens of miles or more apart).

The really devastating point is that if you don’t do all of this hacking to the only data we have, the results are different!

Jared
August 26, 2014 2:54 pm

Pretty amazing how Mosher and Stokes emphatically claimed the site moved. Then we get two people who worked there, and both say it did not move. WHOOPS. The real world isn’t a flawed computer formula. Fix your formula, because it is downright horrendous. Are you going to fix your horrendous formula, Mosher or Stokes? I doubt it, as you both seem to believe your formula is God even though it once again completely whiffed at real-world analysis.

August 26, 2014 2:55 pm

Why was the change only in the slope/trend? There should have been a step change when the adjustments started… no?
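For what it’s worth, a step adjustment applied to only part of a record does change the fitted linear trend, not just its level; a minimal sketch with made-up numbers:

import numpy as np

years = np.arange(1940, 2000)
raw = np.full(years.size, 20.0)            # a flat record with no real trend
adjusted = raw.copy()
adjusted[years < 1980] -= 1.0              # step the pre-1980 segment down 1 degC

print("raw trend      (degC/decade):", round(10 * np.polyfit(years, raw, 1)[0], 3))
print("adjusted trend (degC/decade):", round(10 * np.polyfit(years, adjusted, 1)[0], 3))

The adjusted series picks up about 0.2 degC/decade of trend from nothing but the step, so a step correction to the early part of a record shows up as a change in the overall trend rather than as a visible jump in the trend line.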

August 26, 2014 2:59 pm

Maybe there was no step change because the site slid slowly down the hill. So slowly the folks that worked there didn’t notice that it had moved.

Nick Stokes
August 26, 2014 3:23 pm

Duster August 26, 2014 at 1:11 pm
“What bugs me isn’t the homogenization so much as the apparent alteration of individual station records. Homogenized data should only be used to create a separate table of data entirely, area weighted, and not linked to any specific station(s). The resulting records can be tied to a centroid defined by the polygon delimited by the locations of the various stations used in the local homogenization.”

That’s pretty much what is done. People here complain the BoM and others are altering the record. In fact what they produce is a separate, clearly announced adjusted file, and for some reason sceptics are drawn like moths to a flame and don’t even want to know about the unadjusted data. BoM announced a specific set, ACORN, which is intended for the area use you describe, which is basically a spatial integration.
You say that the station name should not then be used. Well, what’s in a name? I actually agree with you; keeping the name is easier for them to remember, but causes more trouble than it is worth.

Nick Stokes
August 26, 2014 3:25 pm

Jared August 26, 2014 at 2:54 pm
“Pretty amazing how Mosher and Stokes emphatically claimed the site moved. Then we get 2 people that worked there and both say it did not move. WHOOPS.”

Different place

August 26, 2014 3:30 pm

I see the usual suspects are here arguing that changing the recorded data to suit their warmist religion is the only way to do it. It is astounding to see people defend blatant wrongdoing and pretend that they are “doing science” when they are destroying the very idea of science.
Karma, my friends. One hopes they get what they deserve.

pete
August 26, 2014 3:33 pm

Mosher, you have another choice: you can simply discard the dodgy data instead of making adjustments based on unknowns.
That’s what a high-quality statistical analysis would do in any case. But then again, a high-quality analysis wouldn’t try to create a meaningless single “global temperature” statistic in the first place…

Nick Stokes
August 26, 2014 3:47 pm

rgbatduke August 26, 2014 at 2:44 pm
“c) But whatever you do, don’t try to fix it! This adds several more degrees of freedom — the “fit parameters” of your fix, in this case two independent numbers. “

I don’t think you are taking account of the purpose of homogenization. It’s for spatial integration. Most degrees of freedom will disappear.
We aren’t trying to “fix” Amberley. We’re trying to get a series that is representative of the subregion, for integration. So when there is an outlier that looks as if it is caused by something that isn’t related to the region climate, we switch to relying on other data for some period in that sub-region. I agree with Duster that it would be better to give it another name.
“b) OK, so according to your subjective beliefs, it is an outlier even with only four samples. So reject it. “
Mine is just a rustled up calc – I’m sure BoM would use more than four.
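A much-simplified sketch of the kind of neighbour-comparison adjustment being described (illustrative only, not the BoM’s actual ACORN procedure): difference the candidate against a composite of its neighbours, estimate the step across a suspected break, and offset the earlier segment before any spatial averaging.

import numpy as np

def adjust_to_neighbours(candidate, neighbours, years, break_year):
    """Offset the candidate before break_year so its difference from the
    neighbour composite shows no step across the break.  Illustrative only;
    real homogenisation detects the break statistically rather than being told."""
    composite = neighbours.mean(axis=0)
    diff = candidate - composite
    step = diff[years >= break_year].mean() - diff[years < break_year].mean()
    adjusted = candidate.copy()
    adjusted[years < break_year] += step
    return adjusted, step

# Made-up example: three neighbours share a slow warming signal; the candidate
# has the same signal but its pre-1980 readings sit 1 degC warmer (say, from an
# undocumented move from a warmer spot).
rng = np.random.default_rng(1)
years = np.arange(1950, 2010)
signal = 0.01 * (years - years[0])
neighbours = signal + rng.normal(0, 0.2, (3, years.size))
candidate = signal + rng.normal(0, 0.2, years.size)
candidate[years < 1980] += 1.0

adjusted, step = adjust_to_neighbours(candidate, neighbours, years, 1980)
print("estimated step (degC):       ", round(step, 2))
print("raw trend (degC/decade):     ", round(10 * np.polyfit(years, candidate, 1)[0], 2))
print("adjusted trend (degC/decade):", round(10 * np.polyfit(years, adjusted, 1)[0], 2))

In this toy world the adjustment turns a cooling trend into the neighbours’ warming trend, and is justified only because the break was put in by hand; the argument above is about whether one can ever know that with real data.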

Reg Nelson
August 26, 2014 3:58 pm

What adjustments were made for the UHI effect, which is a far better known, better documented, and more widespread problem?
Truth is a two way street.

1sky1
August 26, 2014 4:18 pm

Nick Stokes claims: “We’re trying to get a series that is representative of the subregion, for integration.”
Pray tell, by what magic can a series that diverges materially from those at neighboring stations be made “representative” of an uncircumscribed “subregion” whose spatially integrated temperature field is unknown? Such grossly non-conforming series should be simply discarded! But that would leave even greater gaps in usually sparse geographic coverage. Let’s get real: the patent purpose of ad hoc “homogenization” schemes is to maintain the pretense of adequate coverage, while introducing a surreptitious means of manipulating the “trend.”

Reply to  1sky1
August 26, 2014 11:31 pm

AGREED! Of course agreed!

August 26, 2014 6:02 pm

[snip – be nice- Anthony] like Nick Stokes don’t actually live in the real world. I, as a farmer with a science degree, am able to apply theory and reject it when it does not match reality. On my smallholding I have three thermometers in three different locations, all within 500 metres of each other – they vary by as much as 15% on some days. Last summer there was a day when the exposed thermometer near my sheds and cement was 44 degrees Celsius, whilst the one about 250 metres away near the septic water disposal area was 39 degrees Celsius (and not in shade), whilst the third thermometer near the horse stables was 41 degrees. All had been calibrated within the last three months. That’s just one example of many.

Nick Stokes
August 26, 2014 6:13 pm

Tom In Indy August 26, 2014 at 10:07 am
“Was Samford adjusted downward? If so, cheers. If not, why not?”

Samford has never been adjusted. The only adjustment done is in preparing the special ACORN set for spatial averaging.

August 26, 2014 6:44 pm

The current font doesn’t display well.

Curious George
August 26, 2014 9:28 pm

Nick: “So when there is an outlier that looks as if it is caused by something that isn’t related to the region climate, we switch to relying on other data for some period in that sub-region.”
It sounds rather subjective to me. Who evaluates the looks?

August 26, 2014 11:14 pm

@Duster: Thank you for the response. Mosher slices in “poorly sited” stations that don’t show enough warming or show cooling. Way back in 2013, I responded to the nonsense!
Mario Lento at 12/13 3:27 pm
@Steven Mosher at 12/12 10:53 pm
Mosher wrote “Regarding “3. Next I wanted to use methods suggested by skeptics””
+++++++++++++
When did skeptics say the stations with poor siting should be subjectively sliced and added to [the] mix so their warming could fit the narrative?
Neither BEST nor you [Mosher] have ever honestly addressed why, if only urban areas show warming while rural areas don’t, you could slice in the poorly sited urban stations so that their [so-called] “crap” values warm the entire temperature record.
++++++++++
BEST looked for a presumed conclusion and then invented improved data to prove they were right all along. I find it so sad that smart people can collude so disingenuously.

August 26, 2014 11:22 pm

Steven Mosher December 12, 2013 at 12:18 pm
There are no adjustments.
There is the raw data if you like crap.
There is QC data.
There is breakpoint data.
Then there is the estimated field.
We don’t adjust data. We identify breakpoints and slice.
Then we estimate a field.
+++++++++++++
And here he admits what he does! In summary: the value in BEST’s conclusions is based on data which begins as “crap”, gets adjusted by three different (trusted?) sources, and, for value-added BEST science, is then sliced and estimated so it can be served with conclusions that include “CO2 accounts for the warming”. This does not sound like science, but like politics. Shame!

August 26, 2014 11:28 pm

Mosher says:
“Well, for number 1, the first thing you do is update the metadata so you don’t repeat the problem in the future. And you watch the sites that exhibited this weird behavior. You also would try to develop a physical theory that explained how a patch of earth can cool for decades while a few km away things warmed.”
++++++++++++
Let me fix this for you, with my emphasis in [brackets]
Well, for number 1, the first thing you do is update the metadata so you don’t repeat the problem [of unexplained cooling] in the future. And you watch the sites that exhibited this weird behavior [that does not show the warming we expected]. You also would try to develop a physical theory that explained how a patch of earth can cool for decades while a few km away things [affected by UHI] have warmed. [You know the warming is expected, so these stations will be used to fix the cooling.] [The results can be used to prove that only warming can occur, because of CO2 forcing.] [This is settled science after all.]

August 26, 2014 11:30 pm

Steven Mosher August 26, 2014 at 8:25 am
They are called undocumented station moves. Happens all the time.
++++++++++
This needs fixing:
Steven Mosher August 26, 2014 at 8:25 am
They are called [ILLEGAL] station moves. Happens all the time.
[there it’s politically incorrect now]

Solomon Green
August 27, 2014 5:59 am

Nick Stokes
“The BoM got this one right. The need for the adjustment is very clear from neighbouring stations. I’ve done the analysis here”.
I went to Nick Stokes’s site and was impressed by his work and his replies to bloggers. What I did not find, however, was any evidence as to why the raw data for Amberley, which was out of line with the other three stations he selected, fell into line in August 1980. It is all very well supposing that there was a station change but what physical evidence is there to support the supposition? For example, perhaps the other three had station changes at about that time and Amberley was the only one that did not. Highly unlikely, I know, but it is a hypothesis that needs to be disproved.
One thing that any half-decent statistician learns is never to discard an outlier without first finding out why it is an outlier. The second is never to try to adjust the outlier to make it fit the pattern, even once one has found the reason why it is an outlier – just do not use the data.
Interpolation can destroy information. One can (and I have in the past) fiddle(d) many figures through judicious interpolation.
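A small illustration of how interpolation can quietly destroy information (made-up numbers): linearly infill a gap that happens to contain a genuine cold spell, and the monthly mean silently warms.

import numpy as np

days = np.arange(30)
true_min = np.full(30, 15.0)
true_min[10:17] = 5.0                  # a real week-long cold snap
observed = true_min.copy()
observed[10:17] = np.nan               # suppose those readings were discarded

filled = observed.copy()
gap = np.isnan(observed)
filled[gap] = np.interp(days[gap], days[~gap], observed[~gap])

print("true monthly mean minimum:        ", round(true_min.mean(), 2))
print("interpolated monthly mean minimum:", round(filled.mean(), 2))

The filled series looks perfectly smooth and plausible, which is exactly why this sort of fiddle is hard to spot afterwards.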

rgbatduke
Reply to  Solomon Green
August 27, 2014 10:50 am

Yeah. What I said too. And even discarding outliers when you THINK you know a reason is deadly dangerous, unless you are gifted with perfect prior knowledge.
The bete noire of empirical human reason is confirmation bias. There are some truly (in)famous examples of confirmation bias (and its statistical companions, data dredging, cherrypicking, etc.) producing horrendous conclusions that were eventually soundly rejected when somebody sane and unbiased re-examined the problem.
Rejecting outliers — especially outliers that fail to conform to a belief about the way the data should behave — can easily be data dredging and cherrypicking both in disguise. The only “safe” way to do it is via an unbiased algorithm that works completely automatically and that can be shown in application to simulated data to actually work without bias (not just theoretically be without bias). Mann’s hockey stick was built via a selection process that rejected data as “unsuitable” or an “outlier” if it failed to conform to his prior beliefs about what tree rings were supposed to proxy. Again, there was a huge Bayesian assumption built right into the algorithm, one that took simulated random noise and turned it into hockey sticks and that would have been soundly rejected as a posterior probability if anybody had bothered to do this sort of analysis ahead of time.
It also — literally — overrides the very power of statistics that one hopes to exploit when forming averages in the first place. The default assumption in most experimental science is that when one makes measurement errors, they are as likely to be too large as too small. Hence if one samples many times, one homes in on the true mean. In some cases, one can even support this assumption via the central limit theorem if the measurements themselves are in some sense an “average” of some microscopic process and hence are already likely to be normally distributed according to the CLT.
There are, of course, examples of cases where a measurement has a systematic bias. Somebody always rounds down, never up. A piece of dynamic measurement apparatus is “sticky” and never reads as high as it should. The problem even there is that it is supremely dangerous to assume that you know how to correct it. Perhaps another person always rounds up, never down! Perhaps a different apparatus in a different location is sticky the other way and never reads as low as it should.
The classic example of this in climate science is UHI. Note well: HADCRUT4, IIRC, does not correct for the UHI effect at all. GISS — from what I’ve read — has invented a “UHI correction” that actually warms the urban present compared to the rural past more often than it works the other way, which makes absolutely no sense, since the UHI should nearly always introduce a warming bias compared to non-urban (but nearby) sites. UHI is a specific example of an entire class of systematic biases that can result from weather station siting — another one is the horrendous placement of official weather stations at airports, often right next to concrete runways and underneath air that contains many times the average concentrations of CO_2 and H_2O simply because enormous jets burn thousands of gallons of kerosene every few minutes directly overhead as they take off and land. Again this is almost invariably a warming effect — active hot greenhouse gas production concentration plus solar heated runways a meter or so thick surrounded by shops, parking lots and car-filled expressways in no way compare to evaporatively cooled grassland surrounded by evaporatively cooled trees. All ignored in HADCRUT4 and turned into more warming of the present compared to the past in GISS.
This warming bias can be seen at a glance on e.g. Weather Underground’s own personal weather station maps, in spite of the mediocre precision/accuracy of over-the-counter personal weather stations. I see it every day — the predicted weather (predicted to conform to the area “official” readings at RDU airport) is invariably 1-3 F warmer than the weather in my own back yard, or the temperatures reported by the many PWS in the surrounding rural countryside. The PWS temperatures in town are similarly warmer by a degree or two. You can even clearly identify hot side outliers — one sits only a mile or so from my house — where a PWS is systematically 4 or 5 F warmer than anyplace else, usually even RDU. I’m guessing that the PWS sits square over a south-facing driveway or that the thermometer is directly exposed to the sun. A second PWS, less than a mile from that one, reads numbers that are in reasonable agreement with my backyard and the general field of readings.
Now consider — suppose one has four “official” weather stations that one wants to keep in the official record (perhaps because they have long-running records). Two of them are located at airports, one in a town, and the fourth is in an area park in the comparatively rural country. The two cities have, over the lifetime of the temperature record, gone from populations of a few thousand people riding horses to a few hundred thousand people driving cars to and from the houses that have covered the landscape over 100 square miles around. The airports have gone from handling a propeller-driven flight or two a day on a single runway to becoming local hubs servicing hundreds of flights with hundreds of acres of tarmac and runway where maybe ten or twenty acres was all that was paved at the beginning.
Along comes the automated “data homogenizer”. It notes that three of these sites have experienced substantial warming, and the fourth is neutral or maybe even cooled a bit. Aha! it thinks. An outlier! It rejects it, or worse, corrects it, and concludes that ten thousand square miles of surrounding countryside warmed like the cities simply because the thermometric record has a really, really significant urban bias today, and the major “official” anomaly computations correct the wrong way by both ignoring the UHI altogether and by “homogenizing” the record so that dissident sites — which might be the only ones reading the correct average temperature in spite of being outnumbered — are rejected in favor of the sites that outnumber them but have a common systematic bias.
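This scenario is easy to put in toy form (entirely made-up numbers, and a deliberately naive “homogeniser”, not any agency’s actual algorithm): suppose the regional climate is flat, three of the four stations accumulate an urban/airport warming bias, and the lone rural station is then forced to conform to the majority.

import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1900, 2000)
t = years - years[0]

rural = rng.normal(0, 0.3, t.size)                                   # no real trend
urban = np.array([0.01 * t + rng.normal(0, 0.3, t.size) for _ in range(3)])  # biased warm

def trend(series):
    """Fitted linear trend in degC/decade."""
    return 10 * np.polyfit(years, series, 1)[0]

# Naive homogenisation: remove the trend by which the dissenter departs from
# the majority composite, i.e. force it to agree with the neighbours.
composite = urban.mean(axis=0)
drift = np.polyval(np.polyfit(years, composite - rural, 1), years)
conformed = rural + drift

print("station trends (degC/decade):", [round(trend(s), 2) for s in [rural, *urban]])
print("regional trend, rural kept as measured:", round(trend(np.vstack([rural, urban]).mean(axis=0)), 2))
print("regional trend, rural homogenised:     ", round(trend(np.vstack([conformed, urban]).mean(axis=0)), 2))

In this made-up world the true regional climate trend is zero; both averages show warming because the urban bias dominates either way, but homogenising erases the one record that carried any evidence of the bias at all.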
As I said way up at the beginning, somewhere (or perhaps on a different thread) — it is almost impossible to tell what the temperature is today compared to the temperature thirty, fifty, a hundred, two hundred years ago, on the basis of the thermometric record. We (apparently) cannot agree on how to handle the dominant source of systematic error in this record — the relentless urbanization of the places where thermometers, including/especially the most “official” government run thermometers with the longest running records, are located. The breaks visible when (some) of those thermometers are moved are pure evidence that they were not consistently reliable on either side of the breaks, nothing more. Many of the changes that affect their consistency are simply not visible in “metadata” — they aren’t discrete changes that come from resiting or poor siting, they are changes that come about because of gradual changes in the entire surroundings of the siting, slow changes that one cannot observe, measure, or correct for other than to note that we “expect” most of those changes to result in warming from many things — including, BTW, increased atmospheric CO_2. But some of that might be local increases. Some of it might come from alteration of groundwater retention as forestland is converted to farmland is converted to suburban backyards surrounded by shopping and business centers and expressways with acre after acre of pure asphalt. A nontrivial fraction of the land surface area of the U.S. (for example) is pure pavement, especially in selected urban zones.
I read HADCRUT4 and GISS as being “the temperature anomaly, relative to an arbitrary set point evaluated in the present and accurate to no more than 1C either way, as computed by a biased model that ignores or enhances UHI warming by some unknown amount and presented without any error bars as if it is a simple fact, a fait accompli, beyond question or doubt”. I then mentally subtract a guesstimate for the ill-compensated UHI trend of a few tenths of a degree per century, add error bars that start at HADCRUT4’s acknowledged modern error of 0.15C and scale up smoothly with time into the past to end up close to maybe 0.5 to 1.0C by the mid-19th century, where the error starts being constrained by independent proxy measurements and a string of plausible but possibly mistaken assumptions as much as by the lack of thermometers. Remember, over well over half of HADCRUT4 we basically knew next to nothing about the sea surface temperature of the oceans and much of the land area of the major continents.
By the time you properly dress the corrected curves with error bars, it is actually rather difficult to be certain it has warmed at all on a century timescale. Leif has indicated how they have systematically fixed systematic biases in the sunspot record. This latter record was made by responsible, competent scientists! Metadata was insufficient to make the correction, and even the corrected sunspot record doesn’t correctly reflect the absolute state of the sun! They were only able to manage this because they had four independent ways to measure solar state — not four sets of sunspot measurements, four independent methods — which was enough to use some to check the others and correct the sunspots. Suddenly solar activity no longer has a grand maximum.
How improbable is it that we will eventually manage the same thing with the thermal record? How much of the “grand maximum” of temperature in the latter 20th century is a mix of systematic relative bias in the high frequency, highly accurate modern measurements compared to the less accurate and more sparse measurements made in the past, the uncorrected UHI effect corrupting the station data, the utter neglect and mistreatment of the SST component responsible for 70% of the Earth’s surface and the cavalier assumption that we have any knowledge at all about the temperatures in, say, Antarctica prior to the very recent past, if indeed we know them now?
It won’t come easy, and it won’t come soon. Leif comments on how hard it is to get “grand maximum” solar researchers to quit because their funding depends on it now, even after the notion is pretty thoroughly rejected. GISS was under the tutelage of James Hansen for most of its evolution, and neglect of UHI is the least of its sins — GISS’s funding today is almost entirely a result of the predicted progressive grand maximum in global temperature. It just didn’t exist until he talked the US Congress into it after addressing them in a meeting with the Capitol building air conditioning deliberately turned off, and until an unknown person named Michael Mann wrote his own “special” version of PCA code that could turn a single series of bristlecone pine records from one part of the US into an international multicentury hockey stick. At this point, the record is so muddled that the only thing that could motivate an objective re-treatment of the data by (newly) objective researchers is a stretch of blatant and inexplicable global cooling, cooling too large to be “corrected away” in GISS or HADCRUT, cooling that is openly constrained by the harder-to-futz satellite LTT measurements and ARGO. Or perhaps enough years without much warming.
In the meantime, who really knows what the temperature anomaly is compared to 1850? We see assertions that it is 0.7C or thereabouts (sometimes even higher) but that neglects the error both systematic and not. Throw in the error and it might not have warmed much at all or it might have warmed by far more, 1.5 C, say. Arguably, the systematic error would make less warming more likely than more warming, especially in the recent past compared to the intermediate past. Was the latter half of the 20th century really a lot warmer than the first half? It’s hard to say. Just about exactly 1/2 of the state high temperature records were set in a single decade, the 1930s, in the first half of the century, with considerably less of a contribution from the UHI. Even with “global warming”, those records still stand well into the 21st century. Arctic ice was documented as almost disappearing during that same general period (although not documented as well given the lack of satellites or airplanes capable of overflying the Arctic).
In my opinion, we will not have a reliable picture of global temperatures prior to maybe 1950 or 1960 ever. A more conservative person might even push it up to the mid-to-late 1970s and the beginning of satellite measurements and make the reliable record only 30-40 years long. We are decades away from having enough data to say much of anything about the climate that might not be neglected or mistreated bias, measurement error and plain old statistical sampling error in the data and models used to estimate past temperatures, pardon me, past temperature anomalies since we know we cannot accurately estimate the global average temperature within more than about a degree now, in the modern era. This greatly complicates the science, the modeling, the politics, the economics. We very probably are trying to build models that cannot possibly work (based on what we know of chaotic nonlinear fluid dynamic systems and the limits on the scale of integration over the globe, limits on our knowledge of initial state, limits on our knowledge of many of the actual input numbers to those models) that are nevertheless being tuned to try to reproduce the output of other models that are being tuned to reflect each other and hence the collective bias of the people that are paid to build and maintain them because they show warming that the other models predict will eventually have a catastrophic impact.
Somewhere in there lies reality, but where? How would we even know? The bulk of “climate papers” are based on the predictions of models that cannot even reproduce the past or present temperature record outside of the tiny time period used as a reference/training set. The best thing to do is just wait. Time will tell. It usually does.
And in the meantime, it might be nice to take the infinitude of thumbs off of the scales, and not assume that we know the answer better than the data itself, well enough to krige, interpolate, homogenize, infill, backfill, and what the heck, just shift the data around wholesale for everything but a sane implementation of the UHI.
rgb

ripshin
Editor
August 27, 2014 6:34 am

Late to the game here, but I was too tied up yesterday to post…
So, if I understand correctly, with the specific example of Amberley, as well as many other “outliers”, an undocumented site move is suspected of causing the seemingly aberrant behavior. What follows is a statistical approach to resolving the data into something that makes sense with the wider regional expectations/observations. But, as stated, why would these site moves cause different trends? Shouldn’t they merely change the y-intercept of the trend line? Do we really expect a gauge at the bottom of the hill to demonstrate a different pattern than at the top? And, furthermore, if we do expect a different behavior, how can it be accurate to just assume you know what it is? Shouldn’t we, like, empirically test it?
Going back to the example of Amberley, my understanding is this physical site still exists. The suspected hill hasn’t been flattened. So, go stick another gauge on the hill and see what it does in relation to the one in the valley. Correlate the behaviors to each other. At least then you’d have some basis (other than your own confirmation bias) for adjusting.
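The parallel-gauge comparison is cheap to analyse once there is an overlap period; a toy sketch with invented readings, just to show the arithmetic:

import numpy as np

# Invented overlapping daily maxima from a hypothetical hilltop and valley gauge.
rng = np.random.default_rng(3)
shared_weather = 25 + 5 * np.sin(np.linspace(0, 4 * np.pi, 365))
hill = shared_weather + rng.normal(0.0, 0.8, 365)
valley = shared_weather + 1.5 + rng.normal(0.0, 0.8, 365)   # valley runs warmer

offset = valley - hill
print(f"mean valley-minus-hill offset: {offset.mean():.2f} +/- {offset.std(ddof=1):.2f} degC")
print(f"day-to-day correlation:        {np.corrcoef(hill, valley)[0, 1]:.3f}")

An adjustment grounded in a measured offset like that (carrying its scatter as an uncertainty) is a very different animal from one inferred statistically from distant neighbours.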
And before we exclaim in dismay that there are too many sites and records to do this, that it’s “just not feasible,” well, if the goal is simply academic, then I agree, probably not worth it. But if you’re advocating for massive societal changes that will literally impoverish millions, I think it’s reasonable to request a tougher standard. If you’re so worried that we’re responsible for sending the world into a death spiral, then get off your $$ and go do some field work. Stop just playing with numbers on a computer.
rip

SteveT
August 27, 2014 10:54 am

Nick Stokes
August 26, 2014 at 5:50 am
MarkW August 26, 2014 at 5:39 am
“Nick,
I love the way you assume that if a station doesn’t show what you believe it should show, it must be adjusted.”
The thing is, it should be adjusted. You want the best estimate of the temperature. If the data shows a sudden change that is outside the expected variation, taking account of the history and neighbors, then it is very likely to be due to a move or other event.
Moves happen. Sometimes an adjustment is wrongly made. But if the policy is right more often than it is wrong, then it should be followed. It’s better than doing nothing. And people like Menne do the statistics.
In this case the graphs alone show that the adjustment is getting it right.
*****************************************************************************
I haven’t had time to read the whole thread so this might have been raised already.
NO, we don’t want the best estimate of the temperature; what we want is the best temperature measurement. It may not be precisely accurate, but it is far more valid than an estimate based on a guess (probably a warmist guess at that).
If there is no meta-data support – NO Changes
SteveT

Nick Stokes
Reply to  SteveT
August 27, 2014 5:12 pm

SteveT,
“NO we don’t want the best estimate of the temperature, what we want is the best temperature measurement”
I could have put it better – best estimate of the temperature in the local region. Because when you say best measurement, the question is, of what? The measurements themselves are fine, but it’s a question of what to make of them. We don’t know that they represent the same point; we don’t know otherwise. We assume continuity of location because we have no evidence to the contrary. But data such as Amberley’s, relative to others, is evidence to the contrary.
And of course, location isn’t the only cause of inhomogeneity. A change of observation time for min/max would do it, though I suspect Amberley would not have used min/max.

KenB
September 1, 2014 10:26 am

I agree with RGB, and would go further: this needs the KISS principle applied. The record is what it is, and there is no need to artificially change anything, especially using an algorithm that assumes and applies its assumptions so that each specific period keeps changing. The simple thing is that most agree that over a century we have warmed slightly under a degree C; we also know from simple observation that the bigger cities have a UHI that can be three to four degrees or more above rural temperatures.
Those of us who have lived through many really hot summers in Australia know the difference between those hot summers and the last few years, which have been very mild indeed, apart from the BOM and CSIRO clamour when averaging or smearing desert heat to urban areas just to give them a half degree above a past historical “record”!! But then we find the historical records have been adjusted down and the modern temperature record subject to automatic adjustment.
Seems to me that in the cities the UHI SHOULD boost any record by at least four or more degrees, but that is just not happening. They struggle, by artificial adjustments, to concoct these claimed half-degree “record” hot temperatures, and ignore any acknowledgement of colder temperature “records”, and that in itself exposes the confirmation bias of those involved.
Then when you take a simple approach to global temperatures, it is apparent that world surface temperatures have not risen for 15 to 18 years, depending upon the global temperature set used, and the stupid thing is that the actual previous warming period is shorter than the present hiatus, which is likely to become a downward temperature trend.
Suggestion for Mosher and Stokes: p**ing into a gale-force wind on the good ship warming is now reaching a ludicrous stage, and it is time to pack it in and admit the obvious: the public don’t buy stupid propaganda even if it is dressed up, mixed up and homogenized by algorithm trickery!
