From the “told ya so” department comes this paper, recently presented at the European Geosciences Union meeting.
Authors Steirou and Koutsoyiannis, after taking homogenization errors into account, find that global warming over the past century was only about one-half [0.42°C] of that claimed by the IPCC [0.7-0.8°C].
Here’s the part I really like: in 67% of the weather stations examined, questionable adjustments were made to the raw data that resulted in:
“increased positive trends, decreased negative trends, or changed negative trends to positive,” whereas “the expected proportions would be 1/2 (50%).”
And…
“homogenization practices used until today are mainly statistical, not well justified by experiments, and are rarely supported by metadata. It can be argued that they often lead to false results: natural features of hydroclimatic time series are regarded as errors and are adjusted.”
The paper abstract and my helpful visualization on homogenization of data follows:
Investigation of methods for hydroclimatic data homogenization
Steirou, E., and D. Koutsoyiannis, Investigation of methods for hydroclimatic data homogenization, European Geosciences Union General Assembly 2012, Geophysical Research Abstracts, Vol. 14, Vienna, 956-1, European Geosciences Union, 2012.
We investigate the methods used for the adjustment of inhomogeneities of temperature time series covering the last 100 years. Based on a systematic study of scientific literature, we classify and evaluate the observed inhomogeneities in historical and modern time series, as well as their adjustment methods. It turns out that these methods are mainly statistical, not well justified by experiments and are rarely supported by metadata. In many of the cases studied the proposed corrections are not even statistically significant.
From the global database GHCN-Monthly Version 2, we examine all stations containing both raw and adjusted data that satisfy certain criteria of continuity and distribution over the globe. In the United States of America, because of the large number of available stations, stations were chosen after a suitable sampling. In total we analyzed 181 stations globally. For these stations we calculated the differences between the adjusted and non-adjusted linear 100-year trends. It was found that in the two thirds of the cases, the homogenization procedure increased the positive or decreased the negative temperature trends.
One of the most common homogenization methods, ‘SNHT for single shifts’, was applied to synthetic time series with selected statistical characteristics, occasionally with offsets. The method was satisfactory when applied to independent, normally distributed data, but not to data with long-term persistence.
The above results cast some doubt on the use of homogenization procedures and tend to indicate that the global temperature increase during the last century is between 0.4°C and 0.7°C, where these two values are the estimates derived from raw and adjusted data, respectively.
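For readers who want to see what the ‘SNHT for single shifts’ experiment looks like in practice, here is a minimal sketch of the idea, not the authors’ code: the test statistic is computed for plain white noise and for a persistent series, and the persistent case trips the test far more often even though no shift was inserted. The series length, number of trials, and the AR(1) surrogate used as a crude stand-in for long-term persistence are all illustrative assumptions.

```python
# Minimal sketch of the SNHT 'single shift' statistic applied to synthetic
# series (illustrative only; not the code used in the paper). The point it
# demonstrates: a test calibrated on independent, normally distributed data
# flags spurious "shifts" much more often when the data are persistent.
import numpy as np

def snht_max(x):
    """Max of T(k) = k*z1^2 + (n-k)*z2^2 over all split points k,
    where z1, z2 are means of the standardized series before/after k."""
    z = (x - x.mean()) / x.std(ddof=1)
    n = len(z)
    return max(k * z[:k].mean() ** 2 + (n - k) * z[k:].mean() ** 2
               for k in range(1, n))

def ar1(n, phi, rng):
    """AR(1) noise -- a crude surrogate for persistence here, not the true
    long-term (Hurst) persistence the paper actually discusses."""
    x = np.empty(n)
    x[0] = rng.standard_normal()
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.standard_normal()
    return x

rng = np.random.default_rng(0)
n, trials = 100, 1000

# Case 1: independent, normally distributed data (the test's home turf).
white = np.array([snht_max(rng.standard_normal(n)) for _ in range(trials)])
threshold = np.quantile(white, 0.95)          # empirical 95% critical value

# Case 2: the same test on persistent data, with no shift inserted at all.
persistent = np.array([snht_max(ar1(n, 0.7, rng)) for _ in range(trials)])

print("false-alarm rate, white noise:", np.mean(white > threshold))      # ~0.05 by construction
print("false-alarm rate, persistent:", np.mean(persistent > threshold))  # considerably higher
```

The excess false alarms in the persistent case are exactly the kind of thing the authors mean when they say natural features of the series can be mistaken for errors and adjusted away.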
Conclusions
1. Homogenization is necessary to remove errors introduced in climatic time series.
2. Homogenization practices used until today are mainly statistical, not well justified by experiments, and are rarely supported by metadata. It can be argued that they often lead to false results: natural features of hydroclimatic time series are regarded as errors and are adjusted.
3. While homogenization is expected to increase or decrease the existing multiyear trends in equal proportions, the fact is that in 2/3 of the cases the trends increased after homogenization.
4. The above results cast some doubt on the use of homogenization procedures and tend to indicate that the global temperature increase during the last century is smaller than 0.7-0.8°C.
5. A new approach to the homogenization procedure is needed, based on experiments, metadata and better comprehension of the stochastic characteristics of hydroclimatic time series.
- Presentation at EGU meeting PPT as PDF (1071 KB)
- Abstract (35 KB)
h/t to “The Hockey Schtick” and Indur Goklany
UPDATE: The uncredited source of this on the Hockey Schtick was actually Marcel Crok’s blog here: Koutsoyiannis: temperature rise probably smaller than 0.8°C
Here’s a way to visualize the homogenization process: think of it like measuring water pollution. Below is a simple visual table of CRN station quality ratings and what they might look like as water-pollution turbidity levels, rated 1 to 5 from best to worst turbidity:
In homogenization, the data are weighted against the nearby neighbors within a radius. So a station whose data start out as a “1” might end up polluted with the data of nearby stations and take on a new value, say a weighted “2.5”. Even a single station can affect many other stations under the GISS and NOAA homogenization methods carried out on US surface temperature data here and here.
In the map above, if you apply a homogenization smoothing that weights nearby stations by distance from the stations marked with question marks, what would you expect their (turbidity) values to be? And how close would those two values come out for the east coast station and the west coast station in question? Each would be pulled toward a smoothed average value based on its neighboring stations.
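To make that blending concrete, here is a toy numerical sketch of the analogy, not the actual GISS or NOAA adjustment code: the station “turbidity” ratings, the distances, and the linear weight taper to zero at 1200 km (loosely patterned on the distance weighting used in some gridding schemes) are all made-up, illustrative values.

```python
# Toy illustration of the turbidity analogy above: a clean "1" station blended
# with poorer-rated neighbors by distance weighting. Illustrative only; the
# ratings, distances and 1200 km linear taper are assumptions, not the real
# GISS/NOAA homogenization algorithm.
def blended_rating(own_rating, neighbors, radius_km=1200.0):
    """Distance-weighted average of a station's rating with its neighbors.

    neighbors: list of (rating, distance_km) pairs. Each neighbor's weight
    tapers linearly from 1 at zero distance to 0 at radius_km; the station
    itself gets weight 1.
    """
    num, den = own_rating, 1.0
    for rating, dist in neighbors:
        w = max(0.0, 1.0 - dist / radius_km)
        num += w * rating
        den += w
    return num / den

# A well-sited "1" station surrounded by poorer "3" and "4" neighbors:
print(round(blended_rating(1.0, [(3.0, 200.0), (4.0, 350.0), (3.0, 500.0)]), 2))
# -> about 2.59: the clean station's value is pulled toward its noisier neighbors.
```

That is the sense in which a well-sited “1” station can end up effectively weighted at “2.5” purely because of the company it keeps.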
UPDATE: Steve McIntyre concurs in a new post, writing:
Finally, when reference information from nearby stations was used, artifacts at neighbor stations tend to cause adjustment errors: the “bad neighbor” problem. In this case, after adjustment, climate signals became more similar at nearby stations even when the average bias over the whole network was not reduced.

Biased homogenization results are what one would expect from biased homogenizers. E.g., Hansenko et al.
Somehow this doesn’t surprise me.
I believe the other half is the UHI effect.
Stay braced for the team of rapid response and little science, how dare they!! …sarc.
PS: How’d “BEST” miss this? (A rhetorical question.)
Statistical homogenisation as currently practiced, without experimental justification, is totally out of whack with reality. Better methods need to be developed.
And still GHCN have not explained why Arctic temperatures up to about 1960 have been adjusted downwards.
http://notalotofpeopleknowthat.wordpress.com/2012/03/11/ghcn-temperature-adjustments-affect-40-of-the-arctic/
“Conclusions”
# 2 nails it !!
Well! 🙂
That’s something for Hansen and Lubchenko to chew on! Just sent an e-mail to GISS: “Told ya!” Got ya!
Very good. Glad to see this peer-reviewed. Results exactly as expected. See my page.
Now can we look at getting Climate Science (the scientists, communicators, and politicians) to embrace the Twelve-Step Program?
Arctic and Antarctic have no markers on that global map. I’d be particularly interested in these, given the paucity and loss of Arctic stations in particular, and their extra vulnerability to bias.
But…. wasn’t this the whole point of BEST?
This provides a beautiful confirmation of what many sceptics, including myself, have long suspected. It seems that old data from many decades ago is still being ‘adjusted’, so that the overall warming trends steadily increase. If our suspicions of wrongdoing are right, then global warming really is man-made!
.
However, I don’t think it is the result of any organised conspiracy. It’s more likely to be a kind of scenario fulfillment effect, in which the results of thousands of small decisions are almost unconsciously biased by personal beliefs. In individual cases the effect would be extremely subtle and difficult to detect, but on the large scale the effect could be massive. Virtually doubling the measured amount of warming is certainly massive, and will probably cost the world trillions of dollars.
.
Does this paper have the obligatory paragraph in which the authors reaffirm their belief in the global warming religion?
Chris
Next subject: a systematic comparison between stations dropped and not dropped during the last decades of the twentieth century.
Sounds about right – half natural variation, half data corruption is my usual rule of thumb. Pity it’s taken so many years to get a paper published that says something like this.
The full presentation is excellent, beautiful, graphic, comprehensible, and full of statistics too. I hope Steve McIntyre runs a thread to get confirmation of its statistical significance. It doesn’t mention your own groundbreaking work here, but I’d like to think that recognition is implicit in the focus on things like discontinuities, instrumentation, and actual comparisons of some of the worst differences between raw and homogenized temperature graphs.
a station with data should never be homogenized … it doesn’t need to be … homogenization doesn’t reduce errors but simply averages them out over multiple stations … and why assume there are errors ? if there are then id them and toss them out otherwise assume the raw data is good … this assumption of errors is just an excuse to allow UHI to pollute nearby stations …
“Man made global warming” indeed: they made it up.
…..cue “trends are the things that matter, absolute values irrelevant…” yadda yadda. The idea that we can detect GLOBAL trends of the magnitude of the poorly documented thermometer adjustments and trees just scares the crap out of me. I’m glad to see some folks taking a good close look.
“Hansenko”! Brilliant!
Stephen, here’s some info on how it is done in the US:
http://stevengoddard.wordpress.com/2012/07/16/how-ushcn-hides-the-decline-in-us-temperatures/
“Hansenko”
Ouch! 🙂
So, if it’s only 0.4 warming and we know half (0.2) is natural, that is fully consistent with what we’ve said all along: that increased water vapor is a negative, not positive, feedback…
It’s good to see research like this (which disputes the monopolistic consensus) is being accepted and published.
The problem statement in their presentation bothered me a bit because it seemed to say that if two nearby instruments’ readings differ, then one of them must be wrong:
What if the weather was different at the two locations? But, reading further, I saw that “microclimate changes” are considered in this process:
I can offer a “micro” example of one of these microclimate changes, from the data of my NWS/CWOP station.
http://www.findu.com/cgi-bin/wxpage.cgi?call=AF4EX
On July 15 you can see a rather large and rapid drop in mid-afternoon temperatures (a 10°F decrease in 2-3 hours) caused by a small local rain shower. Yesterday (July 16) there was an even bigger drop, due to a big shower (almost 2 inches of rain).
But other stations saw it differently. Two nearby CWOP stations, CW2791 (2.4 miles) and DW1247 (12.6 miles), both reported the July 16 anomaly, but DW1247 didn’t report a big anomaly on July 15 because it didn’t report a mid-afternoon shower.
http://www.findu.com/cgi-bin/wxnear.cgi?call=AF4EX
Of course all such readings are subject to measurement error, and these CWOP stations certainly can’t claim perfection in their accuracy. But it should be clear that the large July 15 temperature anomaly at AF4EX was “real weather” and only observable within a radius of a few miles.
I believe that these mesoscale readings are important, for example, for observing and predicting squall lines, derechos and such.
Also, I don’t believe that instruments in large cities, subject to the urban-island heating effects, should be moved. They should report the heat retained and re-radiated from our planet in these warmer areas. But these readings should be weighted, with larger, cooler rural areas having more weight, to give a more accurate picture of the planetary radiation balance.
So homogenization is as much sludge factor as fudge factor. These people have no shame…
“weather station data homgenization” homgenization?
Add this to the UHI effect and it does not leave much, if any, warming trend at all.