From the “told ya so” department comes this paper, recently presented at the European Geosciences Union meeting.
Authors Steirou and Koutsoyiannis, after taking homogenization errors into account, find that global warming over the past century was only about one-half [0.42°C] of that claimed by the IPCC [0.7-0.8°C].
Here’s the part I really like: at 67% of the weather stations examined, questionable adjustments were made to raw data that resulted in:
“increased positive trends, decreased negative trends, or changed negative trends to positive,” whereas “the expected proportions would be 1/2 (50%).”
And…
“homogenization practices used until today are mainly statistical, not well justified by experiments, and are rarely supported by metadata. It can be argued that they often lead to false results: natural features of hydroclimatic time series are regarded as errors and are adjusted.”
The paper abstract and my helpful visualization of data homogenization follow:
Investigation of methods for hydroclimatic data homogenization
Steirou, E., and D. Koutsoyiannis, Investigation of methods for hydroclimatic data homogenization, European Geosciences Union General Assembly 2012, Geophysical Research Abstracts, Vol. 14, Vienna, 956-1, European Geosciences Union, 2012.
We investigate the methods used for the adjustment of inhomogeneities of temperature time series covering the last 100 years. Based on a systematic study of scientific literature, we classify and evaluate the observed inhomogeneities in historical and modern time series, as well as their adjustment methods. It turns out that these methods are mainly statistical, not well justified by experiments and are rarely supported by metadata. In many of the cases studied the proposed corrections are not even statistically significant.
From the global database GHCN-Monthly Version 2, we examine all stations containing both raw and adjusted data that satisfy certain criteria of continuity and distribution over the globe. In the United States of America, because of the large number of available stations, stations were chosen after a suitable sampling. In total we analyzed 181 stations globally. For these stations we calculated the differences between the adjusted and non-adjusted linear 100-year trends. It was found that in two-thirds of the cases, the homogenization procedure increased the positive or decreased the negative temperature trends.
One of the most common homogenization methods, ‘SNHT for single shifts’, was applied to synthetic time series with selected statistical characteristics, occasionally with offsets. The method was satisfactory when applied to independent, normally distributed data, but not to data with long-term persistence.
The above results cast some doubt on the use of homogenization procedures and tend to indicate that the global temperature increase during the last century is between 0.4°C and 0.7°C, where these two values are the estimates derived from raw and adjusted data, respectively.
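To see what the station-by-station comparison in the abstract involves, here is a minimal Python sketch (my own illustration, not the authors' code): fit a linear 100-year trend to the raw and to the adjusted series of each station, and count how often adjustment made the trend more positive. The series below are random placeholders, not GHCN data.

```python
import numpy as np

def linear_trend(years, values):
    """Least-squares slope, expressed in degrees per century."""
    return np.polyfit(years, values, 1)[0] * 100.0

rng = np.random.default_rng(42)
years = np.arange(1900, 2000)
n_stations = 181                 # number of stations analyzed in the paper

increased = 0
for _ in range(n_stations):
    # Placeholder "raw" series: a slow random walk standing in for a station record.
    raw = rng.normal(0.0, 0.5, years.size).cumsum() * 0.05
    # Placeholder "adjusted" series: raw plus unbiased random adjustments.
    adjusted = raw + rng.normal(0.0, 0.2, years.size)
    if linear_trend(years, adjusted) > linear_trend(years, raw):
        increased += 1

print(f"{increased} of {n_stations} stations show a larger trend after adjustment")
```

With unbiased placeholder adjustments like these, roughly half the trends come out larger, which is the 50% null expectation referred to above; the paper reports roughly two-thirds for the real adjustments.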
Conclusions
1. Homogenization is necessary to remove errors introduced in climatic time series.
2. Homogenization practices used until today are mainly statistical, not well justified by experiments, and are rarely supported by metadata. It can be argued that they often lead to false results: natural features of hydroclimatic time series are regarded as errors and are adjusted.
3. While homogenization is expected to increase or decrease the existing multiyear trends in equal proportions, the fact is that in 2/3 of the cases the trends increased after homogenization.
4. The above results cast some doubt on the use of homogenization procedures and tend to indicate that the global temperature increase during the last century is smaller than 0.7-0.8°C.
5. A new approach to the homogenization procedure is needed, based on experiments, metadata and better comprehension of the stochastic characteristics of hydroclimatic time series.
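For readers wondering what the ‘SNHT for single shifts’ test mentioned in the abstract actually computes, here is a minimal Python sketch of the classic Alexandersson single-shift statistic, my own illustration rather than the authors' code. The series length, the AR(1) coefficient standing in for long-term persistence, and the random seed are arbitrary choices.

```python
import numpy as np

def snht_single_shift(x):
    """Classic SNHT single-shift statistic (Alexandersson 1986).

    Returns the maximum statistic T0 and the candidate change-point index;
    a larger T0 is read as stronger evidence of a break.
    """
    n = len(x)
    z = (x - x.mean()) / x.std(ddof=1)       # standardize the series
    best_t, best_k = 0.0, None
    for k in range(1, n):                    # try every possible split point
        z1, z2 = z[:k].mean(), z[k:].mean()
        t = k * z1**2 + (n - k) * z2**2
        if t > best_t:
            best_t, best_k = t, k
    return best_t, best_k

rng = np.random.default_rng(0)

# Independent, normally distributed data: any detected "shift" is a false alarm.
white = rng.normal(size=100)

# Crude stand-in for long-term persistence: a strongly autocorrelated AR(1)
# process (the paper uses Hurst-type persistence; this is only illustrative).
persistent = np.zeros(100)
for i in range(1, 100):
    persistent[i] = 0.9 * persistent[i - 1] + rng.normal()

print("white noise T0:", round(snht_single_shift(white)[0], 1))
print("persistent  T0:", round(snht_single_shift(persistent)[0], 1))
```

On the persistent series the statistic typically comes out far larger even though no shift was inserted, which is the kind of false detection the presentation warns about.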
- Presentation at EGU meeting PPT as PDF (1071 KB)
- Abstract (35 KB)
h/t to “The Hockey Schtick” and Indur Goklany
UPDATE: The uncredited source of this on the Hockey Schtick was actually Marcel Crok’s blog here: Koutsoyiannis: temperature rise probably smaller than 0.8°C
Here’s a way to visualize the homogenization process. Think of it like measuring water pollution. Below is a simple visual table of CRN station quality ratings and what they might look like as water-pollution turbidity levels, rated 1 to 5 from best to worst:
In homogenization the data are weighted against the nearby neighbors within a radius. So a station that starts out as a “1” data-wise might end up getting polluted with the data of nearby stations and end up at a new value, say a weighted “2.5”. Even single stations can affect many other stations in the GISS and NOAA data homogenization methods carried out on US surface temperature data here and here.
In the map above, if we apply homogenization smoothing, weighting nearby stations by distance, what would you imagine the turbidity values of the stations marked with question marks would be? And how close would the value for the east coast station in question be to that of the west coast station in question? Each would be pulled toward a smoothed average value based on its neighboring stations.
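To put rough numbers on that thought experiment, here is a minimal Python sketch of inverse-distance weighting of neighbor values. The station values, distances, search radius, and the 50/50 blend are all made-up illustrative choices, not the actual GISS or NOAA procedures.

```python
# Minimal sketch of the distance-weighted "pollution by neighbors" idea
# described above. All numbers are invented for illustration.

def neighbor_average(neighbors, radius_km=1000.0):
    """Inverse-distance-weighted average of neighboring station values."""
    total, weight_sum = 0.0, 0.0
    for value, dist_km in neighbors:
        if dist_km <= radius_km:
            w = 1.0 / max(dist_km, 1.0)   # closer neighbors count more
            total += w * value
            weight_sum += w
    return total / weight_sum

def homogenized(own_value, neighbors):
    """Blend a station's own value with its neighbor average (50/50 here)."""
    return 0.5 * own_value + 0.5 * neighbor_average(neighbors)

# A "1"-rated station surrounded by poorer neighbors drifts toward them:
neighbors = [(3.0, 50.0), (4.0, 120.0), (2.0, 300.0)]
print(homogenized(1.0, neighbors))   # roughly 2.1, no longer a "1"
```

The point of the toy: the station's own “1” is pulled toward its poorer neighbors and ends up around 2, exactly the “pollution” effect described above.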
UPDATE: Steve McIntyre concurs in a new post, writing:
Finally, when reference information from nearby stations was used, artifacts at neighbor stations tend to cause adjustment errors: the “bad neighbor” problem. In this case, after adjustment, climate signals became more similar at nearby stations even when the average bias over the whole network was not reduced.
Steven Mosher:
“You just calculated the transient climate response (TCR) at 1.6.
The ECR (equilibrium climate response) is anywhere from 1.5 to 2x higher.
So if you calculate a TCR (what you did) then you better multiply by 2…
giving you 3.2 for a climate sensitivity (ECR).”
Because the Earth does not rotate, nor does it orbit the sun.
Because the stitching together of stations introduces a whole other raft of problems which were not addressed in the paper.
They have one complete set of station data end to end, no sample changes, no stitching. One continuous record with consistent sampling.
So, they are then narrowly presenting the consequence of that one issue, and not conflating it with a hundred and one other side issues simultaneously.
Nick Stokes says:
And only six of the stations in the diagram are north of 60 degrees latitude, where most of the warming is happening.
Werner Brozek says:
July 17, 2012 at 11:42 am
John Finn says:
July 17, 2012 at 10:00 am
Since UAH satellite temperatures show an increase of ~0.4 deg over the past 30 years, ALL warming over the past century must have been since the 1970s.
mikef2 says:
July 17, 2012 at 11:17 am
Funny thing though, is we have those same surface records showing half a degree swings in the early and mid part of last century….as you say, without any CO2 input, so natural variation can swing 0.4C.
I agree with Mike. They say a picture is worth a thousand words. Here is the ‘picture’.
http://www.woodfortrees.org/plot/hadcrut3gl/from:1900/plot/hadcrut3gl/from:1912.33/to:1942.33/trend/plot/hadcrut3gl
I think you may be missing the point. The study discussed concludes that the amount of warming over the past century is 0.42 deg. However, the UAH satellite record tells us that LT has warmed ~0.14 deg per decade. Coincidentally, this gives a total warming of ~0.42 over 30 years. In other words, there was little or no warming in the period between 1900 and 1980. There was, therefore, no continuation of the warming from the LIA. Higher solar activity in the middle of the 20th century had no discernible effect on temperature compared to the period of lower activity in 1900 or thereabouts.
I note you’ve chosen to use the Hadley land surface record (1912-1942) to illustrate your point, but surely this data is contaminated. The trend in the data is, therefore, presumably an artifact of UHI or similar.
In summary:
The warming over the past century is 0.42 deg (according to the study)
According to UAH the warming since ~1980 is ~0.42 deg.
CONCLUSION: There was no warming between 1900 and 1980, i.e. the trend was flat.
There are other, non-mathematical, explanations for “record highs” …
(Climategate I – Wigley to Jones, Subject – 1940s)
I always come back to this exchange when considering the various temperature series (and the integrity of “scientists” generally).
At no point during the [e-mail] exchange does any recipient suggest that altering [inconvenient] data is not something a reputable “Scientist” should be involved in. They are simply worried about being caught out by not altering other data sets.
Getting rid of “the blip” would of course silence those “deniers” [2012 is the hottest year in some state since records began (and since we reduced the past by 0.15 deg C)] but I’m sure that was never a consideration. Odd though that the period (30s/40s) keeps on getting flatter with each iteration – why, it’s almost like the MWP and all the other “inconvenient messages” that just disappear.
“Dr. Richard Muller and BEST, please take note prior to publishing your upcoming paper”
Muller already has what he needed – mounds of “climate cash” flowing into Berkeley (which had been missing out big time when compared to its rivals). I’m sure there will soon be a Berkeley chair with his arse print on it.
Steven Mosher wrote:
“Of course I’ll bring this up today at our BEST meeting. Dr. K is an old favorite. That said, since we don’t use homogenized data but use raw data instead, I’m not sure what the point will be”
That’s a lie, a big lie, and you make terrible PR for BEST, Mr. Mosher.
http://www.ecowho.com/foia.php?file=4427.txt&search=nordli
“Dear Phil Jones,
The homogenisation of the Bergen temp. series is now completed since
1876. Some adjustments are applied to the data.
Our intention have not been to remove urban heat island effects. However,
these are not too large. Compared to a group of rural stations (two
lighthouse stations also included), the series seems to be biased about 0.2
deg. in the time interval 1876 – 1995.
Before 1876 Birkeland’s homogenisation of the series is maintained.
The whole series follows in a separated file in the “standard NACD-format”.
Best regards
Oyvind Nordli”
Phil Jones chose to adjust the UHI-effect on this homogenized series for Bergen the wrong way. You can check all this against your beloved child BEST. I’m sure you will tell us about what you find…
When you’re done with not doing that, it will be my pleasure to give you the raw data for all the different series of Bergen, Norway.
The URL that you provided will work if you replace the hyphen with an underscore, like so:
climate.geog.udel.edu/~climate/html_pages/Ghcn2_images/air_loc.mpg
That’s also how it’s recorded in 2005 at archive.org
Steven Mosher:
“Situation: We have a station named Mount Molehill. It is located at 3000 meters above sea level. It records nice cool temperatures from 1900 to 1980. Then in 1981 they decide to relocate the station to the base of Mount Molehill 5 km away. Mount Molehill suddenly becomes much warmer.
But won’t they rename the station? Nope! They may very well keep the station name the same.”
Isn’t the obvious solution to this to treat these as two entirely different stations, with their own unique and non-overlapping temperatures? That way, one can look at the anomalies in each station as a way of describing temperature trends, rather than accepting the second and “adjusting” the first to create one seamless set of station data. I mean, the truth is that they are two separate stations, not one. So why not simply use their data that way, and thus eliminate any confusion?
And why not do this every time the station is significantly changed, using new instruments, etc.? It seems a lot more honest and requires less changing of actual data.
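Here is a minimal Python sketch of that approach, with invented numbers (a hypothetical 1981 move producing a 2-degree step): split the record at the documented move, convert each segment to anomalies about its own mean, and fit trends per segment instead of through the spliced series.

```python
import numpy as np

# Sketch of the "treat a moved station as two stations" idea above.
# Years, values, and the 1981 split are hypothetical, not real data.
rng = np.random.default_rng(1)
years = np.arange(1950, 2011)
temps = 8.0 + 0.01 * (years - 1950) + rng.normal(0, 0.3, years.size)
temps[years >= 1981] += 2.0          # artificial warm step from the move

def segment_anomalies(years, temps, split_year):
    """Split at the move; express each piece as anomalies from its own mean."""
    pieces = []
    for mask in (years < split_year, years >= split_year):
        pieces.append((years[mask], temps[mask] - temps[mask].mean()))
    return pieces

# Trends fitted to each segment are unaffected by the 2-degree step,
# whereas a single trend through the spliced record is badly inflated.
for yrs, anoms in segment_anomalies(years, temps, 1981):
    print(f"{yrs[0]}-{yrs[-1]}: {np.polyfit(yrs, anoms, 1)[0] * 10:+.3f} deg/decade")
print(f"spliced record: {np.polyfit(years, temps, 1)[0] * 10:+.3f} deg/decade")
```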
One point that seems to be missed here is that in their presentation the authors are not claiming that the true temperature rise is only 0.4C. They seem to be saying that’s only what the raw, unadjusted data shows. They are saying that the true temp rise is somewhere between that, at the low end, and the 0.7-0.8C rise being claimed after adjustments. The implication is that these adjustments have to be scrupulously reviewed and justified, and after that review a new record including all appropriate adjustments should be generated. That’s of course what BEST was supposed to do. Perhaps these guys can assemble a team to accomplish that task. Hope they can find funding for the project.
I am amazed to see the “No global warming” crowd “warming” to the idea of global warming; this report is sort of like being half pregnant! Congrats on stepping into the real world. I am afraid the ground truth is leaving you all behind.
Nick: “I got 0.66 ° per century.”
F or C?
I’ve been looking at the Washington State raw daily data, and I drop everything with a Quality flag = blank.
What filtering did you do?
I get 0.0032F/decade from 1895 to 2011. NOAA gets 0.05F/decade for the same period.
The interesting thing is the monthly means are all over the place.
January 0.294
February 0.284
March 0.065
April -0.236
May -0.135
June -0.156
July -0.115
August -0.013
September 0.206
October -0.09
November -0.036
December -0.029
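For anyone wanting to reproduce that kind of per-month breakdown, here is a minimal Python sketch under stated assumptions: a GHCN-Daily-style CSV with hypothetical column names DATE, TMAX and QFLAG, keeping only records with no quality flag (the conventional choice; the commenter's own filtering may differ). It is not the commenter's actual code.

```python
import numpy as np
import pandas as pd

def monthly_trends(csv_path):
    """Return each calendar month's least-squares trend in degrees per decade."""
    df = pd.read_csv(csv_path, parse_dates=["DATE"])
    df = df[df["QFLAG"].isna()]                    # keep only unflagged observations
    df["year"] = df["DATE"].dt.year
    df["month"] = df["DATE"].dt.month

    # Mean for each (month, year), then a straight-line fit per calendar month.
    monthly = df.groupby(["month", "year"])["TMAX"].mean().reset_index()
    trends = {}
    for month, grp in monthly.groupby("month"):
        slope_per_year = np.polyfit(grp["year"], grp["TMAX"], 1)[0]
        trends[month] = slope_per_year * 10.0
    return trends
```

Calling monthly_trends("wa_daily.csv") on such a file would return a dictionary where trends[1] is January's trend, trends[2] February's, and so on.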
Weather stations can only report on the conditions of the local micro-climate, which may change gradually or suddenly over time. We have no control over the siting, observation practice, accuracy, recording, conversion, digitising, quality control etc., especially in the past. Nor can we have great faith in such matters today. To rely on even “raw” data (let alone homogenised) to create a climate record is pie in the sky.
In Australia, we have demonstrated that the temperature record is so abysmally poor that it should not be used at all. I have shown that adjustments to annual temperature create a 40% warming bias in the national record. Yet the Bureau of Meteorology insists that adjustments are “neutral”. 23 of the 104 sites in the new ACORN dataset have no neighbours within 100 km; some have no neighbours within 400 km. Alice Springs is one. It contributes about 7% of the national signal on its own because of its remoteness, and has had a huge warming adjustment.
Whether or not this article is properly peer reviewed, it rings true to me.
Ken
Re: Ben U 4:28 PM
Thank you so much.
I’ve been waiting for a good opportunity to show this chart.
It is the temperature trend for the US from UAH lower troposphere and from USHCN V2 since 1979.
First thing to note is that they are extremely similar. I don’t know if we would have expected this, but the UAH satellite record appears to be accurate enough even on the small scale of the US.
But most importantly, the USHCN V2 trend is 27.0% higher than UAH. It is really supposed to be the other way around according to the theory.
The lower troposphere is supposed to be warming at a faster rate than the surface, particularly in the Tropics where it is supposed to be 27.3% higher according to the climate models, but also extending to mid-latitudes like the US.
So, there is something like a 27% to a 50% error in USHCN V2 since 1979 according to the UAH lower troposphere measurements.
http://img339.imageshack.us/img339/4647/usuahvsushcnv2june2012.png
Every time we revisit this subject I just imagine the dedicated person in the 1930s trudging out to the station every day and recording the temperature, squinting to get that last tenth of a degree and wondering what they would think if they knew 80 years later some armchair climatologist was going to “adjust” their reading up three degrees.
John Finn: ” CONCLUSION: There was no warming between 1900 and 1980, i.e. the trend was flat.”
Not flat. Up and down.
1900 -0.225
1944 0.121
1976 -0.255
http://www.cru.uea.ac.uk/cru/data/temperature/hadcrut3gl.txt
As brokenyogi pointed out, the study concluded that the warming is between 0.4 and 0.7.
If the warming from adjustment was 100% in error (i.e. a correct adjustment would have resulted in zero additional warming) then the conclusion is that there has been warming since 1980, and there was similar warming from 1900 to 1940, negated by cooling from 1940 to 1980. Just drawing a trend line through 1900-1980 glosses over the up and down nature of the temperature history.
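A minimal Python sketch of that point, using only the three annual anomalies quoted above and a straight-line interpolation between them (purely illustrative, not the real HadCRUT3 series): the single 1900-1980 trend through this toy series comes out near zero even though both sub-periods have substantial trends.

```python
import numpy as np

# The three anchor values are those quoted in the comment above; the
# interpolation between them is an illustrative stand-in for the series.
years = np.arange(1900, 1981)
series = np.interp(years, [1900, 1944, 1976, 1980], [-0.225, 0.121, -0.255, -0.255])

def trend_per_century(y0, y1):
    mask = (years >= y0) & (years <= y1)
    return np.polyfit(years[mask], series[mask], 1)[0] * 100

print(f"1900-1980: {trend_per_century(1900, 1980):+.2f} deg/century")
print(f"1900-1944: {trend_per_century(1900, 1944):+.2f} deg/century")
print(f"1944-1976: {trend_per_century(1944, 1976):+.2f} deg/century")
```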
Reblogged this on The GOLDEN RULE and commented:
More climate science, as distinct from climate pseudo-science!
John Finn says:
July 17, 2012 at 3:36 pm
I note you’ve chosen to use the Hadley land surface record (1912-1942) to illustrate your point, but surely this data is contaminated….CONCLUSION: There was no warming between 1900 and 1980, i.e. the trend was flat.
Actually Hadcrut3 is both land and water. Of course I could not use the satellite data since it does not go back that far. However, the trend according to Hadcrut3 is 0.00528699 deg per year between 1900 and 1980. See:
http://www.woodfortrees.org/plot/hadcrut3gl/from:1900/to:1980/plot/hadcrut3gl/from:1900/to:1980/trend
So we will have to come to some other conclusion.
Soooooo lemme get this right: 25-50% of climate change is due to solar variation, 25-50% is due to ENSO/AMO/PDO/NAO variations, and now 50% is due to homogenization effects. That means 100-150% of warming is now accounted for, and its not CO2, which means it must be cooling.
As I read it, this heading is not quite accurate. The abstract states:
“The above results cast some doubt on the use of homogenization procedures and tend to indicate that the global temperature increase during the last century is between 0.4°C and 0.7°C, where these two values are the estimates derived from raw and adjusted data, respectively.”
I do not read the paper as extending the results from the sample of 163 GHCNM v2 stations (lower estimate of 0.42C from the raw data, higher estimate of 0.76C from the adjusted data) to a claim that half the warming results from homogenization – the abstract merely suggests that the warming is likely to lie somewhere between that indicated by the raw data and that indicated by the adjusted data.
Trevor says:
July 17, 2012 at 4:46 pm
I am amazed to see the “No global warming” crowd “warming” to the idea of global warming,….
____________________________________
Of course there is “Global Warming”; we are in an interglacial. If there wasn’t “Global Warming”, NYC would be under a mile of ice.
Here is a graph of temperature and CO2 over the past 400 thousand years. Note that the earth warms and cools, and the present increase in CO2 (on the right) hasn’t caused “CAGW”. Actually, the CO2 skyrockets while the temperatures are MORE stable than in the other four interglacials.
Also see:
Lesson from the past: present insolation minimum holds potential for glacial inception, by Ulrich C. Müller and Jörg Pross, Institute of Geosciences, University of Frankfurt
In Defense of Milankovitch by Gerard Roe, Department of Earth and Space Sciences, University of Washington, Seattle, WA, USA
Article on above: http://motls.blogspot.com/2010/07/in-defense-of-milankovitch-by-gerard.html
http://wattsupwiththat.com/2012/03/16/the-end-holocene-or-how-to-make-out-like-a-madoff-climate-change-insurer/
People here at WUWT are just not hung-up on CO2 as the “control knob” for the climate. I at least consider CO2 a life giving gas that was dwindling to a dangerously low amount in the atmosphere. The recent evolution of C4 and CAM photosynthesis and the current GREENING of the biosphere show just how bad the situation was getting.
Or there is this paper in the Proceedings of the National Academy of Sciences: Carbon starvation in glacial trees recovered from the La Brea tar pits, southern California, by Joy K. Ward, John M. Harris, Thure E. Cerling, Alex Wiedenhoeft, Michael J. Lott, Maria-Denise Dearing, Joan B. Coltrain, and James R. Ehleringer…
Since there is no paper, only an abstract to a presentation, I don’t see how people can judge the validity of the claims. Perhaps someone here will file a freedom of information request to get a copy of the paper.
A question. Are record highs and lows also homogenized?