New paper blames about half of global warming on weather station data homogenization

From the told-ya-so department comes this recently presented paper at the European Geosciences Union meeting.

Authors Steirou and Koutsoyiannis, after taking homogenization errors into account, find global warming over the past century was only about one-half [0.42°C] of that claimed by the IPCC [0.7-0.8°C].

Here’s the part I really like: for 67% of the weather stations examined, questionable adjustments were made to the raw data that resulted in:

“increased positive trends, decreased negative trends, or changed negative trends to positive,” whereas “the expected proportions would be 1/2 (50%).”

And…

“homogenization practices used until today are mainly statistical, not well justified by experiments, and are rarely supported by metadata. It can be argued that they often lead to false results: natural features of hydroclimatic time series are regarded as errors and are adjusted.”

The paper abstract and my helpful visualization of data homogenization follow:

Investigation of methods for hydroclimatic data homogenization

Steirou, E., and D. Koutsoyiannis, Investigation of methods for hydroclimatic data homogenization, European Geosciences Union General Assembly 2012, Geophysical Research Abstracts, Vol. 14, Vienna, 956-1, European Geosciences Union, 2012.

We investigate the methods used for the adjustment of inhomogeneities of temperature time series covering the last 100 years. Based on a systematic study of scientific literature, we classify and evaluate the observed inhomogeneities in historical and modern time series, as well as their adjustment methods. It turns out that these methods are mainly statistical, not well justified by experiments and are rarely supported by metadata. In many of the cases studied the proposed corrections are not even statistically significant.

From the global database GHCN-Monthly Version 2, we examine all stations containing both raw and adjusted data that satisfy certain criteria of continuity and distribution over the globe. In the United States of America, because of the large number of available stations, stations were chosen after a suitable sampling. In total we analyzed 181 stations globally. For these stations we calculated the differences between the adjusted and non-adjusted linear 100-year trends. It was found that in the two thirds of the cases, the homogenization procedure increased the positive or decreased the negative temperature trends.

One of the most common homogenization methods, ‘SNHT for single shifts’, was applied to synthetic time series with selected statistical characteristics, occasionally with offsets. The method was satisfactory when applied to independent data normally distributed, but not in data with long-term persistence.

The above results cast some doubts in the use of homogenization procedures and tend to indicate that the global temperature increase during the last century is between 0.4°C and 0.7°C, where these two values are the estimates derived from raw and adjusted data, respectively.

Conclusions

1. Homogenization is necessary to remove errors introduced in climatic time series.

2. Homogenization practices used until today are mainly statistical, not well justified by experiments and are rarely supported by metadata. It can be argued that they often lead to false results: natural features of hydroclimatic time series are regarded as errors and are adjusted.

3. While homogenization is expected to increase or decrease the existing multiyear trends in equal proportions, the fact is that in 2/3 of the cases the trends increased after homogenization.

4. The above results cast some doubts in the use of homogenization procedures and tend to indicate that the global temperature increase during the last century is smaller than 0.7-0.8°C.

5. A new approach of the homogenization procedure is needed, based on experiments, metadata and better comprehension of the stochastic characteristics of hydroclimatic time series.

PDF Full text:

h/t to “The Hockey Schtick” and Indur Goklany

UPDATE: The uncredited source of this on the Hockey Schtick was actually Marcel Crok’s blog here: Koutsoyiannis: temperature rise probably smaller than 0.8°C

 =============================================================

Here’s a way to visualize the homogenization process. Think of it like measuring water pollution. Here’s a simple visual table of CRN station quality ratings and what they might look like as water pollution turbidity levels, rated as 1 to 5 from best to worst turbidity:

[Images: five bowls of water representing CRN station quality ratings 1 through 5, from clear (CRN1) to most turbid (CRN5).]

In homogenization the data is weighted against the nearby neighbors within a radius. So a station whose data starts out as a “1” might end up getting polluted with the data of nearby stations and be assigned a new weighted value, say “2.5”. Even single stations can affect many other stations in the GISS and NOAA data homogenization methods carried out on US surface temperature data here and here.
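
To make the idea concrete, here is a minimal sketch (in Python, with made-up station ratings, distances and a 50/50 blending weight; it illustrates generic inverse-distance weighting, not the actual GISS/NOAA algorithm):

```python
# Toy illustration of distance-weighted blending ("homogenization" in the loose
# sense used above). Station "values" stand in for the turbidity ratings in the
# bowls; all numbers are hypothetical.

def homogenize(target_value, neighbors, alpha=0.5, power=2):
    """Blend a station's own value with an inverse-distance-weighted
    average of its neighbors. `neighbors` is a list of (value, distance_km)."""
    weights = [1.0 / (d ** power) for _, d in neighbors]
    neighbor_avg = sum(w * v for w, (v, _) in zip(weights, neighbors)) / sum(weights)
    # alpha is the weight given to the station's own record
    return alpha * target_value + (1 - alpha) * neighbor_avg

# A pristine "1" station surrounded by poorer neighbors rated 3, 4 and 5:
neighbors = [(3, 40.0), (4, 60.0), (5, 90.0)]
print(round(homogenize(1.0, neighbors), 2))  # ~2.26, so the "1" now looks like a "2.5"-ish station
```

With a 50/50 blend the clean station inherits roughly half of its neighbors' values, which is the kind of drift the bowl images are meant to picture.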

[Image: a map of US stations shown as turbidity bowls, with one east coast and one west coast station marked with question marks.]

In the map above, if you apply homogenization smoothing, weighting the nearby stations by distance, what would you imagine the values (of turbidity) of the two stations with question marks would be? And how close would those two values be, for the east coast station in question and the west coast station in question? Each would end up closer to a smoothed average value based on its neighboring stations.

UPDATE: Steve McIntyre concurs in a new post, writing:

Finally, when reference information from nearby stations was used, artifacts at neighbor stations tend to cause adjustment errors: the “bad neighbor” problem. In this case, after adjustment, climate signals became more similar at nearby stations even when the average bias over the whole network was not reduced.

224 Comments
rogerknights
July 17, 2012 3:49 am

Biased homogenization results are what one would expect from biased homogenizers. E.g., Hansenko et al.

Eyal Porat
July 17, 2012 3:51 am

Somehow this doesn’t surprise me.
I believe the other half is the UHI effect.

KenB
July 17, 2012 3:54 am

Stay braced for the team of rapid response and little science, how dare they!! …sarc.

rogerknights
July 17, 2012 3:56 am

PS: How’d “BEST” miss this? (A rhetorical question.)

Venter
July 17, 2012 4:12 am

Statistical homogenisation as practiced currently, without experimental justification, is totally out of whack with reality. Better methods need to be developed.

Editor
July 17, 2012 4:13 am

And still GHCN have not explained why Arctic temperatures up to about 1960 have been adjusted downwards.
http://notalotofpeopleknowthat.wordpress.com/2012/03/11/ghcn-temperature-adjustments-affect-40-of-the-arctic/

July 17, 2012 4:22 am

“Conclusions”
# 2 nails it !!

Slabadang
July 17, 2012 4:24 am

Well! 🙂
That’s something for Hansen and Lubchenko to chew on! Just sent an e-mail to GISS: “Told ya”! Got Ya!

Lucy Skywalker
July 17, 2012 4:25 am

Very good. Glad to see this peer-reviewed. Results exactly as expected. See my page.
Now can we look at getting Climate Science (the scientists, communicators, and politicians) to embrace the Twelve-Step Program?

July 17, 2012 4:28 am

Arctic and Antarctic have no markers on that global map. I’d be particularly interested in these, given the paucity and loss of Arctic stations in particular, and their extra vulnerability to bias.

steveta_uk
July 17, 2012 4:28 am

But…. wasn’t this the whole point of BEST?

Chris Wright
July 17, 2012 4:31 am

This provides a beautiful confirmation of what many sceptics, including myself, have long suspected. It seems that old data from many decades ago is still being ‘adjusted’, so that the overall warming trends steadily increase. If our suspicions of wrongdoing are right, then global warming really is man-made!
.
However, I don’t think it is the result of any organised conspiracy. It’s more likely to be a kind of scenario fulfillment effect, in which the results of thousands of small decisions are almost unconsciously biased by personal beliefs. In individual cases the effect would be extremely subtle and difficult to detect, but on the large scale the effect could be massive. Virtually doubling the measured amount of warming is certainly massive, and will probably cost the world trillions of dollars.
.
Does this paper have the obligatory paragraph in which the authors reaffirm their belief in the global warming religion?
Chris

Mindert Eiting
July 17, 2012 4:33 am

Next subject: a systematic comparison between stations dropped and not dropped during the last decades of the twentieth century.

Steve C
July 17, 2012 4:39 am

Sounds about right – half natural variation, half data corruption is my usual rule of thumb. Pity it’s taken so many years to get a paper published to say something like this.

July 17, 2012 4:43 am

The full presentation is excellent, beautiful, graphic, comprehensible, and full of statistics too. I hope Steve McIntyre runs a thread to get confirmation of its statistical significance. It doesn’t mention your own groundbreaking work here, but I’d like to think that recognition is implicit in the focus on things like discontinuities, instrumentation, and actual comparisons of some of the worst differences between raw and homogenized temperature graphs.

JeffC
July 17, 2012 4:52 am

a station with data should never be homogenized … it doesn’t need to be … homogenization doesn’t reduce errors but simply averages them out over multiple stations … and why assume there are errors ? if there are then id them and toss them out otherwise assume the raw data is good … this assumption of errors is just an excuse to allow UHI to pollute nearby stations …

July 17, 2012 4:54 am

“Man made global warming” indeed: they made it up.

LearDog
July 17, 2012 4:54 am

…..cue “trends are the things that matter, absolute values irrelevant…” yadda yadda. The idea that we can detect GLOBAL trends of the magnitude of the poorly documented thermometer adjustments and trees just scares the crap out of me. I’m glad to see some folks taking a good close look.

Brent Hargreaves
July 17, 2012 5:07 am

“Hansenko”! Brilliant!

John Silver
July 17, 2012 5:09 am
Jason Calley
July 17, 2012 5:11 am

“Hansenko”
Ouch! 🙂

MattN
July 17, 2012 5:15 am

So, if it’s only .4 warming and we know half (.2) is natural, that is fully consistent with what we’ve said all along, that increased water vapor is a negative, not positive, feedback…

John Day
July 17, 2012 5:22 am

It’s good to see that research like this (which disputes the monopolistic consensus) is being accepted and published.
The problem statement in their presentation bothered me a bit because it seemed to say that if two nearby instruments’ readings differ, then one of them must be wrong:

The problem
● Historical and contemporary climatic time series contain inhomogeneities –
errors introduced by changes of instruments, location etc.

What if the weather was different at the two locations? But, reading further, I saw that “microclimate changes” are considered in this process:

Homogenization methods do not take into consideration some characteristics of
hydroclimatic data (long-term persistence, microclimatic changes, time lags).

I can offer a “micro” example of one of these microclimate changes, from the data of my NWS/CWOP station.
http://www.findu.com/cgi-bin/wxpage.cgi?call=AF4EX
On July 15 you can see a rather large and rapid drop in mid-afternoon temperatures (a 10°F decrease in 2-3 hours) caused by a small local rain shower. Yesterday (July 16) there was an even bigger drop, due to a big shower (almost 2 inches of rain).
But other stations saw it differently. Two nearby CWOP stations, CW2791 (2.4 miles) and DW1247 (12.6 miles), both reported the July 16 anomaly, but DW1247 didn’t report a big anomaly on July 15 because it didn’t report a mid-afternoon shower.
http://www.findu.com/cgi-bin/wxnear.cgi?call=AF4EX
Of course all such readings are subject to measurement error, and these CWOP stations certainly can’t claim perfection in their accuracy. But it should be clear that the large July 15 temperature anomaly at AF4EX was “real weather” and only observable within a radius of a few miles.
I believe that these mesoscale readings are important, for example, for observing and predicting squall lines, derechos and such.
Also, I don’t believe that instruments in large cities, subject to the urban-island heating effects, should be moved. They should report the heat retained and re-radiated from our planet from these warmer areas. But these readings should be weighted, with larger, cooler rural areas having more weight, to give a more accurate picture of the planetary radiation balance.

UK Sceptic
July 17, 2012 5:23 am

So homogenization is as much sludge factor as fudge factor. These people have no shame…

Disko Troop
July 17, 2012 5:24 am

“weather station data homgenization” homgenization?
Add this to the UHI effect and it does not leave much, if any, warming trend at all.

rilfeld
July 17, 2012 5:25 am

“science” is now in the process of whitewashing cupidity as unfortunate but unintentional statistical process errors; understandable mistakes by well-meaning and honorable people. We’re moving from ‘hide the decline’ to ‘rehab mine’.

cba
July 17, 2012 5:27 am

Seems like there are paper(s) ascribing about 40-50% of the original increase value to natural variation. Now is that 50% of the total – whatever that might be – or is that about 0.4 deg C? If it’s the 0.4 deg C, then aren’t we just about out of warming? Thermostat controls work by allowing small variations in the controlled variable to set the values of the control variables. Because of this, and because of the fact that there should be some temperature rise caused by CO2 concentration, there should be something left over in the way of an increased temperature. Otherwise, we are headed in the wrong direction and in for the truly serious consequences of cooling.

July 17, 2012 5:38 am

1. Homogenization is necessary to remove errors introduced in climatic time
series.
No it isn’t. Nor does it.
William Briggs on the topic.
http://wmbriggs.com/blog/?p=1459

Editor
July 17, 2012 5:39 am

If I understand the IPCC website correctly, this paper is in time to be considered in AR5. Someone needs to make sure it is submitted to the IPCC. They can easily ignore it, of course, as they have ignored everything that doesn’t fit the pre-written Summary for Policymakers, but at least it needs to be put in front of them.
Cut-Off Dates for literature to be considered for AR5
Updated 17 January 2012
Working Group I – 31 July 2012 Papers submitted – 15 March 2013 Papers accepted
Working Group II – 31 January 2013 Papers submitted – 31 August 2013 Papers accepted
Working Group III – 31 January 2013 Papers submitted – 3 October 2013 Papers accepted
http://www.ipcc.ch/pdf/ar5/ar5-cut-off-dates.pdf

Bill Marsh
July 17, 2012 5:43 am

Isn’t 30% of the past century’s warming ascribed to solar effects? Seems like there isn’t much CO2 induced warming at all given that we are now better than halfway to a doubling in CO2 and the logarithmic nature of the CO2 effect means that the majority of the warming associated with CO2 increase has already occurred.

rgbatduke
July 17, 2012 5:57 am

Makes me feel all warm and fuzzy. Recall that I recently pointed out that one perfectly reasonable interpretation of the recent “run” of 13 months in the top 1/3 of all months in the record, warming trend or not, is that the data is biased! That is, a very small p-value argues against the null hypothesis of unbiased data. That is all the more the case the more unlikely the result is made, so the more “unusual” it is — especially in the absence of anything otherwise unusual about the weather or climate — the more the p-value can be interpreted as evidence that something in your description of the data is horribly wrong.
At that time I thought about the temperature series and almost did a similar meta-analysis (but was too lazy to run down the data) on a different thing — GISS and Hadley have, IIRC, made a number of adjustments over the years to the algorithm (secret sauce) that “cooks” the raw thermometric data into their temperature series. I fully admit to having little but anecdotal recollection of a few of them, mostly from reading about them on WUWT. Anthony has duly noted that somehow, they always seem to warm the present and cool the past, on average.
Let’s say there have been 8 of those adjustments, and all of them warmed the present and cooled the past to the point where they quantitatively increased the rate of warming over the entire dataset but especially the modern era. There is no good reason to think that the thermometers used back in the 30’s were systematically biased towards cold temperature — the default assumption is (as always with statistics) that the error in thermometric readings from the past with the exception of cases that are clearly absurd or defective series (the one kept by the alcoholic postmaster of Back-of-the-Woods, Idaho, for example, back in the 1890s) is unbiased and indeed, probably normal on the true reading. In particular, it is as likely to be a positive error as a negative one, so the best thing to do is not adjust it at all.
In this case the probability that any sort of data correction will produce net warming — even of a little tiny bit — should be fifty-fifty. This is the null hypothesis in hypothesis testing theory: If the corrections are unbiased, they have an even chance of resulting in net warming.
Next one computes the probability of getting 8 warmings in a row, given the null hypothesis: 1/2^8 = 1/256 = 0.004. This is the probability of getting the result from unbiased corrections. Even if there have been only 4 corrections (all net warming) it is 1/16. It’s like flipping a coin and getting 4, or 8, heads in a row.
In actual hypothesis testing, most people provisionally reject the null hypothesis at some cutoff. Frequently the cutoff used is 0.05, although I personally think this is absurdly high — one in twenty chances happen all the time (well, one in twenty times, but that’s not that infrequently). 4 in a thousand chances not so often — I’d definitely provisionally reject the null hypothesis and investigate further upon 8 heads in the first 8 flips of a supposedly unbiased coin — that coin would have to behave very randomly (after careful scrutiny to make sure it didn’t have two heads or a magnet built into it) in future trials or betting that it is unbiased is a mug’s game.
TFA above has extended the idea to the actual microcorrections in the data itself, where of course it is far more powerful. Suppose we have 3000 coin flips and 2/3 of them (2000) turn out to be heads. The null hypothesis is that the coin is unbiased, heads/tails are equally likely p = 0.5. The distribution of outcomes in this case is the venerable binomial distribution, and I don’t have to do the actual computation to know the result (because I do random number generator testing and this one is easy). The p-value — probability of getting the result given the null hypothesis is zero. The variance is np(1-p) or 3000/4 = 750. The square root of 750 is roughly 27. The observation is 500 over the mean with n = 3000. 500/27 = 18, give or take. This is an 18 sigma event — probability a number close enough to zero I’d get bored typing the leading zeros before reaching the first nonzero digit.
Now, I have no idea whether or not there are 3000 weather stations in their sample above, but suppose there are only 300. 300/4 = 75, \sigma \approx \sqrt{75} \approx 8, \Delta x = 50, \Delta x / \sigma \approx 6. The probability of a six sigma event is still — wait for it — zero. I wouldn’t get bored, exactly, writing the zeros, but the result is still in the not-a-snowball’s-chance-in-hell-it’s-unbiased range.
Visibly, it looks like there could easily be 300 dots in the map above, and frankly, who would trust a coin to be unbiased if only 100 flips produced 2/3 heads? One is finally down to only a 3 sigma event, sorta like 8 heads in a row in 8 flips, p-value of well under 1 in 100. I don’t know Steirou, but Koutsoyiannis is, I predict, going to be the worst nightmare of the keepers of the data, quite possibly eclipsing even Steven McIntyre. The guy is brilliant, and the paper sounds like it is going to be absolutely devastating. It will be very interesting to be a fly on the proverbial wall and watch the adjusters of the data formally try to explain why the p-value for their adjustments isn’t really zero, but is rather something in the range 0.05-1.00, something not entirely unreasonable. Did our drunken postmaster weathermen in the 1920s all hang their thermometers upside down and hence introduce a gravity-based bias in their measurements? Silly them. Is the modern UHI adjustment systematically positive? I never realized that cities and airports produce so much cooling that we have to adjust their temperatures up.
Or, more likely, it is simply something that Anthony has pointed out many times. The modern UHI adjustments are systematically too low and this is biasing the trends substantially. One could even interpret the results of TFA as being fairly solid proof that this is the case.
rgb
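
For readers who want to reproduce rgb's back-of-the-envelope numbers, here is a rough sketch in Python (using the hypothetical counts from the comment: runs of 4 or 8 adjustments, and 2/3 "heads" out of 100, 300 or 3000 stations, with a normal approximation to the binomial):

```python
from math import sqrt

def p_all_heads(k):
    """Chance of k warming adjustments in a row under an unbiased (p = 0.5) null."""
    return 0.5 ** k

def sigma_excess(n, frac, p=0.5):
    """Approximate z-score ("sigma") for observing a fraction `frac` of heads
    in n trials under the p = 0.5 null (normal approximation to the binomial)."""
    mean = n * p
    sd = sqrt(n * p * (1 - p))
    return (n * frac - mean) / sd

print(p_all_heads(4), p_all_heads(8))     # 0.0625 (1/16) and ~0.0039 (1/256)
print(round(sigma_excess(3000, 2/3), 1))  # ~18 sigma
print(round(sigma_excess(300, 2/3), 1))   # ~5.8 sigma
print(round(sigma_excess(100, 2/3), 1))   # ~3.3 sigma
```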

Pamela Gray
July 17, 2012 6:18 am

Interesting observation happening in NE Oregon. The noon temps have been warm to hot, but less than an hour later have dropped like a stone. Why? Thunderstorms. And we have at least two weeks of them. So was the whole day hot? Yes according to noon temps on old sensors. No according to sensors that average the temps together.
Which leads me to another issue. Those record temps are nonsensical. A moment at 107 degrees now may be compared to who knows how many hours at 107 degrees back then, or vice versa, and be called equal. Really? How was the peak temp measured way back when, compared to the peak temps of today? Are we comparing two different types of records? If so, why would we be calling the record temps this year “new records” that take the place of the old records set years and decades ago?

Jason Lewis
July 17, 2012 6:21 am

If I remember correctly, some of the alarmists were saying that solar forcing could only account for about one third of the warming seen this century. If one third is solar, and one half is an artifact, then what’s left for the anthropogenic component? This was assuming that there was zero internal forcing from ocean currents. What’s the latest estimate from the alarmists for the solar component?

David
July 17, 2012 6:22 am

Here is ushcn USA only adjustments in a blink chart…
http://stevengoddard.wordpress.com/2012/07/16/how-ushcn-hides-the-decline-in-us-temperatures/
Mann made GW for certain.

Skeptikal
July 17, 2012 6:24 am

The data doesn’t need homogenization. If one location is hotter or colder than a neighbouring location, that’s weather. Raw data is the only data that’s worth anything. Once you bend the data out of shape, it becomes worthless.

July 17, 2012 6:29 am

Isn’t it about time that the homogenizers are pasteurized?

H.R.
July 17, 2012 6:32 am

From the post:
“Authors Steirou and Koutsoyiannis, after taking homogenization errors into account find global warming over the past century was only about one-half [0.42°C] of that claimed by the IPCC [0.7-0.8°C].”
So the net effect of CO2 is most likely some small fraction of 0.42 degrees C. I interpret this to mean that the coal trains of death, aren’t.

JC
July 17, 2012 6:35 am

Now about those GCMs ……

Victor Venema
July 17, 2012 6:45 am

The two citations are plainly wrong. The abstract is not a “new peer reviewed paper recently presented at the European Geosciences Union meeting.” It is a conference abstract by E. Steirou and D. Koutsoyiannis, which was presented at the session convened (organized) by Koutsoyiannis himself.
http://meetingorganizer.copernicus.org/EGU2012/oral_programme/9221
Conference abstracts are not peer reviewed; you cannot review an abstract of 260 words. At EGU all abstracts which are halfway about geoscience are accepted. Their purpose is to select which people get a talk and which ones get a poster presentation.
REPLY: I was of the impression that it was “in press” but I’ve changed the wording to reflect that. Hopefully we’ll know more soon. – Anthony

Johanus
July 17, 2012 6:49 am

Skeptikal says:
July 17, 2012 at 6:24 am
The data doesn’t need homogenization. If one location is hotter or colder than a neighbouring location, that’s weather. Raw data is the only data that’s worth anything. Once you bend the data out of shape, it becomes worthless.

You’re wrong. The data does need some kind of homogenization to correct for inaccurate or poorly situated instruments. We also need it to be able to summarize the weather over larger regions to make predictions and comparisons.
Here’s an example of how you yourself can use homogenization to help guarantee the next thermometer you buy will be more accurate.
Go to a place that sells cheap thermometers (Walmart etc). Normally there will be 5 or 10 instruments on display of various brands. You will immediately notice that they are all predicting different temperatures. Maybe some will read in the mid or low 70’s, some in the high 70’s. There will always be a maverick or two with readings way off into the impossible range.
Which thermometer, if any, should you buy?
Well, it is likely that there are several instruments in the bunch reporting fairly accurately. Best way to find the most accurate thermometer is to whip out your pocket calculator, add up all the temps and divide by the number of thermometers. (Throw away any obviously bogus readings first, such as a thermometer reading zero.) The resulting average value is most likely to be closest to the “real” temperature.
That is how homogenization works, on a small scale.
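
A tiny sketch of that store-counter procedure (hypothetical readings, and a crude cutoff standing in for "obviously bogus"):

```python
# Hypothetical display-shelf readings in degrees F; one unit is stuck at zero,
# one reads absurdly high.
readings = [73.5, 74.0, 72.8, 75.1, 0.0, 74.6, 99.9]

# Throw out obviously bogus readings, then average the rest.
plausible = [r for r in readings if 40.0 <= r <= 90.0]
estimate = sum(plausible) / len(plausible)

# Buy the thermometer that sits closest to the consensus estimate.
best = min(plausible, key=lambda r: abs(r - estimate))
print(round(estimate, 1), best)  # 74.0 74.0 for these made-up numbers
```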

R Barker
July 17, 2012 6:50 am

Something that troubles me about the USHCN v2 homogenization process is that it apparently assumes a linear relationship between station temperatures. That is OK when dealing with measurement errors but not, in my opinion, when creating the influence of each station’s temperature in the areal product.
Considering that heat flows from hot to cold, it would seem to me that a more representative temperature relationship between a hotter station and a colder one would be a function of the square of the distance between the stations. The quantity of heat represented by the difference between the two station temperatures disburses radially from the hot station and the temperature of the surface layer air would drop in proportion to the diffusion of the differential heat. For instance, the temperature difference at a point half way between the hot station and the cold station would be 1/4 the two station difference. With the current process, the temperature difference half way would be 1/2 the two station difference.
If vertical mixing is also considered, the influence of the hot station on the path to the cold station would be even more quickly diminished. My opinion is that the current homogenization process tends to give too much weight to the hot stations on surrounding areas. The process appears to be manufacturing additional heat where none exists which then is misinterpreted as global warming due to CO2.
But maybe I am missing something in my understanding of the homogenization process.

Bill Illis
July 17, 2012 6:51 am

This is why the NCDC and BEST like using the Homogenization process.
They know what the end-result does – it increases the trend. It’s not like they didn’t test it out in a number of ways before implementing it.
So they write a paper about the new process and provide a justification that it is reducing errors and station move problems. But what they are really doing is introducing even more error – and one might describe it as done on purpose.
The Steirou and Koutsoyiannis paper is about GHCN Version 2. The NCDC has already moved on to using GHCN-M Version 3.1 which just inflates the record even further.
And then someone needs to straighten out the TOBS adjustment process as well. The trend for this adjustment keeps rising and rising and rising, even 5 year old records. The TOBS issue was known about 300 years ago. Why are they still adjusting 10 year old and 5 year old records for this? Because it increases the trend again.

Andrew
July 17, 2012 6:52 am

Misspelling! Homogenization not homgenization

Victor Venema
July 17, 2012 6:58 am

Anthony Watts cited the two major errors of the abstract:
“increased positive trends, decreased negative trends, or changed negative trends to positive,” whereas “the expected proportions would be 1/2 (50%).”
You would not expect the proportions to be 1/2; inhomogeneities can have a bias, e.g. when an entire network changes from North wall measurements (19th century) to a fully closed double-louvre Stevenson screen, or from a screen that is open to the North or bottom (Wild/Pagoda-type screen) to a Stevenson screen, or from a Stevenson screen to an automatic weather station, as currently happens to save labor. The UHI produces a bias in the series, thus if you remove the UHI the homogenization adjustments would have a bias. There was a move from stations in cities to typically cooler airports that produces a bias, and again this means that you would not expect the proportions to be 1/2. Etc. See e.g. the papers by Böhm et al. (2001); Menne et al., 2010; Brunetti et al., 2006; Begert et al., 2005.
Also the change from roof precipitation measurements to near ground precipitation measurements cause a bias (Auer et al., 2005).
Anthony Watts citation:
“homogenation practices used until today are mainly statistical, not well justified by experiments, and are rarely supported by metadata. It can be argued that they often lead to false results: natural features of hydroclimatic times series are regarded as errors and are adjusted.”
Personally I just finished a study with a blind numerical experiment, which justified statistical homogenization and clearly showed that homogenization improves the quality of climate data (Venema et al., 2012). http://www.clim-past.net/8/89/2012/cp-8-89-2012.html
Many simpler validation studies have been published before.
Recently the methods were also validated using meta data in the Swiss; see:
http://www.agu.org/pubs/crossref/pip/2012JD017729.shtml
The size of the biased inhomogeneities is also in accordance with experiments with parallel measurements. It is almost funny that Koutsoyiannis complains about the use of statistics in homogenization; he does a lot of statistics in his own work. I guess he just did not read the abstract of this student (at least E. Steirou’s affiliation is the same as that of Koutsoyiannis, but Steirou is not mentioned in the list of scientists).
http://itia.ntua.gr/en/pppp/

John West
July 17, 2012 6:58 am

Even if one attributes all (yes, 100%, LOL) of that 0.42 degrees Celsius trend to CO2 increase that would put sensitivity to 2XCO2 between 1 and 2 degrees Celsius.
Assuming:
dF = 5.35ln(CO2f/CO2i)
dT=S(dF)
Using values @ the century endpoints:
dF = 5.35ln(375/300)=1.19
S=0.42/1.19×3.7=1.3
And allowing for some lag in the system(s) by using a CO2 from several years prior to 2000:
dF = 5.35ln(360/300)=0.97
S=0.42/0.97×3.7=1.6
More evidence that the 3-5 degree Celsius sensitivity to 2XCO2 claim is exaggerated.
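
For anyone who wants to check that arithmetic, here is the same back-of-the-envelope calculation in Python (John West's assumed CO2 endpoints and the 0.42°C trend; none of these numbers come from the paper itself):

```python
from math import log

F_2XCO2 = 5.35 * log(2)  # forcing for a CO2 doubling, about 3.7 W/m^2

def sensitivity_2xco2(dT, co2_start, co2_end):
    """Implied warming per CO2 doubling: dF = 5.35 ln(Cf/Ci), S = dT/dF, scaled to 2xCO2."""
    dF = 5.35 * log(co2_end / co2_start)
    return dT / dF * F_2XCO2

print(round(sensitivity_2xco2(0.42, 300, 375), 1))  # ~1.3 C per doubling
print(round(sensitivity_2xco2(0.42, 300, 360), 1))  # ~1.6 C per doubling
```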

David
July 17, 2012 6:59 am

Yeah – and having weather stations at airports, nicely located next to taxiways to benefit from nice warm jetwash, doesn’t help the accuracy of raw data either…

cd_uk
July 17, 2012 7:00 am

I suppose the question I have is what algorithm did they use for “homogenisation” and do they use it on
1) “suspect” stations only
2) all stations using the original neighbouring station data.
I guess case 1, otherwise the effect would be a spatial smoothing as in case 2 (again, depending on the routine: sounds like a simple IDW mean). Case 2 wouldn’t have a bias unless there were more anomalously high temperature stations than low temperature ones. But then this would give you the suspected spatial average anyway.
Anyways, this seems like bad stats, especially if they didn’t do a QC plot (interpolated vs actual) to see how the “high fidelity” stations fair when their values are interpolated from other nearby – high fidelity – stations using the same/similar algorithm. If they didn’t do this then the adjustments are all BS based on just a load of assumptions.

Steve Keohane
July 17, 2012 7:27 am

Pamela Gray says:
July 17, 2012 at 6:18 am
Interesting observation happening in NE Oregon. The noon temps have been warm to hot, but less than an hour later have dropped like a stone. Why? Thunderstorms. And we have at least two weeks of them. So was the whole day hot? Yes according to noon temps on old sensors. No according to sensors that average the temps together.
Which leads me to another issue. Those record temps are nonsensical. A moment at 107 degrees now may be compared to who knows how many hours at 107 degrees back then, or vice versa, and be called equal. Really? How was the peak temp measured way back when, compared to the peak temps of today? Are we comparing two different types of records? If so, why would we be calling the record temps this year “new records” that take the place of the old records set years and decades ago?

This has always bothered the hell out of me. We really need the integral of the curve for the day, and the RH% to tell if there is heating or cooling.

Claude Harvey
July 17, 2012 7:30 am

Re: Victor Venema says:
July 17, 2012 at 6:45 am
“The two citations are plainly wrong.The abstract is not a ”new peer reviewed paper recently presented at the European Geosciences Union meeting.” It is an conference abstract by E. Steirou and D. Koutsoyiannis, which was presented at the session convened (organized) by Koutsoyiannis himself.”
What about it Anthony? I’d already blown this one out to everyone I knew calling it “peer reviewed and presented at….”

Owen
July 17, 2012 7:32 am

Massage the data until it tells you what you want it to tell. That’s how science is done by the Climate Liars/Alarmists.
Why would anyone be surprised the data is corrupt. Global warming has never been about climate science. It’s about imposing a fascist political agenda by frightening people into thinking the world is coming to an end.
I don’t know why we keep debating the Climate Liars. They won’t change their minds because it’s never been about the facts; facts don’t matter to these people. They’ll continue to believe what they believe despite reality.
The people we should be reaching out to are the masses who still think that global warming is real. Once they realize global warming is a hoax, Gore, Hansen and their compadres in crime will become irrelevant – and hopefully prosecuted for fraud. These endless debates with people who have no ethical or moral standards are a waste of time.

David
July 17, 2012 7:33 am

Bill Illis says:
July 17, 2012 at 6:51 am
This is why the NCDC and BEST like using the Homogenization process.
“The Steirou and Koutsoyiannis paper is about GHCN Version 2. The NCDC has already moved on to using GHCN-M Version 3.1 which just inflates the record even further.”
==================================================================
All please read this as Bill illis is 100% correct. Version 3.1 is far worse. Also, the TOBS comment is spot on.
———————————————————————————————-

Physics Major
July 17, 2012 7:33 am


If I wanted to buy an accurate thermometer, I would leave my calculator at home and take along a thermometer of known calibration. I would buy the one that read closest to the calibrated thermometer, even if it was the highest or lowest of the samples.

Juice
July 17, 2012 7:39 am

Lucy Skywalker says:
July 17, 2012 at 4:25 am
Very good. Glad to see this peer-reviewed.

It appears to be a seminar presentation. These are not typically peer reviewed. I don’t know about this one, but I doubt it.

cd_uk
July 17, 2012 7:39 am

Victoria
You have me at an advantage: I can’t view the Venema et al., 2012 paper, only the abstract. Although the abstract does allude to the metric they’re using to ascertain “better” performance. From the abstract:
“Although relative homogenization algorithms typically improve the homogeneity of temperature data”
Is that not the point of the piece here? Essentially, homogeneity may not be such a good goal, as discussed above. So it doesn’t address what is being suggested.
To be honest I don’t know enough about this to make a judgement, but it does stand to reason that if you’re adjusting data in order to account for irregular superficial effects such as experimental error, then these should have no influence, or nominally so, on the final result: as many pushed up as pushed down. …unless (and as stated) there is a good experimental reason for the net gain. What is this reason?
As for the general point about statistics: all statistical methods have their strengths and weaknesses – none are perfect – but you chose the one (or better, many) that address the inherent limitations of the data. It appears that the methodology applied here was not the most appropriate. But then I’m not an expert.

July 17, 2012 7:46 am

Actually, I would expect that unbiased homogenization would result in more temperature trends being decreased, not an equal proportion of increases and decreases (because one of the goals of homogenization should be to erase UHI). So, I would suggest that the temperature record is more corrupt than the authors suggest.

Jimbo
July 17, 2012 7:47 am

Authors Steirou and Koutsoyiannis, after taking homogenization errors into account find global warming over the past century was only about one-half [0.42°C] of that claimed by the IPCC [0.7-0.8°C]

And now deduct natural climate variability from the 0.42°C and we are left with……….false alarm. Move along, nothing to see here folks. Enter fat lady.

Steven Mosher
July 17, 2012 7:52 am

Anthony.
Of course I’ll bring this up today at our BEST meeting. Dr. K is an old favorite. That said, since we don’t use homogenized data but use raw data instead, I’m not sure what the point will be.

cd_uk
July 17, 2012 7:54 am

Sorry Victoria – my spoilling is atroceous 😉
That should’ve been “fare” not “fair” and “choose” not “chose”.

Zeke Hausfather
July 17, 2012 7:56 am

Erm, this wasn’t published or peer reviewed (just presented at a conference). I have no idea how they got the results they did, since simply using all non-adjusted GHCN stations (in either v2 or v3) gets you pretty much the same results as using the adjusted data. Both the Berkeley Group and NCDC have relatively small adjustments on a global level:
http://rankexploits.com/musings/wp-content/uploads/2010/09/Screen-shot-2010-09-17-at-3.20.37-PM.png
http://curryja.files.wordpress.com/2012/02/berkeley-fig-2.png
What is the justification for choosing the specific stations that they did? I could easily enough pick 181 stations that show that homogenization has decreased the trend, but I’d rather just use all the stations rather than an arbitrary subset to avoid bias.
REPLY: I was of the impression that it was “in press” but I’ve changed the wording to reflect that. Hopefully we’ll know more soon.
I wouldn’t expect you’d consider any of this, so I’m not going to bother further right now. – Anthony

bubbagyro
July 17, 2012 7:57 am

The paper is available for any and all to see. All of the supporting data is there. I would call this paper, and its presentation at a conference, “hyper-peer reviewed”. I have had papers accepted in major journals and presented at conferences as well. A “peer-reviewed” paper (I like “pal-reviewed” as a better descriptor) maybe has 5 or 6 people reviewing it, and then it is “accepted”. A public paper plus a conference has orders of magnitude more review.
My take from the paper is 50% of cAGW is bogus. The other half comprises data exaggeration to the upside from UHI, plus comparison with extinct station data (stations in rural or remote areas that have been “removed”). So, in my opinion, there was likely no net warming in the 20th century till now.

JayPan
July 17, 2012 7:57 am

Should send such papers to Dr. Angela Merkel. She has warned these days that global temperature could increase by 4°C soon, as her chief climate change advisor, Mr. Schellnhuber, has told her. And many Germans are proud to show the world how to run a de-carbonized economy successfully … one day.

July 17, 2012 7:59 am

Is Industrialization the cause of AGW, or is global warming the creation of Academic Graffiti Man?

July 17, 2012 8:02 am

Is it possible to estimate degree of any human interference (with data, UHI or CO2) with natural change in the global temperature data?
I may have identified a good proxy for natural temperature oscillation totally independent of any climatic factor.
Since it is a natural oscillation with no trend, it was necessary to de-trend the temperature data. I used Northern Hemisphere data from 1880-2011, as more reliable than the global compound:
http://www.vukcevic.talktalk.net/GSCnh.htm
It can be assumed that the CO2 effect and UHI may have had some effect, or none, since 1950, but in the de-trended signal from 1880-1998 none could be detected. On the contrary, the 1950-1998 period shows particularly good agreement between the proxy and the data available.
The only period of contention is 1998-2011, and it would be odd for either CO2 or UHI to show up so late in the data.
Homogenization ?
Possibly.

Kev-in-Uk
July 17, 2012 8:06 am

Steven Mosher says:
July 17, 2012 at 7:52 am
Does that mean we have to await the definition of raw data? and more importantly who used what version of said ‘raw’ data?

Steven Mosher
July 17, 2012 8:07 am

John West.
You just calculated the transient climate response (TCR) at 1.6.
The ECR (equilibrium climate response) is anywhere from 1.5 to 2x higher,
so if you calculate a TCR (what you did) then you better multiply by 2…
giving you 3.2 for a climate sensitivity (ECR).

Alexej Buergin
July 17, 2012 8:13 am

“John West says:
July 17, 2012 at 6:58 am
And allowing for some lag in the system(s) by using a CO2 from several years prior to 2000:
dF = 5.35ln(360/300)=0.97”
If there is a lag, should that not be considered in the number for the beginning (300), too?

July 17, 2012 8:14 am

Physics Major says:
July 17, 2012 at 7:33 am

If I wanted to buy an accurate thermometer, I would leave my calculator at home and take along a thermometer of known calibration. I would buy the one that read closest to calibrated thermometer even if it was the highest or lowest of the samples.

But it might be bulky and you would have to allow some time to reach equilibrium. Your choice, of course.
The cheapie store thermometers should be “unbiased” estimators of temperatures, in the sense that they are just as likely to be too low as too high. So, the average should converge asymptotically to the most accurate (“homogenized”) estimate of the temperature.
So, if one of the cheapies was within a half degree or so of that estimate (out of at least a half dozen or more instruments), then I would buy it. Otherwise I’d try another store.
I’ll bet most of the time we’d end up buying the same instrument (or rejecting them all).

observa
July 17, 2012 8:22 am

Congratulations Mr Watts for something elementary and curious you noticed about those whitewashed Stevenson Screens so long ago it seems. The scientific community and the scientific method are deeply indebted to you for refusing to deny the evidence before you, despite all the political pressure to do so.
As for the greenwash and the bought and paid for claque of post-normal political scientists, their day of reckoning fast approaches.

cd_uk
July 17, 2012 8:22 am

Isn’t this all a bit immaterial anyway? The whole thing about measuring the Earth’s average temperature seems crazy to me. Measuring changes on the order of a few tenths of a degree is futile given the methods employed – if I understand them correctly.
First there is the spatial average. How do you grid temperature data and at what resolution? Do you use IDW mean, natural neighbours, B-spline, kriging (and which type), declustering (cell vs polygonal), etc.?
Then you have to decide which projection system you use, or whether you use angular distances, but then which geoid do you use?
It’s bias upon bias.
Surely this would be just funny if so much money wasn’t spent on it.

Dave
July 17, 2012 8:28 am

They won’t listen but this is one huge nail in a coffin already crowded with nails. Please don’t mind if I gloat a little!

D. J. Hawkins
July 17, 2012 8:35 am

Johanus says:
July 17, 2012 at 6:49 am
Skeptikal says:
July 17, 2012 at 6:24 am
The data doesn’t need homogenization. If one location is hotter or colder than a neighbouring location, that’s weather. Raw data is the only data that’s worth anything. Once you bend the data out of shape, it becomes worthless.
You’re wrong. The data does need some kind of homogenization to correct for inaccurate or poorly situated instruments. We also need it to be able to summarize the weather over larger regions to make predictions and comparisons.
Here’s an example of how you yourself can use homogenization to help guarantee the next thermometer you buy will be more accurate.
Go to a place that sells cheap thermometers (Walmart etc). Normally there will be 5 or 10 instruments on display of various brands. You will immediately notice that they are all predicting different temperatures. Maybe some will read in the mid or low 70′s, some in the high 70′s. There will always be a maverick or two with readings way off into the impossible range.
Which thermometer, if any, should you buy?
Well, it is likely that there are several instruments in the bunch reporting fairly accurately. Best way to find the most accurate thermometer is to whip out your pocket calculator, add up all the temps and divide by the number of thermometers. (Throw away any obviously bogus readings first, such as a thermometer reading zero.) The resulting average value is most likely to be closest to the “real” temperature.
That is how homogenization works, on a small scale.

Since the temperature in Walmart at the thermometer display area is likely to be, in fact, uniform, your method has some merit. However, in the real world, it’s unlikely that stations separated by even as little as 5-10 miles see absolutely identical conditions. If it’s 75F at “A” and 71F at “B”, the “real” temperature at both of them isn’t likely to be 73F.

Bill Illis
July 17, 2012 8:37 am

Zeke, you need to start pinning all your charts to the last year of data. Normalizing all the lines in the centre of the graph distorts just how much change there is over time.
For example, here is how the US adjustments should be shown.
http://img692.imageshack.us/img692/6251/usmonadjcjune2012i.png
Versus the way you chart it up.
http://rankexploits.com/musings/wp-content/uploads/2012/05/Berkeley-CONUS-and-USHCN-Adj-tavg-v2.png

Lester Via
July 17, 2012 8:39 am

Not being a climatologist or meteorologist, I have little understanding of the logic behind where monitoring stations are placed. I do know from observing my car’s outside temperature indicator that on exceptionally calm, sunny days the indication varies significantly between treeless areas and wooded areas. This variation is minimal, if even detectable at all, on windy or cloudy days.
If the intent of a monitoring station is to measure a temperature representative of the air temperature in the general area in a way that is not influenced by wind speed or direction and cloudiness, then the task is not at all a simple, straightforward one.
As a former metrologist, thoroughly familiar with thermometers and their calibration, I would think quality instrumentation was used at monitoring stations, making random instrumentation errors small compared to the variations experienced due to the physical location of the monitoring station. I am guessing that the homogenization process used only corrects for random errors – those having an equal probability of being positive or negative, and will not take out a bias due to physical location.

Don
July 17, 2012 8:43 am

Perhaps I am being too simplistic, but it seems to me that any methodology they come up with is easily tested. Simply pick any number of reporting stations and use them to calculate the temps at other known stations, pretending they didn’t exist.
If your prediction is close, then you are on to something. If it is consistently higher or lower, then your methods are garbage.
What am I missing?
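
Roughly what Don describes is leave-one-out cross-validation. A bare-bones sketch (assuming simple inverse-distance interpolation and made-up station coordinates and temperatures):

```python
from math import sqrt

# Hypothetical stations: (x, y) position in arbitrary units and a temperature in C.
stations = [((0, 0), 14.2), ((1, 0), 13.8), ((0, 1), 14.6),
            ((1, 1), 13.9), ((2, 2), 12.7)]

def idw_predict(target_xy, others, power=2):
    """Predict the temperature at target_xy from the remaining stations."""
    num = den = 0.0
    for (x, y), temp in others:
        d = sqrt((x - target_xy[0]) ** 2 + (y - target_xy[1]) ** 2)
        w = 1.0 / (d ** power)
        num += w * temp
        den += w
    return num / den

# Leave each station out in turn and see how well its neighbors "predict" it.
for i, (xy, actual) in enumerate(stations):
    predicted = idw_predict(xy, stations[:i] + stations[i + 1:])
    print(xy, actual, round(predicted - actual, 2))
```

A last column that is consistently positive or negative would be the "consistently higher or lower" red flag Don mentions.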

Jeff Condon
July 17, 2012 8:45 am

Anthony,
I hope the Best team does take note. So far I’ve not even received a reply to my concerns. Apparently I need to take their matlab class.
I’m annoyed with it so it is a good thing for them that I don’t have any time.

July 17, 2012 8:46 am

D. J. Hawkins says:
………………………………………………………………………………………….
Since the temperature in Walmart at the thermometer display area is likely to be, in fact, uniform, your method has some merit. However, in the real world, it’s unlikely that stations separated by even as little as 5-10 miles see absolutely identical conditions. If it’s 75F at “A” and 71F at “B”, the “real” temperature at both of them isn’t likely to be 73F.

———————————————————————————-
I share the same concern. But there are elegant (and sometimes useful) methods (interpolation, kriging etc) for estimating values on a partially sampled gradient. Some better than others. None are perfect.
As George Box famously said: All models are wrong. Some are useful.

July 17, 2012 8:48 am

Steve Keohane says:
July 17, 2012 at 7:27 am
……
Put the sensor of your digital thermometer in a large enclosed water container; read it at sunset for the daytime and at sunrise for the night-time averages.

Skeptikal
July 17, 2012 8:50 am

Johanus says:
July 17, 2012 at 6:49 am
Here’s an example of how you yourself can use homogenization to help guarantee the next thermometer you buy will be more accurate.
Go to a place that sells cheap thermometers (Walmart etc). Normally there will be 5 or 10 instruments on display of various brands. You will immediately notice that they are all predicting different temperatures. Maybe some will read in the mid or low 70′s, some in the high 70′s. There will always be a maverick or two with readings way off into the impossible range.
Which thermometer, if any, should you buy?
Well, it is likely that there are several instruments in the bunch reporting fairly accurately. Best way to find the most accurate thermometer is to whip out your pocket calculator, add up all the temps and divide by the number of thermometers. (Throw away any obviously bogus readings first, such as a thermometer reading zero.) The resulting average value is most likely to be closest to the “real” temperature.
That is how homogenization works, on a small scale.

That might work when there are 10 thermometers in the same location. Try doing that with your 10 thermometers spread across 10 different stores with varying store temperatures. The thermometer you purchase in that situation will likely be no more accurate than a random selection.
Temperature varies with location so it’s absurd to suggest that one thermometer is incorrect simply because it reads a different temperature to another thermometer at a different location. Unless there is a known fault with a particular instrument, homogenization is more likely to introduce errors than to correct any.

Stefan
July 17, 2012 8:54 am

There’s some sort of Climate Change Uncertainty Principle that the super high temps are reality and happening but not where they’re being measured, and a cat.

Ray
July 17, 2012 9:02 am

In title… Homogenization… one “o” is missing.
[REPLY: Fixed. Thanks. -REP]

Tilo Reber
July 17, 2012 9:04 am

This probably sounds like a broken record by now. But I’ve been claiming for about four years that half of the warming trend is due to “adjustments”. And a portion of the remainder is due to coming out of an LIA. This again reinforces the climate sensitivity numbers that were produced by Lindzen and Spencer. The climate sensitivity is somewhere between .5C and 1.2C per CO2 doubling.

OhMyGosh
July 17, 2012 9:07 am

Steven Mosher says:
July 17, 2012 at 7:52 am
Anthony.
Of course I’ll bring this up today at our BEST meeting. Dr. K is an old favorite. That said, since we dont use homogenized data but use raw data instead I’m not sure what the point will be
————————————————————————————
Steve McIntyre explains what the point is.
It is pretty amazing how lamely the BEST guys deal with severe issues.
Their UHI paper is just not plausible, but they don’t care.
BEST is not compatible with ocean temperature data, but they don’t care.
Nor is it compatible with satellite data. Who cares?
And if this would only affect homogenized datasets (McIntyre thinks otherwise), BEST would be an outlier and the whole point of BEST, of confirming other data sets, falls apart. And that’s the point where Mosher is not sure what the point is…

Michael Larkin
July 17, 2012 9:13 am

How are Steirou and Koutsoyiannis regarded by the warmists? Are they regarded as sceptics, or what?

Nigel Harris
July 17, 2012 9:26 am

D J Hawkins says:

Since the temperature in Walmart at the thermometer display area is likely to be, in fact, uniform, your method has some merit. However, in the real world, it’s unlikely that stations separated by even as little as 5-10 miles see absolutely identical conditions. If it’s 75F at “A” and 71F at “B”, the “real” temperature at both of them isn’t likely to be 73F.

I think that’s a straw man. Nobody homogenizes temperatures like that. Of course different locations, even quite nearby, have different absolute temperatures. But if you see the temperature record at one location suddenly change relative to all the other nearby locations, it maybe suggests that something has changed in the way the temperature is being recorded, or in the location of the thermometer. If the data are being used to try to understand climate, it makes sense to try to correct for such problems.
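
What Nigel is describing is, in essence, relative breakpoint detection. A bare-bones sketch (made-up numbers, and a much cruder test than the SNHT mentioned in the abstract):

```python
# Candidate station and the mean of its neighbors, year by year (hypothetical).
candidate = [14.1, 14.3, 13.9, 14.2, 14.0, 15.1, 15.3, 15.0, 15.2, 15.1]
neighbors = [14.0, 14.2, 14.0, 14.1, 14.1, 14.0, 14.2, 14.1, 14.0, 14.2]

# Work on the difference series: real regional climate largely cancels out,
# so a sustained shift suggests a station change (move, new screen, etc.).
diff = [c - n for c, n in zip(candidate, neighbors)]

best_k, best_jump = None, 0.0
for k in range(1, len(diff)):
    before = sum(diff[:k]) / k
    after = sum(diff[k:]) / (len(diff) - k)
    if abs(after - before) > abs(best_jump):
        best_k, best_jump = k, after - before

print(best_k, round(best_jump, 2))  # 5 1.02: an apparent ~1 C step at year 5
```

Whether such a step should then be "corrected", and in which direction, is exactly the judgement call the paper is questioning.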

tadchem
July 17, 2012 9:36 am

The Fundamental Premise, that “Homogenization is necessary to remove errors introduced in climatic time series,” is intellectually dishonest.
It presumes a priori knowledge of what “error-free” data would look like, and totally disregards the physical fact of natural variability and its contribution to the uncertainty of whatever ‘trends’ are later inferred.
Either ‘smoothing’ OR ‘trending’ can be performed on a raw data set, but to perform BOTH is wrong and inevitably produces misleading results.

MarkW
July 17, 2012 9:37 am

Between this, UHI, micro-site contamination, solar changes (both TSI and the Svensmark effect), and oceanic cycles (PDO, AMO, etc.),
there’s not a lot of warming left for CO2 to be responsible for.

DR
July 17, 2012 9:50 am

Careful Zeke, realclimatescientists tried the same sort of attacks before against Koutsoyiannis and got their ears pinned back.
http://climateaudit.org/2008/07/29/koutsoyiannis-et-al-2008-on-the-credibility-of-climate-predictions/
He published an updated version in 2009 or 2010 to shut them up. I’m just sayin’

John Day
July 17, 2012 9:51 am

@Nigel Harris
> But if you see the temperature record at one location suddenly change
> relative to all the other nearby locations, it maybe suggests that something
> has changed in the way the temperature is being recorded, or in the location
> of the thermometer. If the data are being used to try to understand climate,
> it makes sense to try to correct for such problems.
But what if the anomaly at that one location is legitimate (due to local precipitation or whatever)? How can you justify “correcting” the observation? The expected value for the temperature in the region must account for all the temps, plus or minus the trend. Else you can’t correctly balance the incoming and outgoing radiation.
See my post above for an example of such a “legitimate” anomaly:
http://wattsupwiththat.com/2012/07/17/new-paper-blames-about-half-of-global-warming-on-weather-station-data-homgenization/#comment-1034593

John Finn
July 17, 2012 10:00 am

Since UAH satellite temperatures show an increase of ~0.4 deg over the past 30 years, then ALL warming over the past century must have been since the 1970s – i.e. just about the time the CO2 effect would be expected to become distinguishable from natural variability.
This paper does nothing to debunk the CO2 effect. On the contrary, it suggests that TSI, Svensmark, PDO and other natural effects are negligible over time. Note satellite readings are not contaminated by UHI.

Duster
July 17, 2012 10:02 am

Johanus says:
July 17, 2012 at 6:49 am
Skeptikal says:
July 17, 2012 at 6:24 am
*The data doesn’t need homogenization. If one location is hotter or colder than a neighbouring location, that’s weather. Raw data is the only data that’s worth anything. Once you bend the data out of shape, it becomes worthless.*
You’re wrong. The data does need some kind of homogenization to correct for inaccurate or poorly situated instruments. We also need it to be able to summarize the weather over larger regions to make predictions and comparisons.
Here’s an example of how you yourself can use homogenization to help guarantee the next thermometer you buy will be more accurate.

That is how homogenization works, on a small scale.

Skeptikal is right here. The process you describe is basic QC statistics and applies to the instrument(s) you purchase. You are not planning to purchase the entire lot, including the mavericks, and then employ them all to generate analytical-quality data.
Also, consider what happens if you do not have even a clue as to which instruments were the good ones and which were the bad ones (e.g. GHCN) – or you got all your new instruments out of their packages and dropped them all before individually marking the “bad” instruments, and in fact did not know in which direction the “bad” instruments were biased. No “correction” you could possibly apply is more than guesswork. Also, since you do not have a clue as to which instruments are good and which are not, your “correction” should be applied in both directions, weighted perhaps by the fact that when you strolled out of the store you knew there were two bad ones in the lot. The outcome of the correction is precisely as good as your guess, limited by the quality and specific biases of the worst instruments. If you cannot identify which ones are bad and how they err, then …
As to whether homogenization is necessary at all, suppose you are interested in detecting trends, which is in fact the major point of the AGW argument – the trend in the global average temperature employed as a measure of climate. Whether your instruments are “good” or “bad”, if they respond essentially uniformly to temperature changes – this assumes that the instrument error amounts to a shifted scale on the instrument rather than problems with the sensor itself – then you can look at the trends from each instrument without data hamburgerization, and still estimate a regional trend, one with less potential error than you would have after homogenization. There’s no need for the actual data from your instruments to be “adjusted.” It should be left strictly alone. Nor do you even need to estimate the global average temperature.
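A rough numerical illustration of that idea, assuming, as the comment does, that each instrument’s error is a constant offset rather than a drifting sensor; all values below are invented:

import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1900, 2001)
true_trend = 0.005                      # degrees C per year, the signal to recover

# Five hypothetical stations, each with its own constant calibration offset.
offsets = rng.uniform(-2.0, 2.0, 5)
stations = [off + true_trend * (years - 1900) + rng.normal(0, 0.4, years.size)
            for off in offsets]

# Fit a linear trend to each station separately; the constant offsets drop out,
# so no station's raw data ever needs to be "adjusted".
per_station = [np.polyfit(years, s, 1)[0] for s in stations]
print("per-station trends (C/yr):", np.round(per_station, 4))
print("regional estimate (C/yr): ", round(float(np.mean(per_station)), 4))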

Alexej Buergin
July 17, 2012 10:06 am

@West, Mosher
According to this
http://www.ipcc.ch/ipccreports/tar/wg1/345.htm
the IPCC shows 3.5°C ECS from equilibrium to equilibrium (of temperature and CO2) for a doubling of CO2 in 70 years. The lag is hundreds of years. IPCC-TCR (from equilibrium to end of doubling after 70 years) is 2°C.
Since CO2 was already increasing in the year 1900, the first calculation of West goes from CO2 increasing to CO2 increasing, which gives the same temperature difference as from equilibrium to equilibrium (3.5°C according to IPCC for doubling, thus years 70 to 140).
Therefore I see no need to consider any lag and think 1.3°C is the West-ECS.
(P.S. I assume that IPCC-ECS is the same thing as Mosher-ECR)

rogerknights
July 17, 2012 10:14 am
John West
July 17, 2012 10:19 am

Steven Mosher says:

John West.
You just calculated the transient climate response. ( TCR) at 1.6.
the ECR (equilibrium Climate response) is anywhere from 1.5 to 2x higher.
so if you calculate a TCR ( what you did) then you better multiply by 2…
Giving you 3.2 for a climate sensitivity. (ECR)

No, I calculated the TCR @ 1.3 & took a stab @ ECR (minus feedbacks) by taking out about a decade of CO2 increase. Admittedly, not a “climate science” approved method, but for some reason I think around 10 years is plenty of time to realize the temperature difference from a change in forcing, considering we see temperature changes daily and seasonally from changes in forcing with short lags (see Willis’ post); and whether feedbacks are positive, negative or neutral is still a very open question in my mind. Why should I accept (on faith?) an extremely long lag and net positive feedbacks when I can’t observe that in the real world and have no substantial evidence for it? It’d be like believing in the North American Wood Ape (Bigfoot). Anyway, taking your method of 1.5-2x, the ECR would be from 1.95 to 2.6 IF all the warming is due to CO2 increase.
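For anyone following along, the arithmetic behind that last sentence, using the commenter’s own 1.3 figure (nothing below comes from the IPCC or from Mosher):

tcr = 1.3                                  # the commenter's transient estimate, C per CO2 doubling
ecs_low, ecs_high = 1.5 * tcr, 2.0 * tcr   # applying the suggested 1.5-2x scaling
print(f"implied equilibrium range: {ecs_low:.2f} to {ecs_high:.1f} C per doubling")   # 1.95 to 2.6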

Victor Venema
July 17, 2012 10:20 am

Only one of the three sentences cited from the “peer reviewed paper” can actually be found in the abstract, which is lightly reviewed; the others are from the talk, which is not reviewed. Anthony, get your facts right! A longer response can be found on my blog:
http://variable-variability.blogspot.com/2012/07/investigation-of-methods-for.html
In this community it does not seem to be well known how homogenization is actually performed, thus I also link an introductory post on homogenization:
http://variable-variability.blogspot.com/2012/01/homogenization-of-monthly-and-annual.html

John West
July 17, 2012 10:28 am

Alexej Buergin says:

temperature difference as from equilibrium to equlibrium

Excellent point!
The temperature in 1900 is in equilibrium [as much?] as the temperature in 2000 and [as much?] as the temperature will be in 2100. Hmmm.

sunshinehours1
July 17, 2012 10:34 am

John Finn “Note Satellite readings are not contaminated by UHI.”
Are we sure? NASA has no trouble finding huge amounts of UHI from their satellites.
“Summer land surface temperature of cities in the Northeast were an average of 7 °C to 9 °C (13°F to 16 °F) warmer than surrounding rural areas over a three year period, the new research shows.”
http://www.nasa.gov/topics/earth/features/heat-island-sprawl.html
http://www.nasa.gov/images/content/505071main_buffalo_surfacetemp_nolabels.jpg

Gordon Richmond
July 17, 2012 10:37 am

If one is buying a thermometer, and expects any degree of accuracy at all, it would be wise to check its calibration at two points bracketing the temperature range of interest.
After all, a broken clock is right twice a day.

Victor Venema
July 17, 2012 10:38 am

cd_uk says: “Victoria
You have me at an advantage: I can’t view the Venema et al., 2012 paper, only the abstract. Although the abstract does allude to the metric they’re using to ascertain “better” performance. From the abstract:”
Venema et al., 2012 is published in an open access journal, so you can read it:
http://www.clim-past.net/8/89/2012/
I thought that open access was important here, given the amount of rubbish and confusion being spread by these kinds of blogs.
“To be honest I don’t know enough about this to make a judgement but it does stand to reason if you’re adjusting data, in order to account for irregular superficial effects such as experimental error, then these should have no influence, or nominally so, on the final result: as many pushed up as pushed down. …unless (and as stated), there is good experimental reasons for the net gain. What is this reason?”
You guys are afraid that the urban heat island pollutes the global climate signal. Homogenization aims to remove such artifacts, and if we only had the UHI, homogenization would make the temperature trend smaller. That is, it would have an influence. Otherwise, one would not need to homogenize data to compute a global mean temperature.
In practice homogenization makes the temperature trend stronger. This is because temperatures in the past were too high. In the 19th century many measurements were performed at North-facing walls; especially in summer the rising or setting sun would still shine on these instruments. Consequently these values were too high, and homogenization makes them lower again. Similarly, the screens used in the first half of the 20th century were open to the North and to the bottom. This produced too-high temperatures on days with little wind and strong sun, as the soil would heat up and radiate at the thermometer. These too-high temperatures are reduced by homogenization. In the US you have similar problems with the time-of-observation bias and with the transition from Stevenson screens to automatic weather stations.
The size of the corrections is determined by comparison with neighboring stations, but not by averaging, as Anthony keeps wrongly writing. The key ideas are explained on my blog:
http://variable-variability.blogspot.com/2012/01/homogenization-of-monthly-and-annual.html

davidmhoffer
July 17, 2012 10:39 am

Steven Mosher says:
July 17, 2012 at 8:07 am
John West.
You just calculated the transient climate response. ( TCR) at 1.6.
the ECR (equilibrium Climate response) is anywhere from 1.5 to 2x higher.
so if you calculate a TCR ( what you did) then you better multiply by 2…
Giving you 3.2 for a climate sensitivity. (ECR)
>>>>>>>>>>>>>>>
You cannot make this statement unless you know to some degree of precision what the time constant is, which you do not. Further, even if you knew the time constant for one particular forcing, you must also know the time constants for all other forcings that are active to the point of still being significant, what their sign is, and how far along each is in terms of a total of 5 time constants. There are WAY too many factors all in place at the same time; we know the time constant of pretty much none of them, let alone which ones are at the beginning of the cycle and which ones are at the end.

Victor Venema
July 17, 2012 10:44 am

REPLY: I was of the impression that it was “in press” but I’ve changed the wording to reflect that. Hopefully we’ll know more soon. – Anthony
I had not seen the reply before my previous answer. Thank you for correcting this factual error. Maybe you should write fewer posts per day. Almost any post on a topic I am knowledgeable about contains factual errors. Your explanation of how homogenization works could not have been more wrong. Why not take some time to study the topic? One post a day is also nice.
REPLY: Thank you for your opinion; reading your blog, clearly you wish to prevent opinion and discourse, unless it’s yours. See what Willis says below. See also what statistician Steve McIntyre has to say about it. Be sure of yourself before making personal accusations – Anthony

Editor
July 17, 2012 10:45 am

Victor Venema says:
July 17, 2012 at 6:58 am

Anthony Watts cited the two major errors of the abstract:

“increased positive trends, decreased negative trends, or changed negative trends to positive,” whereas “the expected proportions would be 1/2 (50%).”

You would not expect the proportions to be 1/2; inhomogeneities can have a bias, e.g. when an entire network changes from North wall measurements (19th century) to a fully closed double-louvre Stevenson screen, or from a screen that is open to the North and the bottom (Wild/Pagoda-type screen) to a Stevenson screen, or from a Stevenson screen to an automatic weather station, as currently happens to save labor.

Victor, thanks for your comments. Nobody is saying that the inhomogeneities do not have a bias. What was said was that if you have a number of phenomena biasing the records (you point out several different ones above), you would expect the biases to cancel out, rather than reinforce. They may not do so, but that has to be the default assumption.

Personally I just finished a study with a blind numerical experiment, which justified statistical homogenization and clearly showed that homogenization improves the quality of climate data (Venema et al., 2012). http://www.clim-past.net/8/89/2012/cp-8-89-2012.html
Many simpler validation studies have been published before.

Thank you also for your fascinating and valuable study, but that is a huge oversimplification of the results. The work you report on analyzed a host of methods for homogenization, and a total of 15 such methods were compared. Some of these improved the quality of the data, and some did not. Your abstract states (emphasis mine):

Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data. Training the users on homogenization software was found to be very important.

So for you to now claim that “homogenization improves the quality of climate data” is a total misrepresentation of the results. According to your study, some types of homogenization improved the temperature data, most didn’t improve the precipitation data, and people often misused the homogenization software. That’s a long, long ways from your statement above that “homogenization improves the quality of climate data”. The best we can say from your study is that sometimes, some homogenization techniques improve the quality of some climate data, depending on the metric chosen to measure the improvement …
Again, thanks for your very interesting work on the question.
w.

highflight56433
July 17, 2012 10:46 am

Seems to me that a temp is a temp. Why mess with it? If one has to homogenize, or adjust, then the site should be moved to an area that does not need such. Airports need real ramp temps so aviators can calculate landing and takeoff distances.
Also, we recognize that there is a difference between airport UHI sites and remote wilderness-type stations that have not had an increase in typical infrastructure like asphalt, etc. All seem legitimate as individual sites. I just can’t understand the thinking and reasoning in making adjustments of any sort. We all know how to manipulate data for a cause, for a paper or a job, using statistical methods. Appears as unnecessary busy work. (sounding cynical?)
As quoted from the rogerknights link where Steve McIntyre says: “In commentary on USHCN in 2007 and 2008, I observed the apparent tendency of the predecessor homogenization algorithm to spread warming from “bad” stations (in UHI sense) to “good” stations, thereby increasing the overall trend.” Pretty much says it in my view.

son of mulder
July 17, 2012 10:54 am

This needs proper peer review and crowd review. If correct then certain people deserve to go to jail. There can be no defence for such adjustments (homogenization).

jayhd
July 17, 2012 10:59 am

When I read explanations for homogenization of data like “The data does need some kind of homogenization to correct for inaccurate or poorly situated instruments,” I automatically get suspicious of the whole project. How do you tell if a thermometer is inaccurate? If it is truly inaccurate, instead of just recording temperatures that are inconveniently lower than what the researcher wants, then the thermometer should be discarded and all the data it recorded discarded. And what are the criteria for a poorly placed instrument? If it truly doesn’t meet standard requirements for placement, then all its recorded data should be discarded. If eliminating truly faulty and poorly sited instruments leaves gaps, so be it. Unless an absolutely foolproof and validated method can be used to accurately “fill in the blanks”, only valid temps should be used and the problems with instruments and sites should be duly noted.
Jay Davis

Steven Mosher
July 17, 2012 11:01 am

Eyal Porat says:
July 17, 2012 at 3:51 am
Somehow this doesn’t surprise me.
I believe the other half is the UHI effect.
##############################
That would put us in the LIA. Look, if people want to twist and turn the numbers to make this century as cold as the LIA, then that’s a fun little game. But if you actually believe that the sun has anything to do with the climate and you believe in the LIA solar minimum, then it makes it rather hard to argue that:
1. The sun was the cause of the warming since the LIA
2. It’s no warmer now than in the LIA.
But go ahead and knock yourself out with crazy arguments. Don’t expect to convince anyone.

Gail Combs
July 17, 2012 11:05 am

JeffC says:
July 17, 2012 at 4:52 am
a station with data should never be homogenized … it doesn’t need to be … homogenization doesn’t reduce errors but simply averages them out over multiple stations … and why assume there are errors ? if there are then id them and toss them out otherwise assume the raw data is good … this assumption of errors is just an excuse to allow UHI to pollute nearby stations …
____________________________________
Agreed. Either the station is giving good data, in which case it should be left alone or the data is questionable in which case the data (with appropriate documentation of reasons) is tossed.
The fact they tossed so many stations makes the current data set questionable. E.M. Smith (Chiefio) looked into the Great Dying of the Thermometers starting around HERE
Also see his: Thermometer Zombie Walk
Bob Tisdale looks at the Sea Surface data sets: http://bobtisdale.blogspot.com/2010/07/overview-of-sea-surface-temperature.html
The Cause of Global Warming by Vincent Gray, January 2001
A Pending American Temperaturegate By Edward Long, February 2010
Cumulative adjustments to the US Historical Climatological Network: http://cdiac.ornl.gov/epubs/ndp/ushcn/ts.ushcn_anom25_diffs_urb-raw_pg.gif
Arctic station Adjustments: http://wattsupwiththat.files.wordpress.com/2012/03/homewood_arhangel_before_after.png?w=640
See: http://wattsupwiththat.com/2012/03/19/crus-new-hadcrut4-hiding-the-decline-yet-again-2/
But I think the clincher is AJ Strata’s error analysis article coupled with Jo Nova’s article Australian Temperature Records, Shoddy, Inaccurate, Unreliable – SURPRISE!
There is no way you can distinguish the trend from the noise in the temperature record. Especially after it has been tampered with.
I am certainly glad this paper got written but it is not news to anyone who has looked at WUWT.
This paper coupled with the recent article By its Actions the IPCC admits its past reports were unreliable should be sent to every Congresscritter in the Federal and State government with the header GOOD NEWS, there is no global climate change crisis….

Steven Mosher
July 17, 2012 11:12 am

highflight56433 says:
July 17, 2012 at 10:46 am
Seems to me that a temp is a temp. Why mess with it? If one has to homogenize, or adjust, then the site should be moved to an area that does not need such.
###################
That’s not the problem
Situation: We have a station named Mount Molehill. It is located at 3000 meters above sea level. It records nice cool temperatures from 1900 to 1980. Then in 1981 they decide to relocate the station to the base of Mount Molehill, 5 km away. Mount Molehill suddenly becomes much warmer.
But won’t they rename the station? Nope! They may very well keep the station name the same.
But won’t the latitude and longitude change? Nope. It depends entirely on the agency recording the position; until recently many only reported to 1/10 of a degree (10 km). So what you get, IF YOU ARE LUCKY, is a piece of metadata that says in 1981 the altitude of the station changed.
Now, my friends, how do you handle such a record? A station at 3000 meters is moved to 0 meters and suddenly gets warmer? That’s some raw data, folks. That’s some unadjusted data.
Anybody want to argue that it should be used that way??
No wait, you all looked at Roy Spencer’s new temperature record, right? Did Anthony complain that Roy adjusted his data for differences in altitude? Nope. Weird how that works.
When you have stations that change altitude over their record, the data cannot be used as is. Roy knows this. Anthony knows this. You all know this.
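A back-of-the-envelope sketch of why such a move cannot simply be ignored; the 3000 m figure is from the comment, while the ~6.5 °C per km lapse rate is a standard rough value, not anything from GHCN:

# Hypothetical Mount Molehill record: station moved from 3000 m to sea level in 1981.
lapse_rate_per_m = 6.5 / 1000.0          # ~6.5 C of cooling per km of altitude (rough figure)
alt_before, alt_after = 3000.0, 0.0

expected_jump = (alt_before - alt_after) * lapse_rate_per_m
print(f"Spurious warming from the move alone: ~{expected_jump:.1f} C")

# A century trend fitted through the raw series would be dominated by this ~20 C
# step, which has nothing to do with climate. The usual remedies are either to
# adjust for the documented move or to split the record at 1981 and treat the
# two pieces as separate series, working with anomalies within each piece.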

NeilT
July 17, 2012 11:14 am

[Snip. You may disagree with Anthony, but when you call him a liar that crosses the line. ~dbs, mod.]

mikef2
July 17, 2012 11:17 am

John Finn
Since UAH satellite temperatures show an increase of ~0.4 deg over the past 30 years, then ALL warming over the past century must have been since the 1970s – i.e. just about the time the CO2 effect would be expected to become distinguishable from natural variability.
This paper does nothing to debunk the CO2 effect. On the contrary, it suggests that TSI, Svensmark, PDO and other natural effects are negligible over time. Note: satellite readings are not contaminated by UHI
John…….I’m not sure you meant to let that cat out of the bag did you? You are saying then that all this paper is suggesting is that the general assumptions of the surface records are 0.3 – 0.4 ‘higher’ than they should be? Fair enough…I actually agree with you. Which means then that the surface record should really match UAH at 0.4C since 1979. Fair enough, I agree with you.
Funny thing though, is we have those same surface records showing half a degree swings in the early and mid part of last century….as you say, without any CO2 input, so natural variation can swing 0.4C. Or…..if we junk half the range of surface records as suggested above….natural variability equals around 0.2C……..that leaves you some room for CO2 since the 1970s to give you another 0.2C.
hmmmm……….isn’t that pretty much what Lindzen etc have been saying all along…CO2 effect trivially true but essentially meaningless? Forgive me if I don’t get too worked up about 0.2C over 30+ years….

Steven Mosher
July 17, 2012 11:17 am

willis:
“of methods for homogenization, and a total of 15 such methods were compared. Some of these improved the quality of the data, and some did not. ”
Then of course it would make sense to check the report and see how PHA did? Because they are looking at GHCN v2 here, a product that isn’t used by anyone, a product that has been replaced by version 3. It would make sense to look at how PHA performed rather than SNHT which has known issues.

davidmhoffer
July 17, 2012 11:35 am

There is another aspect to temperature trends once explained to me by Steven Mosher when he was demolishing an error I had made in regard to analysing NASA/GISS gridded data. I’ve been thinking about it since, and it seems this would be an appropriate thread to bring it up in.
IIRC, Mr Mosher explained to me that GISS does an “in fill” of missing grid data provided that over some percentage (50% I think) is available for that grid cell for the time period in question. As an example, if a grid cell had temps of 10, nodata, and 11, GISS would, for those three time segments, “in fill” the nodata with the average of the other two, for a temp of 11.5. At the time, it explained how GISS could show a value in a grid cell truncated to a specific point in time when taking a different time segment would show that grid cell as being empty. But the overall method has bothered me ever since, and this thread reminded me of it.
If the explanation as I understand it is correct, then it can have no other effect on a global basis across the entire time series but to warm the present and cool the past. That the earth’s temperature has been warming since the LIA is reasonably well accepted. If that is the case, we have to consider that there is no more data being added to the “cold end” of the GISS record. We’ve got what we’ve got. But there IS data being added at the WARM end of the graph, which is in the present. Each year, this extends the time series of gridded data for which we have more than 50% of the gridded data over the entire time period. This in turn allows for older data with missing grid cells to be “in filled” that would otherwise be blank. But because the “in fill” is predicated upon new data that is BY DEFINITION warmer than the old data, the linear trend that calculates the previously empty grid cell at the beginning of the temperature record must calculate an increasingly colder temperature for that cell. To illustrate:
1, 2, 3, 4, 5, 6 (year)
N, 10, N, N, 10, N (deg)
In this series, we have 6 points in time, with 4 missing values. The series as a whole is not a candidate for “in filling”. However, if we ran GISS’s reporting program to look at only points 2, 3, 4 and 5, we would meet the 50% threshold, and report that grid cell, considered for those time periods ONLY, as 10, 10, 10, 10.
Now let’s add one more year of data, assuming newer years are warmer than older years:
1, 2, 3, 4, 5, 6, 7 (year)
N, 10, N, N, 10, N, 12 (deg)
We still cannot “in fill” the whole record because we have less than 50% of the data points. But what if we were to only look at the last four? GISS would “in fill” those by calculating a trend from the two data points that already exist, giving:
4, 5, 6, 7 (year)
9, 10, 11, 12 (deg)
Looks almost sensible, does it not? But wait! What if we looked only at years 2, 3, 4, 5 and applied the same technique? We’d get 10, 10, 10, 10! Year 4 would be a 10, not a 9! By adding one more year of data to the original 6, and looking only at the last 4, we can “cool” year 4 by one degree. Let’s expand the data series with some more years:
1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 (year)
N, 10, N, N, 10, N, 12, 13, 14, 14, 14. (deg)
This is where I think the “system” as I understand it starts to fall down. If one looks at years 5 through 11, “in filling” a value of 11 deg for year 6 makes a certain amount of sense. What gets broken is that we now have enough data to in fill the entire series. Looking at years 1 through 5 as raw data, one would most likely conclude that the first 5 years or so were “flat”. But base the in fill on a linear trend, driven by the recent years being added to the record, and “in fill” the whole thing, and one would get something like:
1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 (year)
9, 10, 9, 9, 10, 11, 12, 13, 14, 14, 14. (deg)
By all means, please correct me if my understanding of this whole snarled-up system of “in filling” is wrong. But if I do understand it correctly, I don’t see how temps that are higher today, being added each year to the temperature record, can have any other effect than to “cool the past”, as the number of grid cells with data grows in the warmer, recent periods and so increasingly biases the extrapolation to earlier time periods when we have less data. If we were adding data at the same rate to both ends of the scale, this would make some sense. But since we’re adding data at the WARM end (the present) only, cramming a linear trend through the data to “in fill” data at the “cold” end can only cool the past, and with no real justification for doing so.
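A minimal sketch of the concern, using the commenter’s toy numbers; the fill rule below (fit a straight line to whatever values exist and use it for the gaps) is a deliberately simplified stand-in, not the actual GISS procedure:

import numpy as np

years = np.arange(1, 12)
temps = np.array([np.nan, 10, np.nan, np.nan, 10, np.nan, 12, 13, 14, 14, 14], dtype=float)

# Toy in-fill rule: once enough of the series is present, fill the gaps from a
# linear trend fitted to the existing points.
have = ~np.isnan(temps)
slope, intercept = np.polyfit(years[have], temps[have], 1)
filled = np.where(have, temps, slope * years + intercept)

print("raw:   ", temps)
print("filled:", np.round(filled, 1))
# The two real early values (years 2 and 5) are both 10, suggesting a flat start,
# yet the trend-based fill pushes year 1 well below 10 -- the "cooling the past"
# effect the comment describes, driven entirely by the warm values added later.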

tom
July 17, 2012 11:35 am

Interesting and relevant paper in the current issue of Nature Climate Change on temperature trends over the past 2 millennia:
http://www.nature.com/nclimate/journal/vaop/ncurrent/pdf/nclimate1589.pdf
[Reply: Thank you, but that has already been discussed at WUWT here. -REP]

Victor Venema
July 17, 2012 11:35 am

Willis Eschenbach says:
“Victor, thanks for your comments. Nobody is saying that the inhomogeneities do not have a bias. What was said was that if you have a number of phenomena biasing the records (you point out several different ones above), you would expect the biases to cancel out, rather than reinforce. They may not do so, but that has to be the default assumption.”
As explained in a bit more detail above, radiation errors in early instruments explain much of the effect and have a clear bias. With a small number of causes for inhomogeneities, and these changes happening to most stations in a network, it would be very unlikely that they all cancel out.
Willis Eschenbach says:
“So for you to now claim that “homogenization improves the quality of climate data” is a total misrepresentation of the results. According to your study, some types of homogenization improved the temperature data, most didn’t improve the precipitation data, and people often misused the homogenization software.”
Thank you for the correction. Yes, the word “temperature” would have been more accurate than “climate”. Precipitation is a real problem, not only for homogenization but also for climate modeling. As the discussion here was about temperature I had not thought of that.
I think the statement that homogenization improves temperature data is a fair one-sentence summary. Scientists know which the good algorithms are, and these algorithms are used to homogenize the important data sets. For example, the method you care about most, the one used to homogenize the USHCN dataset, performed very well. Some more obscure or new methods produced problems, and some people homogenizing data for the first time made the data more inhomogeneous. In a scientific study you mention such detail; in a normal conversation you typically do not. For the details you can read the open-access study.

davidmhoffer
July 17, 2012 11:36 am

ugh… should have said 10.5 in my initial example above, not 11.5

Steven Mosher
July 17, 2012 11:40 am

Anthony
“REPLY: Thank you for your opinion, reading your blog, clearly you wish to prevent opinion and discourse, unless its yours. See what Willis says below. See also what statistician Steve McIntyre has to say about it. Be sure of yourself before making personal accusations – Anthony”
I’m not sure what Steve McIntyre would say about a paper he hasn’t read. I’ve spoken to him about this on occasion and he wasn’t very interested in the paper or the question of late. Kenneth Fritsche (from CA and Lucia’s) is the most up-to-date blog commenter on this topic that I know. Also, I’m not sure that appealing to authority is the best argument.
The benchmarking study was a blind test, a contest of sorts between various homogenization techniques. It’s just the kind of test with synthetic data that a statistician like Steve would approve of. Basically, a true series of temperatures was created and then various forms of bias and error were added to stations. As I recall they ended up with 8 different versions of the world. The various teams then ran their algorithms on the corrupt data and they were scored by their ability to come closest to the “truth” data. As Willis notes, some did better than others.
He fails to note the success of the PHA approach, which is really the question at hand because PHA is used on GHCN products.
Opinions and bias about homogenization are a good place to start a conversation. In the end the question is settled by some good old-fashioned testing. Create some ground-truth data. Inject error and bias (various forms) into that ground-truth data and test whether a method can find and correct the error or not. Everybody has opinions, even opinions about who is the best person to appeal to. In the end, run the test.
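A toy version of that kind of test, just to show the shape of the procedure (build truth, inject a break, score against truth); the break size, the naive detection rule and everything else here are invented and far cruder than the methods in the actual benchmark:

import numpy as np

rng = np.random.default_rng(2)
n_years = 100
truth = 0.005 * np.arange(n_years) + rng.normal(0, 0.2, n_years)   # the "true" series

# Corrupt it: measurement noise plus one break (pretend the algorithm does not
# know the break year or its size).
break_year, break_size = 60, 1.5
corrupt = truth + rng.normal(0, 0.2, n_years)
corrupt[break_year:] += break_size

# A naive "homogenization": find the largest mean shift and remove it.
scores = [abs(corrupt[:i].mean() - corrupt[i:].mean()) for i in range(10, n_years - 10)]
i_hat = int(np.argmax(scores)) + 10
adjusted = corrupt.copy()
adjusted[i_hat:] -= adjusted[i_hat:].mean() - adjusted[:i_hat].mean()

# Score both versions against the truth (as anomalies), as the benchmark does.
def rmse(x):
    return float(np.sqrt(np.mean(((x - x.mean()) - (truth - truth.mean())) ** 2)))

print("RMSE, corrupted data :", round(rmse(corrupt), 3))
print("RMSE, after adjusting:", round(rmse(adjusted), 3))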

Henry Clark
July 17, 2012 11:41 am

If averaging several years at a time and very careful to look at non-fudged data, northern hemisphere temperatures rose around 0.5 degrees Celsius from the 1900s through the 1930s, declined around 0.5 degrees Celsius from the 1940s through the 1960s, and rose around 0.5 degrees Celsius from the 1970s through the 1990s (having not risen any more by now — 2012 — since then, not after the 1998 El Nino and 1997 albedo and cloud cover shift).
As +0.5 – 0.5 + 0.5 = +0.5, that would be a net northern hemisphere temperature rise of around a half a degree Celsius over the 20th century.
Arctic temperatures rose a little more than the northern hemisphere average but in a similar pattern. Southern hemisphere temperatures and thus global average temperatures rose a little less.
So 0.4 degrees global net temperature rise over the century in the Steirou and Koutsoyiannis paper would fit, with the preceding corresponding to these graphs of non-fudged data:
http://earthobservatory.nasa.gov/Features/ArcticIce/Images/arctic_temp_trends_rt.gif
(with the arctic varying more than the global average but having a similar pattern)
and
http://wattsupwiththat.files.wordpress.com/2012/07/nclimate1589-f21.jpg?w=640&h=283
(from http://wattsupwiththat.com/2012/07/09/this-is-what-global-cooling-really-looks-like/ )
and, especially, for the average over the whole northern hemisphere, the original National Academy of Sciences data before Hansen fudged it:
http://stevengoddard.files.wordpress.com/2012/05/screenhunter_1137-may-12-16-36.jpg?w=640&h=317
(from and discussed more at http://stevengoddard.wordpress.com/2012/05/13/hansen-the-climate-chiropractor/ )
Global temperatures having close to +0.4 degrees rise 1900s->1930s, -0.4 degrees fall 1940s->1960s, and +0.4 degrees rise 1970s->2012, making
recent years be 0.4 degrees warmer than a bit more than a century ago fits a 60 year ocean cycle (AMO+PDO) on top of solar/GCR activity change as in http://www.appinsys.com/globalwarming/GW_Part6_SolarEvidence_files/image023.gif plus shorter ocean oscillations, leaving very little room at all for manmade global warming. The net human effect (net with cooling effects of aerosols) could be up to a few hundredths of a degree warming, but there is not justification to ascribe even multiple tenths of a degree to it when the pattern predominately fits natural trends (like the cooling 1940s-1960s — very major cooling in non-fudged data, the reason the global cooling scare existed, before Hansen hid the decline — was during a period of continuous rise in human emissions but during a cooling period in the far more dominant natural influences). Again, such fits even how sea level rise was no more (actually less) in the second half of the 20th century than its first half despite an order of magnitude rise in human emissions meanwhile, as http://www.agu.org/pubs/crossref/2007/2006GL028492.shtml implies.

Werner Brozek
July 17, 2012 11:42 am

John Finn says:
July 17, 2012 at 10:00 am
Since UAH satellite temperatures show an increase of ~0.4 deg over the past 30 years, then ALL warming over the past century must have been since the 1970s
mikef2 says:
July 17, 2012 at 11:17 am
Funny thing though, is we have those same surface records showing half a degree swings in the early and mid part of last century….as you say, without any CO2 input, so natural variation can swing 0.4C.

I agree with Mike. They say a picture is worth a thousand words. Here is the ‘picture’.
http://www.woodfortrees.org/plot/hadcrut3gl/from:1900/plot/hadcrut3gl/from:1912.33/to:1942.33/trend/plot/hadcrut3gl/from:1982.25/to:2013/trend

D. J. Hawkins
July 17, 2012 11:42 am

Steven Mosher says:
July 17, 2012 at 11:12 am
highflight56433 says:
July 17, 2012 at 10:46 am
Seems to me that a temp is a temp. Why mess with it? If one has to homogenize, or adjust, then the site should be moved to an area that does not need such.
###################
That’s not the problem…

That was a nice, straightforward example. Now…
Do we warm the past, or cool the present? By how much? Why?
What do you do about creeping UHI? How do you detect it?
I have seen examples here at WUWT and elsewhere where several decades worth of data are suddenly shifted by 0.5C. Exactly. For every month. Does this seem reasonable to you, or shouldn’t there be some variability in the corrections?
Perhaps it exists, but I’ve never seen any comments regarding adjusted data as to why it was adjusted. And they keep adjusting the past! You’d think once would be enough. And every re-adjustment of the past seems to make it colder. Perhaps my impression is wrong, but it seems to be widely held. A consensus, if you will ;-). Is there a genuine reason for this apparent adjustment trend?

Steven Mosher
July 17, 2012 11:47 am

david
“You cannot make this statement unless you know to some degree of precision what the time constant is, which you do not. Further, even if you knew the time constant for one particular forcing, you must also know the time constants for all other forcings that are active to the point of still being significant, what their sign is, and how far along each is in terms of a total of 5 time constants. There are WAY too many factors all in place at the same time, we know the time constant of pretty much none of them, let alone which ones are at the beginning of the cycle and which ones are at the end.”
Sure one can make this statement without a precise knowledge of the time constant. You need to do more reading on how these can be estimated from data. You don’t need precision and we don’t have precision, but the ECR is likely to be 1-2x of the TCR. Fingers crossed, there is a really good paper on this; hoping it gets submitted by the end of the month. Until then you get to figure out the math on your own.

Steven Mosher
July 17, 2012 11:50 am

Nope, Anthony, I didn’t miss that. I’m talking about the paper on testing homogenization.
See Victor’s reference.

Victor Venema
July 17, 2012 11:50 am

Steven Mosher says:
“Then of course it would make sense to check the report and see how PHA did? Because they are looking at GHCN v2 here, ”
The pairwise homogenization algorithm used by NOAA to homogenize USHCN version 2 is called “USHCN main” in the article. It performed well. It has a very low False Alarm Rate (FAR). As there is always a trade-off between FAR and detection power, the algorithm could probably have been more accurate overall. And the pairwise algorithm has a fixed correction for every month of the year. Inhomogeneities can, however, also have an annual cycle. For example, in the case of a radiation error, the jump will be larger in summer than in winter. With monthly corrections USHCN would have performed better, especially as the size of the annual cycle of the inhomogeneities in the artificial data used in this study was found to be a little too large.

Editor
July 17, 2012 11:51 am

Steven Mosher says:
July 17, 2012 at 11:17 am

willis:

“of methods for homogenization, and a total of 15 such methods were compared. Some of these improved the quality of the data, and some did not. ”

Then of course it would make sense to check the report and see how PHA did? Because they are looking at GHCN v2 here, a product that isn’t used by anyone, a product that has been replaced by version 3. It would make sense to look at how PHA performed rather than SNHT which has known issues.

No, it would make sense for you to check the Venema report and see how PHA did. Why? Because when I look at the report, I see no reference to PHA anywhere in it at all.
Also, your point about GHCN V2 is curious. It was superseded by V3, it is true … but it was used for years, it was claimed to be a valid method, and folks say (I haven’t checked it) that V3 results are not much different. So errors in V2 seem like they are relevant to the discussion.
Not only that, but the Venema paper didn’t analyze the GHCN methods (either V2 or V3) at all, there’s not one mention of GHCN in the paper. Go figure …
w.

Steven Mosher
July 17, 2012 11:55 am

This paper Anthony
“I’m not sure what Steve McIntyre would say about a paper he hasn’t read. I’ve spoken to him about this on occasion and he wasn’t very interested in the paper or the question of late.”
http://www.clim-past.net/8/89/2012/cp-8-89-2012.html
And as he notes in his post he’s not interested in looking into it.
REPLY: Thanks for clarifying what you said, always good to cite – Anthony

davidmhoffer
July 17, 2012 12:01 pm

Steven Mosher;
Sure one can make this statement without a precise knowledge of the time constant. You need to do more reading on how these can be estimated from data. You don’t need precision and we don’t have precision, but the ECR is likely to be 1-2x of the TCR. Fingers crossed, there is a really good paper on this; hoping it gets submitted by the end of the month. Until then you get to figure out the math on your own.
>>>>>>>>>>>>
I’ll be very interested in the paper when it comes out. In the meantime, I submit to you that I am capable of calculating a time constant if, and ONLY if, I have sufficient data on ALL the processes involved and EACH of their time constants and EACH of their progress through 5 time constants. I think that’s pretty problematic. We’ve got dozens, perhaps hundreds or thousands, of physical processes all going on at the same time, and many causing feedbacks, both positive and negative, to each other. Isolating ONE factor (forcing from CO2 doubling for example) from all the others requires that I know what all the others are, what their time constants are, and when they started, and how they relate to each other from a feedback perspective. This makes the hunt for dark matter and the Higgs boson look like Grade 1 arithmetic. Not knowing what ALL the forcings are, what the time constant for EACH is, and WHEN each began makes the calculation of any single forcing from the data a fool’s errand. IMHO.

Steven Mosher
July 17, 2012 12:12 pm

Victor Venema says:
July 17, 2012 at 11:50 am
Steven Mosher says:
“Then of course it would make sense to check the report and see how PHA did? Because they are looking at GHCN v2 here, ”
The pairwise homogenization algorithm used by NOAA to homogenize USHCN version 2 is called “USHCN main” in the article. It performed well. It has a very low False Alarm Rate (FAR). As there is always a trade-off between FAR and detection power, the algorithm could probably have been more accurate overall. And the pairwise algorithm has a fixed correction for every month of the year. Inhomogeneities can, however, also have an annual cycle. For example, in the case of a radiation error, the jump will be larger in summer than in winter. With monthly corrections USHCN would have performed better, especially as the size of the annual cycle of the inhomogeneities in the artificial data used in this study was found to be a little too large.
################
Thanks Victor, I’m pretty well aware of how PHA did, but thanks for explaining to others. Most won’t take time to read the article or consider the results. Those who do take the time to skim the article for a word (like PHA) will not find it. But of course if they were current on the literature they would know that PHA is USHCN main. (hehe. told me everything I needed to know)

Steven Mosher
July 17, 2012 12:16 pm

REPLY: Thanks for clarifying what you said, always good to cite – Anthony
Yes, Steve’s position is pretty clear. He thinks the best approach is to start with the best stations and work outward, rather than using all the data and trying to correct or throw out the worst. He is, as he says, not too interested in looking at these issues, no matter how many times I ask.

Steven Mosher
July 17, 2012 12:32 pm

here.
When Steve first posted on Menne, Zeke suggested that he look at this:
https://ams.confex.com/ams/19Applied/flvgateway.cgi/id/18288?recordingid=18288
Not sure if he did, but it’s easy to watch.

sunshinehours1
July 17, 2012 12:39 pm

Mosher: “Look if people want to twist and turn the numbers to make this century as cold as the LIA”
NOAA has 30 states where the warmest month is in the 1890s. 11 of those were May.
Why should I believe you that the temperature was coming up from a lower value before 1895?
Is it possible that it was warmer for some period of time before 1895? If the US temperature record started in 1890 or 1880, would there be more records from those decades?
http://sunshinehours.wordpress.com/2012/07/11/noaa-warmest-months-for-each-state-june-2012-edition/

Billy Liar
July 17, 2012 12:45 pm

Steven Mosher says:
July 17, 2012 at 11:12 am
Why do it the hard way? If the Mount Molehill weather station moves to the bottom of the hill and doesn’t change its name, instead of trying to homogenize the hell out of it, the sensible person would simply end the Mount Molehill record and start the Mount Molehill – New Place record.
What’s not to like?

Nigel Harris
July 17, 2012 12:48 pm

John Day,
As I understand it, the kind of event you describe – a low temperature on a single day because of a localized thundershower – is not the sort of thing anyone attempts to “homogenize” out of the record. It is clearly a legitimate weather event.
What I believe people are looking for are instances where, for example, someone plants a large area of forest surrounding the weather station, where previously it stood in open fields. So the microclimate of the station changes, not just transiently but permanently. But the surrounding climate has not changed. So to take the changed temperatures as evidence of changed climate in that region would be wrong.
This is why most climate analyses look for evidence of changes in siting, instrumentation and microclimate by looking for step changes (not transient spikes) in the records from one site that are not matched by similar step changes in records from other nearby sites. They then attempt to correct for those changes, or throw the data out altogether.
Again, as I understand it (which isn’t very far) the BEST approach had the great merit of not attempting to “homogenize” any data series by adjusting the numbers. They still looked for suspicious step changes in data series, but whenever they found them, they simply split the historical series into two – one before and one after – and treated them as though they were completely separate weather stations.
Other people know a lot more about this than I do and I’m sure they will correct me if I’m wrong. I’m also sure there are some good and easily understood introductions to this area out there on the web, and I may now go and try to find one to refresh and expand my knowledge in this area.
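A toy sketch of that “scalpel” idea in Python (not BEST’s actual code, and the break year is assumed known here, whereas BEST detects breaks statistically):

import numpy as np

rng = np.random.default_rng(5)
years = np.arange(1900, 2001)
series = 9.0 + 0.005 * (years - 1900) + rng.normal(0, 0.3, years.size)
series[years >= 1960] += 0.8       # suspicious step change, e.g. an undocumented move

# "Scalpel": instead of adjusting any numbers, cut the record at the suspected
# break and treat the two pieces as if they were separate stations.
pieces = [(years[years < 1960], series[years < 1960]),
          (years[years >= 1960], series[years >= 1960])]

# Each piece contributes its own trend; the step itself never enters the analysis.
for yrs, vals in pieces:
    slope_per_century = np.polyfit(yrs, vals, 1)[0] * 100
    print(f"{yrs[0]}-{yrs[-1]}: trend {slope_per_century:.2f} C/century")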

Editor
July 17, 2012 12:52 pm

Victor Venema says:
July 17, 2012 at 11:50 am

Steven Mosher says:

“Then of course it would make sense to check the report and see how PHA did? Because they are looking at GHCN v2 here, ”

The pairwise homogenization algorithm used by NOAA to homogenize USHCN version 2 is called “USHCN main” in the article. It performed well. It has a very low False Alarm Rate (FAR). As there is always a trade-off between FAR and detection power, the algorithm could probably have been more accurate overall. And the pairwise algorithm has a fixed correction for every month of the year. Inhomogeneities can, however, also have an annual cycle. For example, in the case of a radiation error, the jump will be larger in summer than in winter. With monthly corrections USHCN would have performed better, especially as the size of the annual cycle of the inhomogeneities in the artificial data used in this study was found to be a little too large.

Victor, thank you for the clarifications. I’m not sure what you mean when you say PHA performed “well”, since 7 out of the 12 other methods outperformed it per Figures 2 and 3, which would mean to me that it performed about average …
Also, it’s not clear to me that your study looked at the same thing as the Koutsoiannis study. That is to say, if there is a mix of good data and bad data, did your study consider whether the bad data “pollutes” the good data? It seems to me that your study assumed that all data had inhomogeneities, and then looked to see who corrected them better, but I could be wrong.
Please be clear that I have no theoretical problem with correcting inhomogeneities. Nor am I of the opinion that there are huge biases in the recent ground station records, because they agree within error with the satellite records, although that says nothing about the earlier part of the ground station records … and the existing difference between e.g. GISS and UAH MSU is still about 0.2°C per century, about a third of the century-long trend, so while it is small it is not zero.

As always, however, the devil is in the details.
w.

JR
July 17, 2012 12:57 pm

Re: Billy Liar
There is no such thing as a Mount Molehill. I’ve repeatedly asked for just one single example of Mount Molehill and all I ever hear is silence.

otsar
July 17, 2012 12:58 pm

The paper seems to suggest that the homogenization contains homogenization.

Spence_UK
July 17, 2012 1:00 pm

@Steven Mosher
I’ve always been a fan of Richard Muller; he strikes me as being open and amenable to critical viewpoints. To me, this is more important than necessarily agreeing with him on all detail points.
If you have his ear, the lesson I would take from this paper would be to check his methods against the presence of long-term persistence. You could probably word it better than me but the justification would be as follows:
1. There is dispute among climate scientists about the importance of long term persistence in temperature time series, but most climate scientists who have tested for it have tended to agree that it is present (or at least, failed to reject its presence). E.g. Koutsoyiannis, Cohn, Lins, Montanari, von Storch, Rybski, Halley etc. etc.
2. This paper shows that statistical methods which intuitively work reasonably well on short-term persistent time series can fall apart when faced with data containing long-term persistence.
3. As such, it would be prudent to test any methods applied to temperature series to check to see if the methods are effective in the presence of long-term persistence.
This may mean that there are useful lessons that can be learned from this study, even if the results are not directly applicable.
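One way to check point 2 numerically: generate series that contain no breaks at all, with and without persistence, and count how often a simple single-shift statistic fires anyway. The persistence model below (a sum of AR(1) processes with increasingly long time scales) is only a crude stand-in for true long-term persistence, and the thresholds are arbitrary:

import numpy as np

rng = np.random.default_rng(3)
n, trials = 100, 300

def white(n):
    return rng.normal(0, 1, n)

def persistent(n):
    # Crude stand-in for long-term persistence: sum of AR(1) processes with
    # short, medium and very long time scales, rescaled to unit variance.
    out = np.zeros(n)
    for phi in (0.2, 0.9, 0.99):
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = phi * x[t - 1] + rng.normal(0, 1)
        out += x / np.std(x)
    return out / np.std(out)

def max_shift(series):
    # Largest difference between the means of the two sides of any split point:
    # a rough proxy for the statistic a single-shift test reacts to.
    return max(abs(series[:i].mean() - series[i:].mean()) for i in range(10, n - 10))

# Calibrate a 5% threshold on independent (white) noise, then apply it to
# break-free persistent series and see how often it falsely detects a shift.
threshold = np.quantile([max_shift(white(n)) for _ in range(trials)], 0.95)
false_alarms = np.mean([max_shift(persistent(n)) > threshold for _ in range(trials)])
print(f"False alarm rate on persistent series (5% nominal): {false_alarms:.0%}")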

Mindert Eiting
July 17, 2012 1:12 pm

Victor Venema said; “This is because temperatures in the past were too high. In the 19th century many measurement were performed at North facing walls, especially in summer the rising of setting sun would still burn on these instruments. Consequently these values were too high and homogenization makes the[m] lower again.”
Suppose a psychologist had given an IQ test to a large group of children and found that those without a certain education scored on average 5 points less. By ‘homogenisation’ he would give those children 5 bonus points. In practice these differences are handled in statistical models and not by altering the data. To an outsider like me, data handling in climate science looks like an incredible mess. Moreover, outsiders should also believe that this mess produces a perfect, unbiased result. Perhaps one day I will start to believe in miracles.

Spence_UK
July 17, 2012 1:28 pm

Hmm, of course, when I say “paper” I mean “presentation”… we should aim to be accurate in these details.

Editor
July 17, 2012 1:46 pm

Steven Mosher says:
July 17, 2012 at 12:12 pm

Those who do take the time to skim the article for a word (like PHA) will not find it. But of course if they were current on the literature they would know that PHA is USHCN main. (hehe. told me everything I needed to know)

Hehe? Yeah, that’s hilarious. If you knew it was referred to as USHCN main in the paper, then referring to it as PHA can only be described as sneaky, malicious, and underhanded. Some of the readership here is not totally current on every obscure branch of climate science; no surprise, it is a very broad field and no one can stay totally current on everything, myself included … so your response is to laugh at us?
So your own knowledge is so encyclopedic and it covers every part of climate science so well that you can afford to heap scorn on others who don’t know the intricacies of some particular topic? Really?
I and others are here to learn, Steven, so abusing and laughing at people who may not know some minor fact that you know is not helpful. It just makes you look arrogant and uncaring. I doubt greatly that you are either, but you are sure putting up a good imitation …
w.

Tonyb
Editor
July 17, 2012 1:46 pm

Can anyone confirm whether the BEST study has been peer reviewed and published in a science journal yet?
Tonyb

cd_uk
July 17, 2012 1:57 pm

Victor
The point about the UHI effect is that this is a “process” that introduces bias – not experimental error that homogenisation is meant to correct for(?). The adjustments being criticised are for experimental error; they are not for a progressive “thermal pollutant” that moves in one direction. If you’re correcting for the UHI, then one would expect that most would be lowered, not raised.
As for your link to your page on homogenisation, thanks for that. It only refers to optimisation; many systems of linear combinations (e.g. some type of weighted mean) are derived via optimisation, where the process is to minimise the error between the estimated and true value. This would be a type of averaging. But I can’t say one way or another given your page. But thanks anyway.

cd_uk
July 17, 2012 2:05 pm

Steven Mosher
I know you’ve had a lot of quick fire posts here to answer but if your could spare a second:
As a member of the BEST team(?), one of the many things that you often hear from Prof. Muller is the use of Kriging to grid their data (correct?). I have asked this of others but never got an answer. Why have the Kriging variance maps never been released? Surely these are of huge importance – then again, maybe not. For example, if for each year 50+% of the gridded points (or blocks?) have kriging variances equal to the error of the set (beyond the range of spatial correlation), one would have to wonder if there is much point in continuing to put together a time series where the spread in values is less than/similar to the dominant kriging variances for each year in that time series.

cd_uk
July 17, 2012 2:08 pm

Sorry Steven by:
(beyond the range of spatial correlation)
By this I mean:
That most of the gridded values lie at distances from control points that are greater than the range of the variogram models.
Sorry not very clear.

ColdinOz
July 17, 2012 2:16 pm

“Removal of outliers”…with the ratio of urban to rural stations, and the commonly observed temperature differential leads one to assume that readings from rural stations are more likely to fall into the outlier category. Perhaps giving an even greater bias.

July 17, 2012 2:29 pm

This is great to see.
I’ve been disappointed that the website that used to let you see how the number
of stations dropped off dramatically at the time the temperature went up is no longer
accessible.
climate.geog.udel.edu/~climate/html-pages/Ghcn2_images/air_loc.mpg
I wonder why they do not want us to see it anymore.

wayne
July 17, 2012 2:33 pm

SNHT — standard normal homogeneity test
PHA — progressive hedging algorithm or pairwise homogenization algorithm
Assuming one of these acronyms is what Mosher is tossing about and saying that GHCN uses to adjust the temperatures. Which one?

phlogiston
July 17, 2012 2:42 pm

Isn’t this similar to what E.M. Smith was saying in his post a couple of weeks back?
http://wattsupwiththat.com/2012/06/22/comparing-ghcn-v1-and-v3/

vukcevic
July 17, 2012 2:47 pm

Willis Eschenbach
@
Steven Mosher
I and others are here to learn, Steven, so abusing and laughing at people who may not know some minor fact that you know is not helpful.
Here is a major fact that every climate scientist should get to know.
300 year temperature record of zero trend.
300 years of regular oscillations
No CO2 effect
No UHI
Just simply a natural oscillation.
http://www.vukcevic.talktalk.net/GSO-June.htm

Nick Stokes
July 17, 2012 3:07 pm

I can’t see the justification for using a small subset of stations – it’s easy enough to do them all. I did a comparison here of the period from 1901-2005. I chose those years because the trends were cited in the AR4 report. Using the unadjusted GHCN v2 data (no homogenization) I got 0.66 °C per century. The corresponding trends cited in AR4 for CRU, NCDC and GISS, all using homogenization, were, respectively, 0.71, 0.64 and 0.60 °C per century. No big difference there.
On GHCN V2, the effect of homogenization on trends, for all stations, was discussed a few years ago. Here is the histogram of adjusted 70-year trends, and here is the unadjusted.
Here is the histogram of trend differences created by adjustment over the 70 years. It’s not far from symmetrical. The mean is 0.175 °C/century.
The fact that homogenization does not have zero effect on the mean is not a “homogenization error”. It’s the result. No one guaranteed that non-climate effects have to exactly balance out.
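For anyone who wants to reproduce that kind of comparison, the calculation is essentially the following; the arrays are invented stand-ins, since pulling the actual GHCN files is beyond a blog comment:

import numpy as np

rng = np.random.default_rng(4)
n_stations, n_years = 200, 70
years = np.arange(n_years)

# Invented stand-ins for GHCN raw and adjusted annual means, shape (stations, years).
raw = 10 + 0.006 * years + rng.normal(0, 0.3, (n_stations, n_years))
adjusted = raw + rng.normal(0.001, 0.002, (n_stations, 1)) * years   # per-station tweak

def trend(series):
    return np.polyfit(years, series, 1)[0] * 100.0      # degrees C per century

diff = np.array([trend(a) - trend(r) for a, r in zip(adjusted, raw)])
print("mean trend change from adjustment (C/century):", round(float(diff.mean()), 3))
print("share of stations with increased trend:       ", round(float((diff > 0).mean()), 2))
# np.histogram(diff, bins=20) gives the sort of histogram of trend differences
# linked in the comment above.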

John Silver
July 17, 2012 3:08 pm

Steven Mosher says:
July 17, 2012 at 11:12 am
Situation: We have a station named Mount Molehill. It is located at 3000 meters above sea level. It records nice cool temperatures from 1900 to 1980. Then in 1981 they decide to relocate the station to the base of Mount Molehill, 5 km away. Mount Molehill suddenly becomes much warmer.
But won’t they rename the station? Nope! They may very well keep the station name the same.
But won’t the latitude and longitude change? Nope. It depends entirely on the agency recording the position; until recently many only reported to 1/10 of a degree (10 km). So what you get, IF YOU ARE LUCKY, is a piece of metadata that says in 1981 the altitude of the station changed.
Now, my friends, how do you handle such a record? A station at 3000 meters is moved to 0 meters and suddenly gets warmer? That’s some raw data, folks. That’s some unadjusted data.
Anybody want to argue that it should be used that way??
—————————————————
The location is the station. By definition, you can not move a location.
You have closed a station and ended a series. You have opened another station and started another series.
Homogenization does not apply.

John Finn
July 17, 2012 3:08 pm

sunshinehours1 says:
July 17, 2012 at 10:34 am
John Finn “Note Satellite readings are not contaminated by UHI.”

Are we sure? NASA has no trouble finding huge amounts of UHI from their satellites.
Read your link. The satellites in this case are actually measuring the land surface temps. UAH satellite readings are measurements of temperatures in the troposphere. The latter are unaffected by UHI.

DocMartyn
July 17, 2012 3:12 pm

“Steven Mosher
You just calculated the transient climate response. ( TCR) at 1.6.
the ECR ( equillibrium Climate response) is anywhere from 1.5 to 2x higher.
so if you calculate a TCR ( what you did) then you better multiply by 2…
Giving you 3.2 for a climate sensitivity. (ECR)”
Because the Earth does not rotate, nor does it orbit the sun.

Spence_UK
July 17, 2012 3:12 pm

I can’t see the justification for using a small subset of stations

Because the stitching together of stations introduces a whole other raft of problems which were not addressed in the paper.
They have one complete set of station data end to end, no sample changes, no stitching. One continuous record with consistent sampling.
So, they are then narrowly presenting the consequence of that one issue, and not conflating it with a hundred and one other side issues simultaneously.

July 17, 2012 3:26 pm

Nick Stokes says:

I can’t see the justification for using a small subset of stations – it’s easy enough to do them all.

And only six of the stations in the diagram are north of 60 degrees latitude, where most of the warming is happening.

John Finn
July 17, 2012 3:36 pm

Werner Brozek says:
July 17, 2012 at 11:42 am
John Finn says:
July 17, 2012 at 10:00 am
Since UAH satellite temperatures show an increase of ~0.4 deg over the past 30 years then ALL warming over the past century must have been since the 1970
mikef2 says:
July 17, 2012 at 11:17 am
Funny thing though, is we have those same surface records showing half a degree swings in the early and mid part of last century….as you say, without any CO2 input, so natural variation can swing 0.4C.
I agree with Mike. They say a picture is worth a thousand words. Here is the ‘picture’.
http://www.woodfortrees.org/plot/hadcrut3gl/from:1900/plot/hadcrut3gl/from:1912.33/to:1942.33/trend/plot/hadcrut3gl

I think you may be missing the point. The study discussed concludes that the amount of warming over the past century is 0.42 deg. However, the UAH satellite record tells us that LT has warmed ~0.14 deg per decade. Coincidentally, this gives a total warming of ~0.42 over 30 years. In other words there was little or no warming in the period between 1900 and 1980. There was, therefore no continuation of the warming from the LIA. Higher solar activity in the middle of the 20th century had no discernible effect on temperature compared to the period of lower activity in 1900 or thereabouts.
I note you’ve chosen to use the Hadley land surface record (1912-1942) to illustrate your point, but surely this data is contaminated. The trend in the data is, therefore, presumably an artifact of UHI or similar.
In summary:
The warming over the past century is 0.42 deg (according to the study)
According to UAH the warming since ~1980 is ~0.42 deg.
CONCLUSION: There was no warming between 1900 and 1980, i.e. the trend was flat.

3x2
July 17, 2012 4:24 pm

rgbatduke says:
July 17, 2012 at 5:57 am
Makes me feel all warm and fuzzy. Recall that I recently pointed out that one perfectly reasonable interpretation of the recent “run” of 13 months in the top 1/3 of all months in the record, warming trend or not, is that the data is biased!

There are other, non mathematical, explanations for “record highs” …

“Here are some speculations on correcting SSTs to partly explain the 1940s warming blip.
If you look at the attached plot you will see that the land also shows the 1940s blip (as I’m sure you know).
So, if we could reduce the ocean blip by, say, 0.15 degC, then this would be significant for the global mean — but we’d still have to explain the land blip.”

(Climategate I – Wigley to Jones, Subject – 1940s )
I always come back to this exchange when considering the various temperature series (and the integrity of “scientists” generally).
At no point during the [e-mail] exchange does any recipient suggest that altering [inconvenient] data is not something a reputable “Scientist” should be involved in. They are simply worried about being caught out by not altering other data sets.
Getting rid of “the blip” would of course silence those “deniers” [2012 is the hottest year in some state since records began (and since we reduced the past by 0.15 deg C)] but I’m sure that was never a consideration. Odd, though, that the period (30s/40s) keeps on getting flatter with each iteration. Why, it’s almost like the MWP and all the other “inconvenient messages” that just disappear.
“Dr. Richard Muller and BEST, please take note prior to publishing your upcoming paper”
Muller already has what he needed – mounds of “climate cash” flowing into Berkeley (which had been missing out big time when compared to its rivals). I’m sure there will soon be a Berkeley chair with his arse print on it.

July 17, 2012 4:27 pm

Steven Mosher wrote:
“Of course I’ll bring this up today at our BEST meeting. Dr. K is an old favorite. That said, since we dont use homogenized data but use raw data instead I’m not sure what the point will be”
That’s a lie, a big lie, and you make terrible PR for BEST, Mr. Mosher.
http://www.ecowho.com/foia.php?file=4427.txt&search=nordli
“Dear Phil Jones,
The homogenisation of the Bergen temp. series is now completed since
1876. Some adjustments are applied to the data.
Our intention have not been to remove urban heat island effects. However,
these are not too large. Compared to a group of rural stations (two
lighthouse stations also included), the series seems to be biased about 0.2
deg. in the time interval 1876 – 1995.
Before 1876 Birkeland’s homogenisation of the series is maintained.
The whole series follows in a separated file in the “standard NACD-format”.
Best regards
Oyvind Nordli”
Phil Jones chose to adjust the UHI-effect on this homogenized series for Bergen the wrong way. You can check all this against your beloved child BEST. I’m sure you will tell us about what you find…
When you’re done with not doing that, it will be my pleasure to give you the raw data for all the different series of Bergen, Norway.

Ben U.
July 17, 2012 4:28 pm

Alvin W says:
July 17, 2012 at 2:29 pm
I’ve been disappointed that the website that used to let you see how the number of stations dropped off dramatically at the time the temperature went up is no longer accessible.

The URL that you provided will work if you replace the hyphen with an underscore, like so:
climate.geog.udel.edu/~climate/html_pages/Ghcn2_images/air_loc.mpg
That’s also how it’s recorded in 2005 at archive.org

July 17, 2012 4:31 pm

Steven Mosher:
“Situation: When have station named Mount Molehill. It is located at 3000 meters above sea level. It records nice cool temperatures from 1900 to 1980. Then in 1981 they decide to relocate the station to the base of Mount Molehill 5 km away. Mount Molehill suddenly because much warmer.
But won’t they rename the station? Nope! they may very well keep the station name the same.”
Isn’t the obvious solution to this, to treat these as two entirely different stations, with their own unique and non-overlapping temperatures? That way, one can look at the anomalies in each station as a way of describing temperature trends, rather than accepting the second, and “adjusting” the first, to create one seamless set of station data. I mean, the truth is that they are two separate stations, not one. So why not simply use their data that way, and thus eliminate any confusion.
And why not do this every time the station is significantly changed, using new instruments, etc.? It seems a lot more honest and requiring less changing of actual data.

July 17, 2012 4:36 pm

One point that seems to be missed here is that in their presentation the authors are not claiming that the true temperature rise is only 0.4C. They seem to be saying that’s only what the raw, unadjusted data shows. They are saying that the true temp rise is somewhere between that, at the low end, and the 0.7-0.8C rise being claimed after adjustments. The implication is that these adjustments have to be scrupulously reviewed and justified, and after that review a new record including all appropriate adjustments should be generated. That’s of course what BEST was supposed to do. Perhaps these guys can assemble a team to accomplish that task. Hope they can find funding for the project.

Trevor
July 17, 2012 4:46 pm

I am amazed to see the “No global warming” crowd “warming” to the idea of global warming; this report is sort of like being half pregnant! Congrats for stepping into the real world. I am afraid the ground truth is leaving you all behind.

July 17, 2012 4:51 pm

Nick: “I got 0.66 ° per century.”
F or C?
I’ve been looking at the Washington State raw daily data. And drop everything with a Quality flag = blank.
What filtering did you do?
I get 0.0032F / decade from 1895 to 2011. NOAA gets .05F / decade for the same period.
The interesting thing is that the monthly means (in °F per decade) are all over the place:
January 0.294
February 0.284
March 0.065
April -0.236
May -0.135
June -0.156
July -0.115
August -0.013
September 0.206
October -0.09
November -0.036
December -0.029
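A minimal sketch of the kind of per-month trend calculation described above, assuming a daily CSV with 'date', 'tavg' and 'qflag' columns (hypothetical column names, not the actual GHCN-Daily schema):

```python
# Minimal sketch: per-calendar-month temperature trends from daily raw data.
# Column names ('date', 'tavg', 'qflag') are hypothetical.
import numpy as np
import pandas as pd

def monthly_trends(csv_path):
    df = pd.read_csv(csv_path, parse_dates=["date"])
    df = df[df["qflag"].isna()]            # keep only rows with a blank quality flag
    df["year"] = df["date"].dt.year
    df["month"] = df["date"].dt.month
    monthly = df.groupby(["year", "month"])["tavg"].mean().reset_index()

    trends = {}
    for month, grp in monthly.groupby("month"):
        slope = np.polyfit(grp["year"], grp["tavg"], 1)[0]   # degrees per year
        trends[month] = slope * 10.0                         # degrees per decade
    return trends
```

For what it’s worth, the twelve values listed above average to about 0.003, consistent with the 0.0032 F/decade figure quoted, which is why they are read here as °F per decade.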

July 17, 2012 4:55 pm

Weather stations can only report on the conditions of the local micro-climate, which may change gradually or suddenly over time. We have no control over the siting, observation practice, accuracy, recording, conversion, digitising, quality control etc. in the past especially. Nor can we have great faith in such matters today. To rely on even “raw” data (let alone homogenised) to create a climate record is pie in the sky. In Australia, we have demonstrated that the temperature record is so abysmally poor that it should not be used at all. I have shown that adjustments to annual temperature create a 40% warming bias in the national record. Yet the Bureau of Meteorology insists that adjustments are “neutral”. 23 of the 104 sites in the new ACORN dataset have no neighbours within 100 km; some have no neighbours within 400 km. Alice Springs is one. It contributes about 7% of the national signal on its own because of its remoteness, and has had a huge warming adjustment. Whether or not this article is properly peer reviewed, it rings true to me.
Ken

July 17, 2012 4:56 pm

Re: Ben U 4:28 PM
Thank you so much.

Bill Illis
July 17, 2012 5:04 pm

I’ve been waiting for a good opportunity to show this chart.
It is the temperature trend for the US from UAH lower troposphere and from USHCN V2 since 1979.
First thing to note is that they are extremely similar. I don’t know if we would have expected this, but it looks like the UAH satellite record is accurate enough even on the small scale of the US.
But most importantly, the USHCN V2 trend is 27.0% higher than UAH. It is really supposed to be the other way around according to the theory.
The lower troposphere is supposed to be warming at a faster rate than the surface, particularly in the Tropics where it is supposed to be 27.3% higher according to the climate models, but also extending to mid-latitudes like the US.
So, there is something like a 27% to a 50% error in USHCN V2 since 1979 according to the UAH lower troposphere measurements.
http://img339.imageshack.us/img339/4647/usuahvsushcnv2june2012.png

Rob Dawg
July 17, 2012 5:06 pm

Every time we revisit this subject I just imagine the dedicated person in the 1930s trudging out to the station every day and recording the temperature, squinting to get that last tenth of a degree and wondering what they would think if they knew 80 years later some armchair climatologist was going to “adjust” their reading up three degrees.

July 17, 2012 5:15 pm

John Finn: ” CONCLUSION: There was no warming between 1900 and 1980, i.e. the trend was flat.”
Not flat. Up and down.
1900 -0.225
1944 0.121
1976 -0.255
http://www.cru.uea.ac.uk/cru/data/temperature/hadcrut3gl.txt

scarletmacaw
July 17, 2012 5:16 pm

John Finn says:
July 17, 2012 at 3:36 pm
The warming over the past century is 0.42 deg (according to the study)
According to UAH the warming since ~1980 is ~0.42 deg.
CONCLUSION: There was no warming between 1900 and 1980, i.e. the trend was flat.

As brokenyogi pointed out, the study concluded that the warming is between 0.4 and 0.7.
If the warming from adjustment was 100% in error (i.e. a correct adjustment would have resulted in zero additional warming) then the conclusion is that there has been warming since 1980, and there was similar warming from 1900 to 1940, negated by cooling from 1940 to 1980. Just drawing a trend line through 1900-1980 glosses over the up and down nature of the temperature history.

July 17, 2012 5:36 pm

Reblogged this on The GOLDEN RULE and commented:
More climate science, as distinct from climate pseudo-science!

Werner Brozek
July 17, 2012 5:42 pm

John Finn says:
July 17, 2012 at 3:36 pm
I note you’ve chosen to use the Hadley land surface record (1912-1942) to illustrate your point, but surely this data is contaminated….CONCLUSION: There was no warming between 1900 and 1980, i.e. the trend was flat.

Actually Hadcrut3 is both land and water. Of course I could not use the satellite data since it does not go way back. However, the trend according to Hadcrut3 is 0.00528699 °C per year between 1900 and 1980. See:
http://www.woodfortrees.org/plot/hadcrut3gl/from:1900/to:1980/plot/hadcrut3gl/from:1900/to:1980/trend
So we will have to come to some other conclusion.

Editor
July 17, 2012 6:26 pm

So lessee, 25%-50% due to solar variation, 25-50% due to ENSO/PDO/NAO/AMO variation, and 50% due to homogenization effects. So that comes to 100-150% of warming reported, so therefore its actually cooling, statistically…

Editor
July 17, 2012 6:27 pm

Soooooo lemme get this right: 25-50% of climate change is due to solar variation, 25-50% is due to ENSO/AMO/PDO/NAO variations, and now 50% is due to homogenization effects. That means 100-150% of warming is now accounted for, and its not CO2, which means it must be cooling.

July 17, 2012 6:28 pm

New paper blames about half of global warming on weather station data homogenization

As I read it, this heading is not quite accurate. The abstract states:

tend to indicate that the global temperature increase during the last century is between 0.4C and 0.7C, where these two values are the estimates derived from raw and adjusted data, respectively.

I do not read the paper as extending the results from the sample of 163 GHCNM v2 stations (lower estimate of 0.42C from the raw data, higher estimate of 0.76C from the adjusted data) to a claim that half the warming results from homogenization – the abstract merely suggests that the warming is likely to lie somewhere between that indicated by the raw data and that indicated by the adjusted data.

Gail Combs
July 17, 2012 6:58 pm

Trevor says:
July 17, 2012 at 4:46 pm
I am amazed to see the “No global warming” crowd “warming” to the idea of global warming,….
____________________________________
Of course there is “Global Warming”; we are in an interglacial. If there wasn’t “Global Warming”, NYC would be under a mile of Ice.
Here is a graph of Temperature and CO2 over the past 400 thousand years. Note the earth warms and cools, and the present increase in CO2 (on right) hasn’t caused “CAGW”. Actually the CO2 skyrockets while the temperatures are MORE stable than in the other four interglacials.
Also see:
Lesson from the past: present insolation minimum holds potential for glacial inception Ulrich C. Müller, Jörg Pross, Institute of Geosciences, University of Frankfurt
In Defense of Milankovitch by Gerard Roe, Department of Earth and Space Sciences, University of Washington, Seattle, WA, USA
Article on above: http://motls.blogspot.com/2010/07/in-defense-of-milankovitch-by-gerard.html
http://wattsupwiththat.com/2012/03/16/the-end-holocene-or-how-to-make-out-like-a-madoff-climate-change-insurer/
People here at WUWT are just not hung-up on CO2 as the “control knob” for the climate. I at least consider CO2 a life giving gas that was dwindling to a dangerously low amount in the atmosphere. The recent evolution of C4 and CAM photosynthesis and the current GREENING of the biosphere show just how bad the situation was getting.
Or there is this paper in the proceedings of The National Academy of Sciences: Carbon starvation in glacial trees recovered from the La Brea tar pits, southern California by Joy K. Ward, John M. Harris, Thure E. Cerling, Alex Wiedenhoeft, Michael J. Lott, Maria-Denise Dearing, Joan B. Coltrain, and James R. Ehleringer…..

mbw
July 17, 2012 7:36 pm

Since there is no paper, only an abstract to a presentation, I don’t see how people can judge the validity of the claims. Perhaps someone here will file a freedom of information request to get a copy of the paper.

July 17, 2012 7:50 pm

A question. Are record highs and lows also homogenized?

kim
July 17, 2012 7:51 pm

Global warming, we hardly knew ye.
===========

DR
July 17, 2012 8:18 pm

Bill Illis says:
July 17, 2012 at 5:04 pm
The lower troposphere is supposed to be warming at a faster rate than the surface, particularly in the Tropics where it is supposed be 27.3% higher according to the climate models, but also extending to mid-latitudes like the US.

Roy Spencer says:
if the satellite warming trends since 1979 are correct, then surface warming during the same time should be significantly less, because moist convection amplifies the warming with height.

DR
July 17, 2012 8:22 pm

Dr. Koutsoyiannis followed the same steps for a previous publication. First, a presentation, then publish the paper.
http://climateaudit.org/?s=Koutsoyiannis

July 17, 2012 8:39 pm

Chris Wright says:
July 17, 2012 at 4:31 am
“…However, I don’t think it is the result of any organized conspiracy…”
Never attribute to stupidity that which is best explained by a collection of ###****s with an agenda.
Yeah, it’s not the way it actually goes, but with the flip side of that you are giving the agenda pushers an easy win while you sit around waiting for better indications.

John Brookes
July 17, 2012 8:59 pm

It worries me that when I read these posts, I think, “Wow, there really is something dodgy about global warming!”. Then I read the comments, and somewhere there is always some “alarmist” who points out annoying details, and I start to doubt. How come just about every nail in the coffin of AGW seems to be made of jelly?
[REPLY – What it means is that WUWT, unlike nearly all alarmist blogs, does not censor contrary points of view. Science is a very back-and-forth kind of thing. Anyone can be wrong. Anything can be wrong. Consider that. ~ Evan]

David Gould
July 17, 2012 9:09 pm

It would seem to me that if the warming has been that minute then the earth as a system is incredibly sensitive, given the rapid changes in the Arctic and the quite dramatic fall in soil moisture levels globally (http://climexp.knmi.nl/ps2pdf.cgi?file=data/ipdsi_0-360E_-90-90N_n.eps.gz). If the earth is that sensitive, then we are in just as much trouble as if the temperature increase had been large …

July 17, 2012 9:54 pm

In calculating correlation coefficients between pairs of stations, Tmax and Tmin will give rather different correlations over a given time period. Further, different compilers use different ways to calculate Tmean, sometimes by simply averaging Tmax and Tmin. From these 2 sentences alone, one can see scope for error. (Note to BEST – might be an idea to check this if you have not already).
Here is a short-cut look at correlations, calculated by taking a single site and lagging the annual data by 1, 2, 3 etc years. Calculations like this support the above contention.
http://www.geoffstuff.com/Melb-correl.jpg
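A minimal sketch of the lag-correlation check described above, assuming 'annual' is a NumPy array of one station's annual means (running it separately on Tmax-based and Tmin-based annual means would show whether the two give different correlation structures):

```python
# Minimal sketch: correlation of an annual series with itself at lags 1..max_lag.
import numpy as np

def lag_correlations(annual, max_lag=10):
    out = {}
    for lag in range(1, max_lag + 1):
        out[lag] = np.corrcoef(annual[:-lag], annual[lag:])[0, 1]
    return out
```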

SteveS
July 17, 2012 10:40 pm

Steven Mosher says:
July 17, 2012 at 12:12 pm
Victor Venema says:
July 17, 2012 at 11:50 am (Edit)
Steven Mosher says:
“Then of course it would make sense to check the report and see how PHA did? Because they are looking at GHCN v2 here, ”
The pairwise homogenization algorithm used by NOAA to homogenize USHCN version 2, is called “USHCN main” in the article. It performed well. It has a very low False Alarm Rate (FAR). As there is always a trade of between FAR and detection power, the algorithm could probably have been more accurate overall. And the pairwise algorithm has a fixed correction for every month of the year. Inhomogeneities can, however, also have an annual cycle. For example, in case of a radiation error, the jump will be larger in summer as in winter. With monthly corrections USHCN would have performed better, especially as the size of the annual cycle of the inhomogeneities in the artificial data used in this study was found to be a little too large.
################
Thanks Victor I’m pretty well aware of how PHA did, but thanks for explaining to others. Most wont take time to read the article or consider the results. Those who do take the time to skim the article for a word ( like PHA ) will not find it. But of course if they were current on the literature they would know that PHA is USHCN main. (hehe. told me everything I needed to know)
Interesting Steve…… at 12:12 pm today you chastise people for not being abreast of the latest literature in not knowing what PHA main is. An hour earlier at Steve M’s site, you yourself don’t seem quite so sure what it is. I take “hmmmm” and “pretty sure” to be, well, not so sure.
“Steven Mosher
Posted Jul 17, 2012 at 10:53 PM | Permalink | Reply
Hmm. yes he describes the process of correcting GHCN v3 as SNHT.
hmm. I pretty sure that the PHA algorithm is not SNHT.”
Apparently you yourself were not up to date on this one, until of course you looked it up and came back to look like a genius. :0)

Spence_UK
July 17, 2012 11:27 pm

And only six of the stations in the diagram are north of 60 degrees latitude, where most of the warming is happening.

How does this compare, as a ratio, to HadCRU/GISS then? I’d be surprised if they were much different. Also, why do you think the homogenisation process will suddenly start working beyond 60 degrees N?

nc
July 17, 2012 11:39 pm

Now with this revelation and the earlier post about the IPCC, anyone with thoughts on a Climategate 3 release? Keep punching while the consensus is dazed.

John Finn
July 18, 2012 12:41 am

scarletmacaw says:
……..
As brokenyogi pointed out, the study concluded that the warming is between 0.4 and 0.7.
If the warming from adjustment was 100% in error (i.e. a correct adjustment would have resulted in zero additional warming) then the conclusion is that there has been warming since 1980, and there was similar warming from 1900 to 1940, negated by cooling from 1940 to 1980. Just drawing a trend line through 1900-1980 glosses over the up and down nature of the temperature history.

Which means that solar activity has a negligible effect, since solar activity was higher in 1940-1980 than it was in 1900-1940. It also brings into question the UHI effect.
But you’re also grasping at straws: you seem to be suggesting that temps rose 0.4, fell by 0.4, and then rose again by 0.4. Presumably we’ll now fall by 0.4 deg again. Have fun with that particular hypothesis.

Alexej Buergin
July 18, 2012 1:07 am

“mikelorrey says:
July 17, 2012 at 6:27 pm
Soooooo lemme get this right: 25-50% of climate change is due to solar variation, 25-50% is due to ENSO/AMO/PDO/NAO variations, and now 50% is due to homogenization effects. That means 100-150% of warming is now accounted for”
Let’s get it right: If 50% is due to homogenization, climate change is 0.4°C and not 0.8°C. Of these 0.4°C, 25% would be due to solar variation and 25% to ENSO etc. …
That leaves up to 0.2°C for CO2.

Alexej Buergin
July 18, 2012 1:29 am

“Steven Mosher says:
July 17, 2012 at 11:12 am
Situation: When have station named Mount Molehill. It is located at 3000 meters above sea level. It records nice cool temperatures from 1900 to 1980. Then in 1981 they decide to relocate the station to the base of Mount Molehill 5 km away. Mount Molehill suddenly because much warmer.
But won’t they rename the station? Nope! they may very well keep the station name the same.”
An example for “Mount Molehill” in the real world can be seen here:
http://wattsupwiththat.com/2009/12/06/how-not-to-measure-temperature-part-92-surrounded-by-science/
Note: They did rename the station instead of calling both of them “Wellington”.
But otherwise …

John Doe
July 18, 2012 1:59 am

Steven Mosher says:
July 17, 2012 at 11:12 am
“Now, my friends, how do you handle such a record. a station at 3000 meters is moved to 0 meters and suddenly gets warmer? That’s some raw data folks. Thats some un adjusted data.
anybody want to argue that it should be used that way??”
What you do is presume that, with thousands of stations, for every one that moved to a lower altitude another somewhere moved to a higher altitude, and they cancel out.
These instruments were never designed for this task in any case. They cover only a tiny fraction of the earth’s surface and are not accurate to tenths of a degree and until recently only recorded two instantaneous temperatures per day. You can’t make a silk purse out of a sow’s ear.
Follow the satellite data. It’s only 33 years but it’s the only network capable of establishing a global average temperature. As of now it shows 0.14C/decade warming and falling as the warming was faster in the earlier part of the record.
http://woodfortrees.org/plot/rss/every/mean:12/offset:0.13/plot/rss/every/trend/offset:0.13/plot/uah/every/mean:12/offset:0.23/plot/uah/every/trend/offset:0.23
When we overlay the satellite record on the AMDO we see a reasonable explanation for why it was rising faster in the early part of the record – the satellite measurements happened to begin coincident with the warming side of a 60-year cycle.
http://woodfortrees.org/plot/rss/every/mean:12/offset:0.13/plot/rss/every/trend/offset:0.13/plot/uah/every/mean:12/offset:0.23/plot/uah/every/trend/offset:0.23/plot/esrl-amo/every
Another 5-10 years of satellite temperature following the AMDO on the downside of the cycle, further reducing the current 0.14C/decade, should settle the matter one way or the other. It’s not looking good for the warmists at this point. IPCC AR1 in 1990 predicted 0.30C/decade warming if CO2 emission was not curtailed. It wasn’t curtailed and less than half the predicted warming actually occurred. That’s game over as far as IPCC consensus “skill” goes. The only thing left to determine is exactly how badly wrong they were and why. The post mortem will be interesting.

Power Engineer
July 18, 2012 3:51 am

“..and the other half is due to the UHI urban heat island”
I believe we should rename this HI Heat Island as it is present in small towns also. My hometown is only 4000 people yet over the last 60 years most of the large trees have died, the lawns have been paved over to create parking lots, the homes have been converted to offices with extensive air conditioning, the car traffic has increased 10-fold.
It is not uncommon to notice a 5 deg F temperature decrease as you leave town.
Had this town been a temperature monitoring location it would have shown warming over the last 60 years but little of it would have been due to the climate.
I see the same thing happening all over America….. and Europe.
This is doubly important because some of the studies minimize UHI by showing that small towns and large cities have similar temp increases. I say they are both showing the heat island effect.

July 18, 2012 4:07 am

The problem with shelters that are open to the bottom is actually also “thermal pollution”. The problem with these shelters is that on days with strong insolation and little wind, the soil heats up and the thermal radiation from the ground heats the thermometer. This is very similar to the case of the UHI, where the surface heats the air and then the thermometer.
See the figure below, where two Stevenson screens are compared with one Montsouris screen (right), which is open to the bottom and to the North. Any “skeptic” can build such an old screen and validate that indeed these measurements were biased and thus need to be corrected to obtain trends in the true climate.
cd_uk says: “If you’re correcting for the UHI then one would expect that most would be lowered not raised.”
Exactly, and the fact that the temperature trend is higher after homogenization means that the UHI is less important than all the other inhomogeneities.
cd_uk says: “As for your link to your page on homogenisation. Thanks for that. It only refers to optimisation, many systems of linear combinations (e.g. some type of weighted mean) are derived via optimsation where the process is to minimise the error between the estimated and true value. This would be a type of averaging. But can’t say one way or another given your page. But thanks anyway.”
There are two types of homogenization algorithm: ones that work with pairs of stations, such as USHCN, and ones that use a reference time series computed from multiple surrounding stations. This reference is indeed a weighted average of the surrounding stations. (Some people use kriging weights, which are optimal if the time series do not contain inhomogeneities; it still has to be studied whether they are optimal for homogenization.) However, this reference is not used to replace the station data; rather, a difference time series is computed by subtracting this reference from the candidate station. In this way the regional climate signal is removed and a jump can be detected much more reliably. The jump size found in this difference series is added to the candidate station for homogenization. The homogenized data is thus the original data plus a homogenization adjustment; it is not the averaged signal of the neighbors, and there is no smearing of the error, as Anthony Watts keeps repeating.
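A minimal numerical sketch of the reference-series idea described above (illustrative only, not the SNHT/PHA code itself; the break test here is a crude placeholder):

```python
# Minimal sketch: difference series against a weighted-neighbour reference,
# plus a crude placeholder break test. Real SNHT/PHA implementations differ.
import numpy as np

def difference_series(candidate, neighbours, weights):
    """candidate: 1-D array; neighbours: 2-D array (n_stations x n_years)."""
    reference = np.average(neighbours, axis=0, weights=weights)
    return candidate - reference        # the regional climate signal largely cancels

def largest_mean_shift(diff, min_seg=5):
    """Placeholder break test: the split point with the largest mean shift."""
    best_k, best_shift = None, 0.0
    for k in range(min_seg, len(diff) - min_seg):
        shift = diff[k:].mean() - diff[:k].mean()
        if abs(shift) > abs(best_shift):
            best_k, best_shift = k, shift
    return best_k, best_shift
```

The point made in the comment is visible in the sketch: the neighbours only enter through the difference series used to find the break; the candidate's own values are never replaced by a neighbour average.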

vvenema
July 18, 2012 4:16 am

“Steven Mosher
Posted Jul 17, 2012 at 10:53 PM | Permalink | Reply
Hmm. yes he describes the process of correcting GHCN v3 as SNHT.
hmm. I pretty sure that the PHA algorithm is not SNHT.”
The detection algorithm of PHA is SNHT. The standard version of SNHT uses a kriged reference time series; that is, it computes the difference between a reference time series and the candidate and detects the inhomogeneities on this difference time series.
The PHA uses a pairwise comparison; that is, it computes the differences for pairs formed by the candidate station and its surrounding stations, and then applies the SNHT test to detect the breaks in these pairs. Then you know the breaks in the pairs, but you still need to determine in which station the break actually is. If there is a break at a certain date in the difference between A and B and between A and C, but not between B and C, you can attribute the break to station A. With more stations this becomes much more reliable. This attribution part of the PHA is not part of the original, simpler SNHT algorithm.
Thus both statements are okay. You can call PHA a version of SNHT if you focus on the detection part, but you can also see it as too different if you want to emphasise the full algorithm.
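A minimal sketch of the pairwise attribution step described above (illustrative only; the real PHA is considerably more elaborate). Here `detect_breaks` stands in for an SNHT-style test applied to one difference series:

```python
# Minimal sketch of pairwise break attribution. 'series' is a dict of
# station name -> aligned NumPy array; detect_breaks(diff) is a placeholder
# returning the positions of breaks found in one difference series.
from itertools import combinations
from collections import Counter

def attribute_breaks(series, detect_breaks):
    votes = Counter()
    for a, b in combinations(series, 2):
        diff = series[a] - series[b]
        for pos in detect_breaks(diff):
            votes[(a, pos)] += 1        # the break could sit in either station...
            votes[(b, pos)] += 1
    # ...but a break that recurs in many pairs involving the same station, and is
    # absent from pairs that exclude it, gets attributed to that station.
    return votes
```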

wayne Job
July 18, 2012 4:33 am

Take one hundred long-term rural stations across America, making due allowance for any UHI effect, graph them individually for trend and divide by 100. The AGW crowd are all about trends; they would be pleased with the results. Or maybe not. Statistical homogenization using the algorithms of the AGW crowd is somewhat like the company advertising for an accountant.
The recruiting officer only asked one question: what is two plus two? All failed until one applicant said “what would you like it to be?” He got the job.

vvenema
July 18, 2012 6:23 am

DR says: “Dr. Koutsoyiannis followed the same steps for a previous publication. First, a presentation, then publish the paper. ”
That is okay; that is what conferences are for: to discuss your preliminary results with colleagues and improve the analysis before you publish. On your own, you are likely to overlook something, especially for a new topic; as far as I know Koutsoyiannis did not work on homogenization before. That is why it is such a pity that the climate “skeptics” are never at conferences. (Except for people like Roger Pielke, who do not deny climate change, but only say it is more complex, which is always true; real life is always more complex.)
The problem is Anthony acting as if these power point slides were a finished scientific study, a “peer reviewed paper”, which he fortunately corrected, although the title of the post still claims it was a “paper”. The main problem is the lack of critical thinking here when the results point in the “right” direction. If this study had shown that the trend is actually twice as strong, the study would have been heavily criticized.
Mark Harrigan says it beautifully:
http://tamino.wordpress.com/2012/07/18/wheres-the-skepticism/#comment-64174

J Crew
July 18, 2012 6:27 am

I noticed each troll talked down from their lofty position in climate science. But in open scientific debate they were exposed as narrowly opinionated and not truth seekers, still hung up on CO2 as the control knob. Apart from funding, their foundation continues to crumble beneath them before many.

izen
July 18, 2012 6:56 am

This is a classic example of confirmation bias.
Take a small cherry-picked sample of temperature data records, mainly from areas which have shown less warming than the whole globe and which required more correction for time of observation, sensor type and microclimate change than most and compare the uncorrected trend with the result after correction.
When this limited sample shows an uncorrected trend lower than the global trend from every other data source, including satellite data that does not have any homogenisation correction, or the BEST temperature series the skeptical response would be to doubt the validity of the analysis. Only the dogmatically devoted who avidly embrace any and all suggestions that the observed warming may be smaller than the full diversity of the data indicates would elevate such tendentious and uncertain work to something that calls into question the mainstream record.

cd_uk
July 18, 2012 7:09 am

Victor
Is the point of the article not, as one would expect for the UHI effect, that most station data would be adjusted up rather than adjusted down as in the homogenised data? The other point I’d add is that most urbanisation would be gradual and therefore may not be identified by a discrete jump. Furthermore, if your homogenisation uses neighbouring stations suffering from the same process the effect would be to push the temperatures up.
As for your qualification on temperature homogenisation, thanks. I think the result will still be the same – smoothing.
If I’ve got this right:
1) You use interpolated data (for candidate station) to predict the temperature relative to neighbouring stations. This is carried out for each year of the time series.
2) This difference (for each year): diff = observed – interpolated
3) This diff is then added onto the observed to give the homogenised value and thus:
interpolated = diff + observed
which, as you can see, is the same as just assigning the interpolated value to the candidate station, and hence the smoothing.

cd_uk
July 18, 2012 7:14 am

Sorry Victor, that should’ve been “…adjusted down rather than up as in the homogenised data…”

cd_uk
July 18, 2012 7:28 am

izen
I don’t think that is what is going on here. The homogenisation process does appear to be a purely statistical approach that will effectively smooth real/false climatological data indiscriminately. The reason for these adjustments is to remove experimental error, but this should have a 50:50 split and therefore will not affect the final result. This is not what is happening.
As for global records you are correct. The satellite data does corroborate the instrumental record. However, some of the heaviest adjustment (and downward ones) are pre-satellite.
As for BEST, they still have to release their Kriging Variance maps for each year (as far as I know). Without these we don’t know what proportion of their gridded data has uncertainties equal to the variance of the set. If the majority have “variances” on the order of the range seen in the time series they produced, the chronology has, casually speaking, no statistical significance.

scarletmacaw
July 18, 2012 7:43 am

vvenema says:
July 18, 2012 at 4:16 am

Thank you for your detailed explanation.
That method sounds like it would do a very good job of finding discontinuities due to station moves, equipment changes, and microclimate changes. It doesn’t sound like it would solve the problem of a relatively slow increase in UHI, and might end up correcting the few non-UHI stations in the wrong direction.
Air conditioning and a switch to asphalt paving both occurred (at least in the developed world) from the mid-1960s through the 1970s. This would give a significant increase in UHI mainly during that period and explains why the relatively flat temperature history of GHCN et al. disagrees with the concerns of rapid cooling of some scientists in the late 1970s. The UHI in the temperature record masked the real cooling apparent in the actual weather at the time.

scarletmacaw
July 18, 2012 7:50 am

izen says:
July 18, 2012 at 6:56 am
This is a classic example of confirmation bias.
Take a small cherry-picked sample of temperature data records, mainly from areas which have shown less warming than the whole globe and which required more correction for time of observation, sensor type and microclimate change than most and compare the uncorrected trend with the result after correction.

‘Cherry-picked’? How could you possibly know? Who is the dogmatic one here?

Nils
July 18, 2012 8:42 am

This is based on a master’s thesis (I presume) in Greek found here: http://itia.ntua.gr/en/docinfo/1183/
Similar abstracts, but sadly I don’t understand Greek.

July 18, 2012 8:42 am

Rob Dawg said:
Every time we revisit this subject I just imagine the dedicated person in the 1930s trudging out to the station every day and recording the temperature, squinting to get that last tenth of a degree and wondering what they would think if they knew 80 years later some armchair climatologist was going to “adjust” their reading up three degrees.
Down 3 degrees is more likely…

jayhd
July 18, 2012 9:53 am

vvenema 7-18 @6:23
I don’t know what your “climate science” credentials are, but with comments like “That is why it is such a pity that the climate “skeptics” are never at conferences. (Except for people like Roger Pielke, who do not deny climate change” you prove you have a “doctorate in ignorance” when it comes to your knowledge of those who are skeptical of CAGW and the claims that man is the primary driver of climate change. Your clearly expressed disdain for skeptics shows you have no desire whatsoever to listen to counter-arguments and reasoning.
Jay Davis

beng
July 18, 2012 10:48 am

****
Victor Venema says:
July 17, 2012 at 10:38 am
In practice homogenization makes the temperature trend stronger. This is because temperatures in the past were too high. In the 19th century many measurement were performed at North facing walls, especially in summer the rising of setting sun would still burn on these instruments. Consequently these values were too high and homogenization makes the lower again. Similarly, the screens used in the first half of the 20th century were open to the North and to the bottom. This produced too high temperatures on days with little wind and strong sun as the soil would heat up and radiate at the thermometer.
****
Well, you’d need some pretty detailed photos and/or metadata to quantify that.
But let’s assume it for the moment. I’d think the “radiation” aspect would go both ways. Having an opening to the ground & north sky at nite would also lower the min temp on a calm nite (the ground surface cools first), no? Some rather simple experimental setups could prb’ly quantify it in a reasonable time. One would think with the billions & billions of bucks going to “climate research”, a couple setups like that wouldn’t be hard to do?

July 18, 2012 11:43 am

John Brookes says:
July 17, 2012 at 8:59 pm
It worries me that when I read these posts, I think, “Wow, there really is something dodgy about global warming!”. Then I read the comments, and somewhere there is always some “alarmist” who points out annoying details, and I start to doubt. How come just about every nail in the coffin of AGW seems to be made of jelly?
[REPLY – What it means is that WUWT, unlike nearly all alarmist blogs, does not censor contrary points of view. Science is a very back-and-forth kind of thing. Anyone can be wrong. Anything can be wrong. Consider that. ~ Evan]
===================================================
I’d like to add that the Nail in the Tree Ring is still firmly imbedded. Also the Wizard of COz’s predictions about what the increase in CO2 would do are already way off the mark. Those are the foundations of an erroneous hypothesis that is driving energy and economic policies. And civilization into the ground.
“Just about every nail in the coffin”? You’re exaggerating, but it only takes one. There are many. It’s the coffin itself of CAGW that is made of jelly.

July 18, 2012 12:05 pm

John Finn : “solar activity was higher in 1940-1980 than it was in 1900-1940”
There are two main types of solar. TSI and bright sunshine.
Bright sunshine changes may have played a role:
http://sunshinehours.wordpress.com/category/sunshine/

July 18, 2012 1:51 pm

This subject is nothing new. I remember that Steve McIntyre posted on the NOAA TOB adjustments some 8 or so years ago. Like other homogenizations, the TOBs lowered the 1930s while increasing the post-1990 temps. Plus ça change….

Owen
July 18, 2012 2:43 pm

With this tiny amount of warming in the surface temperature over the past century, I find it remarkable that the UAH tropospheric temperatures in the past 33 years have risen 0.46 degrees, and that the Greenland and Antarctic land ice packs are melting at accelerating rates, and that the northern hemisphere snow cover is dropping dramatically, and that the sea level continues to rise, and that arctic sea ice area and volume are dropping dramatically, and that the ocean heat content down to 2000 meters is rising inexorably, and that the tundra is melting, and that ……..

JP
July 18, 2012 3:25 pm

@Owen,
You may wish to re-evaluate your statements concerning the Arctic. Alaska as well as Northwest Europe are suffering through abnormally cold, wet summers, and the ice pack melt is not accelerating. And as far as sea level rises, have you ever visited the Kwajalein atolls in the Pacific in recent years? The atolls are still there, same as before. And you may wish to re-evaluate sea surface temps. Even with an approaching El Niño, there is nothing abnormal about them. As a matter of fact, other than the US, most of the world is having a normal to below normal time of it temperature-wise these last several years.

July 19, 2012 1:38 am

beng says: “Some rather simple experimental setups could prb’ly quantify it in a reasonable time. One would think with the billions & billions of bucks going to “climate research”, a couple setups like that wouldn’t be hard to do?”
Has been done:
Böhm, R., P.D. Jones, J. Hiebl, D. Frank, M. Brunetti, M. Maugeri. The early instrumental warm-bias: a solution for long central European temperature series 1760–2007. Climatic Change, 101, pp. 41–67, doi: 10.1007/s10584-009-9649-4, 2010.
Abstract. Instrumental temperature recording in the Greater Alpine Region (GAR) began in the year 1760. Prior to the 1850–1870 period, after which screens of different types protected the instruments, thermometers were insufficiently sheltered from direct sunlight so were normally placed on north-facing walls or windows. It is likely that temperatures recorded in the summer half of the year were biased warm and those in the winter half biased cold, with the summer effect dominating. Because the changeover to screens often occurred at similar times, often coincident with the formation of National Meteorological Services (NMSs) in the GAR, it has been difficult to determine the scale of the problem, as all neighbour sites were likely to be similarly affected. This paper uses simultaneous measurements taken for eight recent years at the old and modern site at Kremsmünster, Austria to assess the issue. The temperature differences between the two locations (screened and unscreened) have caused a change in the diurnal cycle, which depends on the time of year. Starting from this specific empirical evidence from the only still existing and active early instrumental measuring site in the region, we developed three correction models for orientations NW through N to NE. Using the orientation angle of the buildings derived from metadata in the station histories of the other early instrumental sites in the region (sites across the GAR in the range from NE to NW) different adjustments to the diurnal cycle are developed for each location. The effect on the 32 sites across the GAR varies due to different formulae being used by NMSs to calculate monthly means from the two or more observations made at each site each day. These formulae also vary with time, so considerable amounts of additional metadata have had to be collected to apply the adjustments across the whole network. Overall, the results indicate that summer (April to September) average temperatures are cooled by about 0.4°C before 1850, with winters (October to March) staying much the same. The effects on monthly temperature averages are largest in June (a cooling from 0.21° to 0.93°C, depending on location) to a slight warming (up to 0.3°C) at some sites in February. In addition to revising the temperature evolution during the past centuries, the results have important implications for the calibration of proxy climatic data in the region (such as tree ring indices and documentary data such as grape harvest dates). A difference series across the 32 sites in the GAR indicates that summers since 1760 have warmed by about 1°C less than winters.

vvenema
July 19, 2012 1:48 am

scarletmacaw says: “That method sounds like it would do a very good job of finding discontinuities due to station moves, equipment changes, and microclimate changes. It doesn’t sound like it would solve the problem of a relatively slow increase in UHI, and might end up correcting the few non-UHI stations in the wrong direction.”
In case of a slow increase, you would see such a slow increase in the difference time series as well. You can correct such local trends with several small breaks. The pair-wise homogenisation method used for the USHCN explicitly corrects local trends, if I remember correctly.
In our validation study of homogenisation algorithms we also inserted local trends to simulate the UHI effect or the growing of bushes around the station. See our open-access paper:
http://www.clim-past.net/8/89/2012/cp-8-89-2012.html

vvenema
July 19, 2012 2:16 am

cd_uk says: “Is the point of the article not, as one would expect for UHI effect, that most station data would are adjusted up rather than adjusted down as in the homogenised data?”
I am not sure I understand the question. If the UHI effect were the only inhomogeneity in the climate records, you would expect homogenization to reduce the trend. That homogenization increases the trend shows that other inhomogeneities are more important.
cd_uk says: The other point I’d add is that most urbanisation would be gradual and therefore may not be idntified by a discrete jump.”
There are also homogenization methods that correct using a local trend over a certain period. The others detect such local trends as a number of small jumps and thus also correct them with several small jumps, which seems to work just as well.
cd_uk says: “Furthermore, if your homogenisation uses neighbouring stations suffering from the same process the effect would be to push the temperatures up.”
Yes, that would be a problem. If more than half of the data is affected by a local trend, it becomes impossible to distinguish a true climate trend from a local trend. I did not study it myself, but what I understood is that even if all of the stations were in urban areas, not all of the data would be affected by a trend due to the urban heat island, because such temperature trends only happen during part of the urbanisation. In the centre of large urban areas the temperature is no longer increasing, but at a fixed higher level, which does not cause any problems for computing trends.
cd_uk says: “As for your qualification on temperature homogenisation thanks. I think the result will still be the same – smoothing.
interpolated = diff + observed”
The right equation for the correction of a homogeneous subperiod is:
homogenised(t) = observed(t) + constant
with
constant = mean( diff(after break) ) – mean( diff(before break) )
(This would assume that you correct one break after the other; modern methods correct all breaks simultaneously, which is more accurate but makes the equation more complicated; the idea is basically the same.) No smoothing.
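The correction formula above, written out as a short sketch (single known break at index k, adjusting the earlier homogeneous subperiod):

```python
# Minimal sketch of the constant correction described above, for one known break.
import numpy as np

def correct_subperiod(observed, reference, k):
    diff = observed - reference
    constant = diff[k:].mean() - diff[:k].mean()   # mean(diff after) - mean(diff before)
    homogenised = observed.copy()
    homogenised[:k] += constant     # one constant per subperiod: an offset, not a smooth
    return homogenised
```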

cd_uk
July 19, 2012 4:35 am

vvenema
Thanks for getting back. It is good to see that you don’t just dismiss points out-of-hand.
1) My argument was that, assuming (and it is a fair assumption) that the UHI inflates the temperature, then surely the adjustments should, around urban centres, have a net downward effect. This doesn’t appear to be the case in the adjusted time series.
2) The point about gradual trends does appear more complicated and I can’t see how homogenisation can address these at all.
3) In short the main issue I have with the approach is that you’re assuming that homogenisation produces a more accurate story because the outcome matches the outcome one would expect of the processing procedure. This only validates that the procedure works as one would expect: if you ran a simple smooth operation on a noisy image you would reduce the appearance of noise in the image, but that doesn’t mean it is a more accurate representation of the object being imaged.
4) Where does this constant in your equation come from? You say it is the
mean( diff(after break) ) – mean( diff(before break) )
Is this the mean difference between the neighbouring stations and the candidate station? Then the effect is exactly the same as a smooth. Your first term “mean( diff(after break) )” is a smooth operation and the second “mean( diff(before break) )” is a smooth. You’re effectively finding the difference between two “convolutions” that smooth the data and applying this to your station data. The effect is always a type of smoothing, and yes, the degree of smoothing is a function of the constant (not just the spatial arrangement), but it is still a smoothing.

vvenema
July 19, 2012 8:30 am

cd_uk says: “1) My argument was that assuming, and it is a fair assumption, that the UHI inflates the temperature then surely the adjustments should, around urban centres, have a net downward effect. This doesn’t appear to be the case in the adjusted time series.”
That is because the UHI effect is typically small compared to the other inhomogeneities.
cd_uk says: “2) The point about gradual trends does appear more complicated and I can’t see how homogenisation can address these at all.”
Instead of using a constant, which amounts to a step function, you can also use a linear function which changes in time. Alternatively, you can detect and correct a gradual change by multiple small jumps in the same direction. In practice both methods work fine, other problems are more important.
cd_uk says: “3) In short the main issue I have with the approach is that you’re assuming that homogenisation produces a more accurate story because the outcome matches the outcome one would expect of the processing procedure. This only validates that the procedure works as one would expect: if you ran a simple smooth operation on a noisey image you would reduce the appearance of noise in the image but that doesn’t mean it is a more accurate representation of the object being imaged.”
It operates as expected and this is the operation we need to remove inhomogeneities. I see no problem, just confusion.
cd_uk says: “4) Where does this constant in your equation come from? You say it is the
mean( diff(after break) ) – mean( diff(before break) )”
The constant added to the entire homogeneous subperiod is:
mean( diff(homogeneous period after break) ) – mean( diff(homogeneous period before break) )
That is one number you add to the raw data for a certain homogeneous subperiod. If you want to see smoothing in this, I cannot help you. Keep reading WUWT.

Steve Garcia
July 19, 2012 11:09 am

In the .pdf, on the 7th page, there is this, which actually doesn’t surprise me:

Discussion on the homogenization-1
► Homogenization results are usually not supported by metadata or experiments (a known exception in literature is the experiment at the Kremsmünster Monastery, Austria).
Example: change of thermometers–shelters in the USA in the 1980s (Quayle et al., 1991)
No single case of an old and a new observation station running for some time together for testing of results is available!
● On the contrary, comparison and correction were made using statistics of remote (statistically correlated) stations. [bold added, but the exclamation point is in the original]

See, now this is just brain dead. What scientist wouldn’t think of doing this? And when ALL of them fail to think of it, it just boggles the mind. It has NEVER been done? Geez…
You know, folks, it isn’t too late to do this somewhere, at least once. And if they do, they should not only calibrate the new one to the old one, but then ALSO run a THIRD one (a second new one) for a long period of time, for ongoing comparison. In theory the two new ones should track 100% parallel. Empirically? Who knows?
Steve Garcia

Steve Garcia
July 19, 2012 12:02 pm

I’d point out that if 2/3 of the adjusted results were LOWER (instead of higher), the 0.42C raw-data trend would not become 0.76C after adjustment, but 0.08C. If that negative direction were their end result, they’d be yammering about how constant the climate is. But that doesn’t suit their alarmism. There is no money ever going to be granted to tell us the climate is super stable.
Also, the fact that the overall adjusted delta is 0.34C above the raw average rise of 0.42C means that the 2/3 high adjusted values were considerably above that 0.34C delta value. My simplistic brain suggests those must have been twice that 0.34C, since that group (2/3) was twice the size of the adjusted-low group (1/3). If so, the adjusted-high group was 0.68C higher than what the raw data showed (0.42C), or 1.10C. And the adjusted-low group averaged +0.08C, 0.34C below their raw values. For 2/3 of the stations to cause a rise when 1/3 is showing a drop after adjusting, the 2/3 has to have half the rise per average station (adjusted) of what the 1/3 group is pulling the average down. A 1.10C 2/3 group is 0.34C above the 0.76C result, and the 0.08C 1/3 group is 0.68C below the 0.76C.
I probably got that wrong, but it seems correct right now. For the overall average to rise by 0.34C while 1/3 of them actually show a decline, the two populations have to have a serious difference in their adjusted values. This is not some small variation between the two groups. The difference between the two groups is 1.02C. I am sorry, but I have to say that that big a difference doesn’t happen by accident. If the 1/3 group had also risen after adjustment, but to a lesser extent, then this could be seen as trivial or accidental. But since that 1/3 group DID actually decline after the adjustments, we are left with no other conclusion than that the 2/3 group was not only increased intentionally, but that the values were intentionally large.
Steve Garcia
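A quick consistency check of the back-of-envelope split above, taking the commenter’s assumptions at face value (they are his assumptions, not results from the paper): raw mean 0.42 °C, adjusted mean 0.76 °C, one third of stations adjusted down to an average of 0.08 °C, two thirds adjusted up.

```python
# Consistency check only; the 0.08 value for the down-adjusted group is the
# commenter's assumption, not a number from the study.
raw_mean, adj_mean = 0.42, 0.76
down_mean = 0.08
up_mean = (adj_mean - down_mean / 3) / (2 / 3)
print(round(adj_mean - raw_mean, 2))                  # 0.34, the overall shift quoted above
print(round(up_mean, 2))                              # 1.1, the "1.10C" claimed above
print(round(2 / 3 * up_mean + 1 / 3 * down_mean, 2))  # 0.76, the adjusted mean
```

So the numbers are internally consistent, though they follow entirely from the assumed 0.08 °C for the down-adjusted group.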

cd_uk
July 19, 2012 1:42 pm

vvenema
“That is because the UHI effect is typically small compared to the other inhomogeneities.”
No, the processing suggests that the UHI effect is small; again, see the earlier reference to the smoothing algorithm as carried out on a noisy image.
“Instead of using a constant, which amounts to a step function, you can also use a linear function which changes in time. Alternatively, you can detect and correct a gradual change by multiple small jumps in the same direction. In practice both methods work fine, other problems are more important.”
Yes, but then you’re making all the same assumptions, are you not? You’re assuming that the homogenisation improves the quality of the data rather than just processing it in a way that produces another bias. I agree that the homogenisation method, as a method for finding anomalies, is great, but that doesn’t mean that it identifies instrumental error; in short, that’s an assumption, not an experimental result.
“It operates as expected and this is the operation we need to remove inhomogeneities. I see no problem, just confusion.”
Yes, but again you’re assuming that the inhomogeneities degrade the accuracy of the final result. To return to the image analogy: who is to say the subject itself isn’t inherently noisy?
“That is one number you add to the raw data for a certain homogeneous subperiod. If you want to see smoothing in this, I cannot help you. Keep reading WUWT.”
No! Do you understand convolution? What you’re doing is taking an average, finding the difference and then adding that difference back – Jeez, man – THAT GIVES YOU THE AVERAGE! You are doing this for two different periods of time (each of them an average: a smooth, as described), then modifying the final average by that temporal difference. In short you are MODULATING the average by the difference – that is all! IT’S STILL A SMOOTH!
As for “keep reading WUWT” – where do you suggest I read instead? Your blog? Why? You don’t seem to understand the nature of filtering, which, by the way, is what homogenisation is. There are two main types of filtering: low pass and high pass. So which one is homogenisation? Low pass – which is, what, a bit like smoothing?
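For anyone trying to follow this exchange, the sketch below (Python, written purely for illustration – it is not taken from any homogenization package and does not claim to be either commenter’s method) puts the two operations being argued about side by side: adding one constant to a homogeneous subperiod, estimated from the mean difference against a reference series, versus running a moving-average (low-pass) smooth over the candidate series. All station data here are synthetic.

import numpy as np

rng = np.random.default_rng(0)
n = 100                                   # 100 "years" of annual means
reference = rng.normal(0.0, 0.3, n)       # hypothetical well-correlated neighbour
candidate = reference + rng.normal(0.0, 0.1, n)
candidate[:40] -= 0.5                     # artificial break: first 40 years read 0.5 low

# (a) Relative homogenization in its simplest form: shift the early subperiod
# by the change in the candidate-minus-reference difference across the break.
diff = candidate - reference
offset = diff[40:].mean() - diff[:40].mean()
adjusted = candidate.copy()
adjusted[:40] += offset                   # one constant added to one subperiod

# (b) A low-pass smooth for comparison: an 11-point running mean.
kernel = np.ones(11) / 11
smoothed = np.convolve(candidate, kernel, mode="same")

# The step adjustment leaves year-to-year variability essentially untouched
# (only the jump year changes); the running mean suppresses it everywhere.
print(f"std of year-to-year changes, raw:       {np.diff(candidate).std():.3f}")
print(f"std of year-to-year changes, adjusted:  {np.diff(adjusted).std():.3f}")
print(f"std of year-to-year changes, smoothed:  {np.diff(smoothed).std():.3f}")

Whether the subperiod shift should still be called “filtering” in the signal-processing sense is exactly the point in dispute; the sketch only makes concrete what each operation does to the year-to-year variability.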

E.M.Smith
Editor
July 20, 2012 11:50 am

@Gail Combs:
George E. Smith is another Smith… I’m E.M. Smith (but the E is the same for both of us 😉)
Yeah, I know, naming Smiths is functionally “Anonymous Anonymous”… I had a guy with an identical name in my Calculus class at U.C. and we had to use our student ID numbers to keep us straight, so I’m used to it…
IMHO the cut-off of thermometer records causes all sorts of subtle mayhem, not least that the majority of thermometers are now at airports and they are used for comparison with non-airports in the past. (And despite the humorous claim of ‘cooler airports’ above, anyone who has stood on the tarmac at an airport knows it’s a hot place.) Airports also have much more vertical mixing (which raises temperatures) and much less transpiration (which cools natural green landscapes – plants have a built-in air conditioner via evaporation and work to keep themselves below a limited maximum temperature; concrete and tarmac do not…).
So a “grid box average” is made from one set of thermometers richer in green fields, and then compared with a “grid box average” dominated by data from modern large jet airports. Just nuts on the face of it.
And there are not nearly enough instruments in the record for the total number of grid boxes. Last time I looked there were 8,000 grid boxes and only about 1,280 current GHCN stations in GIStemp (since then they have gone to 16,000 grid boxes…), and the maximum number of thermometers in the past was about 6,000, with most of THEM concentrated in two geographies: Europe and the USA. So at best roughly one box in six can even contain a current station, and by definition most of the grid-box values are a complete and utter fiction. One can only hope they are representative of something real.
On top of that, then, comes the issue of “homogenizing” what little real data exists.
One comment on the WalMart Thermometer model:
As you walk up to the bin, you notice that the sun is shining on half of it but not on the other half. Someone just watered the plants behind the display and some overspray happened, but you don’t know which instruments it hit, as they have mostly dried off. They were all made in China on machinery designed in °C but are painted in °F. The thermistors came from 4 batches: 3 of them with a tendency to read low by a fixed amount, the other batch with a 3-sigma variation between individual units (the operator of the QA station needed a tea break…).
Now what is your homogenizing going to do for you? Hmmm???
“Crap is Crap. Averaging it together gives you average crap. -E.M.Smith”
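The point about systematic error is easy to make concrete. The little simulation below (Python; the −0.8°C batch bias, the noise levels and the batch sizes are all invented for illustration, loosely following the bin described above) shows that averaging many readings beats down the random scatter but does nothing about a bias shared by most of the instruments.

import numpy as np

rng = np.random.default_rng(42)

true_temp = 20.0                 # the temperature every thermometer "should" read, in C
n_per_batch = 25                 # thermometers drawn from each batch in the bin

# Hypothetical batch characteristics, loosely following the bin description:
# three batches read low by a fixed amount, one batch is merely noisy.
batches = [
    {"bias": -0.8, "noise": 0.1},
    {"bias": -0.8, "noise": 0.1},
    {"bias": -0.8, "noise": 0.1},
    {"bias":  0.0, "noise": 0.9},   # the batch whose QA operator took a tea break
]

readings = np.concatenate([
    true_temp + b["bias"] + rng.normal(0.0, b["noise"], n_per_batch)
    for b in batches
])

print(f"true temperature:      {true_temp:.2f} C")
print(f"average of {readings.size} readings: {readings.mean():.2f} C")
print(f"error of the average:  {readings.mean() - true_temp:+.2f} C")
# Averaging beats down the random noise, but the shared -0.8 C bias in three
# of the four batches survives: the mean lands near 20 - 0.6, not near 20.

That is the sense in which averaging crap gives you average crap: the mean converges, but it converges to the wrong number.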
I also find it humorous that some folks are all a-twitter about the variation in one screen type vs. another in the same location; but blissfully certain that putting a thermometer in a grass field in 1900 can be compared to miles of concrete and tarmac with tons of kerosene burned per hour today and not have the slightest worry… (Most large airports today that make up the bulk of GHCN current data were grass fields in 1900 and often even into the 1940s…)
Guess it’s “climate science”… /sarcoff>;
I’m going to wander through the rest of the comments later, but it looks like the usual “Warmers asserting if you do just right just like they do everything is perfect” and other folks saying “Um, looks like crap data to me.”…
So if you actually look at the temperature data, you find large dropouts. Most obvious is the dropout during times like the World Wars. Then there are the whole countries that just drop out (as the creators of GHCN are just sure you only need a nearby country to fill in the missing ones today…).
Now there are two basic ways to fix those “dropouts”. One is “infill and homogenize”; the other is “bridge the gap”. In the first case you make up fictional values and fill them in. In the second case you look at ONE instrument, ONE location and ONE time period (like June) and assume that “June in Sacramento” is more like another “June in Sacramento” than anything else; if you have two values with a gap between them you can do some kind of interpolation between them. (This fails if the gap is long enough to cover half of a major cycle – for a 30-year dropout you could miss a PDO half-cycle – yet the first and last data would be unmolested, and the infill would at most dampen the global trend excursion in between by a very small amount while not introducing longer-term bias. Provided dropouts are reasonably uniformly distributed, this ought to be acceptable.)
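A minimal sketch of that “bridge the gap” idea, in Python. Everything here is illustrative: the synthetic June series, the 1940s dropout and the gentle trend are invented to show the mechanics, not to represent any real station or anyone’s operational procedure.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical series of June mean temperatures for ONE station, 1900-1999.
years = np.arange(1900, 2000)
june = 18.0 + 0.004 * (years - 1900) + rng.normal(0.0, 0.8, years.size)

# Knock out a block of years to imitate a wartime dropout.
june_with_gap = june.copy()
gap = (years >= 1940) & (years <= 1949)
june_with_gap[gap] = np.nan

# "Bridge the gap": linear interpolation between this station's own last
# value before the gap and first value after it -- no neighbours involved.
good = ~np.isnan(june_with_gap)
bridged = june_with_gap.copy()
bridged[~good] = np.interp(years[~good], years[good], june_with_gap[good])

# The endpoints of the record are untouched, so the 100-year trend is
# barely affected; what the infill cannot recover is any variation
# (e.g. a PDO half-cycle) that happened entirely inside the gap.
trend_true = np.polyfit(years, june, 1)[0] * 100
trend_bridged = np.polyfit(years, bridged, 1)[0] * 100
print(f"100-year June trend, full series:    {trend_true:.2f} C")
print(f"100-year June trend, bridged series: {trend_bridged:.2f} C")

The point of the approach is that only the station’s own history is used; its weakness, as noted above, is that anything that happened entirely inside the gap is simply not recoverable.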
Because of the massive amount of “missing data” from dropped instruments, the bulk of all cells are filled with fabricated values based on “homogenizing” and infill from what instruments do exist (mostly at airports, near the runways… Airport thermometers MUST report a reasonably accurate runway temperature, or folks get a wrong “density altitude” calculation and can crash. They simply cannot sit in the nice green grassy or treed area nearby and still do their primary job. Concrete and asphalt runways are significantly warmer than nearby forests and green fields.) On the face of it, this is just a bogus thing to do.
Instead, those airport thermometers ought to have their trend calculated ONLY with respect to themselves just as the “nearby” grassy / treed areas ought to have their trend calculated ONLY with respect to themselves. One ought not be used to “fill in” or “homogenize” the other.
So, IMHO, it is the interaction of increasing Airports in the data, the dropping of treed / grassy / truly rural instruments, and the infilling and homogenizing all those airport data into the now missing grassy / treed areas that causes the problem.
This shows up rather dramatically when you inspect the range of the monthly averages of thermometers, either in large aggregate or in smaller areas (down even to the scale of a dozen or so stations in some countries). There is a consistent “artifact” in the recent data: as of about 1987–1990 the “low excursions” just get squashed. The graphs have a “bottle brush” effect where the older data have much wider ranges and the recent data approximate a slightly wobbling line. The highs do not go higher, but the low excursions are washed out. IMHO this is direct evidence for “the problem” in the data. I just have not been able to show whether it is an artifact of the massive homogenizing lately, the “infilling” of missing data, the “QA process” that can replace a value with “the average of nearby ASOS stations” at airports, or simply the fact that the recent data come from electronic devices at airports and it just doesn’t get very cold there. (Never a “still air cold night” with a deep cold surface layer as jumbo jets come and go.)
A good example is this “hair graph” of the Pacific Basin GHCN data. Notice how much it gets “squashed” recently. All the variability just ironed out of it:
http://chiefio.files.wordpress.com/2010/04/pacific_basin_hair_seg.png
From: http://chiefio.wordpress.com/2010/04/11/the-world-in-dtdt-graphs-of-temperature-anomalies/
Until that very unusual anomaly in the data distribution is explained, the data are “not fit for purpose” if your purpose is to say what long term temperature trends have been via a homogenize / grid-box / infill method.
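The “squashed low excursions” diagnostic described above is straightforward to reproduce in outline. The sketch below uses entirely synthetic data (the station count, the 1990 change point and the clipping level are invented) and only shows what the test looks like, not what causes the feature in the real GHCN data.

import numpy as np

rng = np.random.default_rng(7)

years = np.arange(1950, 2011)
n_stations = 50

# Synthetic monthly anomalies for many stations: the same behaviour
# throughout, except that after 1990 the cold excursions are artificially
# clipped, imitating the effect the comment describes (whatever its cause).
anoms = rng.normal(0.0, 2.0, (years.size, n_stations))
late = years >= 1990
anoms[late] = np.clip(anoms[late], -1.0, None)   # low side squashed, highs untouched

# The diagnostic: the spread (range) across stations for each year.
spread = anoms.max(axis=1) - anoms.min(axis=1)
for start in range(1950, 2010, 10):
    decade = (years >= start) & (years < start + 10)
    print(f"{start}s  mean cross-station range: {spread[decade].mean():.1f} C")
# A drop in the range after 1990 with the maxima unchanged is the
# "bottle brush" signature; the sketch does not say why it appears in
# the real data, only how to look for it.

Applied to real station data, a collapse in the cross-station range while the maxima stay put would be the signature the comment describes; the sketch is silent on its cause.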

Bernard J.
July 21, 2012 8:20 am

Was there not a reference to Richard Muller and BEST in the original version of the post?

vvenema
July 21, 2012 11:20 am

feet2thefire says:
“● No single case of an old and a new observation station running for some time together for testing of results is available!

See, now this is just brain dead. What scientist wouldn’t think of doing this? And when ALL of them fail to think of it, it just boggles the mind. It has NEVER been done? Geez…”
How about these papers?
From a time before man-made climate change:
Margary, I.D., 1924. A comparison of forty years’ observations of maximum and minimum temperatures as recorded in both screens at Camden Square, London. Q.J.R. Meteorol. Soc., 50, pp. 209–226 and 363.
Marriott, W., 1879. Thermometer exposure – wall versus Stevenson screens. Q.J.R. Meteorol. Soc., 5, pp. 217–221.
Or from a reliable Dutchman:
Brandsma, T., 2004. Parallel air temperature measurements at the KNMI-terrain in De Bilt (the Netherlands), May 2003 – April 2005. Interim report. http://wap.knmi.nl/onderzk/klimscen/papers/Hisklim7.pdf
Van der Meulen, J.P. and T. Brandsma, 2008. Thermometer screen intercomparison in De Bilt (the Netherlands), Part I: Understanding the weather-dependent temperature differences. Int. J. Climatol., 28, pp. 371–387, doi: 10.1002/joc.1531.
Or from a reliable Norwegian guy:
Nordli, P.Ø., et al., 1997. The effect of radiation screens on Nordic time series of mean temperature. Int. J. Climatol., 17(15), pp. 1667–1681, doi: 10.1002/(SICI)1097-0088(199712)17:153.0.CO;2-D.
Or from a reliable Austrian guy:
Böhm, R., P.D. Jones, J. Hiebl, D. Frank, M. Brunetti and M. Maugeri, 2010. The early instrumental warm-bias: a solution for long central European temperature series 1760–2007. Climatic Change, 101, pp. 41–67, doi: 10.1007/s10584-009-9649-4.
Or from a reliable Spanish lady:
Brunet, M., J. Asin, J. Sigró, M. Bañón, M. García, F. Aguilar, E. Palenzuela, T.C. Peterson and P.D. Jones, 2011. The minimisation of the “screen bias” from ancient Western Mediterranean air temperature records: an exploratory statistical analysis. Int. J. Climatol., 31, pp. 1879–1895, doi: 10.1002/joc.2192.
You will find many more older references on different weather shelters and their influence on the mean temperature in:
Parker, D.E., 1994. Effects of changing exposure of thermometers at land stations. Int. J. Climatol., 14, pp. 1–31.
I have not read all of these papers yet, but I would guess the titles alone are sufficient to disprove the original claim that there are no parallel measurements to validate the breaks found during homogenization. It is just not the kind of literature that makes it into Science, Nature or the New York Times. Luckily some colleagues still do this work, because it is important.
And yes, some of these papers have Phil Jones as co-author. If Phil Jones were not interested in homogenization, you would complain about that too.

Heystoopidone
July 22, 2012 3:26 am

Ah, thank you Anthony, the paper was an interesting read, especially the temperature data graphs for “De Bilt station – The Netherlands” and “Sulina station – Romania”, both of which terminated in the year 1990.
Cheers 🙂

Jesse Farmer
July 23, 2012 9:25 am

Haven’t we been through this before? Berkeley Earth Surface Temperature reconstruction.
Scientific meetings like EGU (and the similar AGU Fall Meeting in the US) are an opportunity to present novel analyses to a broader audience of scientists and field specialists, in order to gain feedback prior to considering publication. These abstracts/presentations are NOT peer-reviewed literature and should NOT be considered anything more than scientific speculation. This is how the process works.

Greg
July 29, 2012 4:39 pm

How come you lobbyists only manage to find non-peer-reviewed abstracts (not even proper papers) to support your claims? Anyone can submit an abstract to EGU. This has no validity whatsoever.