From the "told ya so" department comes this recently presented paper at the European Geosciences Union meeting.
Authors Steirou and Koutsoyiannis, after taking homogenization errors into account, find that global warming over the past century was only about one-half [0.42°C] of that claimed by the IPCC [0.7-0.8°C].
Here's the part I really like: in 67% of the weather stations examined, questionable adjustments were made to the raw data that resulted in:
“increased positive trends, decreased negative trends, or changed negative trends to positive,” whereas “the expected proportions would be 1/2 (50%).”
And…
"homogenization practices used until today are mainly statistical, not well justified by experiments, and are rarely supported by metadata. It can be argued that they often lead to false results: natural features of hydroclimatic time series are regarded as errors and are adjusted."
The paper abstract and my helpful visualization on homogenization of data follows:
Investigation of methods for hydroclimatic data homogenization
Steirou, E., and D. Koutsoyiannis, Investigation of methods for hydroclimatic data homogenization, European Geosciences Union General Assembly 2012, Geophysical Research Abstracts, Vol. 14, Vienna, 956-1, European Geosciences Union, 2012.
We investigate the methods used for the adjustment of inhomogeneities of temperature time series covering the last 100 years. Based on a systematic study of scientific literature, we classify and evaluate the observed inhomogeneities in historical and modern time series, as well as their adjustment methods. It turns out that these methods are mainly statistical, not well justified by experiments and are rarely supported by metadata. In many of the cases studied the proposed corrections are not even statistically significant.
From the global database GHCN-Monthly Version 2, we examine all stations containing both raw and adjusted data that satisfy certain criteria of continuity and distribution over the globe. In the United States of America, because of the large number of available stations, stations were chosen after a suitable sampling. In total we analyzed 181 stations globally. For these stations we calculated the differences between the adjusted and non-adjusted linear 100-year trends. It was found that in the two thirds of the cases, the homogenization procedure increased the positive or decreased the negative temperature trends.
One of the most common homogenization methods, ‘SNHT for single shifts’, was applied to synthetic time series with selected statistical characteristics, occasionally with offsets. The method was satisfactory when applied to independent data normally distributed, but not in data with long-term persistence.
The above results cast some doubts in the use of homogenization procedures and tend to indicate that the global temperature increase during the last century is between 0.4°C and 0.7°C, where these two values are the estimates derived from raw and adjusted data, respectively.
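For readers curious what the "SNHT for single shifts" test mentioned in the abstract actually computes, here is a minimal Python sketch (my own illustration, not the authors' code; the series length, the +1.0 offset and the variable names are invented for the example):

```python
import numpy as np

def snht_single_shift(x):
    """Standard Normal Homogeneity Test statistic for a single shift:
    T(k) = k*z1**2 + (n-k)*z2**2 on the standardized series z, where z1 and
    z2 are the means before and after candidate break k. The most likely
    break is the k with the largest T(k)."""
    z = (x - x.mean()) / x.std(ddof=1)
    n = len(z)
    k = np.arange(1, n)                  # candidate break positions
    csum = np.cumsum(z)[:-1]
    z1 = csum / k                        # mean of z[:k]
    z2 = (z.sum() - csum) / (n - k)      # mean of z[k:]
    T = k * z1**2 + (n - k) * z2**2
    return T.max(), int(k[np.argmax(T)])

rng = np.random.default_rng(1)
clean = rng.normal(size=100)             # independent, normally distributed "annual" values
shifted = clean.copy()
shifted[60:] += 1.0                      # synthetic inhomogeneity: +1.0 step after year 60

print(snht_single_shift(clean))          # small statistic: nothing to find
print(snht_single_shift(shifted))        # large statistic, break located near index 60
```

On independent noise the statistic behaves as the abstract describes; on strongly autocorrelated (long-term persistent) series it tends to flag spurious "breaks", which is the failure mode the authors report.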
Conclusions
1. Homogenization is necessary to remove errors introduced in climatic time series.
2. Homogenization practices used until today are mainly statistical, not well justified by experiments and are rarely supported by metadata. It can be argued that they often lead to false results: natural features of hydroclimatic time series are regarded as errors and are adjusted.
3. While homogenization is expected to increase or decrease the existing multiyear trends in equal proportions, the fact is that in 2/3 of the cases the trends increased after homogenization.
4. The above results cast some doubts in the use of homogenization procedures and tend to indicate that the global temperature increase during the last century is smaller than 0.7-0.8°C.
5. A new approach of the homogenization procedure is needed, based on experiments, metadata and better comprehension of the stochastic characteristics of hydroclimatic time series.
- Presentation at EGU meeting PPT as PDF (1071 KB)
- Abstract (35 KB)
h/t to “The Hockey Schtick” and Indur Goklany
UPDATE: The uncredited source of this on the Hockey Schtick was actually Marcel Crok’s blog here: Koutsoyiannis: temperature rise probably smaller than 0.8°C
Here’s a way to visualize the homogenization process. Think of it like measuring water pollution. Here’s a simple visual table of CRN station quality ratings and what they might look like as water pollution turbidity levels, rated as 1 to 5 from best to worst turbidity:
In homogenization, the data are weighted against the nearby neighbors within a radius. So a station whose data start out as a "1" might end up getting polluted with the data of nearby stations and end up at a new value, say a weighted "2.5". Even single stations can affect many other stations in the GISS and NOAA data homogenization methods carried out on US surface temperature data here and here.
In the map above, if you apply a homogenization smoothing that weights nearby stations by distance, what would you imagine the values (of turbidity) of the stations marked with question marks would be? And how close would those two values be for the east coast station in question and the west coast station in question? Each would be pulled closer to a smoothed average value based on the neighboring stations.
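To make the "pollution by neighbors" picture concrete, here is a toy Python sketch of distance-weighted blending (my own simplification — the coordinates, values, radius and the 50/50 blend are all invented; the actual GISS/NOAA procedures are considerably more involved):

```python
import numpy as np

def blended_value(own_value, own_xy, neighbor_xy, neighbor_values, radius=2.0):
    """Toy 'homogenization': blend a station's own value with a
    distance-weighted average of its neighbors within `radius`.
    Nearer neighbors get more weight (linear taper to zero at the radius)."""
    d = np.hypot(*(neighbor_xy - own_xy).T)           # distance to each neighbor
    near = d <= radius
    w = 1.0 - d[near] / radius
    neighbor_avg = np.sum(w * neighbor_values[near]) / np.sum(w)
    return 0.5 * own_value + 0.5 * neighbor_avg       # invented 50/50 blend for illustration

# A pristine "1" station surrounded by "2.5"-to-"3" neighbors drifts toward ~2
neighbor_xy = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
neighbor_values = np.array([3.0, 3.0, 2.5])
print(round(blended_value(1.0, np.array([0.0, 0.0]), neighbor_xy, neighbor_values), 2))  # ~1.94
```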
UPDATE: Steve McIntyre concurs in a new post, writing:
Finally, when reference information from nearby stations was used, artifacts at neighbor stations tend to cause adjustment errors: the “bad neighbor” problem. In this case, after adjustment, climate signals became more similar at nearby stations even when the average bias over the whole network was not reduced.
'Cherry-picked'? How could you possibly know? Who is the dogmatic one here?
This is based on a master's thesis (I presume) in Greek found here: http://itia.ntua.gr/en/docinfo/1183/
Similar abstracts, but sadly I don't understand Greek.
Rob Dawg said:
Every time we revisit this subject I just imagine the dedicated person in the 1930s trudging out to the station every day and recording the temperature, squinting to get that last tenth of a degree and wondering what they would think if they knew 80 years later some armchair climatologist was going to “adjust” their reading up three degrees.
Down 3 degrees is more likely…
vvenema 7-18 @6:23
I don't know what your "climate science" credentials are, but with comments like "That is why it is such a pity that the climate 'skeptics' are never at conferences. (Except for people like Roger Pielke, who do not deny climate change)" you prove you have a "doctorate in ignorance" when it comes to your knowledge of those who are skeptical of CAGW and the claims that man is the primary driver of climate change. Your clearly expressed disdain for skeptics shows you have no desire whatsoever to listen to counter-arguments and reasoning.
Jay Davis
****
Victor Venema says:
July 17, 2012 at 10:38 am
In practice homogenization makes the temperature trend stronger. This is because temperatures in the past were too high. In the 19th century many measurements were performed at north-facing walls; especially in summer, the rising or setting sun would still shine on these instruments. Consequently these values were too high, and homogenization makes them lower again. Similarly, the screens used in the first half of the 20th century were open to the north and to the bottom. This produced too-high temperatures on days with little wind and strong sun, as the soil would heat up and radiate at the thermometer.
****
Well, you’d need some pretty detailed photos and/or metadata to quantify that.
But let’s assume it for the moment. I’d think the “radiation” aspect would go both ways. Having an opening to the ground & north sky at nite would also lower the min temp on a calm nite (the ground surface cools first), no? Some rather simple experimental setups could prb’ly quantify it in a reasonable time. One would think with the billions & billions of bucks going to “climate research”, a couple setups like that wouldn’t be hard to do?
John Brookes says:
July 17, 2012 at 8:59 pm
It worries me that when I read these posts, I think, “Wow, there really is something dodgy about global warming!”. Then I read the comments, and somewhere there is always some “alarmist” who points out annoying details, and I start to doubt. How come just about every nail in the coffin of AGW seems to be made of jelly?
[REPLY – What it means is that WUWT, unlike nearly all alarmist blogs, does not censor contrary points of view. Science is a very back-and-forth kind of thing. Anyone can be wrong. Anything can be wrong. Consider that. ~ Evan]
===================================================
I’d like to add that the Nail in the Tree Ring is still firmly imbedded. Also the Wizard of COz’s predictions about what the increase in CO2 would do are already way off the mark. Those are the foundations of an erroneous hypothesis that is driving energy and economic policies. And civilization into the ground.
"Just about every nail in the coffin"? You're exaggerating, but it only takes one. There are many. It's the coffin of CAGW itself that is made of jelly.
John Finn : “solar activity was higher in 1940-1980 than it was in 1900-1940”
There are two main types of solar data: TSI and bright sunshine.
Bright sunshine changes may have played a role:
http://sunshinehours.wordpress.com/category/sunshine/
This subject is nothing new. I remember that Steve McIntyre posted on the NOAA TOB adjustments some 8 years ago. Like other homogenizations, the TOBs lowered the 1930s while increasing the post-1990 temps. Plus ça change….
With this tiny amount of warming in the surface temperature over the past century, I find it remarkable that the UAH tropospheric temperatures in the past 33 years have risen 0.46 degrees, and that the Greenland and Antarctic land ice packs are melting at accelerating rates, and that the northern hemisphere snow cover is dropping dramatically, and that the sea level continues to rise, and that arctic sea ice area and volume are dropping dramatically, and that the ocean heat content down to 2000 meters is rising inexorably, and that the tundra is melting, and that ……..
@Owen,
You may wish to re-evaluate your statements concerning the Arctic. Alaska as well as Northwest Europe are suffering through abnormally cold, wet summers; and the ice pack melt is not accelerating. And as far as sea level rises, have you ever visited Kwajalein Atoll in the Pacific in recent years? The atolls are still there, same as before. And you may wish to re-evaluate sea surface temps. Even with an approaching El Niño, there is nothing abnormal about them. As a matter of fact, other than the US, most of the world is having a normal to below-normal time of it temperature-wise these last several years.
beng says: “Some rather simple experimental setups could prb’ly quantify it in a reasonable time. One would think with the billions & billions of bucks going to “climate research”, a couple setups like that wouldn’t be hard to do?”
Has been done:
Böhm, R., P.D. Jones, J. Hiebl, D. Frank, M. Brunetti,· M. Maugeri. The early instrumental warm-bias: a solution for long central European temperature series 1760–2007. Climatic Change, 101, pp. 41–67, doi: 10.1007/s10584-009-9649-4, 2010.
Abstract. Instrumental temperature recording in the Greater Alpine Region (GAR) began in the year 1760. Prior to the 1850–1870 period, after which screens of different types protected the instruments, thermometers were insufficiently sheltered from direct sunlight so were normally placed on north-facing walls or windows. It is likely that temperatures recorded in the summer half of the year were biased warm and those in the winter half biased cold, with the summer effect dominating. Because the changeover to screens often occurred at similar times, often coincident with the formation of National Meteorological Services (NMSs) in the GAR, it has been difficult to determine the scale of the problem, as all neighbour sites were likely to be similarly affected. This paper uses simultaneous measurements taken for eight recent years at the old and modern site at Kremsmünster, Austria to assess the issue. The temperature differences between the two locations (screened and unscreened) have caused a change in the diurnal cycle, which depends on the time of year. Starting from this specific empirical evidence from the only still existing and active early instrumental measuring site in the region, we developed three correction models for orientations NW through N to NE. Using the orientation angle of the buildings derived from metadata in the station histories of the other early instrumental sites in the region (sites across the GAR in the range from NE to NW) different adjustments to the diurnal cycle are developed for each location. The effect on the 32 sites across the GAR varies due to different formulae being used by NMSs to calculate monthly means from the two or more observations made at each site each day. These formulae also vary with time, so considerable amounts of additional metadata have had to be collected to apply the adjustments across the whole network. Overall, the results indicate that summer (April to September) average temperatures are cooled by about 0.4°C before 1850, with winters (October to March) staying much the same. The effects on monthly temperature averages are largest in June (a cooling from 0.21° to 0.93°C, depending on location) to a slight warming (up to 0.3°C) at some sites in February. In addition to revising the temperature evolution during the past centuries, the results have important implications for the calibration of proxy climatic data in the region (such as tree ring indices and documentary data such as grape harvest dates). A difference series across the 32 sites in the GAR indicates that summers since 1760 have warmed by about 1°C less than winters.
scarletmacaw says: “That method sounds like it would do a very good job of finding discontinuities due to station moves, equipment changes, and microclimate changes. It doesn’t sound like it would solve the problem of a relatively slow increase in UHI, and might end up correcting the few non-UHI stations in the wrong direction.”
In case of a slow increase, you would see such a slow increase in the difference time series as well. You can correct such local trends with several small breaks. The pair-wise homogenisation method used for the USHCN explicitly corrects local trends, if I remember correctly.
In our validation study of homogenisation algorithms we also inserted local trends to simulate the UHI effect or the growing of bushes around the station. See our open-access paper:
http://www.clim-past.net/8/89/2012/cp-8-89-2012.html
cd_uk says: "Is the point of the article not, as one would expect for the UHI effect, that most station data would be adjusted up rather than adjusted down, as in the homogenised data?"
I am not sure I understand the question. If the UHI effect were the only inhomogeneity in the climate records, you would expect homogenization to reduce the trend. That homogenization increases the trend shows that other inhomogeneities are more important.
cd_uk says: "The other point I'd add is that most urbanisation would be gradual and therefore may not be identified by a discrete jump."
There are also homogenization methods that correct using a local trend over a certain period. The others detect such local trends as a number of small jumps and thus also correct them with several small jumps, which seems to work just as well.
cd_uk says: “Furthermore, if your homogenisation uses neighbouring stations suffering from the same process the effect would be to push the temperatures up.”
Yes, that would be a problem. If more than half of the data is affected by a local trend, it becomes impossible to distinguish a true climate trend from a local trend. I did not study it myself, but what I understood is that even if all of the stations were in urban areas, not all of the data would be affected by a trend due to the urban heat island, because such temperature trends only happen during part of the urbanisation. In the centre of large urban areas the temperature is no longer increasing, but sits at a fixed higher level, which does not cause any problems for computing trends.
cd_uk says: “As for your qualification on temperature homogenisation thanks. I think the result will still be the same – smoothing.
interpolated = diff + observed”
The right equation for the correction of a homogeneous subperiod is:
homogenised(t) = observed(t) + constant
with
constant = mean( diff(after break) ) – mean( diff(before break) )
(This would assume that you correct one break after the other, modern methods correct all breaks simultaneously, which is more accurate, but makes the equation complicated; the idea is basically the same.) No smoothing.
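To make those equations concrete, here is a minimal Python sketch, assuming (my reading of the comment) that "diff" is the candidate-minus-reference difference series and that the constant is applied to the subperiod before the detected break; the numbers are invented:

```python
import numpy as np

def correct_single_break(candidate, reference, break_idx):
    """Adjust the homogeneous subperiod before `break_idx` by one constant:
        constant = mean(diff after break) - mean(diff before break)
        homogenised(t) = observed(t) + constant
    where diff = candidate - reference (assumed interpretation)."""
    diff = candidate - reference
    constant = diff[break_idx:].mean() - diff[:break_idx].mean()
    homogenised = candidate.copy()
    homogenised[:break_idx] += constant          # one number added to the whole subperiod
    return homogenised

t = np.arange(60)
reference = 0.01 * t                             # toy shared regional signal
candidate = 0.01 * t - 0.5 * (t < 30)            # station read 0.5 low before a move at t = 30
print(correct_single_break(candidate, reference, 30)[:5])   # early values shifted up by 0.5
```

Whether adding a single per-subperiod constant counts as "smoothing" is exactly what is argued over in the exchange below; the sketch only shows what the formula does.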
vvenema
Thanks for getting back. It is good to see that you don’t just dismiss points out-of-hand.
1) My argument was that assuming, and it is a fair assumption, that the UHI inflates the temperature then surely the adjustments should, around urban centres, have a net downward effect. This doesn’t appear to be the case in the adjusted time series.
2) The point about gradual trends does appear more complicated and I can’t see how homogenisation can address these at all.
3) In short the main issue I have with the approach is that you're assuming that homogenisation produces a more accurate story because the outcome matches the outcome one would expect of the processing procedure. This only validates that the procedure works as one would expect: if you ran a simple smooth operation on a noisy image you would reduce the appearance of noise in the image but that doesn't mean it is a more accurate representation of the object being imaged.
4) Where does this constant in your equation come from? You say it is the
mean( diff(after break) ) – mean( diff(before break) )
Is this the mean difference between the neighbouring stations and the candidate station? If so, the effect is exactly the same as a smooth. Your first term "mean( diff(after break) )" is a smoothing operation and the second "mean( diff(before break) )" is a smooth. You're effectively finding the difference between two "convolutions" that smooth the data and applying this to your station data. The effect is always a type of smoothing, and yes, the degree of smoothing is a function of the constant (not just the spatial arrangement), but it is still a smoothing.
cd_uk says: “1) My argument was that assuming, and it is a fair assumption, that the UHI inflates the temperature then surely the adjustments should, around urban centres, have a net downward effect. This doesn’t appear to be the case in the adjusted time series.”
That is because the UHI effect is typically small compared to the other inhomogeneities.
cd_uk says: “2) The point about gradual trends does appear more complicated and I can’t see how homogenisation can address these at all.”
Instead of using a constant, which amounts to a step function, you can also use a linear function which changes in time. Alternatively, you can detect and correct a gradual change by multiple small jumps in the same direction. In practice both methods work fine, other problems are more important.
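A hedged sketch of the first option just described — correcting a gradual inhomogeneity with a linear ramp estimated from a candidate-minus-reference difference series (my own toy construction; operational algorithms estimate the ramp statistically and differ in detail, and correcting the later period rather than re-anchoring the earlier one is just one possible convention):

```python
import numpy as np

def ramp_correction(candidate, reference, start, end):
    """Remove a gradual inhomogeneity (e.g. a slowly growing urban influence)
    with a linear ramp between `start` and `end`, instead of one constant."""
    diff = candidate - reference
    drift = diff[end:].mean() - diff[:start].mean()        # total spurious drift
    t = np.arange(len(candidate))
    ramp = np.clip((t - start) / (end - start), 0.0, 1.0) * drift
    return candidate - ramp

t = np.arange(100)
reference = np.zeros(100)
candidate = np.clip((t - 40) / 40.0, 0.0, 1.0) * 0.6        # 0.6 °C of gradual drift, years 40-80
print(ramp_correction(candidate, reference, 40, 80)[[0, 60, 99]])   # drift removed (all ~0)
```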
cd_uk says: "3) In short the main issue I have with the approach is that you're assuming that homogenisation produces a more accurate story because the outcome matches the outcome one would expect of the processing procedure. This only validates that the procedure works as one would expect: if you ran a simple smooth operation on a noisy image you would reduce the appearance of noise in the image but that doesn't mean it is a more accurate representation of the object being imaged."
It operates as expected and this is the operation we need to remove inhomogeneities. I see no problem, just confusion.
cd_uk says: “4) Where does this constant in your equation come from? You say it is the
mean( diff(after break) ) – mean( diff(before break) )”
The constant added to the entire homogeneous subperiod is:
mean( diff(homogeneous period after break) ) – mean( diff(homogeneous period before break) )
That is one number you add to the raw data for a certain homogeneous subperiod. If you want to see smoothing in this, I cannot help you. Keep reading WUWT.
In the .pdf, on the 7th page, there is this, which actually doesn't surprise me:
● No single case of an old and a new observation station running for some time together for testing of results is available!
See, now this is just brain dead. What scientist wouldn’t think of doing this? And when ALL of them fail to think of it, it just boggles the mind. It has NEVER been done? Geez…
You know, folks, it isn’t too late to do this somewhere, at least once. And if they do, they should not only calibrate the new one to the old one, but then ALSO run a THIRD one (a second new one) for a long period of time, for ongoing comparison. In theory the two new ones should track 100% parallel. Empirically? Who knows?
Steve Garcia
I'd point out that if 2/3 of the adjusted results were LOWER (instead of higher), the 0.42C raw figure would have become not 0.76C after adjustment, but 0.08C. If that negative direction were their end result, they'd be yammering about how constant the climate is. But that doesn't suit their alarmism. There is no money ever going to be granted to tell us the climate is super stable.
Also, the fact that the overall adjusted delta is 0.34C above the raw average rise of 0.42C means that the 2/3 high adjusted values were considerably above that 0.34C delta value. My simplistic brain suggests those must have been twice that 0.34C, since that group (2/3) was twice the size of the adjusted-low group (1/3). If so, the adjusted-high group was 0.68C higher than what the raw data showed (0.42C), or 1.10C. And the adjusted-low group averaged +0.08C, 0.34C below their raw values. For 2/3 of the stations to cause a rise when 1/3 is showing a drop after adjusting, the 2/3 has to have half the rise per average station (adjusted) of what the 1/3 group is pulling the average down. A 1.10C 2/3 group is 0.34C above the 0.76C result, and the 0.08C 1/3 group is 0.68C below the 0.76C.
I probably got that wrong, but it seems correct right now. For the overall average to rise by 0.34C while 1/3 of them actually show a decline, the two populations have to have a serious difference in their adjusted values. This is not some small variation between the two groups. The difference between the two groups is 1.02C. I am sorry, but I have to say that that big a difference doesn’t happen by accident. If the 1/3 group had also risen after adjustment, but to a lesser extent, then this could be seen as trivial or accidental. But since that 1/3 group DID actually decline after the adjustments, we are left with no other conclusion than that the 2/3 group was not only increased intentionally, but that the values were intentionally large.
Steve Garcia
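Taking the figures in the comment above at face value (and assuming equal station counts within each group, as the comment does), a quick arithmetic check that the proposed split reproduces the quoted averages:

```python
raw_trend       = 0.42   # °C per century, raw average quoted above
up_group_mean   = 1.10   # °C, hypothesised average of the 2/3 of stations adjusted upward
down_group_mean = 0.08   # °C, hypothesised average of the 1/3 of stations adjusted downward

adjusted_mean = (2 / 3) * up_group_mean + (1 / 3) * down_group_mean
print(round(adjusted_mean, 2), round(adjusted_mean - raw_trend, 2))   # 0.76 and the 0.34 delta
```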
vvenema
“That is because the UHI effect is typically small compared to the other inhomogeneities.”
No, the processing suggests that the UHI effect is small – again, refer to the smoothing algorithm as carried out on a noisy image.
“Instead of using a constant, which amounts to a step function, you can also use a linear function which changes in time. Alternatively, you can detect and correct a gradual change by multiple small jumps in the same direction. In practice both methods work fine, other problems are more important.”
Yes, but then you're making all the same assumptions, are you not? You're assuming that the homogenisation improves the quality of the data rather than just processing it in order to produce another bias. I agree that the homogenisation method, as a method for finding anomalies, is great, but that doesn't mean that it identifies instrumental error – in short, that's an assumption, not an experimental finding.
“It operates as expected and this is the operation we need to remove inhomogeneities. I see no problem, just confusion.”
Yes, but again you're assuming that the inhomogeneities degrade the accuracy of the final result. Again, per the image analogy: who is to say that the subject is not inherently noisy?
“That is one number you add to the raw data for a certain homogeneous subperiod. If you want to see smoothing in this, I cannot help you. Keep reading WUWT.”
No! Do you understand convolution? What you're finding is the average, finding the difference and then adding the difference – jeez, man – THAT GIVES YOU THE AVERAGE! You are doing this for two different points in time (both an average: a smooth, as described)! Then you're modifying the final average by this temporal difference. In short you are MODULATING the average by the difference – that is all! IT'S STILL A SMOOTH!
As for "keep reading WUWT" – where do you suggest I read? Your blog? Why? You don't seem to understand the nature of filtering – which, by the way, is what homogenisation is – and there are two main types of filtering: low pass and high pass. So which one is homogenisation? Low pass, which is a bit like smoothing.
@Gail Combs:
George E. Smith is another Smith… I’m E.M. Smith (but the E is the same for both of us 😉
Yeah, I know, naming Smiths is functionally “Anonymous Anonymous”… I had a guy with an identical name in my Calculus class at U.C. and we had to use our student ID numbers to keep us straight, so I’m used to it…
IMHO the cut-off of thermometer records causes all sorts of subtle mayhem. Not the least of which is that the majority of thermometers are now at airports and they are used for comparison to non-airports in the past. (And despite the humorous claim of 'cooler airports' above, anyone who has been at an airport on the tarmac knows it's a hot place.) Oh, and with much more vertical mixing (which raises temps) and with much less transpiration (that cools natural green landscapes – plants have a built-in air conditioner via evaporation and work to maintain a limited maximum temperature; concrete and tarmac do not…)
So a “grid box average” is made from one set of thermometers richer in green fields, and then compared with a “grid box average” full of modern large jet airports data. Just nuts on the face of it.
There are not nearly enough instruments in the record for the total number of grid boxes (last time I looked there were 8,000 grid boxes and only 1280 or so current GHCN stations in GIStemp; since then they went to 16,000 grid boxes… and the maximum number of thermometers in the past was about 6,000, with most of THEM in two geographies: Europe and the USA). So by definition most of the grid box values are a complete and utter fiction. One can only hope they are representative of something real.
On top of that, then, comes the issue of “homogenizing” what little real data exists.
One comment on the WalMart Thermometer model:
As you walk up to the bin, you notice that the sun is shining on half of it, but not on the other half. Someone just watered the plants behind the display and some overspray happened, but you don’t know what instruments it hit as they have mostly dried off. They were all made in China on machinery designed in C but are painted in F. The thermistors came from 4 batches, 3 of them with a tendency to read low by a fixed amount, the other batch with a 3 sigma variation between individual values ( Operator of QA station needed a tea break…)
Now what is your homogenizing going to do for you? Hmmm???
“Crap is Crap. Averaging it together gives you average crap. -E.M.Smith”
I also find it humorous that some folks are all a-twitter about the variation in one screen type vs. another in the same location; but blissfully certain that putting a thermometer in a grass field in 1900 can be compared to miles of concrete and tarmac with tons of kerosene burned per hour today and not have the slightest worry… (Most large airports today that make up the bulk of GHCN current data were grass fields in 1900 and often even into the 1940s…)
Guess it’s “climate science”… /sarcoff>;
I’m going to wander through the rest of the comments later, but it looks like the usual “Warmers asserting if you do just right just like they do everything is perfect” and other folks saying “Um, looks like crap data to me.”…
So if you actually look at the temperature data, you find large dropouts. Most obvious is the dropout during times like the World Wars. Then there are the whole countries that just drop out (as the creators of GHCN are just sure you only need a nearby country to fill in the missing ones today…).
Now there are two basic ways to fix those "dropouts". One is the "infill and homogenize". The other is "bridge the gap". In the first case you make up fictional values and fill them in. In the second case you look at ONE instrument and ONE location and ONE time period (like June) and make the assumption that "June in Sacramento" is more like another "June in Sacramento" than anything else; and if you have two values with a gap between them you can do some kind of interpolation between them. (This fails if the gap is long enough to cover 1/2 of a major cycle, so for a 30 year dropout you could miss a PDO 1/2 cycle – yet the first and last data would be unmolested and the infill would at most dampen the global trend excursion in between by a very small amount while not introducing longer term bias. IFF dropouts are modestly uniformly distributed this ought to be acceptable.)
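A minimal sketch of the "bridge the gap" idea for one station and one calendar month (my own toy code, not any agency's procedure; the years and values are invented):

```python
import numpy as np

def bridge_gap(years, values):
    """Fill missing years by interpolating within this station's own record
    for this one month, instead of infilling from neighbouring stations."""
    values = np.asarray(values, dtype=float)
    known = ~np.isnan(values)
    filled = values.copy()
    filled[~known] = np.interp(years[~known], years[known], values[known])
    return filled

years = np.arange(1938, 1948)
junes = np.array([21.3, 21.1, np.nan, np.nan, np.nan, 21.6, 21.4, np.nan, 21.8, 21.5])  # wartime dropout
print(bridge_gap(years, junes))
```

As noted above, a straight-line bridge will damp any real excursion that falls entirely inside a long gap, so it is only defensible when dropouts are reasonably short and evenly distributed.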
Because of the massive amount of “missing data” from dropped instruments, the bulk of all cells are filled with fabricated values based on ‘homogenizing’ and infill from what instruments do exist (mostly at airports near the runways… Airport thermometers MUST report a reasonably accurate runway temperature or folks get a wrong “density altitude” calculation and can crash. They simply can not be in the nice green grassy or treed area nearby and do their primary job. Concrete and asphalt runways are significantly warmer than nearby forests and green fields.) On the face of it, this is just a bogus thing to do.
Instead, those airport thermometers ought to have their trend calculated ONLY with respect to themselves just as the “nearby” grassy / treed areas ought to have their trend calculated ONLY with respect to themselves. One ought not be used to “fill in” or “homogenize” the other.
So, IMHO, it is the interaction of increasing Airports in the data, the dropping of treed / grassy / truly rural instruments, and the infilling and homogenizing all those airport data into the now missing grassy / treed areas that causes the problem.
This shows up rather dramatically when you inspect the range of the monthly averages of thermometers. Either in large aggregate or in smaller size areas (down to even the scale of a dozen or so in some countries.) There is a consistent “artifact” in the recent data. As of about 1987 – 1990 the “low excursions” just get squashed. The graphs have a ‘bottle brush’ effect where the older data have much wider ranges and the recent data approximate a slightly wobbling line. The highs do not go higher, but the low excursions are washed out. IMHO this is direct evidence for “the problem” in the data. I just have not been able to show if it is an artifact of the massive homogenizing lately, the ‘infilling’ of missing data, the “QA Process” that can replace a value with “The average of nearby ASOS stations” at airports, or simply the fact that the recent data come from electronic devices at airports and it just doesn’t get very cold there. (Never a ‘still air cold night’ with a deep cold surface layer as jumbo jets come and go.)
A good example is this “hair graph” of the Pacific Basin GHCN data. Notice how much it gets “squashed” recently. All the variability just ironed out of it:
http://chiefio.files.wordpress.com/2010/04/pacific_basin_hair_seg.png
From: http://chiefio.wordpress.com/2010/04/11/the-world-in-dtdt-graphs-of-temperature-anomalies/
Until that very unusual anomaly in the data distribution is explained, the data are “not fit for purpose” if your purpose is to say what long term temperature trends have been via a homogenize / grid-box / infill method.
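For anyone who wants to look for the "squashed low excursions" in their own copy of the data, here is one rough way to quantify it (a sketch only; the column names and toy numbers are invented, and other spread measures would do just as well):

```python
import pandas as pd

def yearly_excursion_range(df):
    """For a long-format table with columns ['station', 'year', 'month', 'anomaly'],
    return the across-station range of monthly anomalies, averaged over the
    months of each year - if the 'hair' is being squashed, this shrinks."""
    per_month = df.groupby(["year", "month"])["anomaly"]
    monthly_range = per_month.max() - per_month.min()
    return monthly_range.groupby(level="year").mean()

toy = pd.DataFrame({
    "station": ["A", "B", "A", "B"],
    "year":    [1950, 1950, 2000, 2000],
    "month":   [1, 1, 1, 1],
    "anomaly": [-3.0, 2.0, -0.5, 0.6],
})
print(yearly_excursion_range(toy))    # 1950: 5.0, 2000: ~1.1 - the spread has shrunk
```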
Was there not a reference to Richard Muller and BEST in the original version of the post?
feet2thefire says:”
● No single case of an old and a new observation station running for some time together for testing of results is available!
…
See, now this is just brain dead. What scientist wouldn’t think of doing this? And when ALL of them fail to think of it, it just boggles the mind. It has NEVER been done? Geez… ”
How about these papers?
From a time before man-made climate change:
Margary, I.D., 1924. A comparison of forty years’ observations of maximum and minimum temperatures as recorded in both screens at Camden Square, London. Q.J.R. Meteorol. Soc., 50:209-226 and 363.
Marriott, W., 1879. Thermometer exposure — wall versus Stevenson screens. QJ.R. Meteorol. Soc., 5:217-221.
Or from a reliable Dutchman:
Brandsma, Theo. Parallel air temperature measurements at the KNMI-terrain in De Bilt (the Netherlands) May 2003-April 2005, Interim report, 2004. http://wap.knmi.nl/onderzk/klimscen/papers/Hisklim7.pdf
Van der Meulen, J.P. and T. Brandsma. Thermometer screen intercomparison in De Bilt (The Netherlands), Part I: Understanding the weather-dependent temperature differences) Int. J. Climatol, 28, pp. 371-387, doi: 10.1002/joc.1531, 2008.
Or from a reliable Norwegian guy:
Nordli, P. Ø. et al. The effect of radiation screens on Nordic time series of mean temperature. International Journal of Climatology 17(15), doi: 10.1002/(SICI)1097-0088(199712)17:153.0.CO;2-D, pp. 1667-1681, 1997.
Or from a reliable Austrian guy:
Böhm, R., P.D. Jones, J. Hiebl, D. Frank, M. Brunetti,· M. Maugeri. The early instrumental warm-bias: a solution for long central European temperature series 1760–2007. Climatic Change, 101, pp. 41–67, doi: 10.1007/s10584-009-9649-4, 2010.
Or from a reliable Spanish lady:
Brunet, M. Asin, J. Sigró, J. Bañón, M. García, F. Aguilar, E. Palenzuela, J.E. Peterson, TC. Jones, PD. 2011. The minimisation of the “screen bias” from ancient Western Mediterranean air temperature records: an exploratory statistical analysis. Int. J. Climatol., 31: 1879-1895 DOI: 10.1002/joc.2192.
You will find many more older references on different weather shelters and their influence on the mean temperature in:
Parker, D.E. Effects of changing exposure of thermometers at land stations. Int. J. Climatol., 14, pp. 1–31, 1994.
I did not read all of these papers yet, but I guess the titles are already sufficient to disprove the original claim that there are no parallel measurements to validate the breaks found during homogenization. It is just not the kind of literature that makes it into Science, Nature or the New York Times. Luckily some colleagues still do it because it is important work.
And yes, some of these papers have Phil Jones as co-author. If Phil Jones were not interested in homogenization, you would also complain.
Ah, thank you Anthony, the paper was an interesting read, especially the temperature data graphs for “De Bilt station – The Netherlands” and “Sulina station – Romania”, both of which terminated in the year 1990.
Cheers 🙂
Haven’t we been through this before? Berkeley Earth Surface Temperature reconstruction.
Scientific meetings like EGU (and the similar AGU Fall Meeting in the US) are an opportunity to present novel analyses to a broader audience of scientists and field specialists, in order to gain feedback prior to considering publication. These abstracts/presentations are NOT peer-reviewed literature and should NOT be considered anything more than scientific speculation. This is how the process works.
How come you lobbyists only manage to find non-peer-reviewed abstracts (not even proper papers) to support your claims? Anyone can put an abstract into EGU. This has no validity whatsoever.