Circularity of homogenization methods

Guest post by David R.B. Stockwell PhD

I read with interest GHCN's Dodgy Adjustments In Iceland by Paul Homewood, on the distortion of the mean temperature record for Stykkisholmur, a small town in western Iceland, by GHCN homogenization adjustments.

The validity of the homogenization process is also being challenged in a talk I am giving shortly in Sydney, at the annual conference of the Australian Environment Foundation on the 30th of October 2012, based on a manuscript uploaded to the viXra archive, called “Is Temperature or the Temperature Record Rising?”

The proposition is that commonly used homogenization techniques are circular — a logical fallacy in which “the reasoner begins with what he or she is trying to end up with.” Results derived from a circularity are essentially just restatements of the assumptions. Because the assumption is not tested, the conclusion (in this case the global temperature record) is not supported.

I present a number of arguments to support this view. 

First, a little proof. If S is the target temperature series, and R is the regional climatology, then most algorithms that detect abrupt shifts in the mean level of temperature readings, also known as inhomogeneities, come down to testing for changes in the difference between R and S, i.e. D=S-R. The homogenization of S, or H(S), is the adjustment of S by the magnitude of the change in the difference series D.

When this homogenization process is written out as an equation, it is clear that homogenization of S is simply the replacement of S with the regional climatology R.

H(S) = S-D = S-(S-R) = R

While homogenization algorithms do not apply D to S exactly, they do apply the shifts in baseline to S, and so coerce the trend in S to the trend in the regional climatology.

The coercion to the regional trend is strongest in series that differ most from the regional trend, and happens irrespective of any contrary evidence. That is why “the reasoner ends up with what they began with”.
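To make the coercion concrete, here is a minimal sketch in Python. It is illustrative only: the trend values, noise levels and the crude largest-shift detector are assumptions for the example, not the algorithm of any particular agency, but the same logic (adjust S by baseline shifts detected in D = S − R) is what drags the trend of S toward the trend of R.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100                                   # years of annual data
t = np.arange(n)

# Assumed toy series: the regional climatology R warms, the target S cools.
R = 0.010 * t + rng.normal(0.0, 0.2, n)   # ~ +1.0 C/century
S = -0.005 * t + rng.normal(0.0, 0.2, n)  # ~ -0.5 C/century

def largest_shift(d, guard=10):
    """Index splitting d into the two segments with the largest mean difference."""
    best_k, best_gap = None, 0.0
    for k in range(guard, len(d) - guard):
        gap = abs(d[:k].mean() - d[k:].mean())
        if gap > best_gap:
            best_k, best_gap = k, gap
    return best_k

# Toy "homogenization": repeatedly detect the largest baseline shift in the
# difference series D = H - R and remove that step from the target series.
H = S.copy()
for _ in range(3):
    d = H - R
    k = largest_shift(d)
    step = d[k:].mean() - d[:k].mean()
    H[k:] -= step                         # align the later segment's baseline with R

def trend_per_century(x):
    return np.polyfit(t, x, 1)[0] * 100.0

print(f"R trend: {trend_per_century(R):+5.2f}  S raw: {trend_per_century(S):+5.2f}  "
      f"H(S): {trend_per_century(H):+5.2f}  (C/century)")
# The adjusted series ends up much closer to R's trend than to the raw S trend.
```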

Second, I show bad adjustments like Stykkisholmur's, from the Riverina region of Australia. This area has good, long temperature records and has also been heavily irrigated, so it might be expected to show less warming than other areas. With a nonhomogenized method called AWAP, a surface fit of the temperature trend over the last century shows cooling in the Riverina (circle on map 1 below). A surface fit with the recently developed, homogenized ACORN temperature network (map 2) shows warming in the same region!

Below are the raw minimum temperature records for four towns in the Riverina (in blue). The temperatures are largely constant or falling over the last century, as are their neighbors (in gray). The red line tracks the adjustments in the homogenized dataset, some over a degree, that have coerced the cooling trend in these towns to warming.

[Figure: raw minimum temperatures (blue), neighbouring stations (grey) and homogenization adjustments (red) for four Riverina towns]

It is not doubted that raw data contain errors. But independent estimates of the false alarm rate (FAR) using simulated data show that regional homogenization methods can exceed 50%, an unacceptably high rate far beyond the 5% or 1% error rates generally accepted in scientific work. Homogenization techniques are adding more errors than they remove.

The problem of latent circularity is a theme I developed on the hockey-stick, in Reconstruction of past climate using series with red noise. The flaw common to the hockey-stick and homogenization is “data peeking” which produces high rates of false positives, thus generating the desired result with implausibly high levels of significance.

Data peeking allows you to delete whatever data you need to achieve significance, to use random-noise proxies to produce a hockey-stick shape, or, in the case of homogenization, to adjust a deviant target series into the overall trend.
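The effect of data peeking on false alarm rates can be illustrated with a small simulation. The sketch below uses a plain two-sample t-test rather than any published homogenization test; it is only meant to show the multiple-testing effect: scanning every candidate breakpoint in a purely homogeneous noise series and keeping the most significant split produces "detections" at a rate far above the nominal 5%.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_series, n, guard = 1000, 100, 10

false_alarms = 0
for _ in range(n_series):
    d = rng.normal(0.0, 1.0, n)          # homogeneous difference series: no real break
    # Data peeking: try every candidate breakpoint and keep the most significant one.
    best_p = 1.0
    for k in range(guard, n - guard):
        _, p = stats.ttest_ind(d[:k], d[k:])
        best_p = min(best_p, p)
    if best_p < 0.05:                    # declare an inhomogeneity at the nominal 5% level
        false_alarms += 1

print(f"false alarm rate: {false_alarms / n_series:.0%}")
# Keeping the best p-value over all candidate breaks yields a false alarm rate
# far above the nominal 5% unless the threshold is corrected for the search.
```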

To avoid the pitfall of circularity, I would think the determination of adjustments would need to be completely independent of the larger trends, which would rule out most commonly used homogenization methods. The adjustments would also need to be far fewer, and individually significant, as errors no larger than noise cannot be detected reliably.

137 Comments
richardscourtney
October 15, 2012 7:43 am

David Stockwell:
I think you will want to read
http://www.publications.parliament.uk/pa/cm200910/cmselect/cmsctech/memo/climatedata/uc0102.htm
especially its Appendix B.
Richard

Ed Reid
October 15, 2012 7:53 am

It seems 'very late in the game' to be discussing such fundamental issues. We've spent more than $100 billion in the US alone on climate research; and we've basically "screwed up" the analysis of the data. Brilliant!

Tom G(ologist)
October 15, 2012 8:17 am

What has been spent on climate research is as nothing compared to what has been spent on hazardous site remediation since the Stupor-fund was implemented in 1980. I can attest personally that what is being presented in this thread is virtually identical to the way groundwater flow models have shaped the interpretation of groundwater quality data, which in turn has resulted in vastly excessive efforts to remediate groundwater "contamination" which has not been a threat to anyone or anything, and which have proved a complete bust because the little bits of 'contamination' targeted by the circular-reasoning decision-making process are not remediable by any means other than leaving them alone and letting nature take its course – which DOES result in remediation. It has been the largest, money-wasting, do-nothing cluster-f@*&. What upsets me more than anything about the climate nonsense is that I am watching the inexorable glaciality of the EPA do it all over again – with the same batch of cretins who have simply been shifted from one division to another.
Stockwell has it right. Begin with end in mind and you can interpret whatever you want.

October 15, 2012 8:24 am

The equation: H(S) = S-D = S-(S-R) = R is wrong.
You do use the difference time series (D) to determine the size of the jump, but you do not replace all values by the ones in the regional climate signal.
H(S) = S + d
d = d1 – d2
d1 = mean(D) in the homogeneous period before the jump
d2 = mean(D) in the homogeneous period after the jump
d is a value not a time series. And a good homogenisation method does not assume that R is homogeneous.
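In code, the jump-size adjustment described above can be sketched as follows. This is a minimal illustration, not any operational method: the difference series is taken as D = S − R as defined in the post, and the convention of shifting the post-break segment is an assumption (methods differ on which segment is adjusted).

```python
import numpy as np

def adjust_jump(S, R, k):
    """Remove a single known break at index k in the target series S by aligning
    the baseline of the difference series D = S - R on both sides of the break."""
    D = S - R
    d1 = D[:k].mean()    # mean of D in the homogeneous period before the jump
    d2 = D[k:].mean()    # mean of D in the homogeneous period after the jump
    d = d1 - d2          # a single adjustment value, not a time series
    H = S.copy()
    H[k:] += d           # shift only the post-break segment; S is not replaced by R
    return H
```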
For more information on homogenisation see:
http://variable-variability.blogspot.com/2012/08/statistical-homogenisation-for-dummies.html
Homogenisation is used to be able to study large-scale climate variability in a more accurate way. Removing the too-low trend for an irrigated region is what homogenisation is supposed to do, just as it should remove the too-high trend in case of urbanisation. If you are interested in the local climate (either for an agricultural study (irrigation) or for urban climates) you should not use homogenised data, or you should make sure that you have multiple stations in the same climatic region which are all affected by the irrigation in a similar way.
David Stockwell, why don’t you submit an abstract at the General Assembly of the European Geophysical Union? There you would get more qualified feedback on the quality of your work.
http://meetingorganizer.copernicus.org/EGU2013/session/11619

October 15, 2012 8:35 am

@Victor Venema:
What gives us any confidence that we are able to reliably identify the "too high trend in the case of urbanization" when most of the data suffers from urbanization to some unknown degree? We simply do not have the control we need over the time period of study. We assume we know more than we really do.

October 15, 2012 8:38 am

Circularity or circular reasoning is the bias introduced by politically motivated “post modern science” that promotes subjective research. Temperature measurements are just one example of circular reasoning found in climate research. It is introduced as the primary “driving force” in both global mass and energy balance models.

Alan S. Blue
October 15, 2012 8:47 am

A standard thermometer with a 0.1 C instrumental error (corrected for adiabatic lapse rate and humidity) does not have a 0.1 C error on the measurement of the temperature 100 m from it. The error is larger. And there is no guarantee that the error is even centered, let alone normally distributed.
So the entire homogenization process has issues. Figuring out how to interpolate into all of the areas in which there simply are no measurements should not be influencing the few actual measurements that are present.
That is: There’s nothing fundamentally in error in having two thermometers 20km apart that read 2 degrees different. It’s quite possible the actual temperature -is- 2 degrees different. Likewise – one can envision reasons why temperature rising slightly over here might not be matched over there. (This is a forested area, that’s a desert. Or a mountainous valley. Or has wind from the south.)

October 15, 2012 9:02 am

Circular arguments are the hallmark of IPCC climate science. The most fundamental one is the assumption that a CO2 increase causes a temperature increase. This is then built in to the computer models, which are constructed with most variables omitted or poorly known.
http://drtimball.com/2012/climate-change-of-the-ipcc-is-daylight-robberyclimate-change-of-the-ipcc-is-daylight-robbery/
My favourite omission is this one;
“Due to the computational cost associated with the requirement of a well-resolved stratosphere, the models employed for the current assessment do not generally include the QBO.”
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch8s8-4-9.html
All this guarantees that the model will produce an increase in temperature with an increase in CO2, a result then used to argue that the original assumption is correct.
The problem was that nature did not cooperate: despite claims that atmospheric CO2 continued to rise (another self-constructed treadmill, because the IPCC produces the annual human production numbers), temperature levelled off and declined slightly. This decline occurred despite the apparent best efforts of NASA GISS and others. The problem was recognized at East Anglia not as a scientific problem but as a PR one by their communications expert Asher Minns, a science communicator at the Tyndall Centre on the same campus. In one of the leaked emails he wrote,
"In my experience, global warming freezing is already a bit of a public relations problem with the media."
Kjellen: "I agree with Nick that climate change might be a better labelling than global warming."
Hopefully another form of circular argument will occur, namely that of the fate of the mythical Oozlum bird that flew in ever decreasing circles to fundamentally disappear.
http://en.wikipedia.org/wiki/Oozlum_bird

October 15, 2012 9:23 am

Stephen Rasey says: "@Victor Venema: What gives us any confidence that we are able to reliably identify the "too high trend in the case of urbanization" when most of the data suffers from urbanization to some unknown degree?"
If most of the data suffers from urbanization, homogenization would not remove the additional trend due to urbanization and the resulting trend would not be representative for the large scale climate.
If you know of a study that shows that most of the stations are affected by urbanization most of the time, please let me know. That would be interesting, as that would go against our current understanding of the problem that no more than a few percent of the data are affected by urbanization.

pdtillman
October 15, 2012 9:28 am

Dr. Stockwell's presentation of the raw vs "corrected" temperature records from the Australian Riverina is a striking demonstration of the "cool is warm" (or "Lies are Truth") mindset in these apparently confirmation-biased adjustments. We really need a third-party statistical reanalysis of national climate records, published in a respected journal and then appropriately publicized.

Leo Morgan
October 15, 2012 9:38 am

Much as I’d like to be able to conclude that you have disclosed a fundamental error in calculating temperatures, and therefore there is no need to worry about ‘thermageddon’, all I can realistically conclude is that you have not communicated clearly.
I do not mean to give offence. With respect, someone must convey to you how you need to amend your article to write clearly for an intelligent non-technical audience such as the WUWT readership, or even for newspapers.
I make the following comments in the hope you’ll take them seriously, and ideally, revise your article so that I (and presumably many others) can comprehend it.
Give me clear directions. Don’t worry about insulting my intelligence, just make sure you’re clear.
The primary rule of clear communication is that the meaning of your sentence can be determined from the meaning of the words you use in the sentence.
I acknowledge that jargon that disobeys this rule is often developed in many fields. This is why jargon is regarded as unclear, cryptic and obfuscatory. The claim is sometimes made that jargon permits researchers in a field to say things that cannot be said without it. That’s sometimes true, but most often it is a guise for fuzzy thinking. In any case it should be avoided or explained when writing for a non-technical audience.
An example is your expression “These techniques are circular.” With respect, that should be written as “These techniques are faulty because they use circular reasoning.” The techniques themselves can be examined for years without ever displaying a circular shape
At least there I understood your point.
When you get onto your proof, you are more obscure.
With the line "If S is the target temperature series…" I infer that you are denoting a set of temperature measurements that you wish to homogenise. Does this make it a target? Are you talking about a set of measurements from a single station, or from a group, or doesn't it matter? What time period are you talking about? It may be that the time period doesn't matter – if so, please tell us.
You use the expression 'regional climatology'. My knowledge of the dictionary meaning of the words gives me no guidance as to what exactly you are trying to convey by the term. How do I determine what the region is that you are talking about? The nearest ten kilometres, the nearest ten weather stations, some set arbitrarily chosen, some other method? Presumably this refers to the creation of a data set structured in a similar fashion to the set that you wish to adjust to correct its errors. If that's what you mean, please spell it out.
It’s after three am here. I’d prefer to not continue to describe which uses of language I find unclear. I’d like to be able to quickly paraphrase your argument in a fashion that would demonstrate the style I recommend, e.g.
“Sometimes we want to adjust the records of a weather station for a change that we know has occurred; for example moving the site, or introducing a new thermometer that reads higher than the true temperature.
"The way we do this adjustment is to work out the average temperature series for the region. A region is determined (however it's determined). We compare this temperature series to the one that is to be adjusted. If we know the exact date, we get the value of the average difference between the station and the surrounding region before the change, and that of the average difference between the two sets after the change, subtract one from the other and declare that the temperature change caused by the change was the difference.
"However, whenever a station being adjusted has a trend different to that of the surrounding region, the adjustment formula wrongly adds the difference in trends to the adjustment figure.
"In times of rising temperature, this will wrongly record a higher figure for those stations that record falling temperatures (or even those that rise more slowly than their surroundings), and consequently the total temperature set of the other stations plus the corrected station will give a higher figure than the true figure."
Okay, I acknowledge that that’s hardly deathless prose, but it’s a step in the right direction. Of course if I’ve totally misunderstood your point, then you should be able to clearly see where I’ve come unstuck.
Regards
Leo Morgan

Ed Reid
October 15, 2012 9:39 am

We are dealing in a science with many “known unknowns” and an unknown number of “unknown unknowns”. (HT: Donald Rumsfeld)

October 15, 2012 9:48 am

Thanks David, your observations and reasoning are completely right. I also worked on this problem for some time, and described the result of this procedure as "impressing" the supposedly correct trend of the "normal" onto the assumed false time series.
That may be a good correction method in some cases, but as long as one has no possibility of proving the result, which is rather impossible in meteorology, it remains a speculative method only, without much evidence, and is therefore, as you demonstrated, a circular argument. This type of circular argument seems to be used in other areas of climatology as well.

Luther Wu
October 15, 2012 9:54 am

Victor Venema says:
October 15, 2012 at 9:23 am
If you know of a study that shows that most of the stations are affected by urbanization most of the time, please let me know. That would be interesting, as that would go against our current understanding of the problem that no more than a few percent of the data are affected by urbanization.
______________________
I do believe, Sir, that you have that exactly backwards.

Sun Spot
October 15, 2012 9:57 am

@ Victor Venema says: October 15, 2012 at 9:23 am re: “If you know of a study that shows that most of the stations are affected by urbanization most of the time, please let me know. That would be interesting, as that would go against our current understanding of the problem that no more than a few percent of the data are affected by urbanization.”
Victor, please reference your study that shows ” no more than a few percent of the data are affected by urbanization” ?

October 15, 2012 10:13 am

I did not study urbanization myself and it is a rather extensive literature. I got this statement from talking to colleagues with hands on experience in homogenization. Thus unfortunately I cannot give you a reference.
Contrary to the normal readers of this blog and being a scientist myself, I have no reason to expect my colleagues to be lying. If you know of a study that shows that most of the data has problems with urbanization, that would make me sufficiently interested to study the topic myself. Life is too short to follow every piece of misinformation spread on WUWT.
REPLY: Watts et al 2012 on the sidebar, soon to be updated to handle the TOBs (non)issue, pretty much says all you need to know. All homogenization does is smear the error around. – Anthony

richardscourtney
October 15, 2012 10:38 am

Victor Venema:
At October 15, 2012 at 10:13 am you say

I did not study urbanization myself and it is a rather extensive literature. I got this statement from talking to colleagues with hands on experience in homogenization. Thus unfortunately I cannot give you a reference.
Contrary to the normal readers of this blog and being a scientist myself, I have no reason to expect my colleagues to be lying. If you know of a study that shows that most of the data has problems with urbanization, that would make me sufficiently interested to study the topic myself. Life is too short to follow every piece of misinformation spread on WUWT.

You were not asked to cite the entire literature. You were asked to cite only one reference to justify your assertion that “no more than a few percent of the data are affected by urbanization”. And you admit you can’t.
On WUWT an assertion needs to be justified because this is a science blog frequented by very many scientists. And scientists ‘trust but verify’. No real scientist asserts something as being true merely because some chums said it. Indeed, on WUWT we track down and reveal misinformation of the kind you have asserted but cannot justify.
And we have experienced enough behaviour of the Team to know that nothing asserted by members of the AGW-industry can be taken as being true unless there is good evidence to support it. If you don’t like that then take it up with the members of the ‘Team’ whose nefarious activities were revealed by their own words in the ‘climategate’ emails.
Richard

October 15, 2012 10:40 am

“Watts et al 2012 on the sidebar, soon to be updated to handle the TOBs (non)issue, pretty much says all you need to know. All homogenization does is smear the error around. – Anthony”
I already discussed the problems of Watts et al (2012) and SkS did an even more extensive piece detailing even more errors.
I am sorry to have to say that the time of observation bias (TOB) is not the only problem of this manuscript. The fundamental problem is that the quality of the stations is assessed at the end, while the trend is computed over the full period without having information on how the quality of the station changed. I see no way to solve this problem (without homogenization). And as the first version of the manuscript already showed, after homogenization there are no problems any more; the trends are similar for all quality classes.
Anthony, as long as you keep on claiming that homogenization only smears the error, I have trouble taking you seriously and can only advise you to try to understand the fundamentals better. That could help in making your criticism more qualified.
The work page of the manuscript is very quiet, you can hear the crickets. Still, I look forward to the updated manuscript and I am glad that you are taking your time to improve it.

REPLY:
Since there are people who are trying to actively discredit it (such as yourself), I decided not to update the work page regularly until we had our full revision completed. As for the end, well if you can find more metadata, we'll use it. No matter what you say though, homogenization simply smears the errors around, as Dr. Stockwell demonstrates. Be as upset as you wish, because I really don't care if you take me seriously or not. I don't do this to earn your approval. People like yourself were happy to accept Fall et al when we didn't find as strong a signal. Your bias is showing. – Anthony

Steve C
October 15, 2012 10:45 am

Victor Venema says:
“Life is too short to follow every piece of misinformation spread on WUWT.”
You should try the alarmist sites sometime.

October 15, 2012 10:57 am

richardscourtney says: “On WUWT an assertion needs to be justified because this is a science blog frequented by very many scientists.
🙂
richardscourtney says: “And scientists ‘trust but verify’.”
You cannot live without trusting something. If this is the point you do not trust, go study it. If you can prove that most of the data is affected by significant warming due to urbanization, you will be a hero. It may be a bit too applied, but I would give you a chance at a Nobel prize. At least there will be a few millions from the Koch brothers.
In the meantime, I will verify something else: whether we can trust the studies on changes in extreme weather will be the topic of my research for the coming years. I expect that that is more fruitful. That is a topic where I am sceptical.
Feel free to prove me wrong and that urbanization is the hot topic. If you do, you will also have to explain why the trend in the satellite temperature and in the reference climate network at pristine locations is about the same as the one of the homogenized surface network. And why the trend in the rural stations is about the same as the ones in the urban stations. In my estimate the chance to become a hero by studying urbanization is close to zero, but if your estimate is higher, please keep me informed about your studies. I would be interested.
REPLY: LOL! The fact that the Climate Reference Network exists at all is proof of the fact that NCDC takes the issue of UHI and siting seriously. It has four years of complete data (since 2008) and you call it “the same” as the old network, yet in your other argument you claim I don’t have enough years of metadata to establish siting trends. You can’t have it both ways. Make up your mind, because your bias is laughable. – Anthony

Editor
October 15, 2012 10:58 am

@Victor Venema
If you know of a study that shows that most of the stations are affected by urbanization most of the time, please let me know. That would be interesting, as that would go against our current understanding of the problem that no more than a few percent of the data are affected by urbanization.
According to Richard Muller
Urban areas are heavily overrepresented in the siting of temperature stations: less than 1% of the globe is urban but 27% of the Global Historical Climatology Network Monthly (GHCN-M) stations are located in cities with a population greater than 50,000.

Then add in the smaller urban sites (even comparatively small towns will have UHI effect, particularly a growing town). And you get a significant number.
http://berkeleyearth.org/pdf/uhi-revised-june-26.pdf

Luther Wu
October 15, 2012 11:00 am

Victor Venema-
Even though you are a scientist, you are also welcome here as a layman, as is apparently your current role. Making unfounded assertions will earn you a challenge, every time.
VV said: “ Life is too short to follow every piece of misinformation spread on WUWT.
_________________
Cite one example, please- just one…

DirkH
October 15, 2012 11:02 am

Victor Venema says:
October 15, 2012 at 10:13 am
“Life is too short to follow every piece of misinformation spread on WUWT.”
Whoa. Who gives me back the hours I spent with that abomination, the IPCC AR4, and all the journalistic drivel built on top of it?
(This is not an attack on scientists, as the IPCC AR4 has mostly not been written by scientists.)

Editor
October 15, 2012 11:09 am

@Victor Venema
you will also have to explain why the trend in the satellite temperature and in the reference climate network at pristine locations is about the same
Not true.
Satellite records show significantly less warming than GISS does since 1979.

outtheback
October 15, 2012 11:22 am

Victor Venema
Homogenization of temps is a flawed process at the best of times.
While land use in a rural area tends not to change much through the seasons and years, the variations can be calculated into it: pasture will be pasture year round, cropland will lie bare for a period of time, and in summer there will be irrigation on both, if and where needed, all causing minor variations. Providing there is no major change in land use (pasture to forest, or to urban), any variation stays constant over the seasons. If one is to homogenize this it should be done per season, or better still by taking the temperature difference of the irrigation into account; this will have a wave effect on the temperature, lower on the day after irrigation and increasing until the next one.
Homogenization of urban temps is always behind the eight ball. Before the temperature increase due to a new development has influenced the mean temperature and therefore has an effect on your "d2", a long time (years?) has gone by, so it will always indicate a homogenized temperature higher than it should be. In the meantime the next development will do its trick, and so on.
There should not be any use of temp stations in urban areas to calculate any regional (country/continent) trend let alone on a global scale.
Urban temp readings are only any good for the citizens of that city to let them know how warm it got in their built up area.

Ian W
October 15, 2012 11:32 am

I have yet to see any justification for ‘homogenization’.
A working and supposedly well-sited automated observation system reports the local temperature as XdegC, and 30 miles away to the East another says YdegC and to the West another says YdegC. This does not mean that the local XdegC temperature is incorrect. Similarly, the fact that I should not really get frost in early summer and the system reports frost does not mean that it is incorrect. This type of homogenization algorithm should only be used as a flag for an observation to be checked. But the use of undergraduate-style software in a batch job, just smoothing the data towards each other and the expected climatology worldwide, must always be incorrect. If a site is different to the others the reason needs to be checked rather than the value massaged by a parameter to something the programmer thinks is more likely.
This comes back down to a total lack of quality management by the operators of these climate systems NASA/NOAA/Met Office and UEA and the other international bodies. The poor quality management is due to their laziness. Each reporting site needs to be individually assessed and any alteration logged and signed off and countersigned as valid with the reason for that alteration. There are not that many observation sites. Banks don’t give up and homogenize client accounts because there are a lot of them.

Alan S. Blue
October 15, 2012 11:37 am

Paul Homewood: “Then add in the smaller urban sites (even comparatively small towns will have UHI effect, particularly a growing town). And you get a significant number.”
I spent some time working my way down the list with Google Earth. I had a couple of pages of stations examined before I found my very first station that was more than a mile from the nearest US Post Office.
The ratio “Percent of the US landmass that’s farther than one mile from a Post Office” to “Percent of the USHCN/CRN/etc. weatherstations farther than one mile from a Post Office” would be … outrageous.
The urban/rural determinations have issues.

richardscourtney
October 15, 2012 12:00 pm

Victor Venema:
In your post at October 15, 2012 at 10:57 am which is apparently aimed at – but not addressed to – me you say

If you can prove that most of the data is affected by significant warming due to urbanization, you will be a hero. It may be a bit too applied, but I would give you a chance at a Nobel prize. At least there will be a few millions from the Koch brothers.

Unfortunately, I was beaten to it by Paul Homewood at October 15, 2012 at 10:58 am where he says

According to Richard Muller

Urban areas are heavily overrepresented in the siting of temperature stations: less than 1% of the globe is urban but 27% of the Global Historical Climatology Network Monthly (GHCN-M) stations are located in cities with a population greater than 50,000.

Then add in the smaller urban sites (even comparatively small towns will have UHI effect, particularly a growing town). And you get a significant number.
http://berkeleyearth.org/pdf/uhi-revised-june-26.pdf

However, of itself that does not fit your criterion of “most of the data”. Hence, I add to that by pointing out the effect would be spread to at least half of the rest of the data by homogenisation. Thus, my addition fulfills your “most” criterion.
Can I now have your assurance that you will nominate Richard Muller and me for whichever Nobel Prize you consider to be appropriate? And, importantly, will you please tell us how to get the “millions from the Koch brothers”.
Richard
PS I have not been a “hero” before and I can’t say I notice feeling any difference now you say I am one.

October 15, 2012 12:44 pm

REPLY: Since there are people who are trying to actively discredit it (such as yourself), I decided not to update the work page regularly until we had our full revision completed. … – Anthony
I thought you had requested readers of WUWT to review your study. My suggestion on your work page was constructive and would make the study stronger. Sorry, if I misunderstood your request.
If you have a solid study, it cannot be discredited by arguments.
REPLY: LOL! The fact that the Climate Reference Network exists at all is proof of the fact that NCDC takes the issue of UHI and siting seriously. It has four years of complete data (since 2008) and you call it “the same” as the old network, yet in your other argument you claim I don’t have enough years of metadata to establish siting trends. You can’t have it both ways. Make up your mind, because your bias is laughable. – Anthony
The USCRN started in 2004, not 2008. “The data for the period 2004–2008 are extremely well aligned with those derived from the USHCN version 2 temperature data. For these five years, the r2 between the 60 monthly USCRN and USHCN version 2 anomalies is 0.998 and 0.996.” (Menne et al. On the reliability of the U.S. surface temperature record. J Geophys. Res., VOL. 115, D11108, doi:10.1029/2009JD013094, 2010.)
I never claimed it was a proof, but at least the periods of the two datasets are the same. It does make it less likely that there are problems. Together with all the other evidence, I do not see it as productive to study urbanization. You have to pick your fights.
You should be happy, that scientists always try to make their work more accurate. If your blog produced the pressure to get funding for such improvements: thank you very very much. And please keep up the good work. Without your efforts climate science would be seen as a solved problem in Europe and we would mainly be doing applied climate impact studies. Due to your industrious efforts, there is more money to study fundamental questions, which is what scientists like the most.
Paul Homewood says: “Urban areas are heavily overrepresented in the siting of temperature stations: less than 1% of the globe is urban but 27% of the Global Historical Climatology Network Monthly (GHCN-M) stations are located in cities with a population greater than 50,000. Then add in the smaller urban sites (even comparatively small towns will have UHI effect, particularly a growing town). And you get a significant number.”
Thank you for giving a number: 27% of the stations. The period of urbanization is typically in the order of 30 years. After this there is no longer a bias in the trend; the temperature just has a fixed constant bias, which does not affect trend estimates. Thus the amount of data affected will be much less than 27%.
50 thousand inhabitants is not what I would call an urban region. If you had proof of a growing urban heat island effect in such cities, you would have a case that urbanization is a problem, especially if you can prove this for even smaller towns. Then you could collect your prize from the Koch brothers and live in luxury. You would get a guest post on WUWT and get more than your 15 minutes of fame.
But please do not start about noticing a perceived temperature drop when coming out of a small town. Perceived temperature is influenced by radiation, wind and humidity, not just temperature.
Luther Wu says: “VV said: “ Life is too short to follow every piece of misinformation spread on WUWT. Cite one example, please- just one…”
1) The Watts et al (2012) manuscript falsely claiming half of the warming to be due to climatologists.
2) The post again falsely claiming half of the warming to be due to climatologists, based on the "peer reviewed paper" by two Greek hydrologists.
http://variable-variability.blogspot.com/2012/07/investigation-of-methods-for.html
3) Calling Fritz Vahrenholt a former environmental minister without mentioning that he is currently manager of a large utility.
DirkH says: “(This is not an attack on scientists, as the IPCC AR4 has mostly not been written by scientists.)”
The report is written by scientists. Only the summary for policy makers is written together with government officials. If you read the report itself, you should be getting good information and you can always go back to the citations.
NOAA only homogenizes the annual means. In most other countries and studies the seasonal and monthly means are also homogenized. This can help you to find more breaks, in case the sign changes from season to season. And you need this if you would like to study trend changes as a function of season, to understand the physical reasons for the changes better. To compute the global mean temperature, which is the summary statistic that attracts most public attention, the annual means are sufficient.
If the changes in the magnitude of the urban heat island (UHI) go in clear jumps (corresponding to large developments), it is relatively easy to correct for them using homogenization. Then you could still see the jumps even if the reference stations contain some jumps as well. The difficult situation is when there is a gradual increase in the UHI and most of the reference stations have a simultaneous similar gradual increase during the same period. Then relative homogenization would not notice this data problem and keep the artificial heating.
Urban readings in themselves are not a problem. The problem is urbanization. For stations in the center of major cities the trend is about the same as for the rural stations. In this case the UHI just causes a constant bias. Studies comparing the trend for rural and urban stations (defined in many different ways) show the same trend for both types of stations.

October 15, 2012 12:56 pm

Ian W says: "I have yet to see any justification for 'homogenization'. A working and supposedly well-sited automated observation system reports the local temperature as XdegC, and 30 miles away to the East another says YdegC and to the West another says YdegC. …"
It is always better not to need homogenization, but that is difficult. For one, this would only help you in future (and is now being tried in the US reference network). You would still need homogenization to study the climate of the past. And you will have a hard time keeping your measurement and the surroundings of the station constant over a period of centuries. Climatologists are not the infinitely powerful elites they are portrayed to be at "sceptics" blogs.

October 15, 2012 12:57 pm

richardscourtney, maybe work on the proof a little more, until it is water tight. 🙂

laterite
October 15, 2012 1:33 pm

@Victor Venema: Thank you for your response, but I think each of your rebuttals are incorrect.
You said “The equation: H(S) = S-D = S-(S-R) = R is wrong. You do use the difference time series (D) to determine the size of the jump, but you do not replace all values by the ones in the regional climate signal.” I said “While homogenization algorithms do not apply D to S exactly, they do apply the shifts in baseline to S, and so coerce the trend in S to the trend in the regional climatology.” You have simply restated exactly the same thing that I said with a couple of extra equations.
You said: “Homogenisation is used to be able study large scale climate variability in a more accurate way. Removing the too low trend for an irrigated region, is what homogenisation is supposed to do.” Imagine a perfectly correct temperature record in the irrigated region that shows a falling temperature. By your standards a perfectly correct record should be adjusted to show a rising temperature! That is rewriting history, revisionism, and just plain wrong IMHO.
You say: “why don’t you submit an abstract at the General Assembly of the European Geophysical Union? There you would get more qualified feedback on the quality of your work.” I have published a lot of papers in peer-reviewed journals, but when it comes to critiques of climate science, one of two things happens, the editors cannot find anyone to review it, or it gets through and is never cited. You can find better qualified technical feedback on the technical blogs.
(David Stockwell)

laterite
October 15, 2012 1:50 pm

@Leo Morgan: You said: “Much as I’d like to be able to conclude that you have disclosed a fundamental error in calculating temperatures, and therefore there is no need to worry about ‘thermageddon’, all I can realistically conclude is that you have not communicated clearly.”
The problem of circular reasoning is not that the conclusion is wrong. It could well be right if the assumptions are right. The problem is that it is an incorrect inference, dressed up as a tested result. So it is quite easy to get errors with high significance values, i.e. to fool yourself.
A number of studies now have shown that the contribution of homogenization to the overall warming trends last century is on the order of 0.3 to 0.5 degrees. This would be grounds for concern, I think.
Thanks for writing suggestions.

AndyG55
October 15, 2012 1:54 pm

@Victor “The period of urbanization is typically in the order of 30 years. After this there is no longer a bias in the trend, the temperature just has a fixed constant bias, which does not affect trend estimates.”
And this is EXACTLY why temperatures have leveled off for the last 15 or so years. Thank you !!!

Dr Burns
October 15, 2012 1:54 pm

Once “adjusted”, it is no longer data.
data – any fact assumed to be a matter of direct observation

AndyG55
October 15, 2012 1:59 pm

Here’s a little exercise in basic maths.
Suppose in a hypothetical region of size 20000 sq miles there are 3 urban areas of size 250, 500, and 250 sq miles. In this region there are 5 weather stations , one in each of the urban areas and 2 rural stations.
Now, over the past 50 years these stations have seen the following trends:
Urban1 = 2.1oC, Urban 2 = 2.0oC, Urban3 = 1.5oC, Rural1 = 0.1oC, Rural2 = -0.3oC
If we want to calculate the Average temperature change, and we just apply equal areas to each station, and we get an average rise of …………………………..
If, however, we apply the urban stations ONLY to their respective urban areas, and split the rural area equally, we get an average temperature rise of ……………………………….
——————————————————————–
Now of course, if you homogenise the rural data first, so it matches the trend of Urban areas, you have truly stuffed up the whole thing and now have a massive trend where, in reality, none exists.
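A quick sketch for anyone who wants to check the blanks in the exercise above (the split of the 19000 sq miles of rural area equally between the two rural stations is assumed from the description):

```python
# Figures as given in the exercise; the two rural stations share the
# remaining 20000 - 1000 = 19000 sq miles equally.
areas  = {"Urban1": 250, "Urban2": 500, "Urban3": 250, "Rural1": 9500, "Rural2": 9500}
trends = {"Urban1": 2.1, "Urban2": 2.0, "Urban3": 1.5, "Rural1": 0.1, "Rural2": -0.3}

equal_weight = sum(trends.values()) / len(trends)
area_weight  = sum(trends[s] * areas[s] for s in trends) / sum(areas.values())

print(f"equal weight per station: {equal_weight:+.2f} C")
print(f"area-weighted:            {area_weight:+.2f} C")
```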

AndyG55
October 15, 2012 2:00 pm

darn degree signs didn’t work, sorry.

AndyG55
October 15, 2012 2:03 pm

And Victor.. if you come here mentioning Skeptical Science.. expect to get laughed at.

richardscourtney
October 15, 2012 2:09 pm

Victor Venema:
You admit to having visited SkS and it seems that you are used to posting on similar warmist 'echo chamber' web sites. So, I offer you some genuine and sincere advice.
My “proof” is much, much more “watertight” than your armwaving about “no more than a few percent of the data are affected by urbanization”. Clearly, you are not aware of the nature of WUWT but you mistakenly think WUWT is similar to warmist blogs although WUWT has a different ’cause’ from them.
WUWT differs from the warmist sites in that WUWT is not censored, so all viewpoints are available for scrutiny here, and people of sincerity can obtain much credibility whatever their view. For example, Leif Svalgaard is clearly the most respected solar expert who frequents WUWT, and he champions the IPCC view of solar (non)influence on climate. Hence, if you want to gain credibility here then you need to provide clear and logical arguments which are supported by referenced information which can be challenged. Armwaving assertions don't 'cut it': such assertions are usually flatly rejected or ridiculed (as I ridiculed your silly Nobel Prize assertion).
You can be a respected champion of your views on WUWT if you give clear, logical arguments supported by referenced information. Alternatively, if you make the kinds of posts you have made so far then you will – rightly – be treated in the same manner of condescending tolerance as the existing WUWT resident trolls (i.e. John Brookes, LazyTeanager, Izen, etc.).
Please note that I provide this advice with complete and genuine sincerity.
Richard
REPLY: I echo Richard’s sentiment. SkS has proven themselves to be nothing more than a club of angry petulant children hell bent on smearing anyone, no matter what their standing in science, if they disagree with the premises they hold dear. By their own actions, they have proven themselves to be liars, conspirators, bullies, and post facto revisionists. Citing them is akin to citing Al Gore. -Anthony

October 15, 2012 2:44 pm

laterite: “Imagine a perfectly correct temperature record in the irrigated region that shows a falling temperature. By your standards a perfectly correct record should be adjusted to show a rising temperature! That is rewriting history, revisionism, and just plain wrong IMHO.”
Just as with the urban heat island effect, it depends on what you are interested in. Just as for cities, there will likely be more stations in irrigated areas than in the surroundings.
Thus if you are interested in the large scale regional climate this is a local artefact, which you would like to remove. Just as in the case of the urban heat island. If you have sufficient stations (I just read that you had 4 stations in the irrigated region) and the network density is similar outside the irrigated area there is likely no problem. In this case the stations are representative of their surrounding and can be used to compute the large scale climate signal.
I understand your side, if you are interested in biodiversity, agriculture (or city climates), you would like to keep this part of the signal. In that case you should only compare the stations in the irrigated area with each other. (And by the way, comparing stations with the Australian mean temperature as reference is completely wrong, it is no wonder you produce false positives that way. The reference should be the best estimate of the regional climate at the station you are testing.)
I also like working on a variety of topics and thus often work on topics on which I am not an expert. Then I always make sure, that I collaborate with an expert to avoid making errors. In case of your extended abstract, an expert may have convinced you not to cite Steirou and Koutsoyiannis (2012). Especially not as a study on the global temperature, as most stations were from the USA, the station density outside the US was insufficient, and no effort was made to account for differences in station density.
It might not be fair, but using as temperature unit “C” and not “°C” or calling SNHT Standard Normal Homogenization Test (and not rightly Homogeneity Test) makes an unprofessional impression, which makes reviewers more critical. If you use an equation, it should be right, maybe you can be a bit lax in a blog post, but at least in a scientific work it should be right. The Standard Normal Homogeneity Test is based on hypothesis testing. There are homogenization methods using information theoretical measures, but SNHT is not one of them.
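For readers unfamiliar with the test being discussed, here is a minimal sketch of the single-break statistic that the Standard Normal Homogeneity Test maximises (a bare-bones illustration only; critical values and multi-break handling are omitted):

```python
import numpy as np

def snht_max(q):
    """Alexandersson's (1986) single-break SNHT statistic for a reference-normalized
    series q (e.g. candidate minus reference). Returns (max statistic, break index)."""
    z = (q - q.mean()) / q.std(ddof=1)    # standardize the series
    n = len(z)
    best_T, best_k = 0.0, None
    for k in range(1, n):                 # candidate break after position k
        z1 = z[:k].mean()
        z2 = z[k:].mean()
        T = k * z1**2 + (n - k) * z2**2
        if T > best_T:
            best_T, best_k = T, k
    return best_T, best_k

# A break is declared only when the maximum T exceeds a critical value that
# depends on the series length n (tabulated in the SNHT literature), which is
# how the test controls its false alarm rate.
```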
If an editor has problems finding a reviewer for a paper of yours on homogenization, feel free to mention my name. To get cited, people need to know that your study exists. Another good reason to visit scientific climate conferences. I am not sure, whether the conference for which you wrote this extended abstract qualifies as scientific, with invited speakers talking on “Demonising carbon dioxide: Science of the absurd”.

October 15, 2012 2:58 pm

AndyG55 says: “And Victor.. if you come here mentioning Skeptical Science.. expect to get laughed at.”
🙂 No problem. To be honest, I do not expect to be able to convince you, Anthony and most of the commenters here. I just hope that some of the readers of WUWT will start questioning the content of this blog.
By their own actions, they [Skeptical Science] have proven themselves to be liars, conspirators, bullies, and post facto revisionists. … -Anthony
I am sure, they feel the same way.
As a scientist, I can only note, that they are better at scientific argumentation and know the scientific literature much better.

richardscourtney
October 15, 2012 3:08 pm

Victor Venema:
At October 15, 2012 at 2:58 pm you say of SkS

As a scientist, I can only note, that they are better at scientific argumentation and know the scientific literature much better.

Well, that has blown any credibility you may have had as a scientist.
Richard

AndyG55
October 15, 2012 3:27 pm

Have you done my little maths exercise yet, V ?

October 15, 2012 3:38 pm

They “feel” the same way? No, they cannot. Their approach is fundamentally different.
SkS DOES revise and rewrite entire threads. They also completely remove comment after comment after comment when the comment is in any way inconvenient to their story.
WUWT DOES NOT revise and rewrite entire threads. And WUWT does not remove any comments for reasons of agreement, disagreement or inconvenience (over the top rudeness is not tolerated, however).

Luther Wu
October 15, 2012 3:40 pm

listening to: Guided By Voices – I Am A Scientist
______________
Am I perpetrating a logical fallacy?
Maybe I should cite the
Dandy Warhols?

ColinD
October 15, 2012 3:47 pm

Tell you what, try this in court to defend a speeding fine: I have homogenised my speed with that of the surrounding traffic and this shows that I was not actually speeding.

John M
October 15, 2012 3:48 pm

Paul Homewood says:
October 15, 2012 at 11:09 am

@Victor Venema
you will also have to explain why the trend in the satellite temperature and in the reference climate network at pristine locations is about the same
Not true.
Satellite records show significantly less warming than GISS does since 1979.

Paul, who are you going to believe, data or a “scientist” who’s been told stuff by his “scientist” friends?
For the record, temperature trends since 1979:
GISS: 0.20/decade
RSS: 0.13/decade
UAH: 0.14/decade
http://www.woodfortrees.org/plot/uah/plot/uah/trend/plot/gistemp-dts/from:1979/plot/gistemp-dts/from:1979/trend/plot/rss/plot/rss/trend
Of course, given uncertainties around measuring temperature anomalies, maybe our “scientist” friend thinks (or has been told by his friends) that those are all the same within the margin of error.

Byron
October 15, 2012 3:52 pm

Victor Venema ,
I usually don't make personal comments here, just the odd random observation from time to time, but on this occasion I shall make such a comment:
proselytism and dogma don't cut it; raw data and falsifiable hypotheses do (at least until they're falsified).

October 15, 2012 4:08 pm

Victor Venema says:
October 15, 2012 at 2:58 pm
As a scientist, I can only note, that they [ @ Cook’s blog ] are better at scientific argumentation and know the scientific literature much better.

– – – – – – – –
Victor Venema,
Your claim of superior science argument at Cook’s blog can be taken in several ways.
One way your claim can be taken is that it makes false presumptions about what constitutes normal modes of scientific argumentation. Your claim is false if you mean that proper scientific argument consists of the following activities at Cook's blog: broadly applied censoring of skeptical comments / unauthorized revisionism of comments / alteration of main posts without documenting what the revisions were, and sometimes when they were made / appeals to the IPCC as the sole authority in climate science / uncivil and unequal moderation of comments / myopic causation bias.
Another way your claim can be taken is as just being innocently naive; as possibly being made by a person who is unaccustomed to successful and productive participation in the dynamics of an open and independent climate science blog (WUWT) which has very light handed moderation. If it is innocent naivety that explains your claim, then welcome to the real world of climate science in the 21st century.
Yet another way your claim can be taken is as just a knee jerk emotional repartee; as being of the nature of a child saying, “my mother is better than your mother, because my mother doesn’t wear army boots”, or something like that. : )
But I think the most advantageous way your claim can be taken is that it is a challenge to a debate of WUWT’s best denizens versus Cook’s site best denizens. Is that the essence of your claim? I hope so.
John

October 15, 2012 4:23 pm

We have done a shadow post of David’s paper on the NCTCS blog:
http://theclimatescepticsparty.blogspot.com.au/2012/10/is-temperature-or-temeperature-record.html
Anyone interested in attending the AEF conference mentioned in the second paragraph, please note that David will be speaking on the 20th (twentieth) and NOT the 30th as mentioned above.

cohenite
October 15, 2012 4:49 pm

Victor Venema: In your response to laterite you acknowledge that your objections to his argument are not valid. If you want to rebut it, why not explain why reference homogenization is NOT a logical circularity instead of picking nits?

Peter S
October 15, 2012 5:21 pm

“AndyG55 says:
October 15, 2012 at 1:54 pm
@Victor “The period of urbanization is typically in the order of 30 years. After this there is no longer a bias in the trend, the temperature just has a fixed constant bias, which does not affect trend estimates.”
And this is EXACTLY why temperatures have leveled off for the last 15 or so years. Thank you !!!"
Got it in one AndyG55.
There is so much wrong with the logic in “Victor’s” statement I am not sure he has actually ever sat down and looked at the blatant unproved assumptions he is making.
The only bit of it that is right is that urbanization induces a bias in recorded temperatures.
Assumption 1: Once a site is urbanized that there is no change in rate of bias from changing urban use.
Assumption 2: There is no change in the rate of site urbanization (i.e. the number of sites being urbanized).
The fact is that you can only claim a fixed constant bias from urbanization for a single site as long as the area of urbanization around the site remains constant.
Neither Assumption stands up to even cursory scrutiny.

Ian H
October 15, 2012 5:44 pm

UHI “adjustment” is often performed by lowering the early temperature records in adjacent rural areas. Don’t ask me to justify this bizarre procedure – I can’t. The result of this approach is to produce a greater linear trend after the UHI adjustment than before, and that alone should tell anyone with half a brain that a negative sign has been dropped.
While I’ve given up expecting these temperature adjustments to make much sense, is it unreasonable of me to expect that at least the nonsense should be consistent? So is the “irrigation cooling effect” (ICE?) then dealt with by raising early temperatures in adjacent non-irrigated areas producing a lowered trend? It seems not! There is no consistency here. It is all ad hoc.
… an ad hoc-key stick …

ghl
October 15, 2012 6:21 pm

Victor Venema
So you are going down fighting. I admire that.

October 15, 2012 6:44 pm

Victor Venema
It’s not for me to say, but I think you are welcomed here any time.
You did cause a stir, even though it’s been homogenized.

eck
October 15, 2012 7:23 pm

Re: Tom G(ologist) comments. Bingo! Another reason to eliminate the EPA. Their original intended job is done. When is Congress going to do anything about it? (ans.:never)

Chuck Nolan
October 15, 2012 7:57 pm

Victor
I am not a scientist so I have to rely on what makes sense to me.
You said ………” And why the trend in the rural stations is about the same as the ones in the urban stations.” Are you saying they should be increasing at the same rate because of AGW?
That seems wrong to me because rural areas are basically unchanged, so if something makes the temperature go up there it could be AGW (or maybe some other GW). My understanding is that urban areas are unique types of heat sinks. I would expect urban areas to have temperature increases based on the rate of urbanization and to build up a warm bias. It would seem to me growing urban areas should not be used to identify AGW.
Although, I would guess Dr. Hansen has an adjustment number handy.
Am I missing something or am I just wrong?
I’m one of the skeptics who believes we could still be exiting the LIA. I would not be surprised to see another 2 degrees C warming before the next big ice age. Not much evidence for runaway CAGW but lots of evidence of ice ages. I don’t believe the loss of the Arctic Ice cap would end the world. Losing the glaciers in Ohio didn’t hurt. They farmed Greenland. The ice caps have melted before. This time, a little warming with more freed-up land and water will help feed and sustain mankind. Some of us think that’s a good thing.
cn

john robertson
October 15, 2012 8:23 pm

Circular reasoning is the mark of a religious belief. The classic: god said it, therefore it’s true; it’s true because god said it. This is one of the reasons society adopted the scientific method. Its first principle is to slow down the bloodletting of fixed convictions by accepting the concept that I/we might be wrong; therefore let us reason using testable methods. Is it possible that we have cycles in mass human nature and it’s down into unreason we go again?

October 15, 2012 11:46 pm

The principal weather site for a region is chosen to be the best in terms of quality, duration and non-missing data. If an algorithm flags that a heterogeneity is present, then selects a nearby site or sites for homogenisation, there is no prior evidence that the latter sites have qualities that will improve the principal site. They have errors of their own; often it is impossible to recognise these errors because of a lack of reported metadata etc.
So, what is the point of selecting a primary site then probably making it worse (because you cannot usually know that you will make it better) by comparison with nearby sites? Should you not first compare the nearby sites for differences from the principal site, then correct them for heterogeneities before feeding them back as adjustments? This is a form of circularity that it would be hard to express more clearly. It is merely smearing the errors.
I take exception to the publication of correlation coefficients like those from Victor Venema that “For these five years, the r2 between the 60 monthly USCRN and USHCN version 2 anomalies is 0.998 and 0.996” (Menne et al., On the reliability of the U.S. surface temperature record, J. Geophys. Res., VOL. 115, D11108, doi:10.1029/2009JD013094, 2010). The correlation coefficient will vary with length of data (= number of observations), whether it is Pearson or rank or another variant. It will be different for daily data over the same period as averaged monthly data and different for averaged annual data. There is an easy way to test this. Take a single weather site, lag its daily data by 1 day and then try the variables I’ve listed above, original against lagged. You’ll probably also find that Tmax has a different R to Tmin and other interesting confusions. In other words, climate studies benefit from a statistician’s input from the very start.
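For anyone who wants to try that lagging test without hunting down station data, here is a minimal Python sketch using entirely synthetic daily temperatures (an invented seasonal cycle plus autocorrelated weather noise). It only illustrates the point that the correlation between a series and its one-day-lagged copy depends strongly on the aggregation level; it says nothing about any real station or dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
n_days = 20 * 365
t = np.arange(n_days)
seasonal = 8.0 * np.sin(2 * np.pi * t / 365.25)     # invented seasonal cycle

# AR(1) "weather" noise with lag-1 autocorrelation of about 0.6
noise = np.empty(n_days)
noise[0] = rng.normal()
for i in range(1, n_days):
    noise[i] = 0.6 * noise[i - 1] + rng.normal(scale=3.0)

daily = 15.0 + seasonal + noise
orig = daily[1:]                    # day i+1
lagged = daily[:-1]                 # day i (the series lagged by one day)

def pearson(x, y):
    return np.corrcoef(x, y)[0, 1]

r_daily = pearson(orig, lagged)

# The same two series averaged into consecutive 30-day "months"
n_months = len(orig) // 30
monthly = orig[:n_months * 30].reshape(n_months, 30).mean(axis=1)
monthly_lag = lagged[:n_months * 30].reshape(n_months, 30).mean(axis=1)
r_monthly = pearson(monthly, monthly_lag)

print(f"r, daily values vs 1-day lag:  {r_daily:.3f}")
print(f"r, 30-day means vs 1-day lag:  {r_monthly:.3f}")
```

On a typical run the daily correlation comes out well below the monthly one, which is the sense in which a quoted monthly r² of 0.996–0.998 says little about agreement at finer resolution.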
I simply do not believe that any major temperature/time series has it right so far for the last century.
There is no point in sophisticated further analysis like the calibration of proxies until the fundamental temperature/time series can pass tests of reliability.
This is one reason why there is so much junk in climate papers. It’s a golden rule to set the foundations firmly before you start building.

October 16, 2012 2:19 am

One of the reasons I give SkS better grades for science is reading comprehension. The comment of Peter S above is a good example. There is a disturbing discrepancy between what I wrote and what Peter seems to think I wrote.
But actually, I was comparing the scientific quality of the posts at WUWT and SkS, not the comments. Unfortunately, the posts at WUWT show the same lack of reading comprehension, at best. For example, the post on the conference abstract by the Greek hydrologists, who falsely wrote that the temperature trend is between 100% and 50% of the reported value, which Anthony even more wrongly reported as being 50%. Whether this is a problem with reading comprehension or a lie, I leave up to the reader.
J. Philip Peterson says: “Victor Venema It’s not for me to say, but I think you are welcomed here any time.”
Thank you. I must say, reading many of the above comments, I do not feel welcome, but I had not expected much different. I know WUWT and thus know that there is no argument that will ever convince the locals.
Chuck Nolan says: “Are you saying they [urban temperatures] should be increasing at the same rate because of AGW? That seems wrong to me because if rural areas are basically unchanged and something makes the temperature go up, there could be AGW (or maybe some other GW). My understanding is urban areas are unique types of heat sinks. I would expect urban areas to have temperature increases based on the rate of urbanization and build a warm bias. It would seem to me growing urban areas should not be used to identify AGW.”
Urbanization can gradually increase the local temperature observed at an urban station. If this gradual increase is not representative for the area around the station, this would cause a bias in the estimates of the large-scale temperature trend. Thus if you are interested in this large-scale temperature, you have to remove this local effect by homogenization. If you are interested in the urban climate, you should keep it.
Another option would be to remove urban areas from the dataset. As far as I know, this has also been done and the trend is similar. (As I mentioned before, I do know a little about homogenization, but am no expert on urbanization. Ill-willed people call this “hand waving”; I prefer not to lie about my expertise.) The disadvantage of this approach is that you have to assume that your information on the urban/rural character of the stations is perfect and that you remove more data than needed, as only a period of the data will typically be affected by urbanization and the rest of the data is useful and helpful in reducing the errors. All in all, homogenizing the data is more reliable and accurate than removing part of the data.
Chuck, I hope that that answers your question.
Geoff Sherrington: “Should you not first compare the nearby sites for differences from the principal site, then correct them for heterogeneities before feeding them back as adjustments? This is a form of circularity that it would be hard to express more clearly. It is merely smearing the errors.”
Yes, you should homogenize all stations simultaneously. I think the simple graphical examples on my blog on the fundamentals of relative homogenization show that this is not circular reasoning.
Geoff, with which quality would you be satisfied? Please remember that every measurement in every science has an uncertainty. The remaining uncertainty in the trends after homogenization has been studied, also in an article of mine, and found to be smaller than the observed temperature trend. That is how science works, you show that your data is good enough to answer your question. Refusing to analyse data and draw conclusions because the data is not perfect is unreasonable and one of the signs of denialism.
It is interesting that the discussion keeps focussing on urbanization, which is not the topic of this post and on which I am no expert as I have admitted multiple times, but that the other problems with this and previous posts are mainly ignored by the locals.

phi
October 16, 2012 3:38 am

The second figure is particularly interesting. These adjustments are typical of the homogenization process (correction of mostly downward jumps). This feature is indicative of anthropogenic disturbance and has been well described in Hansen et al. 2001 (http://pubs.giss.nasa.gov/docs/2001/2001_Hansen_etal.pdf):
“…if the discontinuities in the temperature record have a predominance of downward jumps over upward jumps, the adjustments may introduce a false warming…”
We can consider this as evidence that most stations are disturbed by urbanization. It is also the demonstration that we should not homogenize the data when we want to use them to evaluate long-term trends.

richardscourtney
October 16, 2012 5:26 am

Victor Venema:
At October 16, 2012 at 2:19 am you assert

That is how science works, you show that your data is good enough to answer your question. Refusing to analyse data and draw conclusions because the data is not perfect is unreasonable and one of the signs of denialism.

NO! That is NOT “how science works”: it is a description of pseudoscience.
No data is ever “perfect”. So, in a real science “you” determine that data emulates reality with adequate reliability, accuracy and precision to provide sufficient confidence that the data is adequate for conclusions to be drawn from it.
It is pseudoscience in its purest form to claim that imperfections in the data should be ignored if the data can provide a desired “answer”.
Therefore, it is necessary for the researcher to provide evidence that the data he/she uses has the necessary reliability, accuracy and precision for his/her conclusions to be valid. In the case of data homogenisation that has not been done. Indeed, the different research teams who provide the various global (and hemispheric) temperature data sets use different homogenisation methods and do not publish evaluations of the different effects of their different methods.
In the above article David Stockwell provides several pieces of evidence which demonstrate that GHCN homogenisation completely alters the empirical data in some cases such that the sign of temperature change is reversed; e.g. compare his figures numbered 1 and 2. That altered data is then used as input to determine a value of global temperature.
It is up to those who conduct the homogenisation to demonstrate that such alterations improve the reliability, accuracy and precision of the data. Claims that such alterations should be taken on trust are a rejection of science.
It seems you do not know the difference between science and pseudoscience so I will spell it out for you.
Science attempts to seek the closest possible approximation to ‘truth’ by attempting to find evidence which disproves an existing understanding in part or in whole and amends the understanding in the light of the evidence.
Pseudoscience decides something is ‘true’ then seeks evidence which supports that understanding while ignoring (usually with excuses) evidence which refutes it.
Richard

David Jay
October 16, 2012 6:00 am

V. Venema: But actually, I was comparing the scientific quality of the posts at WUWT and SkS
Can someone please purchase a clue for this fine gentleman?

Luther Wu
October 16, 2012 6:01 am

Victor Venema says:
October 16, 2012 at 2:19 am
“One of the reasons I give SkS better grades for science is reading comprehension.
______________
A huge problem with SkS is that their message is controlled. Most here would never be allowed to post there. There are many papers which profoundly influence the debate which will never be discussed or even mentioned – sometimes for weeks, if at all – until, it seems, someone comes up with a twist of logic which appears to refute the paper, but which is nevertheless artfully twisted. At SkS, there is no discussion of scientific topics outside of the party line, so to speak. They rationalize their exclusivity and controlled message, and you may subscribe to the rationalization – you would know – but they are violating the most basic tenets of the scientific method. Another issue with that site is that they use and allow techniques better suited to propagandists than scientists, such as the continual use of highly inflammatory terms to describe any and all who disagree with them.
If those two issues alone do not send up red flags for you, then there is nothing else to say to you: you either have the consciousness to “get it”, or you don’t.
__________
VV says:
J. Philip Peterson says: “Victor Venema It’s not for me to say, but I think you are welcomed here any time.”
“Thank you. I must say, reading many of the above comments, I do not feel welcome, but I had not expected much different. I know WUWT and thus know that there is no argument that will ever convince the locals.
_______________
There are others who have expressed to you the same sentiments as J.P.P., in this thread.
Per one of my previous points, the same could not be said of SkS, where any counter point would never see the light of day. You realize that, don’t you?
Your words highlighted above are a blatant attack on the readers, here at WUWT. You have made them over and over, but here you still are. If you do not feel as welcome as you think you should, then remember that what goes around comes around. If you do not believe that or understand it, then again, there is nothing more to say to you about that topic-“let those who have the ears, hear”.
_______________
VV says:
“I am a scientist”.
_____________
We get it.
If we didn’t get it the first time you said it, we surely did the second time.
One gets the feeling that you are using your declaration in order to add credence to your words… employing a logical fallacy, as it were.
There are many here with far greater skills than I possess. As for me, I’m merely an engineer by training and not a scientist. I can read the papers and do the math… I can keep up. As such, I must say that your own lack of reading comprehension and faulty logic leaves much to be desired; a case in point:
You insist that urbanization (UHI) and station siting have nothing to do with the discussion. Instead, they are really central to the discussion- why homogenize the data, otherwise?

beng
October 16, 2012 7:02 am

Victor says:
The period of urbanization is typically in the order of 30 years. After this there is no longer a bias in the trend, the temperature just has a fixed constant bias, which does not affect trend estimates.
Ridiculous. The “period” of urbanization for almost any site other than truly rural is as long as the whole record or longer to varying degrees. Urbanization hasn’t stopped or reversed anywhere I know of in the US or elsewhere. A few exceptions would be insignificant.

October 16, 2012 7:50 am

Luther Wu says: “VV says: “I am a scientist”.”
I am, but I never wrote the sentence you quote. How are the reader supposed to trust you, if you cannot even cite right?
Luther Wu says: “You insist that urbanization (UHI) and station siting have nothing to do with the discussion. Instead, they are really central to the discussion- why homogenize the data, otherwise?”
The post is not about urbanization. If you were interested in learning more, you would pick my brain about homogenization; that is a topic where I can give qualified answers. It would be better to discuss urbanization with someone more knowledgeable. If the atmosphere here were more welcoming, these other experts would be more willing to answer.
There are so many more sources of inhomogeneities in climate data; there has been so much economic and technical development in the last centuries. However, somehow the “sceptics” blogs act as if urbanization is the only problem. One gets the impression that there is a pattern here and that urbanization is a favoured topic because it is the main inhomogeneity that leads to an artificial warming; most other inhomogeneities lead to an artificial cooling of recent temperatures relative to past ones.
beng says: “Ridiculous. The “period” of urbanization for almost any site other than truly rural is as long as the whole record or longer to varying degrees. Urbanization hasn’t stopped or reversed anywhere I know of in the US or elsewhere. A few exceptions would be insignificant.”
Urbanization of the complete city can continue for most cities, but the relevant urbanization is the one of the region around the station. The climate trend in the centre of London and Vienna is about the same as the rural trend outside the city. The temperature inside the city is higher due to the UHI effect, but apparently, the effect is no longer getting stronger.

Luther Wu
October 16, 2012 8:33 am

Victor,
You can certainly make the case that what I spoke to you about was not an exact quote.
What I said was a representation of what you said. You know that.
I think you would have enjoyed the ‘how many angels on the head of a pin’ thing from earlier times.
‘Citing right’ has nothing to do with the substance of what was said.
“How are (sic) the reader supposed to trust you, if you cannot even…” address the substance of the discussion?

Luther Wu
October 16, 2012 8:47 am

Victor Venema says:
October 16, 2012 at 7:50 am
“It would be better to discuss urbanization with someone more knowledgeable. If the atmosphere here were more welcoming, these other experts would be more willing to answer.
__________________
Do you mean, “more welcoming” as in the manner that we would be welcome at that place to which you keep referring?
Or, what did you mean?
Was your purpose just to launch yet another verbal attack at us?
Those “other experts” can speak for themselves, n’est-ce pas?
Can their assessments withstand the real scrutiny which they would undergo, here?

October 16, 2012 9:23 am

laterite (David Stockwell),
I read with great interest your discussion of finding an illogical circularity in the commonly used homogenization techniques of time series data on surface temperature.
I am considering what kind of incorrect premises and subconscious biases could explain how both NOAA (GHCN) and NASA (GISS) could embrace a process like homogenization with its logical circularity faults.
Their apparent uncritical use of the homogenization techniques appears systemic in origin, not the result of a single dominant person. Therefore, a culture of uncritical tolerance likely existed for a method whose results were considered acceptable. So, why did they consider acceptable the warming bias that was somehow built into the illogical circularity of the homogenization process?
My thought is they knew of the warming bias and they possessed an ‘a priori’ premise that there must be that kind of warming from AGW. So I am led to think they embraced the process with its warming bias because it showed them what they already assumed / believed / desired to exist.
That is classic confirmation bias. Remember, this is NOAA & NASA having such a bias not a lone scientific researcher. There appears to be a disturbing problem in the climate science community’s ability to self correct systemic faults in several major US government scientific institutes.
The climate science community has been lazy. They need to revitalize themselves to take on some serious self-correction of scientific faults in some major US government scientific institutes.
John

beng
October 16, 2012 9:23 am

***
Victor Venema says:
October 16, 2012 at 7:50 am
The climate trend in the centre of London and Vienna is about the same as the rural trend outside the city.
***
Even if I accept that, there ain’t many truly rural sites near London or Vienna. A comparison of American cities that still have at least some such sites near them would be most instructive. Wash DC, New York, etc, wouldn’t fit that description as suburbs have spread out for scores of miles.
If a core of a large city is relatively unchanged, surrounding/spreading suburbs are still going to increase UHIE there, tho at a lesser pace. Do you think, for example, 1-2C heated air immediately upwind of a city-core isn’t going to affect it? I agree the effect would be diluted somewhat, but certainly not completely.

October 16, 2012 12:28 pm

@beng. The region that can influence the temperature at the station through the UHI effect is limited. Air unaffected by the UHI is continually mixing in from above. Furthermore, the higher temperatures lead to stronger gradients and thus stronger cooling by heat radiation, turbulence and convection.
I know of an empirical and a recent modelling study that showed that the heated air from Utrecht (a city in The Netherlands) can affect the temperature at De Bilt (the home of the Dutch weather service) if the wind is right. That is a distance of 3 kilometres. That may be a good estimate of a typical footprint around a station (the area that can influence the temperature). If the footprint were smaller, the modelling study would not have found an effect. If a typical footprint were much larger, it would have been obvious that Utrecht is a problem and that De Bilt should be treated as an urban station, and these studies would not have been needed.

laterite
October 16, 2012 2:11 pm

Whitman: I tend to agree. While many methods in use are superficially plausible, such as selecting proxies by correlation and reference homogenization, close examination reveals the logical flaws, and testing on simulated data shows high false alarm rates. While it is possible to do the analysis without circular reasoning, the climate community shows little interest in improving the fundamentals of their analysis and are satisfied with empirical testing of new, more ad-hoc methods, especially if the methods appear to enhance global warming. The homogenization methods in use now are a mess of ad-hockery. Some methods such as pairs analysis seem to significantly diminish the false alarm rate, but my interest would be in “why?” and whether that fits into a standard theoretical framework as recognized by statisticians.

laterite
October 16, 2012 2:47 pm

@Victor Venema: You said: “I think the simple graphical examples on my blog on the fundamentals of relative homogenization show that this is not circular reasoning.” Your simple example does not show it’s not circular reasoning at all.
You responded previously that you think it perfectly justified to adjust the trend of any deviant temperature record to match the trend of some other sites. So I presume that when you go to the official Australian Bureau of Meteorology website and look up Deniliquin, for example, you would believe the strongly increasing trend and say that it is warming, even if in reality, a perfect thermometer record for the last 100 years would show a cooling trend?
Most people would have a less charitable take on that, and question the veracity of the BoM. You say the data needs to be changed according to the context of the study – biodiversity, climate change, whatever. Perhaps the data should have a label “Warning – this data only to be used for climate change studies.” \sarc off.

October 16, 2012 3:45 pm

laterite (David Stockwell),
The reason your false alarm rate is high is that you did not use a reference that is representative of the regional climate around the station you are testing. Using the mean temperature series for all of Australia as a reference is simply wrong.
In a recent paper where the most used and the most advanced homogenization algorithms were blindly validated, we also computed the false alarm rate (probability of false detection, POFD; see Table 9 of the article). Except for two methods they were at or below the traditional 5% level.
I do know why pairwise algorithms have a lower false alarm rate (FAR). The FAR is about 5% for the detection of breaks in the pairs of stations. After the detection in the pairs, you still have to attribute the break to one of the stations of the pair. This attribution is only possible if a break is found in multiple pairs. For the pairwise homogenization method of NOAA, the standard condition is that two breaks have to be found on exactly the same date. As a consequence, the FAR for this algorithm was below 1%.
No one prevents statisticians from working on homogenization of climate networks. I know two of them, it would be nice if more would devote their time to this beautiful statistical problem.
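To illustrate the attribution step described above, here is a rough Python sketch (not NOAA’s actual code; the network, noise levels and the inserted break are all invented). One candidate break is detected in each station-pair difference series, and a break is attributed to a station only when at least two of its pairs agree on the date; coincident detections single out the offending station while isolated detections are discarded.

```python
import numpy as np
from collections import Counter
from itertools import combinations

rng = np.random.default_rng(2)
n_years, n_sta = 60, 5
climate = rng.normal(scale=0.3, size=n_years)         # shared regional weather

# Five stations sharing the regional signal plus independent local noise;
# station 2 gets a +1.0 C step (e.g. a relocation) in year 30.
data = {s: climate + rng.normal(scale=0.15, size=n_years) for s in range(n_sta)}
data[2][30:] += 1.0

def break_year(d, min_seg=5):
    """Year splitting d into the two segments whose means differ the most."""
    gaps = [abs(d[i:].mean() - d[:i].mean()) for i in range(min_seg, len(d) - min_seg)]
    return min_seg + int(np.argmax(gaps))

# One candidate break per station pair; count which (station, year)
# combinations are supported by more than one pair.
votes = Counter()
for a, b in combinations(range(n_sta), 2):
    yr = break_year(data[a] - data[b])
    votes[(a, yr)] += 1
    votes[(b, yr)] += 1

attributed = sorted((sta, yr) for (sta, yr), n in votes.items() if n >= 2)
# Expect station 2 near year 30; an occasional extra entry from chance
# coincidences is the small residual false-alarm rate this step suppresses.
print("breaks attributed (station, year):", attributed)
```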

October 16, 2012 3:55 pm

laterite (David Stockwell),
I still have to see the first one hundred year time series without any inhomogeneities.
To study the variability of the Australian mean temperature, I would definitely prefer to use the homogenized data of the BoM over using one single series in a region known to be not representative for all of Australia because irrigation was introduced in the period of analysis.
Did you already read the article of Blair Trewin on the new homogenized daily Australian dataset? Just published online. Worth reading.

cohenite
October 16, 2012 4:59 pm

Victor Venema: The false alarm rates I quote in the paper are from Matthew J. Menne and Claude N. Williams, Homogenization of Temperature Series via Pairwise Comparisons, Journal of Climate, 22(7):1700–1717, April 2009, not my analysis. It is they who found FARs around or above 50% for reference homogenization. My ‘extended abstract’ is an example to show that deviant records are adjusted to the trend of the reference, whatever it is. The use of Australia is not important.
The process is like this:
1. The target record is compared with a reference.
2. Because the probability of finding a jump in the difference between the target and the reference is greater than the probability of finding a jump in the target alone, more jumps are found (high FARs).
3. After finding the break, if the target is adjusted relative to the reference, then the trend of the target is coerced towards the reference. Even if it is a true break, the trend is biased towards the reference.
Because the trends of the targets have been determined by “peeking” at the overall network, one cannot then make a reliable statement about the overall trend of the network. That would be circularity.
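To make the coercion in steps 1–3 concrete, here is a toy numerical sketch. It is not the GHCN, BoM or any published algorithm, and all magnitudes are invented: a synthetic station with a genuine cooling trend is repeatedly “adjusted” by the largest step found in its difference from a warming reference, and its trend is dragged toward the reference trend.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(100)

# Target station S: genuine cooling of -0.5 C/century plus weather noise
S = -0.005 * years + rng.normal(scale=0.3, size=years.size)
# Regional reference R: warming of +0.8 C/century plus smaller noise
R = 0.008 * years + rng.normal(scale=0.1, size=years.size)

def trend(y):
    """OLS trend in degrees per century."""
    return np.polyfit(years, y, 1)[0] * 100

def largest_step(d, min_seg=5):
    """Split point where the before/after means of d differ the most."""
    best_i, best_gap = min_seg, 0.0
    for i in range(min_seg, len(d) - min_seg):
        gap = abs(d[i:].mean() - d[:i].mean())
        if gap > best_gap:
            best_i, best_gap = i, gap
    return best_i

adjusted = S.copy()
for _ in range(3):                      # a few rounds of "break correction"
    D = adjusted - R                    # difference series, target minus reference
    i = largest_step(D)
    step = D[i:].mean() - D[:i].mean()
    adjusted[:i] += step                # shift the early segment to match R

print(f"trend of reference R: {trend(R):+.2f} C/century")
print(f"trend of raw S:       {trend(S):+.2f} C/century")
print(f"trend of adjusted S:  {trend(adjusted):+.2f} C/century (pulled toward R)")
```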
Specific methods, like the pairwise approach described in Menne and Williams, may mitigate this problem. I wouldn’t know without much more thought.
Blair Trewin’s ‘adaptation’ of M&W has enough departures from M&W to invalidate the use of M&W as a source, IMHO. I have been going through the ‘wall of words’ on the ACORN study and there are a great many issues I take exception to.
For example, I believe the widespread use of a quadratic to fit Australian temperatures in the ACORN reports is unjustified, as robust empirical fluctuation analysis shows there is no significant change in trend over the last 100 years. It’s transparent alarmism, IMHO, to use an unjustified quadratic, as the quadratic is suggestive of accelerating warming.
I can’t possibly get rebuttals published for every infraction, so I try to put a stake through the heart of the problem.
You seem like a reasonable person. Perhaps you could tell me offline if an approach I am thinking of that is simple and avoids circularity has been tried before?
Posted for:

David Stockwell

October 16, 2012 5:07 pm

laterite says:
October 16, 2012 at 2:11 pm
[ . . . ] Some methods such as pairs analysis seem to significantly diminish the false alarm rate, but my interest would be in “why?” and whether that fits into a standard theoretical framework as recognized by statisticians.

– – – – – – –
laterite / David Stockwell,
Your profession is interesting. I think there is a good future in statistical auditing and statistical consulting on climate science research projects.
I am jealous. : )
John

markx
October 16, 2012 6:44 pm

Victor Venema says: October 15, 2012 at 8:24 am
“…Homogenisation is used to be able study large scale climate variability in a more accurate way. Removing the too low trend for an irrigated region, is what homogenisation is supposed to do. Just as it should remove the too high trend in case of urbanisation….”
It seems it should be very important that the original records are maintained and are readily accessible. The fact that regional records are first homogenized, then averaged is worrying, when they could perhaps simply be averaged.
The potential for ‘interpretation bias’ should be considered.

markx
October 16, 2012 7:24 pm

Victor Venema says: October 16, 2012 at 2:19 am

“…..Another option would be to remove urban areas from the dataset. …..
……The disadvantage of this approach is that you have to assume that your information on the urban/rural character of the stations is perfect and that you remove more data than needed, as only a period of the data will typically be affected by urbanization and the rest of the data is useful and helpful in reducing the errors.
All in all, homogenizing the data is more reliable and accurate than removing part of the data…..”

Victor, I appreciate the good discussion.
But I feel (IMHO) the above conclusion fails on logic. If we don’t know the degree of effect from urbanization, or the degree to which rural stations are really tracking the temperature, then any adjustment must simply be an estimate, or best guess.
I’m not sure you can say one is better than the other.
My case would be that the original records should be meticulously retained and readily available. The very word “homogenization” is worrying.

Tilo Reber
October 16, 2012 9:25 pm

Homogenization simply takes UHI and mixes it into the record. And with the majority of stations being subjected to UHI, the result is to drive the temperature record up. But it should never be mixed into the record. It should be removed. A record with UHI mixed in will yield a higher trend than one with UHI removed. I’ve been making this point for years.

October 16, 2012 9:48 pm

Hi Victor Venema,
I’m one of the non-“scientists” whose comments are tolerated here. I ask questions, make wise cracks, make statements that I hope might be sometimes insightful. Sometimes my questions are stupid questions that can annoy people who are more informed than I. (Sorry, Ric Werme, about the ‘stream of consciousness’ questions about Curiosity and water on Mars.) But such comments as I usually make are tolerated. People attempt to answer my questions. (Ric directed me to a NASA website.)
Comment on whatever site you like. Here you’ll be “put to the test” and have to put up with wise cracks from people like me if you’re a Gorephile or a Hansenite or a Mannequin. If your comments here are deleted, it won’t be because you disagree with the blog’s “consensus” but because you’ve been consistently and persistently disagreeable.
PS Where I work on one particular day, at one particular moment, I had access to and checked 3 different temperature sensors. One read 87*F. One read 89*F. One read 106*F. None of them is more than 10 miles from the other at most. All were within 4 or 5 miles of me. One was just a few hundred yards away. Homogenized, what was the temperature where I was that day?

October 16, 2012 10:38 pm

Victor writes, “To study the variability of the Australian mean temperature, I would definitely prefer to use the homogenized data of the BoM over using one single series in a region known to be not representative for all of Australia because irrigation was introduced in the period of analysis.”
I would not prefer to use either method prior to determining which method is more reliable.
The only way to validate whether the homogenized data of the BoM is better is to do a very careful analysis of the raw data for a large number of sites and account specifically for the factors impacting the temperature for each one by hand – a manual homogenization procedure. Preferably with more than one person analyzing each set of data independently. Then test the homogenization algorithm against the result to see what the differences are between the data that was manually homogenized and the data from the homogenization algorithm. Analyze any differences between the results of both methods and determine what causes those differences, if any.
After a large study like the above is done, then I think it would be possible to determine which data is preferable.

Evan Thomas
October 16, 2012 10:59 pm

Tallbloke has a case study on the temp. records of Alice Springs, a small rural town in pretty much the centre of Australia. The records had been adjusted by the BoM. Records of the nearest (which are many hundreds of km. away) even smaller towns were cited. Worth a look if this matter is of interest to you. Cheers from now sunny Sydney.

richardscourtney
October 17, 2012 1:11 am

BobG:
At October 16, 2012 at 10:38 pm you write

Victor writes,

To study the variability of the Australian mean temperature, I would definitely prefer to use the homogenized data of the BoM over using one single series in a region known to be not representative for all of Australia because irrigation was introduced in the period of analysis.

I would not prefer to use either method prior to determining which method is more reliable.

Yes! Absolutely!
I point out that Victor Venema has not answered my post addressed to him at October 16, 2012 at 5:26 am. It included this

No data is ever “perfect”. So, in a real science “you” determine that data emulates reality with adequate reliability, accuracy and precision to provide sufficient confidence that the data is adequate for conclusions to be drawn from it.
It is pseudoscience in its purest form to claim that imperfections in the data should be ignored if the data can provide a desired “answer”.
Therefore, it is necessary for the researcher to provide evidence that the data he/she uses has the necessary reliability, accuracy and precision for his/her conclusions to be valid. In the case of data homogenisation that has not been done. Indeed, the different research teams who provide the various global (and hemispheric) temperature data sets use different homogenisation methods and do not publish evaluations of the different effects of their different methods.
In the above article David Stockwell provides several pieces of evidence which demonstrate that GHCN homogenisation completely alters the empirical data in some cases such that the sign of temperature change is reversed; e.g. compare his figures numbered 1 and 2. That altered data is then used as input to determine a value of global temperature.
It is up to those who conduct the homogenisation to demonstrate that such alterations improve the reliability, accuracy and precision of the data. Claims that such alterations should be taken on trust are a rejection of science.

Richard

October 17, 2012 4:29 am

Dear David Stockwell, did you read Menne and Claude N. Williams, “Homogenization of Temperature Series via Pairwise Comparisons”? Or did you get this chunk of information from a “sceptic” blog trying to mislead the public by selective quoting?
If you read the paper, you will see that these false alarm rates (FAR) are for the application of the homogenization method SNHT to a very difficult case. (You could even have cited a FAR of 100% for the case without any breaks, but I guess in that case people would have started thinking.) This was a case in which the regional climate signal used as reference was computed from only 5 stations with strong inhomogeneities (up to 2 times the noise). In this case the simple SNHT method interprets the inhomogeneities in the composite reference (a reference based on multiple stations) as breaks in the station.
SNHT was developed for manual homogenization, to guide a climatologist working carefully in the way described by BobG above. Such a climatologist would typically use more stations to compute the reference, at least 10. He would make sure to select stations that do not contain large inhomogeneities and would first homogenize the stations with the largest inhomogeneities.
The goal of Menne and Williams was to develop an automatic homogenization algorithm, because the climate network in the USA is too large to perform the homogenization by hand. Furthermore, as they work at NOAA, the climate sceptics would not accept the careful manual work suggested by BobG and would claim that the malicious climatologist inserted the climate trend by homogenization and should use automatic methods, which can be tested independently. You cannot have it both ways. What you can do is compare the results of manual homogenization with the results of automatic methods, and then you will see that the results are very similar. For such an automatic algorithm, Menne and Williams did not see SNHT with a composite reference as a good solution and thus preferred the pairwise method, which indeed produced a very small FAR.
By the way, the FAR is not a good indication of the quality of a homogenization method. If the break is detected in the year before or after the real break, this detection would be counted as a false alarm, whereas for the homogenization of the data and their trends this is no problem. Especially for small breaks, the uncertainty in the date of the break is larger. Thus a very good method, which is able to detect many small breaks, may well have a high FAR. It is better to compute how accurately the true trend and the decadal variability are reproduced in the homogenized data. The FAR is interesting for understanding how the homogenization algorithms work, but it is not a good indicator of the quality of the homogenized data.
Just because Trewin improved/changed the pairwise algorithm does not mean that either version is wrong. Both algorithms improve the quality of raw data. Maybe the new version of Trewin is more accurate, maybe it also just fits better to the statistical characteristics of the Australian climate network.

October 17, 2012 5:15 am

markx says: “Victor Venema: “…All in all, homogenizing the data is more reliable and accurate than removing part of the data…..”
But I feel (IMHO) the above conclusion fails on logic. If we don’t know the degree of effect from urbanization, or the degree to which rural stations are really tracking the temperature, then any adjustment must simply be an estimate, or best guess. I’m not sure you can say one is better than the other.”

That is the advantage of homogenization: we do not have to know in advance how strong the effect of urbanization in the city was. We see the magnitude of this in the difference time series between the station in the city and the surrounding rural stations.
Guessing which stations are affected by urbanization, not only now (where surfacestations could help in the USA), but during their entire lifetime, is difficult and error prone.
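A minimal sketch of that idea, with invented numbers: an urban station shares a regional climate signal with three rural neighbours but also carries a gradual urban warming ramp, and the size of the ramp can be read off the urban-minus-rural difference series without being known beforehand.

```python
import numpy as np

rng = np.random.default_rng(7)
years = np.arange(80)
regional = 0.006 * years + rng.normal(scale=0.25, size=years.size)  # shared climate

# Three rural neighbours: regional signal plus independent local noise
rural_mean = np.mean(
    [regional + rng.normal(scale=0.15, size=years.size) for _ in range(3)], axis=0)

# Urban station: same regional signal plus a gradual 0.9 C urbanization ramp
urban = regional + 0.9 * years / years.size + rng.normal(scale=0.15, size=years.size)

# The difference series cancels the shared climate and exposes the urban ramp
diff = urban - rural_mean
uhi_estimate = np.polyfit(years, diff, 1)[0] * years.size
print(f"urban warming estimated from the difference series: {uhi_estimate:+.2f} C "
      f"(value built into the toy data: +0.90 C)")
```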

October 17, 2012 5:25 am

Gunga Din says: “I’m one of the non-“scientists” whose comments …”
Most people are non-scientists. What matters is the quality of the arguments.
Gunga Din says: “PS Where I work on one particular day, at one particular moment, I had access to and checked 3 different temperature sensors. One read 87*F. One read 89*F. One read 106*F. None of them is more than 10 miles from the other at most. All were within 4 or 5 miles of me. One was just a few hundred yards away. Homogenized, what was the temperature where I was that day?”
After homogenization the temperatures at these stations would still be different. Homogenization makes the data temporally more consistent; it does not average (or even smooth, as Anthony falsely claims) the observations of multiple stations. Having so many stations close together is great. That means that they will be highly correlated (if they are of good quality; is the one reading 106F on a wall in the sun?) and that the difference time series between the stations will contain only a little weather noise (and some measurement noise). Thus it should be possible to see very small inhomogeneities and correct them very accurately.

October 17, 2012 5:34 am

BobG says: “After a large study like the above is done, then I think it would be possible to determine which data is preferable.”
I am sure that using homogenized data is better than using a single station to study the climate of a continent. Especially if the temperature at this single station is reduced by the introduction of irrigation during the study period.
Validation studies of homogenization methods are regularly performed. A recent blind benchmarking study of mine was very similar to the way you like the validation of homogenization methods to be done. Any comments on this paper are very welcome. We plan to perform similar validation studies in future. Thus if you have good suggestions, we could implement them in the next study.

October 17, 2012 5:42 am

Richardscourtney: “I point out that Victor Venema has not answered my post addressed”
Dear Richard, I have a day job and am not a dog that jumps through hoops. If you have any specific comments or questions, and not just misquotations of my comments and general accusations, which are simply untrue from my perspective, you have a better chance of getting an answer.
Richardscourtney: “It is up to those who conduct the homogenisation to demonstrate that such alterations improve the reliability, accuracy and precision of the data. Claims that such alterations should be taken on trust are a rejection of science.”
Could you maybe indicate specifically in which way you see my validation study as insufficient? Maybe I could then point you to further studies that also included those aspects or consider them in future studies. Such a specific comment would be more helpful.

richardscourtney
October 17, 2012 7:14 am

Victor Venema:
Thankyou for your post addressed to me at October 17, 2012 at 5:42 am which answers a point I first put to you in my post at October 16, 2012 at 5:26 am. And I apologise if my pointing out you had overlooked my post but had answered four subsequent posts “interrupted [your] day job”.
My post said

In the above article David Stockwell provides several pieces of evidence which demonstrate that GHCN homogenisation completely alters the empirical data in some cases such that the sign of temperature change is reversed; e.g. compare his figures numbered 1 and 2. That altered data is then used as input to determine a value of global temperature.
It is up to those who conduct the homogenisation to demonstrate that such alterations improve the reliability, accuracy and precision of the data. Claims that such alterations should be taken on trust are a rejection of science.”

You have replied saying

Could you maybe indicate specifically in which way you see my validation study as insufficient? Maybe I could then point you to further studies that also included those aspects or consider them in future studies. Such a specific comment would be more helpful.

Your “validation study” says

To reliably study the real development of the climate, non-climatic changes have to be removed.

OK. But if that is valid then such “removal” must increase the reliability, accuracy and/or precision of the data for the stated purpose.
In the example I cited from Stockwell’s article, the “removal” has had extreme effects (e.g. changing measured cooling into warming) over a large area. It is not obvious how or why “non-climatic changes” would have – or could have – provided such a large difference as exists between the measured data and the homogenised data. The nearest to an explanation in your “validation study” is provided by your comment on your intercomparison study which says

Some people remaining skeptical of climate change claim that adjustments applied to the data by climatologists, to correct for the issues described above, lead to overestimates of global warming. The results clearly show that homogenisation improves the quality of temperature records and makes the estimate of climatic trends more accurate.

But I am not interested in PNS nonsense about “quality”: I am interested in the scientific evaluations of data which are reliability, accuracy and precision. And I fail to understand how it is possible to know that an “estimate” is “more accurate” when there is no available calibration for the estimate.
In other words, my question is
(a) What “non-climatic changes” would require such large alteration to the data of the example?
and
(b) How does the “removal” of those “non-climatic changes” affect the reliability and the accuracy and the precision of the data?

I have failed to find anything in your “validation study” which hints at an answer to these basic questions which apply to all homogenised data and not only to the example.
I await your answer and thank you for it in anticipation.
Richard

richardscourtney
October 17, 2012 7:43 am

Victor Venema:
As an addendum to my post addressed to you at October 17, 2012 at 7:14 am, in fairness I think I should be clear about “where I am coming from”. This is explained in the item at
http://www.publications.parliament.uk/pa/cm200910/cmselect/cmsctech/memo/climatedata/uc0102.htm
and especially its Appendix B.
Richard

October 17, 2012 7:56 am

richardscourtney says:
October 17, 2012 at 7:14 am
In the above article David Stockwell provides several pieces of evidence which demonstrate that GHCN homogenisation completely alters the empirical data
=======
Agreed.
It is well established by study after study that humans are incapable of acting without bias. Our sub-conscious drives us to make mistakes in the direction of our beliefs, and such mistakes are incredibly difficult for us to recognize.
Thus, when a methodology is reviewed by one’s peers, if the peers have similar beliefs to your own, they will tend to miss your errors. If the peers have opposing views, they will tend to catch your errors.
Thus, the Precautionary Principle argues that if one wants to be sure that one’s work is correct, it should always be peer reviewed by someone with opposing beliefs. If they cannot spot an error, then it is likely there is no error.
However, if someone with similar beliefs peer reviews your work, it really says nothing about the quality of your work, because the reviewer is likely to miss the same mistakes as the author.
Unfortunately, Climate Science has a long history of seeking like minded reviewers, which has introduced substantial methodology error into the field, undermining the credibility of the results.

October 17, 2012 8:17 am

The example above, of Australia before and after temperature homogenization, clearly shows that the methodology is distorting the results, not improving them. The long term cooling trend in the interior of Australia suddenly becomes a warming trend. A mild warming in the north east suddenly becomes an intense hot spot. The problem is that the adjustments are feeding back into the adjustments, increasing the error rather than reducing it.
On this basis Australians are facing a massive CO2 tax, which will force them to export their coal to China at reduced prices, rather than use it at home to produce low cost electricity. The Chinese will say thank you very much, burn the Ozzie coal to produce CO2, and make a pile of money in the process. All paid for by the Australian tax payer.
Shows what a few dollars invested in the right places can accomplish over time. The Chinese are turning Australia into their vassal state without ever firing a shot.

October 17, 2012 8:32 am

richardscourtney says: “In the above article David Stockwell provides several pieces of evidence which demonstrate that GHCN homogenisation completely alters the empirical data”
Ferd Berple, thank you for repeating that statement; I missed that one. That is a clear statement and thus lends itself to an answer, and it is a statement which is obviously wrong. Stockwell studied SNHT using a composite reference (computed the wrong way) and the GHCN uses a pairwise homogenization method.

October 17, 2012 8:55 am

richardscourtney: “In the example I cited from Stockwell’s article, the “removal” has had extreme effects (e.g. changing measured cooling into warming) over a large area.”
Stockwell used the wrong reference and thus corrupted that data. Thus you cannot make any inference based on his study about people using homogenization methods the way they are supposed to be used.
That also answers your question: “(a) What “non-climatic changes” would require such large alteration to the data of the example?”
There are many more examples of non-climatic changes mentioned on my blog. Another example is mentioned in a paper I am just reading by Winkler on the quality of thermometers used before 1900. The glass had a different chemical composition at the time and thus a tendency to shrink in the first few years, which led to too high temperatures, by about half a degree. This problem was discovered in 1842, long before post normal science.
richardscourtney: “But I am not interested in PNS nonsense about “quality”: I am interested in the scientific evaluations of data which are reliability, accuracy and precision.”
If you are really interested, then read the article on the validation study. You will find the root mean square error (which the normal newspaper reader calls quality; you may call it accuracy) in the trends in the raw data and in the homogenized data. You will see that after homogenization the errors are much smaller than in the raw data for temperature, especially for annual mean temperature. You will also see that the size of the remaining error is small compared to the trend we had in the 20th century, and that the uncertainty in the trends in a real dataset will be smaller because metadata (station histories) are used to make the results more accurate and because the global mean temperature averages over many more stations.
That also answers your question: “(b) How does the “removal” of those “non-climatic changes” affect the reliability and the accuracy and the precision of the data?”

richardscourtney
October 17, 2012 9:08 am

Victor Venema:
Your comment at October 17, 2012 at 8:32 am says

richardscourtney says: “In the above article David Stockwell provides several pieces of evidence which demonstrate that GHCN homogenisation completely alters the empirical data”
Ferd Berple, thank you for repeating that statement; I missed that one. That is a clear statement and thus lends itself to an answer, and it is a statement which is obviously wrong. Stockwell studied SNHT using a composite reference (computed the wrong way) and the GHCN uses a pairwise homogenization method.

OK. I accept your statement saying
“Stockwell studied SNHT using a composite reference (computed the wrong way) and the GHCN uses a pairwise homogenization method,”
I mention but shall ignore that it has taken until now for you to have noticed what you now say is a fundamental flaw in Stockwell’s article, and that you failed to notice my statement until after I had posted it three times and Ferd Berple commented on it.
Much more important is what does the GHCN “pairwise homogenization method” do to the data and what are the answers to my questions with respect to the effect(s) of that method?
Richard

October 17, 2012 9:14 am

Richard: “Much more important is what does the GHCN “pairwise homogenization method” do to the data and what are the answers to my questions with respect to the effect(s) of that method?”
Science can only answer specific questions. If you ask it so generally, all I can answer childishly is that the pairwise homogenization method homogenizes the data. And that the effect is that on average the trend in the homogenized data is closer to the true trend than the trend in the raw data. The same goes for natural climatic decadal variability.

Reply to  Victor Venema
October 17, 2012 9:22 am

“…on average the trend in the homogenized data is closer to the true trend than the trend in the raw data.”
That’s wronger than wrong based on what we learned:
http://wattsupwiththat.files.wordpress.com/2012/07/watts_et_al_2012-figure20-conus-compliant-nonc-noaa.png
And no, I’m not interested in your protests about this specifically…because this shows up in many surface datasets after homogenization is applied.
The only thing homogenization (in its current use in surface data) is good at is smearing around data error so that good data is polluted by bad data. The failure to remove bad data is why homogenization does this. If there was quality control done to the data to choose the best station data, as we have done, then it wouldn’t be much of an issue. Homogenization itself is valid statistically, but as this image shows, in the current application by climate science it makes muddy water out of clean water when you don’t pay attention to data quality control.

October 17, 2012 9:31 am

[snip – reword that and I’ll allow it – Anthony]

phi
October 17, 2012 9:36 am

richardscourtney,
“what does the GHCN “pairwise homogenization method” do to the data and what are the answers to my questions with respect to the effect(s) of that method?”
In principle, the GHCN treatment only removes the discontinuities. The effects of this treatment are described in the link I’ve given above (Hansen et al. 2001):
“However, caution is required in making adjustments, as it is possible to make the long-term change less realistic even in the process of eliminating an unrealistic short-term discontinuity. Indeed, if the objective is to obtain the best estimate of long-term change, it might be argued that in the absence of metadata defining all changes, it is better not to adjust discontinuities.”

richardscourtney
October 17, 2012 9:36 am

Anthony Watts:
Thankyou for your post at October 17, 2012 at 9:22 am. It removes the need for me to answer the post at October 17, 2012 at 9:14 am from Victor Venema.
I only add that Victor Venema has evaded my question and that the issue of “the wrong method” is obfuscation because – as I said – the different teams producing the global (and hemispheric) temperature time series each uses a different homogenisation method and they do not report the advantages/disadvantages of their methods.
Richard
REPLY: The biggest problem is that none of the keepers of these datasets show much interest in the measurement environment and its effects on the final product. If this were forensic science used in court, the data and the conclusions from it would be tossed out due to contamination. But in climate science, such polluted data is considered worthwhile. – Anthony

richardscourtney
October 17, 2012 9:42 am

phi:
re your post at October 17, 2012 at 9:36 am.
Yes, I know. Thankyou.
I repeat your quote from Hansen et al. to ensure that nobody misses it and because it is probably the only agreement I have with Hansen.
“However, caution is required in making adjustments, as it is possible to make the long-term change less realistic even in the process of eliminating an unrealistic short-term discontinuity. Indeed, if the objective is to obtain the best estimate of long-term change, it might be argued that in the absence of metadata defining all changes, it is better not to adjust discontinuities.”
Richard

richardscourtney
October 17, 2012 9:59 am

Anthony:
re your REPLY to me at October 17, 2012 at 9:36 am.
I have good reason to agree with your statement that “none of the keepers of these datasets show much interest in the measurement environment and its effects on the final product”.
Indeed, this disinterest is not new. In the 1990s I was undertaking a field trip that involved my visiting three African countries and I offered to spend time examining met. stations while there. Phil Jones was not interested in my offer.
Richard

October 17, 2012 12:38 pm

“AndyG55”:
Your claim that the period of urbanization being 30 years is proof of why temperatures have been constant for 15 years seems illogical.
Urbanization is a continual process which has covered only a very small portion of the earth’s surface. It did not begin 30 years ago. Has it stopped?
Higher density and more pavement instead of dirt/gravel could be differences, but factors need to be properly studied.
If there is a significant rate of increase in urbanization and that affects accuracy of many more measurement stations, then there will be a false trend in the data.

October 17, 2012 1:11 pm

Leo Morgan;
Heh. Your critique of “understandability” of prose is on point, but then (as you smell the barn and rush towards your conclusion) you fall victim to Muphry’s Law, and give us your own howler: “the total temperature set of the toother stations”. What do teeth have to do with stations? 😉

October 17, 2012 1:23 pm

AndyG55 says:
October 15, 2012 at 2:00 pm
darn degree signs didn’t work, sorry.

Presuming you’re using a PC to type on, depressing Alt and entering 248 on the numeric keypad gives you ° .
E.g. 1° 2° 3°

October 17, 2012 2:05 pm

Prima facie, if homogenization consistently results in, e.g., adjusting older measurements down and recent measurements up, that implies knowledge of some way in which thermometers and/or methods of reading them were all biased warm in the past and cool in the present. This is so implausible that it would require very thorough and detailed documentation and analysis to justify. Which, of course, is entirely absent.
The conclusion is thus that the desired result is what determines the nature and direction of the adjustments. Which is a Feynman-sin of the highest order.

October 17, 2012 4:45 pm

Victor Venema writes, “Validation studies of homogenization methods are regularly performed. A recent blind benchmarking study of mine was very similar to the way you like the validation of homogenization methods to be done. Any comments on this paper are very welcome. We plan to perform similar validation studies in future. Thus if you have good suggestions, we could implement them in the next study.”
One of the things that I think you should test your method against is the UHI effect. For the purposes of a test, assume 100 years of data from 100 stations. Assume that 10% of the stations have a 0.5 degree upwards bias over the 100 years due to UHI, that another 10% have a 1.0 degree bias, and that a further 10% have a 1.5 degree bias. Assume that the other biases cancel out. Assume that the overall real temperature change is an increase of 0.5 degree C; that is, for the purposes of the test, what the average temperature change of all 100 stations would be if there were no UHI. How well does your homogenization algorithm do in this case? The various percentages and temperature increases should be variables that can be changed, and the test should be done repeatedly to see how well it performs with various percentages of stations affected by UHI. A good performance is one in which the UHI effect is reduced and the end result is more accurate than simply averaging the temperature readings together. Graphing the results to show how well it performs as the overall percentage of stations with UHI increases would be helpful. Basically, find out whether there are points at which the homogenization algorithm performs well and where it performs poorly.
Next do the same test but with the assumption that there are stations with high levels of UHI clustered together in groups of 3 but less than 6 for example.
Those test cases would be a good start.
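As a hedged illustration of the test proposed above, here is a minimal Python sketch of how such a synthetic network might be generated. The station counts, bias magnitudes and the 0.5 °C true trend come from the description in the comment; the 0.3 °C weather noise, the linear shape of the UHI drift and the function name are illustrative assumptions, not anyone’s actual benchmark.

```python
import numpy as np

def make_uhi_test_network(n_stations=100, n_years=100, true_trend=0.5, seed=0):
    """Synthetic annual anomalies for the proposed UHI test.

    All stations share a real warming of `true_trend` deg C over the record.
    10% of the stations pick up an extra 0.5 C of spurious warming over the
    century, another 10% pick up 1.0 C and a further 10% pick up 1.5 C.
    """
    rng = np.random.default_rng(seed)
    years = np.arange(n_years)
    climate = true_trend * years / (n_years - 1)                  # shared true signal
    data = climate + rng.normal(0.0, 0.3, (n_stations, n_years))  # weather noise

    per_group = n_stations // 10
    for group, uhi_size in enumerate([0.5, 1.0, 1.5]):
        idx = slice(group * per_group, (group + 1) * per_group)
        data[idx] += uhi_size * years / (n_years - 1)             # gradual UHI drift
    return years, data, climate

years, data, climate = make_uhi_test_network()
network_mean = data.mean(axis=0)
print("true change over record : %.2f C" % (climate[-1] - climate[0]))       # 0.50 C
print("naive average change    : %.2f C" % (network_mean[-1] - network_mean[0]))  # ~0.8 C
# A homogenization algorithm would pass this test if its adjusted network
# average ends up closer to the true 0.5 C than the naive average does.
```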

October 18, 2012 2:39 am

Let’s hope this comment is allowed. Interesting that a blog that complains about SkS rejects harmless comments itself.
BobG says: “One of the things that I think you should test your method against is UHI effect. “
In my own validation study, we included local trends to simulate the UHI effect.

“In 10% of the temperature stations a local linear trend is introduced. The station and beginning date of the trend were selected at random. The length of the trend has a uniform distribution between 30 and 60 yr. The beginning and the trend length were reselected as often as necessary to ensure that the local trend ended before the year 2000. The size of the trend at the end is randomly selected from a Gaussian distribution with a standard deviation of 0.8 C. In half of these cases the perturbation due to the local trend continues at the end of the trend, e.g. to simulate urbanization, in the other half the station returns to its original value, e.g. to simulate a growing bush or tree that is cut at the end.”

It would be interesting to increase this fraction of stations with local trends to unrealistically high numbers and quantify at which point homogenization algorithms no longer work well. Maybe someone can correct me, but as far as I know such a study has not been done yet. It is on my list.
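For what it is worth, the quoted local-trend perturbation is simple enough to reproduce. Below is a minimal Python sketch under the quoted assumptions (30–60 yr length, Gaussian end size with a 0.8 °C standard deviation, half of the cases persisting); the function name, the noise level in the example network and the simplified condition that the trend merely end before the record does are illustrative assumptions.

```python
import numpy as np

def add_local_trend(series, rng=None):
    """Insert one local linear trend as described in the quoted study:
    random start, uniform length of 30-60 yr, final size drawn from a
    Gaussian with sd 0.8 C. With probability 0.5 the perturbation
    persists after the trend ends (urbanization-like case); otherwise
    the series returns to its original level (e.g. a bush that is cut)."""
    rng = rng or np.random.default_rng()
    n = len(series)
    while True:
        length = int(rng.integers(30, 61))       # 30-60 yr inclusive
        start = int(rng.integers(0, n))
        if start + length < n:                   # trend ends before the record does
            break
    size = rng.normal(0.0, 0.8)
    perturbed = series.copy()
    perturbed[start:start + length] += size * np.arange(1, length + 1) / length
    if rng.random() < 0.5:                       # bias persists (e.g. urbanization)
        perturbed[start + length:] += size
    return perturbed

# Example: perturb 10% of a 100-station, 100-year network of pure noise.
rng = np.random.default_rng(2)
network = rng.normal(0.0, 0.5, (100, 100))
for station in rng.choice(100, size=10, replace=False):
    network[station] = add_local_trend(network[station], rng)
```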

richardscourtney
October 18, 2012 6:29 am

Victor Venema:
At October 18, 2012 at 2:39 am you make the fallacious assertion

Let’s hope this comment is allowed. Interesting that a blog that complains about SkS, rejects harmless comments itself.[no comments from you have been rejected, perhaps you can give times of postings so a follow up can be done . . mod]

WUWT doesn’t, and it posts when, where and from whom a comment has been snipped.
SkS does and makes no mention of having censored comments.
Now, please answer my question that you have repeatedly evaded with pathetic excuses. I remind that it is
When using the method which you claim to be correct and use for homogenisation
(a) What “non-climatic changes” would require such large alteration to the data of the example?
and
(b) How does the “removal” of those “non-climatic changes” affect the reliability and the accuracy and the precision of the data?

You have repeatedly told us you are a scientist and any scientist has such information about the method he/she uses and reports it.
Richard

October 18, 2012 8:00 am

Dear mod,
richardscourtney is citing my remark on the removal of a comment of mine on Watts et al. by Anthony.
Dear Richard,
Question (a) was already answered. Irrigation close to a measurement station leads to a local temperature effect that is not representative for the large-scale climate and is non-climatic in this sense. If it is not a local effect, but multiple stations are affected, then you can keep it. In this example there were multiple stations; Stockwell could have kept the signal, but he chose to use the wrong reference (the mean over all of Australia instead of the mean over the direct neighbours) and consequently removed the cooling effect in this region.
I cannot answer question (b) with a number; this erroneous study was not mine, so you will have to ask Stockwell. Thus may I answer with a return question: you do expect climatologists to remove the temperature trends due to urbanization (an increase in the urban heat island), so why do you see irrigation near a single station as a different case? That sounds inconsistent to me.
In general you could read the validation study I have linked here several times to see that homogenization, the removal of non-climatic changes, improves the accuracy of temperature data. How much the improvement will be depends on the specific case.

phi
October 18, 2012 8:33 am

Victor Venema,
Please, could you tell us by whom Hansen et al. 2001 (http://pubs.giss.nasa.gov/docs/2001/2001_Hansen_etal.pdf) has been refuted, or explain why you did not heed his warnings:
“…if the discontinuities in the temperature record have a predominance of downward jumps over upward jumps, the adjustments may introduce a false warming…”
“However, caution is required in making adjustments, as it is possible to make the long-term change less realistic even in the process of eliminating an unrealistic short-term discontinuity. Indeed, if the objective is to obtain the best estimate of long-term change, it might be argued that in the absence of metadata defining all changes, it is better not to adjust discontinuities.”
This applies exactly to the type of homogenization performed with GHCN.

October 18, 2012 9:10 am

Dear Phi,
I only had a short look at the text, but if I understand it correctly Hansen is talking about homogenization using metadata only (data about data, the station history), not about statistical homogenization. At the time there were no good automatic statistical homogenization methods.
The problem with homogenization using only metadata is that typically only the discontinuities are documented; the gradual changes are not. Discontinuities can be caused by relocations or by changes in the instrumentation or screen. These are the kinds of things that leave a paper trail. Gradual changes are due to urbanization or growing vegetation, which are typically not noted and whose magnitude is not known a priori.
If you only homogenize discontinuities and not the gradual changes you can introduce an artificial trend. Imagine a saw-tooth signal, which does not have a long-term trend: it slowly goes up and after some time jumps down again (multiple times). If you were to remove only the jumps, the time series would continually go up and the trend would become worse.
Thus if you homogenize, you should homogenize all inhomogeneities: the discontinuities, but also the gradual ones. In the above example of a saw-tooth signal the trend would again be flat if you also correct the slow upward parts. That is what Hansen is saying. I fully agree with that.
It is very good to use metadata, for example if the size of the breaks is known from parallel measurements, to adjust these jumps. The time of observation bias corrections are also an example of using metadata to homogenize a climate record. However, you should additionally always perform relative statistical homogenization by comparing a station with its neighbours (in which you can again use metadata to pin down the dates of the breaks).
I hope that answers your question.
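The saw-tooth argument above is easy to put numbers on. Below is a minimal Python sketch with an arbitrary 0.1 °C/yr creep over ten-year teeth; the creep rate, tooth length and record length are illustrative values only, not any real station.

```python
import numpy as np

# Saw-tooth series: creeps up 0.1 C/yr for ten years (e.g. a growing bush),
# then drops 0.9 C back when the obstruction is removed. No long-term trend.
years = np.arange(60)
sawtooth = 0.1 * (years % 10)

# "Homogenizing" only the documented drops (adding each 0.9 C jump back in)
# while leaving the gradual creep untouched:
jump_only = sawtooth + 0.9 * (years // 10)

raw_slope = np.polyfit(years, sawtooth, 1)[0]
adj_slope = np.polyfit(years, jump_only, 1)[0]
print("trend of the raw saw-tooth:      %+.3f C/yr" % raw_slope)   # essentially zero
print("trend after jump-only 'repair':  %+.3f C/yr" % adj_slope)   # ~ +0.09 C/yr, spurious
```

In this toy case, adjusting only the jumps converts a trend-free record into one warming by roughly 0.09 °C per year, which is the point Hansen makes about also correcting the gradual parts.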

richardscourtney
October 18, 2012 11:29 am

Victor Venema:
Thankyou for your answer to me at October 18, 2012 at 8:00 am. Please note that I genuinely appreciate your taking the trouble. Unfortunately, I am underwhelmed by the answer.
You say

Question (a) was already answered. Irrigation close to a measurement station leads to a local temperature effect that is not representative for the large-scale climate and is non-climatic in this sense. If it is not a local effect, but multiple stations are affected, then you can keep it. In this example there were multiple stations; Stockwell could have kept the signal, but he chose to use the wrong reference (the mean over all of Australia instead of the mean over the direct neighbours) and consequently removed the cooling effect in this region.

Thankyou for that. I can see how use of that different reference may avoid the changing of sign in the trend in South Australia. But I am not aware of any “irrigation” that has been conducted over the vast area of the bulk of central Australia. Indeed, I am not aware of any significant anthropogenic effects over the bulk of that great area.
I stress that I do recognise the homogenisation method used in the above article was NOT the method which you apply, but I asked about “When using the method which you claim to be correct and use for homogenisation”. And you point to this in your reply which I quote. But your answer omits important information; i.e.
Does your method not alter the temperature data over the bulk of the central region of Australia and if it does then why?
You also say

I cannot answer question (b) with a number; this erroneous study was not mine, so you will have to ask Stockwell. Thus may I answer with a return question: you do expect climatologists to remove the temperature trends due to urbanization (an increase in the urban heat island), so why do you see irrigation near a single station as a different case? That sounds inconsistent to me.

That is an avoidance of the question because I asked about “When using the method which you claim to be correct and use for homogenisation”.
Of course, it may be that your studies do not cover the regions of Australia and, therefore, you do not have the requested data available to you. Clearly, in that case, you are entitled to say you are not willing to repeat the study using your method merely because I have asked for the data. Indeed, why should you bother?
However, I am surprised if the global data on which you work does not include Australia. And you have not said that is the case. Instead, you say “this erroneous study was not mine, you will have to ask Stockwell” although I asked about “When using the method which you claim to be correct”.
And I do not see any difference between land use change, irrigation and UHI in the context of the putative need for homogenisation. I do not understand why you suggest I am “inconsistent” in this way because I have not made any hint that I could be.
You conclude by saying to me

In general you could read the validation study I have linked here several times to see that homogenization, the removals of non-climatic changes, improves the accuracy of temperature data. How much the improvement will be, depends on the specific case.

I have read it and I quoted from it in my post to you which amended my question in the light of what it says. You say your method “improves the accuracy of temperature data” but I supported Anthony in his rebuttal of that which says homogenisation merely contaminates good data with errors from poor data.
That data contamination is the reason why I am trying to debate a specific case instead of generalisations such as “How much the improvement will be, depends on the specific case”: if the accuracy is reduced in the specific case then the generalisation is shown to be untrue and needs to be shown for each case.
Please note that this is WUWT and, therefore, I am trying to engage in a serious review of the method you espouse. All ideas are subject to challenge here and if you prove your case then I and others here will support it. But you still seem to think WUWT acts like some warmist ‘echo chamber’ where supported generalisations are cheered and not challenged while opposing points are censored and/or demeaned.
Richard

phi
October 18, 2012 12:57 pm

Victor Venema,
Thank you for your answer; it satisfies and delights me. I should add that I did not expect such a response.
Let me start by clarifying two points:
1. Metadata are probably not at the heart of the problem.
2. Techniques of identification and quantification of discontinuities are not at issue, interpretation of jumps is.
I see that you share the views of Hansen et al. 2001 and that, for you, the treatment of discontinuities must imperatively be accompanied by a correction of trends. I think the same. In theory, I have nothing to complain about. In practice, it is something else.
Detection of discontinuities is relatively easy. Attribution of trends to a climatic or non-climatic cause is a problem of an entirely different magnitude.
In practice, national offices do not even attempt this exercise and simply adjust the discontinuities. The results are grossly wrong: in my opinion, according to Hansen et al., and, if I understand correctly, for you as well.
BEST, which does not correct trends but in practice makes implicit adjustments of discontinuities, therefore also provides unusable results.
GHCN homogenizes datasets crudely (the quality is much lower than that of the national offices) and, of course, the correction of trends is excluded. So GHCN series are unusable.
What order of magnitude are we talking about?
To get an idea, we must consider the homogenization of long series, say those continuous over the twentieth century. All those I know of amount to around 0.5 °C per century.
Is it enough? Probably not, but that’s another story.
In any case, I’m very glad that professionals are aware of the homogenization bias issue and I do not doubt that a satisfactory response will soon be made.

laterite
October 18, 2012 1:37 pm

@Victor Venema: Again, the use of Australian temperature is irrelevant. Homogenization would coerce trends towards the price of beans if that was the reference. The coercion of trends is an ‘unavoidable side effect’ of attempting to correct for jumps.

October 18, 2012 2:26 pm

Victor Venema says:
October 17, 2012 at 5:25 am
Gunga Din says: “I’m one of the non-“scientist” who’s comments ..”
Most people are non-scientist. What matters is the quality of the arguments.
Gunga Din says: “PS Where I work on one particular day, at one particulare moment, I had access to and checked 3 different temperature sensors. One read 87*F. One read 89*F. One read 106*F. None of them is more than 10 miles from the other at most. All were within 4 or 5 miles of me. One was just a few hundred yards away. Homogenized, what was the temperature where I was that day?”
+++++++++++++++++
Victor: After homogenization the temperatures at these stations would still be different. Homogenization makes the data temporally most consistent, it does not average (or even smooth as Anthony falsely claims) the observations of multiple stations. Having so many stations close together is great. That means that they will be highly correlated (if they are of good quality; is the one reading 106F on a wall in the sun?) and that the difference time series between the stations will only contain little weather noise (and some measurement noise). Thus it should be possible to see very small inhomogeneities and correct them very accurately.
==============================================================
So siting does matter.
If I understood the gist of Anthony et al., there are 5 basic classes of stations based on the quality of their siting. BEST adjusted the two better siting classes up based on the poorer-quality sites in the 3rd class. Anthony et al. looked at the difference if the 3rd, poorer sites were not used to raise the data from the two better-class sites.
Does anybody care that the better surface temperature data is and has been corrupted by less reliable data? How many trillions are being bet on bad data?
You asked me about the 106* siting conditions. It appears that those sitting behind a desk don’t care that much. They’d raise the 87 and the 89 based on the 106 if they could do so without the adjustment being noticed.
Anthony et al. have noticed and questioned such things. I’m glad. You should be too.

October 18, 2012 3:59 pm

Dear richardscourtney,
I do not have a homogenization algorithm of my own. The people working on homogenization asked me to lead the validation study because I had no stake in the topic. Up to the validation study, I mainly worked on clouds and radiative transfer. A topic that is important for both numerical weather prediction, remote sensing and climate. Thus professionally, I couldn’t care less whether the temperature goes up, down or stays the same.
Nor have I homogenized a dataset. And if I had, I would most likely have homogenized a dataset from my own region and not from the other side of the world. The only people working on the statistical homogenization of a global dataset are your friends from NOAA.
I hope that explains the misunderstanding between the two of us.
For the first part of the answer to question (a) the homogenization method used is irrelevant. Thus the statement: “Irrigation close to a measurement station leads to a local temperature effect that is not representative for the large-scale climate and is non-climatic in this sense. If it is not a local effect, but multiple stations are affected, then you can keep it. ” is still okay.
The post was about a station in an irrigated area. I thought that was what your word “example” referred to.
I know I am at WUWT and that many people here falsely believe without proof that “homogenisation merely contaminates good data with errors from poor data”.

October 18, 2012 4:21 pm

Dear phi,
Attribution of a trend to a climatic or non-climatic cause is a problem if you only have one station. If you have two stations close together they will measure the same large-scale climate. If there were no inhomogeneities, the difference time series of these two stations would be noise, without any trend or jumps. If you see a trend in the difference time series between the two stations, you know that this trend is artificial. By looking at multiple pairs, you can infer which of the stations is responsible for the trend in the difference time series.
If there is a trend in the difference time series, you can correct it by inserting a number of small jumps in the same direction. Thus, even though a relative homogenization method does not explicitly take local trends into account, it will still correct them.
As you say, GHCN cannot homogenize its data as well as the national offices could. The main reason is that the GHCN dataset does not contain as many stations, so the correlations between the stations are lower and consequently the noise in the difference time series is larger. Consequently, you can only detect the larger inhomogeneities in GHCN.
This means, by the way, that the trend in the homogenized GHCN dataset is a bit biased towards the trend in the raw data. As temperatures in the past were measured too high, the trend in the raw data is lower than the trend in the homogenized data. The trend in the real global temperature is thus likely stronger than seen in the homogenized GHCN dataset.
The HadCRU dataset, based on nationally homogenized data, is thus likely better than GHCN. The GHCN approach has the advantage that everyone can study the code of the homogenization algorithm and verify that it works. This transparency may be more important in the Land of the “Skeptics”. It is also always good to be able to compare two methods and datasets with each other.
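A minimal Python sketch of the difference-series idea described above, assuming a made-up regional climate, a 1 °C/century urbanization drift at the candidate station and 0.2 °C weather noise (all illustrative values); the drift is removed here as a single fitted line rather than as the sequence of small jumps an operational algorithm would insert.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(100)
regional_climate = 0.005 * years + 0.2 * np.sin(2 * np.pi * years / 11.0)

# The candidate picks up a gradual 1 C/century urbanization drift;
# the nearby reference shares the same regional climate.
candidate = regional_climate + 1.0 * years / 99.0 + rng.normal(0.0, 0.2, 100)
reference = regional_climate + rng.normal(0.0, 0.2, 100)

# The shared climate cancels in the difference series, so any trend left
# in it is non-climatic and can be removed from the candidate.
difference = candidate - reference
artificial_slope = np.polyfit(years, difference, 1)[0]
adjusted = candidate - artificial_slope * (years - years.mean())

print("non-climatic drift in difference series: %.2f C/century" % (artificial_slope * 100))
print("candidate trend before/after adjustment: %.2f / %.2f C/century"
      % (np.polyfit(years, candidate, 1)[0] * 100,
         np.polyfit(years, adjusted, 1)[0] * 100))
```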

October 18, 2012 4:29 pm

laterite says: “@Victor Venema: Again, the use of Australian temperature is irrelevant. Homogenization would coerce trends towards the price of beans if that was the reference. The coercion of trends is an ‘unavoidable side effect’ of attempting to correct for jumps.”
Homogenization removes statistically significant differences between a candidate station and surrounding reference stations with the same regional climate. The key idea you seem to ignore is that the climate varies slowly in space. Thus a regional reference series will show about the same climate as the candidate (where it is homogeneous), while a continental “reference” will not.

richardscourtney
October 18, 2012 4:38 pm

Victor Venema:
It is nearing 1 in the morning here and I am going to bed, but I provide this brief acknowledgement of your post to me at October 18, 2012 at 3:59 pm to demonstrate that I appreciate it.
I hope that I will be able to spend time on a proper reply when I arise and after breakfast. For now, I point out that several here have much evidence for the ‘smearing’ effect of homogenisation.
Richard

October 18, 2012 4:42 pm

@Gunga Din. Naturally siting matters. I am sure nobody said otherwise. It matters for the absolute temperature values recorded. And consequently it also matters for the trends in the raw data if you combine data from two different sites or if the surrounding changes the siting quality. After homogenization the problems for the trends should be minimal.
This is no reason not to care about siting: for more detailed studies you would like the signal to be purely about temperature and not have additional variability due to solar radiation, wind or rain. Also for studying changes in the variability of the weather and extremes, siting is very important. For such studies Anthony’s work on siting quality will become very valuable when this information on the quality of the stations spans a few decades.

October 18, 2012 7:24 pm

Victor Venema says:
October 18, 2012 at 4:42 pm
@Gunga Din. Naturally siting matters. I am sure nobody said otherwise. It matters for the absolute temperature values recorded. And consequently it also matters for the trends in the raw data if you combine data from two different sites or if the surrounding changes the siting quality. After homogenization the problems for the trends should be minimal.
This is no reason not to care about siting: for more detailed studies you would like the signal to be purely about temperature and not have additional variability due to solar radiation, wind or rain. Also for studying changes in the variability of the weather and extremes, siting is very important. For such studies Anthony’s work on siting quality will become very valuable when this information on the quality of the stations spans a few decades.
==============================================================
But meanwhile the data from bad sites, now and in the past, has colored and is still coloring the temperature record warmer. The UN and the Obama EPA want to tear down and rebuild the globe’s economies using such flawed data as the lever to do so. The only genuine hockey stick in all this CAGW mess is Al Gore’s and the Solyndra investors’ bank accounts.
The station siting matters. How the data from the stations is handled matters, not just in the tomorrows but in the yesterdays.

phi
October 19, 2012 12:47 am

Victor Venema,
Attribution of trends is not as easy as you say, even with a large number of stations nearby. Noise is important, and the anthropogenic effect has a continuous character that can well affect several (or the majority) of the stations in parallel. The bias of the discontinuities is a much more reliable way to assess the overall impact of perturbations.
Anyway, as I have already said, correction of trends is either not applied or, when it is, only marginally and inadequately. This is fairly easy to demonstrate by analyzing the temperature differences between individual stations and regional averages: substantial and regular differences in trends persist over periods of several decades. That behaviour cannot be attributed to noise.
The general principle, which is in fact at the basis of the reasoning of Hansen et al., is that homogenization is expected to have a neutral effect on trends. If this is not the case, the bias must imperatively be explained. The UHI effect may be a rational explanation for downward adjustments, but certainly not for upward ones.

phi
October 19, 2012 3:00 am

As an example, the Geneva station (but all stations follow the same pattern):
http://data.imagup.com/12/1165303601.png
The adjustment of 1962 corresponds to the move of the station from the city center to the airport. Where are the continuous UHI adjustments? The subsequent cooling due to the remoteness from the city is, however, well captured!

October 19, 2012 6:12 am

@phi. Local gradual trends are more difficult to adjust accurately than discontinuities. I only wanted to argue that you can and should do it. Hansen did not argue that homogenization is expected to have a neutral effect on trends. We know that the temperatures measured in the past were too high due to problems with protecting the thermometers from solar and heat radiation. And in a specific network you can also expect biases due to typical changes, such as the transition from liquid-in-glass thermometers to automatic weather stations in the US.
Do you have reasons to expect problems with urbanization in Geneva?

richardscourtney
October 19, 2012 11:37 am

Victor Venema:
I am now able to give a proper and considered reply to your post to me at October 18, 2012 at 3:59 pm. As I said in the early hours of this morning, I am genuinely grateful for your reply.
Firstly, I apologise for mistakenly thinking you were involved in or with the compilers of the GHCN global temperature time series. Much of my questioning of you was based on that misunderstanding and so was my frustration at what seemed to be your evasions.
Clearly, I need to explain how I gained the misunderstanding which has required this apology. I give you that explanation briefly here.
Following your repeated assertions that “no more than a few per cent of the data are affected by urbanization”, at October 15, 2012 at 10:13 am you said

I did not study urbanization myself and it is a rather extensive literature. I got this statement from talking to colleagues with hands on experience in homogenization. Thus unfortunately I cannot give you a reference.

I interpreted this – I now know wrongly interpreted this – to mean you and your “colleagues” were working on compiling the GHCN data set but you have not specifically addressed the UHI issue.
Then, at October 15, 2012 at 2:44 pm you wrote to ‘laterite’ (aka David Stockwell) saying

I understand your side, …

That lifted the hairs on my neck because science is not about “sides”; it is about assessing and challenging data and hypotheses. Importantly, it seemed to confirm that you were ‘at one’ with compilers of the GHCN data set.
Subsequently, you repeatedly referred me to your article – which you linked – that you said may answer my questions. But that link told me nothing I did not know and did not mention the fundamental issues of data reliability, accuracy and precision which I had repeatedly queried. However, it did seem to be a presentation of ‘insider’ knowledge of the GHCN data compilation.
Hence, I progressively obtained impressions which I put together so that (2+2)=5.
I wrongly thought you were a compiler of the GHCN data set who was defending the GHCN method while avoiding evaluation of that method. That thought was an error. I completely apologise for my misunderstanding and any difficulties which my misunderstanding may have created.
Having got that out of the way, I can respond to your substantive point which is

I do not have a homogenization algorithm of my own. The people working on homogenization asked me to lead the validation study because I had no stake in the topic. Up to the validation study, I mainly worked on clouds and radiative transfer. A topic that is important for both numerical weather prediction, remote sensing and climate. Thus professionally, I couldn’t care less whether the temperature goes up, down or stays the same.

I also “couldn’t care less whether the temperature goes up, down or stays the same” but I would like the omniscience to know which it is going to do. 😉
It pleases me that you were asked “to lead the validation study because [you] had no stake in the topic”. That is how it should be. However, you and I would have differed in our approaches to that. As I understand your article you were interested in data “quality” whereas I would have investigated effects of homogenisation on data reliability, accuracy and precision and to what degree those effects could be determined.
As an addendum I point out that interacting with people outside the immediate research domain provided me with many benefits when I was directly involved in scientific research. I was at the UK’s Coal Research Establishment (CRE) and then would find excuses to discuss the work with non-scientists such as mechanics in the workshops, gardeners and lavatory cleaners. This was rewarding for several reasons.
Firstly, if I could not explain the work to one of them then I knew I did not have sufficient clarity of understanding of the work myself.
Secondly, it gave them an ‘involvement’ in the research which was our common purpose so they wanted to perform well with several resulting benefits including improved conduct of the work (e.g. best quality constructed research equipment).
Thirdly, they were not constrained by their training and background so would ask ‘naïve’ questions which those of us ‘in the box’ would never ask. This could have unforeseeable benefits. For example, I had a theoretical explanation of – so possible solution to – the longstanding problem of heat-exchanger tube wear in FBC fluidised combustor beds. A discussion with a mechanic in CRE’s workshops informed me of a new ability to make long, narrow, longitudinal holes in tube walls. I recognised that this could enable thermocouples to be positioned inside a tube wall along the length of a tube. And a conical insert inside the tube would provide a range of outside wall temperatures along the tube. With that knowledge the problem was solved. And the idea of linking this new ability (to erode long, thin holes) with my problem may never have occurred to me without the discussion with the mechanic.
So, I think you may find the very wide range of backgrounds, knowledge and experience on WUWT may prove useful to you if you ‘tap in’ to it.
Richard

climatereason
Editor
October 19, 2012 12:22 pm

Victor Venema:
Camuffo wrote a very detailed exposition on 7 historic european data sets via the ‘Improv’ project. He believed there to be a consistently warm bias.
I think the problem is much more complicated than that and wrote about the problems with historic temperatures here
http://wattsupwiththat.com/2011/05/23/little-ice-age-thermometers-%e2%80%93-history-and-reliability-2/
In short, the methodology is highly suspect until the advent of digital stations in the 1980s, which then had their own problems with siting, as Anthony Watts has chronicled.
In general, when you also take into account information such as crop prices and observational evidence of the time, there are many times when Camuffo’s instrumental detective work must be questioned, and the warm bias doesn’t always exist. This is complicated by UHI and by the physical move of a station to a different microclimate while it still bears the station’s original name.
Personally I think UHI is very real, but once urbanisation reaches a certain level the effect spreads over a wider geographical area, rather than intensifies.
There are some 30,000 stations worldwide, of which around one third are cooling according to BEST. A large percentage of the remainder are in urbanised areas where the warming could be caused by concrete rather than CO2, and this is not properly accounted for.
tonyb

October 19, 2012 2:36 pm

Dear climatereason, thank you for pointing us to your posts on historical temperature measurements. Your explanation of the many reasons why observations from before 1900 likely have a warm bias is more likely to be believed here at WUWT. It explains why the trend in the raw data is too shallow and becomes steeper and closer to the true trend after reducing these problems by homogenization.
Many of the problems you mention “only” cause a bias (for some stations) or make the measurement more noisy. This is a problem if you want to use the data to validate a weather prediction or if you would like to draw maps with isotherms and for many more detailed studies, but as long as a bias stays constant it does not preclude an analysis of the temporal variability and trends in the climate. If the bias changes and you have multiple stations, you can use such data to study temporal changes after applying homogenization to correct for the change in the bias.

phi
October 20, 2012 5:09 am

Victor Venema,
I reply to your message as follows: I read on your blog (http://variable-variability.blogspot.ch/2012/01/homogenization-of-monthly-and-annual.html):
“For example, for the Greater Alpine Region a bias in the temperature trend between 1870s and 1980s of half a degree was found, which was due to decreasing urbanization of the network and systematic changes in the time of observation (Böhm et al., 2001).”
So, according to Böhm, on average in this area (including Geneva) the UHI effect on thermometers in the nineteenth century was 0.5 °C higher than in 1980. That is perfectly paradoxical. When one knows the evolution of urbanization in the twentieth century (UHI sources increased more than tenfold), it is simply obvious that it has had a significant warming effect on the measured temperatures. A continuous effect which must imperatively be taken into account.

phi
October 20, 2012 5:20 am

You will perhaps reply that I forgot TOBS. Actually, no. In the particular case of Böhm et al. 2001, I could demonstrate that taking TOBS into account hardly changes anything in the value of 0.5 °C per century. But that is another topic.

October 20, 2012 11:40 am

No, it is not because you forgot the TOBS. The statement claims that both TOBS and the decreasing urbanization of the network bias the trend.
With the “decreasing urbanization of the network” of the HistAlp dataset Böhm did not mean that the stations in the cities experience less urbanization, but that at the end of the series a larger fraction of stations was not situated in urban areas.
Zürich is probably a good example of this. The station is now at the airport and thus experiences a smaller urban heat island effect than when it was in the city. Also, cities are often founded in valleys, so airports are often situated at higher altitudes and are thus colder. This is typical for many countries.
I never studied it, but I would expect that the first stations were at universities, monasteries, capitals and courts, and were often operated by scientists, school teachers, apothecaries, lawyers, etc. Consequently, in the beginning most stations were probably in cities. Later on, when the network became denser and people tried to spread the stations evenly, more stations needed to be in smaller towns and villages. If this tendency exists, it would probably also be happening in most countries.
In Austria as a mountainous country and with lots of winter tourism, I can also imagine that having mountain stations and locating them in touristic villages became more and more important and improvements in communication and transport made them easier to maintain.

phi
October 21, 2012 1:53 am

Victor Venema,
I am talking about the steady increase of UHI, which must imperatively be corrected. It was, according to Böhm, already at least 0.5 °C in 1870.
At that time the Geneva station was still surrounded by meadows near a city of 50,000 inhabitants. By 1961, the setting was that of a city center of 300,000 people with an energy consumption that had increased tenfold.
For Böhm, for GHCN, for CRU, BEST and the national offices, there has been no further increase of UHI since the nineteenth century. In Geneva and around the world!

markx
October 23, 2012 9:17 am

It is important (and not difficult to achieve) that the original raw data be preserved and be readily available. Data sets should be linked to this raw data. Information on the homogenization methods used should also be accessible.

October 23, 2012 1:14 pm

I agree that data should be open and that, in the current climate of mistrust, it is preferable that the algorithms used to produce the data are also published.
There are a number of open datasets (GHCN, USHCN, ISTI, HadISD) and most state-of-the-art methods for homogenization can be downloaded from http://www.homogenization.org.
Thus you can do your own climate research if you do not trust climatologists to do it right, or if you simply have an interesting new question.
Unfortunately much of the data from Europe is still closed. The European small-government advocates, wanting to pay less tax, want the weather services to make money by selling the data. Due to abuse of raw data, such as in the book “State of Fear” by Michael Crichton, climatologists used to have reservations about giving out the original raw data. Nowadays, all colleagues I speak with are in favour of releasing the data, but the weather services are not allowed to. Putting pressure on governments to release climate data is a cause climatologists and climate “sceptics” can work on jointly.
The original data is nowadays always preserved. In the past, some of the quality control (removal of measurement mistakes and outliers) was performed on the paper records or, in the beginning of the computer era when computer memory was expensive, directly on the digitized records. For the monthly or annual means this is completely insignificant (the percentage of the data involved is very small), but for research on changes in extreme weather using daily data this could be more important, and the daily data may need to be re-digitized so that the quality control is identical for the entire period. Sometimes the original data is lost; for example, in the case of Austria the original data was lost during WWII and all we have left are the monthly averages, which were reproduced in annual reports.