Preliminary comments on Hausfather et al 2013

There’s a new paper out today, highlighted at RealClimate: Hausfather et al, titled Quantifying the Effect of Urbanization on U.S. Historical Climatology Network Temperature Records, published (in press) in JGR Atmospheres.

I recommend everyone go have a look at it and share your thoughts here.

I myself have only skimmed it, as I’m just waking up here in California, and I plan to have a detailed look at it later when I get into the office. But, since the Twittersphere is already demanding my head on a plate, and would soon move on to “I’m ignoring it” if they didn’t have instant gratification, I thought I’d make a few quick observations about how some people are reading something into this paper that isn’t there.

1. The paper is about UHI and about homogenization techniques intended to remove what they perceive as UHI influences, using the Menne pairwise method with some enhancements from satellite metadata.

2. They don’t mention station siting in the paper at all, nor do they reference the Fall et al, Pielke, or Christy papers on siting issues. So claims that this paper somehow “destroys” that work are rooted in a failure to understand that UHI and siting are separate issues.

3. My claims are about station siting biases, which are a different mechanism, at a different scale, than UHI. Hausfather et al 2013 doesn’t address siting biases at all. In fact, as we showed in the draft paper Watts et al 2012, homogenization takes the well sited stations and adjusts them to be closer to the poorly sited stations, essentially eliminating good data by mixing it with bad. To visualize homogenization, imagine bowls of water with different levels of clarity due to silt: mix the clear water with the muddy water, and you end up with water that is no longer pure. That leaves data of questionable purity.

[Image: bowls of water of varying murkiness over a U.S. map, illustrating homogenization]
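To put rough numbers on that analogy, here’s a toy sketch. This is not the actual Menne et al pairwise algorithm, just an illustration of the mixing effect; the trends and the 50/50 blending weight are invented. A well sited station with zero trend gets nudged toward the mean of two neighbors with spurious warming, and inherits a trend out of nothing:

```python
# Toy illustration only -- NOT the actual Menne et al pairwise algorithm.
# A well sited station with zero trend is blended toward the mean of two
# neighbors whose trends are contaminated, and picks up their warming.
import numpy as np

years = np.arange(1950, 2011)
clean = np.zeros_like(years, dtype=float)   # well sited station: no trend (assumed)
neighbors = [0.02 * (years - 1950),         # two hypothetical neighbors with
             0.03 * (years - 1950)]         # spurious warming, in C

# Naive "homogenization": pull the station halfway toward its neighbors.
adjusted = 0.5 * clean + 0.5 * np.mean(neighbors, axis=0)

slope = np.polyfit(years, adjusted, 1)[0]
print(f"adjusted trend: {slope * 100:.2f} C/century")  # ~1.25 C/century from nothing
```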

4. In the siting issue, you can have a well sited station (Class 1, best sited) in the middle of a UHI bubble, and a poorly sited station (Class 5, worst sited) in the middle of rural America. We’ve seen both in our surfacestations survey. Simply claiming that homogenization fixes this is an oversimplification not rooted in the physics of heat sink effects.

5. As we pointed out in the Watts et al 2012 draft paper, there are significant differences between good data at well sited stations and the homogenized/adjusted final result.

We are finishing up the work to deal with TOBs criticisms related to our draft, and I’m confident that we now have an even stronger paper on siting issues. Note that through time the rural and urban trends have become almost identical – always by warming up the rural stations to match the urban stations. Here’s a figure from Hausfather et al 2013 illustrating this. Note also that they have urban stations cooler in the past, something counterintuitive. (Note: John Nielsen-Gammon observes in an email that the urban stations appearing cooler in the past “is purely a result of choice of reference period.” He’s right. Like I said, these are my preliminary comments from a quick read. My thanks to him for pointing out this artifact. -Anthony)

[Figure from Hausfather et al 2013: urban vs. rural temperature trends]
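Nielsen-Gammon’s point about the reference period is easy to demonstrate with made-up numbers. A minimal sketch, assuming hypothetical absolute temperatures: if urban stations run warmer and warm faster, expressing both series as anomalies from a common modern baseline forces the urban curve below the rural one early in the record, so urban stations plot “cooler in the past” without ever having been cooler in absolute terms.

```python
# Two hypothetical absolute temperature series: urban is warmer AND warms
# faster. Anomalies are taken against a common 1961-1990 reference period.
import numpy as np

years = np.arange(1900, 2011)
rural_abs = 10.0 + 0.005 * (years - 1900)  # invented rural absolutes, C
urban_abs = 12.0 + 0.010 * (years - 1900)  # invented urban absolutes, C

base = (years >= 1961) & (years <= 1990)   # shared reference period
rural_anom = rural_abs - rural_abs[base].mean()
urban_anom = urban_abs - urban_abs[base].mean()

print(f"1900 anomaly, urban: {urban_anom[0]:+.2f} C")  # ~ -0.75
print(f"1900 anomaly, rural: {rural_anom[0]:+.2f} C")  # ~ -0.38
# Urban plots "cooler in the past" purely because of the baseline choice.
```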

I never quite understand why Menne and Hausfather think that they can get a good estimate of temperature by statistically smearing together all stations, the good, the bad, and the ugly, and creating a statistical mechanism to combine the data. Our approach in Watts et al is to locate the best stations, those with the least bias and the fewest interruptions, and use those as the metric (not unlike what NCDC did with the Climate Reference Network, designed specifically to sidestep siting bias with clean, state-of-the-art stations). As Ernest Rutherford once said: “If your experiment needs statistics, you ought to have done a better experiment.”

6. They do admit in Hausfather et al 2013 that there is no specific correction for creeping warming due to surface development. That’s a tough nut to crack, because it requires accurate long term metadata, something they don’t have. They make claims at century scales in the paper without supporting metadata at the same scale.

7. My first impression is that this paper doesn’t advance science all that much, but seems more like a “justification” paper in response to criticisms about techniques.

I’ll have more later once I have a chance to study it in detail. Your comments below are welcome too.

I will give my kudos now on transparency though, as they have made the paper publicly available (PDF here), something not everyone does.

jc
February 15, 2013 9:15 am

Theo Goodwin
Derrida is, as you identify, a touchstone in all this. I have to take issue with you, however – although I know it reflects convention – with the idea that what is described as post-modernism reflects something that can be called “thought”. Words are used and writing is employed to express it, so far as it is expressible, which gives the impression that it is part of a body of human comprehension, but social niceties can only go so far!
It is more accurate to say that it is an absence of thought, in that thought must contain at least the potential for meaning. It is more akin to a psychological state whereby, all things being conditional, no apprehension of anything, including itself, can occur. This can be revelatory when first encountered, as a means of clearing the mind of any and all preconceptions; however, this can only exist for a moment. It is just the intellectual equivalent of being struck dumb by something unforeseen and therefore not immediately absorbed. The fact that it is constituted and dissected as an actual position is self-defeating and exhibits the inherently fraudulent nature of the whole show.
I can’t say I’ve read Derrida, since that implies that what appears as writing is intended to, and can, communicate something of meaning, and that the reader can discern something within the script to engage with. Rather, I can say I have exposed myself, to the degree I thought bearable, to his meanderings.
Derrida was simply a purveyor of gibberish. This gibberish did not come from nothing however and was not produced with no point in mind.
Such gibberish is a god-send to those for whom it is useful to be able to justify anything and to never be pinned down or held to account. Since it is delivered in such a manner as to say “this has a basis in intelligence”, with all the accoutrements of culture attached, then there must be some reasonable basis for any claim, mustn’t there, even if it cannot be seen? So it’s YOUR problem if things don’t seem amenable to sense. This is just the technique of any con-man, carried to an absolute degree. This is its achievement.
As such it suited the ignorance and self-evident existential worth of those liberated in the 1960s and beyond from any real sense of responsibility, and whose livelihood was derived from activities one or ten steps removed from the basis of the material wealth that enabled it.
What better than a source of justification for not having to exist in the straitjacket of values or concern for others, who of course all have their alternative conceptions of reality, and why should anyone cater to that?
And what better possible life could there be for Derrida himself, where any musings would do?
Groovy man.

Crispin in Waterloo
February 15, 2013 9:49 am


The comment by Roger P Sr above shows that your comments have currency in this thread. One can easily misrepresent the whole truth by simply not mentioning critical elements that would change the whole conclusion, were they to be acknowledged.
In this case, the matter of the absolute humidity of the air, which is analogous to its heat-containing capacity, means that temperature is not the only consideration when checking on the validity of homogenisation routines. I have noticed that each time Willis E tries to discuss something involving heat transfer and energy, not just ‘temperature’, the comments from readers are about the most obtuse on WUWT. People simply do not understand the concept of enthalpy and how measuring one (temperature) of three essential metrics (temperature, heat capacity Cp, and density at the time) leads to meaningless statements about ‘climate’. The comment above that we are not talking about, nor need to talk about, heat when discussing temperature indicates that ‘even the elect’ may struggle with basic concepts of what it means to ‘warm the globe’.
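The enthalpy point can be made concrete with standard psychrometric approximations (the constants below are textbook values; the parcel humidities are invented): two air parcels at the same thermometer reading can hold very different amounts of energy.

```python
# Psychrometric sketch: moist-air enthalpy per kg of dry air, using
# textbook constants. Same temperature, very different heat content.
CP_DRY = 1.006   # specific heat of dry air, kJ/(kg K)
CP_VAP = 1.86    # specific heat of water vapor, kJ/(kg K)
L_VAP = 2501.0   # latent heat of vaporization at 0 C, kJ/kg

def moist_air_enthalpy(t_celsius, mixing_ratio):
    """Specific enthalpy of moist air, kJ per kg of dry air."""
    return CP_DRY * t_celsius + mixing_ratio * (L_VAP + CP_VAP * t_celsius)

# Two parcels at 30 C: desert-dry vs tropical-humid (mixing ratios invented)
print(moist_air_enthalpy(30.0, 0.002))  # ~35.3 kJ/kg
print(moist_air_enthalpy(30.0, 0.025))  # ~94.1 kJ/kg, same thermometer reading
```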
Our senior science officer (a nuclear physicist) and I were talking about the pointlessness of a metric that has been used for some years. It involves taking a task-dependent quantity of energy and dividing it by a volume that is not necessarily related to the energy number. He agreed it was invalid and commented that ‘we could divide by the distance to Mars and get a number’. He is right – we will get ‘a number’, but that number does not have any meaning. Because the backyard gardener deals with temperature in terms of ‘a number’, there is a lot of discussion about what are no more than ‘numbers’. The paper under discussion takes a very close look at how certain numbers are influenced when treated in a certain way, and concludes that the treatment has not affected the numbers in a way that matters. Well, OK, but as Roger Sr. (indirectly) points out, the exercise has no more value than dividing all the numbers by the distance to Mars and plotting the anomalies. It is not an error of commission, it is an error of omission.
Re the value of schooling, training and education: nothing was so sobering about the worth of a PhD as when I started guiding candidates through the writing of papers reflecting intelligent, coherent thoughts on a subject that comes naturally to me (I guess). I was appalled. I have experienced getting someone out the door simply to vacate the space for someone else who might be worthier of the effort. Egad, I am unimpressed by the worth of a piece of paper!
I used to be impressed by papers appearing in reviewed journals. The glory days were the ’60s, in my memory. Science was King! Buck Rogers lived around the corner. Now, having been published and having been a reviewer for papers and grants, sobriety once again places its cold hand on my warm throat. Thoroughgoing ignorance abounds. Was it ever thus?
It is very different now and we have, in large measure, climate science to ‘thank’ for it. The review process with respect to climate-related issues is broken. Anyone can see that. The greatest sins of omission in the modern era belong to climate-oriented publications. Simultaneously the rent-seeking sheep bleat their chant ever-louder across the Farm, “Four citations good! Two devastating counterpoints, bad!” That is noble-cause corruption without the chador. We deserve and can do much better than that.

Stephen Richards
February 15, 2013 11:41 am

“I never quite understand why Menne and Hausfather think that they can get a good estimate of temperature by statistically smearing together all stations, the good, the bad, and the ugly, and creating a statistical mechanism to combine the data”
Howsyerfather has been blogging at Julia’s for some time now and always does so with his global warming agenda up front. He’s a waste of space.

Crispin in Waterloo
February 15, 2013 2:55 pm


You are reinforcing the comments of others that there was an agenda behind the purported purpose of this technical review (which is what the paper is). I don’t have an opinion on this, but you might be right. They are not claiming that what they are doing gives a good estimate of temperature. That is kinda the point. Other people are claiming that their own processes produce usable and meaningful temperatures, but this paper just examines, in a certain manner, very narrow technical aspects, and as I indicated above, they are saying that if they paint it blue, it looks blue. Well, my reply is: so what? If what you are painting blue is not a valid temperature set, how will the blue paint help?

Theodore
February 19, 2013 6:25 am

Zeke et al: Congratulations on the paper. I think you have shown clearly that spatially gridded data fails to remove UHI from the trends and is not a viable method for determining trends without UHI pollution.
I see a couple of issues that I’d like to see addressed in regards to this though.
First, solving for the difference in trend between urban and nonurban stations does not determine how much UHI or LHI is impacting the trend. That comparison can only solve for how much extra UHI warming the subset of urban sites has over the nonurban sites. You would like to solve (Urban Trend minus UHI Trend equals Rural Trend), but because the nonurban (which really isn’t rural) stations carry UHI of their own, what the method actually gives you is (Urban Trend minus Rural Trend equals Urban UHI Trend minus Rural UHI Trend). The UHI trend shared by the nonurban stations is still present in both sets of data. At best you have removed any surplus UHI that urban stations show over the UHI that your selection of possibly rural stations shows. Until you compare the Urban Trend against a trend from a set of rural sites that have both remained rural (avoiding UHI) and are adequately sited (avoiding LHI), you cannot solve for the amount of UHI in the data. You can only determine how much worse UHI affects one subset of stations versus the other (not entirely rural) subset of stations.
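That algebra, sketched with invented numbers to make the cancellation explicit:

```python
# Hypothetical trend components, C/century. Differencing urban against
# "rural" stations recovers only the EXCESS urban UHI (uhi_urban - uhi_rural),
# not the total UHI contamination, whenever the rural set has UHI of its own.
regional = 0.50       # true background trend (invented)
uhi_urban = 0.60      # UHI component at urban stations (invented)
uhi_rural = 0.25      # residual UHI at "rural" stations (invented)

urban_trend = regional + uhi_urban   # 1.10, what urban stations record
rural_trend = regional + uhi_rural   # 0.75, what "rural" stations record

print(urban_trend - rural_trend)     # 0.35: the surplus UHI, all the method sees
print(uhi_urban)                     # 0.60: the contamination actually in the data
```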
Your pairing methods would smear the warming from poorly sited and UHI-influenced stations that are classified as nonurban in your data set across the well sited, UHI-free rural stations. For example, suppose you have 3 ‘rural’ stations to pair with your urban station: the well sited, actually rural one shows a trend of 0; a poorly sited ranger station beside a parking lot has a trend of 2; a suburban airport station with both UHI and LHI has a trend of 4; and your urban station with UHI has a trend of 6. Your methodology would calculate the mean of the 3 ‘rural’ stations as 2, smearing the UHI and LHI pollution across the actually accurate station, and the homogenization process would then lower the urban station to a trend of 2, when the true regional warming trend is zero, because failing to eliminate the UHI/LHI station issues at the rural sites prior to homogenization bakes those errors into the trend. To get at the actual UHI impact, not just the surplus UHI, you must control your rural stations, including trends only from actual rural sites (not airports, wastewater treatment heat sinks, and suburbs) that have high station quality.
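The same worked example in code (all trends hypothetical, in C/century):

```python
# Three "rural" pairing stations, only one of which is actually clean,
# versus one urban station. Trends invented, C/century.
rural_set = {
    "well sited, truly rural": 0.0,          # the only uncontaminated record
    "ranger station by parking lot": 2.0,    # LHI-contaminated
    "suburban airport": 4.0,                 # UHI + LHI contaminated
}
urban_trend = 6.0

rural_mean = sum(rural_set.values()) / len(rural_set)  # = 2.0
homogenized_urban = rural_mean  # urban station adjusted down to the rural mean

print(f"homogenized urban trend: {homogenized_urban:.1f} C/century")
print("true regional trend:      0.0 C/century")
# The residual 2.0 is the UHI/LHI pollution smeared in from the "rural" set.
```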
Second, I think the adjustment for station type is not correcting for what you think it is correcting for.
“So the MMTS adjustments do effectively cool the past (since MMTS max temps read ~0.4 C lower than CRS).”
This 0.4 C lower reading is based on what? Is it the difference between the two types of thermometers in a controlled environment, showing a 0.4 C difference under identical conditions? Or is it the difference in station measurements estimated after the switch to MMTS? I would like to see this addressed, because as I understand it, the switch from CRS to MMTS also typically involved moving the station, increasing Local Heat Islands. The MMTS requires power and cabling, which resulted in stations that had been sited properly being moved alongside buildings so the cabling could reach the sensor. In addition, the surfacestations project shows many of these stations also had walkways constructed directly to the station for access. So does this 0.4 C difference occur in both controlled environments and station inhomogeneities? If we added a UHI/LHI error into the measurements, then it is not appropriate to adjust old temperatures down if the actual deployment of the MMTS sensors did not accompany a measured difference of 0.4 C.
Another issue with homogenization is what effect station quality has on homogenization’s ability to detect both step and trend variations. If a station is rated as accurate to 1.0 C, is the homogenization more likely to detect a step increase than with a station that has 5.0 C accuracy? I would believe that detecting both step and trend variations is more difficult with the less accurate stations, so step increases at better stations may be homogenized away while the errors in poor quality stations are not. There is also another issue with station quality in climate data: the error is treated as a normal bell-shaped distribution, as if a station rated at plus or minus 5.0 C is just as likely to read 5.0 C low as 5.0 C high. This then washes out in the statistical processing, since with enough data points you can claim the errors cancel each other out. But is there any actual evidence that station errors are distributed normally? The causes of station error are heavily biased toward warming, so rather than a bell curve centered on 0 you may have a bell curve centered on +3.
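A quick Monte Carlo sketch of that last point, with an invented +0.3 C mean siting bias: averaging thousands of stations cancels the random part of the error but leaves any systematic offset fully intact.

```python
# Monte Carlo sketch: averaging cancels random error only if errors are
# centered on zero. A hypothetical +0.3 C mean siting bias survives the
# averaging no matter how many stations are in the network.
import numpy as np

rng = np.random.default_rng(42)
n_stations = 5000
true_temp = 15.0  # invented "true" regional temperature, C

unbiased = true_temp + rng.normal(0.0, 1.0, n_stations)     # bell curve on 0
warm_biased = true_temp + rng.normal(0.3, 1.0, n_stations)  # bell curve on +0.3

print(f"unbiased network mean:    {unbiased.mean():.2f} C")     # ~15.00
print(f"warm-biased network mean: {warm_biased.mean():.2f} C")  # ~15.30
```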
The paper also claims that undocumented station moves between the 30s and 60s may have introduced a cooling bias to rural stations, making the UHI seem worse than it really was. Is there any measurement data to back up this assertion? Does moving a station from downtown to a heat sink such as an airport or wastewater treatment plant cause a cooling inhomogeneity? Or is this simply supposition to explain it away?
In addition, the TOBS adjustment assumes that rural stations read their thermometers later in the day than urban ones. Is there any actual evidence of that? Knowing rural people, they tend to get to work earlier than urbanites, and I have a hard time buying that rural stations need more TOB adjustment than urban ones. It adds another bias into the stations that masks UHI effects. I know this is how it is typically done, but what is the hard evidence that this divergence is real and necessary, as opposed to convenient?
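For context on why observation time matters at all, here is a stylized simulation of the classic mechanism (this is not NOAA’s actual TOB adjustment model; the diurnal cycle and weather variability are invented): a max/min thermometer reset in the late afternoon lets one hot afternoon straddle two observation windows and be counted twice, so an afternoon observer records warmer monthly maxima than a midnight observer at the same site.

```python
# Stylized TOB demonstration -- not NOAA's adjustment model. Hourly temps
# are a sine-wave diurnal cycle (peak at 15:00) on top of random day-to-day
# weather. A max/min thermometer's "daily max" is the max since last reset.
import numpy as np

rng = np.random.default_rng(0)
n_days = 3000
hours = np.arange(n_days * 24)
day_means = np.repeat(rng.normal(15.0, 4.0, n_days), 24)     # invented weather
diurnal = 6.0 * np.sin(2 * np.pi * ((hours % 24) - 9) / 24)  # peak at 15:00
temps = day_means + diurnal

def mean_daily_max(obs_hour):
    # Max over the 24 hours ending at the observation/reset hour.
    return np.mean([temps[d * 24 + obs_hour - 24 : d * 24 + obs_hour].max()
                    for d in range(1, n_days)])

print(f"midnight reset: {mean_daily_max(24):.2f} C")
print(f"17:00 reset:    {mean_daily_max(17):.2f} C")
# The 17:00 reset runs ~2 C warmer here: a hot afternoon is still hot at
# reset time, so it also becomes the "max" of the next observation day.
```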
