There’s a new paper out today, highlighted at RealClimate by Hausfather et al titled Quantifying the Effect of Urbanization on U.S. Historical Climatology Network Temperature Records and published (in press) in JGR Atmospheres.
I recommend everyone go have a look at it and share your thoughts here.
I myself have only skimmed it, as I’m just waking up here in California, and I plan to have a detailed look at it later when I get into the office. But, since the Twittersphere is already demanding my head on a plate, and would soon move on to “I’m ignoring it” if they didn’t have instant gratification, I thought I’d make a few quick observations about how some people are reading something into this paper that isn’t there.
1. The paper is about UHI and homogenization techniques to remove what they perceive as UHI influences using the Menne pairwise method with some enhancements using satellite metadata.
2. They don’t mention station siting in the paper at all, nor do they reference the papers by Fall et al., Pielke, or Christy on siting issues. So claims that this paper somehow “destroys” that work are rooted in a failure to understand that the UHI and siting issues are separate.
3. My claims are about station siting biases, which is a different mechanism, at a different scale, than UHI. They don’t address siting biases at all in Hausfather et al 2013. In fact, as we showed in the draft paper Watts et al 2012, homogenization takes the well sited stations and adjusts them to be closer to the poorly sited stations, essentially eliminating good data by mixing it with bad. To visualize homogenization, imagine bowls of water with different levels of clarity due to silt: mix the clear water with the muddy water, and you end up with a mixture that is no longer pure. That leaves data of questionable purity.
4. On the siting issue, you can have a well sited station (Class 1, best sited) in the middle of a UHI bubble, and a poorly sited station (Class 5, worst sited) in the middle of rural America. We’ve seen both in our surfacestations survey. Simply claiming that homogenization fixes this is an oversimplification not rooted in the physics of heat sink effects.
5. As we pointed out in the Watts et al 2012 draft paper, there are significant differences between good data at well sited stations and the homogenized/adjusted final result.
We are finishing up the work to deal with TOBS criticisms related to our draft, and I’m confident that we have an even stronger paper now on siting issues. Note that through time the rural and urban trends have become almost identical – always warming up the rural stations to match the urban stations. Here’s a figure from Hausfather et al 2013 illustrating this. Note also they have urban stations cooler in the past, something counterintuitive. (Note: John Nielsen-Gammon observes in an email that the urban stations appearing cooler in the past is purely a result of the choice of reference period. He’s right. Like I said, these are my preliminary comments from a quick read. My thanks to him for pointing out this artifact. -Anthony)
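John Nielsen-Gammon’s point about the reference period can be demonstrated in a few lines. This is a minimal sketch with entirely hypothetical numbers: two stations that both warm, the urban one faster, each expressed as anomalies against its own late-century baseline. The urban station is warmer in absolute terms throughout, yet its early anomalies sit below the rural station’s.

```python
# Hypothetical illustration: why anomaly baselines can make urban
# stations look cooler than rural ones in the early record.
# Urban absolute temperatures are always warmer here, but anomalies
# are relative to each station's own reference-period mean.

years = list(range(1900, 2001))

# Hypothetical absolute temperatures (deg C): rural warms 0.1 C/decade,
# urban warms 0.2 C/decade and is 1 C warmer in absolute terms.
rural = [14.0 + 0.01 * (y - 1900) for y in years]
urban = [15.0 + 0.02 * (y - 1900) for y in years]

def anomalies(series, years, ref=(1961, 1990)):
    """Anomalies relative to the series' own reference-period mean."""
    base = [t for t, y in zip(series, years) if ref[0] <= y <= ref[1]]
    mean = sum(base) / len(base)
    return [t - mean for t in series]

rural_anom = anomalies(rural, years)
urban_anom = anomalies(urban, years)

# In 1900 the urban station is warmer in absolute terms...
print(urban[0] > rural[0])            # True
# ...but its ANOMALY is lower, because more of its warming falls after
# the start, pushing its early anomalies further below its baseline.
print(urban_anom[0] < rural_anom[0])  # True
```

The artifact appears because each station is compared against its own baseline, so the station with the steeper trend sits deeper below that baseline early on, regardless of absolute temperature.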
I never quite understand why Menne and Hausfather think that they can get a good estimate of temperature by statistically smearing together all stations, the good, the bad, and the ugly, and creating a statistical mechanism to combine the data. Our approach in Watts et al is to locate the best stations, with the least bias and the fewest interruptions and use those as a metric (not unlike what NCDC did with the Climate Reference Network, designed specifically to sidestep the siting bias with clean state of the art stations). As Ernest Rutherford once said: “If your experiment needs statistics, you ought to have done a better experiment.”
6. They do admit in Hausfather et al 2013 that there is no specific correction for creeping warming due to surface development. That’s a tough nut to crack, because it requires accurate long term metadata, something they don’t have. They make claims at century scales in the paper without supporting metadata at the same scale.
7. My first impression is that this paper doesn’t advance science all that much, but seems more like a “justification” paper in response to criticisms about techniques.
I’ll have more later once I have a chance to study it in detail. Your comments below are welcome too.
I will give my kudos now on transparency though, as they have made the paper publicly available (PDF here), something not everyone does.
david:
“2. Am I to understand that you are averaging temperature from completely different temperature regimes? If so, how do you justify averaging a temperature of +30, which would represent 477.9 w/m2, with one of -30, which would represent 167.1 w/m2? Are you of the opinion that averaging such disparate temperature ranges has any value in understanding the earth’s at-surface energy balance?”
You need to read the papers. First, we are not at all interested in the energy balance. The method estimates the temperatures at unobserved locations. It does that by using information at observed locations. That says nothing about energy balance, and no claims about energy balance are made. The test of that procedure is simple:
A) Hold out a sample of stations.
B) Estimate the temperature at all locations, using a sample of locations.
C) Compare your prediction with your held-out sample.
And, well, it works. Go figure.
The method works to do what it was designed to do: estimate temperature at unobserved locations using observed locations. The concept of ‘average’ temperature is somewhat confused, for the reasons you state. That is why I wouldn’t characterize ANY temperature series as an ‘average’ temperature. It is an index. It’s non-physical. It tells you nothing about energy balance and was never intended to. Nevertheless, the concept of ‘average’ temperature has a meaning.
When we say the LIA was colder we are referring to something.
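The hold-out test described above can be sketched in a few lines. This is a toy illustration, not the actual published method: the station locations, the smooth temperature field, and the inverse-distance weighting are all stand-in assumptions.

```python
import math
import random

# Toy hold-out test: predict temperature at withheld stations from the
# remaining ones, using inverse-distance weighting as a stand-in for
# the actual interpolation used in published methods.

random.seed(0)

# Hypothetical stations: (x, y) locations and a temperature field that
# varies smoothly with position.
stations = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(50)]
temps = {s: 15.0 + 0.5 * s[0] - 0.3 * s[1] for s in stations}

def idw_predict(target, known, power=2):
    """Inverse-distance-weighted estimate at an unobserved location."""
    num = den = 0.0
    for s, t in known.items():
        d = math.dist(target, s)
        w = 1.0 / (d ** power + 1e-9)
        num += w * t
        den += w
    return num / den

# A) Hold out a sample of stations.
held_out = stations[:10]
kept = {s: temps[s] for s in stations[10:]}

# B) Estimate temperature at the held-out locations from the rest.
# C) Compare predictions with the held-out truth.
errors = [abs(idw_predict(s, kept) - temps[s]) for s in held_out]
print(max(errors))  # small for a smooth field
```

For a smooth field the held-out predictions land close to the truth, which is the sense in which “it works”: the test validates the interpolation, not any claim about energy balance.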
Hausfather’s paper certainly reads more like a sort of justification or rationalisation paper in response to valid criticism of techniques, and has all the same failings.
steven mosher;
4. If the process of taking anomalies changed means or trends and you can show that, then a nobel prize awaits you.
>>>>>>>>>>>>>
Haven’t got a clue if it does or not. But if the purpose of tracking the data is to determine if there is an energy balance at earth’s surface, why wouldn’t you average and trend w/m2 at earth’s surface? Demonstrating that the trend would be different would by no means earn me a Nobel prize, and I think you know that. Red herring. You have the raw data and access to the compute horsepower to do it. As does Zeke. As do many others. But you insist instead on using a proxy, and one that you ADMIT is imperfect. One that can be demonstrated with artificial data to produce a negative energy balance trend for a positive temperature trend. Which I have done; no one has produced an error in my math, yet no Nobel prize nomination has come my way.
What should I put you down for in the list? Something like Joel Shore: “it is an imperfect metric but let’s use it anyway”?
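The distinction being argued over here can be checked directly with the Stefan-Boltzmann law (flux = σT⁴ for a blackbody): the average of two fluxes is not the flux of the average temperature. (For what it’s worth, the blackbody value for -30°C works out to about 198 W/m², not the 167.1 quoted; the 477.9 figure for +30°C follows from using 303 K rather than 303.15 K.)

```python
# Averaging temperatures is not the same as averaging radiated energy.
# For a blackbody, flux = sigma * T^4 (Stefan-Boltzmann), so the flux
# of the average temperature differs from the average of the fluxes.

SIGMA = 5.67e-8  # W/m^2/K^4

def flux(t_celsius):
    t = t_celsius + 273.15
    return SIGMA * t ** 4

hot, cold = 30.0, -30.0          # deg C
mean_temp = (hot + cold) / 2.0   # 0 deg C

avg_of_fluxes = (flux(hot) + flux(cold)) / 2.0
flux_of_avg = flux(mean_temp)

print(round(flux(hot), 1))       # ~478.9 W/m^2
print(round(flux(cold), 1))      # ~198.2 W/m^2
print(round(avg_of_fluxes, 1))   # ~338.5 W/m^2
print(round(flux_of_avg, 1))     # ~315.6 W/m^2 -- not the same
```

Because flux goes as the fourth power of temperature, the two quantities diverge whenever the temperatures being averaged differ, which is the nonlinearity behind the objection.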
Steven Mosher says:
February 13, 2013 at 1:12 pm
5. I don’t have time to answer every question. So, write up your Nobel prize winner and do like Zeke did. Do like McIntyre. Do like Anthony. Publish.
Translation: Mosher doesn’t know how to answer the question and tries to obfuscate as a cover.
BTW, is there anything in this paper that attempts to factor in what Pielke and others have found relative to vertical mixing caused by man made structures? Even a rural station could have structures well away from the actual thermometer that cause this mixing but still qualify the station as rural. If this is ignored then the paper is open to valid criticism.
@Doug Proctor
I picked my own quarrel with a different lot at Berkeley who are using results of triplicate tests to get an ‘average’. Later, the average of the result of a different set of triplicates is generated. The two averages are then compared. They concluded that the test method is precise to the extent of the difference between the two sets of averages.
I cannot for the life of me see any difference between that (which is completely unscientific) and what is being done with these homogenisation and averaging routines.

What is this paper about? It is a test of processing methods, is it not? Am I reading this correctly? It is a series of comparisons that tests whether one or other processing method is ‘valid’. The data set emerging from the process they are examining is then fed into another averaging process. I found the several methods described in detail during the hullabaloo about the BEST pre-print nothing less than extraordinary.

The comment above about generating your own homogenised baseline and then using a different process to generate a comparison runs an interesting risk: at what point are you comparing artifices of the methods, and at what point are you comparing trends in data? Zeke’s first comment is quite on the mark when he says, basically, ‘this is what we did and this is what we found when we did it’. Well, OK, that is a valid statement. It is what he did. But the question hangs large over the exercise: what has been shown by this effort?

There are three processes involved: the original process, the process used to test that process, and the analysis of the meaning of the result of the second process upon the results of the first. If an artificially modified version of the raw data was fed into both processes 1 and 2, would analysis 3 be able to tell what kind of modification was made to the raw data? I am borrowing a page from S. McIntyre here. Basically the paper claims the answer is yes.
@davidmhoffer
You may enjoy this: I am reminded of yet another Berkeley group who have been averaging results too. They have constructed metrics which are ‘inverted’ and then they produce a simple average. [An example is miles per gallon and litres per 100 km – the latter, volume per distance, is the inverse metric of the former.] When inverted, they should average using a harmonic mean, not an arithmetic mean. The effect of using the incorrect averaging method is to bias the results, always in one direction. One way this can appear in temperature work is when anomalies are inverted or re-expressed and then averaged, which gives the wrong answer. It is vaguely like your energy and temperature example: when you change the denominator, the averaging method must also change appropriately.
Question for readers to come to grips with this:
Example 1
Rural stations increase in temperature at 0.1 deg per decade
Peri-urban stations increase in temperature at 0.2 deg per decade
Urban stations increase in temperature at 0.3 deg per decade
What is the average temperature rise of these three sets of stations, per decade? (assume equal weighting)
Example 2, derived from Example 1 by inverting the data
Scenario 1 is 10 decades per degree of temperature rise
Scenario 2 is 5 decades per degree of rise
Scenario 3 is 3.333 decades per degree of rise.
What is the average number of decades per degree of temperature rise? Invert your answer. Does it agree with the answer to Example 1?
Imagine you were trying to forecast how long it will take for the temperature to rise 2 degrees, or to double. Methinks there is madness in some methods.
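Working the two examples through confirms the point: a plain average of the inverted metric disagrees with the average of the original rates, and only the harmonic mean reconciles them.

```python
# The exercise above, worked through: averaging rates vs averaging
# their inverses gives different answers unless the harmonic mean is
# used for the inverted metric.

rates = [0.1, 0.2, 0.3]                # deg per decade
arith_mean_rate = sum(rates) / len(rates)
print(round(arith_mean_rate, 1))        # 0.2 deg per decade

inverted = [1.0 / r for r in rates]     # decades per degree: 10, 5, 3.333...
arith_mean_inverted = sum(inverted) / len(inverted)
print(round(1.0 / arith_mean_inverted, 4))  # 0.1636 -- NOT 0.2

# The harmonic mean of the inverted values recovers the right answer:
harmonic_mean = len(inverted) / sum(1.0 / v for v in inverted)
print(round(1.0 / harmonic_mean, 1))    # 0.2 again
```

So inverting, taking a plain average, and inverting back understates the rate (0.1636 vs 0.2), which is the one-directional bias the comment describes.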
Anthony, I hope your paper gets published and gets lots of attention. The quality of surface station data (and subsequent “homogenizations” and “adjustments”) is one of the sloppiest aspects of current Climate “Science.” Of course, the adjustments to ocean temperatures and troposphere/stratosphere temperatures are just as suspect. And the effects of soot on arctic temperatures should be seriously revisited.
If half the “scientists” studying this would spend time on data quality and the physics of radiative absorption, we’d get somewhere, and the alarmists would go away with tails between their legs.
Zeke, Something I’ve wondered about (relating to homogenization processing) is have you compared a local calculated value while excluding a known good station, with the good station to see how your calculations compared to actual measurements?
Homogenisation assumes most of the data is good, but as Watts has demonstrated, most of the data is bad. UHI does not show up as “breaks” in the data, it is a gradually increasing bias to the data.
“Notably, when stations are located in park-like settings within a city, the microclimate of the park can be isolated from the urban heat island ‘bubble’ of surrounding built-up areas [Spronken-Smith and Oke, 1998; Peterson, 2003].”
Urban temperature measurements from parks are problematic in warmer and drier climates because parks are normally irrigated. Here in Perth the official site was moved in 1992 from opposite an irrigated park to an un-irrigated location and night time temperatures immediately rose 1.5C. The park was irrigated at night.
Urban-rural comparisons are moot because the implicit assumption that rural locations don’t have local anthropogenic influences is wrong. And the rural temperature measurement problem is compounded by the fact that temperature measurement is often done at agricultural research stations.
What we should be comparing is urban vs rural vs pristine locations, and as said above if we only have 50 pristine locations, then so be it. Although, I’d expect at least a few hundred.
Steven Mosher;
You need to read the papers. First we are not at all interested in the energy balance.
>>>>>>>>>>>>>
Oh.
My.
God.
Boels commented
You might like the work I’ve done with night time temperature change. If you follow the link in my name you’ll find a few blogs I’ve written. Feel free to contact me if you’d like.
Worth mentioning Ed Long’s work from a few years back which suggests that UHI correction is the wrong way around, especially for quality rural sites.
Also Roy Spencer’s work, which empirically shows the ‘urban’ heat island effect can kick in very significantly with as few as 20 people/km^2.
To me the elephant in the room is the reason why the surface temperature record diverges from the satellite record.
Anthony attempts to deal with the elephant by offering the explanation that degradation over time of station siting, creeping UHI, and similar slow but steady degradation of the temperature network have caused the land based record to record warmer temperatures. There is experimental support for this. Poor station siting can clearly cause an instrument to record a spuriously high temperature. His preferred solution is to examine in detail the nature of each measurement site and to eliminate suspect data.
Others seem to see any suggested problem with the data as an invitation to adjust it (and hence produce a paper without leaving the office). Yet any adjustment method will result in the temperature record being contaminated with the biases and assumptions of those choosing the method of adjustment. When we look at the output we see serious unexplained anomalies. Adjustments supposedly made to eliminate UHI somehow result in still greater warming. They result in temperatures from pristine stations being changed, usually in the direction of showing much greater warming, with absolutely no attempt made to provide a physical justification for this. If the station is pristine why are you tampering with its data? And no – you cannot point somewhere into the mathematical complexities of your data mangling machine and say the reason is hidden in the mechanism. If your data mangling machine wants to mangle pristine data then your machine is broken because the data cannot be.
This approach seems so wrong and generates such strange results that many of us have lost all trust in the people doing this. And one of the main custodians of the US data record is being arrested in front of the White House right now, which also does not inspire confidence in the impartiality and scientific detachment of those doing the adjustment. In any case, all discussion about mangling methods to produce an even more dramatic record of steadily rising temperatures completely ignores the elephant in the room. At the end of it all you still must explain why the satellite record shows a much smaller rise.
While I have only skimmed this latest paper, it seems to me that all it does is show that the results of the various data mangling machines are insensitive to certain choices in the data massaging method chosen. This might be of interest to people who want to build data mangling machines. It is of little interest to those of us who are deeply suspicious of them. It offers no explanation of some of the paradoxes generated by these machines. It doesn’t explain why the massaging methods which are supposed to eliminate UHI make the temperature rise greater. It does not explain what in these machines is broken that leads them to mangle pristine data to show greater warming. And once again it ignores the elephant in the room.
At least Anthony has tried to talk to the elephant.
Bruce of Newcastle says:
February 13, 2013 at 2:55 pm
Also Roy Spencer’s work which empirically shows ‘urban’ heat island effect can kick in very significantly with as few as 20 people/km^2.
20 to 50 people/km2 is arable land in most parts of the world. Yesterday, I called this Rural Heat Island. The Spencer paper is a must read.
Zeke Hausfather says:
February 13, 2013 at 9:37 am
Bill Illis,
That graph does not show urban and rural temperatures. You want Figs. 3-6 in our paper for a good example of that.
—————-
Oh yeah, Figs 3-6 are clear.
Can you explain what exactly does the caption mean “Time of obs min urban-rural differences 1895-2010.”
And why does it show 0.4C of change in the difference between Urban and Rural from 1920 to 2000.
Why does the abstract describe this situation as “urbanization accounts for 14% to 21% of the rise in unadjusted minimum temperatures since 1895”.
And what exactly does that mean.
And when I say “exactly”, I mean something that describes the situation in tempC to a number. Like 0.5C of the increase of 0.8C is caused by urbanization.
Now that would be a paper that is helpful to everyone.
Ian H. writes:
“When we look at the output we see serious unexplained anomalies. Adjustments supposedly made to eliminate UHI somehow result in still greater warming. They result in temperatures from pristine stations being changed, usually in the direction of showing much greater warming, with absolutely no attempt made to provide a physical justification for this. If the station is pristine why are you tampering with its data? And no – you cannot point somewhere into the mathematical complexities of your data mangling machine and say the reason is hidden in the mechanism. If your data mangling machine wants to mangle pristine data then your machine is broken because the data cannot be.”
Amen. And Amen to the entire post. The answer is that their concept of “pristine data” is a statistical concept that cannot be explained except by reference to their statistical efforts of the moment.
That raises the Big Question and the Big Picture. Why do they engage in statistical exercises that make reference to “pristine data” or to Anthony’s five-fold classification of measurement sites? Are they hoping that the reader will confuse the empirical concept of pristine data with their statistical concept of pristine data? No such confusion will occur at this site.
Steven Mosher says:
February 13, 2013 at 12:03 pm
“‘Look Zeke, the objective is to determine what is happening to the global climate. If you took the simple mean of the uncorrected records of the world’s 100 most pristine stations – well distributed – you would have a far more trustworthy idea of that objective than all of the nonsense that you are currently doing. But then, probably nobody would fund you or publish you for doing that, right?’
###
did that. the answer is the same.”
I think something is being missed here. This is a great idea. If you chose the 100 most pristine sites in the world only, and kept track of their raw data over time, even though you are unlikely to have a good average of global temp (if that is what is being attempted with all the adjustments), you would have a handle on a clean, useful trend. If CAGW is significant and long term, it should show an incontrovertible signal, free of criticism that the data has been incorrectly manipulated. Let’s face it, if we are going into some serious long term heating, there is no need for controversial homogenization corrections. To take an extreme example: if the sea is going to rise 20 metres, there is no need to make 0.3 mm annual adjustments for whatever reason. 19.97 metres is close enough!
I would be very interested in a proper critique of this idea.
Wanted to point out that Troy Masters has a post up now with more details & analysis.
http://troyca.wordpress.com/2013/02/13/our-paper-on-uhi-in-ushcn-is-now-published/
Bill Illis,
I am assuming your first questions are about the bottom panel in Figure 9? If so, hopefully this helps:
The three lines on that chart show the difference between the grid-averaged minimum U.S. temperature using all CONUS stations for homogenization (USHCNv2) and the grid-averaged minimum U.S. temperature using the following three sets:
1) No homogenization (TOB only)
2) Station data homogenized using ONLY rural CONUS stations (rural neighbor)
3) Station data homogenized using ONLY urban CONUS stations (urban neighbor)
Obviously, the urban-only adjusted series shows contamination of the stations by urban neighbors. I think this is the 0.4 K change you are referring to. This is the very reason I was interested in the analysis in the first place, to see what urban stations potentially might have contaminated rural stations during homogenization. However, obviously the urban-adjusted-only dataset is very different from the actual NCDC USHCNv2 dataset, as indicated by the large trend in that figure, which should be a good sign that the main (all-neighbor) dataset is not similarly contaminated.
If you look at that green line (V2.0 All Coop Neigh minus Rural Neigh), you see it does NOT have a substantial trend, suggesting that adjusting using rural-only neighbors produces similar results to homogenization using ALL neighbors, again indicating that we are not getting the “urban bleeding” when using all stations (as in USHCNv2) that many (myself included) were initially concerned about.
For more on this particular topic I discuss it on my blog:
http://troyca.wordpress.com/2013/02/13/our-paper-on-uhi-in-ushcn-is-now-published/
Thanks Carrick, I see you included the link while I was mid-post.
Bill Illis, to answer your follow-on comment:
From table SI.1, you can see that the trend in T_Min from 1895-2010 in the “unadjusted” (TOB) all-station series is 0.074 C/Decade. When using only rural stations, that trend in T_Min over the same period (again for TOB) is 0.060 to 0.064, depending on the urbanity proxy used. The difference is thus 14% to 21% of the all-station series. I would thus say that UHI contributes ~0.12C to ~0.16C of the ~0.85 C rise in MIN U.S. temperatures in the TOB-only dataset. Obviously, the conclusion of the paper is that the homogenization process removes most of this influence from UHI. The reason that the trend doesn’t decrease by this much after homogenization, when the UHI influence is removed, is because inhomogeneities identified by the PHA — which artificially deflated the trend by a similar amount — are also removed. This is why we investigated using rural-only neighbors, to see if the PHA was really just spreading the UHI, rather than actually removing it. Given that using the PHA with rural-only neighbors *still* identifies the inhomogeneities, and increases the trend, this led us to the conclusion that the corrections were warranted and not simply UHI spreading. As you recall from a while back, I had investigated using the PHA with synthetic data, and as a first check obviously just determined whether it would artificially inflate the trend. It did not:
http://troyca.wordpress.com/2011/01/14/testing-the-pha-with-synthetic-data-part-1/
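The arithmetic in the comment above can be checked directly. The trend figures below are the ones quoted there; note the computed upper fraction comes out near 19% rather than 21%, with the exact endpoints depending on the urbanity proxy and rounding.

```python
# Checking the arithmetic quoted above: how much of the unadjusted
# (TOB-only) minimum-temperature rise is attributed to urbanization.

all_station_trend = 0.074          # C/decade, TOB-only, all stations
rural_trends = (0.060, 0.064)      # C/decade, rural-only, by urbanity proxy
decades = (2010 - 1895) / 10.0     # 11.5 decades

for rural in rural_trends:
    fraction = (all_station_trend - rural) / all_station_trend
    uhi_contribution = (all_station_trend - rural) * decades
    print(round(100 * fraction, 1), round(uhi_contribution, 3))
# prints roughly 18.9 0.161 and 13.5 0.115, consistent with the
# quoted ~14-21% and ~0.12-0.16 C figures
```

The percentage is a fraction of the all-station trend, while the contribution in degrees comes from carrying the per-decade difference across the 1895-2010 span.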
Speaking of UHI and further to James Sefton’s Australia comments above, it’s worth looking at the BoM’s December 2012 update on weather for Melbourne.
http://www.bom.gov.au/climate/current/month/vic/archive/201212.melbourne.shtml
Located in the Central District at the head of Port Phillip Bay, Melbourne is Victoria’s State Capital. Here, overnight minimum temperatures were much warmer than those usually experienced and averaged 15.1°C (departure from normal 2.2°C). That the overnight temperatures in Melbourne are higher than those in most surrounding localities is a consequence of the city being under the influence of the effect of urbanisation (cities are usually warmer than their rural surroundings, especially at night, because of heat stored in bricks and concrete and trapped between close-packed buildings). Daytime maximum temperatures were much warmer than those usually experienced and averaged 25.7°C (departure from normal 1.5°C). Total rainfall for the month was 30 mm, this being less than that usually recorded (normal 59.3 mm, percentage of normal received 51%).
Some 20 kilometres northwest of the Melbourne city centre, and located in a somewhat rural setting, Melbourne Airport, is more typical of the suburban areas of Melbourne. Here, overnight minimum temperatures were slightly warmer than those usually experienced and averaged 12.5°C (departure from normal 0.5°C). Daytime maximum temperatures were much warmer than those usually experienced and averaged 26°C (departure from normal 1.6°C). Total rainfall for the month was 18.6 mm, this being much less than that usually recorded (normal 48.8 mm, percentage of normal received 38%).
OK, the BoM acknowledges that UHI affects Melbourne Regional Office temps, primarily minima which, if their airport comparison is the benchmark, adds as much as 2.6C.
Indeed, the December 2012 mean minima at nine weather stations surrounding Melbourne RO averages 12.3C, so it might be said that UHI exaggerates MRO December 2012 min by an average 2.8C.
Since they acknowledge UHI, it surely can be assumed they adjust down to compensate for it in their ACORN dataset – the homogenised temp records from a network of 112 stations since 1910 (sort of) that provide Australia’s feed into global temp indices. Melbourne RO is in the ACORN network.
If you look up Melbourne RO raw min temps via http://www.bom.gov.au/climate/data/ and BoM ACORN min temps via http://www.bom.gov.au/climate/change/acorn-sat/#tabs=1, you’ll find December 2012 adjustments thus:
1 Dec 17.6C adjusted to 17.6C
2 Dec 13.5C adjusted to 13.5C
3 Dec 14.2C adjusted to 14.2C
4 Dec 12C adjusted to 12C
5 Dec 12.2C adjusted to 12.2C
etc with no adjustment at all.
There have been no adjustments since 1998/99. So how is this explained? By looking at the adjustments for historic Melbourne RO raw vs ACORN minima records:
1910-29 adjusted up .6C
1930-59 up 1C
1960-69 up .6C
1970-89 up .4C
1990-99 up .2C
no adjustment since 98/99
Since there have been no Stevenson screen, instrument, or location changes, the early records are presumably adjusted up to compensate for modern UHI, rather than the recent records being adjusted down, with the difference narrowing since about 1960.
ACORN adjustments reduce the 1910-2012 min increase at Melbourne RO from 1.8C to 1.2C. Melbourne is lucky compared to most other stations.
For example, Laverton RAAF 87031 from the BoM December 2012 comparison table linked above, which is the only ACORN site for comparison on their Melbourne monthly update page … 1946 (earliest year without days missing) Laverton raw mean min 8.8C. ACORN 1946 adjusted down to 8.1C. Laverton 2011 raw 10C. Laverton 2011 ACORN 10C.
Historic UHI adjustments are little more than guesswork and one of various reasons why ACORN is a mess.
Bill Illis,
The 0.4 C difference between urban and rural temps over the century doesn’t translate into an overall 0.4 C bias in the temperature record from all stations. In practice, the bias is about half of that as about half of the stations are urban and half rural (the actual proportion will vary based on the urbanity proxy used). As Troy mentions, you can find the trends from all stations and rural stations for various proxies, series, and time periods in table SI.1 in the supplementary information.
In Australia there are many sites that could be described as pristine. I selected and culled to 44 sites and looked at trends in the last 35 years. The logic was that either Tmax or Tmin or Tmean would show a similar trend from place to place, theoretically related to GHG changes, over the period.
It is desirable to establish a baseline change that is as isolated from spurious effects as possible. I failed to find one. I failed to explain why slopes were all over the place.
I’ve posted this before, but nobody has yet explained it.
Its importance is that failure to obtain a consistent baseline trend in Australia also undermines attempts elsewhere in the world, until an explanation can be found. So, Zeke, you can fiddle with figures as much as you choose, but you can’t have them believed until you can explain this Australian anomaly.
http://www.geoffstuff.com/Pristine_Summary_1972_to_2006.xls
Data are from the Australian Bureau of Meteorology as posted on their web sites. There is occasional infilling that would have negligible effect on the outcome. The use of linear least squares fit does not imply an endorsement that this is the best way to interpret the data. It is simply a help to guide the eye. The period was chosen from 1972 because there is a break point in much Australian data about 1970 and I wanted to be past that. They end in 2006 because that’s when my data ended.
The summary information is graphed at the bottom.
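The per-site comparison described above amounts to fitting a least-squares slope at each station. Here is a sketch with made-up station data (the linked spreadsheet uses actual BoM records):

```python
# Sketch of the per-site trend comparison: fit a least-squares slope
# (deg C per year) to each station's annual means. Data here are made
# up purely for illustration.

def ols_slope(years, temps):
    """Ordinary least-squares slope of temps regressed on years."""
    n = len(years)
    mx = sum(years) / n
    my = sum(temps) / n
    num = sum((x - mx) * (y - my) for x, y in zip(years, temps))
    den = sum((x - mx) ** 2 for x in years)
    return num / den

years = list(range(1972, 2007))

# Hypothetical stations with different underlying trends and no noise,
# just to show how slopes can differ from place to place.
site_trends = {"Site A": 0.005, "Site B": 0.020, "Site C": -0.003}
for name, b in site_trends.items():
    temps = [20.0 + b * (y - 1972) for y in years]
    print(name, round(ols_slope(years, temps), 4))
# recovers 0.005, 0.02, -0.003 respectively
```

With noise-free data the fitted slope recovers the underlying trend exactly; with real station data the scatter of fitted slopes across sites is precisely the “all over the place” result being described.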
‘They make claims at century scales in the paper without supporting metadata at the same scale.’
Climate science, as ever, operates at a standard below that expected of a student doing a science degree. Is it really too much to ask professionals to be at least as good as their own students?
I first started taking an active interest in the basis for “climate change” a couple of years ago. Knowing nothing of which sites might deal with this issue, or what their nature might be, I searched under Hansen, whom I’d heard of.
After finding a few sites, including this one, I firstly took an interest in the actual discussions around the scientific basis for this speculation, theory, or dogma, as you choose.
Whilst keeping an interested eye open, I no longer do that assiduously.
The reason for that starts at Hansen’s NASA site and his “explanation” as to what constitutes a legitimate methodology for establishing the actual temperatures of the earth in the first instance, and then the use of anomalies in preference to raw (adjusted) temperature data.
In a nutshell, from memory, he maintains that there is no such thing as the “real” temperature because any one measurement can only be taken at a specific point, and then goes on to illustrate this supposedly intractable metaphysical problem by citing the difference between a reading, for example at a height of 1 metre compared to say 10 metres. Let alone any measurement taken in a hollow or behind trees etc a little distance away.
And since it is impractical – even impossible – to take measurements across this range that he has manufactured, this is not how things must be done.
Having claimed to have established in this transcendent fashion that there is no legitimacy to an actual measurement, he then claims that the manner for establishing truth is to apply a methodology of his own devising to the very measurement that has no legitimacy.
And so to manufacture “true” data.
I actually couldn’t and still can’t believe what I read.
This is the most bogus thing I have ever encountered. It represents the complete defeat of intelligence. It reeks of deceit.
I can honestly say that my interest in this whole issue is not driven by curiosity it is driven by fear.
The fact that this being was not just considered and accepted as a scientist but as in effect the presiding authority on this issue has made me think that mankind simply has no hope.
Even those who are sceptical, or who give alternative interpretations, seem never to actually see this rudimentary exercise in either incomprehensible incompetence or primaeval fraud. Having since heard of his manoeuvrings in the 1988 Congress hearing, I know what I think it is.
Your efforts Anthony in attempting to actually verify what was being measured both disturbed me profoundly in that it is beyond comprehension that instruments used in testing were never themselves verified, and reassured me that there was at least some basic human intelligence being applied, somewhere.
I see from some – only some – of the above comments, continuing efforts by you and possibly others, but mainly the fact that the core question of, really, what constitutes the basic application of human intelligence, is now being brought into focus, that the degradation of human capacities may soon end.
People do need to reduce all of this to such simple observations and evaluations.
It is not even necessary, in coming to a decision on whether CAGW is true or not, to even consider the science or what purports to be science. When someone, anyone, claims “I know this” and therefore “this will happen” about anything at all, and it doesn’t, then they were WRONG. That is, at the time of making such a claim THEY DID NOT KNOW WHAT THEY WERE TALKING ABOUT.
Any subsequent claims to knowledge must be judged in that light. That is, they didn’t know what they were taking about then, but claimed to, and now they are making another claim with the same level of conviction. What should I make of this revised claim – and this person?
When that person refuses to even acknowledge that they were wrong (“it’s worse than we thought”) then you are dealing with a person who is fundamentally dishonest. Intractably dishonest.
The scientific inquiry on this will go on. But it must exclude the apparent multitude of those who are simply not scientists regardless of their accreditation and ratification as such.
People such as @Crispin in Waterloo and @RACook PE1978 above are focusing on the guts not just of this issue but of the whole culture that has generated it.