Guest post by Jeff Id 
I will leave this alone for another week or two while I wait for a reply to my emails to the BEST group, but there are three primary problems with the Berkeley temperature trends which must be addressed if the result is to be taken seriously. Now, by seriously, I don’t mean by the IPCC, which takes all alarmist information seriously, but by the thinking person.
Here are the points:
1 – Chopping of data is excessive. They detect steps in the data, chop the series at the steps and reassemble them. This chopping wouldn’t be so problematic if we weren’t trying to detect hundredths of a degree of temperature change per year. Considering that even a balanced elimination of up and down steps by any algorithm I know of would detect more steps running against the trend than with it, it seems impossible that they haven’t added additional trend to the result through these methods.
Steve McIntyre discusses this here. At the very least, an examination of the bias this process could have on the result is required.
2 – UHI effect. The Berkeley study not only failed to determine the magnitude of UHI, a known effect on city temperatures that even kids can detect, it failed to detect UHI at all. Instead of treating their own methods with skepticism, they simply claimed that UHI was not detectable using MODIS and therefore not a relevant effect.
This is not statistically consistent with prior estimates, but it does verify that the effect is very small, and almost insignificant on the scale of the observed warming (1.9 ± 0.1 °C/100yr since 1950 in the land average from figure 5A).
This is in direct opposition to Anthony Watts’ surfacestations project, which, through greater detail, was very much able to detect the ‘insignificant’ effect.
Summary and Discussion
The classification of 82.5% of USHCNv2 stations based on CRN criteria provides a unique opportunity for investigating the impacts of different types of station exposure on temperature trends, allowing us to extend the work initiated in Watts [2009] and Menne et al. [2010].
The comparison of time series of annual temperature records from good and poor exposure sites shows that differences do exist between temperatures and trends calculated from USHCNv2 stations with different exposure characteristics. Unlike Menne et al. [2010], who grouped all USHCNv2 stations into two classes and found that “the unadjusted CONUS minimum temperature trend from good and poor exposure sites … show only slight differences in the unadjusted data”, we found the raw (unadjusted) minimum temperature trend to be significantly larger when estimated from the sites with the poorest exposure sites relative to the sites with the best exposure. These trend differences were present over both the recent NARR overlap period (1979-2008) and the period of record (1895-2009). We find that the partial cancellation Menne et al. [2010] reported between the effects of time of observation bias adjustment and other adjustments on minimum temperature trends is present in CRN 3 and CRN 4 stations but not CRN 5 stations. Conversely, and in agreement with Menne et al. [2010], maximum temperature trends were lower with poor exposure sites than with good exposure sites, and the differences in
trends compared to CRN 1&2 stations were statistically significant for all groups of poorly sited stations except for the CRN 5 stations alone. The magnitudes of the significant trend differences exceeded 0.1°C/decade for the period 1979-2008 and, for minimum temperatures, 0.7°C per century for the period 1895-2009.
The non-detection of UHI by Berkeley is NOT a sign of a good-quality result, considering the amazing detail that went into Surfacestations by so many people. A skeptical scientist would be naturally concerned by this, and it leaves a bad taste in my mouth, to say the least, that the authors aren’t more concerned with the Berkeley methods. Either surfacestations’ very detailed, very public results are flat wrong, or Berkeley’s black-box, literal “characterization from space” results are.
Someone needs to show me the middle ground here because I can’t find it.
I sent this in an email to Dr. Curry:
Non-detection of UHI is a sign of problems in method. If I had the time, I would compare the urban/rural BEST sorting with the completed surfacestations project. My guess is that the comparison of methods would result in a non-significant relationship.
3 – Confidence intervals.
The confidence intervals were calculated by eliminating a portion of the temperature stations and looking at the noise that the elimination created. Lubos Motl described the method accurately as intentionally ‘damaging’ the dataset. It is a clever method for identifying the sensitivity of the method and the result to noise. The problem is that the amount of damage is assumed to be equal to the percentage of temperature stations which were eliminated. Unfortunately, the high-variance stations are de-weighted by design in the process, so the elimination of 1/8 of the stations is absolutely no guarantee of damaging 1/8 of the noise. The ratio of eliminated noise to the change in the final result is assumed to be 1/8, and despite some vague discussion of Monte Carlo verification, no discussion of this non-linearity was even attempted in the paper.
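To illustrate the worry, here is a toy sketch in Python (invented numbers, not the BEST code) of why removing 1/8 of the stations need not remove 1/8 of the noise once high-variance stations have been down-weighted:

```python
import numpy as np

rng = np.random.default_rng(0)
n_stations, n_months = 400, 240

# A common "climate" signal plus station noise whose variance differs widely
signal = np.cumsum(rng.normal(0.0, 0.05, n_months))
noise_sd = rng.uniform(0.1, 2.0, n_stations)
data = signal + rng.normal(0.0, noise_sd[:, None], (n_stations, n_months))

# Down-weight the noisy stations, as a reweighted average would
weights = 1.0 / noise_sd**2

def weighted_mean(mask):
    w = weights * mask
    return (w[:, None] * data).sum(axis=0) / w.sum()

full = weighted_mean(np.ones(n_stations))

def perturbation(drop_idx):
    mask = np.ones(n_stations)
    mask[drop_idx] = 0.0
    return np.std(weighted_mean(mask) - full)

eighth = n_stations // 8
noisiest = np.argsort(noise_sd)[-eighth:]   # the 1/8 of stations with the most noise
quietest = np.argsort(noise_sd)[:eighth]    # the 1/8 with the least noise

# Removing "1/8 of the stations" perturbs the average by very different amounts
# depending on which 1/8 goes, because the weights are anything but uniform.
print("drop the noisiest 1/8:", perturbation(noisiest))
print("drop the quietest 1/8:", perturbation(quietest))
```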
Prayer to the AGW gods.
All that said, I don’t believe that warming is undetectable or that temperatures haven’t risen this century. I believe that CO2 helps warming along, as the most basic physics proves. My objection has always been to the magnitude caused by man, the danger, and the literally crazy “solutions”. Despite all of that, this temperature series is, statistically speaking, the least impressive on the market. Hopefully, the group will address my confidence interval critiques and McIntyre’s very valid breakpoint detection issues, and undertake a more in-depth UHI study.
Holding of breath is not advised.
Fred Singer’s op-ed in the WSJ is a polite and reserved rebuke of Muller’s bombastic propaganda in that same publication:
http://online.wsj.com/article/SB10001424052970204394804577012014136900828.html?mod=googlenews_wsj
HenryP,
In your article linked above you say:
“In the wavelengths areas where absorption takes place, the molecule starts acting like a little mirror, the strength of which depends on the amount of absorption taking place inside the molecule. Because the molecule is like a sphere, we may assume that ca. 62,5% of a certain amount of light (radiation) is sent back in a radius of 180 degrees in the direction where it came from. This is the warming or cooling effect of a gas hit by radiation. Same effect is also observed when car lights are put on bright in humid, moist conditions: your light is returned to you!!”
I think you are confusing absorption and scattering. Reflection of light off of a molecule is different from absorption and reemission. Absorption represents a change in the quantum state of a molecule. When a photon of the appropriate energy is absorbed by a molecule, that molecule is excited to a higher energy state. Eventually, the molecule will return to the “ground state” (lowest allowable energy state) and reemit the photon. But, the direction of the reemission depends on the particular excitation and is not strongly correlated with the original direction of the photon. I am simplifying a bit, but that’s the general picture. Reflection, on the other hand, is elastic scattering…and it is specular (angle of reflection = angle of incidence).
Your approximation of molecules as spherical is problematic. CO2 and O2 are very much non-spherical, and larger organic molecules, such as long-chain hydrocarbons, are even less so.
Also, even if you make this spherical approximation, your claim that the light is reflected 180 degrees is incorrect. The reflection angle depends on the incidence angle. If a ray of light hits a reflecting sphere off-center, the light will not be reflected back along the same direction it came. Sit down and draw the ray optics. Your argument about the light from your headlights “coming back to you” drives the point home. If the light were reflected at 180 degrees it would come back to the headlights…not to you. You are seeing scattered and reflected light at a different angle from 180 degrees. And, you are only seeing a small fraction of the light. The bulk of it is absorbed, forward scattered, or transmitted.
In the end, it is true that atmospheric gasses reflect some light back into space. This is a component of the earth’s “albedo” and it does have a slight cooling effect. However, ice and clouds are much more important to the earth’s albedo than atmospheric gasses.
When greenhouse gasses absorb IR radiation from the sun, the direction of reemission is not specular (mirror image reflection). It is randomized. And it is this randomized reemission that contributes to the warming effect of GHGs. At particular IR wavelengths the warming from absorption and reemission is much more significant than the cooling from back reflection of the gasses.
Regarding surface temperatures: it’s not only the “UHI” effect; I’ve never seen anyone speak to a Rural Terrain Heat Island effect.
What is the “RTHI” effect, you ask? Well, every motorcycle rider knows what this is. In the evening or at night, as you ride through the countryside and crest a hill, you can feel the temperature drop by many degrees, or enter a valley where the warm night air (and insect rain) is often many degrees warmer. This effect of hitting warm and cold spots also happens on relatively level patches of road, often triggered by a wood lot or river. Temperature inversion layers that trap heat or cold close to the ground could easily throw rural temperature measurement stations off by many degrees; how is this compensated for?
“At particular IR wavelengths the warming from absorption and reemission is much more significant than the cooling from back reflection of the gasses.”
You are doing the same thing he did; conflating absorption and re-emission, and reflection.
Doesn’t some of the randomized re-emission of absorbed IR radiation from the sun, go back into space?
Matt, I go with observations. Absorption is a wrong term. I grew up with terms like extinction and transmission. Water vapor is somewhat problematic because the molecules build up to small droplets which do cause optical scattering. I know what the difference is. However, it appears that the observed effect (e.g. via the moon) is pretty much as if it were mirrored. (See my footnote: follow the green and blue line in fig 6 bottom and see how everything comes back to earth via the moon in fig 6 top and figure 7.)
E.g. there is no change in the molecule (of carbon dioxide), quantum or otherwise, if you throw light of 4.26 um on it to measure it, because otherwise it would get warmer and eventually explode if you measured the % in a closed container and left the meter on?
It appears that there is a table showing the sun’s Watts/cm2 between 4 and 5 um
(e.g. Nasa report 351)
but where is the table showing earth’s emission in Watts per cm2 between 14 and 15 um?
In other words, I am asking how much exactly is the difference between the cooling effect and the warming effect of the CO2? If you don’t have those measurements in Watts/m2/m3 0.01%CO2/24 hours for both the cooling and warming effect of the CO2 then how would anyone know for sure that the net effect of more carbon dioxide is warming rather than cooling?
My biggest concern is that people are actually trying to work with BEST’s trend data without looking at the actual data in the “Site” recordsets.
The Taverage data, which are temperatures, not anomalies, are absolutely riddled with errors; how can you use an error-ridden dataset for accurate trend analysis?
I scanned the data initially looking for patterns and it soon became apparent that the data have lots of improper minus signs, causing values to vary by 30-40 degrees between months.
Next I looked at winter averages and found that they were higher than summer averages; how can this be?
I ran a simple query for all stations above 10 degrees latitude comparing January against June, July and August, and flagged any case where January was higher than any one of them.
Of the 34,103 sites above 10 degrees latitude, 30,506 have one or more January averages higher than June, July or August, sometimes all three summer months.
In addition to my previous post, I found that of the 34,103 sites above 10 degrees latitude, 29,770 have one or more years with both January and February averages higher than June, July or August, sometimes all three summer months.
This can’t possibly be correct; what has their processing done to the values?
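For reference, a minimal sketch of the kind of check described above might look like this (Python/pandas, assuming a hypothetical flat-file layout with station_id, latitude, year, month and t_avg columns; the real BEST “Site” files are organized differently):

```python
import pandas as pd

# Hypothetical layout: one row per station-month with columns
# station_id, latitude, year, month, t_avg (the real BEST files differ).
df = pd.read_csv("best_taverage.csv")

north = df[df["latitude"] > 10]
monthly = north.pivot_table(index=["station_id", "year"],
                            columns="month", values="t_avg")

# Flag station-years where the January average exceeds June, July or August
flagged = monthly[monthly[1] > monthly[[6, 7, 8]].min(axis=1)]
n_flagged_stations = flagged.index.get_level_values("station_id").nunique()
print(n_flagged_stations, "stations above 10 degrees latitude have at least one",
      "January average higher than a summer-month average")
```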
Richard says:
November 4, 2011 at 2:24 am
“Why is it that we cannot seem to agree that UHI (which is a well observed fact and easily visible in the data) is different to dUHI and dRural.”
=========================================================================
Richard has this right, assuming he means dT(UHI)/dt and dT(Rural)/dt. Too many in this discussion seem to be confusing the trend in urban and rural temperatures over time with the difference between urban and rural temperatures at a given time (the UHI effect).
It would seem to me that the effect on a weather station of being engulfed in the UHI is not initially a constant, but something that rises to some maximum level over time and then remains constant.
Consider two stations A and B, a couple of hundred kilometers apart. Both stations were rural at some time in the past. At that time temperatures at both stations were the same. Now assume that station B starts to be engulfed in urbanization from a nearby city. Station B temperature starts being greater than A because of the urban heat island. As the city overtakes Station B the difference between Station A and Station B becomes larger. Finally Station B is fully urban, and (assuming no change in heat added by urbanization) the temperature difference between Station A and Station B now becomes a constant. The temperature anomaly graph with time will look different for the two stations if the time covered begins when they were both rural. But if it begins when Station B was fully urban there will be no difference.
Of course, anthropogenic UHI heat content is not constant, nor the same for any two urban areas, which makes land-based historic data very difficult to sort out.
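The scenario is easy to put in numbers. A toy sketch (Python, invented figures) of the two-station example above: the UHI offset shows up as a warming trend only if the anomaly baseline predates the urbanization.

```python
import numpy as np

years = np.arange(1900, 2011)
true_climate = np.zeros(years.size)            # assume no real trend, for clarity

# Station A stays rural; Station B is engulfed by a city between 1950 and 1980,
# after which its UHI offset is a constant 2 C (the scenario described above).
uhi_b = 2.0 * np.clip((years - 1950) / 30.0, 0.0, 1.0)
station_a = true_climate
station_b = true_climate + uhi_b

def anomaly(series, base):
    ref = series[(years >= base[0]) & (years < base[1])].mean()
    return series - ref

# Baseline while both were rural: B's anomaly ends 2 C above A's,
# so the growing UHI appears as a warming trend at B.
early = (1900, 1930)
print(anomaly(station_b, early)[-1] - anomaly(station_a, early)[-1])   # ~2.0

# Baseline after B is fully urban: the anomaly series are identical,
# and the now-constant UHI offset is invisible in the anomalies.
late = (1990, 2011)
print(anomaly(station_b, late)[-1] - anomaly(station_a, late)[-1])     # ~0.0
```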
P. Solar says:
November 4, 2011 at 3:02 am
I agree with you; compare the BEST results to the pre-2000 versions of the other datasets. Where have all the major peaks and valleys gone, especially the one that prompted the “Ice Age” concerns of the 70s?
Matt says:
November 3, 2011 at 7:34 pm
“Someone should point out to Jeff that there is a difference between saying “there is no UHI” and saying “there is significant UHI but cities only account for a small fraction of land surface and have only a small impact on global averaged trends”.
Note that the second quote is -not- an exculpatory factor. It’s a -compounding- factor. If cities are a minuscule fraction of land mass (true) –but the majority of temperature measurements happen to be within (or near enough to) the cities– then your entire data set has an exceedingly skewed sample.
That is: Cities -do- have a ” a small impact on –actual– global averaged trends”, but the –measurements– are unfortunately concentrated in exactly those areas — and thus the UHI –effect– can have an overwhelming effect on the global –measurements– of the average trend.
IOW:
We’d be better off flat-out excluding data from anywhere within 10 miles of a city. That is excluding a –small– portion of the landmass with known-corrupted data, even though those areas happen to contain a vastly disproportionate fraction of the available surface stations.
1) CRN1 is a -prerequisite- for competent data, not a sufficient condition.
2) UHI is not a microsite issue, is unfixed by CRN1 quality stations, and happens at -far- more stations than “1% of land mass” would predict – and thus is non-negligible.
3) A calibrated point-source measurement is not a calibrated grid-cell average measurement.
Don,
“You are doing the same thing he did; conflating absorption and re-emission, and reflection.”
Please explain how I’m doing the same thing he did. I very clearly distinguished between elastic scattering and absorption. Specular reflection of light depends on angle of incidence. Angular re-emission of IR is more or less isotropic.
“Doesn’t some of the randomized re-emission of absorbed IR radiation from the sun, go back into space?”
Of course it does. The re-emission is isotropic, so some of the re-emitted IR is pointing back towards space. The other important point is that the probabilities of absorption and scattering depend heavily on wavelength. Atmospheric gasses are largely transparent to optical wavelengths. Blue light has a very short mean free path and a high probability of scattering (answering the age old “why is the sky blue” question). However gases like water-vapor and CO2 make the atmosphere highly opaque to IR. The probability of absorption (for certain wavelengths) is much higher than transmission or reflection.
Steven Mosher: Can we conclude (as Christy, Spencer and Steve do) that the difference, .28 – .18C, or .1C per decade, could be UHI?
In Steve McIntyre’s wording, I think that is reasonable.
There have been a number of good critiques of the BEST analyses, and this is one of them. Point #2 (imo) can’t really be addressed without some study of the particular surface stations: what distinguishes the “warming” from the “nearly constant” from the “cooling” stations. But the comparison of the satellite trend to the surface station trend is reasonable.
Henry P,
Read my point to Mont. Different wavelengths have different probabilities of absorption and reflection. Certain IR wavelengths are maximally absorbed by water.
Here is a nice plot of the transmissivity of the atmosphere to various wavelengths:
http://en.wikipedia.org/wiki/File:Atmospheric_electromagnetic_opacity.svg
“Water vapor is somewhat problematic because the molecules build up to small droplets which do cause optical scattering.”
No. Water vapor is the gaseous state of water. Droplets that form clouds are in a liquid state. Clouds do reflect a lot of light back into space. In any case, it is “optical” light that reflects off of water droplets. Water is not a good IR reflector, but it is a very good absorber:
http://en.wikipedia.org/wiki/File:Water_absorption_spectrum.png
Anyway, I would really suggest that you work through some formal physics. You’re clearly a smart guy. But, without basic physics literacy it is very easy to think that you understand things that you don’t.
I wouldn’t even purport to be an expert on the topic of atmospheric response to radiation and I have a PhD in physics. I’m riding on what I learned from Electrodynamics in grad school and from some hands-on experience working with an IR laser (although my laser is very shallow IR). I know enough to see that you are clearly jumbling up concepts. I don’t know what else to say. The teacher in me hurts to hear arguments like this, and I know it’s probably futile to suggest that you sit down with someone and try to learn some of the formal basics. Again, I don’t mean to sound condescending. I’m not an expert on the topic either. But, I’m also not challenging this century-old science. If I wanted to do that, I would also sit down with an expert and hit the books first.
There are plenty of legitimate scientific discussion points on the subject of AGW. But, the radiative properties of CO2 are just not among them…Anyway, I can’t let myself get distracted again…I’m done here. Cheers.
old engineer,
Can you name some of those who are disagreeing with Richard?
Quick examination of stations to see how many are inside city limits:
Brewton, AL: Inside.
Fairhope, AL: Inside.
Gainesville, AL: 2km.
Greensboro, AL: Inside.
Highland Home, AL: (1), 500m to high school.
Muscle Shoals, AL: Inside.
Scottsboro, AL: Inside.
Selma, AL: Inside.
St Bernard, AL: Inside.
Taladega, AL: Inside.
Thomasville, AL: Inside.
Troy, AL: Inside.
(1) No city limit demarcation in google maps.
That’s the first page of the Alabama USHCN stations as tabulated at surfacestations.org.
So the (daft) quote is “there is significant UHI but cities only account for a small fraction of land surface and have only a small impact on global averaged trends”.
Taking the most conservative position possible from that data:
That’s -83%- of the data collected -inside- cities. Those areas known to have issues and known to be -unrepresentative- of the bulk land mass. Nice data collection methods.
Tempted to run through the entire list of USHCN stations, there just aren’t that many.
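As a back-of-the-envelope illustration of why that kind of skew matters for a simple station average (toy numbers only, not a claim about any real grid cell):

```python
import numpy as np

# Toy grid cell: 0.5% of the area is urban and runs 2 C warmer than the
# surrounding rural land (made-up numbers, purely for illustration).
rural_temp, urban_temp, urban_fraction = 10.0, 12.0, 0.005

# True area-weighted average for the cell
true_cell_mean = (1 - urban_fraction) * rural_temp + urban_fraction * urban_temp

# But if 10 of the 12 stations in the cell sit inside the urban area
# (roughly the split in the Alabama list above), a simple station
# average looks very different from the area average.
station_temps = np.array([urban_temp] * 10 + [rural_temp] * 2)

print("area-weighted cell mean: %.2f C" % true_cell_mean)        # ~10.01 C
print("simple station mean:     %.2f C" % station_temps.mean())  # ~11.67 C
```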
Stephen Rasey: The splices contain no low frequency information in the spectrum where we expect GW and UHI signals to exist. But when they glue the splices together, low frequencies return – but from WHERE? It can only come from the glue. It is not in the data anymore.
They cut and splice where there is a jump discontinuity in the data. It is the act that produced the jump discontinuity (which may have been relocating the thermometer, or putting an asphalt runway near it) that perturbed the low frequency signal. Cutting and splicing restores the low frequency signal that the jump discontinuity perverted.
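As a toy illustration of that argument (not BEST’s actual algorithm, just a crude cut-and-realign on a synthetic series with one station move):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(240)
series = 0.002 * t + rng.normal(0.0, 0.2, t.size)   # slow "low frequency" warming
series[120:] -= 2.0                                  # a station move adds a -2.0 step

# Crude scalpel: cut at the largest month-to-month jump and shift the second
# segment so the means on either side of the cut line up again.
cut = np.argmax(np.abs(np.diff(series))) + 1
offset = series[cut - 12:cut].mean() - series[cut:cut + 12].mean()
repaired = series.copy()
repaired[cut:] += offset

print("trend of raw series:        %.4f /month" % np.polyfit(t, series, 1)[0])
print("trend of re-aligned series: %.4f /month" % np.polyfit(t, repaired, 1)[0])
print("trend that was put in:      0.0020 /month")
# The raw fitted trend is dragged down by the step; forcing continuity across
# the cut ("the glue") is what puts the low-frequency trend back.
```

The catch, raised in the final comment below, is that the same re-alignment will just as happily “repair” steps caused by moves away from a growing heat island.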
Matt,
Read the sentence of yours that I quoted, again. You conflated absorption and re-emission with reflection. You did not mention that absorption and re-emission results in some of the re-emission going back into space. Don’t you think that might have been appropriate there? But if you are happy with that, go with it.
And see what Alan Blue has to say, above. He knows how to frame an issue.
You say Berkeley, I say Berkley. Probably best to spell it correctly because Berkley is Cockney rhyming slang. English is a rich and fascinating language dear to my heart, my only reason for offering this. Anyone who does something extremely stupid can be called a “great steaming birk”, (Berkley contracted) with little chance of causing offence. This seems odd once you appreciate the true meaning of the word.
For those not au fait with old East London Cockney rhyming slang… Tit for tat rhymes with hat, so the Cockney’s hat is his “titfer”. China plate rhymes with mate, so his buddy is his “China”. Richard the Third rhymes with bird and they gave our children the “Dicky birds”. True Cockneys have to be born within the sound of Bow bells. They greet each other with “Wotcha”, which comes straight from the high language of chivalry, “What cheer, Sir knight?” But I digress.
“Berkley Hunt” is the female, anatomical equivalent of the male “Hampton Wick”, so I will quite understand if this reply gets moderated out of existence.
http://wattsupwiththat.com/2011/11/03/a-considered-critique-of-berkley-temperature-series/#comment-787568
“Perhaps you are unaware of this but if you take the time to correctly process satellite LTL data and compare it to ground data, there is a statistically significant difference in trend. i.e. detrend sat data, scale variance, retrend, regress, examine residuals. That is really all the confirmation of UHI that I need. So when a paper is published on non-detection of UHI, it is an example of go home and do it again. ”
Actually, I don’t know at all what the above means.
Are you suggesting that you yourself have taken the time to do your own independent surface temperature reconstruction from the raw satellite data, independent from the methods used by UAH and/or RSS?
Also, how do we know that the satellite data are the real temperature trend? After all, it is an indirect measurement of the lower troposphere and not a direct measurement at (or near) ground level;
http://en.wikipedia.org/wiki/Satellite_temperature_measurements
“Satellites do not measure temperature. They measure radiances in various wavelength bands, which must then be mathematically inverted to obtain indirect inferences of temperature. The resulting temperature profiles depend on details of the methods that are used to obtain temperatures from radiances. As a result, different groups that have analyzed the satellite data have produced differing temperature datasets. Among these are the UAH dataset prepared at the University of Alabama in Huntsville and the RSS dataset prepared by Remote Sensing Systems. The satellite series is not fully homogeneous – it is constructed from a series of satellites with similar but not identical instrumentation. The sensors deteriorate over time, and corrections are necessary for orbital drift and decay. Particularly large differences between reconstructed temperature series occur at the few times when there is little temporal overlap between successive satellites, making intercalibration difficult.”
After reading that (and more), I can only conclude that the direct surface temperature measurements are not the same as the indirect lower troposphere satellite measurements: one is direct, the other is indirect; one is at the surface, the other is somewhere (??) in the troposphere.
So if I were to look anywhere first, then it would be the satellite data, as the 0.1C difference is more likely due to errors in the satellite data, or errors in the mathematical inversion to troposphere temperature, or simply due to the two sets of measurements not being taken from the same elevations.
Oh, and here are a couple of links for you:
http://www.demographia.com/db-worldua.pdf
http://en.wikipedia.org/wiki/Earth
From the first of the above two links;
“This report contains population, land area and population density for all 780 identified urban areas (urban agglomerations or urbanized areas) in the world with 500,000 or more population as of the volume date. A number of additional urban areas are also listed, including all urban areas over 100,000 in France, New Zealand, Puerto Rico, the United Kingdom and the United States and all urban areas over 50,000 in Australia and Canada. Rankings are indicated only for urban areas of 500,000 and over.
More than 1,400 urban areas of all sizes are included, accounting for 53 percent of the world urban population in the fourth quarter of 2005 (the average year of the estimates).”
From Table 7: 1,824,985,000 people live in these urban areas with an average population density of 5,480 people/km^2;
1,824,985,000/5,480 = 333,000 km^2
The total land surface area of the Earth is 148,940,000 km^2.
333,000/148,940,000 = 0.0022, or 0.22% of the total land surface of the Earth is comprised of an urban population of 1,824,985,000 people (as of 2005).
That means that the rest of Earth’s 2005 population of ~6.5 billion people occupy the rest of Earth’s land surface;
(6,500,000,000 – 1,824,985,000)/(148,940,000 – 333,000) ≈ 31.5 people per square kilometer
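The arithmetic checks out; here is a quick re-run of the figures quoted above (which are the commenter’s Demographia numbers, not independently verified):

```python
urban_population = 1_824_985_000      # people in the listed urban areas (2005)
urban_density    = 5_480              # people per km^2 (Table 7 average)
land_area        = 148_940_000        # total land surface of the Earth, km^2
world_population = 6_500_000_000      # approximate 2005 world population

urban_area = urban_population / urban_density               # ~333,000 km^2
urban_share = urban_area / land_area                         # ~0.0022, i.e. ~0.22%
rural_density = (world_population - urban_population) / (land_area - urban_area)

print("urban area:    %.0f km^2" % urban_area)
print("urban share:   %.2f%% of land" % (100 * urban_share))
print("rural density: %.1f people/km^2" % rural_density)
```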
It would be awfully hard to imagine that Earth’s total urban population covers more than say 1% of Earth’s total land surface area.
QED, therefore, forthwith, we can safely conclude that a proper area-weighted land surface temperature reconstruction from all land surface measurements, both with urban areas and without urban areas, will be essentially the same, to say two decimal points of precision (you all can fight over whichever temperature scale you prefer).
So, essentially, your reasoning is that the temperature dataset, the quality of that dataset, the statistics and their methods, and the quality of said statistics and methods are absolutely, positively, in all shapes and forms completely crap and can’t be trusted to prove diddly squat, but you believe that the temperature has gone up anyway.
Exactly how does that not make you sound like an utter global warming fundamentalist too deep into the bubbles of his own bong? :p
“The conclusion of the three groups is that the urban heat island contribution to the global average is much smaller than the observed global warming. Support is provided by the studies of Karl et al. (1988), Peterson et al. (1999), Peterson (2003) and Parker (2004) who also conclude that the magnitude of the effect of urban heating on global averages is small. There has been further discussion about the possibility of large non-climatic contamination in global temperature averages, particularly due to local effects of urbanization, development, and industrialization (see, for example, McKitrick & Michaels 2004, 2007; De Laat & Maurellis 2006; Schmidt 2009; and McKitrick & Nierenberg 2010).”
———————————————————–
This is not a balanced review of the peer-reviewed literature. Supporting papers are clearly highlighted as “support”, while opposing papers are summarized as “further discussion”.
Further discussion of topics such as satellite/ground differences, diverging sea surface/land trends, the UHI log-population law and the expected dUHI increase with population growth does not appear at all.
Verity, my view from a number of blogs is that Steven Mosher is about 14 years old… and is going through puberty. Ignore him. He is worried about his pimples… he has a telescope and a laptop and he thinks he knows things.
Richard says:
November 4, 2011 at 5:00 am
I think they failed to differentiate between UHI and dUHI, where only the latter is relevant for trends.
They failed to explain why “very rural stations” should show low dUHI when the UHI log-population law implies something else, let alone microsite issues and land-use change.
steven mosher says:
November 4, 2011 at 8:15 am
You asked:
Steve’s analysis suggest an upper bound. Are you open to discussion of the upper bound or do you disagree with McIntyre, Christy, Spencer and Pielke?
I can either answer yes or no to the first part of your question (yes, I am open to discussion of it; or no, I see no need to argue about it or discuss that it is beyond this limit, which Steve suggests is reasonable and I agree), and no to the second part: I don’t disagree with Steve et al. My lack of a clear answer earlier was due to slight puzzlement at the phrasing of your question.
My only qualification of this is to point out, as I am sure you and Steve would agree, that this is net warming, since some sites are cooling (30% according to BEST). In this case the individual site limits for UHI developing over the period are not bounded by this 0.1C/decade.
I am not suggesting extrapolation of the 0.1C rate either, merely observing that this is a reasonable bound for the satellite era, but that is all. And that rate clearly does not marry well with the historical overall rate of warming, which again is my point that UHI development will vary with time and location. How do we discern this in the surface record?
Septic Matthew says:
November 4, 2011 at 12:50 pm
They cut and splice where there is a jump discontinuity in the data. It is the act that produced the jump discontinuity (which may have been relocating the thermometer, or putting an asphalt runway near it) that perturbed the low frequency signal. Cutting and splicing restores the low frequency signal that the jump discontinuity perverted.
I thought like this too until recently, when Steve McIntyre presented a possible scenario that complicates the issue. If a station in a town which grew into a city over a century and a half was moved, over the years of growth, to ever more rural areas, each move could produce a sharp downstep in the data. This downstep after the move could then creep up over decades as more UHI built around it, until the station is moved again, producing another downstep. BEST would try to detect and re-align the downsteps, producing a long-term uptrend, whereas the true signal would be far closer to the original WITH the steps in it.
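To put that scenario in numbers, here is a toy version (Python, invented figures, not BEST’s code): the true climate signal is flat, UHI creeps in at 0.02 C per year, and the station is moved back out of town every 30 years, producing sharp downsteps. Re-aligning the segments at those downsteps recovers the UHI creep as if it were a climate trend.

```python
import numpy as np

years = np.arange(1900, 2010)
uhi_creep = 0.02 * ((years - 1900) % 30)   # sawtooth: slow UHI growth, reset by each move
record = uhi_creep.copy()                  # true climate signal assumed flat (zero)

# Scalpel-style treatment: cut at each downstep and shift every segment so the
# record joins up smoothly where the station moves used to be.
cuts = np.where(np.diff(record) < -0.1)[0] + 1
segments = np.split(record, cuts)
rejoined, level = [], 0.0
for seg in segments:
    seg = seg - seg[0] + level
    rejoined.append(seg)
    level = seg[-1]
rejoined = np.concatenate(rejoined)

print("trend of the raw record:       %.4f C/yr" % np.polyfit(years, record, 1)[0])
print("trend after re-aligning steps: %.4f C/yr" % np.polyfit(years, rejoined, 1)[0])
# The raw record has almost no long-term trend (the moves keep cancelling the
# UHI creep), while the re-aligned series inherits nearly the full 0.02 C/yr.
```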