A Considered Critique of Berkeley Temperature Series

Guest post by Jeff Id File:Berkeley earth surface temperature logo.jpg

I will leave this alone for another week or two while I wait for a reply to my emails to the BEST group, but there are three primary problems with the Berkeley temperature trends which must be addressed if the result is to be taken seriously.  Now by seriously, I don’t mean by the IPCC which takes all alarmist information seriously, but by the thinking person.

Here’s the points:

1 – Chopping of data is excessive.   They detect steps in the data, chop the series at the steps and reassemble them.   These steps wouldn’t  be so problematic if we weren’t worrying about detecting hundredths of a degree of temperature change per year. Considering that a balanced elimination of up and down steps in any algorithm I know of would always detect more steps in the opposite direction of trend, it seems impossible that they haven’t added an additional amount of trend to the result through these methods.

Steve McIntyre discusses this here. At the very least, an examination of the bias this process could have on the result is required.

2 – UHI effect.  The Berkeley study not only failed to determine the magnitude of UHI, a known effect on city temperatures that even kids can detect, it failed to detect UHI at all.  Instead of treating their own methods with skepticism, they simply claimed that UHI was not detectable using MODIS and therefore not a relevent effect.

This is not statistically consistent with prior estimates, but it does verify that the effect is very small, and almost insignificant on the scale of the observed warming (1.9 ± 0.1 °C/100yr since 1950 in the land average from figure 5A).

This is in direct opposition to Anthony Watts surfacestation project which through greater detail was very much able to detect the ‘insignificant’ effect.

Summary and Discussion

The classification of 82.5% of USHCNv2 stations based on CRN criteria provides a unique opportunity for investigating the impacts of different types of station exposure on temperature trends, allowing us to extend the work initiated in Watts [2009] and Menne et al. [2010].

The comparison of time series of annual temperature records from good and poor exposure sites shows that differences do exist between temperatures and trends calculated from USHCNv2 stations with different exposure characteristics. 550 Unlike Menne et al. [2010], who grouped all USHCNv2 stations into two classes and found that “the unadjusted CONUS minimum temperature trend from good and poor exposure sites … show only slight differences in the unadjusted data”, we found the raw (unadjusted) minimum temperature trend to be significantly larger when estimated from the sites with the poorest exposure sites relative to the sites with the best exposure. These trend differences were present over both the recent NARR overlap period (1979-2008) and the period of record (1895-2009). We find that the partial cancellation Menne et al. [2010] reported between the effects of time of observation bias adjustment and other adjustments on minimum temperature trends is present in CRN 3 and CRN 4 stations but not CRN 5 stations. Conversely, and in agreement with Menne et al. [2010], maximum temperature trends were lower with poor exposure sites than with good exposure sites, and the differences in

trends compared to CRN 1&2 stations were statistically significant for all groups of poorly sited stations except for the CRN 5 stations alone. The magnitudes of the significant trend differences exceeded 0.1°C/decade for the period 1979-2008 and, for minimum temperatures, 0.7°C per century for the period 1895-2009.

The non-detection of UHI by Berkeley is NOT a sign of a good quality result considering the amazing detail that went into Surfacestations by so many people. A skeptical scientist would be naturally concerned by this and it leaves a bad taste in my mouth to say the least that the authors aren’t more concerned with the Berkeley methods. Either surfacestations very detailed, very public results are flat wrong or Berkeley’s black box literal “characterization from space” results are.

Someone needs to show me the middle ground here because I can’t find it.

I sent this in an email to Dr. Curry:

Non-detection of UHI is a sign of problems in method. If I had the time, I would compare the urban/rural BEST sorting with the completed surfacestations project. My guess is that the comparison of methods would result in a non-significant relationship.

3 – Confidence intervals.

The confidence intervals were calculated in this method by eliminating a portion of the temperature stations and looking at the noise that the elimination created. Lubos Motl described the method accurately as intentionally ‘damaging’ the dataset.  It is a clever method to identify the sensitivity of the method and result to noise.  The problem is that the amount of damage assumed is equal to the percentage of temperature stations which were eliminated. Unfortunately the high variance stations are de-weighted by intent in the processes such that the elimination of 1/8 of the stations is absolutely no guarantee of damaging 1/8 of the noise. The ratio of eliminated noise to change in final result is assumed to be 1/8 and despite some vague discussion of Monte-Carlo verifications, no discussion of this non-linearity was even attempted in the paper.

Prayer to the AGW gods.

All that said, I don’t believe that warming is undetectable or that temperatures haven’t risen this century. I believe that CO2 helps warming along as the most basic physics proves. My objection has always been to the magnitude caused by man, the danger and the literally crazy “solutions”. Despite all of that, this temperature series is statistically speaking, the least impressive on the market. Hopefully, the group will address my confidence interval critiques, McIntyre’s very valid breakpoint detection issues and a more in depth UHI study.

Holding of breath is not advised.

5 1 vote
Article Rating
145 Comments
Oldest
Newest Most Voted
Inline Feedbacks
View all comments
Editor
November 3, 2011 6:37 pm

Excellent Jeff. My own concerns centre around issue #2.

R. Shearer
November 3, 2011 6:39 pm

Despite all this “warming” we’re only one large volcanic eruption away from a “year without a summer.”

November 3, 2011 7:06 pm

Still more of why I don’t think it shoud be called “BEST” –
I’m still favoring Berkeley EST.

David
November 3, 2011 7:06 pm

When is all the raw data and code going to be released? My understanding is that what has been released thus far is of very limited value.
I know some people have said it’s just preliminary stuff and the real stuff is coming but if the papers are ready for peer review and Muller is all over the news it seems odd that what was released is not particularly useable to people who want to get to the heart of the methodolgy that was used.

Doug in Seattle
November 3, 2011 7:18 pm

While I agree that the BEST scalpel is a good idea, I think that in the end it can only be properly employed based on a direct examination of both the metadata and the temperature data. What the BEST crew did was to try and automate this process based on trend.
This is really a problem with the research model rather than the researchers. Crowd sourcing, as was done in the Surface Stations project might be a better way to accomplish this.

November 3, 2011 7:20 pm

Concern #1 speaks loudest. Once you start hacking at the data, you basically add a bias. Mother Nature doesn’t do that, nor does Anthropogenophecles, the god of significant figures.

November 3, 2011 7:21 pm

Admin: just some housekeeping. Can you please perform a replace of Berkley with Berkeley? Thanks.

November 3, 2011 7:23 pm

David says:
November 3, 2011 at 7:06 pm
“…it seems odd that what was released is not particularly useable to people who want to get to the heart of the methodolgy that was used.”

People who frequent “preprint libraries”, although vast in number, are not qualified to critique methodology.

u.k.(us)
November 3, 2011 7:23 pm

I just don’t want Al Gore running the world.

Randy
November 3, 2011 7:31 pm

Well spoken. The words you use are the same as the thoughts we are thinking as we read along. Succinct. And to the point. Thanks for what you and Anthony do.

Matt
November 3, 2011 7:34 pm

Someone should point out to Jeff that there is a difference between saying “there is no UHI” and saying “there is significant UHI but cities only account for a small fraction of land surface and have only a small impact on global averaged trends”. Look, I’m not interested in debating whether or not the latter statement is true. But, the fact that “kid’s can measure” local urban heat islands doesn’t mean it has a significant impact on globally averaged means.
“The non-detection of UHI by Berkley is NOT a sign of a good quality result considering the amazing detail that went into Surfacestations by so many people”
Lot’s of science involves *amazing detail* and yet yields a null result. I’m glad that Anthony et al did such a thorough assessment of station quality. It was a real service to the science. But, that fact that they worked hard does not have any bearing on whether or not UHI has an impact on the change in globally averaged temperature anomaly. Lot’s of measurements and average trends are robust over poorly inter-calibrated instruments. What Jeff is saying is that the Berkeley results *must* be wrong because they go against his a priori belief.
One last point: I may be tired and misreading this, but doesn’t the excerpt from Anthony’s paper say that the reconstructions with the bad sites *understate* the temperature trend (in agreement with Meene et al and also, BTW, Berkeley)? Isn’t that the opposite of what Jeff wants to believe?

Don Monfort
November 3, 2011 7:38 pm

“Admin: just some housekeeping. Can you please perform a replace of Berkley with Berkeley? Thanks.”
It’s actually Berzerkely.

u.k.(us)
November 3, 2011 7:55 pm

Matt says:
November 3, 2011 at 7:34 pm
“Lot’s of measurements and average trends are robust over poorly inter-calibrated instruments.”
===========
I assume you have a peer-reviewed paper to back-up this claim ?
I’m sorry, I mean a paper that has cleared peer-review.
A link to same would be best.

November 3, 2011 7:57 pm

Verity
Lets look at issue #2
Look at Steve McIntyre’s latest post.
We start with the trend of Satellites. Surely you accept the trends of John Christy and Roy Spencer.
Then steve applies a similar technique to that used here
http://hurricane.atmos.colostate.edu/Includes/Documents/Publications/klotzbachetal2009.pdf
Then he compares it to the surface trend
Lets walk through it slowly using one example.
1. We accept the UAH trend say .18C per decade
2. we look at CRU warming more at .28C per deacde
Can we conclude ( as Christy, Spencer and Steve do) that the difference
.28 – .18C or .1C per decade could be UHI?
Thats about what Ross suggests?
We all realize that UHI is a potential problem. The first question is can we bound the problem.
its not zero ( so BEST is wrong) AND its not 9C. the whole world is not Tokyo.
Steve’s analysis suggest an upper bound. Are you open to discussion of the upper bound or do you disagree with McIntyre, Christy, Spencer and Pielke?

November 3, 2011 8:02 pm

Another item I’ve NEVER heard these “wonks” address. Yes, we have “thermometer data going back to about 1800.
NO, none of the data until the 20’s or 30’s (and a small amount at first) was taken with “rotating drum”, daily recorders.
The problem with this? WHEN IS PEAK, when is trough?
We have NO assurance, without DAY LONG RECORDINGS that peaks and troughs were properly recorded.
I HAVE NEVER HEARD THIS MENTIONED OR ADDRESSED.
Frankly, the lack of discussion of that fact tells me that “all the best laid plans of mice and men” have gone wrong.
There is, in essence, NO VALUE to data from about 1800 to 1920 !!!
Max

Richard M
November 3, 2011 8:11 pm

The UHI effect is obviously a fact. The problem is detecting it and determining how much influence it has on the overall trends. Since it varies by time and location it will require a lot of effort to understand. I haven’t seen this effort from anyone including BEST.

November 3, 2011 8:17 pm

Is the BEST the enemy of good (analysis)?
How do these BEST trends compare with satellite trends (for the period for which both sets exist)? I am not sure one can give too much credence to land based measurements considering the changes that can be introduced into the microenvironment, without even trying (e.g., due to changes in vegetative cover, clearing of land, in the immediate vicinity of the instrument, etc.)

Don Monfort
November 3, 2011 8:21 pm

Matt,
“there is significant UHI but cities only account for a small fraction of land surface and have only a small impact on global averaged trends”. Cities, towns, villages, suburbs, neighborhoods, hamlets, etc. are where the thermometers are. Population centers are over-represented in the temperature records. Read the BEST paper on UHI and tell us how they dealt with that issue.

November 3, 2011 8:22 pm

Max Hugoson says:
November 3, 2011 at 8:02 pm
The problem with this? WHEN IS PEAK, when is trough? We have NO assurance, without DAY LONG RECORDINGS that peaks and troughs were properly recorded.
I HAVE NEVER HEARD THIS MENTIONED OR ADDRESSED.

The max-min-thermometer was invented in 1794 by Six, so now you have heard this mentioned.

Don Monfort
November 3, 2011 8:23 pm

I will play Steve. If you are asserting that a reasonable upper limit on UHI is .1C per decade, I will take that. I only hope that this time your guessing game has and ending.

JJ
November 3, 2011 8:27 pm

JohnWho says:
“Still more of why I don’t think it shoud be called ‘BEST’ –
I’m still favoring Berkeley EST.”

I think it should be Berkeleyest. That keeps is useful as an adjective, but (unlike the current acronym) it would be accurate. e.g. –
“Did you catch the nonsense that came out of Durban last week? It is about the Berkeleyest thing I’ve heard in a long time!”
🙂

Theo Goodwin
November 3, 2011 8:33 pm

Matt says:
November 3, 2011 at 7:34 pm
“Someone should point out to Jeff that there is a difference between saying “there is no UHI” and saying “there is significant UHI but cities only account for a small fraction of land surface and have only a small impact on global averaged trends”. Look, I’m not interested in debating whether or not the latter statement is true. But, the fact that “kid’s can measure” local urban heat islands doesn’t mean it has a significant impact on globally averaged means.”
Your comment contains the equivocation on the phrase “no UHI” that is found in BEST’s work. You seem to be aware of the problem but unaware that BEST uses the equivocation in a fallacious argument. They begin talking about UHI as a local phenomenon, which is the topic they took from Anthony but conclude that UHI has no impact on global average temperatures, a new topic that was not addressed by Anthony. In addition, introducing the topic of “global averaged trends” at all is a beautiful example of a Red Herring; that is, they introduced a topic that might sound like the actual topic but is actually irrelevant to it.
Anthony’s claim is that there is local UHI and that it has a disproportionate effect On The Measurement of global average trends because thermometers are disproportionately found in settings affected by UHI. Anthony does not claim that UHI has a causal effect on global temperature; rather, his claim is about the measurements that feed into measurements of global temperature.
In not addressing Anthony’s concerns about local UHI (and in changing from Anthony’s 30 year period for which there is metadata to a 60 year period for which there is none for the first 30 years) BEST betrayed Anthony’s trust yet chose to give the impression that they did address his concerns. That is plain old deception for the purpose of appearing to attain a success that was not attained, not earned, and not deserved.

November 3, 2011 8:42 pm

Jeff – when you say – “They detect steps in the data, chop the series at the steps and reassemble them. ”
Do these two GISS diagrams express the same thing ? – from Hansen, J.E., R. Ruedy, Mki. Sato, M. Imhoff, W. Lawrence, D. Easterling, T. Peterson, and T. Karl 2001. A closer look at United States and global surface temperature change. J. Geophys. Res. 106, 23947-23963, doi:10.1029/2001JD000354. and a pdf can be downloaded at http://pubs.giss.nasa.gov/abstracts/2001/
I have a page here –
http://www.warwickhughes.com/papers/gissuhi.htm
and also a blog post.
http://www.warwickhughes.com/blog/?p=753

Theo Goodwin
November 3, 2011 8:51 pm

Steven Mosher says:
November 3, 2011 at 7:57 pm
“1. We accept the UAH trend say .18C per decade
2. we look at CRU warming more at .28C per decade”
Where did you get these numbers? Does CRU claim that temperatures have risen at .28C per decade? For how many years? Let’s take 60 years as it is very important in some of BEST’s work, especially that having to do with Anthony and local UHI.
Let’s see, .28 per decade times 6 decades yields 1.68C for the most recent sixty years. In turn, that is equivalent to about three degrees Fahrenheit. Does CRU claim that global average temperature has risen 3F in the last 60 years?

Gail Combs
November 3, 2011 8:54 pm

Indur M. Goklany says:
November 3, 2011 at 8:17 pm
Is the BEST the enemy of good (analysis)?
How do these BEST trends compare with satellite trends (for the period for which both sets exist)? I am not sure one can give too much credence to land based measurements considering the changes that can be introduced into the microenvironment, without even trying (e.g., due to changes in vegetative cover, clearing of land, in the immediate vicinity of the instrument, etc.)
____________________________
Anthony’s Surface Station project is looking into all those microenvironments.
As far as the early data goes, one has to look at the logs and notes of the collectors of the data.
WUWT posted this earlier
“…from 1892 is a letter from Sergeant James A. Barwick to cooperative weather observers in California….</b. http://wattsupwiththat.com/2011/10/26/even-as-far-back-as-1892-station-siting-was-a-concern/
Some one else is trying to collect the old British shipping records on water temperature.

Uri
November 3, 2011 8:55 pm

Mike Bromley the Kurd says:
People who frequent “preprint libraries”, although vast in number, are not qualified to critique methodology.
Maybe you should have looked at the data yourself.
A cursory look at the preliminary data (plain text version) shows how useless it is.
File data.txt contains adjusted monthly averages, including over 3000 values that are either below -90 Celsius or above 57 Celsius. Some values are below absolute zero (-273), and some are in the many thousands (I saw values above 28000 Celsius).
In site_summary.txt over 600 sites have no known location.
If this is the data set used for the papers, they must retract them.

Matt
November 3, 2011 9:09 pm

u.k.(us) says:
November 3, 2011 at 7:55 pm
Matt says:
November 3, 2011 at 7:34 pm
“Lot’s of measurements and average trends are robust over poorly inter-calibrated instruments.”
===========
I assume you have a peer-reviewed paper to back-up this claim ?
I’m sorry, I mean a paper that has cleared peer-review.
A link to same would be best.
U.K,
Let me clarify: Pick up *any* peer reviewed measurement and read the error analysis section. I am describing normal scientific protocol. All instruments have finite precision. All instruments have some bias. The question is: “what affect does the finite precision or bias have on the end results?” Good science attempts to meticulously quantify that uncertainty or bias. It is not enough to say an instrument is imprecise. You have to say the instrument has an precision of X and when that error is propagated through the analysis, it has an effect Y on the final result. If Y too big, you go out and buy a new instrument. But, if it is not too big to measure the trend in question, then you’re fine. The strongest thing Anthony’s paper seems to be saying is that the poor quality stations tend to amplify diurnal variation. That just adds to the noise, but as Anthony admits, the noise cancels out when you get an average trend. It seems that in Anthony’s own summary of the Fall et al paper, he admits that including the poor stations does not bias the average daily temperature trends towards more warming (if anything, towards less warming).
To quote Anthony:
*Minimum temperature warming trends are overestimated at poorer sites
*Maximum temperature warming trends are underestimated at poorer sites
*Mean temperature trends are similar at poorer sites due to the contrasting biases of maximum and minimum trends.
-taken from http://www.surfacestations.org/Fall_etal_2011/fall_etal_media_resource_may08.pdf
I can’t seem to find a draft of his paper (links dead) so I don’t want to overstate my understanding of his conclusions or his analysis. But, I see nothing in his media summary that significantly contradicts the conclusions of any prior analysis or the BEST analysis. I would welcome anyone who can direct me to a draft of the paper or who can correct me if I’m wrong about this.

DR
November 3, 2011 9:23 pm

Doesn’t basic atmospheric physics demand the troposphere warm at a significantly higher rate than the ground? Why wouldn’t this be point #4? Am I missing something?

November 3, 2011 9:25 pm

DR says:
November 3, 2011 at 9:23 pm
Doesn’t basic atmospheric physics demand the troposphere warm at a significantly higher rate than the ground?
The troposphere is warmed by and from the ground up.

richard verney
November 3, 2011 9:37 pm

Since UHI is a real and undeniable FACT, if we cannot measure/detect UHI then there is something wrong with the resolution of the measuring equipment (algorithyms used) PERIOD.
I would be very sceptical of a data set that is unable to detect UHI. I acknowledge that on a trend basis it could be rather small but it should there and should be identifiable.

David Falkner
November 3, 2011 9:41 pm

Matt says:
November 3, 2011 at 9:09 pm
I would think, intuitively, that converging maximum and minimum temperatures upon each other would reduce the variance in the data artificially, making the error bars associated with the data seem more reasonable than they ought to be.

fortunate cookie
November 3, 2011 9:43 pm

Slightly OT, but apparently Anthony is scheduled to be the guest during the first hour of tonight’s Coast to Coast radio broadcast (in the US), starting in about twenty minutes.
I wonder whether a discussion of BEST is planned. The only description given on the website is: First Hour: Meteorologist Anthony Watts comments on the global warming issue.
See http://www.coasttocoastam.com/

November 3, 2011 9:52 pm

The overall trend in US continental temperatures follow the ocean cycles, PDO+AMO as shown by Joe D’Aleo. We are just seeing a repeat variant of the dust bowl cycle.
The meteorological surface air temerpature (MSAT), or weather temperature is the temperature of the air measured in an enclosure placed at about eye level above the ground. The minimum daily MSAT is essentially a measure of the bulk air temperature of the local weather system as it is passing by the station. (It is the base of the local lapse rate). The maximum daily MSAT is essentially a measure of the solar surface heating/moist convection added to the minimum MSAT.
In California, the minimum MSAT tracks the PDO, with a slope bias from the UHI effect. Most of the CA weather systems originate in the N. Pacific. This is not perfect, but it gives a way of using the PDO as an independent reference to quality check the local station data. Using this technique, I have estimated the UHI for about 30 CA stations. It seems to work. I have also used it on some UK stations with the AMO as reference.
Once the MSAT data is analyzed in terms of the energy transfer physics instead of just a number series, then it becomes clear that there is no CO2 induced global warming in the MSAT record.
http://hidethedecline.eu/pages/posts/what-surface-temperature-is-your-model-really-predicting-190.php
http://venturaphotonics.com/CAClimate.html

November 3, 2011 9:55 pm

Uri says:
November 3, 2011 at 8:55 pm
Mike Bromley the Kurd says:
People who frequent “preprint libraries”, although vast in number, are not qualified to critique methodology.
Maybe you should have looked at the data yourself.

I don’t have to. Dang those pesky /sarc tags. My comment on the masses in the PP libraries? First of all, you missed it, Uri. What masses? (read Muller’s spin on the thread).

November 3, 2011 9:56 pm

Jeff, I agree with your points. UHI is BEST’s Achille’s Heel. That they cannot determine any kind of UHI reduces anyone’s confidence their records can be used for any detail work. You have to believe that UHI does not really exist despite evidence to the contrary and everyday experience. You have to believe that BEST’s failure to find UHI is work as significant as Michelson-Morley’s failure to measure the ether wind. Or you can believe they have a problem in their analysis method. And I think that have a problem with their scalpel and said so on April 2 in WUWT.
Which brings me to support your point 1 from a different dimension: the Fourier domain. I think a fatal flaw in the scalpel is that it eliminates low frequency data from the temperature records. UHI and GW are signals made up completely of long-wavelength, very low frequencies with less than a cycle per decade; maybe a less than a cycle per century. The lowest frequency possible in a record is one wavelength in the entire record. If you use a scalpel to cut records in time, you destroy the lowest frequencies – precisely those frequencies you are looking for! See CA 11/1/11 for a more detailed argument.
In Muller’s WSJ 10/21/2011 essay, he says:

… By using data from virtually all the available stations, we avoided data-selection bias. Rather than try to correct for the discontinuities in the records, we simply sliced the records where the data cut off, thereby creating two records from one.

In the process to “avoid data-selection bias” – they commit a bias toward selecting high frequencies and a bias against low frequencies. The splices contain no low frequency information in the spectrum where we expect GW and UHI signals to exist. But when they glue the spices together, low frequencies return – but from WHERE? It can only come from the glue. It is not in the data anymore. No wonder they could not find UHI – it’s on the cutting room floor. At the very least, we need to analyze the glue, but I suspect they are just turning high frequency info into what appears to be low. Whatever – it is not original low frequency information; it is counterfeit.

EFS_Junior
November 3, 2011 9:57 pm

(1) and (3) Both of these should be testable with the full (presnipped) BEST dataset. Temporal snips do not necessarily include vertical temperature shifts. Vertical temperature snips can use an asymmetrical offset criteria in line with the mean decadal (or a shorter or a longer baseline) trend line (by ineration if the underlying trendline is unknown to begin with). The main goal being to minimize snipping biases, if they do in fact exist at all.
(3) If the subsample via Monte Carlo (random) selection truely represents the total sample, across all groups, statistically speaking (no major under-/over-sampling), and both populations are quite large (1/8 of ~40,000 is 5,000), then the BEST method for station uncertainties may, in fact, scale quite accurtely (but they could do 1/7, 1/5, 1/5. 1/4, 1/3, 1/2 as a check, if necessary).
(2) The BEST data did not say that there was no UHI. What they did say is that the UHI had an insignificant effect on the global land temperature time series, simply due to the fact that urban areas are a relatively small precentage of the total land area. This was the BEST preliminary result for global versus rural only, -0.0019 ± 0.0019 °C/yr (meaning that rural had just a barely higher trend line than global.
So for instance, one could cherry pick and rank all temperature stations showing the highest rates of increase, then remove all rural stations as outliers, and voilà UHI proven (but that still won’t change the global temperature record, as again, urban areas are a very small part of the total global land area).
In the end this is what BEST wanted in the beginning, open peer review. Just don’t expect the BEST efforts to go about chasing all the various hundreds and/or thousands of leads sent their way.

richard verney
November 3, 2011 10:03 pm

There needs to be an explanation as to why approximately a third of stations show no warming or even cooling. This is not an insignificant proportion, and they cannot therefore seen to be simply some form of outlier.
This needs to be investigated in minute detail and in particular whether there are some identifiable micro climate or siting or equipment explanations. After all, it is feasible that the one third station data showing no warming (or even cooling) is correct, and the two third station data showing warming is incorrect. Not probable but not so unlikely that it can be safely ruled out as a possibility.

November 3, 2011 10:22 pm

It is obvious many are treating this BEST stuff as serious if flawed work. It is work, it is flawed, but serious, I think not. At best this is simply an attempt at getting attention. At worst an attempt at obfuscation. I think this foolishness needs to be completely ignored until all the raw data and codes is released and the papers are properly published.

Don Monfort
November 3, 2011 10:25 pm

EFS,
“(2) The BEST data did not say that there was no UHI. What they did say is that the UHI had an insignificant effect on the global land temperature time series, simply due to the fact that urban areas are a relatively small precentage of the total land area. This was the BEST preliminary result for global versus rural only, -0.0019 ± 0.0019 °C/yr (meaning that rural had just a barely higher trend line than global.
So for instance, one could cherry pick and rank all temperature stations showing the highest rates of increase, then remove all rural stations as outliers, and voilà UHI proven (but that still won’t change the global temperature record, as again, urban areas are a very small part of the total global land area).”
Urban areas are indeed a relatively small percentage of the total land area. But urban areas, suburban areas, small towns, hamlets, burgs, villages, etc. are over-represented in the temperature record. Cities of 50,000 or more make up 27% of the CHCN stations, according to BEST. Areas with few or no people, and little warming influence from people and their accompanying infrastructure, have few thermometers. What is so hard to understand about that?
Did you actually read the BEST UHI paper? They stated they could not use MODIS data to separate urban from rural areas, so they broke areas down into rural, and very rural. Make sense to you? See if you can find in their methodology a comparison of urban areas vs. unpopulated or sparsely populated areas? I will help you. They state: “Rather than compare urban sites to non-urban, thereby explicitly estimating UHI effects, we split sites into very-rural and not very-rural.” Is it any wonder they couldn’t find a UHI effect.

Don Monfort
November 3, 2011 10:43 pm

richard verney,
And if you look at figure 4 in the BEST UHI paper you will see that the distribution of the cooling trend stations is homogeneously distributed among the warming stations. Look at the almost continuous urban area on the North Atlantic coast of the US. Don’t those blue dots among all that red look weird? There is something wrong with the data, or the methods of analyzing the data. You can also see the red dot concentrations where major cities are located.
http://berkeleyearth.org/Resources/Berkeley_Earth_UHI.pdf
Looking at figure 2, you can see where their “very-rural” sites are located. Match that up with this population density map.
http://modernsurvivalblog.com/wp-content/uploads/2010/06/world-population-density-map.gif

Philip Bradley
November 3, 2011 10:59 pm

Bishop Hill will publish in the next few hours an analysis by me that shows BEST, as well as GISS and HadCRUT, by using daily minimum temperature, over-estimate the average warming over the last 60 years by approximately 45%, And further shows most of the remaining measured warming is due to increased early morning solar insolation, likely due to decreased black carbon aerosols.
I also think much of the UHI effect is also due to decreased black carbon aerosols.

JJ
November 3, 2011 11:09 pm

EFS_Junior
(2) The BEST data did not say that there was no UHI. What they did say is that the UHI had an insignificant effect on the global land temperature time series, simply due to the fact that urban areas are a relatively small precentage of the total land area.

Wrong metric. The issue is not that urban areas are a small proportion of global area. The question is -> is the influence of urban sited thermometers on the temperature record an equally small proportion of the influence of all thermometers on that record.
Further, looking only at “urbanization” on the grand scale of cities is to play semantic games with the concept involved. The problem is not merely the several degree rise in temps across the profile of a megalopolis. It is equally (if not more so) the problem of ‘rural’ thermometers being impacted by anthropomorphic warming on very local scales – impacts that have orders of magnitude less effect on global temps than do big cities, but which can have a much larger effect on the microclimate that the thermometer is measuring. And, of course, how those locally warmed thermometers’ readings end up affecting the homegenization adjustments applied to the data.
Thermometers have always been placed near houses, even in rural areas, simply because they have to be convenient to the people who monitor them. With MMTS, they tend to be placed even closer to buildings, as they are tethered by a cable. Such “UHI” doesn’t even require any development to grow in magnitude, it merely requires that MMTS cables of any length be relatively expensive …

ferd berple
November 3, 2011 11:13 pm

Stephen Rasey says:
November 3, 2011 at 9:56 pm
In the process to “avoid data-selection bias” – they commit a bias toward …
What steps did BEST take to validate their methodology with a known data set, to ensure that it was not introducing spurious results? Isn’t that standard practice in science when introducing a new methodology, to test the methodology with a known reference data set, where the answer is already agreed, to prove that it performs as expected?

juanslayton
November 3, 2011 11:14 pm

Max Hugoson: We have NO assurance, without DAY LONG RECORDINGS that peaks and troughs were properly recorded.
I think you are concerned with Time of Observation bias, which has a long history of study. Don’t entirely understand it myself, but a good start might be W.F. Rumbaugh’s article in the October 1934 issue of Monthly Weather Report. He used a 6 year thermograph record (those would be DAY LONG RECORDINGS) to evaluate the accuracy of min/max measurements.
Which gives me occasion to ask a question that has been bugging me for some time. Perhaps some reader will know. The weather service was taking thermograph recordings in quite a number of stations from the 1800’s and well into the 20th century. What happened to those records? What contribution would they make to climate studies (if any)?

EFS_Junior
November 3, 2011 11:17 pm

http://wattsupwiththat.com/2011/11/03/a-considered-critique-of-berkley-temperature-series/#comment-787371
“Urban areas are indeed a relatively small percentage of the total land area. But urban areas, suburban areas, small towns, hamlets, burgs, villages, etc. are over-represented in the temperature record. Cities of 50,000 or more make up 27% of the CHCN stations, according to BEST. Areas with few or no people, and little warming influence from people and their accompanying infrastructure, have few thermometers. What is so hard to understand about that? ”
Yes, there are a relatively high precentage of urban stations, relatively tightly packed I might add, my understanding is that high density station areas were given an inverse weighting to cancal out this disproportion. So for example, 10 urban stations cover the same area as one rural station, the rural station receives a weight of one while the 10 urban stations receive a weight of one-tenth.
“Did you actually read the BEST UHI paper? They stated they could not use MODIS data to separate urban from rural areas, so they broke areas down into rural, and very rural. Make sense to you? See if you can find in their methodology a comparison of urban areas vs. unpopulated or sparsely populated areas? I will help you. They state: “Rather than compare urban sites to non-urban, thereby explicitly estimating UHI effects, we split sites into very-rural and not very-rural.” Is it any wonder they couldn’t find a UHI effect.”
But what’s the point? To show how much of an UHI exists, while at the same time showing how little it contributes to the overall global surface land temperature record, simply due to it’s much smaller area of coverage relative to the total global land area.
Don’t really understand all the fuss about UHI anyways, as anyone that wants to do it properly, should do a proper inverse weighting based on aerial station density.

November 3, 2011 11:18 pm

Theo, The numbers are for 1979 to 2010
The values are for illustration ( but BEST was at around .28C per decade)

November 3, 2011 11:21 pm

So Theo.
from 1979 to 2010.
UAH has .18C of warming
Best has .28C of warming
Do you or dont you buy the argument ( christy, spencer, mcIntyre) that the difference
between these two .1C decade could be attributed to UHI?
Recall that Ross McKittrick also suggests a figure around .1C decade.
Don Monfort agrees?
What say you?

November 3, 2011 11:24 pm

Don thats a horrible population density map.
1. What is the average population of the BEST very rural sites?
If I told you it was less than 20 people per square km would that count as rural.. or not
10?
5?

November 3, 2011 11:27 pm

They state: “Rather than compare urban sites to non-urban, thereby explicitly estimating UHI effects, we split sites into very-rural and not very-rural.” Is it any wonder they couldn’t find a UHI effect.
#############
you are misunderstanding what they mean.
what they mean is that sites with No built pixels are very rural ( 16,000)
all other sites are called NOT very rural.. that includes urban

cce
November 3, 2011 11:29 pm

BEST is not testing for the existence of the Urban Heat Island effect. That is not controversial. It is looking for increased warming at urban sites, not higher absolute temperatures. More specifically:
“Time series of the Earth’s average land temperature are estimated using the Berkeley Earth methodology applied to the full dataset and the rural subset; the difference of these shows a slight negative slope over the period 1950 to 2010, with a slope of -0.19°C ± 0.19 / 100yr (95% confidence), opposite in sign to that expected if the urban heat island
effect was adding anomalous warming to the record. The small size, and its negative sign, supports the key conclusion of prior groups that urban warming does not unduly bias estimates of recent global temperature change.”
Setting aside the exact details of their analysis, if “very rural” sites are warming at the same rate as all of the sites, it’s difficult to argue that urbanization is exaggerating “global” warming. The fact that “very rural” sites make up a minority of all sites is irrelevent provided they are weighted according to their spatial distribution.

Philip Bradley
November 3, 2011 11:48 pm

Max Hugoson,
The MinMax Thermometer was invented in 1782. The same design is still in use today to measure minimum and maximum temperatures at places like schools.
Minimum and maximum temperatures have been measured in the same way since before 1800 thru to the introduction of electronic thermometers in the later part of the 20th century.
The accuracy of measuring minimum and maximum temperatures would have been pretty much unchanged for 150 years.

Philip Bradley
November 4, 2011 12:27 am

I see Leif beat me to the minmax thermometer point.
The troposphere is warmed by and from the ground up
Excepting aerosols, which warm the troposphere by absorbing and scattering incoming solar irradiance.
One study from India
http://www.agu.org/pubs/crossref/2011/2011GL046654.shtml

Philip Bradley
November 4, 2011 1:16 am

My analysis is now up at the Bishop Hill blog
http://www.bishop-hill.net/blog/2011/11/4/australian-temperatures.html

Richard S Courtney
November 4, 2011 1:52 am

Friends:
I write to point out a confusion that seems to exist in the minds of several posters to this thread;
i.e. UHI and local anthropogenic effects on temperature are not the same thing.
UHI affects cities and their immediate surroundings. A UHI can provide a temperature of several degrees above the temperature of the surrounding countryside, and this difference between city and surrounding temperatures can be expected to increase with time as the city grows. But cities cover a tiny proportion of the Earth’s surface so the effect of UHI on global average temperature may be undetectable among the other ‘noise’ of the global average.
However, the averaged temperature measurements on land are mostly obtained near sites of human habitation. And human habitation affects local temperature in many ways; e.g. land use changes, proximity of measurement equipment to human devices such as dwellings, machinery, and heating or cooling equipment, and etc..
So, the obtained temperature measurements on land are almost all affected by human activity. And these measurements provide a significant contribution to the estimate of global average temperature. Therefore, local anthroppgenic effects probably provide a significant contribution – i.e. distortion – to the global average although they are mostly not UHI.
Richard

John Whitman
November 4, 2011 2:04 am

Jeff Id,
Appreciate your post.
What are your thoughts about whether or not a rural farming effect from land use also needs to be evaluated but separately from UHI? Could such an effect have detection issues?
John

David
November 4, 2011 2:14 am

Has anyone done a study on what time the minimum temperature is reached? If I heat one pot to 300 F, and another to 400 F, and I set both pots outside overnight, they will both be the same temperature at some point, but the 300 F pot will get there more quickly.

Ask why is it so?
November 4, 2011 2:22 am

In 1850 how much land world wide had been changed? In 2011 how much land world wide has been changed? It’s not just urbanization, it all land changes man has made to the surface of the earth over the 150 years. In 1970 the land around my house was farmland and dirt roads, 40 years later the farmland is gone replaced with bricks, concrete and bitumen, how can this not have any effect? If the absorption of Solar radiation by the surface of the earth is the means by which the temperature of the earth is determined any change in the absorbing surface will change the temperature achieved, up or down. If concrete and bitumen replace trees and grass the temperature must go up, that’s basic physics.
Just not sure what this statement means, please explain:
Jeff Id “I believe that CO2 helps warming along as the most basic physics proves.”
How?

Richard
November 4, 2011 2:24 am

Why is it that we cannot seem to agree that UHI (which is a well observed fact and easily visible in the data) is different to dUHI and dRural. BEST did not attempt to investigate UHI, they did (possibly rather poorly) attempt to detect the difference betwen dUHI and dRural. They conclude dUHI/dRural shows no divergence but that any long term warming signal is present in both.
I am not happy about quite a bit in the BEST study but I do wish people would stop conflating UHI and dUHI and thereby create Straw Man arguments.

November 4, 2011 2:25 am

AT LAST – an intelligent analysis of a temperature data set!

Australian temperatures
1) Using a minimum and maximum temperature dataset exaggerates the increase in the global average land surface temperature over the last 60 years by approximately 45%
2) Almost all the warming over the last 60 years occurred between 6am and 12 noon
3) Warming is strongly correlated with decreasing cloud cover during the daytime and is therefore caused by increased solar insolation
4) Reduced anthropogenic aerosols (and clouds seeded by anthropogenic aerosols) are the cause of most the observed warming over the last 60 years
http://bishophill.squarespace.com/blog/2011/11/4/australian-temperatures.html

This study based upon weather data taken at 3 hourly intervals for 60 years… hopefully there are some data series with ONE HOUR data than can be analysed… and perhaps a study of ONE MINUTE data would show just how INSANE the settle science of (Tmin+Tmax)/2 really is!!!!!!!!!!!!

DirkH
November 4, 2011 2:30 am

Don Monfort says:
November 3, 2011 at 10:25 pm
“Did you actually read the BEST UHI paper? They stated they could not use MODIS data to separate urban from rural areas, so they broke areas down into rural, and very rural. […] They state: “Rather than compare urban sites to non-urban, thereby explicitly estimating UHI effects, we split sites into very-rural and not very-rural.”
Looks like the “best” climate science of 2011 still cannot account for the development of UHI over time.
Maybe, if we give them ALL of our money, that’ll suffice so that they can find out how human settlements around temperature stations developed over time.
They could use historic maps.
BEST was obviously an attempt at fabricating a publication with a media campaign attached that they could rush out the door in the run-up to Durban. Rename it to “FASTEST, CHEAPEST AND WORST”.

RACookPE1978
Editor
November 4, 2011 2:43 am

A “Rural Farming Effect”? (RFE, for those who abbreviate everything…)
Think instead perhaps of a “Rural Forestry Effect” with respect to its affect on overall northern hemisphere albedo during each of the four North Hemisphere!) land-dominated seasons.
And – above all – Don’t get blind-tracked into “winter=snow everywhere either! Consider a real-world snow-line at about 40 degrees north through the eastern US, but further south through the mountain states, a big “hole” in the Great Basin, and high-altitude snow average coverages down the west coast.
Canada, as all ‘Merikins well know, is completely covered in snow all year round…. “Obviously” there has been no changes in forest extent, forest type, or crop type up there. 8<)
Europe? Will vary as well: Not all will be snow covered every winter, but even Spain changes its albedo from season to season. Much of the eastern plains of Europe (north and south of the Carpathians, pretty much all the way through to the southern Russian and Turkish mountains, then the Himalayas through China to Manchuria will also vary greatly from summer to winter. The question again becomes: Has the seasonal albedo changed as more forest grow – in part due to more CO2? Have the European and Ural forest types (pine, deciduous, colors, shades, heights) changed to change albedo the tens of millions of sq km's where no people live and farm?
To consider the change in albedo that may be causing an change in temperature the past 40 years, I believe you must consider is not an albedo changing summer-through-winter, but the change in each season's albedo as North American deciduous forests have re-grown so strongly the past 40 years. In the southern pine forests, much more trees – but they don't drop their leaves in the winter. Then again, south of 40 north latitude, is there wide areas of "snow" on the ground to be reflective, regardless of tree coverage? What is the change in albedo as small farms are abandoned, and crop areas previous plowed in spring, then growing as dark green through the summer, then harvested back to "dirt" in the fall and then laid bare over the winter return to grassland and then to trees and shrubs? The new grass, trees and shrubs will change albedo.
Overseas? I expect a careful check will find that many Russian (fast-growing) forests have re-grown as industrial growth and large-scale industrial farming suddenly ended with the fall of Communism. The Ural Sea is an example of the opposite: massive desertification because of socialist "farming" policies and bad irrigation. But an example of a regional change in albedo none-the-less. What is the change in China's albedo over its new farmlands? Or was there crop use changes since 1990?
European forest/crop changes? I don't know – and welcome your thoughts and contributions.
African forest/crop/savannah/farming changes? Very significant de-forestation as people try desperately to get wood for fires – since the enviro's deny them real fuel. Farming? In many areas, farms are abandoned due to socialist dictatorships and racial warfare burn out farms, and their previously rotated fields return to ???? True – African farm and crop changes are mid-tropic, but what is that change in the albedo where the sun is overhead all year?
India and the nearby island nations? Again, near the equator, but has the crop use changed the past 40 years? The past 20 years? Has it been steady in those very crowded lands? Or has it been steady with respect to albedo BECAUSE they (India and Indonesia) are so crowded?

Manfred
November 4, 2011 2:54 am

This is my simple approximation (or perhaps lower limit) for UHI related global temperature increase since 1900:
Spencer has computed a nice graph drawing UHI over population density.for the year 2000.
http://www.drroyspencer.com/wp-content/uploads/ISH-station-warming-vs-pop-density-with-lowest-bin-full.jpg
The curve is close to a logarithmic function, a quadrupling of population density increases UHI by about 0.3-0.4 deg Celcius independant of the initial population. I call this increase in UHI subsequently dUHI. The curve in itself already explains why BEST did not work, as dUHI for a population increase from 10 to 20 is about the same as for an increase from 1 million to 2 million. Very rural and NOT very rural is then no criteria to identify locations with low/strong dUHI.
Now since 1900, the world population has risen from 1.7 to 7 billion. According to Spencer’s curve such an increase in population density would result in an UHI increase dUHI of approx. 0.4 degrees.
2 notes:
1. The graph covers only the USA, it may differ elsewhere..
2. The graph allows only an estimation for an instant population increase in a thought experiment. Other factors contributing over time as well to UHI are not included, such as ever increasing energy consumption, paving of roads, installation of air conditioning, deforestation etc. Most of these factors should further increase UHI.

November 4, 2011 2:57 am

Leif Svalgaard says:
November 3, 2011 at 9:25 pm
So why do we worry about greenhouse gases, Leif?

P. Solar
November 4, 2011 3:02 am

Jeff, I agree about UHI, B-est is a total fail on that issue. More on that later.
#1 is a big problem too. A far as I can tell they have done zero assessment of the effects of this splicing technique. They certainly don’t publish any and the only relevant comments in the paper seems to show they think the effects “should be trend neutral”. To my reading that is a fairly honest and clear statement that they have not looked.
So let’s look for them.
Since Dr Muller has chosen to make very public comments about the level of rise since 1950 I looked at how much that result depended on the time scale and start date. I looked at the simple OLS “trend” over a given interval for all possible start dates.
I did not get what I expected.
50y trends:
http://tinypic.com/r/10i5is4/5
Oh, oh! What are those flat bits about? Decade long periods with ZERO trend. That’s not climate.
Let’s check 10y trends:
http://tinypic.com/r/21ophd/5
OK, that’s more like it.
Now let’s look at , say, 12y trends.
http://tinypic.com/r/1449ol/5
WATTSUPWITHTHAT !???
According to BerkeleyEST there was virtually no 12 year period in the last 200 years that had a non zero trend.
This explains why their record shows less variation than other records.
Not only have they removed the noise , they’ve removed half the signal as well.
(I plot the resulting trend against the date of the middle of the range, so “last 50 years” for the most recent last 50 years gets logged at it’s midpoint of 1985)
Careful with that scalpel Doctor !!

KnR
November 4, 2011 3:22 am

While given station location and requirements meeting validity, does anyone know the number of sites that actual that have been checked not guessed as meeting these standards, is important .
Given that two thirds of the planet is water and given that BEST did not cover this at all, you could suggest that when it comes to ‘global temperatures’ the UHI and what is rural is almost side if interesting arguments. Can you really make a claim about ‘global ‘ anything when you don’t cover the majority of the globe?

LazyTeenager
November 4, 2011 3:33 am

I thought Jeff Id was a smart guy but this is sounds either wrong or badly expressed.
———
– UHI effect.  The Berkeley study not only failed to determine the magnitude of UHI, a known effect on city temperatures that even kids can detect, it failed to detect UHI at all. 
———-
There is a distinction between
1. the existence of the UHI which is considered to be measurable by everyone
2. The TREND in the UHI which is commonly positive, but not necessarily always.
3. The effect of the TREND in UHI which may contribute to the trend, and possibly bias, the surface temperature trend.
There are some if buts and maybes around 2 and 3.
As far as I can tell BEST is saying 3 is not turning out as expected here. And confirms previous research.
And gut-feelings don’t count here.

BillD
November 4, 2011 3:51 am

One of the main predictions of climate science is that warming should be fastest at high latitudes. Warming as been very fast in arctic, where UHI is clearly not a factor. I agree with Matt’s analysis. There is a big difference between saying that UHI is not detected and saying that it has a significant effect on world land temperature anomalies. Saying that UHI is not detected is very misleading. Even Anthony’s data, evidently, does not show an effect of UHI on temperature anomalies.

GregS
November 4, 2011 4:26 am

Just seconding the observation made by Richard Courtney upthread about land-use change.
The land has changed dramatically here in Southern Minnesota and Northern Iowa. I blame my father-in-law for that. He owns a D-9 Caterpillar bull-dozer and every time he purchased a new farm over the last few decades, the first thing he would do is level the woodlots and building sites to plant corn and bean.
So how do you measure the effect of transforming a landscaped checker-boarded by green into one that is uniformly black (or near black) for six months of the year?
Also, what exactly defines “urban”. I know, I know population. But seriously, are all urban centers equal? Compare a mature neighborhood shaded by large elms to twenty thousand acres of new suburb. In other words, growing secondary cities are hotter than more established urban centers. How do we account for that?
Even the rural areas are hotter. Our local GHCN station at Zubrota (72644003) is located at a sewage treatment plant that has added a lot of concrete over the last couple decades as well as a new neighborhood – upwind.
So how much warming can be attributed to these factors? .05C? .1C? Does anyone care?

Jeff Id
November 4, 2011 4:41 am

Matt,
“Someone should point out to Jeff that there is a difference between saying “there is no UHI” and saying “there is significant UHI but cities only account for a small fraction of land surface and have only a small impact on global averaged trends”. “
Perhaps you are unaware of this but if you take the time to correctly process satellite LTL data and compare it to ground data, there is a statistically significant difference in trend. i.e. detrend sat data, scale variance, retrend, regress, examine residuals. That is really all the confirmation of UHI that I need. So when a paper is published on non-detection of UHI, it is an example of go home and do it again.
http://youtu.be/4HgUh5bOgbM
Thanks for the tip though.

November 4, 2011 4:50 am

Another top notch analysis Jeff !!!
Great work !

Editor
November 4, 2011 4:56 am

Steven Mosher says:
November 3, 2011 at 7:57 pm
Are you open to discussion of the upper bound…..?
In broad agreement. I read Steve’s excellent post when there were no comments on it. I haven’t got back to read the comments yet (will do so this evening).The upper bound of 0.1C/decade only goes back to the start of the satelite era and cannot be extrapolated backwards. We should not expect it to be a constant factor anyway. See also my comment here: http://noconsensus.wordpress.com/2011/11/03/considered-critique-of-berkley-temp-series/#comment-57003

Richard
November 4, 2011 5:00 am

I suppose that the fact that BEST were unable to distinguish any different trends between dUHI and dRural can be interpreted another way. We know that trend figures produced from a mixture of dUHI and dRural (the Estimated Global Temperature and its associated trend) differ from trend figures obtained from other sources, i.e. satellites. Perhaps both dUHI and dRural are truely showing the same contamination to the basic signal that their difference from these other sources imply.
Therefore, possibly, this surprising result in identical trends for both dUHI and dRural can be used to determine the magnitude of the different trends between satellite and ground observations.

Digirata
November 4, 2011 5:11 am

Marc Morano at Climate Depot headlines today:
Sci-fi writer Jim Laughter: ‘Polar cities no laughing matter’ — ‘Envisions so-called ” polar cities” for future survivors of devastating climate….with link to interview with Laughter, his real name, by the way….

Editor
November 4, 2011 5:12 am

Stephen Rasey says:
November 3, 2011 at 9:56 pm

UHI and GW are signals made up completely of long-wavelength, very low frequencies with less than a cycle per decade; maybe a less than a cycle per century.

UHI is cyclic? What’s the period – equal to the rise and decline of the Roman empire? Except the Roman Empire wasn’t cyclic.

November 4, 2011 5:49 am

Question, re: “UHI” effect –
If we take a time series of temperature measurements, going back to, say 1900, I would think we would need to analyze first which stations were under a UHI effect for each year since the “urban” areas pretty much keep expanding over that time. We’d also have to account for every station that moved either into or away from a UHI effected position. Further, the actual UHI in a given area would vary as well. Every station that at any time was in a UHI would require a slightly different “adjustment” from any other station over that time series. That, and more that I no doubt didn’t mention, would need to be done for each station and the adjustment would need to be “reviewed” for accuracy.
So, my question is, simply, has it been?
Add that to the high number of sites that are not sited properly and are recording information that uniquely requires an individual adjustment for each station, and the question of whether we get a good temperature measurement, let alone a better or “best” one, seems very reasonable to me.
Do we have a group of stations that are not and have not been either in a UHI and are, and continuously have been, properly sited? Assuming they have been properly maintained, what do these, and only these stations show from the 1800’s through today? Heck, what do they show from 1979 through today?
Just wondering.

mindert eiting
November 4, 2011 5:57 am

Dear Jeff,
How do you know that all those surface measurements reflect the earth and not the policy of the WMO? Here is a recipe for data fraud: each year you may include some new stations and some others may be dropped. Open stations as you like. However, before closing stations you should compute the slope of their private regressions. Even in a small area warming and cooling stations may co-exist. You only have to close more cooling stations than warming ones in order to create a warming world. It’s better to invest in methods of detecting data manipulation even for BEST. One method is survival analysis. Compute as dependent variable for each station the number of years it was on duty. Define as independent variables (1) latitude category, and (2) station’s melody. I defined the latter on 8000 GHCN stations as follows: compute regression slopes for the periods 1930-1969 and 1970-2009. Dichotomize them as up or down. You get four melodies, down-down, down-up, up-down, and up-up. It’s not surprising that stations on the northern hemisphere ‘live’ longer than in the tropics. But I found that their life expectancy also depends on melody, differences of more than six times the sum of the associated standard deviations. This is just some private exploration, but you may understand that I do not trust the data even if they are called BEST.

JR
November 4, 2011 6:00 am

Re: BillD
Where do you come up with the statement that …in arctic, where UHI is clearly not a factor.?
The trends since 2002 at CRN station 4 ENE of Barrow AK and at GHCN 42570026 (which is listed as rural in GHCN) are:
CRN: -1.2 C/century
GHCN: +7.9 C/century
I didn’t cherry-pick 2002 – it is when the CRN data start. But it sure suggests that UHI is happening even in the arctic.

Jason Calley
November 4, 2011 6:03 am

@Stephen Rasey says: November 3, 2011 at 9:56 pm
Thank you! That was a wonderfully understandable post about how low frequency info can be disappeared and high frequency info emphasized.

Editor
November 4, 2011 6:07 am

Muller admits that – 27% of the Global Historical Climatology Network Monthly stations are located in cities with a population greater than 50000.
and that – currently GISS allow for this (UHI) effect by making adjustments which result in a reduction in global temperatures of about 0.01C over the period 1900-2009.
Something does not add up here.
http://notalotofpeopleknowthat.wordpress.com/2011/10/23/mullers-problem-with-uhi/

Latitude
November 4, 2011 6:12 am

If rural and UHI are trending the same…
…look for a reason
It could be something as simple as CO2 makes plants darker green……..

Matt
November 4, 2011 6:14 am

Jeff,
Thanks for the response.
“Perhaps you are unaware of this but if you take the time to correctly process satellite LTL data and compare it to ground data, there is a statistically significant difference in trend. i.e. detrend sat data, scale variance, retrend, regress, examine residuals. That is really all the confirmation of UHI that I need. So when a paper is published on non-detection of UHI, it is an example of go home and do it again.”
I appreciate that there may be a statistically significant difference between satellite and land trends, but there are important two important caveats on that:
1. I am not an expert on temp reconstruction and I don’t know if anyone quantifies systematic uncertainties for these reconstructions. But, when two measurements are made using very different methods, one cannot just compare the statistical uncertainties to claim a significant effect. One has to use the total uncertainty (including systematics). Otherwise you’ll detect a statistical significance that isn’t significant.
2. Even if there is a statistically significant difference between the satellite and thermometer trends, that is not “all the confirmation of UHI [you] need”. The claim that UHI accounts for the difference is purely speculative. It is not a totally unreasonable hypothesis and it goes in the right direction. But, it is speculation…and one of many possible explanations for the difference. The satellite measurements might not have siting problems, but they have plenty of challenges and unknowns of their own. Reconstructing surface temps from satellite data requires some extrapolation from models and I remind you that for much of the early history of the UAH, miscorrections for orbital decay gave the temp trend an opposite slope. So the discrepancy could be an overestimate in the thermometer record OR an underestimate in the satellite record OR a little bit of both OR just an underestimate of the error bars. There is simply no way to know absent further work (and perhaps some time). It certainly shouldn’t be “all you need” to confirm UHI.

JJ
November 4, 2011 6:22 am

Richard S Courtney says:
I write to point out a confusion that seems to exist in the minds of several posters to this thread;
i.e. UHI and local anthropogenic effects on temperature are not the same thing.

Yes, they very often are, with the only difference being one of scale. A ‘rural’ thermometer may yet be influence by proximity to ashphalt and other heat collecting surfaces, heat dissipating equipment, heated buildings, etc. UHI is nothing more or less than those same impacts aggregated over a larger volume.
This is why I stated that limiting analysis of UHI to only the largest aggregations of the effect – cities – was a effectively a semantic arguement. They are only looking for the UHI effect in ‘urban’ areas. Nonsense. The importance of the UHI mechnaism is its proximity to thermometers, not how close it is to hip-hop artists.

Pamela Gray
November 4, 2011 6:46 am

As I sit here surrounded by snow at pass level in the mountains (that I will have to negotiate later today), I am contemplating the significance of studies centered on the previous warming trend that has obviously stalled to anyone with enough sense to be able to read a thermometer.
Does it matter to my selection of coats and boots who did what to the data? Will it help me predict that the snow will be more wet, the rain less dry, the storm more extreme, or the resulting snow pack more, less, or somewhat rotten? Apparently it does to some people, mostly scientists who are trying to relocate the warming signal in a cooling oscillation. It may also matter to liberal voters who want onerous regulations over a gnats-ass trend. And it matters to statisticians who want analysis done right. To be sure, if warming comes round again, I would rather not go through the hysteria we have had as a daily meal shoved down my throat again.
But for today, my selection of a coat and boots to wear, as well as what time I should start out to safely nagivate the pass, will depend on common sense. OMG. Common sense. To those of you who are mesmerized by impending doom and see only dark days ahead in the now-you-see-it, now-you-don’t temperature trend BEST has relocated, ask your grandparents what common sense means.

mindert eiting
November 4, 2011 6:52 am

A second trial (perhaps I used a forbidden word). Has the BEST team done a survival analysis? I did it for 8000 GHCN stations. Define as dependent variable the number of years a station was on duty. Define as independent variables (1) latitude category, and (2) stations melody. Compute for the relevant stations the regression slopes for 1930-1969 and 1970-2009. Dichotomize these as up or down. You get four melodies. It’s not surprising that stations life expectancy depends on latitude: on the northern hemisphere stations are for more years on duty than in the tropics. Having controlled for that, suppose the expectancy depended on melody. What would you conclude? Before applying sophisticated statistics, quality controls are needed. I have my reasons not to trust these data.

kim
November 4, 2011 6:56 am

mindert @ 5:57.
Hum a few bars and they can fake it.
=============

Jeff Id
November 4, 2011 7:02 am

Matt,
You have made the assumption that I’ve made some blind claims which were not considered carefully. I’ve been at this for quite a while now though and have a few dozen links from the three and a half years of work put into this climate blog thing. Some of these have been run at WUWT in the past. My own global temp reconstruction which does nothing for UHI.
http://noconsensus.wordpress.com/2010/03/25/thermal-hammer-part-deux/
and comparisons of satellite and ground data including problems in both:
http://noconsensus.wordpress.com/2010/02/20/345/
http://noconsensus.wordpress.com/2009/11/09/statistical-significance-in-satellite-data/
http://noconsensus.wordpress.com/2009/01/30/tropospheric-temperature-trend-amplification/
http://noconsensus.wordpress.com/2009/01/23/bifurcated-temperature-trend/
http://noconsensus.wordpress.com/2009/10/28/satellite-temps-getting-closer/
Corrections to RSS:
http://noconsensus.wordpress.com/2009/01/19/satellite-temp-homoginization-using-giss/
The newer posts are more accurate than the old as my opinions changed and I learned. In addition to these posts, I have discussed the issues with various climate scientists and obviously have read a lot of literature on the subject. Actually, temperature reconstructions are the only thing I’m published on in climate science.
I always suggest to people that they make their own conclusions, but if you don’t spend the time looking at the differences and the magnitudes of the differences, a true familiarity cannot occur. It is my opinion that there is a real (statistically significant) difference between ground and satellite trends. Perhaps it is time for an updated post on that.
Another example you can see refuting the Berkeley UHI claim, is the difference between ground and ocean surface trends which also should not be ignored. Offsets are fine but why are trends different? How long can trends stay different? I have spent very little time on ocean data because this is a hobby. It seems a reasonable question which has been answered unsatisfactorily to date.

Mark Buehner
November 4, 2011 7:36 am

Phil Jones of all people demonstrated the UHI effect in his analysis of Chinese stations. I guess they blew right past embarrassment on that paper and moved on to pretending it doesn’t exist.

November 4, 2011 7:41 am

I understood UHI to account for 1% of land surface area. The book; “Living in the Endless City” states that it is 2% of land surface area. Characterising UHI with a simple % does not describe fully the nature of that UHI coverage and makes it easy to dismiss as too small. This approach appears to be in contrast to the view that a change in the atmosphere of 1% of 1% in terms of CO2 content is considered very significant.
Whether Urbanisation is 1% or 2%, this number does not describe the fact that nearly all urbanisation is concentrated around mid latitudes where the sun has more impact in heating the surface. That 1% or 2% now starts to look a lot bigger. Add to that urbanisation is further concentrated to cover large areas of conurbation such the US Eastern seaboard. UHI is not confined to the city limits and involves all ancilliary activities that cities require, such as food and water.
Not long ago a watershed was passed where more of us were now living in cities. That still leaves a massive 3.5 billion living in rural areas, which is twice what the total population was in 1900.

theduke
November 4, 2011 7:52 am

Jeff Id writes re Anthony’s project and BEST: “My guess is that the comparison of methods would result in a non-significant relationship.”
Can you expand and clarify on that?

Chris S
November 4, 2011 7:56 am

WORST not BEST. Without Objective or Robust Statistical Techniques.

November 4, 2011 8:15 am

Verity
ANSWER the question
“Steven Mosher says:
November 3, 2011 at 7:57 pm
Are you open to discussion of the upper bound…..?
In broad agreement. I read Steve’s excellent post when there were no comments on it. I haven’t got back to read the comments yet (will do so this evening).The upper bound of 0.1C/decade only goes back to the start of the satelite era and cannot be extrapolated backwards.”
############
I am not suggesting an extrapolation.
Do you agree with steve and spencer and Christy or not!
Answer, do not assume that I will make the move you suggest.. That will never happen

Matt
November 4, 2011 8:47 am

Thanks Jeff,
Will look at your material. However, might not have time to look at everything in depth and I couldn’t find any analysis that points to the UHI as the source of discrepancy between satellite and earth-based measurement or excludes the possibility of other affects? Could you point me specifically to that analysis?
Also, one point of clarification:
We are talking about a few percent discrepancy in the slopes, right? As I’ve suggested, measures of statistical significance are a little hairy. If you don’t account for systematics, I’d be a little skeptical. But, let’s assume that there is a statistically significant affect. It looks like the affect amounts to a 3% difference in slope between satellite and earth-based reconstruction over the lifetime of the satellite record. Am I right on that? Personally (and I acknowledge that this is a subjective statement) I’d say that it is pretty impressive that such vastly different methods as satellite and thermometer reconstructions come within 3% of each other. I also feel that a 3% affect is not large enough to justify Anthony’s claims that “it cannot be credibly asserted there has been any significant ‘global warming’ in the 20th century” or that “all terrestrial surface-temperature databases exhibit signs of urban heat pollution and post measurement adjustments that render them unreliable for determining accurate long-term temperature trends” (http://scienceandpublicpolicy.org/images/stories/papers/originals/surface_temp.pdf).

November 4, 2011 8:50 am

Mr. Bradly:
Yes, that might be true that a MIN-MAX thermometer was invented in 1782. (Daniel Gabriel Fahrenheit invented the alcohol thermometer in 1709). However – considering that there was a revelation about 6 years ago that the DEW and BMEWS operators were “fudging” the winter low (and even the day time high, if cold enough) numbers because it was TOO DAMNED COLD TO GO OUT AN TAKE READINGS..
And I would BEG of you to realise COST and SOPHISTICATION. TO IMPLY that all these readings were taken with Min – Max thermometer is disingenuous and mis-leading at best. It is complete “temperal provincialism”. Temporal provintialism in it’s basic sense is to judge everything in the future and everything in the past based on what we know now.
So, for example, putting today’s abilities and “quality assurance” standards on our forebearers, is as ill-legitimate as making computer programs to predict “climate” 50 and 100 years in the future from NOW.
Sorry, don’t buy the hand waving. Min-max themometers were NOT THE NORM in the 19th century. It was hand recorded data with the inherent errors. (So me diaries, sketches, records.)

November 4, 2011 8:51 am

Dang, I hate making trivial errors. “SHOW ME” not “So me”.
Max

Spen
November 4, 2011 8:51 am

I am still puzzled by confidence levels/error range. I assume the accuracy of the older temperature measurements was no better than +/- 0.5 deg C. Shouldn’t that degree of accuracy apply to the anomaly?

Don Monfort
November 4, 2011 9:11 am

Steve Mosher,
What’s wrong with the map? It was offered as an eyeball comparison with the lousy map in the BEST UHI paper (figure 2). Do you think that figure 2 belongs in a scientific paper? It’s mislabled (the black dots are not the rural sites but the very-rural sites), and what’s with the totally black USA? Yes I have seen their explanation, but I am sure you could have done much better than that. And anyone should be able to compare the maps and see that the allegedly very-rural stations (black dots) are not in major areas that are obviously the least populated places on the planet. Look at Africa, S. America and Australia. Where are the black dots that represent stations far from urban areas located? Look at Australia, clearly the black dots are not concentrated in the sparsely populated interior, but in the populated coastal areas. Same story in Africa and S America. Not too many black dots in the Sahara Desert, or the Amazon. Go to the cities to find the black dots. The paper says that the US appears to be very black, but only 18% of the sites are very-rural. Again look at Africa, S America, Australia and other places with large sparsely populated areas. See much black? Are you following me here, Steve?
Look, the point is not that they picked relatively less populated areas out of their 39,000 stations. The problem is that urban, one-horse town, burg, hamlet, suburban and blah…blah…blah areas with human influence on local temperature are heavily overrepresented in the data. The truly very-rural sites are generally not there, because there are very few stations in those places. Do you believe that this study has done what Muller claims for it? That it has laid to rest climate skeptics’ doubts regarding UHI effect in estimating global warming?
Oh, but some geniuses are claiming that this paper was not about UHI. Read the title of the paper, or actually read the paper, if you have a few minutes. And read Muller’s article in the WSJ, in which he falsely claimed that BEST compared urban to very-rural, distant from urban areas, and the climate skeptics got no case on UHI any more. See how many times UHI, or references to UHI are found in those things, which you have not yet read.

November 4, 2011 9:13 am

Steven Mosher says:
November 3, 2011 at 11:21 pm
So Theo.
from 1979 to 2010.
UAH has .18C of warming
Best has .28C of warming
===========================================
Maybe this can help……
click for an ugly graph
I think it is becoming increasingly clear that UHI is a term that adds to the confusion. 🙂
I also think is it clear that the BEST team has a ways to go before they’ll be current on the discussion. Rural and not very rural? lol

Don Monfort
November 4, 2011 9:39 am

Steve Mosher,
I forgot to address this:
“you are misunderstanding what they mean.
what they mean is that sites with No built pixels are very rural ( 16,000)
all other sites are called NOT very rural.. that includes urban”
I know that includes urban. It also includes rural, because with the tool they chose to use they cannot distinguish urban from rural. Is that the right tool to use in trying to find out something about UHI? Isn’t this BEST UHI study rather amateurish? Jeff’s criticism is valid, and that is why he hasn’t heard back from Prof. Dr. Muller.

November 4, 2011 9:51 am

Jeff
“All that said, I don’t believe that warming is undetectable or that temperatures haven’t risen this century. I believe that CO2 helps warming along as the most basic physics prove”
Sorry Jeff, which basics would that be? would that be different to mine- I don’t know if the net effect is warming or cooling.
http://www.letterdash.com/HenryP/the-greenhouse-effect-and-the-principle-of-re-radiation-11-Aug-2011
these data from BEST and what not are all meaningless without the maximas and minima’s? – which would prove natural warming and by how much?
http://www.letterdash.com/HenryP/more-carbon-dioxide-is-ok-ok

Keith
November 4, 2011 10:03 am

Matt says:
November 3, 2011 at 9:09 pm
The strongest thing Anthony’s paper seems to be saying is that the poor quality stations tend to amplify diurnal variation. That just adds to the noise, but as Anthony admits, the noise cancels out when you get an average trend. It seems that in Anthony’s own summary of the Fall et al paper, he admits that including the poor stations does not bias the average daily temperature trends towards more warming (if anything, towards less warming).

It said the opposite: “While the 30-year and 115-year trends, and all groups of stations, showed warming trends over those periods, we found that the minimum temperature trends appeared to be overestimated and the maximum warming trends underestimated at the poorer sites.” See http://wattsupwiththat.com/2011/05/11/the-long-awaited-surfacestations-paper/
In other words, minimum temps are rising faster than trend and maximum temps lower than trend at the poorer sites, giving a smaller diurnal variation. This will reduce noise in the data, giving erroneously reduced error bars and confidence intervals, unless this is catered for through UHI/siting adjustments/corrections. Reduced diurnal variation, particularly in winter, is a key prediction of CO2 warming thoery, so poor siting gives a false degree of confidence that CO2 is indeed having this forecast effect.
Too many people seem to have misunderstood and failed to pick up this key finding from the Fell et al paper, instead proclaiming that the trend of ‘average’ (min-max/2) is unaffected by siting issues and that therefore the Surfacestations project was a waste of time. None so blind, etc…

Don Monfort
November 4, 2011 10:22 am

Fred Singer’s op-ed in the WSJ, is a polite and reserved rebuke of Muller’s bombastic propaganda in that same publication:
http://online.wsj.com/article/SB10001424052970204394804577012014136900828.html?mod=googlenews_wsj

Matt
November 4, 2011 10:38 am

HenryP,
In your article linked above you say:
“In the wavelengths areas where absorption takes place, the molecule starts acting like a little mirror, the strength of which depends on the amount of absorption taking place inside the molecule. Because the molecule is like a sphere, we may assume that ca. 62,5% of a certain amount of light (radiation) is sent back in a radius of 180 degrees in the direction where it came from. This is the warming or cooling effect of a gas hit by radiation. Same effect is also observed when car lights are put on bright in humid, moist conditions: your light is returned to you!!”
I think you are confusing absorption and scattering. Reflection of light off of a molecule is different from absorption and reemission. Absorption represents a change in the quantum state of a molecule. When a photon of the appropriate energy is absorbed by a molecule, that molecule is excited to a higher energy state. Eventually, the molecule will return to the the “ground state” (lowest allowable energy state) and reemit the photon. But, the direction of the reemission depends on the particular excitation and is not strongly correlated with the original direction of the photon. I am simplifying a bit, but that’s the general picture. Reflection, on the other hand, is elastic scattering…and it is specular (angle of reflection = angle of incidence).
Your approximation of molecules as spherical is problematic. CO2 and O2 are very much non-spherical. Organics like Methane are very long chains of atoms and even less spherical.
Also, even if you make this spherical approximation, your claim that the light is reflected 180 degrees is incorrect. The reflection angle depends on the incidence angle. If a ray of light hits a reflecting sphere off-center, the light will not be reflected back along the same direction it came. Sit down and draw the ray optics. Your argument about the light from your headlights “coming back to you” drives the point home. If the light were reflected at 180 degrees it would come back to the headlights…not to you. You are seeing scattered and reflected light at a different angle from 180 degrees. And, you are only seeing a small fraction of the light. The bulk of it is absorbed, forward scattered, or transmitted.
In the end, it is true that atmospheric gasses reflect some light back into space. This is a component of the earth’s “albedo” and it does have a slight cooling affect. However, ice and clouds are much more important to the earth’s albedo than atmospheric gasses.
When greenhouse gasses absorb IR radiation from the sun, the direction of reemission is not specular (mirror image reflection). It is randomized. And it is this randomized reemission that contributes to the warming affect of GHGs. At particular IR wavelengths the warming from absorption and reemission is much more significant than the cooling from back reflection of the gasses.

Sun Spot
November 4, 2011 11:00 am

Regarding Surface temperatures: It’s not only “UHI” effect, I’ve never seen anyone speak to Rural Terrain Heat Island effect.
What is “RTHI” effect you ask, well every motorcycle ride knows what this is. In the evening or at night as you ride through the countryside and you crest a hill you can feel the temperature drop by many degrees or enter a valley the warm night air (and insect rain) is often many degrees warmer. This effect of hitting warm and cold spots also happens on relatively level patches of road often triggered by a wood lot or river. Temperature inversion layers that trap heat or cold close to the ground could easily throw rural temperature measurement stations off by many degrees, how is this compensated for ?

Don Monfort
November 4, 2011 11:07 am

“At particular IR wavelengths the warming from absorption and reemission is much more significant than the cooling from back reflection of the gasses.”
You are doing the same thing he did; conflating absorption and re-emission, and reflection.
Doesn’t some of the randomized re-emission of absorbed IR radiation from the sun, go back into space?

November 4, 2011 11:18 am

Matt, I go with observations. Absorption is a wrong term. I grew up with terms like extinction and transmission. Water vapor is somewhat problematic because the molecules build up to small droplets which do cause optical scattering. I know what is the difference. However, it appears that the observed effect (e.g. via the moon) is pretty similar as if it were mirrored. (see my footnote: follow the green and blue line in fig 6 bottom and see how everything comes back to earth via the moon in fig 6 top and figure 7.)
e.g. there is no change in the molecule (of carbon dioxide) quantum or otherwise if you throw light of it on of 4.26 um to measure because otherwise it would get warmer and explode eventually if you measured the % in a closed container and left the meter on?
It appears that there is a table showing the sun’s Watts/cm2 between 4 and 5 um
(e.g. Nasa report 351)
but where is the table showing earth”s emission in Watts per cm2 between 14 and 15?
In other words, I am asking how much exactly is the difference between the cooling effect and the warming effect of the CO2? If you don’t have those measurements in Watts/m2/m3 0.01%CO2/24 hours for both the cooling and warming effect of the CO2 then how would anyone know for sure that the net effect of more carbon dioxide is warming rather than cooling?

A. C. Osborn
November 4, 2011 11:27 am

My biggest concern is that people are actually trying to work with Best’s Trend data without looking at the actual data in the “Site” recordsets.
The Taverage data, which are temperatures not anomalies is absolutely riddled with errors, how can you use an error ridden dataset for accurate trend analysis?
I scanned the data initially looking for patterns and it soon became apparent that the data has lots of improper minus signs causing data to vary by 30-40 degrees between months.
Next I looked at Winter averages and found that they were higher than summer averages, how can this be.
I ran simple query for all stations above 10 degrees latitude for January versus June, July & August and if it was higher than any one of them flag it up.
Of the 34103 sites above 10d Latitude 30506 have 1 or more January averages higher than June, July or August, sometimes all 3 summer months.

A. C. Osborn
November 4, 2011 11:33 am

In Addition to my previous post I found that of the 34103 sites above 10d Latitude 29770 have 1 or more with both January & February averages higher than June, July or August, sometimes all 3 summer months.
This can’t possibly be correct, what has there processing done to the values?

old engineer
November 4, 2011 12:01 pm

Richard says:
November 4, 2011 at 2:24 am
“Why is it that we cannot seem to agree that UHI (which is a well observed fact and easily visible in the data) is different to dUHI and dRural.”
=========================================================================
Richard has this right, assuming he means dT(UHI)/dt and dT(Rural)/dt. Too many in this discussion seem to be confusing the difference between the trend in both urban and rural temperatures over time, with the differences between urban and rural temperatures at a given time (the UHI effect).
It would seem to me that the effect on a weather station being engulfed in the UHI is not initially a constant, but something that would rise to some maximum level over time, then remain constant over time.
Consider two stations A and B, a couple of hundred kilometers apart. Both stations were rural at some time in the past. At that time temperatures at both stations were the same. Now assume that station B starts to be engulfed in urbanization from a nearby city. Station B temperature starts being greater than A because of the urban heat island. As the city overtakes Station B the difference between Station A and Station B becomes larger. Finally Station B is fully urban, and (assuming no change in heat added by urbanization) the temperature difference between Station A and Station B now becomes a constant. The temperature anomaly graph with time will look different for the two stations if the time covered begins when they were both rural. But if it begins when Station B was fully urban there will be no difference.
Of course anthropogenic UHI heat content is not constant, or the same for any two urban areas.. Which makes land based historic data very difficult to sort out.

A. C. Osborn
November 4, 2011 12:01 pm

P. Solar says:
November 4, 2011 at 3:02 am
I agree with you, compare the best results to the pre 2000 version of the other datasets, where have all the major peaks & valleys gone. Especially the one that prompted the “Ice Age” concerns of the 70s?

Alan S. Blue
November 4, 2011 12:09 pm

Matt says:
November 3, 2011 at 7:34 pm
“Someone should point out to Jeff that there is a difference between saying “there is no UHI” and saying “there is significant UHI but cities only account for a small fraction of land surface and have only a small impact on global averaged trends”.
Note that the second quote is -not- an exculpatory factor. It’s a -compounding- factor. If cities are a miniscule faction of land mass (true) –but the majority of temperature measurements happen to be within (or near enough) to the cities– then your entire data set has an exceedingly skewed sample.
That is: Cities -do- have a ” a small impact on –actual– global averaged trends”, but the –measurements– are unfortunately concentrated in exactly those areas — and thus the UHI –effect– can have an overwhelming effect on the global –measurements– of the average trend.
IOW:
We’d be better off flat-out excluding data from anywhere within 10 miles of a city. That is excluding a –small– portion of the landmass with known-corrupted data. They do happen to be a vastly disproportionate fraction of the available surface stations
1) CRN1 is a -prerequisite- for competent data, not a sufficient condition.
2) UHI is not a microsite issue, is unfixed by CRN1 quality stations, and happens at -far- more stations than “1% of land mass” would predict – and thus is non-negligible.
3) A calibrated point-source measurement is not a calibrated grid-cell average measurement.

Matt
November 4, 2011 12:11 pm

Don,
“You are doing the same thing he did; conflating absorption and re-emission, and reflection.”
Please explain how I’m doing the same thing he did? I very clearly distinguished between elastic scattering and absorption. Specular reflection of light depends on angle of incidence. Angular re-emmission of IR is more or less isotropic.
“Doesn’t some of the randomized re-emission of absorbed IR radiation from the sun, go back into space?”
Of course it does. The re-emission is isotropic, so some of the re-emitted IR is pointing back towards space. The other important point is that the probabilities of absorption and scattering depend heavily on wavelength. Atmospheric gasses are largely transparent to optical wavelengths. Blue light has a very short mean free path and a high probability of scattering (answering the age old “why is the sky blue” question). However gases like water-vapor and CO2 make the atmosphere highly opaque to IR. The probability of absorption (for certain wavelengths) is much higher than transmission or reflection.

Septic Matthew
November 4, 2011 12:24 pm

Steven Mosher: Can we conclude ( as Christy, Spencer and Steve do) that the difference
.28 – .18C or .1C per decade could be UHI?

In Steve McIntyre’s wording, I think that is reasonable.
There have been a number of good critiques of the BEST analyses, and this is one of them. point #2 can’t be decided (imo) can’t really be addressed without some study of the particular surfacestations: what distinguishes the “warming” from the “nearly constant” from the “cooling” stations. But the comparison of the satellite trend to the surface station trend is reasonable.

Matt
November 4, 2011 12:35 pm

Henry P,
Read my point to Mont. Different wavelengths have different probabilities of absorption and reflection. Certain IR wavelengths are maximally absorbed by water.
Here is a nice plot of the transmissivity of the atmosphere to various wavelengths:
http://en.wikipedia.org/wiki/File:Atmospheric_electromagnetic_opacity.svg
“Water vapor is somewhat problematic because the molecules build up to small droplets which do cause optical scattering.”
No. Water vapor is the gaseous state of water. Droplets that form clouds are in a liquid state. Clouds do reflect a lot of light back into space. In any case, it is “optical” light that reflects off of water droplets. Water is not a good IR reflector, but it is a very good absorber:
http://en.wikipedia.org/wiki/File:Water_absorption_spectrum.png
Anyway, I would really suggest that you work through some formal physics. You’re clearly a smart guy. But, without basic physics literacy it is very easy to think that you understand things that you don’t.
I wouldn’t even purport to be an expert on the topic of atmospheric response to radiation and I have a PhD in physics. I’m riding on what I learned from Electrodynamics in grad school and from some hands on experience working with an IR laser (although my laser is very shallow IR). I know enough to see that you are clearly jumbling up concepts. I don’t know what else to say. The teacher in me hurts to hear arguments like this and I know its probably futile to suggest that you sit down with someone and try to learn some of the formal basics. Again, I don’t mean to sound condescending. I’m not an expert on the topic either. But, I’m also not challenging this century-old science. If I wanted to do that, I would also sit down with an expert and hit the books first.
There are plenty of legitimate scientific discussion points on the subject of AGW. But, the radiative properties of CO2 are just not among them…Anyway, I can’t let myself get distracted again…I’m done here. Cheers.

Don Monfort
November 4, 2011 12:39 pm

old engineer,
Can you name some of those who are disagreeing with Richard?

Alan S. Blue
November 4, 2011 12:48 pm

Quick examination of stations to see how many are inside city limits:
Brewton, AL: Inside.
Fairhope, AL: Inside.
Gainesville, AL: 2km.
Greensboro, AL: Inside.
Highland Home, AL: (1), 500m to high school.
Muscle Shoals, AL: Inside.
Scottsboro, AL: Inside.
Selma, AL: Inside.
St Bernard, AL: Inside.
Taladega, AL: Inside.
Thomasville, AL: Inside.
Troy, AL: Inside.
(1) No city limit demarcation in google maps.
That’s the first page of the Alabama USHCN stations as tabulated at surfacestations.org.
So the (daft) quote is “there is significant UHI but cities only account for a small fraction of land surface and have only a small impact on global averaged trends”.
Taking the most conservative position possible from that data:
That’s -83%- of the data collected -inside- cities. Those areas known to have issues and known to be -unrepresentative- of the bulk land mass. Nice data collection methods.
Tempted to run through the entire list of USHCN stations, there just aren’t that many.

Septic Matthew
November 4, 2011 12:50 pm

Stephen Rasey: The splices contain no low frequency information in the spectrum where we expect GW and UHI signals to exist. But when they glue the spices together, low frequencies return – but from WHERE? It can only come from the glue. It is not in the data anymore.
They cut and splice where there is a jump discontinuity in the data. It is the act that produced the jump discontinuity (which may have been relocating the thermostat, or putting an asphalt runway near it) that perturbed the low frequency signal. Cutting and slicing restores the low frequency signal that the jump discontinuity perverted.

Don Monfort
November 4, 2011 1:06 pm

Matt,
Read the sentence of yours that I quoted, again. You conflated absorption and re-emission with reflection. You did not mention that absorption and re-emission results in some of the re-emission going back into space. Don’t you think that might have been appropriate there? But if you are happy with that, go with it.
And see what Alan Blue has to say, above. He knows how to frame an issue.

Robin Hewitt
November 4, 2011 1:18 pm

You say Berkeley, I say Berkley. Probably best to spell it correctly because Berkley is Cockney rhyming slang. English is a rich and fascinating language dear to my heart, my only reason for offering this. Anyone who does something extremely stupid can be called a “great steaming birk”, (Berkley contracted) with little chance of causing offence. This seems odd once you appreciate the true meaning of the word.
For those not au-fait with old East London Cockney rhyming slang… Tit for tat rhymes with hat, so the Cockney’s hat is his “titfer”. China plate rhymes with mate, so his buddy is his “China”. Richard the third rhymes with bird and they gave our children the “Dicky birds”. True Cockneys have to be born withing the sound of Bow bells. They greet each with “Wotcha” which comes straight from the high language of chivalry, “What cheer Sir knight?” But I digress.
“Berkley Hunt” is the female, anatomical equivalent of the male “Hampton Wick”, so I will quite understand if this reply gets moderated out of existance.

EFS_Junior
November 4, 2011 1:27 pm

http://wattsupwiththat.com/2011/11/03/a-considered-critique-of-berkley-temperature-series/#comment-787568
“Perhaps you are unaware of this but if you take the time to correctly process satellite LTL data and compare it to ground data, there is a statistically significant difference in trend. i.e. detrend sat data, scale variance, retrend, regress, examine residuals. That is really all the confirmation of UHI that I need. So when a paper is published on non-detection of UHI, it is an example of go home and do it again. ”
Actually, I don’t know at all what the above means.
Are you suggesting that you yourself have taken the time to do your own independent surface temperature reconstruction from the raw satellite data, independent from the methods used by UAH and/or RSS?
Also, how do we know that the satellite data are the real temperature trend? After all, it is an indirect measurement of the lower troposphere and not a direct measurement at (or near) ground level;
http://en.wikipedia.org/wiki/Satellite_temperature_measurements
“Satellites do not measure temperature. They measure radiances in various wavelength bands, which must then be mathematically inverted to obtain indirect inferences of temperature. The resulting temperature profiles depend on details of the methods that are used to obtain temperatures from radiances. As a result, different groups that have analyzed the satellite data have produced differing temperature datasets. Among these are the UAH dataset prepared at the University of Alabama in Huntsville and the RSS dataset prepared by Remote Sensing Systems. The satellite series is not fully homogeneous – it is constructed from a series of satellites with similar but not identical instrumentation. The sensors deteriorate over time, and corrections are necessary for orbital drift and decay. Particularly large differences between reconstructed temperature series occur at the few times when there is little temporal overlap between successive satellites, making intercalibration difficult.”
After reading that (and more), I can only conclude that the direct surface temperature measurements are not the same as the indirect lower troposphere satellite measurements, one is direct the other is indirect, one is at the surface the other is somewhere (??) in the troposphere.
So if I were to look anywhere first, then it would be the satellite data, as the 0.1C difference is more likely due to errors in the satellite data, or errors in the mathematical inversion to tropospere temperature, or simply due to the two sets of measurements not being taken from the same elevations.
Oh, and here a couple of links for you;
http://www.demographia.com/db-worldua.pdf
http://en.wikipedia.org/wiki/Earth
From the first of the above two links;
“This report contains population, land area and population density for all 780 identified urban areas (urban agglomerations or urbanized areas) in the world with 500,000 or more population as of the volume date. A number of additional urban areas are also listed, including all urban areas over 100,000 in France, New Zealand, Puerto Rico, the United Kingdom and the United States and all urban areas over 50,000 in Australia and Canada. Rankings are indicated only for urban areas of 500,000 and over.
More than 1,400 urban areas of all sizes are included, accounting for 53 percent of the world urban population in the fourth quarter of 2005 (the average year of the estimates).
From Table 7: 1,824,985,000 people live in these urban areas with an average population density of 5,480 people/km^2;
1,824,985,000/5,480 = 333,000 km^2
The total land surface area of the Earth is 148,940,000 km^2.
330,000/148,940,000 = 0.0022 or 0.22% of the total land surface of the Earth is comprised of an urban population of 1,824,985,000 people (as of 2005).
That means that the rest of Earth’s 2005 population of ~6.5 billion people occupy the rest of Earth’s land surface;
(6,500,000,000 – 1,824,985,000)/(148,940,000 – 330,000) = 31.4 people per square kilometer
It would be awfully hard to imagine that Earth’s total urban population covers more than say 1% of Earth’s total land surface area.
QED, therefore, forthwith, we can safely conclude that a proper area weighted land surface temperature reconstruction from all land surface measurements, both with urban areas and without urban areas, will be essentially the same, to say two decimal points of precision (you all can fight over the whichever temperature scale you all prefer).

1DandyTroll
November 4, 2011 1:42 pm

So, essential, your reasoning is that the temperature dataset and quality of that dataset and the statistics and its methods and the quality of said statistics and methods is absolutely positively in all shapes and forms completely crap and can’t be trusted to be used to prove anything diddley squat, but you believe that the temperature has gone up anyway.
Exactly how does that not make you sound like an utter global warming fundamentalist too deep into the bubbles of his own bong? :p

Manfred
November 4, 2011 2:04 pm

“The conclusion of the three groups is that the urban heat island contribution to the global
average is much smaller than the observed global warming. Support is provided by the
studies of Karl et al. (1988), Peterson et al. (1999), Peterson (2003) and Parker (2004) who
also conclude that the magnitude of the effect of urban heating on global averages is small.
There has been further discussion about the possibility of large non-climactic contamination
in global temperature averages, particularly due to local effects of urbanization,
development, and industrialization (see, for example, McKitrick & Micheals 2004, 2007; De
Laat & Maurellis 2006; Schmidt 2009; and McKitrick & Nierenberg 2010.).”
———————————————————–
This is not a balanced review of peer reviewed literature. Supporting papers are clearly highlighted as “support”, opposing papers are summarized as “further discussion”.
Further discussion, such as satellite/ground differences, diverging sea surface/land trends, UHI log population law and expected dUHI increase with population growth does not appear at all.

diogenes
November 4, 2011 2:27 pm

Verity
,my view from a number of blogs is that Steven Mosher is about 14 years old…and is going through puberty. ignore him. he is worried about his pimples…he has a telescope and a laptop and he thinks he knows things.

Manfred
November 4, 2011 2:31 pm

Richard says:
November 4, 2011 at 5:00 am
I think they failed to differentiate between UHI and dUHI, where only the latter is relevant for trends.
They failed to explain, why “very rural stations” should show low dUHI, when the UHI log population law implies something else, let alone microsite issues and land use change.

Editor
November 4, 2011 3:12 pm

steven mosher says:
November 4, 2011 at 8:15 am
You asked:
Steve’s analysis suggest an upper bound. Are you open to discussion of the upper bound or do you disagree with McIntyre, Christy, Spencer and Pielke?
I can either answer yes or no to the first part of your question (yes I am open to discussion of it, or no I see no need to argue about it/discuss that it is beyond this limit which Steve suggests is reasonable and I agree) and no to the second part, I don’t disagree with Steve et al. My lack of a clear answer earlier was due to slight puzzlement at the phrasing of your question.
My only qualification of this is to point out, as I am sure you and Steve would agree, that this is net warming, since some sites are cooling (30% according to BEST). In this case the individual site limits for UHI developing over the period are not bounded by this 0.1C/decade.
I am not suggesting extrapolation of the 0.1C rate either, merely observing that this is a reasonable bound for the satellite era, but that is all. And that rate clearly does not marry well with the historical overall rate of warming, which again is my point that UHI development will vary with time and location. How do we discern this in the surface record?

November 4, 2011 3:22 pm

Septic Matthew says:
November 4, 2011 at 12:50 pm
They cut and splice where there is a jump discontinuity in the data. It is the act that produced the jump discontinuity (which may have been relocating the thermostat, or putting an asphalt runway near it) that perturbed the low frequency signal. Cutting and slicing restores the low frequency signal that the jump discontinuity perverted.
I thought like this too until recently when Steve McIntyre presented a possible scenario that complicates the issue. If a station in town which grew to a city over a century and a half was moved over the years of growth to ever more rural areas, each step could produce a sharp downstep in the data. This downstep after the move could then creep up over decades as more UHI built around it until the station is moved again, producing another downstep. BEST would try and detect and re-align the downstep producing a long term uptrend whereas the true signal would be far closer to the original WITH the steps in it.

November 4, 2011 3:24 pm

Robin Hewitt,
Sorry for the spelling. I’ve corrected the post at tAV.

Gail Combs
November 4, 2011 3:29 pm

malagaview says:
November 4, 2011 at 2:25 am
AT LAST – an intelligent analysis of a temperature data set!…
_________________________________________
Agreed. Sort of blows holes in all the “Official” data sets does it not?
It also blows holes in the CO2 has cause the warming propaganda.
I would like to see the info posted here too.

Gail Combs
November 4, 2011 3:57 pm

Spen says:
November 4, 2011 at 8:51 am
I am still puzzled by confidence levels/error range. I assume the accuracy of the older temperature measurements was no better than +/- 0.5 deg C. Shouldn’t that degree of accuracy apply to the anomaly?
_________________________________
You might want to look at AJ Strata’s article about error in the temp record.
http://strata-sphere.com/blog/index.php/archives/11420

P. Solar
November 4, 2011 4:44 pm

Jeff, in relation to your #1 here is a comparison of FFT of Berkeley-est and HadcruT3 (land and sea).
http://tinypic.com/view.php?pic=24qu049&s=5
Up to around 20y period (5 on the /century scale) seem possible but the longer frequencies seem to have been decimated by the scalpel.
As you noted , they need to asses the effects of this processing and report it in the paper.
Hopefully this will address this lacuna before it gets published.

diogenes
November 4, 2011 4:48 pm

well done Verity for resisting the passive aggressive teenager that Lord Stephen Mosher is trying to become – he is obviously becoming more and more opposed to the idea of truth as he grows ever younger and more immature

1DandyTroll
November 4, 2011 4:49 pm

Why is it that UHI effect always get upped by the models when they either should stay the same or rather be simmered down due to the mitigating effect that we create as well by planting tress, creating reservoirs/lakes, irrigation of dried out land, and so on and so forth.
Why is weather always chaotic but climate is always simplified beyond disbelief to a über linearity?
Why is the temperature said to be rising when nobody are even certain of what the temperature was before it supposedly started to rise?

November 4, 2011 4:52 pm

@Septic Matthew says: They cut and splice where there is a jump discontinuity in the data. It is the act that produced the jump discontinuity (which may have been relocating the thermostat, or putting an asphalt runway near it) that perturbed the low frequency signal. Cutting and slicing restores the low frequency signal that the jump discontinuity perverted.
It is a legitimate intent to preserve the low frequency (GW and UHI) part of the spectrum. But does the suture do that? Have I made the point clear enough that the splices cannot contain the low frequencies? The low frequencies must be part of the gluing process. And just what data do they use to tune the glue? Hmmm?
Let me give you an example. One of the discontinuities you cite is a station move. Now station moves can be for many different reasons, but I’ll wager that a disproportionate number are positively related to urban pressure. At a station that used to be a Class 1, building modifications, parking lot paving, build up of neighbors, the station class has grown to 4. We can’t have that! So we move the station to a new Class 1 location. The right thing to do, but…. Discontinuity! Over the course of a century, this happens 4 times, once per 20 year span. We have a temperature record that is a bit saw-toothed with a UHI rise in each tooth. Deft use of the scalpel slices this 100 year record with 4 discontinuities into 5 “clean” pieces.
They are glued together… HOW? Eliminate the saw-tooth and make a ramp? If “the trend is the only thing you can trust” how can they do anything else?
I say the use of the scalpel and suture has made a bad data situation much worse. In the example I presented above, which of these is closer to the truth of the real regional temperature record? A) To remove the discontinuities? OR B) LEAVE THEM IN? I say use the entire record, uncut, unaltered. Leave the low frequency in. Isn’t the very act of moving the station an exemplary way to account and correct for a UHI problem? The ‘discontinuities” are not noise. They are an important part of the signal and must not be removed. The discontinuities are themselves the removal of a great deal of UHI from the record.
If you use one long saw-toothed record, you will have a strong low frequency signal that will be a combination of GW and UHI. Yes, UHI will contaminate the signal. But the UHI component is in the teeth of the wave, the gradual buildup between moves. By moving the station, the base temperature ought to drop back down due to lower UHI effects to the GW region trend. The overall century-long trend in the station might be (GW + ½ UHI). At least that is an upper bound on GW. But if you cut at discontinuities, treat the trend as your friend, and discount the absolute temperatures, then after splicing out the discontinuities your long term reconstructed signal will probably be (GW + 3.5 UHI), vastly overestimating the real GW trend.
I have used this station-move-because-of-UHI example to make an illustration of my point. Is it contrived? I think not. I think it represents many discontinuity problems, but not all. Be that as it may, my main point is that the scalpel destroys the very data we seek in the GW argument. The suture, the glue, is no guarantee the original signal is preserved. Indeed, the glue can be a major source of corruption or counterfeiting.

Werner Brozek
November 4, 2011 5:40 pm

“old engineer says:
November 4, 2011 at 12:01 pm
Now assume that station B starts to be engulfed in urbanization from a nearby city. Station B temperature starts being greater than A because of the urban heat island.”
This is an excellent point. I assume we know how much hotter an urban station is from a rural station. What we need to know now is how many stations went from rural to urban during the time involved. Or perhaps we should even eliminate all readings where there was a rural to urban change to get the true change in global temperature?

November 4, 2011 6:13 pm

Mike Bromley the Kurd,
“People who frequent “preprint libraries”, although vast in number, are not qualified to critique methodology.”
People who submit to those libraries apparently are often not qualified to critique their own methods either!

Don Monfort
November 4, 2011 6:27 pm

diogenes,
Mosher does have one glaring personality flaw. He is impatient with the stubbornly stupid. Stings, don’t it. You can’t carry Mosher’s jock strap. Your lamp has gone out.

wayne
November 4, 2011 7:17 pm

Stephen Rasey says:
November 4, 2011 at 4:52 pm
Let me give you an example. One of the discontinuities you cite is a station move. Now station moves can be for many different reasons, but I’ll wager that a disproportionate number are positively related to urban pressure. At a station that used to be a Class 1, building modifications, parking lot paving, build up of neighbors, the station class has grown to 4. We can’t have that! So we move the station to a new Class 1 location. The right thing to do, but…. Discontinuity! Over the course of a century, this happens 4 times, once per 20 year span. We have a temperature record that is a bit saw-toothed with a UHI rise in each tooth. Deft use of the scalpel slices this 100 year record with 4 discontinuities into 5 “clean” pieces.
They are glued together… HOW? Eliminate the saw-tooth and make a ramp? If “the trend is the only thing you can trust” how can they do anything else?
I say the use of the scalpel and suture has made a bad data situation much worse.

Stephen, could not agree more with what you have said. Well put. That is pure common sense, not even needing deep statistics, and you know this is happening, the trends are being manufactured to some degree, intentionally or not from such manipulation of the data.
So many people have misread what the UHI effect actually is. They think it about energy USED in the cities warming the Earth causing a global trend and that is totally wrong.
UHI has to do with trends of growth from tiny cooler undeveloped towns to warmer cities over many decades and the trends the temperature stations create out of solar heat absorbed by manmade structures that heats the thermometers locally more than when the structure didn’t even exist years before, not warming the Earth itself.
BEST is right that cities only account for a tiny fraction of the world’s area and the energy they use is insignificant but BEST is also so wrong for it is each individual local thermometer that is being affected giving the illusion of widespread warming when it is just a warmer thermometer caused by UHI growth over the decades.
The splicing you speak of above just magnifies this illusion and you explained how this happens so clear.

wayne
November 4, 2011 7:35 pm

Stephen Rasey, one more thing along your line of thought. When Anthony gets some time I would be very curious if the surfacestation metadata shows any significant number of stations being moved from grassy knolls outside of cities into the core of the cities atop fire stations or in proximity to air-con exhausts.
My guess is no. The opposite should have happened as stations are moved to improve the quality of the readings out of cities, BUT, this creates exactly the scenario you lay out above. I just keep seeing the chart from NOAA in my mind of the difference between adjusted temperatures and raw temperatures as it stair-steps upward and upward like a machine. This could very well be caused by the splicing and adjustments being performed on the temperature data.
Bet many people would like to know that.

Septic Matthew
November 4, 2011 9:40 pm

Stephen Rasey: Have I made the point clear enough that the splices cannot contain the low frequencies?
Your point was clear, but I think you are mistaken for the most part. That is, there may be some records for which the technique masks low frequency variation and introduces spurious low frequency variation. Your example shows that it is possible. It is always possible after the fact to create a single time series, or a few time series, that defeat a particular analysis technique. That’s one of the reasons that so many analysis techniques have been invented. So what you wrote may apply to some of the records. It’s something that the authors might be able to check on.
More likely, in these r records, there is some variability related to natural oscillations, some to trend ( which may include UHI). For example, a small airport near a tiny town that grew from 1950 – 2010, may have had it’s runway first paved and two new buildings added in about 1957, and the thermometer reconditioned. the resultant (presumptive) jump in temperature would pervert the trend and any low frequency signal, but cutting and splicing would restore the trend and the low frequency signal to something more like their true values.
So the question works out to a question about the preponderance of cases as you describe (sawtooths) vs. the preponderance of cases such as I describe (trend line plus sinusoid, with a few random jumps.) When you think of the sources of the low frequency signal (solar, AMO, PDO,) and the otherwise flat or monotonic trend in most temperature records, and reflect that there are 39,000 station records, I think that the cases like I describe predominate. Eventually I’ll ask the authors about this, in a professional meeting of some sort.
More points: The ‘discontinuities” are not noise.
to the degree that they pervert estimating the trend, they are noise.
The discontinuities are themselves the removal of a great deal of UHI from the record.
That’s desirable in this case, like “partialling out” the effect of a covariate.
The suture, the glue, is no guarantee the original signal is preserved.
There are no guarantees, so this method is one more method that is no guarantee.
I think that your sawtooth example is interesting. It is the first I have read of the possibility that a station may have been repeatedly located away from a growing heat source. I think there is no way of estimating UHI or eliminating its effect without some kind of systematic attempt to study the “cooling” and “warming” thermometers. Anthony Watts has tried to lead an effort, but there are too many to be examined in sufficient detail, and the sampling is biased. So a random sample instead of a census might be required.

Septic Matthew
November 4, 2011 9:49 pm

Jeff Id says:
November 4, 2011 at 3:22 pm
I addressed this in my response to Stephen Rasey. I doubt that such cases predominate, but I hope for a systematic study, full histories, of samples of the thermometers. This is for sure a reason to think that their error bounds are too optimistic.
Like you, Steve McIntyre has impact. I hope that the Berkeley team is attending to his critiques.
My last word on the subject: you and Mr. Rasey might be right.

Septic Matthew
November 4, 2011 9:54 pm

diogenes: my view from a number of blogs is that Steven Mosher is about 14 years old
That is most inaccurate and unfair. Steven Mosher very knowledgeable about the topics that posts on, and his posts are always worth reading and considering. If he is sometimes mistaken (who isn’t?) he’s never written a post as stupid as that one from you. (n.b. that is a criticism of the post, not of the person who wrote it. For all I know you are a great and knowledgeable person who just goofed.)

P. Solar
November 5, 2011 1:03 am

Stephen Rasey says: “They are glued together… HOW? ”
You have made a lot of comments about the segments being glue back together or sutured. While I don’t disagree with your general arguments, I don’t think this suturing is what is done in B-est. The little bits remain little bits. and are viewed as zillions of short records that are then statistically analysed. If they were glued back together long term signals would likely be preserved. Study the methods paper again, I see nothing indicating such a reassembly of the shreds.
Your point about longer frequencies seems to be born out by an FFT frequency analysis. Here’s a comparison of Berkeley-est and HadcruT3 (land and sea):
http://tinypic.com/view.php?pic=24qu049&s=5

P. Solar
November 5, 2011 1:12 am

A significant issue is how this method will handle a volcanic event. A rapid cooling followed by decade long recovery. All this emphasises the need for BEST to study what their method is actually doing .
Their paper simply states the effect of splicing “should be trend neutral”. This seems to be a clear and honest declaration that they have not even looked.
It must be remembered that this is UNREVIEWED and UNPUBLISHED at this stage, so I would expect this issue to be addressed during the review period.

November 5, 2011 6:15 am

@ P.Solar. The word “Suture” is not mine. It comes from BEST. Somehow they are taking trends of a temperature segment (some sort of average first derivatives of a function), throwing away everything else, and then making the pieces fit into a 200 year temperature record at the end.
Glue is a good a word as any. And if it implies a source of contamination, it is a better word than most.

November 5, 2011 7:14 am

The situation becomes even worse when we consider that the downstep is more mathematically detectable than the upstep due to the general uptrend of the average. You are more likely to wipe out the re-sighted temp stations.
So the case where station has urban pressure, it is moved to an out of the way position and a downstep is produced is more likely to be chopped than a station which experiences the installation of a building next to it and is not moved.

November 7, 2011 11:48 am

Matt says:
In the end, it is true that atmospheric gasses reflect some light back into space. This is a component of the earth’s “albedo” and it does have a slight cooling affect. However, ice and clouds are much more important to the earth’s albedo than atmospheric gasses.
Henry@Matt
Matt you are arguing that somehow the cooling effect of CO2 is already “counted” in earth’s albedo.
I heard this point made before. That is of course a very stupid argument. Because as CO2 increases, so must its cooling effect (by deflecting sun’s energy) and its warming effect (by deflecting earth’s energy). The question is and was: what exactly is the net effect of an increase in CO2?
I again urge you to carefully read the footnote on the bottom here:
http://www.letterdash.com/HenryP/the-greenhouse-effect-and-the-principle-of-re-radiation-11-Aug-2011
I am particularly worried about the the “absorption” of CO2 in the 4-5 um band because this is where the sun is emitting “hot” radiation which as CO2 increases bounces off the earth.
I am glad you are not teaching anymore, because I don’t know what you want to teach until you understand by simple observation what is actually happening.