A Considered Critique of Berkeley Temperature Series

Guest post by Jeff Id

I will leave this alone for another week or two while I wait for a reply to my emails to the BEST group, but there are three primary problems with the Berkeley temperature trends that must be addressed if the result is to be taken seriously. By seriously, I don’t mean by the IPCC, which takes all alarmist information seriously, but by the thinking person.

Here are the points:

1 – Chopping of data is excessive. They detect steps in the data, chop the series at the steps and reassemble them. These steps wouldn’t be so problematic if we weren’t trying to detect hundredths of a degree of temperature change per year. Considering that a balanced elimination of up and down steps in any algorithm I know of would always detect more steps running against the trend than with it, it seems impossible that they haven’t added an additional amount of trend to the result through these methods.

Steve McIntyre discusses this here. At the very least, an examination of the bias this process could have on the result is required.
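To make the mechanics concrete, here is a toy sketch of what “chop at a detected step and re-fit” means for a single series with one station move. This is my own illustration, not BEST’s actual algorithm; the breakpoint detector, step size, and noise level are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120                      # ten years of monthly anomalies
t = np.arange(n)
true_trend = 0.002           # deg C per month
series = true_trend * t + rng.normal(0.0, 0.1, n)
series[60:] -= 1.0           # a station move introduces a -1.0 deg C step

# Naive scalpel: flag the largest absolute first difference as the breakpoint
bp = int(np.argmax(np.abs(np.diff(series)))) + 1

def slope(x, y):
    """Ordinary least-squares trend."""
    return np.polyfit(x, y, 1)[0]

whole = slope(t, series)     # fitting across the step drags the trend down
pieces = 0.5 * (slope(t[:bp], series[:bp]) + slope(t[bp:], series[bp:]))
print(f"break at {bp}: whole-series {whole:+.4f}, chopped {pieces:+.4f}")
```

On this toy series the chopped estimate recovers the true trend, while the unchopped fit is dragged down by the step, which is the case for the scalpel. The worry raised above is the converse: if detection is not symmetric with respect to the background trend, the same machinery can manufacture trend, which is why an audit of the detection step’s bias matters.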

2 – UHI effect. The Berkeley study not only failed to determine the magnitude of UHI, a known effect on city temperatures that even kids can detect, it failed to detect UHI at all. Instead of treating their own methods with skepticism, they simply claimed that UHI was not detectable using MODIS and therefore not a relevant effect.

This is not statistically consistent with prior estimates, but it does verify that the effect is very small, and almost insignificant on the scale of the observed warming (1.9 ± 0.1 °C/100yr since 1950 in the land average from figure 5A).

This is in direct opposition to Anthony Watts’ Surface Stations project, which, through greater detail, was very much able to detect the ‘insignificant’ effect.

Summary and Discussion

The classification of 82.5% of USHCNv2 stations based on CRN criteria provides a unique opportunity for investigating the impacts of different types of station exposure on temperature trends, allowing us to extend the work initiated in Watts [2009] and Menne et al. [2010].

The comparison of time series of annual temperature records from good and poor exposure sites shows that differences do exist between temperatures and trends calculated from USHCNv2 stations with different exposure characteristics. Unlike Menne et al. [2010], who grouped all USHCNv2 stations into two classes and found that “the unadjusted CONUS minimum temperature trend from good and poor exposure sites … show only slight differences in the unadjusted data”, we found the raw (unadjusted) minimum temperature trend to be significantly larger when estimated from the sites with the poorest exposure relative to the sites with the best exposure. These trend differences were present over both the recent NARR overlap period (1979-2008) and the period of record (1895-2009). We find that the partial cancellation Menne et al. [2010] reported between the effects of time of observation bias adjustment and other adjustments on minimum temperature trends is present in CRN 3 and CRN 4 stations but not CRN 5 stations. Conversely, and in agreement with Menne et al. [2010], maximum temperature trends were lower with poor exposure sites than with good exposure sites, and the differences in trends compared to CRN 1&2 stations were statistically significant for all groups of poorly sited stations except for the CRN 5 stations alone. The magnitudes of the significant trend differences exceeded 0.1°C/decade for the period 1979-2008 and, for minimum temperatures, 0.7°C per century for the period 1895-2009.

The non-detection of UHI by Berkeley is NOT a sign of a good-quality result, considering the amazing detail that went into Surfacestations by so many people. A skeptical scientist would be naturally concerned by this, and it leaves a bad taste in my mouth, to say the least, that the authors aren’t more concerned with the Berkeley methods. Either Surfacestations’ very detailed, very public results are flat wrong or Berkeley’s black-box, literal “characterization from space” results are.

Someone needs to show me the middle ground here because I can’t find it.

I sent this in an email to Dr. Curry:

Non-detection of UHI is a sign of problems in method. If I had the time, I would compare the urban/rural BEST sorting with the completed surfacestations project. My guess is that the comparison of methods would result in a non-significant relationship.

3 – Confidence intervals.

The confidence intervals were calculated by eliminating a portion of the temperature stations and looking at the noise the elimination created. Lubos Motl described the method accurately as intentionally ‘damaging’ the dataset. It is a clever way to identify the sensitivity of the method and result to noise. The problem is that the amount of damage is assumed to be equal to the percentage of temperature stations eliminated. Unfortunately, the high-variance stations are de-weighted by intent in the process, so that the elimination of 1/8 of the stations is absolutely no guarantee of damaging 1/8 of the noise. The ratio of eliminated noise to change in final result is assumed to be 1/8, and despite some vague discussion of Monte Carlo verifications, no discussion of this non-linearity was even attempted in the paper.
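As a sketch of the general “damage the dataset” idea, here is a simplified delete-a-fraction jackknife with equal-weight averaging. This is not BEST’s actual code; the station count, trend distribution, and scaling factor are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_stations = 400
# Hypothetical station trends: a common signal plus independent station noise
trends = 0.02 + rng.normal(0.0, 0.05, n_stations)

def network_trend(sample):
    return sample.mean()     # stand-in for the full averaging machinery

full = network_trend(trends)

# Remove 1/8 of the stations at random, recompute, and repeat
frac, reps = 1 / 8, 200
keep = int(n_stations * (1 - frac))
devs = np.array([
    network_trend(rng.choice(trends, size=keep, replace=False)) - full
    for _ in range(reps)
])
# Scale the spread of the damaged results back to the full network;
# this scaling is only exact for equal-weight, independent stations
sigma = devs.std() * np.sqrt((1 - frac) / frac)
print(f"full trend {full:.4f}, scaled uncertainty {sigma:.4f}")
```

Under the equal-weight assumption, sigma lands on the textbook standard error of the mean. The objection above is exactly that de-weighting high-variance stations breaks the proportionality baked into the scaling factor, so the recovered sigma need not be correct for the weighted average.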

Prayer to the AGW gods.

All that said, I don’t believe that warming is undetectable or that temperatures haven’t risen this century. I believe that CO2 helps warming along, as the most basic physics proves. My objection has always been to the magnitude caused by man, the danger, and the literally crazy “solutions”. Despite all of that, this temperature series is, statistically speaking, the least impressive on the market. Hopefully, the group will address my confidence-interval critiques and McIntyre’s very valid breakpoint-detection issues, and undertake a more in-depth UHI study.

Holding of breath is not advised.


Excellent Jeff. My own concerns centre around issue #2.

R. Shearer

Despite all this “warming” we’re only one large volcanic eruption away from a “year without a summer.”

Still more of why I don’t think it should be called “BEST” –
I’m still favoring Berkeley EST.

David

When is all the raw data and code going to be released? My understanding is that what has been released thus far is of very limited value.
I know some people have said it’s just preliminary stuff and the real stuff is coming, but if the papers are ready for peer review and Muller is all over the news, it seems odd that what was released is not particularly usable to people who want to get to the heart of the methodology that was used.

Doug in Seattle

While I agree that the BEST scalpel is a good idea, I think that in the end it can only be properly employed based on a direct examination of both the metadata and the temperature data. What the BEST crew did was to try and automate this process based on trend.
This is really a problem with the research model rather than the researchers. Crowd sourcing, as was done in the Surface Stations project might be a better way to accomplish this.

Concern #1 speaks loudest. Once you start hacking at the data, you basically add a bias. Mother Nature doesn’t do that, nor does Anthropogenophecles, the god of significant figures.

Admin: just some housekeeping. Can you please perform a replace of Berkley with Berkeley? Thanks.

David says:
November 3, 2011 at 7:06 pm
“…it seems odd that what was released is not particularly usable to people who want to get to the heart of the methodology that was used.”

People who frequent “preprint libraries”, although vast in number, are not qualified to critique methodology.

u.k.(us)

I just don’t want Al Gore running the world.

Randy

Well spoken. The words you use are the same as the thoughts we are thinking as we read along. Succinct. And to the point. Thanks for what you and Anthony do.

Matt

Someone should point out to Jeff that there is a difference between saying “there is no UHI” and saying “there is significant UHI, but cities only account for a small fraction of land surface and have only a small impact on globally averaged trends”. Look, I’m not interested in debating whether or not the latter statement is true. But the fact that “kids can measure” local urban heat islands doesn’t mean it has a significant impact on globally averaged means.
“The non-detection of UHI by Berkeley is NOT a sign of a good quality result considering the amazing detail that went into Surfacestations by so many people”
Lots of science involves *amazing detail* and yet yields a null result. I’m glad that Anthony et al. did such a thorough assessment of station quality. It was a real service to the science. But the fact that they worked hard does not have any bearing on whether or not UHI has an impact on the change in globally averaged temperature anomaly. Lots of measurements and average trends are robust over poorly inter-calibrated instruments. What Jeff is saying is that the Berkeley results *must* be wrong because they go against his a priori belief.
One last point: I may be tired and misreading this, but doesn’t the excerpt from Anthony’s paper say that the reconstructions with the bad sites *understate* the temperature trend (in agreement with Menne et al. and also, BTW, Berkeley)? Isn’t that the opposite of what Jeff wants to believe?

Don Monfort

“Admin: just some housekeeping. Can you please perform a replace of Berkley with Berkeley? Thanks.”
It’s actually Berzerkely.

u.k.(us)

Matt says:
November 3, 2011 at 7:34 pm
“Lots of measurements and average trends are robust over poorly inter-calibrated instruments.”
===========
I assume you have a peer-reviewed paper to back-up this claim ?
I’m sorry, I mean a paper that has cleared peer-review.
A link to same would be best.

Verity
Let’s look at issue #2.
Look at Steve McIntyre’s latest post.
We start with the trend of satellites. Surely you accept the trends of John Christy and Roy Spencer.
Then Steve applies a similar technique to that used here:
http://hurricane.atmos.colostate.edu/Includes/Documents/Publications/klotzbachetal2009.pdf
Then he compares it to the surface trend.
Let’s walk through it slowly using one example.
1. We accept the UAH trend, say .18C per decade.
2. We look at CRU warming more, at .28C per decade.
Can we conclude (as Christy, Spencer and Steve do) that the difference,
.28 – .18C, or .1C per decade, could be UHI?
That’s about what Ross suggests?
We all realize that UHI is a potential problem. The first question is: can we bound the problem?
It’s not zero (so BEST is wrong) AND it’s not 9C; the whole world is not Tokyo.
Steve’s analysis suggests an upper bound. Are you open to discussion of the upper bound or do you disagree with McIntyre, Christy, Spencer and Pielke?

Another item I’ve NEVER heard these “wonks” address. Yes, we have “thermometer data” going back to about 1800.
NO, none of the data until the 20’s or 30’s (and a small amount at first) was taken with “rotating drum” daily recorders.
The problem with this? WHEN IS PEAK, when is trough?
We have NO assurance, without DAY LONG RECORDINGS that peaks and troughs were properly recorded.
I HAVE NEVER HEARD THIS MENTIONED OR ADDRESSED.
Frankly, the lack of discussion of that fact tells me that “all the best laid plans of mice and men” have gone wrong.
There is, in essence, NO VALUE to data from about 1800 to 1920 !!!
Max

Richard M

The UHI effect is obviously a fact. The problem is detecting it and determining how much influence it has on the overall trends. Since it varies by time and location it will require a lot of effort to understand. I haven’t seen this effort from anyone including BEST.

Is the BEST the enemy of good (analysis)?
How do these BEST trends compare with satellite trends (for the period for which both sets exist)? I am not sure one can give too much credence to land based measurements considering the changes that can be introduced into the microenvironment, without even trying (e.g., due to changes in vegetative cover, clearing of land, in the immediate vicinity of the instrument, etc.)

Don Monfort

Matt,
“there is significant UHI but cities only account for a small fraction of land surface and have only a small impact on global averaged trends”. Cities, towns, villages, suburbs, neighborhoods, hamlets, etc. are where the thermometers are. Population centers are over-represented in the temperature records. Read the BEST paper on UHI and tell us how they dealt with that issue.

Max Hugoson says:
November 3, 2011 at 8:02 pm
The problem with this? WHEN IS PEAK, when is trough? We have NO assurance, without DAY LONG RECORDINGS that peaks and troughs were properly recorded.
I HAVE NEVER HEARD THIS MENTIONED OR ADDRESSED.

The max-min thermometer was invented in 1794 by Six, so now you have heard this mentioned.

Don Monfort

I will play, Steve. If you are asserting that a reasonable upper limit on UHI is .1C per decade, I will take that. I only hope that this time your guessing game has an ending.

JJ

JohnWho says:
“Still more of why I don’t think it shoud be called ‘BEST’ –
I’m still favoring Berkeley EST.”

I think it should be Berkeleyest. That keeps it useful as an adjective, but (unlike the current acronym) it would be accurate, e.g. –
“Did you catch the nonsense that came out of Durban last week? It is about the Berkeleyest thing I’ve heard in a long time!”
🙂

Theo Goodwin

Matt says:
November 3, 2011 at 7:34 pm
“Someone should point out to Jeff that there is a difference between saying “there is no UHI” and saying “there is significant UHI but cities only account for a small fraction of land surface and have only a small impact on global averaged trends”. Look, I’m not interested in debating whether or not the latter statement is true. But, the fact that “kid’s can measure” local urban heat islands doesn’t mean it has a significant impact on globally averaged means.”
Your comment contains the equivocation on the phrase “no UHI” that is found in BEST’s work. You seem to be aware of the problem but unaware that BEST uses the equivocation in a fallacious argument. They begin talking about UHI as a local phenomenon, which is the topic they took from Anthony but conclude that UHI has no impact on global average temperatures, a new topic that was not addressed by Anthony. In addition, introducing the topic of “global averaged trends” at all is a beautiful example of a Red Herring; that is, they introduced a topic that might sound like the actual topic but is actually irrelevant to it.
Anthony’s claim is that there is local UHI and that it has a disproportionate effect on the measurement of global average trends, because thermometers are disproportionately found in settings affected by UHI. Anthony does not claim that UHI has a causal effect on global temperature; rather, his claim is about the measurements that feed into measurements of global temperature.
In not addressing Anthony’s concerns about local UHI (and in changing from Anthony’s 30 year period for which there is metadata to a 60 year period for which there is none for the first 30 years) BEST betrayed Anthony’s trust yet chose to give the impression that they did address his concerns. That is plain old deception for the purpose of appearing to attain a success that was not attained, not earned, and not deserved.

Jeff – when you say – “They detect steps in the data, chop the series at the steps and reassemble them. ”
Do these two GISS diagrams express the same thing? They are from Hansen, J.E., R. Ruedy, Mki. Sato, M. Imhoff, W. Lawrence, D. Easterling, T. Peterson, and T. Karl 2001. A closer look at United States and global surface temperature change. J. Geophys. Res. 106, 23947-23963, doi:10.1029/2001JD000354; a pdf can be downloaded at http://pubs.giss.nasa.gov/abstracts/2001/
I have a page here –
http://www.warwickhughes.com/papers/gissuhi.htm
and also a blog post.
http://www.warwickhughes.com/blog/?p=753

Theo Goodwin

Steven Mosher says:
November 3, 2011 at 7:57 pm
“1. We accept the UAH trend say .18C per decade
2. we look at CRU warming more at .28C per decade”
Where did you get these numbers? Does CRU claim that temperatures have risen at .28C per decade? For how many years? Let’s take 60 years as it is very important in some of BEST’s work, especially that having to do with Anthony and local UHI.
Let’s see, .28 per decade times 6 decades yields 1.68C for the most recent sixty years. In turn, that is equivalent to about three degrees Fahrenheit. Does CRU claim that global average temperature has risen 3F in the last 60 years?

Gail Combs

Indur M. Goklany says:
November 3, 2011 at 8:17 pm
Is the BEST the enemy of good (analysis)?
How do these BEST trends compare with satellite trends (for the period for which both sets exist)? I am not sure one can give too much credence to land based measurements considering the changes that can be introduced into the microenvironment, without even trying (e.g., due to changes in vegetative cover, clearing of land, in the immediate vicinity of the instrument, etc.)
____________________________
Anthony’s Surface Station project is looking into all those microenvironments.
As far as the early data goes, one has to look at the logs and notes of the collectors of the data.
WUWT posted this earlier
“…from 1892 is a letter from Sergeant James A. Barwick to cooperative weather observers in California…” http://wattsupwiththat.com/2011/10/26/even-as-far-back-as-1892-station-siting-was-a-concern/
Someone else is trying to collect the old British shipping records on water temperature.

Uri

Mike Bromley the Kurd says:
People who frequent “preprint libraries”, although vast in number, are not qualified to critique methodology.
Maybe you should have looked at the data yourself.
A cursory look at the preliminary data (plain text version) shows how useless it is.
File data.txt contains adjusted monthly averages, including over 3000 values that are either below -90 Celsius or above 57 Celsius. Some values are below absolute zero (-273), and some are in the many thousands (I saw values above 28000 Celsius).
In site_summary.txt over 600 sites have no known location.
If this is the data set used for the papers, they must retract them.

Matt

u.k.(us) says:
November 3, 2011 at 7:55 pm
Matt says:
November 3, 2011 at 7:34 pm
“Lot’s of measurements and average trends are robust over poorly inter-calibrated instruments.”
===========
I assume you have a peer-reviewed paper to back-up this claim ?
I’m sorry, I mean a paper that has cleared peer-review.
A link to same would be best.
U.K.,
Let me clarify: Pick up *any* peer-reviewed measurement and read the error analysis section. I am describing normal scientific protocol. All instruments have finite precision. All instruments have some bias. The question is: “what effect does the finite precision or bias have on the end results?” Good science attempts to meticulously quantify that uncertainty or bias. It is not enough to say an instrument is imprecise. You have to say the instrument has a precision of X and, when that error is propagated through the analysis, it has an effect Y on the final result. If Y is too big, you go out and buy a new instrument. But if it is not too big to measure the trend in question, then you’re fine. The strongest thing Anthony’s paper seems to be saying is that the poor quality stations tend to amplify diurnal variation. That just adds to the noise, but as Anthony admits, the noise cancels out when you get an average trend. It seems that in Anthony’s own summary of the Fall et al. paper, he admits that including the poor stations does not bias the average daily temperature trends towards more warming (if anything, towards less warming).
To quote Anthony:
*Minimum temperature warming trends are overestimated at poorer sites
*Maximum temperature warming trends are underestimated at poorer sites
*Mean temperature trends are similar at poorer sites due to the contrasting biases of maximum and minimum trends.
-taken from http://www.surfacestations.org/Fall_etal_2011/fall_etal_media_resource_may08.pdf
I can’t seem to find a draft of his paper (links dead) so I don’t want to overstate my understanding of his conclusions or his analysis. But, I see nothing in his media summary that significantly contradicts the conclusions of any prior analysis or the BEST analysis. I would welcome anyone who can direct me to a draft of the paper or who can correct me if I’m wrong about this.
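Matt’s inter-calibration claim is easy to illustrate: if each station carries a fixed (even large) calibration offset, the offsets shift the network average by a constant and leave its trend untouched. A minimal sketch with invented numbers, not any group’s actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)
months, n_stations = 240, 50
t = np.arange(months)
true_trend = 0.002               # deg C per month

offsets = rng.normal(0.0, 2.0, n_stations)    # badly inter-calibrated stations
noise = rng.normal(0.0, 0.3, (n_stations, months))
data = true_trend * t[None, :] + offsets[:, None] + noise

mean_series = data.mean(axis=0)  # constant offsets shift this, but uniformly
fitted = np.polyfit(t, mean_series, 1)[0]
print(f"true {true_trend:.4f}, fitted {fitted:.4f}")
```

The caveat, which is where the UHI debate in this thread actually lives, is the word “fixed”: a bias that drifts over time, such as creeping urbanization around a station, does not cancel out of the average trend.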

DR

Doesn’t basic atmospheric physics demand the troposphere warm at a significantly higher rate than the ground? Why wouldn’t this be point #4? Am I missing something?

DR says:
November 3, 2011 at 9:23 pm
Doesn’t basic atmospheric physics demand the troposphere warm at a significantly higher rate than the ground?
The troposphere is warmed by and from the ground up.

richard verney

Since UHI is a real and undeniable FACT, if we cannot measure/detect UHI then there is something wrong with the resolution of the measuring equipment (algorithms used), PERIOD.
I would be very sceptical of a data set that is unable to detect UHI. I acknowledge that on a trend basis it could be rather small, but it should be there and should be identifiable.

David Falkner

Matt says:
November 3, 2011 at 9:09 pm
I would think, intuitively, that converging maximum and minimum temperatures upon each other would reduce the variance in the data artificially, making the error bars associated with the data seem more reasonable than they ought to be.

fortunate cookie

Slightly OT, but apparently Anthony is scheduled to be the guest during the first hour of tonight’s Coast to Coast radio broadcast (in the US), starting in about twenty minutes.
I wonder whether a discussion of BEST is planned. The only description given on the website is: First Hour: Meteorologist Anthony Watts comments on the global warming issue.
See http://www.coasttocoastam.com/

The overall trend in US continental temperatures follows the ocean cycles, PDO+AMO, as shown by Joe D’Aleo. We are just seeing a repeat variant of the dust bowl cycle.
The meteorological surface air temperature (MSAT), or weather temperature, is the temperature of the air measured in an enclosure placed at about eye level above the ground. The minimum daily MSAT is essentially a measure of the bulk air temperature of the local weather system as it is passing by the station. (It is the base of the local lapse rate.) The maximum daily MSAT is essentially a measure of the solar surface heating/moist convection added to the minimum MSAT.
In California, the minimum MSAT tracks the PDO, with a slope bias from the UHI effect. Most of the CA weather systems originate in the N. Pacific. This is not perfect, but it gives a way of using the PDO as an independent reference to quality check the local station data. Using this technique, I have estimated the UHI for about 30 CA stations. It seems to work. I have also used it on some UK stations with the AMO as reference.
Once the MSAT data is analyzed in terms of the energy transfer physics instead of just a number series, then it becomes clear that there is no CO2 induced global warming in the MSAT record.
http://hidethedecline.eu/pages/posts/what-surface-temperature-is-your-model-really-predicting-190.php
http://venturaphotonics.com/CAClimate.html

Uri says:
November 3, 2011 at 8:55 pm
Mike Bromley the Kurd says:
People who frequent “preprint libraries”, although vast in number, are not qualified to critique methodology.
Maybe you should have looked at the data yourself.

I don’t have to. Dang those pesky /sarc tags. My comment on the masses in the PP libraries? First of all, you missed it, Uri. What masses? (read Muller’s spin on the thread).

Jeff, I agree with your points. UHI is BEST’s Achilles’ heel. That they cannot determine any kind of UHI reduces anyone’s confidence that their records can be used for any detail work. You have to believe that UHI does not really exist despite evidence to the contrary and everyday experience. You have to believe that BEST’s failure to find UHI is work as significant as Michelson-Morley’s failure to measure the ether wind. Or you can believe they have a problem in their analysis method. And I think they have a problem with their scalpel, and said so on April 2 in WUWT.
Which brings me to support your point 1 from a different dimension: the Fourier domain. I think a fatal flaw in the scalpel is that it eliminates low frequency data from the temperature records. UHI and GW are signals made up completely of long-wavelength, very low frequencies with less than a cycle per decade, maybe less than a cycle per century. The lowest frequency possible in a record is one wavelength in the entire record. If you use a scalpel to cut records in time, you destroy the lowest frequencies – precisely those frequencies you are looking for! See CA 11/1/11 for a more detailed argument.
In Muller’s WSJ 10/21/2011 essay, he says:

… By using data from virtually all the available stations, we avoided data-selection bias. Rather than try to correct for the discontinuities in the records, we simply sliced the records where the data cut off, thereby creating two records from one.

In the process to “avoid data-selection bias”, they commit a bias toward selecting high frequencies and a bias against low frequencies. The splices contain no low frequency information in the spectrum where we expect GW and UHI signals to exist. But when they glue the splices together, low frequencies return – but from WHERE? It can only come from the glue. It is not in the data anymore. No wonder they could not find UHI – it’s on the cutting room floor. At the very least, we need to analyze the glue, but I suspect they are just turning high frequency info into what appears to be low. Whatever – it is not original low frequency information; it is counterfeit.
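Stephen’s point can be checked numerically. The sketch below is a toy, not BEST’s joint-offset solver: it cuts a one-cycle-per-record sinusoid into four floating segments, as a scalpel with free offsets effectively does, and compares the lowest-frequency spectral power before and after.

```python
import numpy as np

n = 480                           # forty years of monthly data
t = np.arange(n)
slow = np.sin(2 * np.pi * t / n)  # slow signal: one cycle per record length

# Scalpel with free offsets: cut into four pieces and let each piece float
pieces = np.split(slow, 4)
spliced = np.concatenate([p - p.mean() for p in pieces])

def lowest_freq_power(x):
    """Spectral power at one cycle per record, the slowest resolvable mode."""
    return np.abs(np.fft.rfft(x)[1]) ** 2

print(lowest_freq_power(slow), lowest_freq_power(spliced))
```

In this toy, letting the segments float removes over 95% of the power at the period-of-record frequency, which is exactly where a slow UHI or GW signal would live; whatever low-frequency content a reassembly puts back has to come from the offset-solving “glue”.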

EFS_Junior

(1) and (3) Both of these should be testable with the full (pre-snipped) BEST dataset. Temporal snips do not necessarily include vertical temperature shifts. Vertical temperature snips can use an asymmetrical offset criterion in line with the mean decadal (or a shorter or longer baseline) trend line (by iteration if the underlying trend line is unknown to begin with). The main goal being to minimize snipping biases, if they do in fact exist at all.
(3) If the subsample via Monte Carlo (random) selection truly represents the total sample, across all groups, statistically speaking (no major under-/over-sampling), and both populations are quite large (1/8 of ~40,000 is 5,000), then the BEST method for station uncertainties may, in fact, scale quite accurately (but they could do 1/7, 1/5, 1/4, 1/3, 1/2 as a check, if necessary).
(2) The BEST data did not say that there was no UHI. What they did say is that the UHI had an insignificant effect on the global land temperature time series, simply due to the fact that urban areas are a relatively small percentage of the total land area. This was the BEST preliminary result for global versus rural only: -0.0019 ± 0.0019 °C/yr (meaning that rural had just a barely higher trend line than global).
So, for instance, one could cherry pick and rank all temperature stations showing the highest rates of increase, then remove all rural stations as outliers, and voilà, UHI proven (but that still won’t change the global temperature record, as again, urban areas are a very small part of the total global land area).
In the end this is what BEST wanted in the beginning: open peer review. Just don’t expect the BEST efforts to go about chasing all the various hundreds and/or thousands of leads sent their way.
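The multi-fraction check EFS proposes can be sketched directly: run the delete-a-fraction procedure at several fractions and see whether the scaled uncertainty estimates agree. Under equal weighting they should all land in the same place; a systematic drift with fraction would be the fingerprint of the non-linearity Jeff worries about. A toy version with invented station trends, not BEST’s code:

```python
import numpy as np

rng = np.random.default_rng(3)
trends = 0.02 + rng.normal(0.0, 0.05, 400)   # hypothetical station trends
full = trends.mean()

def scaled_sigma(frac, reps=300):
    """Drop `frac` of the stations, measure the spread, scale to the full set."""
    keep = int(len(trends) * (1 - frac))
    devs = [rng.choice(trends, keep, replace=False).mean() - full
            for _ in range(reps)]
    return float(np.std(devs) * np.sqrt((1 - frac) / frac))

sigmas = {f: scaled_sigma(f) for f in (1/8, 1/5, 1/4, 1/3, 1/2)}
for f, s in sigmas.items():
    print(f"drop {f:.3f} of stations -> sigma {s:.5f}")
```

With equal-weight averaging, all five estimates cluster around the same value, so the check passes here by construction; the informative case would be running the same sweep through the actual weighted averaging, where divergence across fractions would expose the scaling assumption.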

richard verney

There needs to be an explanation as to why approximately a third of stations show no warming, or even cooling. This is not an insignificant proportion, and they cannot therefore be seen as simply some form of outlier.
This needs to be investigated in minute detail, and in particular whether there are some identifiable microclimate, siting or equipment explanations. After all, it is feasible that the one-third of station data showing no warming (or even cooling) is correct, and the two-thirds showing warming is incorrect. Not probable, but not so unlikely that it can be safely ruled out as a possibility.

It is obvious many are treating this BEST stuff as serious if flawed work. It is work, it is flawed, but serious? I think not. At best this is simply an attempt at getting attention; at worst, an attempt at obfuscation. I think this foolishness needs to be completely ignored until all the raw data and code are released and the papers are properly published.

Don Monfort

EFS,
“(2) The BEST data did not say that there was no UHI. What they did say is that the UHI had an insignificant effect on the global land temperature time series, simply due to the fact that urban areas are a relatively small percentage of the total land area. This was the BEST preliminary result for global versus rural only: -0.0019 ± 0.0019 °C/yr (meaning that rural had just a barely higher trend line than global).
So, for instance, one could cherry pick and rank all temperature stations showing the highest rates of increase, then remove all rural stations as outliers, and voilà, UHI proven (but that still won’t change the global temperature record, as again, urban areas are a very small part of the total global land area).”
Urban areas are indeed a relatively small percentage of the total land area. But urban areas, suburban areas, small towns, hamlets, burgs, villages, etc. are over-represented in the temperature record. Cities of 50,000 or more make up 27% of the GHCN stations, according to BEST. Areas with few or no people, and little warming influence from people and their accompanying infrastructure, have few thermometers. What is so hard to understand about that?
Did you actually read the BEST UHI paper? They stated they could not use MODIS data to separate urban from rural areas, so they broke areas down into rural, and very rural. Make sense to you? See if you can find in their methodology a comparison of urban areas vs. unpopulated or sparsely populated areas. I will help you. They state: “Rather than compare urban sites to non-urban, thereby explicitly estimating UHI effects, we split sites into very-rural and not very-rural.” Is it any wonder they couldn’t find a UHI effect?

Don Monfort

richard verney,
And if you look at figure 4 in the BEST UHI paper you will see that the distribution of the cooling trend stations is homogeneously distributed among the warming stations. Look at the almost continuous urban area on the North Atlantic coast of the US. Don’t those blue dots among all that red look weird? There is something wrong with the data, or the methods of analyzing the data. You can also see the red dot concentrations where major cities are located.
http://berkeleyearth.org/Resources/Berkeley_Earth_UHI.pdf
Looking at figure 2, you can see where their “very-rural” sites are located. Match that up with this population density map.
http://modernsurvivalblog.com/wp-content/uploads/2010/06/world-population-density-map.gif

Philip Bradley

Bishop Hill will publish in the next few hours an analysis by me that shows BEST, as well as GISS and HadCRUT, by using daily minimum temperature, over-estimate the average warming over the last 60 years by approximately 45%. It further shows most of the remaining measured warming is due to increased early morning solar insolation, likely due to decreased black carbon aerosols.
I think much of the UHI effect is also due to decreased black carbon aerosols.

JJ

EFS_Junior
(2) The BEST data did not say that there was no UHI. What they did say is that the UHI had an insignificant effect on the global land temperature time series, simply due to the fact that urban areas are a relatively small precentage of the total land area.

Wrong metric. The issue is not whether urban areas are a small proportion of global area. The question is: is the influence of urban-sited thermometers on the temperature record an equally small proportion of the influence of all thermometers on that record?
Further, looking only at “urbanization” on the grand scale of cities is to play semantic games with the concept involved. The problem is not merely the several-degree rise in temps across the profile of a megalopolis. It is equally (if not more so) the problem of ‘rural’ thermometers being impacted by anthropogenic warming on very local scales – impacts that have orders of magnitude less effect on global temps than do big cities, but which can have a much larger effect on the microclimate that the thermometer is measuring. And, of course, there is the question of how those locally warmed thermometers’ readings end up affecting the homogenization adjustments applied to the data.
Thermometers have always been placed near houses, even in rural areas, simply because they have to be convenient to the people who monitor them. With MMTS, they tend to be placed even closer to buildings, as they are tethered by a cable. Such “UHI” doesn’t even require any development to grow in magnitude; it merely requires that MMTS cables of any length be relatively expensive …

ferdberple

Stephen Rasey says:
November 3, 2011 at 9:56 pm
In the process to “avoid data-selection bias” – they commit a bias toward …
What steps did BEST take to validate their methodology with a known data set, to ensure that it was not introducing spurious results? Isn’t that standard practice in science when introducing a new methodology, to test the methodology with a known reference data set, where the answer is already agreed, to prove that it performs as expected?
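The validation ferdberple describes is straightforward to set up: build a synthetic series with a known trend baked in, run the trend-recovery step on it, and confirm the known answer comes back. Here is a minimal sketch (this is not BEST’s actual code; the trend value, noise level, and years are made up for illustration):

```python
import random

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Synthetic series with a *known* trend built in (all values hypothetical)
random.seed(42)
years = list(range(1950, 2011))
true_trend = 0.02  # degC per year: the answer the method must recover
temps = [true_trend * (y - years[0]) + random.gauss(0.0, 0.1) for y in years]

recovered = ols_slope(years, temps)
# any candidate homogenization or slicing step should be validated the same
# way: run it on data where the right answer is known, check the recovery
```

The same harness would expose a biased step-detection scheme: insert artificial breakpoints with zero net trend and see whether the algorithm’s reassembled series still returns the built-in slope.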

juanslayton

Max Hugoson: We have NO assurance, without DAY LONG RECORDINGS that peaks and troughs were properly recorded.
I think you are concerned with Time of Observation bias, which has a long history of study. I don’t entirely understand it myself, but a good start might be W.F. Rumbaugh’s article in the October 1934 issue of Monthly Weather Review. He used a six-year thermograph record (those would be DAY LONG RECORDINGS) to evaluate the accuracy of min/max measurements.
Which gives me occasion to ask a question that has been bugging me for some time. Perhaps some reader will know. The weather service was taking thermograph recordings at quite a number of stations from the 1800s and well into the 20th century. What happened to those records? What contribution would they make to climate studies (if any)?

EFS_Junior

http://wattsupwiththat.com/2011/11/03/a-considered-critique-of-berkley-temperature-series/#comment-787371
“Urban areas are indeed a relatively small percentage of the total land area. But urban areas, suburban areas, small towns, hamlets, burgs, villages, etc. are over-represented in the temperature record. Cities of 50,000 or more make up 27% of the CHCN stations, according to BEST. Areas with few or no people, and little warming influence from people and their accompanying infrastructure, have few thermometers. What is so hard to understand about that? ”
Yes, there is a relatively high percentage of urban stations, relatively tightly packed I might add; my understanding is that high-density station areas were given an inverse weighting to cancel out this disproportion. So, for example, if 10 urban stations cover the same area as one rural station, the rural station receives a weight of one while each of the 10 urban stations receives a weight of one-tenth.
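The inverse weighting described above amounts to a few lines of bookkeeping. A sketch, where the grid-cell labels and the ten-to-one station split are hypothetical, purely to illustrate the arithmetic:

```python
from collections import Counter

def density_weights(station_cells):
    """Weight each station by 1 / (stations sharing its grid cell), so every
    cell carries the same total weight however many thermometers it holds."""
    counts = Counter(station_cells)
    return [1.0 / counts[cell] for cell in station_cells]

# Hypothetical layout: ten urban stations packed into cell "A",
# one rural station alone in cell "B"
cells = ["A"] * 10 + ["B"]
w = density_weights(cells)
# each urban station gets weight 0.1, the rural one gets 1.0,
# so cells A and B contribute equally to any area average
```

Whether BEST’s kriging-based averaging reduces exactly to this is a separate question; the sketch only shows the equal-area principle the comment appeals to.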
“Did you actually read the BEST UHI paper? They stated they could not use MODIS data to separate urban from rural areas, so they broke areas down into rural, and very rural. Make sense to you? See if you can find in their methodology a comparison of urban areas vs. unpopulated or sparsely populated areas? I will help you. They state: “Rather than compare urban sites to non-urban, thereby explicitly estimating UHI effects, we split sites into very-rural and not very-rural.” Is it any wonder they couldn’t find a UHI effect.”
But what’s the point? To show how much of a UHI exists, while at the same time showing how little it contributes to the overall global land surface temperature record, simply due to its much smaller area of coverage relative to the total global land area.
I don’t really understand all the fuss about UHI anyway; anyone who wants to do it properly should do a proper inverse weighting based on areal station density.

Theo, the numbers are for 1979 to 2010.
The values are for illustration (but BEST was at around 0.28°C per decade).

So, Theo:
From 1979 to 2010,
UAH has 0.18°C/decade of warming.
BEST has 0.28°C/decade of warming.
Do you or don’t you buy the argument (Christy, Spencer, McIntyre) that the difference between these two, 0.1°C/decade, could be attributed to UHI?
Recall that Ross McKitrick also suggests a figure around 0.1°C/decade.
Don Monfort agrees?
What say you?
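The arithmetic behind that question can be spelled out. These per-decade figures are the illustrative values from the comment itself, not authoritative trends:

```python
# Per-decade trends quoted in the comment (illustrative, as the comment notes)
uah_trend = 0.18    # degC/decade, satellite lower troposphere
best_trend = 0.28   # degC/decade, BEST land average

diff_per_decade = best_trend - uah_trend    # the ~0.1 degC/decade at issue
decades = (2010 - 1979) / 10.0              # 3.1 decades in the 1979-2010 span
cumulative_gap = diff_per_decade * decades  # total divergence over the period
```

Attributing that whole gap to UHI is the contested step; the numbers only show its size if one accepts the attribution.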

Don, that’s a horrible population density map.
1. What is the average population density of the BEST very-rural sites?
If I told you it was less than 20 people per square km, would that count as rural, or not?
10?
5?

They state: “Rather than compare urban sites to non-urban, thereby explicitly estimating UHI effects, we split sites into very-rural and not very-rural.” Is it any wonder they couldn’t find a UHI effect.
#############
You are misunderstanding what they mean.
What they mean is that sites with no built pixels are classified as very rural (16,000 of them);
all other sites are called NOT very rural, and that includes urban.

cce

BEST is not testing for the existence of the Urban Heat Island effect. That is not controversial. It is looking for increased warming at urban sites, not higher absolute temperatures. More specifically:
“Time series of the Earth’s average land temperature are estimated using the Berkeley Earth methodology applied to the full dataset and the rural subset; the difference of these shows a slight negative slope over the period 1950 to 2010, with a slope of -0.19°C ± 0.19 / 100yr (95% confidence), opposite in sign to that expected if the urban heat island effect was adding anomalous warming to the record. The small size, and its negative sign, supports the key conclusion of prior groups that urban warming does not unduly bias estimates of recent global temperature change.”
Setting aside the exact details of their analysis, if “very rural” sites are warming at the same rate as all of the sites, it’s difficult to argue that urbanization is exaggerating “global” warming. The fact that “very rural” sites make up a minority of all sites is irrelevant provided they are weighted according to their spatial distribution.
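The comparison cce describes is between warming rates, not station counts. A toy illustration, with every per-station trend and the rural/urban split invented for the example:

```python
def mean_trend(trends, mask=None):
    """Average per-station decadal trend, optionally over a flagged subset."""
    if mask is not None:
        trends = [t for t, keep in zip(trends, mask) if keep]
    return sum(trends) / len(trends)

# Hypothetical per-station trends in degC/decade, with a very-rural flag
trends = [0.20, 0.19, 0.21, 0.20, 0.20, 0.20]
very_rural = [True, False, True, False, True, False]

all_mean = mean_trend(trends)                # full-network rate
rural_mean = mean_trend(trends, very_rural)  # subset rate
# if the very-rural rate matches the full-network rate, urbanization is not
# inflating the average, however few very-rural stations there are
```

A real analysis would of course area-weight both averages rather than take a flat mean; the sketch only shows why subset size alone is not the relevant quantity.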