Spencer's UHI -vs- population project – an update

In case you missed it, Roy Spencer performed a unique and valuable analysis comparing International Surface Hourly (ISH) data to population density to provide a simple gauge for the Urban Heat Island (UHI) effect. It was presented at WUWT yesterday with this result:

[Figure: ISH station warming vs. population density, with the lowest population bin included]

There were lots of questions on the method. Dr. Spencer adds to the discussion below.

===========================================

UPDATE #2: Clarifications and answers to questions

After sifting through the 212 comments posted in the last 12 hours at Anthony Watts’ site, I thought I would answer those concerns that seemed most relevant.

Many of the questions and objections posted there were actually answered by other people's posts — see especially the 2 comments by Jim Clarke at time stamps 18:23:56 & 01:32:40. Clearly, Jim understood what I did, why I did it, and phrased the explanations even better than I could have.

Some readers were left confused since my posting was necessarily greatly simplified; the level of detail for a journal submission would increase by about a factor of ten. I appreciate all the input, which has helped clarify my thinking.

RATIONALE FOR THE STUDY

While it might not have been obvious, I am trying to come up with a quantitative method for correcting past temperature measurements for the localized warming effects due to the urban heat island (UHI) effect. I am generally including in the “UHI effect” any replacement of natural vegetation by manmade surfaces, structures and active sources of heat. I don’t want to argue about terminology, just keep things simple.

For instance, the addition of an outbuilding and a sidewalk next to an otherwise naturally-vegetated thermometer site would be considered UHI-contaminated. (As Roger Pielke, Sr., has repeatedly pointed out, changes in land use, without the addition of manmade surfaces and structures, can also cause temperature changes. I consider this to be a much more difficult influence to correct for in the global thermometer data.)

The UHI effect leads to a spurious warming signal which, even though only local, has been given global significance by some experts. Many of us believe that as much as 50% (or more) of the “global warming” signal in the thermometer data could actually be from local UHI effects. The IPCC community, in contrast, appears to believe that the thermometer record has not been substantially contaminated.

Unless someone quantitatively demonstrates that there is a significant UHI signal in the global thermometer data, the IPCC can claim that global temperature trends are not substantially contaminated by such effects.

If there were sufficient thermometer data scattered around the world that are unaffected by UHI effects, then we could simply throw away all of the contaminated data. A couple of people wondered why this is not done. I believe that there is not enough uncontaminated data to do this, which means we must find some way of correcting for UHI effects that exist in most of the thermometer data — preferably extending back 100 years or more.

Since population data is one of the few pieces of information that we have long term records for, it makes sense to determine if we can quantify the UHI effect based upon population data. My post introduces a simple method for doing that, based upon the analysis of global thermometer and population density data for a single year, 2000. The analysis needs to be done for other years as well, but the high-resolution population density data only extends back to 1990.

Admittedly, if we had good long-term records of some other variable that was more closely related to UHI, then we could use that instead. But the purpose here is not to find the best way to estimate the magnitude of TODAY’S UHI effect, but to find a practical way to correct PAST thermometer data. What I posted was the first step in that direction.

Clearly, satellite surveys of land use change in the last 10 or 20 years are not going to allow you to extend a method back to 1900. Population data, though, ARE available (although of arguable quality). But no method will be perfect, and all possible methods should be investigated.

STATION PAIRING

My goal is to quantify how much of a UHI temperature rise occurs, on average, for any population density, compared to a population density of zero. We cannot do this directly, because that would require a zero-population temperature measurement near every populated temperature measurement location. So, we must do it in a piecewise fashion.

For every closely-spaced station pair in the world, we can compare the temperature difference between the 2 stations to the population density difference between the two station locations. Using station pairs is easily programmable on a computer, allowing the approximately 10,000 temperature measurement sites to be processed relatively quickly.
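For illustration, the pairing step might be sketched in Python along these lines (a minimal sketch: the station fields are hypothetical, and the 150 km pairing radius is taken from the discussion in the comments below, not from the actual analysis code):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in km."""
    R = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def station_pairs(stations, max_km=150.0):
    """Yield every closely spaced station pair.

    `stations`: list of dicts with 'lat', 'lon', 'temp' (year-average
    temperature) and 'pop' (population density) -- hypothetical field names.
    """
    for i, a in enumerate(stations):
        for b in stations[i + 1:]:
            if haversine_km(a['lat'], a['lon'], b['lat'], b['lon']) <= max_km:
                yield a, b
```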

Using a simple example to introduce the concept, theoretically one could compute:

1) how much average UHI warming occurs from going from 0 to 20 people per sq. km, then

2) the average warming going from 20 to 50 people per sq. km, then

3) the average warming going from 50 to 100 people per sq. km,

etc.

If we can compute all of these separate statistics, we can determine how the UHI effect varies with population density, going from 0 to the highest population densities.
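In other words, the cumulative UHI warming at any population density is recovered by chaining those piecewise averages together, e.g.:

$$\Delta T(0\to100) = \Delta T(0\to20) + \Delta T(20\to50) + \Delta T(50\to100).$$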

Unfortunately, the populations of any 2 closely-spaced stations will be highly variable, not neatly ordered like this simple example. We need some way of handling the fact that stations do NOT have population densities exactly at 0, 20, 100 (etc.) persons per sq. km., but can have ANY population density. I handle this problem by doing averaging in specific population intervals.

For each pair of closely spaced stations, if the higher-population station is in population interval #3, and the lower population station is in population interval #1, I put that station pair’s year-average temperature difference in a 2-dimensional (interval#3, interval#1) population “bin” for later averaging.

Not only is the average temperature difference computed for all station pairs falling in each population bin; the average population densities in those bins are computed as well. We will need those statistics later for our calculations of how temperature increases with population density.

Note that we can even compute the temperature difference between stations in the SAME population bin, as long as we keep track of which one has the higher population and which has the lower population. If the population densities for a pair of stations are exactly the same, we do not include that pair in the averaging.
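Continuing the sketch above, the two-dimensional binning could look like this (the bin edges are placeholders, not the intervals actually used):

```python
from collections import defaultdict

# Hypothetical population-density interval edges (persons per sq. km).
BIN_EDGES = [0, 20, 50, 100, 200, 400, 800, float('inf')]

def pop_bin(density):
    """Index of the population interval containing `density`."""
    for k in range(len(BIN_EDGES) - 1):
        if BIN_EDGES[k] <= density < BIN_EDGES[k + 1]:
            return k
    raise ValueError(density)

# Accumulators keyed by (higher-pop bin, lower-pop bin):
# [sum of dT, sum of higher pop, sum of lower pop, pair count].
acc = defaultdict(lambda: [0.0, 0.0, 0.0, 0])

for a, b in station_pairs(stations):    # `stations` as in the sketch above
    if a['pop'] == b['pop']:
        continue                        # identical densities: skip the pair
    hi, lo = (a, b) if a['pop'] > b['pop'] else (b, a)
    s = acc[(pop_bin(hi['pop']), pop_bin(lo['pop']))]
    s[0] += hi['temp'] - lo['temp']     # year-average temperature difference
    s[1] += hi['pop']
    s[2] += lo['pop']
    s[3] += 1

# Per-bin mean temperature difference plus mean densities, the statistics
# needed later for the warming-versus-population-density calculations.
bin_stats = {k: (s[0]/s[3], s[1]/s[3], s[2]/s[3]) for k, s in acc.items()}
```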

The fact that the greatest warming RATE is observed at the lowest population densities is not a new finding. My comment that the greatest amount of spurious warming might therefore occur at the rural (rather than urban) sites, as a couple of people pointed out, presumes that rural sites tend to increase in population over the years. This might not be the case for most rural sites.

Also, as some pointed out, the UHI warming will vary with time of day, season, geography, wind conditions, etc. These are all mixed in together in my averages. But the fact that a UHI signal clearly exists without any correction for these other effects means that the global warming over the last 100 years measured using daily max/min temperature data has likely been overestimated. This is an important starting point, and its large-scale, big-picture approach complements the kind of individual-station surveys that Anthony Watts has been performing.

debreuil
March 4, 2010 2:30 pm

I think it might also be useful to go by economic output rather than population. There are a lot of small towns that don't increase much in population, but are very transformed by an economic boom. I would guess the amount of cement/buildings can be tracked more closely by dollars than by people. Also, the stats may be better for that, or at least provide a second set to cross-reference.

Sydney Sceptic
March 4, 2010 2:32 pm

Nice work!

Michael
March 4, 2010 2:40 pm

“The lack of systematic auditing of the IPCC, NOAA, NASA or East Anglia CRU, leaves a gaping vacuum. It’s possible that honest scientists have dutifully followed their grant applications, always looking for one thing in one direction, and when they have made flawed assumptions or errors, or just exaggerations, no one has pointed it out simply because everyone who could have, had a job doing something else. In the end the auditors who volunteered — like Steve McIntyre and Anthony Watts — are retired scientists, because they are the only ones who have the time and the expertise to do the hard work”
The Money Trail
http://www.abc.net.au/unleashed/stories/s2835581.htm
Carbon Market Update
“Investors are becoming less convinced that a global carbon market, estimated to be worth about USD 2 trillion by the end of the decade, can be established as uncertainty over global climate policy persists.
The absence of legally binding global climate deal and a federal emissions trading scheme in the United States are standing in the way of the market in global emissions trading growing to achieve yearly turnover of USD 2 trillion by 2020.
“There will only be a USD 2 trillion market if the US gets on board,” Trevor Sikorski, head of carbon research at Barclays Capital, told Reuters at a carbon conference in Amsterdam.”
Hopes For USD 2 Trillion Global Carbon Market Fade
http://www.moneycontrol.com/news/business/hopes-for-usd-2-trillion-global-carbon-market-fade_444850.html

Jim
March 4, 2010 2:41 pm

Someone once postulated that 50 well placed thermometers would be adequate to gauge average global temperature trends. Any statisticians care to comment? Are there 50 uncontaminated thermometers that have 100 year records and are somewhat well placed?

graham g
March 4, 2010 2:47 pm

I enjoy reading your articles. Thank you.
Two points to consider.
1. Willis Island, off the coast of North Queensland, Australia, has a very isolated station site with reliable records kept for cyclone forecasts that may be worth your consideration. I saw the data on a blogsite some time ago. You might have to request the release of the data from the BOM or CSIRO in Australia.
2. I'm not sure your hard work will change anything in the "scientific peer reviewed establishment". It seems to me that the UN Agenda 21 objectives are having to be met by your scientists to satisfy your government's goals.

George E. Smith
March 4, 2010 2:53 pm

“”” The UHI effect leads to a spurious warming signal which, even though only local, has been given global significance by some experts. Many of us believe that as much as 50% (or more) of the “global warming” signal in the thermometer data could actually be from local UHI effects. The IPCC community, in contrast, appears to believe that the thermometer record has not been substantially contaminated. “””
This paragraph, Roy, captures what I think is the essence of the problem.
Few sane persons would doubt that the structural trappings of MAN do have an influence, both on the local environs of those trappings, and also on the measurements that are made in such places to represent what is going on there.
The airport runway is a classic case in point. Those "weather" stations exist there for the specific purpose of telling the aviation community the important information they need to judge the current safety of operations from that runway; they were never intended to be a part of a global "Climate" reporting network; but they are; or some are.
And your process of trying to correlate at least the local effect of the man-trappings to simpler measures like population density sounds about as ingenious as anything one might conjure up as a "proxy" for concrete and Weber Grills.
UHIs do tend to get hotter than the average landscape they used to be, so it is proper to measure them.
The error comes in trying to extend the influence of the measured UHI far beyond its real sphere of influence.
That to me seems to be an error in methodology; and not something which calls for "correction" of the UHI measurement. The correction called for is in curbing the radius of influence assigned to the UHI; not in changing its value.
But it sounds like you have tumbled to an interesting “proxy”; I won’t say “stumbled over”, because you ain’t the stumbling type.
I look forward to when we can read the expurgated version of your paper, when you feel it is ready for prime time. In the mean time, thank you for letting us all kick you in the shins while you are working on this.
George

latitude
March 4, 2010 2:56 pm

Dr. Spencer, I apologize up front for this post.
I think the world of you.
It looks to me like you are trying to massage, manipulate, and squeeze old temperature data that you know is contaminated in the first place.
That is what got us to this point, and no different than what has already been done.
It also gives credence to a “theory” that is so shaky, it can’t even stand on one leg.
What I would like to see is more work proving or dis-proving the theory of AGW. Until that is done, it does not matter if a chicken laid frozen eggs in 1850, or if the eggs came out hard boiled.

March 4, 2010 2:58 pm

A link to a file with the values that are plotted in the graphs would be helpful, and also a graph with a logarithmic x-axis, to make a direct comparison with Torok et al. possible:
http://www.warwickhughes.com/climate/seozuhi.htm
Torok, S.J., Morris, C.J.G., Skinner, C. and Plummer, N. 2001. Urban heat island features of southeast Australian towns. Australian Meteorological Magazine 50: 1-13.
thank you

James Sexton
March 4, 2010 3:07 pm

Jim (14:41:26) :
Someone once postulated that 50 well placed thermometers would be adequate to gauge average global temperature trends. Any statisticians care to comment? Are there 50 uncontaminated thermometers that have 100 year records and are somewhat well placed?
Hmm……7 per continent. Nope. Don't think so. Of course, the word "uncontaminated" probably isn't used properly here. Even if the population is 0, you still have to account for elevation, proximity to water, heck, even the color of the rocks nearby. Just my thoughts; I'm sure there are others who would disagree.

b.poli
March 4, 2010 3:13 pm

Let the adventure begin. No hidden peers or pals, open. Come on Gavin, James or Phil – your productive input please.
The business model of science publishers will be challenged, perhaps revolutionized. But from what we have learned from the CRU emails, their model looked more than pale anyway.

Daniel H
March 4, 2010 3:14 pm

Is the trend line supposed to look so fantastically logarithmic? If so, what are the implications of that (if any)? I’m not a statistician so forgive me if this question is ignorant.
http://www.statemaster.com/encyclopedia/Image:Graph-of-common-logarithm.png

Greg Cavanagh
March 4, 2010 3:15 pm

Now that I understand what this study is attempting to achieve, and what metric you’re planning on using and why, I might be able to add to your efforts in a small way.
I would recommend identifying particular sites to be specific exclusion cases. Airports and sewerage treatment plants for example. These may be added to the global temperature record only after another more specific study is conducted for their bias.
You appear to be making a set of recommended adjustments based on population only. But I think it's more likely you'll need to identify environment types. For example: proximity of ocean or lake, height above sea level, any prevailing winds, trapped valleys subject to fog or temperature inversions, industrial areas, city centres, suburban areas, decentralised towns, etc.
A range of site conditions will make for a longer lasting adjustment set which could be applied to any site around the world for true global temperature gauge adjustments.
A hell of an effort in front of you, but it does need to be done. I wish that I could be involved, but alas.

Mike Ewing
March 4, 2010 3:16 pm

Dr Spencer says:
“The fact that the greatest warming RATE is observed at the lowest population densities is not a new finding. My comment that the greatest amount of spurious warming might therefore occur at the rural (rather than urban) sites, as a couple of people pointed out, presumes that rural sites tend to increase in population over the years. This might not be the case for most rural sites.”
This of course is very true… But I can say with relative certainty that there have been far larger changes in rural settings in recent decades compared to urban ones, in regards to modernization of agriculture/horticulture and the required transport… I myself have lived rural all my life in New Zealand, and in only twenty years the landscape I grew up in has changed dramatically… the roads are sealed!!! Farmers knock hills off, and re-contour paddocks etc. (if yer can afford a bulldozer… why not eh)… but all of this would be no help… However, I'd imagine that local councils (or whatever they're called wherever) would have records of public expenditure on roading development/public works, per district, which may be a possible proxy? Just because it would probably be related to economic expansion in that area… no council is going to spend money to run roads up to a recluse who lives in a cave, but they will to get milk tankers to the more recently developed large commercial farms etc.
But this in itself would be a huge task to try and compile, and could well be useless. But just a thought.
Good luck and all the best with yer work

George E. Smith
March 4, 2010 3:17 pm

“”” Jim (14:41:26) :
Someone once postulated that 50 well placed thermometers would be adequate to gauge average global temperature trends. Any statisticians care to comment? Are there 50 uncontaminated thermometers that have 100 year records and are somewhat well placed? “””
Well Jim, I would say (as another someone) that “someone” didn’t know what they were talking about.
Bear in mind that the changes being sought are extremely small. We are told for example, that a complete adherence to the Kyoto accords, as to curbing future CO2 emissions, would likely result in a warming reduction over the next 50 years, that would be too small to even observe.
So in attempting to measure, to that degree of sensitivity, a global continuous function of time and space, with clearly known temporal cycles of at least 24 hours and 365 days, and a spatial extent that covers a total extreme temperature range of about 150 deg C, all of which may be present simultaneously on a northern summer midday, it simply is not a matter of statistics.
It is a question of sampled data theory; and that theory says that even the average value of a continuous function is not recoverable in the presence of out-of-band signals at just twice the sampling rate. Forget the central limit theorem, and any other trappings of statistical mathematics; that is not where the problem lies; the problem is buried under a mountain of aliasing noise caused by quite inadequate sampling procedures.
50 thermometers won’t do the job. 50,000 might have a chance, but I wouldn’t bet on it.
Part of the problem comes in your use of the word “trend”.
The word itself conjures up the result of observations over an extended period of time.
The trouble is that NO interval of time is sacred, when it comes to assigning an appropriate window to look for a “trend”.
Any extensive analysis of "climate data", over any geological time scale you want to examine, will clearly show that the data has all the earmarks of 1/f noise. Not that I am claiming that the data DOES fit a 1/f noise spectrum.
Another way of putting it would be to say the data is fractal in nature; and no matter what time scale one chooses to look for a trend, similar appearances will occur at both longer and shorter time frames.
So any of these “regression” analyses that climate statisticians seem to like to indulge in, is unlikely to lead to any conclusions about what might happen over any other time frame.
Remember that regressions and trend analyses, and smoothing algorithms, are merely processes for throwing away actual real data, often gained at great pain and expense; and replacing that real data with completely fictitious pseudo data.
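A toy numerical illustration of this wagon-wheel aliasing (a sketch with arbitrary numbers, not a claim about the actual station data):

```python
import numpy as np

# True signal: a pure 24-hour temperature cycle, constant 15 C mean, no trend.
t_fine = np.arange(0.0, 24 * 365, 0.1)                 # hours, one year
truth = 15.0 + 10.0 * np.sin(2 * np.pi * t_fine / 24.0)

# Sample every 25 hours -- below the Nyquist rate for the daily cycle.
t_samp = np.arange(0.0, 24 * 365, 25.0)
samples = 15.0 + 10.0 * np.sin(2 * np.pi * t_samp / 24.0)

# The samples trace out a slow, spurious 25-day oscillation (the backward
# wagon wheel), though the true signal has no variability on that scale.
print(round(truth.mean(), 2))         # ~15.0, the true mean
print(round(samples[:30].mean(), 2))  # ~16.1: a month of samples is biased ~+1 C
```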

EricH
March 4, 2010 3:17 pm

This is the sort of research that should have been done by government operated Met Offices to check that their figures were correct; not by an individual. Surely at least one government Met Office, somewhere in the world’s 203 nations, has actually done some work similar to this; if not it is a dreadful oversight and omission which has cost, is costing and will cost us all dearly.
Thank you Dr. Spencer for taking the time to do this vital research. I wish you well for when this research is published in the scientific literature.

March 4, 2010 3:18 pm

I grew up with parents who were born in the 1920's. To me, the UHI effect grew after WW2, as the age of electricity and air-conditioners arrived in the western world.
At the turn of the 20th century, when you are suggesting that you start looking at records, sidewalks were wooden in places instead of concrete, and horses were still a means of traffic congestion. Roads were dirt, instead of asphalt or concrete. Roofs were wood shingles instead of hydrocarbons, coated in black, sealant.
To me, UHI is the effect of a post-world-war industrialization of humanity's lifestyle. It's not an effect you can remove from the readings with any consistency. In fact, it's the removal of it that has allowed the chicanery in the temp records to happen in the first place, in some cases.
If you change the micro-environment that you live in to a warmer bias, you changed it. That is the temp, as read at that location. It's micro-environmental change; it is not Climate Change.
I think we have to be really really careful, when we suggest why we should be changing temp records, because of X reasons. I don’t like X in this case.
I think that temp records should stay in the raw. You should be able to turn off stations in the models based on if you want to remove urbanization from the record. But finding a golden answer for UHI, to apply to each station in question, is what got us into trouble. In my personal humble opinion sir.
Best Wishes,
Jack Barnes

Kevin Kilty
March 4, 2010 3:19 pm

Jim (14:41:26) :
Someone once postulated that 50 well placed thermometers would be adequate to gauge average global temperature trends. Any statisticians care to comment? Are there 50 uncontaminated thermometers that have 100 year records and are somewhat well placed?

It is common for people to think that more data means better result. This is not true. If more data are not adding independent information into an analysis, then less data will do just as well. I’m not certain that just 50 thermometers would be adequate, but it is possibly so. If one could find 50 locations representative of all climatic regions, well sited, undisturbed, unbiased, well instrumented and providing long records. Do you think one could find 50 such locations? How would we certify a site as representative of a region? These aren’t trivial concerns.
The present state of the land temperature record, though, reminds me of a statement, made in earnest, by one of my Ph.D. committee when I pointed out the hopeless state of a particular set of seismic data. “Sure,” he admitted, “the data are crap; but, there is so much of it!”

Ben
March 4, 2010 3:20 pm

Tilting at windmills:
Our energy policy is being driven by EU diktat
[ http://www.youtube.com/watch?v=G-ENhGRJ028 ]

pat
March 4, 2010 3:22 pm

The fightback! Whereas Reuters and the TV stations ignored the UK Parliamentary Inquiry on Climategate, watch how this 'insiders' review goes viral in the MSM.
4 March: Financial Times: Review backs man-made global warming
By Clive Cookson in London
The case for man-made global warming is even stronger than the Intergovernmental Panel on Climate Change maintained in its official assessments, according to the first scientific review published since December’s Copenhagen conference and subsequent attacks on the IPCC’s credibility.
An international research team led by the UK Met Office spent the past year analysing more than 100 recent scientific papers to update the last IPCC assessment, released in 2007.
Although the review itself preceded the sceptics’ assault on climate science over the past three months, its launch in London on Thursday marks a resumption of the campaign by mainstream scientists to show that man-made releases of greenhouse gases are causing potentially dangerous global warming.
“The fingerprint of human influence has been detected in many different aspects of observed climate changes,” said Peter Stott, head of climate monitoring at the Met Office Hadley Centre for Climate Research. “Natural variability, from the sun, volcanic eruptions or natural cycles, cannot explain recent warming.”
The review, published in the journal Wiley Interdisciplinary Reviews: Climate Change, found several “fingerprints” of warming that had not been established by the time of the last IPCC assessment but were now unambiguously present.
One is human-induced climate change in the Antarctic, the last continent where regional warming has been demonstrated….
A separate study by Russian and US scientists, published today in the journal Science, shows methane, a powerful greenhouse gas, is escaping from the seafloor of the warming Arctic Ocean more rapidly than has been suspected
http://www.ft.com/cms/s/0/9513bee6-27b3-11df-863d-00144feabdc0.html
5 March: UK Times: Ben Webster: 95 per cent chance that Man is to blame for global warming, say scientists
The evidence that human activity is causing global warming is much stronger than previously stated and is found in all parts of the world, according to a study that attempts to refute claims from sceptics.
The “fingerprints” of human influence on the climate can be detected not only in rising temperatures but also in the saltiness of the oceans, rising humidity, changes in rainfall and the shrinking of Arctic Sea ice at the rate of 600,000 sq km a decade.
The study, by senior scientists from the Met Office Hadley Centre, Edinburgh University, Melbourne University and Victoria University in Canada, concluded that there was an “increasingly remote possibility” that the sceptics were right that human activities were having no discernible impact. There was a less than 5 per cent likelihood that natural variations in climate were responsible for the changes. ..
However, a section of the study that said changes in hurricane activity were poorly understood is likely to be seized on by sceptics…
The study found that since 1980, the average global temperature had increased by about 0.5C and that the Earth was continuing to warm at the rate of about 0.16C a decade. This trend is reflected in measurements from the oceans. Warmer temperatures had led to more evaporation from the surface, most noticeably in the sub-tropical Atlantic, said Dr Stott. As a result, the sea was getting saltier. Evaporation in turn affected humidity and rainfall. The atmosphere was getting more humid, as climate models had predicted, and amplifying the water cycle. This meant that more rain was falling in high and low latitudes and less in tropical and sub-tropical regions.
http://www.timesonline.co.uk/tol/news/environment/article7050341.ece
Al Gore To Give Free Lecture At Duke
03/04/10 12:17PM
Former Vice President Al Gore is coming to the Triangle. He’s slated to deliver the 2010 Environment and Society Lecture at Duke University.
The lecture is part of an ongoing series that brings in prominent figures who are helping build a sustainable future.
The lecture is free and open to the public, but you will need tickets to attend. The event is April 8, at 6 p.m. in Page Auditorium at Duke. For more information and to secure tickets, visit http://www.nicholas.duke.edu/deanseries.
http://www.wchl1360.com/details3.html?id=13749

Rob
March 4, 2010 3:23 pm

I love the work you do here and I am convinced by your and others' comments, Anthony. I also look at the Real Climate site, which on the whole I find desperate and unconvincing. They recently posted this, though…
Can anyone with expertise in the area comment, either for or against this work? I'd be interested to hear any thoughts. And apologies that this post isn't commenting on this particular article.
http://www.realclimate.org/index.php/archives/2010/03/climate-change-commitments/#more-3070

Dave F
March 4, 2010 3:30 pm

OK, in this light, using population makes a ton more sense. I thought you were trying to quantify an adjustment for the current record, hence the references to station siting. The large red spot in Canada does not appear to be explainable with this method, but E.M. Smith does have a plausible explanation for that, especially if that record is being extrapolated from one containing a UHI signal.

Al Gore's Brother
March 4, 2010 3:32 pm

Wouldn't land use trends affect rural temperature measurements as well? Farming/oil drilling, for instance, may have a warming effect because vegetation is cleared vs., say, wandering meadows or even forested land. Does this begin to get too complicated for your model?
Nice job so far. I think this will shed some much needed light on the subject of UHI.

pat
March 4, 2010 3:32 pm

mention of gore in norway on previous threads. here, for anyone who can translate norwegian:
4 March: Al Gore for solenergi
http://www.framtiden.no/201003042849/aktuelt/klima/al-gore-for-solenergi.html

Scott Covert
March 4, 2010 3:35 pm

Since we (Skeptics) are funded by Big Oil, can’t we just get high resolution multi frequency infrared photos of urban and rural areas at T-min and T-max comparing adjacent urban/rural areas for UHI?
Oh that’s right we don’t really have financial backing.

DocMartyn
March 4, 2010 3:35 pm

There is a huge exclusion zone around Chernobyl; there must have been thermometers in the exclusion zone before and after the reactor did the big firework. This would give you the UHI backwards, as the population left and vegetation returned.

M. de Lange
March 4, 2010 3:36 pm

debreuil (14:30:03) :
“I think it might also be useful to go by economic output rather than population”
Why not use the amount of kWh delivered by energy companies? They should keep records of that. Gr. M.

Duster
March 4, 2010 3:42 pm

Jim (14:41:26) :
There are enormous methodological problems with that assertion. The most obvious is defining "well placed." Ideally, "well placed" would mean located away from any human-induced environmental alterations that alter regional "climate" as reflected in thermometer data. The problem with that idea is that until the advent of electrical methods of data recording and transmission, thermometers had to be located where they could be accessed by a human reader daily. That means that for all older data, no thermometer is or could be located away from some source of UHI as defined by Dr. Spencer. In fact the "best" will be located in situations where the effect is greatest (0-10 people per square kilometer). I would speculate that the manner in which Dr. Spencer's curve approaches a linear trend after population density reaches about 250 people/sq. km. (about one person per 0.4 hectares) is because no new agricultural changes to vegetation are likely. After that the effects are primarily due to increasing urban changes – the development of market and manufacturing centers and other large population aggregates that are not agriculturally based.
If an agreement (consensus) could be reached concerning what “well placed” implied, there is still a problem concerning what an adequate sample period would be. The climate is a natural system and is 100s of millions of years old. If 100 years of data were good enough to sort out natural trends in the climate, there are numerous data sets that span at least that range. It seems pretty clear though, even among the AGW school, that 100 years is not regarded as an adequate sampling period. There is a sound reason for the search for temperature proxies, even if the selection of such proxies has heretofore in some cases been pretty questionable.
Nor is there anything like a real consensus on just how to define "climate." There is broad agreement that it is not weather, but just what does comprise climate seems to be a very active area of debate – read some of Dr. Pielke, Sr.'s discussions regarding climate for an idea.

Alan S
March 4, 2010 3:55 pm

Jim (14:41:26) :
“Someone once postulated that 50 well placed thermometers would be adequate to gauge average global temperature trends. Any statisticians care to comment? Are there 50 uncontaminated thermometers that have 100 year records and are somewhat well placed?”
Jim, I would narrow it down to two: one Northern Hemisphere, one Southern Hemisphere, mid-latitude. Obviously well placed and rural.
Everything else is hand waving.

Daniel H
March 4, 2010 3:57 pm

@DocMartyn
"There is a huge exclusion zone around Chernobyl; there must have been thermometers in the exclusion zone before and after the reactor did the big firework. This would give you the UHI backwards, as the population left and vegetation returned."
Another good candidate for measuring the reverse-UHI effect is the Detroit area. Over the past 50 years it has been depopulating nearly as fast as Chernobyl did during the 1980s!
http://upload.wikimedia.org/wikipedia/en/thumb/b/bf/Detroit_population_and_rank.svg/500px-Detroit_population_and_rank.svg.png

Ed
March 4, 2010 4:24 pm

I still don’t understand (with some earlier commenters) how a population of 1 – 20 can be considered “Urban”. Further, if it is true that an area with such minimal population has a warmer temperature, this means that most of the global landmass is affected, that is everywhere that humanity has settled. If this is the case, “Urban Heat Island” would have to have both “Urban” and “Island” removed from the label (Urban because the effect is clearly greatest in rural areas, and Island because urban areas are no longer islands under this definition) and we’d be left with, um, “Heat”.
And the ‘spurious’ signal becomes the actual…
Maybe on the whole it's better to stick with satellites?

wsbriggs
March 4, 2010 4:26 pm

Outstanding work!
Regional differences in construction and coloring will affect the UHI, as well as changes in construction methods. Asphalt roofs have a different thermal spectrum than Bermuda Tile roofs, etc. All of this needs to be included in a detailed analysis to get improved accuracy, but I doubt that doing so will change the post knee portion of the curve significantly.
Brilliant approach, just elegant.

JDN
March 4, 2010 4:43 pm

Roy- Does the station warm bias = difference between pairs of stations within 150 km? Why not go with something more descriptive? Also, is the zero point of this graph being set by the curve fit?
Does this also mean that all stations w/in 150km where the population density is higher were warmer than all stations where the population density is lower? If so, I can suggest that mountains have a much, much lower population density and should almost always be colder than the lower-lying cities. Maybe that’s a second-order correction? But how many of those extremely low-density station pairs represent mountain vs. low-lying city?

JDN
March 4, 2010 4:44 pm

BTW, that explanation you prepended works pretty well.

Pamela Gray
March 4, 2010 4:44 pm

Given the current emotional set of journal editors, I don’t think your research has a chance to see the light of day. Not unless it is tightened down. You must reduce the unknown variables or else your thesis will be torn to shreds.

Scott
March 4, 2010 5:05 pm

Couple comments on this post and the previous one.
First, I think this is a very important piece of work to consider. Once a reasonable and quantitative correction factor for UHI is implemented and a modified temperature dataset presented, the picture of possible AGW will be much clearer (and harder for “warmers” to dismiss if the skeptics are correct).
As others have pointed out, population isn’t the greatest proxy, but I think it’s the only one where data is good enough for the past 100+ years to make an effective UHI proxy. Presumably, the UHI/population density figure changes with both population density AND date. Dr. Spencer – do you think there’s enough population data resolution in older records to estimate the effect from before 1990 (the older stuff is what you’re targeting)? Also, this approach (or something similar) is still useful on current data, IMO.
One note – I've gone by the rule of "3 degrees Fahrenheit per 1000 ft" since my professor mentioned it in a college geography class. Thus, I was very happy to see the 5.4 C/km number calculated in your work, which computes to 2.96 F/1000 ft, or 3.0 if we keep track of sig figs ;-).
As for the comment from Jim @ 14:41:26, I've always thought something similar… a few excellent temperature readings are far better than masses of bad ones. Assuming that "bad" thermometers are 5 times "noisier" (note that bad ones add systematic errors, not necessarily noise, but the methods used to correct for the errors likely introduce the equivalent of noise), it would take 1250 "bad" thermometers to match 50 "good" ones. Actually finding excellently-placed thermometers or ensuring the data are acceptable is an entirely different, and very difficult, matter (as noted by other commenters).
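For reference, both of Scott's figures check out. The lapse-rate conversion is straight unit arithmetic, and the thermometer count follows because the standard error of an average scales as $\sigma/\sqrt{n}$, so instruments 5 times noisier need $5^2 = 25$ times as many:

$$\frac{5.4\ ^{\circ}\mathrm{C}}{\mathrm{km}} \times \frac{1.8\ ^{\circ}\mathrm{F}}{1\ ^{\circ}\mathrm{C}} \times \frac{1\ \mathrm{km}}{3.2808 \times 1000\ \mathrm{ft}} \approx \frac{2.96\ ^{\circ}\mathrm{F}}{1000\ \mathrm{ft}}, \qquad 50 \times 5^{2} = 1250.$$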

George E. Smith
March 4, 2010 5:36 pm

Well it would seem from reading the posts from Kevin Kilty and others, that there are a lot of people who have never watched a horse opera on television (or the movies) in which the runaway wagon wheels are clearly revolving backwards.
That by itself would seem adequate proof that you cannot expect to get believable results from insufficient data.
The problem is similar to that of the traffic cop, who presents the judge with a picture that clearly shows the defendant’s car on the wrong side of the road, and rotated 90 degrees to the direction of other traffic on the road.
Whereupon the judge slaps the hapless driver with a reckless driving conviction, for being sideways-on, on the wrong side of the road.
A movie of the actual circumstances; i.e. a data set with enough "thermometers" to reveal what actually happened, would show that the defendant was simply driving through on a cross street, when the cop snapped him sideways-on, on the wrong side of the road.
The general theory of sampled data systems is VERY WELL KNOWN, and in practice IT WORKS, because our entire high-bandwidth data communications, telephonic voice, and other high-bandwidth traffic are entirely dependent on proper sampling of all of those signals.
Those who ignore that theory to save some thermometers do so at their own risk.
So on a nice hot northern summer midday, in a North African or Middle Eastern desert, the ground temperature might be +60 deg C or higher (140 F or 333 K). Simultaneously it is the dark of winter midnight at or near Vostok station in Antarctica's highlands, and the temperature as low as -90 deg C (-130 F or 183 K). And due to a famous argument in Galileo's "Dialogue on the Two World Systems," we know that every possible value of temperature between those two extremes will exist somewhere on the planet at the same time (ok, to be pedantic, IF those are the two extremes at the moment).
So now where do we want to put those two thermometers, the NH and SH temperature monitors, and be sure they are well away from urban blight?
You cannot statistically create information where none exists.

xyzlatin
March 4, 2010 5:54 pm

I commend Dr Spencer's effort and work, mainly because it throws up arguments against using thermometers at all to "measure" the Earth's temperature, and throws a bit more doubt on the methodology: a useful holding strategy.
However, I believe that measuring the treeline and measuring the extent of the snowline, throughout all countries including the Southern Hemisphere, is a more valid way of measuring what is happening to the Earth. The response of the vegetation worldwide at the “coalface” of the snowlines, would be equivalent to millions of thermometers.
One of the interesting things I have noted about the debate over the trees showing a cooling trend opposing the thermometer record, which led to the whole “hiding the decline” debacle, is the complete absence of anyone standing up for the trees! What if the trees WERE correct and the thermometers were wrong?
The trees have a "dog in the fight" in that if they don't adapt, and quickly, they die. The thermometers, however, by themselves as inanimate, man-made items, can be wrong and nothing happens to them. (I believe that each and every mercury thermometer ever made has an inbuilt error factor of ±1 degree, which is greater than the amount of warming per century supposedly measured by them.)
(By the way, OT but with reference to Australia, you may not be aware that Western Queensland is having widespread floods through the Channel Country, which are breaking records going back over 100 years in many towns.)

janama
March 4, 2010 6:02 pm

Dr Spencer – Dr Simon Torok did a similar kind of calculation back in 1999
http://reg.bom.gov.au/amm/docs/2001/torok_hres.pdf
Urban heat island features
of southeast Australian towns
Simon J. Torok and Christopher J.G. Morris
School of Earth Sciences, University of Melbourne, Australia
and
Carol Skinner and Neil Plummer
National Climate Centre, Bureau of Meteorology, Australia
(Manuscript received December 1998; revised June 2000)

xyzlatin
March 4, 2010 6:05 pm

Sorry, the link for the flood information is here: http://www.couriermail.com.au/news/st-george-charleville-brace-for-more-rain/story-e6freon6-1225837127753
UPDATE 11:30AM WEARY Charleville residents confront their third flood in five days as St George prepares for worst flood in 120 years with predictions of more rain

March 4, 2010 6:06 pm

George E. Smith (14:53:17)
UHIs do tend to get hotter than the average landscape they used to be, so it is proper to measure them.
The error comes in trying to extend the influence of the measured UHI far beyond its real sphere of influence.
That to me seems to be an error in methodology; and not something which calls for "correction" of the UHI measurement. The correction called for is in curbing the radius of influence assigned to the UHI; not in changing its value.
I really do enjoy, appreciate, and second this point. From a temperature record standpoint, UHIs should be seen as blips on a map (maybe something like this) – not giant blobs… and I think that from a climate modeling standpoint, UHI should be seen as a local forcing not something that needs to be corrected out
Well said!

GilesE
March 4, 2010 6:12 pm

A very interesting article and some great debate – perhaps a role model for an open peer-review process for future climate discussion.
On a slight tangent, Dr Spencer: as described, the methodology you used involved averaging the hourly temperature data for each station to calculate a mean temperature for that station. My understanding is that the average daily temperatures generally used in historical analysis of temperature trends, including climate models, are actually the mean of the maximum and minimum temperatures recorded for that day, not an average of hourly measurements (presumably because the historical record is based on measurements recorded using those old max-min thermometers I vaguely remember from my long-distant schooling). As it looks like you already have the data, I am curious how closely the hourly computed average daily temperature correlates with the min-max average and, if there are systematic differences, what implications/caveats that has for assuming changes of the min-max average over time are the same as changes in the average temperature.
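The comparison GilesE describes is straightforward to sketch (a minimal illustration assuming one station's hourly data arranged one row per day; the array layout is hypothetical):

```python
import numpy as np

def compare_daily_means(hourly):
    """Compare true 24-hour means with the (Tmax + Tmin)/2 convention.

    `hourly`: array of shape (n_days, 24) of temperatures for one station.
    """
    true_mean = hourly.mean(axis=1)
    minmax_mean = (hourly.max(axis=1) + hourly.min(axis=1)) / 2.0
    bias = minmax_mean - true_mean   # per-day systematic difference
    return true_mean, minmax_mean, bias
```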

George Turner
March 4, 2010 6:19 pm

George E. Smith,
If the temperature measurements really do resemble 1/f noise, well, I know of a guy who has a Fortran program that can convert it into a hockey stick. ^_^
Back to Dr. Spencer’s post, it tells me that none of the methods being used to account for UHI account for all of it, because the rural stations are also being affected (Rural Heat Reefs?) .
We’ve probably encountered this problem before, with some cave men during the last ice age arguing that the existing climate data sets included too many measurements from inside their caves, while others argued that rural stations were unreliable because of station moves due to advancing and retreating glaciers.

David Schnare
March 4, 2010 6:20 pm

latitude (14:56:35) :
Latitude has it right. There is no good reason to resqueeze rotten fruit in the hopes that the juice will come out fresh.
Further, we won't know if there are enough "clean rural sites" until we look for them. As the SPPI report suggests, they are clearly not as dirty as the urban sites, and as our work is beginning to show, there will probably be enough to fill a 5×5 grid in the US.

Peter Hartley
March 4, 2010 6:21 pm

I believe that in quite a few places you have written "population" where you may have meant "population density". Saying "population" makes one think you are talking about absolute population levels, not densities.
A more substantive question is — why not just estimate a multivariate regression with population density and station elevation and distance from water as explanatory variables? In fact, you could use log(population density) to allow for the non-linearity and perhaps also higher order terms such as (log(population density))^2. The coefficients on the population density variable(s) would then give you the UHI correction you seek.
I can see one advantage of the pairing idea: two paired stations might have a common unmeasured factor affecting their temperatures. Looking at the difference then cancels the common error. Even if you added other control variables, such as climate zone indicators, you would still no doubt miss things that would then appear in the error term and could have been eliminated via differencing. Still, it would be interesting to do the multivariate regression and compare the coefficients across methods.
Thinking of using regression methods of course brings up the McKitrick & Michaels paper. Why not regress on the satellite temperatures too and use coefficients on population density or other socioeconomic variables to correct for UHI?
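A minimal sketch of the regression Peter Hartley suggests (the variable names and the plain ordinary-least-squares setup are illustrative assumptions, not a published method):

```python
import numpy as np

def uhi_regression(temp, pop_density, elevation, dist_water):
    """OLS fit: temp ~ log(pop) + log(pop)^2 + elevation + distance to water.

    Inputs are 1-D arrays over stations (hypothetical data layout).
    """
    lp = np.log(pop_density + 1.0)   # +1 avoids log(0) at unpopulated sites
    X = np.column_stack([np.ones_like(lp), lp, lp ** 2, elevation, dist_water])
    beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
    return beta  # beta[1] and beta[2] carry the population-density (UHI) signal
```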

Mike Ewing
March 4, 2010 6:36 pm

NickB. (18:06:14)
Yea, for the literal meaning of UHI… if you're meaning a change in microclimate through human alterations of the environment, maybe not… I'd imagine that in the USA at the beginning of the century you were pastoral farmers by and large… and now you are factory farmers, because you can grow more KGs a hectare in grain than grass, and get more productivity a hectare… Things like this could conceivably play a role in giving artificial biases. But how would you find out?
Hell, chances are in cold areas last century, the guy doing the recording just guessed on cold nights, 'cause he didn't want to go outside :-0 Really the satellite record is the only one you could claim any certainty on. What a conundrum in this age of instant answers, eh!

Scott
March 4, 2010 6:43 pm

Several commenters on both this thread and the original one have talked about the large effect/slope at "rural" sites, mentioning that it doesn't make sense. I think one possible rationale is to think about where the monitoring site sits relative to the population. I'm guessing the thermometer/sensor is closer to the local population at these sites than one might assume.
Think about it this way – if one placed a grid of sensors every 5 meters across a sq. km, one would have 40,000 readings. In a heavily urbanized environment, the average of these readings is probably close to the reading of the sited sensor, simply because the UHI is uniform. However, in more rural areas, my hunch is that the actual sited sensor would read near the high end of the 40,000-sample distribution, because it is located unrealistically close to a local HI (house cluster, airport, factory, you name it).
That's just a hunch, but it would explain the trend. Perhaps one way to verify this would be to compare the variance at the rural sites with that at the urban sites. My hypothesis is that the rural sites will show more variance.
Any thoughts?
-Scott

wayne
March 4, 2010 7:09 pm

Roy, I applaud your approach. I assume you mean that if this relationship can be firmly and correctly established, two different parameters can be inferred at the same time. One is the amount that the pre-urbanization globe has warmed up to and including today and two, the increase in temperature any city of a given size should experience on any windless day due to the UHI. Is that close?
Even though the relationship of station heat bias and population density can be statistically proven, those two other parameters mentioned above may prove harder to statistically establish, but I and many others will know they are true in spite of the lack of a statistical stamp of approval. It only requires a logical mind.

Keith Minto
March 4, 2010 7:21 pm

George E Smith
Any extensive analysis of "climate data", over any geological time scale you want to examine, will clearly show that the data has all the earmarks of 1/f noise. Not that I am claiming that the data DOES fit a 1/f noise spectrum.
Another way of putting it would be to say the data is fractal in nature; and no matter what time scale one chooses to look for a trend, similar appearances will occur at both longer and shorter time frames.

This would not have made much sense to me before I read Deep Simplicity by John Gribbin. He made a good fist of explaining fractals, entropy, and order at the edge of chaos. I do amateur audio work, and 1/f (in audio) runs from white noise, which is completely random, to music, which is well structured. In between there is pink noise, brown noise, and so it goes down the line with increasing order (information).
John Gribbin is very much a warmist, and the book did not discuss weather as 1/f noise in much detail at all.
George has mentioned this concept before; perhaps he can elaborate on chaos, fractals, and 1/f noise in relation to weather at some time. We need to come to grips with this factor; its effect may be a directional 'forcing' or may be random, like white noise; we just do not know.
It just may help clarify Spencer’s excellent work.

Stephan
March 4, 2010 7:41 pm

All I can say is that it seems everything has been done to cool down past temps and "heat up" current ones to validate AGW.

Jim Clarke
March 4, 2010 7:43 pm

From pat (15:22:03) :
“The study, by senior scientists from the Met Office Hadley Centre, Edinburgh University, Melbourne University and Victoria University in Canada, concluded that there was an “increasingly remote possibility” that the sceptics were right that human activities were having no discernible impact.”
Never let your enemy state your argument! Does any skeptic here think that human activities are having no discernible impact? Have any of you ever held that belief? I don’t know any atmospheric scientist who believes that. Of course ‘discernible impact’ was never the benchmark for skepticism. We are not being asked to give bureaucrats control of the global energy supply because humans will have a ‘discernible impact’ on climate. Proving a ‘discernible impact’ proves nothing about the AGW argument! Absolutely nothing at all….but it is presented as if they are, therefore, correct and skeptics are stupid.
We are skeptical of a global warming CRISIS! We find the AGW theory to be extraordinarily weak and most of the real world data unsupportive, contradictory or better explained by other factors, some quantified (ocean cycles) and some theoretical (cosmic ray, tropical convection and so on). We find that the temperature record does not correlate well with the CO2 record and that AGW supporters have to make stuff up in order to force the data to fit (mid 20th century cooling). We find the lack of correlation so large, that there is no way that CO2 can be the primary driver of global temperature changes. Natural variations must be stronger and are largely ignored or miscalculated by the IPCC.
If after 100 years of CO2 influence, 20 years of focused science and untold billions of dollars spent to quantify the impending crisis, all they can say is that humans are having a ‘discernible impact’, then they really don’t have much of an argument! I could have told them that 20 years ago for a cup of coffee…and I don’t even like coffee!
The facts remain the same as they have for 20 years. There is nothing outside of computer models that indicates an impending climate crisis! And the only reason that the computer models do predict a crisis is because they are programmed to do so, through the assumption of positive feedbacks for which there is no compelling physical evidence, despite the massive search!
Sorry for the rant. I just hate it when scientists with Phd’s get away with saying such ignorant things in order to make themselves look good. You would think they could just stick to the science if it was that compelling!

George Turner
March 4, 2010 7:46 pm

Keith Minto,
You remind me of another point. Spencer’s analysis is for averages, not monthly or yearly maximum temperatures. What he’s plotting is the [i]average[/i] value of UHI, but as you mention, it would be different on a windless day with no clouds. So even if you used his numbers to correct for UHI, you’d still expect to have many days where the high temperature record is set purely due to the maximum value of UHI that’s not being completely compensated for.
This would matter if, after including Spencer’s adjustment factors, someone started plotting the number of stations setting record temperatures and offered it as further evidence of global warming.

Honest ABE
March 4, 2010 7:48 pm

When I made the point that technology/construction differences would make extrapolating and adjusting past temperature records problematic, I knew you couldn't really correct for it; but I was hoping that you'd mention such factors if you ever publish, since it is best to be honest about the limitations of one's work.

Peter Hartley
March 4, 2010 8:10 pm

I am a little tired of hearing about the Parker study on windy versus calm nights. In statistical parlance, the test has very low power. There could easily be a significant difference between windy and calm nights but his test would not find it. The reason is his measure of windy versus calm is very noisy.
Let me explain the point as follows. Suppose "windy" versus "calm" were determined by the flip of a coin. Then you would not expect to find any difference between rural and urban stations on "windy" versus "calm" nights so determined. But the conclusion that there is no UHI affecting urban versus rural temperatures would then be the wrong conclusion. Really, the result follows simply from the fact that the variable being used to determine the windy versus calm distinction does not actually do that.
In reality, the measure Parker used is not a pure coin flip, but there are good reasons to believe it was a very poor measure of windy versus calm.
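A quick Monte Carlo sketch of this attenuation argument (all numbers invented for illustration): even a real calm-night UHI signal shrinks toward zero as the windy/calm classification degrades toward a coin flip.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000                    # nights
true_uhi = 0.5                 # extra urban-minus-rural warming on calm nights (C)
calm = rng.random(n) < 0.5     # the actual state of each night

# Observed urban-minus-rural difference: UHI on truly calm nights plus noise.
obs = true_uhi * calm + rng.normal(0.0, 1.0, n)

for p_correct in (1.0, 0.7, 0.5):        # accuracy of the "calm" label
    mislabeled = rng.random(n) > p_correct
    label = calm ^ mislabeled            # noisy windy/calm classification
    gap = obs[label].mean() - obs[~label].mean()
    print(f"label accuracy {p_correct:.0%}: apparent calm-night UHI {gap:+.2f} C")
# Perfect labels recover ~+0.50 C; a coin-flip label shows ~+0.00 C,
# even though the true UHI effect never changed.
```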

wayne
March 4, 2010 8:33 pm

Clarification to wayne (19:09:14) :
… One is the amount that the pre-urbanization globe has warmed up to and including today
should have read
… One is the amount that the pre-urbanization globe has warmed up to and including today with the UHI temperature effect removed

Keith Minto
March 4, 2010 8:44 pm

Jim Clarke (19:43:58) :
"Sorry for the rant." It's OK to rant; I write better when I am fired up about something, but I try not to get fired up on WUWT!
George Turner (19:46:27) : for italics, use <i> and </i> to enclose the text.

Keith Minto
March 4, 2010 8:49 pm

George, use the 'greater than' and 'less than' keyboard symbols to enclose the 'i' and '/i' for italics.
Keith.

George Turner
March 4, 2010 9:00 pm

Jim Clarke,
The way I put it to my neighbors here in horse central is that if the worst of the IPCC's claims are true, one far-off day, when your children are parents, Kentucky will be as hot as Tennessee. This is supposed to scare me?
Sadly, I don’t think we’ll have such luck.
I once wrote a story about a man taking his family in their RV from Dallas to North Dakota, trying to stay ahead of the IPCC’s worst warming predictions by traveling north, looking at a map of the average US temperatures.
Unfortunately he drank to much coffee, got excited, and drove 5 miles up the highway before pulling into a gas station parking lot, where he has to spend the next 10 years to get back on his 2,000 year trip plan.
Most people haven’t considered that a huge (several degrees C) difference in climate occurs in just a few hours of highway travel.

George Turner
March 4, 2010 9:13 pm

Keith,
I'm an old blogger but sometimes hang out on message boards, so occasionally I forget whether I'm doing less-than/greater-than or left-bracket/right-bracket.
Admittedly, it's not as bad a mistake as turning noise into a hockey stick or accidentally feeding a century's worth of temperature records through a paper shredder, but it is embarrassing nonetheless.
What irks me is that some people's sloppiness is rewarded with a supercomputer center and a couple billion from the government, while mine has won me nothing. Perhaps I should produce a late-night infomercial on how to turn abjectly error-prone analysis, unforgivably neglectful data loss, and criminal incompetence into eighteen-wheelers full of taxpayer cash.
Can you do math? No problem!
Can you read a thermometer? No problem!
Can you solve Navier-Stokes equations? No problem!
Here at the climate data center we can turn your inabilities into money, mountains of money!

G.L. Alston
March 4, 2010 10:33 pm

Scott (18:43:33): My hypothesis is that the rural sites will show more variance.
What Dr. Spencer will end up showing is that temperature is driven by land use and changes thereof. I don't know that UHI needs to be filtered out; ultimately one could probably predict the average temperature rise merely by looking at population density and farming. Clearly his data show that land use makes the largest apparent change, and the rest of it is waste heat etc. in population centers (and PHX ought to be a special test case for this, after they seem to have discovered the miracle of watering).
I've never quite understood what everyone thinks is being measured. Surely the only relevant place to measure whether CO2 has any influence whatsoever is a desert at night: essentially no water vapour, the most potent GHG. Simply look at the nighttime measurements of a handful of desert stations over 100 years. If they're rising, we ought to be able to determine the CO2 influence (if any). Why the entire enterprise is more complicated than this is a mystery. A sketch of such a check follows below.
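
A minimal sketch of that check, assuming a hypothetical file desert_tmin.csv that one would first have to assemble from station records; nothing here comes from Dr. Spencer's analysis:

import numpy as np
import pandas as pd

# Hypothetical input: one row per station per year, the annual mean of
# nighttime minimum temperatures (deg C) for long-record desert stations.
df = pd.read_csv("desert_tmin.csv")  # columns: station, year, tmin

trends = {}
for station, g in df.groupby("station"):
    slope = np.polyfit(g["year"], g["tmin"], 1)[0]  # deg C per year
    trends[station] = 10.0 * slope                  # deg C per decade

trends = pd.Series(trends)
print(trends.sort_values())
print(f"mean nighttime desert trend: {trends.mean():+.2f} C/decade")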

Dave Aschim
March 4, 2010 11:02 pm

Don’t get me wrong, this is useful.
But using proxies to make bad data better seems like treading water. Would it not be better to evaluate actual UHI at specific urban stations? The whole concept of an average UHI seems an unnatural value, since surely it differs wildly from station to station. One of my problems with so much climate science is that everything is a model, an extrapolation, a spurious average, or a proxy. Can we not get back to measuring actual things?
Until the data are corrected by real factors rather than average factors, they won't be much use for proving or disproving anything. Perhaps I'm naive.

steven mosher
March 4, 2010 11:35 pm

G.L. Alston: "Simply look at the nighttime measurement of a handful of desert stations over 100 years. If they're rising, we ought to be able to determine the CO2 influence (if any.) Why the entire enterprise is more complicated than this is a mystery."
I did this a while back. It was instructive.

March 4, 2010 11:42 pm

Kevin Kilty: "It is common for people to think that more data means a better result. This is not true. If more data are not adding independent information into an analysis, then less data will do just as well. I'm not certain that just 50 thermometers would be adequate, but it is possibly so, if one could find 50 locations representative of all climatic regions: well sited, undisturbed, unbiased, well instrumented, and providing long records. Do you think one could find 50 such locations? How would we certify a site as representative of a region? These aren't trivial concerns."
The figure is 60 optimally placed stations. See Shen's paper; just go to CA and google it there. It's been discussed before. A toy illustration of the sampling-error point follows below.
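
A toy Monte Carlo (not Shen's actual eigenvector method) of why a smooth, spatially correlated anomaly field needs far fewer stations than an uncorrelated one; the grid sizes, smoothing scale, and lack of area weighting are all simplifying assumptions:

import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

def rms_error(correlated, n_side, trials=100):
    # RMS error of an n_side x n_side station grid at estimating the true
    # mean of a synthetic unit-variance anomaly field (no area weighting).
    errs = []
    for _ in range(trials):
        field = rng.normal(size=(180, 360))
        if correlated:
            field = gaussian_filter(field, sigma=15, mode="wrap")
        field = field / field.std()
        rows = np.linspace(0, 179, n_side).astype(int)
        cols = np.linspace(0, 359, n_side).astype(int)
        errs.append(field[np.ix_(rows, cols)].mean() - field.mean())
    return float(np.sqrt(np.mean(np.square(errs))))

for n_side in (4, 8, 16):
    print(f"{n_side * n_side:4d} stations: smooth field {rms_error(True, n_side):.3f}, "
          f"white noise {rms_error(False, n_side):.3f}")

The smooth field's error falls off much faster than the white-noise field's 1/sqrt(N), which is the intuition behind the small optimal-station figure.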

John Whitman
March 4, 2010 11:45 pm

Dr Spencer,
Please accept the below comment as sincere.
Why are you interested in analyzing these surface temperature records, given that you are a prominent longtime pioneer/leader in obtaining data from spacecraft?
Do you see a significant future project of comparing/synthesizing the satellite and terrestrial data?
John

March 5, 2010 12:04 am

Here is a link for people who care about the number of stations and the sampling error.
Shen is the guy Gavin quotes when Gavin says you just need 60 optimally placed stations.
http://www.math.sdsu.edu/AMS_SIAM08Shen/Shen.pdf

Adam Gallon
March 5, 2010 12:13 am

I do remember seeing a paper regarding land use changes in Florida, by a retired meteorologist, that showed quite clearly that the warming that was occurring was largely linked to deforestation and swamp draining.

Joffre
March 5, 2010 12:15 am

If there are so few uncontaminated sites, then I would suggest that while the heat island effect is real, its ubiquity means it has affected global temperatures, not just local ones. Thus the "man-made" contribution would be quite high, while the "man-made carbon-induced" contribution would be significantly lower than calculated.
This means reducing carbon output would have limited effect. On the bright side, all we have to do is burrow underground or otherwise engineer heat-neutral cities and farms. Or we could just enjoy a slightly warmer climate, which should have a net beneficial effect.
I will add one other point here. It's possible the "man-made" contribution is slowing, not growing. The transition from virgin forests and prairies to farmed land that came with the Neolithic agricultural revolution may have had far more effect than what we are doing now.

March 5, 2010 12:23 am

Steven,
What stations should I pull for a desert review? I will obviously double or triple the total, but I would love a series to use as a baseline, uncontaminated by my own preconceptions to start. I won't hold you to any results derived from them, in any form. I am honestly looking for a list of interesting desert stations.
Your comment made me smile, as the logic was easily understandable. In the middle of the desert, urban growth patterns should be missing, and sprinkler systems should not be contaminating the results, if we can use metadata to confirm a lack of western urbanization around the location itself. They make the best set of baseline long-term stations possible.
A long series of desert locations is quite possibly the single best set of locations possible. Granted, a change in climate at those locations could impact year-over-year data, but that should average out over decades.
Love it...
Please share a list, either in public or in private. I honestly will probably do nothing with it but look at it.

son of mulder
March 5, 2010 12:24 am

What's the average measured temperature increase per decade at the 724 stations you reference in the lowest population category? I assume nearly all of them would always have been in that category, so we ought to get an average somewhere near the AGW effect on thermometers.

March 5, 2010 12:27 am

Here is the Shen paper on the number of stations:
http://www.met.tamu.edu/class/atmo632/(77)ShenKN94.pdf

anna v
March 5, 2010 12:35 am

We are debating the method of correcting the raw temperatures because that is what has happened: corrected averages have been used to produce catastrophic projections, and have been used by the IPCC in the effort to stampede the West into a cap-and-trade pyramid.
Finding errors in their methodology of averaging temperature disputes the catastrophic prophecies effectively, and thus it is very useful.
Starting from scratch, thinking about the problem, I would start with the energy balance. The energy coming in from the sun is measured. The energy going out from the earth can be measured by satellite. That is all we need to know whether the earth is heating or cooling during the period we have satellite measurements, and at what rate.
Temperature is useful for humans; from cultivation to clothing to safety it is a necessary measurement, and that is why it has been recorded since the 19th century.
BUT
Temperature is a proxy of energy: for a black body the energy flux out is given by j = sigma*T^4. Now the earth is not a black body. Each square meter of earth is instead characterized by an emissivity epsilon, giving the gray-body flux j = epsilon*sigma*T^4, and the radiation spectrum of a gray body also differs with epsilon.
Emissivity can change by as much as 30% (there are tables of emissivity; sand is 0.75). Therefore to use temperature and the gray-body formula as a proxy to extract the energy radiated by the earth requires knowledge of all the materials covering the surface of the planet. Suppose we can do that with enough computing power: have 50 types of surfaces, each with an average emissivity and radiation spectrum. In this type of calculation the urban effect would have its own emissivity and spectrum.
Can we get a good estimate of energy radiated out this way, to be compared with the satellite data?
There is the next hurdle: on land the thermometers are 2 meters above the ground, measuring air temperature. To use that temperature in the gray-body formula requires the crucial assumption that the ground temperature is identical to the air temperature. This is not the case. The ground can be tens of degrees colder or hotter than the air at two meters: wind from the south at the North Pole, wind from the Sahara over Europe, wind from the north in the south, etc. There is no real equilibrium between the temperature of the ground and the air above it; that disequilibrium is what creates the winds, after all.
I am not aware that anybody is measuring real ground temperatures.
Even over the seas, the water is much colder in the summer than the air and warmer in the winter than the air (let's hope that at least for the oceans the energy balance was computed from sea surface temperatures).
(The atmosphere has a T^6 dependence, not a black-body one.)
There can be errors of tens of watts if one uses the air temperature to compute desert radiation, for example. A small numerical sketch of the size of this error follows below.
I do not see how to overcome this problem. A proxy is needed that could tell us how many degrees hotter or colder the ground is at a given time, and I cannot conceive of one. Maybe if one did a lot of measurements in the desert and the arctic and the tundra and the Amazon, etc., one could come up with a table that could be used together with the gray-body approximations to gauge the energy radiated, to be compared with satellite data.
So I think one should assign temperatures their true status: telling us how the biosphere is doing. For the energy balance we should really use only satellite data.
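
A minimal numerical sketch of that tens-of-watts point, using the gray-body formula j = epsilon*sigma*T^4 with the sand emissivity quoted above; the air temperature and ground-air differences are illustrative assumptions:

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def gray_body_flux(t_kelvin, emissivity):
    # j = epsilon * sigma * T^4
    return emissivity * SIGMA * t_kelvin**4

eps_sand = 0.75  # sand emissivity, per the table value quoted above
t_air = 300.0    # assumed 2 m air temperature (K); illustrative only
for dt in (5.0, 10.0, 20.0):  # assumed ground-minus-air difference (K)
    err = gray_body_flux(t_air + dt, eps_sand) - gray_body_flux(t_air, eps_sand)
    print(f"ground warmer than air by {dt:4.1f} K -> flux error {err:6.1f} W/m^2")

A 10 K ground-air mismatch alone mis-states the radiated flux by roughly 50 W/m2 here, which dwarfs the forcings being argued about.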

G.L. Alston
March 5, 2010 12:39 am

mosher: "I did this a while back. It was instructive."
What did you find out, and when did you find it?
Or is this in an upcoming paper?
Thanks…

graham g
March 5, 2010 1:18 am

Message for Roy Spencer
I have found the blogsite for Willis Island mentioned in my earlier post.
It is http://kenskingdom.wordpress.com/2010/02/05/giss-manipulates-climate-data-in-mackay/
You will need to scroll down to get to the Willis Island graph.
I hope it is of some help.
Good luck.

graham g
March 5, 2010 1:52 am

Since my post above, I hope that I have found the person who can best help you at CSIRO. His name is Dr. Simon Torok, and he has been kindly responding to my emails to the CSIRO regarding some interesting points that I have observed on the CSIRO website.
If you review the comment from Janama @18.02.45 above, you will see a reference to a paper on UHI that was produced by Simon J. Torok when he was using the 1998 email address of [snip]
Simon's email address is currently... [snip]
(That is, if they are the same person.)
Reply: Point people to links, but we frown on posting email addresses here. ~ ctm

Douglas Cohen
March 5, 2010 2:46 am

The numbers for the UHI are based on measured data and so should have an associated error. Where are the error bars for your plots? This is important to know, because anyone planning to use these UHI values to "correct" past temperature data will have to acknowledge that the corrected temperature values have greater error bars associated with them than they did before correction.
Before correction, the temperature values might not represent what you want to know, but at least their error bars are only those of the actual instruments measuring the local uncorrected temperature. After correction, the temperatures may now be what you want to know, but any uncertainty (that is, error) in the correction combines with the original error bars to increase the overall error.
I would be very surprised to find that the gain made by correcting for UHI, and thus being able to add more temperature stations to your temperature database, was not undone by correctly accounting for the error in the UHI correction and giving the corrected temperatures larger error bars. Note that any error in the UHI correction, since it will be the same error for all the corrected stations, is **not** an uncorrelated error, and thus you cannot expect to reduce it by averaging together the new UHI-corrected temperature values. A toy simulation of this point follows below.
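
A toy simulation of that last point; the error magnitudes are illustrative assumptions, not estimates of the real network:

import numpy as np

rng = np.random.default_rng(1)
n_stations, n_trials = 1000, 2000
instr_sigma = 0.5  # assumed independent per-station instrument error (K)
corr_sigma = 0.2   # assumed error in the UHI correction, same for all stations (K)

means_raw, means_corrected = [], []
for _ in range(n_trials):
    instr_err = rng.normal(0, instr_sigma, n_stations)  # uncorrelated across stations
    shared_err = rng.normal(0, corr_sigma)              # one draw applied to every station
    means_raw.append(instr_err.mean())
    means_corrected.append((instr_err + shared_err).mean())

print(f"std of network mean, instrument error only:        {np.std(means_raw):.3f} K")
print(f"std of network mean, plus shared correction error: {np.std(means_corrected):.3f} K")
# Averaging 1000 stations crushes the independent error (~0.016 K here),
# but the shared correction error (~0.2 K) passes straight through.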

graham g
March 5, 2010 2:56 am

Note for Dr Spencer
P.D. Jones is referred to twice in the references of the 1996 Torok & Nicholls paper about a temperature dataset for Australia.
Dr. N. Nicholls' address at the time was the BOM, Melbourne, while Dr. Simon Torok's address was the University of Melbourne.
It appears from the Aust. Met. Mag. 50 copy that Dr. Torok was at the UEA in 2001.

Editor
March 5, 2010 4:14 am

Regarding using satellite data for land use change going back to 1900 AD:
There is actually a useful method for this, at least in some agricultural areas.
First identify forested areas, then use software to find linear ditches and stone walls traversing forested land; such land is easily identifiable as formerly farmed land that has returned to forest (a rough sketch of the line-finding step follows below).
You can also identify the age of suburban residential developments by the size/age of the trees in those neighborhoods, and you can obtain population data on a per-zip-code basis going back to the beginning of postal-code use in US census data.
You can tell whether a suburb was developed out of forest or agricultural land by the continuity of species from neighboring forest to greenbelts in the suburb, as well as by detecting the presence of long agricultural windbreak treelines.
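
A rough sketch of the line-finding step, assuming a hypothetical grayscale image forest_patch.png; the Canny and Hough thresholds below are arbitrary placeholders that would need tuning against real imagery:

import cv2
import numpy as np

# Hypothetical aerial/satellite image of a forested area; the file name and
# all thresholds are placeholders, not a tested recipe.
img = cv2.imread("forest_patch.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)  # edge map
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                        threshold=80, minLineLength=100, maxLineGap=10)

n = 0 if lines is None else len(lines)
print(f"{n} long straight features found: candidate old walls/ditches for human review")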

C.W. Schoneveld
March 5, 2010 4:22 am

Dear Dr. Spencer,
Could you please address the following fundamental question:
What is the use of all the efforts made by so many people, including yourself, in measuring and interpreting temperatures, before it is clear whether the results can be used to prove anything?
Or, for short:
Can man-made measurements (dis)prove man-made warming?
A popular Dutch rhyme goes: "meten is weten!" ("to measure is to know"). Is that true here?

Alex Heyworth
March 5, 2010 5:00 am

Re: George E. Smith (Mar 4 15:17),
Excellent comment, George. About the only time frame that might avoid the arguments would be to start from the formation of the Earth, four and a half billion years ago. Definitely a downward trend since then, although that still doesn’t guarantee a downward trend in the future!

David A
March 5, 2010 5:10 am

Re steven mosher (23:35:56) and the comment next to it:
Yes Steve, what trends did you find when you compared the arid-station temperature histories? If they showed no warming, perhaps we could stop there. However, if they show warming, then we could not legitimately transfer this to the entire planet, due to the overlapping absorption bands from increased humidity in non-arid regions. Also, we would need to know the relative humidity trends of the arid regions over the trend period, as these may or may not change with ocean cycles.
I like your KISS comment. I think much research is polluted by our computers' ability to assimilate so much data, just as computers were supposed to reduce paper use but instead increased it. Computers are of course immensely valuable, but that immense capacity can obstruct clarity.

gcb
March 5, 2010 5:21 am

Stupid question…
Has anyone compared, say, the 50 "best" sites (as audited at http://www.surfacestations.org) with the 50 "worst" sites? For that matter, if we take only the "best" sites, what sort of trend (if any) is seen in the temperature record?
Okay, a quick check of the data shows that 2% of the 1000 or so surveyed stations are rated "CRN=1", but that still gives us 20 stations within the US where UHI effects should be minimized, right? Surely just using that subset of data (plus as many known-good overseas sites as can be found) would give us some sort of picture of the non-UHI trend? (A sketch of that subset-and-trend calculation follows below.)
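
A sketch of that calculation, assuming a hypothetical stations.csv that joins the surfacestations.org ratings to annual anomalies; the file and column names are made up for illustration:

import numpy as np
import pandas as pd

# Hypothetical input: per-station annual anomalies with a siting quality
# rating attached (1 = best sited ... 5 = worst).
df = pd.read_csv("stations.csv")  # columns: station_id, crn_rating, year, anomaly

best = df[df["crn_rating"] == 1]              # keep only the ~2% best-sited stations
annual = best.groupby("year")["anomaly"].mean()
slope = np.polyfit(annual.index, annual.values, 1)[0]
print(f"{best['station_id'].nunique()} CRN=1 stations, trend {10 * slope:+.3f} C/decade")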

March 5, 2010 5:25 am

@ C.W. Schoneveld (04:22:25) :
“Can man-made measurements (dis)prove man-made warming?
A popular Dutch rhyme goes: “meten is weten!” (to measure is to know). Is that true here?”
Funny: in Germany, we say "Wer misst, misst Mist!" ("whoever measures, measures garbage"), referring to measurements always being imprecise and thus never perfectly in line with what you'd expect from theory.
Reality probably lies halfway between those two maxims: measurements without a theoretical framework are nothing but anecdotal knowledge; OTOH a theory fundamentally at odds with measurements does not properly describe the phenomenon. What we need (and what Spencer IMHO rather successfully tries to set up) is a theory that integrates the imprecise measurements into a reliable formula able to predict the values of future, or not-yet-analyzed, measurements.

Jean Parisot
March 5, 2010 5:56 am

Cool: real-time, open-source science. Someone needs to hack up SVN for scientific papers.

vigilantfish
March 5, 2010 6:22 am

I have not read the recent comments, but I am wondering how Dr. Spencer's findings can be reconciled with the rather different story told by Dr. Edward Long here:
http://wattsupwiththat.com/2010/02/26/a-new-paper-comparing-ncdc-rural-and-urban-us-surface-temperature-data/
That study showed the UHI effect really diverging from rural datasets after 1965 in the United States, an effect that is not explained, but for which it can convincingly be argued that a combination of demographic shifts from rural to urban settings, the advent of widespread air-conditioning, and the increase in electricity use were important contributors. Without a consideration of technology use and its intensification, together with the increased construction of urban structures, I don't think population proxies alone work very well, especially considering the huge socioeconomic differences between Western developed societies and what were called Third World economies over much of the historical period that has been the focus of AGW 'science'.

March 5, 2010 6:50 am

Mike Ewing (18:36:33): "Yea for the literal meaning of UHI… if youre meaning a change in micro climate through human alterations in environment, maybe not… id imagine there in the USA at the beginning o the century, you were pastoral farmers by n large… and now you are factory farmers, because you can grow more KGs a hectare in grain than grass, and get more productivity a hectare… Things like this could conceivably play a role in giving artificial bias's. But how would you find out?"
The IPCC claims that land use changes are a net negative, I believe mostly because of increasing albedo (~10%) when forest is converted to farmland. What you're describing is a land use change, which I believe is Pielke Sr.'s area of expertise. Increasing density of crops... that's an interesting point! I could definitely see that being a delta over time, an effect of the "Green Revolution".
Mike Ewing: "Hell chances are in cold areas last century, the guy doing the recording, just guessed on cold nights, cause he didnt want to go out side :-0 Really the satellite record is the only one you could claim any certainty on, what a conundrum in this age o instant answers eh!"
I agree with your point: to *really* learn stuff we need accurate data, but unfortunately historical temperature representations like this one are the de facto standard for all things climate these days. That up trend (from 1950 on, which is also the de facto standard period) is made up of all sorts of things. The argumentum ad ignorantiam is that, more or less, because there is no other explanation it MUST be all/most CO2. Without some sort of reliable mechanism to show "hey, maybe you're overestimating CO2's contribution to said warming", "consensus" thinkers will continue to post (without any shred of worthy evidence, IMO) that CO2 is the "control knob" for the climate.
Demonstrating that a significant chunk of that historical trend could be, or is likely to have been, due to UHI that was either ignored or underestimated by the temperature records (depending on the one we're talking about), and left out of the IPCC and GCM assessments, would go a long way towards knocking this (IMO insane) "consensus" oversimplification off its pedestal and bringing climate science back on track.

March 5, 2010 6:58 am

@ vigilantfish (06:22:27): "but I am wondering how Dr. Spencer's findings can be reconciled with the rather different story told by Dr. Edward Long here"
One significant difference stands out rather quickly. The SPPI publication uses "both raw and adjusted data from the NCDC ... for a selected Contiguous U.S. set of rural and urban stations, 48 each, or one per State."
Dr. Spencer is using a rather larger data set and not (cherry?) picking one pair per state.

Coalsoffire
March 5, 2010 7:08 am

The whole idea of chasing a mythical average air temperature seems bogus to me, and trying to find some proxy (be it light, or population, or the value of building permits) to identify a trend in the UHI effect is even more fanciful. Add to that what we know from Anthony's surfacestations project and you have a perfect storm of unknowns and unknowables. It strikes me too that the difference between measurement-site effects and UHI effects is not properly distinguished in this analysis, nor probably can it be. It seems to be assumed that site effects are part of the UHI, but surely they are something else, so that if you have a strong UHI effect and a significant measurement-site effect as well, you could really have a blowout in the result.
But I don't want to encourage that sort of endeavor. We should be measuring heat, not the temperature of the ever-shifting air. A puff of wind, a dash of rain, a shift of cloud, a fired-up BBQ or AC unit, or any combination of these random events, and the mercury goes crazy. Add to that the dizzying notion of "average" temperature and you have a place for much mischief to be made and very little hope of anything useful.

Cabot E
March 5, 2010 7:10 am

I was wondering why we need so many stations to take the temperature of the planet over time. I understand that weather is different everywhere, but if the climate of the planet is getting warmer, then everywhere will eventually get warmer, and it would be reflected in every reading on the planet. I think of it like taking the temperature of a large bowl of soup being stirred: parts are warmer than others, but when you warm or cool the bowl, everywhere in the bowl will eventually warm or cool. So I would postulate you only need one 'untainted' location to continuously take the Earth's temperature. Where am I off in this line of reasoning?

Toho
March 5, 2010 7:11 am

One problem with the Spencer method of adjustment is the increase in energy consumption per capita over the last century. A simple back-of-the-envelope calculation shows that increased energy consumption is probably a significant contributor to UHI trends, especially in densely populated areas.

Barry B Hoffman
March 5, 2010 7:45 am

anna v (00:35:59):
George E. Smith (15:17:29):
All efforts to "correct" temperature readings, no matter where they were collected, appear to be constantly adjusting for an a priori assumption that it is possible to construct a forward-looking model for climate "direction". The ground-based probes that measure local temperature (energy) are only accurate at the point of measurement, not 2 feet away. The loss of reporting stations from 6000 to 1500, Arctic Oscillations, the Maunder and Dalton Minimums, GHG effects, and countless other variables lead one to conclude that chaos theory has enormous relevance here. Could a butterfly flapping its wings in Beijing have had an effect on a huge wave nearly capsizing a cruise boat in the Mediterranean?
So what are we really looking for? A one-hundred-year direction, a 1000-year direction (hockey stick), or something on the order of the Vostok ice core record? Joe Bastardi at AccuWeather is predicting a 30-year cooling trend, based on scientific data, with far more serious implications than a warming trend. This is far more relevant to our current global population.
I agree with anna v: use satellites to take the earth's "temperature" multiple times each day and construct a record. But what matters is tomorrow, next week, or what I can expect when I take my next vacation.

NickB.
March 5, 2010 8:19 am

Toho (07:11:18) :
"One problem with the Spencer method of adjustment is the increase in energy consumption per capita over the last century. A simple back of the envelope calculation shows that increased energy consumption is probably a significant contribution to UHI trends, especially in densely populated areas."
As Roy points out in the post, this analysis could only be done back to 1990 as the starting point, due to the availability of high-resolution population data. In its current form it was done for a single year; if it were performed from 1990 to the present, then it stands to reason that a trend over time might be demonstrated.
There are many contributory factors that could cause the UHI effect to change over time: energy use probably being the biggest, more cars on the road per person (more and bigger roads), decreasing average population per household (more dwellings for the same population), widespread adoption of A/C (partially, but not totally, accounted for by power consumption), bigger average dwelling size, etc. If Dr. Spencer's point-in-time analysis for 2000 were applied to earlier years it would, I believe, overcorrect (say, for the 60s), and used that way it might introduce a warming bias of some magnitude. I don't think anyone has proposed doing that.
That said, this is a macro analysis (speaking of which, I wonder if climatology will ever follow economics' lead and distinguish micro from macro, but anyway...) and what you're talking about really comes at it from a different direction. My back-of-the-envelope calculation regarding power use is posted here. Based on my information and assumptions (which I tried to make realistic), I calculated a forcing in urban areas from power consumption of ~144% of that attributed to CO2 by the IPCC. A couple of caveats: I might have underestimated consumption efficiency (the 15 TW is, as I understand it, at the meter); it does not address forcing in non-urban populated areas; and there would also be significant heat-island effects around power stations. The general shape of that estimate is sketched below.
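
The general shape of that back-of-envelope estimate, with every input explicitly labeled as an assumption (these are not NickB's actual figures, and different inputs move the answer a lot):

# All inputs below are assumptions for illustration.
GLOBAL_POWER_W = 15e12       # ~15 TW global power consumption (assumed, at the meter)
URBAN_SHARE = 0.5            # assumed fraction dissipated in urban areas
EARTH_SURFACE_M2 = 5.1e14
LAND_FRACTION = 0.29
URBAN_LAND_FRACTION = 0.015  # the "1.5% of land surface is urban" assumption
CO2_FORCING_W_M2 = 1.66      # IPCC AR4 global-mean CO2 forcing

urban_area_m2 = EARTH_SURFACE_M2 * LAND_FRACTION * URBAN_LAND_FRACTION
waste_heat = GLOBAL_POWER_W * URBAN_SHARE / urban_area_m2
print(f"urban waste-heat flux ~ {waste_heat:.1f} W/m^2, "
      f"{100 * waste_heat / CO2_FORCING_W_M2:.0f}% of the CO2 forcing (locally)")

With these particular inputs the urban flux comes out near 3.4 W/m2, roughly twice the CO2 forcing locally; NickB's ~144% evidently reflects somewhat different assumptions, and (as Toho notes further down) a local flux that convects away is not directly comparable to a global forcing.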

Pat D
March 5, 2010 8:40 am

Jim & Dr. S,
If you haven't seen it, the link below includes one to official NZ temperature data from seven different areas; some of the data go back over 155 years.
http://www.climateconversation.wordshine.co.nz/docs/awfw/are-we-feeling-warmer-yet.htm
The NZ base dataset is likely as pure as any in existence, globally.
Their base data can safely be assumed to have integrity; punctiliousness is a national trait.
It is well maintained, diverse, and relatively complete. Topographically NZ is as diverse as any country. It is relatively pristine, and population growth is steady and consistent. Its population of about 4.5 million lives in a country that's 13% bigger than the states of NY, CT, MA, NH, RI and VT combined: low density. (It also has more than its share of livestock.)
I doubt that there is another significant country or region that shares all of these characteristics and also has an uncorrupted data set.

steven mosher
March 5, 2010 9:20 am

G.L. Alston (00:39:38):
The desert study? I looked at it back in 2007, while JohnV and I were using his OpenTemp to do preliminary work on the CRN12345 issue. My thinking was to look at deserts (partly because it's counterintuitive). This was prior to the photo documentation of all sites, so I had to use the land use data in the HCN inventory files (unaudited). I did find a warming signal that was consistent with (love that phrase) the entire database. The number of desert stations was small, as I recall. I brought it up with a couple of people and they asked why I trusted the metadata to get things right. Good question.
If I had to start a follow-on project to surfacestations, it would be a land-use metadata audit: better population data, historical population.
Anybody here with database skills?

G.L. Alston
March 5, 2010 9:45 am

David A (05:10:59): "However, if they show warming, then we could not legitimately transfer this to the entire planet due to the overlapping absorption bands from increased humidity in non-arid regions."
The purpose of a nighttime desert-only series is to isolate water vapour and concentrate solely on the trace-gas GHG effect.
If the earth is warming naturally (LIA rebound), then the desert night trend ought to show roughly the same slow upward trend as anywhere else.
If the desert data look like a hockey stick, one can't claim natural causes, nor taint from water vapour acting as a GHG (the minimal humidity rules that out). It could only show a stick shape due to trace-gas GHGs.
If I were to place a wager: nighttime desert temperature series would show a constant slow upward trend. No hockey stick.
The stick is the difference between natural and anthropogenic. All we need to do is look for the stick shape.

Toho
March 5, 2010 9:51 am

NickB. (08:19:41):
"… If Dr. Spencer's point-in-time analysis for 2000 were applied to earlier years it would, I believe, overcorrect (say, for the 60s), and used that way it might introduce a warming bias of some magnitude. I don't think anyone has proposed doing that."
Dr. Spencer:
"Clearly, satellite surveys of land use change in the last 10 or 20 years are not going to allow you to extend a method back to 1900. Population data, though, ARE available (although of arguable quality). But no method will be perfect, and all possible methods should be investigated."
It seems to me that this is exactly what he is suggesting.
Besides, NickB, re your energy calculations: I get values of about 15 W/m2 for Stockholm, Sweden, where I have fairly good data. The city center would have higher values still. However, those values are not comparable to the forcing from CO2: depending on weather conditions, heat from a localized source will convect away rapidly (compared to the forcing from CO2). My estimate is that those 15 W/m2 give a contribution to UHI of somewhere between 0.1 and 1 K.

Scott
March 5, 2010 10:34 am

A bit off topic (OT), but when will this site be updated with the February global temperature anomaly? Isn't Dr. Spencer the one who provides this data, or does he just normally discuss it?
Several of the warmists I work with have been repeating the "weather is not climate" mantra with respect to all the snow in the eastern U.S., and say that Jan 2010 was the warmest ever (doubtful, but I don't bother to argue). I was hoping the Feb numbers would come out soon so I could counter with them.
Thanks,
-Scott

Steve Koch
March 5, 2010 12:08 pm

I'd prefer to move the temp sensors to remote locations rather than adjust the readings. Shouldn't the land sensors be used as a secondary source, with satellite temps as the primary source of temp readings?
Given that the great majority of the climate system's energy is stored in the ocean, shouldn't the focus be on ocean heat content (OHC) rather than surface temps?

NickB.
March 5, 2010 1:57 pm

Toho (09:51:54): "'Clearly, satellite surveys of land use change in the last 10 or 20 years are not going to allow you to extend a method back to 1900. Population data, though, ARE available (although of arguable quality). But no method will be perfect, and all possible methods should be investigated.' It seems to me that this is exactly what he is suggesting."
If his analysis is expanded to cover 1990 to the present, I suspect (maybe posit is the right word) that there will be a trend over time in the relationship of UHI per person, which would make this approach valid (if not perfect). If the relationship is treated as static, then I do think there might be problems in applying it retroactively.
Toho: "Besides NickB, re your energy calculations: I get values of about 15 W/m2 for Stockholm, Sweden where I have fairly good data. The city center would have higher values still. However, those values are not comparable to the forcing from CO2. Depending on weather conditions, heat from a localized heat source will rapidly (compared to forcing from CO2) convect. My estimate is that those 15 W/m2 give a contribution to UHI of somewhere between 0.1 and 1 K."
Population density, latitude, and country (average power consumption per capita varies greatly from country to country) are probably the most important variables for calculating the local forcing for a particular city. Someone here posted an analysis of NYC a while back (which is what led me to try my hand at it) and I think came up with a forcing of ~8 W/m2. My attempt, admittedly crude, was more generic and averaged the forcing across all "Urban" areas.
A couple of questions... Did you factor in usage efficiency? I couldn't find a good number for how much power is lost to heat generation on average once it gets to the meter box. I SWAG'd 33% average efficiency (67% heat loss). Also, for mine I had no way to tell whether the "50% of the world lives in urban settings" definition of "urban" matched the "1.5% of land surface is urban" definition, so there could be some error in matching the m2 for Stockholm against the power consumption.
Not sure if it makes any difference, but there was also no accounting for vehicle use, heating oil, etc. in my calculation: just electrical consumption.
Interesting conversation – cheers!

March 5, 2010 2:22 pm

Steve Koch (12:08:54):
But... but that would make sense! 😀
The (over)focus on surface temperature, instead of heat/energy measures in general and OHC in particular, is a miss, but such is the state of climate science and the great debate today. Nobody seems to talk much about Hansen's (GISS') gross overestimation of OHC trends (see more on it here), but instead they try to point to the alleged correlations between projections and the surface temperature record (see here and here).
So here we are, arguing about a proxy for what we should really be looking at: heat/energy accumulation.

Kevin Kilty
March 5, 2010 9:29 pm

George E. Smith (17:36:48):
"Well it would seem from reading the posts from Kevin Kilty and others, that there are a lot of people who have never watched a horse opera on television (or the movies) in which the runaway wagon wheels are clearly revolving backwards.
That by itself would seem adequate proof that you cannot expect to get believable results from insufficient data."

George, I clearly stated that "if more data are not adding independent information into an analysis, then less data will do just as well." Aliasing obviously means one has insufficient information and needs to add more. I do wish you'd read people's postings carefully before making baseless accusations.

Manfred
March 6, 2010 2:10 am

I think the study lacks the dimension of population size.
Currently, a village of 10,000 is placed into the same bin as a city of 10 million if they have the same population density.
They shouldn't be: there is clearly a cumulative UHI effect due to city size, and the two should have clearly distinct UHI magnitudes.
Size data may be extracted from the same population database by some form of integration.
This would lead to a 3-dimensional graph, UHI over population density and overall size of the village/town, which would be a much more precise adjustment tool. (A sketch of such a two-way binning follows below.)
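
A sketch of that two-way binning, on placeholder numbers (none of this is Dr. Spencer's data); real work would need the integrated settlement population extracted from the gridded population set:

import pandas as pd

# Placeholder data: per-station population density, total population of the
# surrounding settlement, and a UHI estimate (K). All values are invented.
df = pd.DataFrame({
    "density": [20, 150, 900, 150, 900, 4000],   # persons per km^2
    "city_pop": [1e4, 1e4, 1e5, 1e7, 1e7, 1e7],  # total settlement population
    "uhi": [0.05, 0.3, 0.8, 0.6, 1.1, 1.6],
})

dens_bins = pd.cut(df["density"], bins=[0, 100, 1000, 10000])
size_bins = pd.cut(df["city_pop"], bins=[0, 1e5, 1e6, 1e8])
# Mean UHI per (density, size) cell: a 2-D adjustment table instead of 1-D
table = df.pivot_table(values="uhi", index=dens_bins, columns=size_bins,
                       aggfunc="mean", observed=False)
print(table)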

gingoro
March 6, 2010 7:40 am

Good job on the UHI adjustment!
Would it be possible to show what the curve looks like for small populations, in the 2 to 200 range?
The other interesting item would be how far out from very low-population sites the UHI effect extends. There must also be a significant effect depending on whether or not artificial heating or cooling is occurring in the houses.
Dave W

Toho
March 7, 2010 2:38 am

NickB. (13:57:10) :
“A couple of questions… Did you factor in usage efficiency? I couldn’t find a good number for how much power is lost to heat generation on average once it gets to the meter box.”
Practically all of it. All of our domestic energy consumption ends up as heat eventually, regardless of efficiency. The only exception is the light radiated directly into space. (There are industrial processes where part of the energy ends up as chemical energy, though.)
“Not sure if it makes any difference, but there was also no accounting for vehicle use, heating oil, etc in my calculation – just Electrical consumption. ”
My calculation includes all energy use, i.e. including transportation, coal, heating oil, etc. The only missing part was energy from heat pumps that draw heat from deep in the ground. Over the time period in question, the heat removed from the ground is not replaced, so there is a net addition of energy at the surface. Such heat pumps are not common in the city, but are abundant in suburbs.

Oscar Bajner
March 7, 2010 7:19 am

While reading Dr Spencer’s post, I was looking at some South African Temperature series, near where I live. I discovered some examples, which clearly demonstrate the Urban Heat Island. The data is from NASA’s GISS, and I have linked to their online plots of this data, which I believe speak for themselves.
1. Pretoria : (25.7S, 28.2E) 1950 ~ 2010, shows a definite warming trend. This area has undergone continuous and increasing urbanization over the same period.
link: http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=141682620001&data_set=1&num_neighbors=1
2. Pretoria Univ Proefplaas : (25.8S, 28.3E) 1960 ~ 1990, shows a cooling trend. This area is part of the Pretoria University and is actually on their “experimental farm” grounds.
link : http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=141682620010&data_set=1&num_neighbors=1

stephan
March 7, 2010 9:36 am

That NZ stuff is outrageous; surely they will face fraud charges?

Ivan
March 7, 2010 12:24 pm

So, to conclude: we haven't seen any explanation from Dr. Spencer whatsoever as to why it is more appropriate, when analyzing the USA data, to try to "correct" the Jones or NCDC data-sets using various mathematical techniques, rather than simply to compare the rural and urban trends. There might be too few rural stations with long records in other countries (as Dr. Spencer notes), but there are plenty of them in the USA.
So, Dr. Spencer, what do you think about Dr. Long's finding that rural warming in the USA 48 was only 0.1 deg C during the 20th century, 3 times lower than the UAH 48 trend for 1979-2009? Are we going to have an answer to that?