In case you missed it, Roy Spencer performed a unique and valuable analysis comparing International Hourly Surface data to population density to provide a simple gauge for the Urban Heat Island (UHI) effect. It was presented at WUWT yesterday with this result:
There were lots of questions on the method. Dr. Spencer adds to the discussion below.
===========================================
UPDATE #2: Clarifications and answers to questions
After sifting through the 212 comments posted in the last 12 hours at Anthony Watts’ site, I thought I would answer those concerns that seemed most relevant.
Many of the questions and objections posted there were actually answered by other people's posts — see especially the 2 comments by Jim Clarke at time stamps 18:23:56 & 01:32:40. Clearly, Jim understood what I did, why I did it, and phrased the explanations even better than I could have.
Some readers were left confused since my posting was necessarily greatly simplified; the level of detail for a journal submission would increase by about a factor of ten. I appreciate all the input, which has helped clarify my thinking.
RATIONALE FOR THE STUDY
While it might not have been obvious, I am trying to come up with a quantitative method for correcting past temperature measurements for the localized warming effects due to the urban heat island (UHI) effect. I am generally including in the “UHI effect” any replacement of natural vegetation by manmade surfaces, structures and active sources of heat. I don’t want to argue about terminology, just keep things simple.
For instance, the addition of an outbuilding and a sidewalk next to an otherwise naturally-vegetated thermometer site would be considered UHI-contaminated. (As Roger Pielke, Sr., has repeatedly pointed out, changes in land use, without the addition of manmade surfaces and structures, can also cause temperature changes. I consider this to be a much more difficult influence to correct for in the global thermometer data.)
The UHI effect leads to a spurious warming signal which, even though only local, has been given global significance by some experts. Many of us believe that as much as 50% (or more) of the “global warming” signal in the thermometer data could actually be from local UHI effects. The IPCC community, in contrast, appears to believe that the thermometer record has not been substantially contaminated.
Unless someone quantitatively demonstrates that there is a significant UHI signal in the global thermometer data, the IPCC can claim that global temperature trends are not substantially contaminated by such effects.
If there were sufficient thermometer data scattered around the world that are unaffected by UHI effects, then we could simply throw away all of the contaminated data. A couple of people wondered why this is not done. I believe that there is not enough uncontaminated data to do this, which means we must find some way of correcting for UHI effects that exist in most of the thermometer data — preferably extending back 100 years or more.
Since population data is one of the few pieces of information that we have long term records for, it makes sense to determine if we can quantify the UHI effect based upon population data. My post introduces a simple method for doing that, based upon the analysis of global thermometer and population density data for a single year, 2000. The analysis needs to be done for other years as well, but the high-resolution population density data only extends back to 1990.
Admittedly, if we had good long-term records of some other variable that was more closely related to UHI, then we could use that instead. But the purpose here is not to find the best way to estimate the magnitude of TODAY’S UHI effect, but to find a practical way to correct PAST thermometer data. What I posted was the first step in that direction.
Clearly, satellite surveys of land use change in the last 10 or 20 years are not going to allow you to extend a method back to 1900. Population data, though, ARE available (although of arguable quality). But no method will be perfect, and all possible methods should be investigated.
STATION PAIRING
My goal is to quantify how much of a UHI temperature rise occurs, on average, for any population density, compared to a population density of zero. We cannot do this directly because that would require a zero-population temperature measurement near every populated temperature measurement location. So, we must do it in a piecewise fashion.
For every closely-spaced station pair in the world, we can compare the temperature difference between the 2 stations to the population density difference between the two station locations. Using station pairs is easily programmable on a computer, allowing the approximately 10,000 temperature measurement sites to be processed relatively quickly.
Using a simple example to introduce the concept, theoretically one could compute:
1) how much average UHI warming occurs from going from 0 to 20 people per sq. km, then
2) the average warming going from 20 to 50 people per sq. km, then
3) the average warming going from 50 to 100 people per sq. km,
etc.
If we can compute all of these separate statistics, we can determine how the UHI effect varies with population density going from 0 to the highest population densities.
Unfortunately, the populations of any 2 closely-spaced stations will be highly variable, not neatly ordered like this simple example. We need some way of handling the fact that stations do NOT have population densities exactly at 0, 20, 100 (etc.) persons per sq. km., but can have ANY population density. I handle this problem by doing averaging in specific population intervals.
For each pair of closely spaced stations, if the higher-population station is in population interval #3, and the lower population station is in population interval #1, I put that station pair’s year-average temperature difference in a 2-dimensional (interval#3, interval#1) population “bin” for later averaging.
Not only is the average temperature difference computed for all station pairs falling in each population bin, but also computed are the average populations in those bins. We will need those statistics later for our calculations of how temperature increases with population density.
Note that we can even compute the temperature difference between stations in the SAME population bin, as long as we keep track of which one has the higher population and which has the lower population. If the population densities for a pair of stations are exactly the same, we do not include that pair in the averaging.
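The pairing-and-binning procedure described above can be sketched in a few lines of Python. Everything here is illustrative: the population-interval edges and the toy station pairs are invented for the example, and are not Dr. Spencer's actual bins or data.

```python
from collections import defaultdict

# Illustrative population-interval edges (persons per sq. km), not the actual bins.
POP_EDGES = [0, 20, 50, 100, 200, 400, 800, float("inf")]

def pop_interval(density):
    """Return the index of the population interval containing `density`."""
    for i in range(len(POP_EDGES) - 1):
        if POP_EDGES[i] <= density < POP_EDGES[i + 1]:
            return i
    return len(POP_EDGES) - 2

def bin_station_pairs(pairs):
    """Accumulate each pair's temperature difference and population densities
    into a 2-D (higher-interval, lower-interval) bin, then average per bin."""
    bins = defaultdict(lambda: {"dT": [], "hi_pop": [], "lo_pop": []})
    for (t1, p1), (t2, p2) in pairs:
        if p1 == p2:
            continue  # identical densities are excluded from the averaging
        # Order the pair so "hi" is the station with higher population density
        (t_hi, p_hi), (t_lo, p_lo) = ((t1, p1), (t2, p2)) if p1 > p2 else ((t2, p2), (t1, p1))
        key = (pop_interval(p_hi), pop_interval(p_lo))
        bins[key]["dT"].append(t_hi - t_lo)
        bins[key]["hi_pop"].append(p_hi)
        bins[key]["lo_pop"].append(p_lo)
    # Average within each bin: mean warming plus mean populations,
    # the statistics needed later to relate temperature to population density.
    return {k: {"mean_dT": sum(v["dT"]) / len(v["dT"]),
                "mean_hi_pop": sum(v["hi_pop"]) / len(v["hi_pop"]),
                "mean_lo_pop": sum(v["lo_pop"]) / len(v["lo_pop"])}
            for k, v in bins.items()}

# Toy example: two closely spaced pairs, each (annual-mean temp C, density per sq. km)
pairs = [((12.3, 150.0), (11.8, 10.0)),   # falls in bin (3, 0): +0.5 C
         ((13.1, 160.0), (12.9, 12.0))]   # falls in bin (3, 0): +0.2 C
result = bin_station_pairs(pairs)
```

The 2-D keying is what lets pairs within the same interval still contribute, since the higher-population member is always identified first.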
The fact that the greatest warming RATE is observed at the lowest population densities is not a new finding. My comment that the greatest amount of spurious warming might therefore occur at the rural (rather than urban) sites, as a couple of people pointed out, presumes that rural sites tend to increase in population over the years. This might not be the case for most rural sites.
Also, as some pointed out, the UHI warming will vary with time of day, season, geography, wind conditions, etc. These are all mixed in together in my averages. But the fact that a UHI signal clearly exists without any correction for these other effects means that the global warming over the last 100 years measured using daily max/min temperature data has likely been overestimated. This is an important starting point, and its large-scale, big-picture approach complements the kind of individual-station surveys that Anthony Watts has been performing.

The numbers for the UHI are based on measured data and so should have an associated error. Where are the error bars for your plots? This is important to know because anyone planning to use these UHI values to “correct” past temperature data will have to acknowledge that the corrected temperature values have greater error bars associated with them than they did before they were corrected.

Before they were corrected, the temperature values might not represent what you want to know, but at least their error bars are only those of the actual instruments measuring the local uncorrected temperature. After correction, the temperatures may now be what you want to know, but any uncertainty — that is, error — in the correction combines with the original error bars to increase the overall error.

I would be very surprised to find that the gain made by correcting for UHI, and thus being able to add more temperature stations to your temperature database, was not undone by — correctly — accounting for the error in the UHI correction and giving the corrected temperatures larger error bars. Note that any error in the UHI correction, since it will be the same error for all the corrected stations, is **not** an uncorrelated error, and thus you cannot expect to reduce it by averaging together the new UHI-corrected temperature values.
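The commenter's error-propagation point can be illustrated numerically. The error magnitudes below are invented for the example; the two facts being shown are standard: independent errors add in quadrature, and a correction error shared by all stations does not shrink as more stations are averaged.

```python
import math

# Illustrative (invented) error magnitudes, in deg C
sigma_inst = 0.2   # per-station instrument error
sigma_corr = 0.3   # shared UHI-correction error

# Independent errors combine in quadrature: the corrected value
# is more uncertain than the uncorrected one.
sigma_total = math.sqrt(sigma_inst**2 + sigma_corr**2)

# Averaging N stations shrinks only the uncorrelated (instrument) part.
# The correction error is the same for every station, so it survives intact:
# as N grows, sigma_mean approaches sigma_corr, not zero.
N = 100
sigma_mean = math.sqrt(sigma_inst**2 / N + sigma_corr**2)
```

With these numbers the corrected single-station error (~0.36 C) exceeds the uncorrected one (0.2 C), and even a 100-station average cannot get below the shared 0.3 C correction error.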
Note for Dr Spencer
P.D. Jones is referred to twice in the references of the 1996 Torok & Nicholls paper on a temperature dataset for Australia.
At that time, Dr. N. Nicholls' address was the BOM, Melbourne, while Dr. Simon Torok's address was the University of Melbourne.
According to the Aust. Met. Mag. 50 copy, Dr. Torok was at the UEA by 2001.
Regarding using satellite data for land use change going back to 1900 AD:
There is actually a useful method for this, at least in some agricultural areas.
First identify forested areas, then use software to find linear ditches and stone walls traversing forested land. This land is easily identifiable as formerly farmed land that has returned to forest.
You can also identify the age of suburban residential developments by the size/age of the trees in those neighborhoods. In addition, you can obtain population data on a per-zip-code basis going back to the beginning of the use of postal codes in US census data.
You can tell if a suburb was developed out of forest or ag land by the continuity of species from neighboring forest to greenbelts in the suburb as well as detecting the presence of long agricultural windbreak treelines.
Dear dr Spencer,
Could you please address the following fundamental question:
What is the use of all the efforts made by so many people, including yourself, in measuring and interpreting temperatures before it is clear whether the results can be used to prove anything?
Or, for short:
Can man-made measurements (dis)prove man-made warming?
A popular Dutch rhyme goes: “meten is weten!” (to measure is to know). Is that true here?
Re: George E. Smith (Mar 4 15:17),
Excellent comment, George. About the only time frame that might avoid the arguments would be to start from the formation of the Earth, four and a half billion years ago. Definitely a downward trend since then, although that still doesn’t guarantee a downward trend in the future!
Re steven mosher (23:35:56) : and the one comment next to this.
Yes Steve, what trends did you find when you compared the arid temperature history stations? If they showed no warming, perhaps we could stop there. However, if they show warming, then we could not legitimately transfer this to the entire planet, due to the overlapping absorption bands from increased humidity in non-arid regions. Also, we would need to know relative humidity trends of the arid regions over the trend period, as these may or may not change with ocean cycles.
I like your KISS comment. I think much research is polluted by our computers' ability to assimilate so much data. Just as computers were supposed to reduce paper use but instead increased it. Computers are of course immensely valuable, but that immense capacity can obstruct clarity.
Stupid question…
Has anyone compared, say, the 50 “best” sites (as audited at http://www.surfacestations.org) with the 50 “worst” sites? For that matter, if we only take the “best” sites, what sort of trend (if any) in the temperature record is seen?
Okay, a quick check of the data shows that 2% of the 1000 or so surveyed stations are considered “CRN=1”, but that still gives us 20 stations within the US where UHI effects should be minimized, right? Surely just using that subset of data (plus as many known-good overseas sites as can be found) would give us some sort of a picture of the non-UHI trend?
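The subset analysis suggested here can be sketched as follows. The station records and ratings below are entirely hypothetical; only the CRN 1-5 rating scheme comes from the surfacestations.org survey.

```python
# Hypothetical station metadata with (year, anomaly in deg C) series.
# surfacestations.org ratings use CRN classes 1 (best) to 5 (worst).
stations = [
    {"id": "A", "crn": 1, "anomalies": [(2000, 0.10), (2005, 0.18), (2010, 0.25)]},
    {"id": "B", "crn": 1, "anomalies": [(2000, 0.05), (2005, 0.12), (2010, 0.20)]},
    {"id": "C", "crn": 5, "anomalies": [(2000, 0.20), (2005, 0.45), (2010, 0.70)]},
]

def linear_trend(points):
    """Ordinary least-squares slope of (year, anomaly) points, deg C per year."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    num = sum((x - mx) * (y - my) for x, y in points)
    den = sum((x - mx) ** 2 for x, _ in points)
    return num / den

# Keep only the best-sited (CRN=1) stations and average their trends
best = [s for s in stations if s["crn"] == 1]
mean_best_trend = sum(linear_trend(s["anomalies"]) for s in best) / len(best)
```

Comparing `mean_best_trend` against the same average over CRN=5 stations would be one crude way to bound the siting/UHI contribution to the record.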
@ C.W. Schoneveld (04:22:25) :
“Can man-made measurements (dis)prove man-made warming?
A popular Dutch rhyme goes: “meten is weten!” (to measure is to know). Is that true here?”
Funny – in Germany, we say “Wer misst, misst Mist!” (Whoever measures, measures garbage), referring to measurements always being imprecise and thus never perfectly in line with what you’d expect from theory.
Reality probably is halfway between those two maxims: Measurements without a theoretical network are nothing but anecdotal knowledge, OTOH a theory fundamentally at odds with measurements does not properly describe the phenomenon. What we need (and what Spencer IMHO rather successfully tries to set up) is a theory that integrates the imprecise measurements into a reliable formula that is able to predict values of future, or not-yet analyzed, measurements.
Cool, real-time, open-source science. Someone needs to hack up SVN for scientific papers.
Have not read the recent comments, but I am wondering how Dr. Spencer’s findings can be reconciled with the rather different story told by Dr. Edward Long here:
http://wattsupwiththat.com/2010/02/26/a-new-paper-comparing-ncdc-rural-and-urban-us-surface-temperature-data/
This study showed the UHI effect really diverging from rural datasets after 1965 in the United States. The effect is not explained, but it can be convincingly argued that a combination of demographic shifts from rural to urban settings, the advent of widespread air-conditioning, and the increase in the use of electricity were important contributors. Without a consideration of technology use and its intensification, together with the increased construction of urban structures, I don't think that population proxies alone work very well, especially considering the huge socioeconomic differences between Western developed societies and what were called Third World economies over much of the historical period that has been the focus of AGW 'science'.
Mike Ewing (18:36:33) :
Yea for the literal meaning of UHI… if you're meaning a change in microclimate through human alterations in the environment, maybe not… I'd imagine that in the USA at the beginning of the century, you were pastoral farmers by and large… and now you are factory farmers, because you can grow more kgs per hectare in grain than grass, and get more productivity per hectare… Things like this could conceivably play a role in introducing artificial biases. But how would you find out?
The IPCC claims that land use changes are a net negative, I believe mostly because of increasing albedo (~10%) when forest is converted to farmland. What you're describing is a land use change, which I believe is Pielke Sr.'s area of expertise. Increasing density of crops… that's an interesting point! I could definitely see that as being a delta over time, an effect of the "Green Revolution".
Hell, chances are in cold areas last century, the guy doing the recording just guessed on cold nights, because he didn't want to go outside :-0 Really, the satellite record is the only one you could claim any certainty on. What a conundrum in this age of instant answers, eh!
I agree with your point, to *really* learn stuff we need accurate data, but unfortunately historical temperature representations like this one are the de facto standard for all things climate these days. That uptrend (from 1950 on, which is also the de facto standard) is made up of all sorts of things. The argumentum ad ignorantiam is that, more or less, because there is no other explanation it MUST be all/mostly CO2. Without some sort of reliable mechanism to explain that "hey, maybe you're overestimating CO2's contribution to said warming", "consensus" thinkers will continue to post (without any shred of worthy evidence, IMO) that CO2 is the "control knob" for the climate.
Demonstrating that a significant chunk of that historical trend could be, or is likely to have been due to UHI that was either ignored or underestimated by the temperature records (depending on the one we’re talking about), and left out of the IPCC and GCM assessments would go a long way towards knocking this, IMO, insane “consensus” oversimplification off its pedestal and bringing climate science back on track.
@ vigilantfish (06:22:27) : but I am wondering how Dr. Spencer's findings can be reconciled with the rather different story told by Dr. Edward Long here
One significant difference stands out rather quickly. The SPPI publication uses “Both raw and adjusted data from the NCDC has been examined for a selected Contiguous U. S. set of rural and urban stations, 48 each or one per State.”
Dr. Spencer is using a rather larger data set and not (cherry?) picking one pair in each state.
The whole idea of chasing a mythical average air temperature seems bogus to me. And trying to find some proxy (be it light or population or the value of building permits) to identify a trend in the UHI effect is even more fanciful. Add to that what we know from Anthony’s site project and you have a perfect storm of unknowns and unknowables. It strikes me too that the difference between measurement effects and UHI effects is not properly distinguished in this analysis. Nor probably can it be. It seems to be assumed that site effects are part of the UHI, but surely they are something else, so that if you have a strong UHI effect and a significant measurement site effect as well you could really have a blow out on the result.
But I don’t want to encourage that sort of endeavor. We should be measuring heat, not the temperature of the ever shifting air. A puff of wind, a dash of rain, a shift of cloud, a fired up BBQ or AC unit, or any combination of these random events and the mercury goes crazy. Add to that the dizzying notion of “average” temperature and you have a place for much mischief to be made and very little hope of anything useful.
I was wondering why we need to have so many stations to take the temperature of the planet over time. I understand that weather is different everywhere, but if the climate of the planet is getting warmer, then everywhere will eventually get warmer and it would be reflected in every reading on the planet. For some reason, I think it's like taking the temperature of a large bowl of soup being stirred. Parts are warmer than others, but when you warm or cool the bowl, everywhere in the bowl will eventually warm or cool. So I would postulate you only need one 'untainted' location to continuously take the Earth's temperature. Where am I off in this line of reasoning?
One problem with the Spencer method of adjustment is the increase in energy consumption per capita over the last century. A simple back of the envelope calculation shows that increased energy consumption it is probably a significant contribution to UHI trends, especially in densely populated areas.
anna v (00:35:59) :
George E. Smith (15:17:29) :
All efforts that “correct” temperature readings, no matter the location where collected, appear to be constantly trying to adjust for an a priori assumption that it is possible to construct a forward-looking model for climate “direction”. The ground-based probes that measure local temperature (energy) are only accurate at the point of measurement and not 2′ away. The loss of reporting stations from 6000 to 1500, Arctic Oscillations, Maunder and Dalton Minimums, GHG effects, and countless other variables lead one to conclude that Chaos Theory input has enormous relevance here. Could a butterfly flapping its wings in Beijing have had an effect on a huge wave nearly capsizing a cruise boat in the Mediterranean?
So what are we really looking for? A one hundred year direction, a 1000 year direction (hockey stick), or something on the order of the Vostok ice core record? Joe Bastardi at Accuweather is predicting a 30 year cooling trend, based on scientific data, with far more serious implications than a warming trend. This is far more relevant to our current global population.
I agree with anna v. Use satellites to take the Earth's “temperature” multiple times each day and construct a record. But what matters is tomorrow, next week, or what I can expect when I take my next vacation.
Toho (07:11:18) :
One problem with the Spencer method of adjustment is the increase in energy consumption per capita over the last century. A simple back of the envelope calculation shows that increased energy consumption it is probably a significant contribution to UHI trends, especially in densely populated areas.
As Roy points out in the post, this analysis could only be done going back to 1990 as the starting point due to the availability of high resolution population studies. In its current form it was done for a single year. If it was performed from 1990-current then it stands to reason that a trend over time might be demonstrated.
There are many contributory factors that could cause the UHI effect to change over time: energy use probably being the biggest, more cars on the road per population (more and bigger roads), decreasing average population per household (more houses/dwellings for the same population), widespread adoption of A/C (partially, but not totally accounted for by power consumption), bigger avg. house/dwelling size, etc. If Dr. Spencer’s point in time analysis for 2000 was applied to years prior it would, I believe, overcorrect if it were applied to, say the 60’s… and if used that way might introduce a warming bias of some magnitude. I don’t think anyone has proposed trying to do that.
That said, this is a Macro analysis (speaking of which, I wonder if Climatology will ever follow Economics' lead in distinguishing Micro vs. Macro, but anyway…) and what you're talking about is really coming at it from a different direction. My back-of-the-envelope calculation regarding power use is posted here. Based on my information and assumptions (which I tried to make realistic), I calculated a forcing in urban areas from power consumption of ~144% of that attributed to CO2 by the IPCC. A couple of caveats: I might have underestimated consumption efficiency (the 15 TW is, as I understand it, at the meter), it does not address forcing in non-urban populated areas, and there would also be significant heat island effects around power stations.
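For what it's worth, the rough shape of such a back-of-the-envelope calculation can be written out as follows. All figures are round-number assumptions (the 15 TW and 1.5% urban-land figures come from the comment above; the land area and the AR4 CO2 forcing of ~1.66 W/m2 are standard values), and the result is sensitive to every one of them; this is a sketch of the method, not a reproduction of the commenter's ~144% figure.

```python
# Back-of-envelope: anthropogenic waste-heat flux concentrated over urban land.
# All inputs are round-number assumptions, not measured values.
global_power_W = 15e12      # ~15 TW total consumption (commenter's figure)
earth_land_m2 = 1.49e14     # ~149 million km^2 of land surface
urban_fraction = 0.015      # ~1.5% of land is "urban" (commenter's figure)

urban_area_m2 = earth_land_m2 * urban_fraction

# Assume essentially all consumption is dissipated as heat within urban areas
urban_heat_flux = global_power_W / urban_area_m2   # W per m^2

# Compare with the IPCC AR4 best estimate of CO2 radiative forcing
co2_forcing = 1.66                                 # W per m^2
ratio = urban_heat_flux / co2_forcing
```

Under these assumptions the urban flux comes out to a few W/m2, locally larger than the globally averaged CO2 forcing, which is the commenter's qualitative point; changing the efficiency or urban-area assumptions moves the ratio substantially.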
Jim & Dr. S
If you haven’t seen it the link below includes one to official NZ temperature data from seven different areas, some of the data goes back over 155 years
http://www.climateconversation.wordshine.co.nz/docs/awfw/are-we-feeling-warmer-yet.htm
The NZ base dataset is likely as pure as any existing, globally.
Their base data can safely be assumed to have integrity. Punctiliousness is a national trait.
It is well maintained, diverse and relatively complete. Topographically NZ is as diverse as any country. It is relatively pristine, and population growth is steady and consistent. Its population of about 4.5 million lives in a country that's 13% bigger than the states of NY, CT, MA, NH, RI and VT combined: low density. [It also has more than its share of livestock.]
I doubt that there is another significant country or region that shares all of these characteristics, and also has an uncorrupted data set.
G.L. Alston (00:39:38) :
The desert study? I looked at it back in 2007 while JohnV and I were using his opentemp to do preliminary work on the CRN12345 issue. My thinking was to look at deserts (for one, because it's counterintuitive). This was prior to the photo-documenting of all sites, so I had to use the land use data in the HCN inventory files (unaudited). I did find a warming signal that was consistent with (love that phrase) the entire database. The number of desert stations was small, as I recall. I brought it up with a couple of people and they asked why I trusted the metadata to get things right. Good question.
If I had to start a follow-on project to surface stations, it would be a land use metadata audit: better population data, historical population.
Anybody here with database skill?
David A (05:10:59) : However, if they show warming, then we could not legitimately transfer this to the entire planet due to the overlapping absorption bands from increased humidity in non-arid regions.
The purpose of nighttime desert-only series is to isolate water vapour and concentrate solely on trace gas GHG effect.
If the earth is warming naturally (LIA rebound) then the desert night trend ought to be roughly the same slow upward trend as anywhere else.
If the desert data looks like a hockey stick then one can’t claim natural cause nor taint from water vapour acting as GHG (minimal humidity tends to do this.) It could only show a stick shape due to trace gas GHGs.
If I were to place a wager, nighttime desert temp series would show constant upward slow trend. No hockey stick.
The stick is the difference between natural vs anthropogenic. All we need to do is look for the stick shape.
NickB. (08:19:41) :
“… If Dr. Spencer’s point in time analysis for 2000 was applied to years prior it would, I believe, overcorrect if it were applied to, say the 60’s… and if used that way might introduce a warming bias of some magnitude. I don’t think anyone has proposed trying to do that.”
Dr. Spencer:
“Clearly, satellite surveys of land use change in the last 10 or 20 years are not going to allow you to extend a method back to 1900. Population data, though, ARE available (although of arguable quality). But no method will be perfect, and all possible methods should be investigated.”
It seems to me that that is exactly what he is suggesting.
Besides NickB, re your energy calculations: I get values of about 15 W/m2 for Stockholm, Sweden, where I have fairly good data. The city center would have higher values still. However, those values are not comparable to the forcing from CO2. Depending on weather conditions, heat from a localized heat source will rapidly (compared to forcing from CO2) convect. My estimate is that those 15 W/m2 give a contribution to UHI of somewhere between 0.1 and 1 K.
A bit off topic (OT), but when will this site be updated with the February global temperature anomaly? Isn’t Dr. Spencer the one who provides this data, or does he just normally discuss it?
Several of the warmists I work with have been repeating the “weather is not climate” mantra with respect to all the snow in the eastern U.S., and say that Jan 2010 was the warmest ever (doubtful on that, but I don't bother to argue). I was hoping the Feb numbers would come out soon so I could counter with that.
Thanks,
-Scott
I’d prefer to move the temp sensors to remote locations rather than adjust the readings. Shouldn’t the land sensors be used as a secondary source, with primary source of temp readings be satellite temps?
Given that the vast majority of the climate system's energy is stored in the ocean, shouldn't the focus be on ocean heat content (OHC) rather than surface temps?
Toho (09:51:54) :
“Clearly, satellite surveys of land use change in the last 10 or 20 years are not going to allow you to extend a method back to 1900. Population data, though, ARE available (although of arguable quality). But no method will be perfect, and all possible methods should be investigated.”
It seems to me that that is exactly what he is suggesting.
If his analysis is expanded to cover 1990-current, I suspect (maybe posit is the right word) that there will be a trend over time in the relationship of UHI per person, which would make this approach valid (not perfect). If the relationship is seen as static, then I do think there might be problems trying to apply it retroactively.
Besides NickB, re your energy calculations: I get values of about 15 W/m2 for Stockholm, Sweden, where I have fairly good data. The city center would have higher values still. However, those values are not comparable to the forcing from CO2. Depending on weather conditions, heat from a localized heat source will rapidly (compared to forcing from CO2) convect. My estimate is that those 15 W/m2 give a contribution to UHI of somewhere between 0.1 and 1 K.
Population density, latitude, and country (average power consumption per capita varies greatly from country to country) are probably the most important variables for calculating the local forcing for a particular city. Someone here posted an analysis of NYC a while back (which is what led me to try my hand at it) and I think came up with a forcing of ~8 W/m2. My attempt, admittedly crude, was more generic and averaged the forcing across all “Urban” areas.
A couple of questions… Did you factor in usage efficiency? I couldn't find a good number for how much power is lost to heat generation on average once it gets to the meter box. I SWAG'd 33% average efficiency (67% heat loss). Also, for mine I had no way to tell whether the “urban” in “50% of the world lives in urban settings” was the same definition as in “1.5% of land surface is urban”, so there could be some error matching the m2 for Stockholm vs. power consumption.
Not sure if it makes any difference, but there was also no accounting for vehicle use, heating oil, etc in my calculation – just Electrical consumption.
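The Stockholm-style estimate discussed above reduces to simple arithmetic: per-capita dissipation times population density gives a local flux. The function below is a sketch under that assumption, with invented round numbers; `heat_fraction` is a hypothetical parameter standing in for whatever share of consumption actually ends up as local heat.

```python
def waste_heat_flux(per_capita_kW, density_per_km2, heat_fraction=1.0):
    """Local anthropogenic heat flux in W/m^2, assuming `heat_fraction`
    of per-capita consumption is dissipated as heat within the area."""
    watts_per_km2 = per_capita_kW * 1000.0 * density_per_km2 * heat_fraction
    return watts_per_km2 / 1e6   # convert per-km^2 to per-m^2

# Illustrative round numbers: 5 kW per person at 3000 people per km^2
flux = waste_heat_flux(5.0, 3000.0)   # -> 15 W/m^2
```

Note how sensitive the result is to the density figure: the same per-capita consumption spread over a suburb at 300 people per km^2 gives only 1.5 W/m2.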
Interesting conversation – cheers!
Steve Koch (12:08:54)
But… but that would make sense! 😀
The (over)focus on surface temperature instead of heat/energy measures in general, and OHC in particular, is a miss, but such is the state of climate science and the great debate today. Nobody seems to talk much about Hansen's (GISS') gross overestimation of OHC trends (see more on it here), but instead they try to point to the alleged correlations between projections and the surface temperature record (see here and here).
So here we are, arguing about a proxy for what we should really be looking at: heat/energy accumulation.