New modeling analysis paper by Ross McKitrick

Dr. McKitrick’s new paper with Lise Tole is now online at Climate Dynamics. He also has an op-ed in the Financial Post on June 13. A version with the citations provided is here. Part II is online here, and the version with citations is here.

McKitrick, Ross R. and Lise Tole (2012) “Evaluating Explanatory Models of the Spatial Pattern of Surface Climate Trends using Model Selection and Bayesian Averaging Methods” Climate Dynamics, DOI: 10.1007/s00382-012-1418-9

The abstract is:

We evaluate three categories of variables for explaining the spatial pattern of warming and cooling trends over land: predictions of general circulation models (GCMs) in response to observed forcings; geographical factors like latitude and pressure; and socioeconomic influences on the land surface and data quality. Spatial autocorrelation (SAC) in the observed trend pattern is removed from the residuals by a well-specified explanatory model. Encompassing tests show that none of the three classes of variables account for the contributions of the other two, though 20 of 22 GCMs individually contribute either no significant explanatory power or yield a trend pattern negatively correlated with observations. Non-nested testing rejects the null hypothesis that socioeconomic variables have no explanatory power. We apply a Bayesian Model Averaging (BMA) method to search over all possible linear combinations of explanatory variables and generate posterior coefficient distributions robust to model selection. These results, confirmed by classical encompassing tests, indicate that the geographical variables plus three of the 22 GCMs and three socioeconomic variables provide all the explanatory power in the data set. We conclude that the most valid model of the spatial pattern of trends in land surface temperature records over 1979-2002 requires a combination of the processes represented in some GCMs and certain socioeconomic measures that capture data quality variations and changes to the land surface.

He writes on his website:

We apply classical and Bayesian methods to look at how well 3 different types of variables can explain the spatial pattern of temperature trends over 1979-2002. One type is the output of a collection of 22 General Circulation Models (GCMs) used by the IPCC in the Fourth Assessment Report. Another is a collection of measures of socioeconomic development over land.

The third is a collection of geographic indicators including latitude, coastline proximity and tropospheric temperature trends. The question is whether one can justify an extreme position that rules out one or more categories of data, or whether some combination of the three types is necessary. I would describe the IPCC position as extreme since they dismiss the role of socioeconomic factors in their assessments. In the classical tests, we look at whether any combination of one or two types can “encompass” the third, and whether non-nested tests combining pairs of groups reject either 0% or 100% weighting on either. (“Encompass” means provide sufficient explanatory power not only to fit the data but also to account for the apparent explanatory power of the rival model.) In all cases we strongly reject leaving out the socioeconomic data.
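[For readers unfamiliar with the jargon, here is a minimal sketch of a Davidson–MacKinnon style J-test, one standard form of non-nested encompassing test. This is not the paper’s actual code; the data, variable names and coefficients below are invented purely for illustration.]

```python
# Minimal sketch of a Davidson-MacKinnon J-test for non-nested models:
# the "geographic" model encompasses the "socioeconomic" rival only if
# the rival's fitted values add nothing significant to its regression.
# All data, names and coefficients are invented for illustration.
import numpy as np

def ols(y, X):
    """OLS with intercept; return coefficients, standard errors, fitted values."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    sigma2 = resid @ resid / (len(y) - X1.shape[1])
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X1.T @ X1)))
    return beta, se, X1 @ beta

rng = np.random.default_rng(0)
n = 300
X_geo = rng.standard_normal((n, 2))  # e.g. latitude, coastline proximity
X_soc = rng.standard_normal((n, 2))  # e.g. GDP growth, population growth
y = X_geo @ [0.6, 0.3] + X_soc @ [0.4, 0.2] + rng.standard_normal(n)

_, _, fitted_soc = ols(y, X_soc)  # rival model's fitted values
beta, se, _ = ols(y, np.column_stack([X_geo, fitted_soc]))
t_stat = beta[-1] / se[-1]  # large |t| => geographic model fails to encompass
print(f"t-statistic on rival fitted values: {t_stat:.2f}")
```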

In only 3 of 22 cases do we reject leaving out the climate model data, but in one of those cases the correlation is negative, so only 2 count–that is, in 20 of 22 cases we find the climate models are either no better than or worse than random numbers. We then apply Bayesian Model Averaging to search over the space of 537 million possible combinations of explanatory variables and generate coefficients and standard errors robust to model selection (aka cherry-picking). In addition to the geographic data (which we include by assumption) we identify 3 socioeconomic variables and 3 climate models as the ones that belong in the optimal explanatory model, a combination that encompasses all remaining data. So our conclusion is that a valid explanatory model of the pattern of climate change over land requires use of both socioeconomic indicators and GCM processes. The failure to include the socioeconomic factors in empirical work may be biasing analysis of the magnitude and causes of observed climate trends since 1979.
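[As a rough illustration of how a BMA-style model search works: the paper’s 537 million combinations correspond to roughly 2^29 candidate subsets, whereas this toy sketch enumerates just 2^5. It is not the paper’s implementation, and all data and variable names are invented.]

```python
# Toy sketch of BIC-weighted Bayesian Model Averaging over regressor
# subsets. The paper searches ~2^29 (~537 million) combinations; this
# illustration enumerates 2^5. All data and variable names are invented.
from itertools import combinations

import numpy as np

rng = np.random.default_rng(0)
n = 200
names = ["lat", "coast", "trop_trend", "gdp_growth", "pop_growth"]
X_all = rng.standard_normal((n, len(names)))
y = 0.8 * X_all[:, 0] + 0.5 * X_all[:, 3] + rng.standard_normal(n)  # toy trend field

def bic(y, X):
    """OLS with intercept; return the Bayesian Information Criterion."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    rss = float(np.sum((y - X1 @ beta) ** 2))
    return len(y) * np.log(rss / len(y)) + X1.shape[1] * np.log(len(y))

# Weight every subset by exp(-BIC/2), a standard approximation to its
# posterior model probability, then normalise.
subsets, weights = [], []
for r in range(len(names) + 1):
    for sub in combinations(range(len(names)), r):
        subsets.append(sub)
        weights.append(np.exp(-0.5 * bic(y, X_all[:, sub])))
weights = np.array(weights)
weights /= weights.sum()

# Posterior inclusion probability of a variable = total weight of all
# models containing it; robust to any single "cherry-picked" model.
for j, name in enumerate(names):
    pip = sum(w for w, s in zip(weights, subsets) if j in s)
    print(f"{name}: posterior inclusion probability = {pip:.3f}")
```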

johanna
June 21, 2012 5:53 am

Minor typo – the op-ed was published on June 13.
[REPLY: Fixed. Thanks. -REP]

AJ
June 21, 2012 5:54 am

So FGOALS and CCSM3.0 have the best explanatory power as per this paper. I find this odd because on other measures (e.g. seasonal temperature variations in the oceans) they are amongst the worst.

Alan D McIntire
June 21, 2012 6:21 am

Matt Briggs has a “random walk” climate generating program here, which can be easily downloaded.
http://wmbriggs.com/blog/?p=257
The standard deviation used by Matt Briggs was 0.1238 degrees C for a year.
Using 0.01 degrees, a random walk would give an average change proportional to the square root of n over n years. After 100 years, the average change would be plus or minus sqrt(100) = 10 * 0.01, or 0.1 C – plausible.
After 10,000 years, the average change would be 100 * 0.01 = 1 C – quite possible.
After 100,000,000 years, the average change would be 10,000 * 0.01 = 100 C – ridiculous.
Obviously there’s some dampening effect, making it less and less likely that temperatures will go from high to higher, or from low to lower, else life on earth would have been eliminated long ago.
What would a better randomized deviation be? Would it be proportional to log(n) rather than sqrt(n)?
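[To make the arithmetic above concrete, here is a small simulation sketch contrasting a pure random walk, whose spread grows like sqrt(n), with a mean-reverting AR(1) walk whose spread stays bounded. The step size matches the commenter’s 0.01 C; the damping coefficient phi is an arbitrary illustrative choice, not a value from Briggs’s program.]

```python
# Sketch contrasting a pure random walk, whose typical excursion grows
# like sqrt(n), with a damped (mean-reverting AR(1)) walk whose spread
# stays bounded. The step size matches the 0.01 C used above; the
# damping coefficient phi is an arbitrary illustrative choice.
import numpy as np

rng = np.random.default_rng(42)
n_paths, n_years, sigma, phi = 200, 10_000, 0.01, 0.999

steps = rng.normal(0.0, sigma, size=(n_paths, n_years))
walk = np.cumsum(steps, axis=1)  # pure random walk

ar1 = np.zeros_like(steps)  # damped walk: T_t = phi * T_{t-1} + e_t
for t in range(1, n_years):
    ar1[:, t] = phi * ar1[:, t - 1] + steps[:, t]

for n in (100, 1_000, 10_000):
    print(f"n={n:>6}: random walk spread {walk[:, n-1].std():.3f} C "
          f"(theory {sigma * np.sqrt(n):.3f}), "
          f"damped spread {ar1[:, n-1].std():.3f} C "
          f"(bound {sigma / np.sqrt(1 - phi**2):.3f})")
```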

ferd berple
June 21, 2012 6:24 am

“10 of the 22 climate models predicted a pattern that was negatively correlated with observations and had to be removed from most of the analysis to avoid biasing the results. In 10 other cases we found the climate models predicted a pattern that was loosely correlated with observations, but not significantly so—in other words not significantly better than random numbers. In only 2 cases was there statistically significant evidence of explanatory power.”

ferd berple
June 21, 2012 6:25 am

“In all 22 cases the probability you could leave out the socioeconomic data was computed as zero. But only in 3 of 22 cases did the data say you should keep the GCM, and in one of those cases the fit was negative (opposite to the observed patterns) so it didn’t count. So, again, only 2 of 22 climate models demonstrated enough explanatory power to be worth retaining, but in all 22 cases the data gave primary support to the socioeconomic measures the IPCC insists should not be used.”

ferd berple
June 21, 2012 6:26 am

“But the IPCC has taken an extreme position, that the socioeconomic patterns have no effect and any temperature changes must be due to global “forcings” like carbon dioxide (CO2) emissions. Studies that claim to detect the effects of CO2 emissions on the climate make this assumption, as do those that estimate the rate of greenhouse gas-induced warming. As Allen says, if this assumption isn’t true, there are a lot of papers that would have to be retracted or redone.”

ferd berple
June 21, 2012 6:30 am

“The three climate models consistently identified as having explanatory power were from China, Russia and a US lab called NCAR. Climate models from Norway, Canada, Australia, Germany, France, Japan and the UK, as well as American models from Princeton and two US government labs (NASA and NOAA), failed to exhibit any explanatory power for the spatial pattern of surface temperature trends in any test, alone or in any combination.”

John F. Hultquist
June 21, 2012 6:34 am

The Financial Post article provides good context for the research reported here on WUWT. [FP shows an update on June 14th]
~~~~
The headline writer for the Financial Post places this under
<<>>
. . . then provides a last line:
<<>>
There were 82 comments as of 6:30 am Thursday the 21st; some about the headline, and some trash.

commieBob
June 21, 2012 6:34 am

Have I got this right?
‘Socioeconomic variables’ are associated with things like land use and urban heat island effect. yes/no
Dr. Pielke Sr. has provided us with ample evidence that land use has a large effect on regional climate. The Urban Heat Island effect is well known.
In other words, it is quite reasonable that socioeconomic variables could have an effect on regional climate.

Douglas Leahey
June 21, 2012 6:38 am

The failure of the GCMs comes as no surprise. The dynamical equations contained in these complex “sophisticated” models are derived by applying Newton’s second law to a single parcel of air which is assumed never to mix with the atmosphere (e.g. Haltiner and Martin 1957). Once the equations are derived they are arbitrarily “transformed” from a Lagrangian to an Eulerian frame of reference. Such a transformation allows the equations to be treated as though they have general applicability to all diffusive atmospheric conditions. This is contrary to assumptions of applicability only to a single indivisible air parcel contained in the derivation of the equations.
The models would be rejected out-of-hand by any student of logic.
Reference: Haltiner, G.J. and F.L. Martin (1957) Dynamical and Physical Meteorology. McGraw-Hill Book Company Inc., New York/Toronto/London.

John F. Hultquist
June 21, 2012 6:41 am

I did not realize the GT and LT signs would remove the text.
So, the 2 lines are:
“Junk Science Week: Climate models fail reality test”
and
“Next week: Climate models offer millions of ways of getting the wrong answer.”

timetochooseagain
June 21, 2012 6:44 am

AJ on June 21, 2012 at 5:54 am said:
“So FGOALS and CCSM3.0 have the best explanatory power as per this paper. I find this odd because on other measures (e.g. seasonal temperature variations in the oceans) they are amongst the worse.”
Not odd at all. At least one model should be expected to get good agreement by pure dumb luck, given that there are more than twenty.
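[A quick back-of-the-envelope check of the “dumb luck” point, with illustrative numbers not taken from the paper: if each of 22 independent models had only a 5% chance of a spuriously significant fit, at least one passing by chance alone would be more likely than not.]

```python
# Back-of-the-envelope check: with 22 independent models, each with a
# 5% chance of a spuriously significant fit, at least one "skilful"
# model is more likely than not. Both numbers are illustrative.
p_spurious, n_models = 0.05, 22
p_at_least_one = 1 - (1 - p_spurious) ** n_models
print(f"P(at least one spurious pass) = {p_at_least_one:.2f}")  # ~0.68
```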

MarkW
June 21, 2012 6:48 am

John F. Hultquist says:
June 21, 2012 at 6:41 am
HTML tags begin with a LT symbol and end with a GT symbol. The software saw the pair, assumed it was an HTML tag that it didn’t recognize, and removed it.
[In this context, LT = less than, GT = greater than, symbols when typed. Robt]

ferd berple
June 21, 2012 7:04 am

“The three climate models consistently identified as having explanatory power were from China, Russia and a US lab called NCAR.”
Any bets which one of the three had negative explanatory power?

John Brookes
June 21, 2012 7:17 am

This is not surprising. The climate models predict a warming earth, and this is happening. However, the idea that regional level effects can be accurately predicted is a bit far fetched – especially on as short a time scale as 23 years. Maybe over 100 years.
But it matters little how the warming is distributed around the world, it remains highly likely to be harmful to us.

ferd berple
June 21, 2012 7:42 am

commieBob says:
June 21, 2012 at 6:34 am
The Urban Heat Island effect is well known.
==========
Not in climate science. Be it Phil “the dog ate my homework” Jones, or BEST and its faulty statistical comparison, ignoring delta/delta and finding no correlation (no surprise).
The IPCC has latched onto these studies (no UHI) because they support the notion that coal is bad and must be taxed out of existence. Since coal is currently the only practical fuel in plentiful supply, this will create artificial worldwide shortages of energy allowing for windfall profits by those that invest ahead of the regulations.
We see this already in Australia, where the CO2 tax is forcing Australians to export their coal to China rather than using it to power their own economy. The Chinese could not have hoped for better terms if they paid the Oz government.

John West
June 21, 2012 7:48 am

John Brookes says:
“But it matters little how the warming is distributed around the world, it remains highly likely to be harmful to us.”
Why? Basis? Logic? Reasoning? Anything? Or is it just fear of the unknown?

June 21, 2012 7:50 am

“In only 3 of 22 cases do we reject leaving out the climate model data, …”
The outputs of climate models may be many things, but they are not DATA.

ferd berple
June 21, 2012 7:52 am

John Brookes says:
June 21, 2012 at 7:17 am
The climate models predict a warming earth, and this is happening.
===========
Low temperatures are increasing, which is increasing the averages. However, high temperatures are not increasing. So all that can really be said is that climate is becoming less extreme. By averaging temperatures, climate science has painted a misleading picture as to what is actually happening, which is at the heart of the failure of climate models to match reality.
The lack of increase in high temperatures points to negative feedback in the system – something that not one of the GCMs used by the IPCC allows for. Every one of the IPCC GCMs assumes a positive climate feedback.
No climate model has been programmed with negative feedback to see if it does a better job of explaining climate. Why not? The reason you use models is to test assumptions. This major assumption remains untested.

ferd berple
June 21, 2012 7:54 am

John Brookes says:
June 21, 2012 at 7:17 am
But it matters little how the warming is distributed around the world, it remains highly likely to be harmful to us.
=======
Warming is not harmful to humans. We cannot survive without clothing (technology) below 28C. Only the tropical jungles are this warm. Every other place on the planet is currently fatal to humans without technology.

Mark Wagner
June 21, 2012 8:02 am

does “socioeconomic indicators” = UHI?

Luther Wu
June 21, 2012 8:12 am

But he doesn’t even mention sustainability…
Who is paying attention to any of this?
Is anyone in the current US administration paying any attention? Anyone at IPCC?
How about anyone wielding power in the developed world, or reporting on same?

ferd berple
June 21, 2012 8:18 am

Mark Wagner says:
June 21, 2012 at 8:02 am
does “socioeconomic indicators” = UHI?
=======
Studies that compare population to temperature ignore industrialization rates. This is how the UHI has been typically studied in the past (BEST). Thus highly populated areas are assumed to be identical, regardless of economic activity (Asia/USA/EU). On this basis there is little UHI signal. This new study shows that it is economic activity, not population that drives temperature.

Jim Clarke
June 21, 2012 8:18 am

In regards to:
John Brookes says:
June 21, 2012 at 7:17 am
Well, John…if the atmosphere has been warming over the last 15 years, it is not significant enough to proclaim it with such assurance. I could easily look at RSS and claim just the opposite.
Secondly, there is no logic in thinking that something that has no skill over a 23 year time period will suddenly acquire it over a 100 year time period. Are you suggesting that 23 wrongs don’t make a right, but 100 will? If a model is built on false assumptions, it will never make good predictions, except for some occasional curve fitting; a blind-squirrel phenomenon. (See ‘the late 20th century and AGW’ for an example.)
Thirdly, your last statement is the most unfounded. It is derived from the false assumption that is a bedrock of modern environmentalism: Any change to the environment, especially if created by humans, is a negative, and often a crisis. This assumption denies the entire history of the planet, the theory of evolution and, indeed, life itself. Without change, this planet would still be a lifeless rock. But the planet is constantly changing, producing temperature swings far greater than humans can create. The result is a thriving biosphere, particularly when the planet is warm. The vast majority of species do better when the planet is warmer.
Does a warmer (than today) world present some opportunity for adaptation? Certainly. That opportunity is tiny compared to what would be required in a cooling world of equal magnitude. And a warmer world would provide a great many blessings, like dramatically reducing the amount of energy humans need to survive and greatly increasing the available land and season length for food production. It is unfortunate that we face a much greater probability of a cooler world in the centuries ahead. The Holocene could double dip and continue for another 12,000 years, but it is more likely that its days are numbered, according to the recent (5 million year) history of this planet.

ferd berple
June 21, 2012 8:20 am

ferd berple says:
June 21, 2012 at 8:18 am
This new study shows that it is economic activity, not population that drives temperature.
=====
Thus the solution to global warming is to halt economic activity.

ferd berple
June 21, 2012 8:22 am

This study helps explain why the NE USA (the rust belt) has shown long term declining temperatures.

gopal panicker
June 21, 2012 8:27 am

[SNIP: Rude, crude and uninformative. If you have something substantive to say, please do so. -REP]

rgbatduke
June 21, 2012 8:36 am

Koutsoyiannis rocks through again! Could his hydrology papers prove to be the straw that ultimately breaks the back of the GCMs? Backs (to be sure) that are already strongly bent under the stress of the unremitting work done by McKitrick and McIntyre and their collaborators.
One thing is very clear. Climatologists all flunked statistics in college, or else they took just the one mandatory stats class that covers the Gaussian, erf, t-tests and so on, and never quite made it to Bayes’ theorem. What is the prior probability that any of the GCMs are close to correct? No better than the probability that we actually understand the climatological dynamics of the glacial-interglacial transition, since it is always good to understand the gross features of the elephant you are examining before trying to resolve the wart on its butt.
rgb

Jean Parisot
June 21, 2012 9:19 am

This paper, and other measured human-induced climate effects, open a can of worms regarding the spatial statistics associated with the distribution of weather data in the land records, the subsequent samples used for analysis, and the “gridding” methodology. I’ve beaten this drum before: the underlying data mapping, and the analysis thereof, is a problem for climate science.

Ian W
June 21, 2012 9:38 am

Atmospheric temperature is not a metric for atmospheric heat content.
The hypothesized ‘green house effect’ is based on the notion that some gases reduce the rate of heat loss to space.
Minor changes in humidity (which has apparently been reducing) could account for all the atmospheric temperature changes by reducing atmospheric enthalpy.
However esoteric the point – why argue over the incorrect metric?

aaron
June 21, 2012 9:39 am

So, what do the two significant models project for trend GHG accumulation? What about for zero emissions?

aaron
June 21, 2012 9:54 am

“This study helps explain why the NE USA (the rust belt) has shown long term declining temperatures.”
Yeah, what does the surface station data look like when compared to regional CO2 levels and changes?

Russ R.
June 21, 2012 9:56 am

What specifically are the “certain socioeconomic measures” that were tested for their explanatory power?
I can’t find any mention of these details in the abstract, or in McKitrick’s comment in the Financial Post.

JC
June 21, 2012 10:31 am

Why 2002? For that matter, why 1979?

Don Keiller
June 21, 2012 10:55 am

Surrealclimate is not going to like this 🙂

Gary Hladik
June 21, 2012 11:11 am

I’m shocked, shocked I tell you, that the GCMs have so little explanatory power! I mean, they’re run on computers, right? How can computers be wrong???
/sarc

kim2ooo
June 21, 2012 11:19 am

Don Keiller says:
June 21, 2012 at 10:55 am
Surrealclimate is not going to like this 🙂
xxxxxxxxxxxxxxxxxxxxxx
+10

Russ R.
June 21, 2012 11:59 am

At RealClimate, Gavin has already responded to a commenter’s question on this paper. Rather than me trying to paraphrase his points, I’m just going to quote him verbatim:
“McKitrick is nothing if not predictable. He makes the same conceptual error here as he made in McKitrick and Nierenberg, McKitrick and Vogel and McKitrick, McIntyre and Herman. The basic issue is that for short time scales (in this case 1979-2000), grid point temperature trends are not a strong function of the forcings – rather they are a function of the (unique realisation of) internal variability and are thus strongly stochastic. With the GCMs, each realisation within a single model ensemble gives insight into what that internal variability looks like, but McKitrick averages these all together whenever he can and only tests the ensemble mean. Ironically then, the models that provide the greatest numbers of ensemble members with which to define the internal variability, are the ones which contribute nothing to his analysis. He knows this is an error since it has been pointed out to him before and for McKitrick and Nierenberg and McKitrick, McIntyre and Herman he actually calculated the statistics using individual runs. In neither case did those results get included in the papers. The results of those tests in the M&N paper showed that using his exact tests some of the model runs were ‘highly significantly’ (p<0.01!!) contaminated by ‘socio-economic’ factors. This was of course nonsense, and so are his conclusions in this new paper. There are other issues, but his basic conceptual error is a big one from which all the others stem. – gavin”

theduke
June 21, 2012 12:38 pm

I’d like to see RM’s response to Gavin’s critique, which sounds a bit ummmm . . . dodgy.

Ross McKitrick
June 21, 2012 12:48 pm

“for short time scales (in this case 1979-2000)”

Actually 1979-2002 in this paper, 1979-2009 for M,M&H; and 1959-2010 for McKitrick and Vogelsang (not “Vogel”). How can the excuse about models not being meaningful on a short time scale apply to all those different intervals? And when they thought Santer et al. had shown consistency between models and observations over the even shorter 1979-1999 interval, they were all happy to declare the matter resolved.

“With the GCMs, each realisation within a single model ensemble gives insight into what that internal variability looks like, but McKitrick averages these all together whenever he can and only tests the ensemble mean.”

We tested each model individually and in every possible linear combination, including an average of them all. But if Gavin really believes that all a model run shows is “strongly stochastic” interval variability, then how would an accumulation of dozens or hundreds of such runs yield any information, especially if he condemns the average? Lots of people are using GCM runs to make forecasts of climatic changes at the local level. Either these are meaningful or not. If not, fine, let’s say so and throw all those studies in the trash. But if we are supposed to take them seriously, then let’s first test the models against observations and see how well they do.

“He knows this is an error since it has been pointed out to him before”

I’ve responded to published criticism, but this new claim says, in effect, that GCMs should never be compared to observations on any time scale. It is not one I’ve encountered before. If Gavin thinks he has a legitimate point he should send a comment in to the journal.

“some of the model runs were ‘highly significantly’ (p<0.01!!) contaminated by ‘socio-economic’ factors. This was of course nonsense, and so are his conclusions in this new paper.”

The cases Gavin showed in his IJOC paper did not correct for spatial autocorrelation (after he had spent so much ink complaining about the problem in my results, ignoring the fact that it is not a problem in my model but was a real factor in his). And his “significant” coefficients in the model-generated data took the opposite sign to the coefficients estimated on observations, so it is a significant FAIL not a significant replication of the effect. Finally, Gavin ignores the fact that not every model gets equivalent posterior support in the data. The Bayesian Model Averaging accounts for this.

Maus
June 21, 2012 3:13 pm

“But if we are supposed to take them seriously, then let’s first test the models against observations and see how well they do.”
Arguing in favor of empirical validation is a surefire sign of a crank. It’s a wonder you get anything published when spewing such pseudoscientific nonsense.

timetochooseagain
June 21, 2012 3:22 pm

JC – why 1979? Because that is when the satellite records of RSS and UAH start. Why 2002? Because some of the socioeconomic data was originally available only up to then.

Philip Bradley
June 21, 2012 4:25 pm

Gavin Schmidt has said,
“Any single realisation (of a GCM) can be thought of as being made up of two components – a forced signal and a random realisation of the internal variability (‘noise’). By definition the random component will be uncorrelated across different realisations and when you average together many examples you get the forced component (i.e. the ensemble mean).”
At best, the internal variability is the modellers’ estimate of natural climate variability.
Otherwise, the socioeconomic factors are likely proxies for aerosol emissions.
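[A toy sketch of the decomposition in Schmidt’s quote, under the assumption, with purely illustrative magnitudes, that each realisation is a common forced pattern plus independent noise: averaging M members shrinks the noise by roughly 1/sqrt(M), raising the correlation with the forced pattern.]

```python
# Toy version of the decomposition in the quote: each realisation is a
# common forced pattern plus independent internal variability, so an
# M-member ensemble mean keeps the signal while the noise shrinks by
# roughly 1/sqrt(M). All magnitudes here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_cells = 2000  # grid cells in the trend field
forced = rng.normal(0.0, 0.3, n_cells)  # hypothetical forced trend pattern

for m in (1, 5, 20):
    members = forced + rng.normal(0.0, 1.0, (m, n_cells))  # noise sd = 1
    r = np.corrcoef(members.mean(axis=0), forced)[0, 1]
    print(f"M={m:>2}: corr(ensemble mean, forced pattern) = {r:.2f}")
```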

June 21, 2012 4:58 pm

“for McKitrick and Nierenberg and McKitrick, McIntyre and Herman he actually calculated the statistics using individual runs. In neither case did those results get included in the papers. The results of those tests in the M&N paper showed that using his exact tests some of the model runs were ‘highly significantly’ (p<0.01!!) contaminated by ‘socio-economic’ factors.”
What Gavin is claiming is that if you run the model enough times you will eventually get an accurate representation of the actual climate over whatever period.
Well, duh. That’s what randomness will do. More subtly, he is suggesting that the models accurately model natural variability, but he can’t come out and say this, because he is on the record as saying they don’t.

timetochooseagain
June 21, 2012 5:10 pm

Philip Bradley says: “Otherwise, the socioeconomic factors are likely proxies for aerosol emissions.”
If so, then the aerosol effects must be to warm, not cool, since many of the models are negatively correlated with the observed spatial pattern, and those models use aerosols…

AlexS
June 21, 2012 5:35 pm

The pathetic attempts to find meaning in a big pile of noise and chaos with too many variables to be understandable continue…

JC
June 21, 2012 5:50 pm

timetochooseagain: Thanks.

June 21, 2012 7:09 pm

timetochooseagain says:
June 21, 2012 at 5:10 pm
If so, then the aerosol effects must be to warm, not cool, since many of the models are negatively correlated with the observed spatial pattern, and those models use aerosols…

Reduced aerosols warm the surface by increasing solar insolation. And they have declined over most of the world’s land area over the last 40 years or so. Unfortunately satellite measurements only go back about 15 years (about the time warming ended) and miss the main reductions from the 1970s. But they still show modest aerosol reductions over most land.
http://www.atmos-chem-phys-discuss.net/12/8465/2012/acpd-12-8465-2012.html
That GISS and other models incorporate an increasing cooling effect from aerosols over this period is suspect to say the least.
http://data.giss.nasa.gov/modelforce/

June 21, 2012 8:18 pm

Interesting historical analysis of black carbon and organic carbon emissions.
http://www.cee.mtu.edu/~nurban/classes/ce5508/2008/Readings/Bond07.pdf
Note Table 4: large reductions in coal emissions in Europe, N. America and the former USSR since 1950, with a huge increase in China.
There have been similar reductions in vehicle emissions since the mid-1970s.

ferd berple
June 21, 2012 9:44 pm

aaron says:
June 21, 2012 at 9:54 am
Yeah, what does the surface station data look like when compared to regional CO2 levels and changes?
=============
NE USA. CO2 increasing, industrial activity decreasing, temp decreasing.

RobertInAz
June 22, 2012 1:02 pm

Over at Judith Curry’s place, Steve Mosher points out some issues with the data for island territories – they get the same population and GDP as the parent country. He further states:
“since there is a “flag” for land and water he may just set this to zero at the end. However, he has significant populations in Antarctica and his method gives Alaska the same population density as the continental US.”
http://judithcurry.com/2012/06/21/three-new-papers-on-interpreting-temperature-trends/#comment-211553
The bad data for the remote locations may get filtered downstream, but it does not look good. I would hope that the socioeconomic model had greater resolution than country for large countries.

RobertInAz
June 22, 2012 1:07 pm

Since Greenland is an island territory of Denmark, I would be very interested in how it was handled – though not interested enough to figure it out myself 😐

June 22, 2012 2:11 pm

RobertInAz
Ross indicates that he drops Antarctica from the analysis. However, the problem goes beyond that. What Ross effectively does is this:
The population density in each 5-degree by 5-degree cell is “modelled” as follows:
Density of cell = total country population / total country land area
That means Alaska has the same population density as New York, and if an island that belongs to France or England or the US is in the data it gets overestimated as well.
The other issue will be coastal cells.
The concentration of industry and people in specific areas CAN cause UHI, but the only way to tease that out is to use data at the right resolution. The temperature cells are 5 degrees by 5 degrees. But the population data is modelled as if population were uniformly distributed over the entire land area. We know this to be false. Put another way, Ross’s population density is not population density. It’s something else that defies definition.
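[A minimal sketch of the allocation being described, with made-up numbers: every land cell attributed to a country receives the same national-average density, so sparsely and densely populated cells become indistinguishable.]

```python
# Sketch of the allocation being criticised: every land cell attributed
# to a country gets density = national population / national land cells,
# erasing the difference between Alaska and New York. Toy numbers only.
country_pop = {"USA": 310.0}   # millions (illustrative)
country_cells = {"USA": 400}   # 5x5-degree land cells (illustrative)

uniform_density = country_pop["USA"] / country_cells["USA"]  # same for every cell

actual = {"new_york_cell": 8.0, "alaska_cell": 0.02}  # made-up truth, millions/cell
for cell, truth in actual.items():
    print(f"{cell}: modelled {uniform_density:.3f} M/cell vs actual {truth} M/cell")
```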