Pielke Sr. on sampling error in BEST 2% preliminary results

Is There A Sampling Bias In The BEST Analysis Reported By Richard Muller?

Guest post by Dr. Roger Pielke Senior

In his testimony (which I posted on Friday, April 2, 2011), Richard Muller indicated that he used 2% of the available surface stations that measure temperature in the BEST assessment of long-term trends. It is important to realize that the sampling is still biased if a preponderance of his data sources comes from a subset of the actual landscape types: the sample will necessarily be skewed toward those sites.

If the BEST data came from a different distribution of locations than the GHCNv.2, his results would add important new insight into the temperature trend analyses. If they have the same spatial distribution, however, they would not add anything beyond confirming that NCDC, GISS and CRU properly used the collected raw data.

We discuss this bias in station locations in our paper

Montandon, L.M., S. Fall, R.A. Pielke Sr., and D. Niyogi, 2011: Distribution of landscape types in the Global Historical Climatology Network. Earth Interactions, 15:6, doi: 10.1175/2010EI371

The abstract reads [highlight added]

“The Global Historical Climate Network version 2 (GHCNv.2) surface temperature dataset is widely used for reconstructions such as the global average surface temperature (GAST) anomaly. Because land use and land cover (LULC) affect temperatures, it is important to examine the spatial distribution and the LULC representation of GHCNv.2 stations. Here, nightlight imagery, two LULC datasets, and a population and cropland historical reconstruction are used to estimate the present and historical worldwide occurrence of LULC types and the number of GHCNv.2 stations within each. Results show that the GHCNv.2 station locations are biased toward urban and cropland (>50% stations versus 18.4% of the world’s land) and past century reclaimed cropland areas (35% stations versus 3.4% land). However, widely occurring LULC such as open shrubland, bare, snow/ice, and evergreen broadleaf forests are underrepresented (14% stations versus 48.1% land), as well as nonurban areas that have remained uncultivated in the past century (14.2% stations versus 43.2% land). Results from the temperature trends over the different landscapes confirm that the temperature trends are different for different LULC and that the GHCNv.2 stations network might be missing on long-term larger positive trends. This opens the possibility that the temperature increases of Earth’s land surface in the last century would be higher than what the GHCNv.2-based GAST analyses report.”
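The over- and under-representation described in the abstract can be expressed as simple ratios (a landscape class's share of GHCNv.2 stations divided by its share of the world's land area). A quick sketch using only the figures quoted in the abstract:

```python
# Representation ratios implied by the abstract's figures:
# (share of GHCNv.2 stations, share of world land area), in percent.
classes = {
    "urban + cropland":                  (50.0, 18.4),
    "reclaimed cropland (past century)": (35.0, 3.4),
    "shrubland/bare/snow/forest":        (14.0, 48.1),
    "uncultivated non-urban":            (14.2, 43.2),
}

for name, (stations_pct, land_pct) in classes.items():
    ratio = stations_pct / land_pct
    print(f"{name}: {ratio:.1f}x represented")
```

A ratio above 1 means over-sampled; below 1, under-sampled. Past-century reclaimed cropland comes out roughly tenfold over-represented, while the widely occurring uncultivated classes are sampled at about a third of their land share.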

This derived surface temperature trend is higher than what BEST found. However, this also means that the divergence between the surface temperature trends and the lower tropospheric temperature trends that we found in

Klotzbach, P.J., R.A. Pielke Sr., R.A. Pielke Jr., J.R. Christy, and R.T. McNider, 2009: An alternative explanation for differential temperature trends at the surface and in the lower troposphere. J. Geophys. Res., 114, D21102, doi:10.1029/2009JD011841.

is even higher. This difference suggests that unresolved issues, including a likely systematic warm bias, remain in the analysis of long-term surface temperature trends.

April 4, 2011 8:46 am

2 percent of ~1.9 billion records is more than enough for statistical inference. I wouldn’t expect results to change drastically.

April 4, 2011 9:34 am

I think you missed the point. If the population of readings over-represents urban areas, say, then a random sample (no matter how large or small) will not fix the problem.
You would indeed be able to make valid statistical inference about the *population of readings*, however, this would not be invalid statistical inference about the world.
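A toy simulation (all numbers hypothetical, chosen only for illustration) makes this point concrete: if urban stations warm faster and dominate the population of readings, a random subsample of any size faithfully recovers the population mean, not the true area-weighted mean.

```python
import random

random.seed(0)

# Hypothetical trends (degC/decade): urban stations warm faster.
URBAN_TREND, RURAL_TREND = 0.30, 0.15
URBAN_AREA_FRAC = 0.03       # urban fraction of the land area
URBAN_STATION_FRAC = 0.50    # urban fraction of the station network

# Population of 100,000 station trends, biased toward urban sites.
population = [URBAN_TREND if random.random() < URBAN_STATION_FRAC
              else RURAL_TREND for _ in range(100_000)]

# A 2% random sample faithfully reproduces the *population* mean...
sample = random.sample(population, 2_000)
sample_mean = sum(sample) / len(sample)

# ...but the true area-weighted trend is quite different.
true_mean = (URBAN_AREA_FRAC * URBAN_TREND
             + (1 - URBAN_AREA_FRAC) * RURAL_TREND)

print(f"sample mean:        {sample_mean:.3f}")   # ~0.225
print(f"area-weighted mean: {true_mean:.3f}")     # ~0.155
```

Enlarging the sample from 2% to 100% only tightens the estimate around the wrong (network-weighted) value; it does nothing about the siting bias itself.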

April 4, 2011 9:38 am

This paper is a useful contribution to the discussion on the validity of temperature estimates. As anyone who has seen as many climate stations in as many continents as I have knows – the difficulty is not in the algorithm for global temperature but in the stations themselves. We don’t even know with any accuracy where the stations are at present (only to within 1 km radius). We have even less information on where they were in the past. The preliminary results of BEST give very similar answers to current global temperature estimates whilst using only 2% of the data randomly selected. If this 2% had been based on stations with long records and good metadata the results would have been more meaningful.
A second point rarely mentioned is precipitation. It is accepted that an increase in temperature greater than 1C is only possible with water vapour feedback, but whether the extra vapour remains in the atmosphere, forms clouds or falls as rain is an open question. GCMs appear to be much worse at simulating precipitation than temperature. (http://www.climatedata.info/Precipitation/Precipitation/global.html). What we also need is a good record of precipitation to test them against.

Lady Life Grows
April 4, 2011 10:01 am

james put it beautifully about what population is represented in the preliminary sample.
One thing I was reminded of here some months ago is the universal gas law PV=nRT
We know there is a hockey stick rise in urban temperatures–which generally benefits the life in the cities. The world as a whole is another matter. If worldwide temperatures have risen, say 1 degree C since 1900, then the value of 1 atm at sea level should also have changed. A correction would be needed for the average sea-level rise per century, via the inverse-square law of gravitational force (which would be in the fourth decimal place or so; the temperature change would produce a result in the third decimal place).
Is there any measurable change in atmospheric pressure over the last 100 years? How about the last 30 years?
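For what it is worth, the comment's PV = nRT arithmetic can be worked through under its own implicit assumption of a fixed volume and amount of gas (so that pressure scales linearly with absolute temperature); this is the commenter's premise, not a claim about how surface pressure is actually set:

```python
# Back-of-envelope version of the comment's PV = nRT arithmetic,
# assuming fixed volume and amount of gas, so P scales with T.
P0 = 1013.25     # hPa, standard sea-level pressure (1 atm)
T0 = 288.0       # K, a typical global mean surface temperature
dT = 1.0         # K, assumed warming since 1900

dP = P0 * dT / T0   # implied pressure change
print(f"implied change: {dP:.1f} hPa ({dP / P0:.4f} atm)")
```

This gives roughly 3.5 hPa, i.e. about 0.0035 atm, consistent with the comment's "third decimal place" estimate.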

Jeff Carlson
April 4, 2011 10:04 am

I would suggest that using 2% when 100% is available is lying via statistics …

April 4, 2011 10:51 am

whoops…the above should read:
“however, this would not be VALID statistical inference about the world.”
sorry about the typo.

April 4, 2011 11:22 am

This is similar to oversampling certain demographic groups at mid-day on Election Day, and then leaking those skewed results, in an attempt to discourage voters from going to the polls.
Does 2004 ring a bell?

April 4, 2011 11:53 am

The sampling number becomes irrelevant to the validity of results if you've improperly generated the sub-samples you're doing the statistical analysis on, especially when the sub-samples have real-world, geographically distinct meanings. This is unfortunately a very easy thing to mess up, and one of the issues they were supposed to be controlling for.

April 4, 2011 12:06 pm

With the 91% of the land surface that lies in forests and uncultivated non-urban areas represented by only 28% of the stations (which are never actually sited there, but in nearby towns), the bias in the GHCN data base should be self-evident. But how this implies that actual GST increases in the last century might be even higher than indicated by the urban-biased GHCN is unclear. Perhaps Roger Pielke can explain.

April 4, 2011 12:35 pm

Aaron’s assertion is problematic to say the least. It is a bit like calculating the average weight of humans by randomly sampling all those who attend soccer matches around the world. It is the classic Truman-Dewey situation.
The Pielke argument boils down to the need to sample in a way that ensures all environments/conditions that are believed likely to influence the temperature trend are adequately sampled and weighted before making any pronouncements. Only if there is very little variance in the temperature trend by individual station record would a generalization be in order. Since we know there is considerable variability in the trends among stations then random sampling from the entire sample is simply wrong.

April 4, 2011 12:51 pm

Why is everybody so obsessed with these surface temperature records? There is so much more that proves AGW than just that…

mindert eiting
April 4, 2011 1:31 pm

Random sampling is not the problem. They should have used stratified sampling. The GHCN base is biased in numerous respects, because it is an historical database. Besides the aspects mentioned by Pielke Sr., we have the obvious bias that almost 70 percent of the stations reside in the middle region of the Northern Hemisphere. Show us the trends for latitude segments, to begin with.
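The stratified approach suggested here can be sketched as follows (illustrative numbers only, borrowing the 18.4% urban/cropland land share quoted in the post's abstract): compute the mean trend within each stratum, then weight by land area rather than by station count.

```python
import random

random.seed(1)

# Land-area weights per stratum (urban/cropland share from the
# abstract; all station trends below are synthetic illustrations).
area_share = {"urban/cropland": 0.184, "other land": 0.816}

# Hypothetical station trends, heavily oversampled in urban/cropland.
stations = (
    [("urban/cropland", random.gauss(0.30, 0.05)) for _ in range(500)]
  + [("other land",     random.gauss(0.15, 0.05)) for _ in range(150)]
)

# Unweighted mean reflects the network, not the land surface.
unweighted = sum(t for _, t in stations) / len(stations)

# Stratified estimate: per-stratum mean, weighted by land area.
stratified = 0.0
for stratum, weight in area_share.items():
    trends = [t for s, t in stations if s == stratum]
    stratified += weight * sum(trends) / len(trends)

print(f"unweighted mean:      {unweighted:.3f}")
print(f"stratified estimate:  {stratified:.3f}")
```

With the warm stratum oversampled, the unweighted mean lands near the urban trend while the stratified estimate sits much closer to the area-weighted truth. The same reweighting logic applies to latitude bands.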

Theo Barker
April 4, 2011 2:01 pm

flavio says:

Why is everybody so obsessed with these surface temperature records? There is so much more that proves AGW than just that…

Oh, you mean things like more snow and less snow, more rain and less rain, more typhoons and less typhoons, more droughts and less droughts, more floods and less floods, snow cap receding on Kilimanjaro, Himalayan glaciers disappearing by 2035, upper tropospheric hot spots, accelerating sea-level rise, coral bleaching, sand bars sinking, etc… That’s right and it’s all GWB’s fault.

grandpa boris
April 4, 2011 2:11 pm

“Why is everybody so obsessed with these surface temperature records? There is so much more that proves AGW than just that…”
Why is everybody so obsessed? Because when a claim of warming is made, that claim has to be backed up by some evidence of warming. Comparing surface temperature records is a meaningful means to confirm or controvert the claim that the climate across the whole planet is tilting toward warming.
Once the surface temperature record confirms that the warming is happening, the question becomes “is this an extraordinary, unnaturally rapid warming?” This question can also be answered in a meaningful way for the last 200 or so years through the use of the surface temperature record. This is a contentious subject because the majority of surface temperature monitoring stations are in the urban areas, so will be affected by the UHI effect. Separating the local land use and land cover effects on the local micro-climate from the planet-wide warming is critically important.
Then there is a question of the “A” in “AGW”. We’re in the interglacial warm period and have been in it for the last 8-10K years. The planet is supposed to be warming up. Is the current warming trend accelerated by human activity or not, and if yes, then by how much? Are the sea levels rising faster than they were over the last 1000 years? Are the temperatures spiking higher and faster than they did over the last 1000 years?
Let’s say you were a short-lifespan creature capable of cognition and observation, born at the beginning of spring, in March, when the snow is only beginning to melt, and you had observed the warming of the weather into the first heat wave of summer. Without a historical record you might be forgiven for assuming that the warming will keep going at the same pace, 10C/month, until by next March the temperature will be 120C, the oceans will mostly boil away, and Earth will look much like Venus. If you had chanced to be living in an exposed parking lot, your prediction might be even more dire.
This is what we are seeing today in the climate science. Extrapolations are made on the basis of insufficient data. Predictions are made based on poor models, spotty understanding of the natural processes, and generalization of local effects. Add to this the outright fabrication of data (Briffa, Mann, Jones, etc.) and scare-mongering (Gore, Hansen, etc.) and what do we get? We get that there is very little that gives credence to the A in the AGW, while making the W much less than it’s made out to be by the alarmists.

April 4, 2011 2:17 pm

flavio says:
April 4, 2011 at 12:51 pm

“Why is everybody so obsessed with these surface temperature records? There is so much more that proves AGW than just that…”

Unfortunately, there isn’t anything that “proves” AGW at all. Lots of claims exist, but no data, none, zero, zip, bupkis. You can’t prove AGW without data, measured, real data. Global Warming is another thing, there has been at least 300 years of warming, and I’m hoping it continues – another Little Ice Age would really screw up my retirement.

April 4, 2011 2:18 pm

Moderator, please correct the blockquote cite I used. I clearly used it improperly.
[Reply: Just use the blockquote command, don’t bother with the cite. ~dbs]

Tom T
April 4, 2011 2:19 pm

Flavio Really like what?

April 4, 2011 2:49 pm

FWIW, I agree 100%

April 4, 2011 2:59 pm

Let’s stop talking about Man-Made Global Warming, Man-Made Climate Change, or whatever name you want to give it. We all know it is nonsense. We all know it is about power. Global power. Controlling you and me. All your readers know it’s a big lie. We all know it’s natural and that CO2 has nothing to do with it. So it is better to investigate the plan behind the lie.
Listen to Dr Michael S. Coffman PhD
Author – Rescuing a broken America

A Night with Michael Coffman – Part 1
There are 11 parts

Rob Honeycutt
April 4, 2011 3:08 pm

If there were a statistically significant error based on some aspect of the selection of the 2% of data then, think about this for just a moment… what are the chances that the error would match all the other existing data sets?

Atomic Hairdryer
April 4, 2011 3:08 pm

Re: Aaron says: April 4, 2011 at 8:46 am

2 percent of ~1.9 billion records is more than enough for statistical inference. I wouldn’t expect results to change drastically.

Try it with 2% of a DNA sample, especially if the sequences you’re given are cherry picked. That 2% may be the difference between man and mouse.

April 4, 2011 7:35 pm

Ron Manley, you said on April 4, 2011 at 9:38 am:
“What we also need is a good record of precipitation to test them against”
You should start with the Sydney Australia record from Observatory Hill, in the heart of the city where there is a continuous monthly rainfall record since 1859.
This shows that rainfall in Sydney varies widely from year to year (annual summary data) or even for large groups of years, but that the long term trend is completely flat.
Temperature also followed this pattern from the start for over 90 years until a change in the built environment caused UHI to introduce a trend type increase each year.
Ditto Adelaide, the capital of the state of South Australia – and yes there’s more ……
In my opinion, rainfall rather than temperature, is the key to understanding the climate, at least in Australia.
You have to dig deep to get the true temperature figures, location by location. Development of global indexes is dramatic stuff but it is doubtful if doing it the macro way will ever produce the true history of temperature.

April 4, 2011 9:24 pm

2%? Two percent??
If that ain’t the little tail trying to wag the dog, I don’t know what is.
Reminds me of the “correction” factor applied to global sea level measurements based upon just a few tide gauges…or the way GISS “fakes” thousands of square miles in the arctic.
How stupid do these people think we are?
And how long do they think we are going to tolerate such complete perversions of the scientific method?
They rush stuff to Congress when it is expedient for them to do so….and then they stall peer review [take Lindzen’s paper which has been sitting on ice for months at Science], when it isn’t!!
Grrrrr. Time to light the torches.
Norfolk, VA, USA

michael fellion
April 5, 2011 12:21 am

The comment on rainfall really strikes to the heart of the matter. The amount of rain would increase with temperature, as the waters would evaporate more. If it did not come down as rain, we would have a long-term increase in humidity. We have rainfall measurements going back a long time for a large part of the world. Why is that data ignored by the guys claiming man is changing the environment?

April 5, 2011 5:40 am

2% is certainly a large enough sample if the population is uniformly distributed with respect to the factors influencing the variable of interest – think of the samples for political polling. However, as is the case with political polling, you can get very different results if you poll “likely voters” as opposed to “registered voters”. Roger Pielke is basically saying that Muller should at least demonstrate that his sampling accounts for the non-uniformity of factors influencing temperature trends. Since he did not, his general comments as to the trend should have contained a more extensive and specific list of caveats.

April 5, 2011 7:11 am

It’s pretty simple really. Muller can choose a few different 2% samples and see if he gets the same result. If he does, then it looks like 2% is enough. If not, then 2% is not enough.
However, it hardly seems surprising that Muller finds what everyone else finds. Thus far it seems that the land based temperature record is well and truly verified. Dare I say it, this particular bit of the science is settled?
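The repeated-subsample check suggested above is easy to sketch (synthetic station trends stand in for the real BEST data; the ~39,000 station count is the rough network size implied by 2% being a workable sample):

```python
import random

random.seed(42)

# Synthetic population of per-station trends (degC/decade).
population = [random.gauss(0.2, 0.1) for _ in range(39_000)]

# Draw several independent 2% subsamples and compare estimates.
estimates = []
for _ in range(5):
    sample = random.sample(population, len(population) // 50)  # 2%
    estimates.append(sum(sample) / len(sample))

spread = max(estimates) - min(estimates)
print([f"{e:.3f}" for e in estimates], f"spread={spread:.3f}")
```

One caveat: agreement among subsamples only rules out sampling error. It cannot detect a bias, such as the landscape-type skew discussed in the post, that every subsample inherits from the full network.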

April 5, 2011 11:15 am

Aaron says:
April 4, 2011 at 8:46 am
2 percent of ~1.9 billion records is more than enough for statistical inference. I wouldn’t expect results to change drastically.
If I set out to establish a graph of the running national mean household income, but then geographically limit my data selection to urban areas, will I be able to trust my final result?
Of course not.

Al Tekhasski
April 5, 2011 2:34 pm

If your initial set of data is biased by placement of sensors to areas with anthropogenic development (as almost ALL stations are), no shuffling/re-selection of subsets can prove anything. You need NEW set of stations, more dense sampling.
Same goes for rainfall. The rain data have exactly the same problem as the ground stations – insufficient sampling density. Given the fractal character of cloudiness and the associated rainfall patterns, one station is not going to sample the amount of rainfall correctly for an area. For example, during a thunderstorm front crossing, one part of town can get 2″ of rain while another part gets nearly zero. With one sensor/observatory you never know, even if your (single-spot) records go back 150 years. Theoretically the randomness of weather should give you a proper statistical estimate over time, but the question is whether there are enough weather events in a given season. There are not. So all global rainfall data are as bogus as the ground temperature data.
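The gauge-density point can be illustrated with a toy model (all parameters invented for illustration): if each storm wets only part of the area, a single gauge's seasonal total can differ noticeably from the true area average even after many events.

```python
import random

random.seed(7)

# Toy patchy-rain model: each storm wets a fraction of the area.
N_EVENTS = 50          # storms in a season
CELL_COVERAGE = 0.3    # fraction of the area wetted by each storm
RAIN_PER_EVENT = 2.0   # inches of rain where it does fall

# True area-average total: every storm wets 30% of the area.
area_total = N_EVENTS * CELL_COVERAGE * RAIN_PER_EVENT

# A single gauge only catches storms whose wetted cell covers it.
gauge_total = sum(RAIN_PER_EVENT for _ in range(N_EVENTS)
                  if random.random() < CELL_COVERAGE)

print(f"area average: {area_total:.1f} in, one gauge: {gauge_total:.1f} in")
```

The gauge is unbiased in expectation, but with only ~50 events per season its total carries a standard deviation of several inches, so one site's record is a noisy proxy for its region even over many years.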

April 5, 2011 11:37 pm

The papers from the large debate around the McShane & Wyner paper last year have finally been published. Would you please list them on your site, Dr. Pielke? Comments are not allowed there.
Annals of Applied Statistics, Vol. 5, No.1.

April 6, 2011 2:46 am

For the cost of one green power station you could readily create small climate monitoring stations and disperse them about the globe to the satisfaction of all parties and in 30 years have a definitive answer on whether the globe is actually warming or not. The fact that nobody is even talking about such a project but instead relying entirely on the dubious data of a number of sources never intended for climate monitoring tells you all you need to know about the AGW scam and the people that support it.
What we need is a team of sceptical scientists to get together and propose such a scheme, then give it a name like “Global Climate Monitoring Network” and then ram it down the throats of Team AGW whenever they speak – “Why don’t you support the GCMN?”, “Why do you rely on unreliable data when GCMN would give us definitive answers?” “Why are you trying to measure temperature from hundreds of miles in space when GCMN would tell us the temperature here on earth for a fraction of the cost” etc. etc. etc.
Team AGW are scared of real data – they will run a mile if you push them to accept that real data is needed.
