The Big Valley: Altitude Bias in GHCN

Foreword: The focus of this essay is strictly altitude placement/change of GHCN stations. While challenge and debate of the topic are encouraged, please don't let the discussion drift into other side issues. As noted in the conclusion, there remain two significant issues that have not been fully addressed in GHCN. I believe a focus on those issues (particularly UHI) will best serve to advance the science and understanding of what GHCN in its current form is measuring and presenting, post-processing. – Anthony

Tibet valley, China. Image from Asiagrace.com

By Steven Mosher, Zeke Hausfather, and Nick Stokes

Recently on WUWT Dr. McKitrick raised several issues with regard to the quality of the GHCN temperature database. However, McKitrick does note that the methods of computing a global anomaly average are sound. That is essentially what Zeke Hausfather and I showed in our last WUWT post. Several independent researchers are able to calculate the global anomaly average with only minor differences between their results.

GISS, NCDC, CRU, JeffId/RomanM, Tamino, ClearClimateCode, Zeke Hausfather, Chad Herman, Ron Broberg, Residual Analysis, and MoshTemp all generally agree. Given the GHCN data, the answer one gets about the pace of global warming is not in serious dispute. Whether one extrapolates as GISS does or not, whether one uses a least squares approach or a spatial averaging approach, whether one selects a 2 degree bin or a 5 degree bin, whether one uses an anomaly period of 1961-1990 or 1953-1982, the answer is the same for virtually all practical purposes. Debates about methodology are either a distraction from the global warming issues at hand or they are specialist questions that entertain a few of us. Those specialist discussions may refine the answer or express our confidence in the result more explicitly, but the methods all work and agree to a high degree.

As we noted before, the discussion should therefore turn to and remain focused on the data issues. How good is GHCN as a database and how serious are its shortcomings? As with any dataset, those of us who analyze data for a living look for several things: errors, bias, sampling characteristics, and adjustments. Dr. McKitrick's recent paper covers several topics concerning the makeup of and changes in the GHCN temperature data. In particular he covers changes over time in the sampling of GHCN stations. He repeats a familiar note: over time the stations representing the temperature data set have changed. There is, as most people know, a fall-off in stations reporting shortly after 1990 and then again in 2005. To be sure, there are other issues that he raises as well. Those issues, such as UHI, will not be addressed here. Instead, the focus will be on one particular issue: altitude. We confine our discussion to that narrow point in order to remove misunderstandings and refocus the issue where it rightly belongs.

McKitrick writes:

Figure 1-8 shows the mean altitude above sea level in the GHCN record. The steady increase is consistent with a move inland of the network coverage, and also increased sampling in mountainous locations. The sample collapse in 1990 is clearly visible as a drop not only in numbers but also in altitude, implying the remote high-altitude sites tended to be lost in favour of sites in valley and coastal locations. This happened a second time in 2005. Since low-altitude sites tend to be more influenced by agriculture, urbanization and other land surface modification, the failure to maintain consistent altitude of the sample detracts from its statistical continuity.

There are several claims here.

  1. The increase in altitude is consistent with a move inland of network coverage.
  2. The increase in altitude is consistent with increased sampling in mountainous locations.
  3. Low-altitude sites tend to be more influenced by agriculture, urbanization, and other land surface modification.

A simple study of the metadata available in the GHCN database shows that the stations that were dropped do not have the characteristics that McKitrick supposes. As Nick Stokes documents, the process of dropping stations is more related to dropping coverage in certain countries than to a direct effort to drop high altitude stations. McKitrick also gets the topography specifics wrong. He supposes that the drop in thermometers shifts the data out of mountainous inland areas into the valleys and low-level coastal areas, areas dominated by urbanization and land use changes. That supposition is not entirely accurate, as a cursory look at the metadata shows.

There are two significant periods when stations are dropped: post-1990 and again in 2005, as Stokes shows below.

FIGURE 1: Station drop and average altitude of stations.

The decrease in altitude is not caused by a move into valleys, lowlands, and coastal areas. As the following figures show, the percentage of coastal stations is stable, mountainous stations are still represented, and the altitude loss more likely comes from the move out of mountain valleys.

A simple summary of the total inventory shows this:

ALL STATIONS   Count   Total   Percent
Coastal         2180    7280    29.95
Lake             443    7280     6.09
Inland          4657    7280    63.97

TABLE 1: Count of stations by coastal classification

The greatest drops in stations occur in the 1990-1995 period and again in 2005-06, as shown above. McKitrick supposes that the drop in altitude means a heavier weighting for coastal stations. The data do not support this:

Dropped 1990-95   Count   Total   Percent
Coastal             487    1609    30.27
Lake                 86    1609     5.34
Inland             1036    1609    64.39

Dropped 2005-06   Count   Total   Percent
Coastal             104    1109     9.38
Lake                 77    1109     6.94
Inland              928    1109    83.68

TABLE 2: Count of dropped stations by coastal classification
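The percentages in Tables 1 and 2 are simple tallies. A minimal sketch of that computation (the class labels here are toy data mirroring Table 1's proportions, not the actual GHCN v2 inventory format):

```python
# Tally station classes and express each as a percentage of the total.
from collections import Counter

def class_percentages(labels):
    """Map each class label to its percentage of the total station count."""
    counts = Counter(labels)
    total = len(labels)
    return {k: round(100.0 * v / total, 2) for k, v in counts.items()}

# Toy inventory mirroring the proportions in Table 1
inventory = ["Coastal"] * 2180 + ["Lake"] * 443 + ["Inland"] * 4657
print(class_percentages(inventory))
# {'Coastal': 29.95, 'Lake': 6.09, 'Inland': 63.97}
```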

The great march of the thermometers was not a trip to the beach. Neither was the drop in altitude the result of losing a higher percentage of  “mountainous” stations.

FIGURE 2: Distribution of Altitude for the entire GHCN Inventory

Minimum   1st Qu   Median   Mean    3rd Qu   Max      NA
-224.0    38.0     192.0    419.9   533.0    4670.0   142

TABLE 3: Descriptive statistics for the altitude (m) of the entire dataset

We can assess the claim about the march of thermometers down the mountains in two ways. First, by looking at the actual distribution of dropped stations.

FIGURE 3 Distribution of altitude for stations dropped in 1990-95

Minimum   1st Qu   Median   Mean    3rd Qu   Max      NA
-21.0     40.0     183.0    441.0   589.2    4613.0   29

TABLE 4: Descriptive statistics for the altitude (m) of stations dropped 1990-95

The characteristics of stations dropped in the 2005 time frame are slightly different. That distribution is depicted below.

FIGURE 4 Distribution of altitude for stations dropped in 2005-06

Minimum   1st Qu   Median   Mean    3rd Qu   Max      NA
-59.0     143.0    291.0    509.7   681.0    2763.0   0

TABLE 5: Descriptive statistics for the altitude (m) of stations dropped 2005-06

The mean altitude of those dropped is slightly higher than that of the average station. That hardly supports the contention of thermometers marching out of the mountains. We can put this issue to rest with the following observation from the metadata. GHCN metadata captures the topography surrounding the stations. There are four classifications, FL, HI, MT, and MV: flat, hilly, mountain, and mountain valley. The table below hints at what was unique about the dropout.

Type              Entire Dataset   Dropped 1990-95   Dropped 2005-06   Both drops combined
Flat                        2779         455 (16%)         504 (18%)           959 (35%)
Hilly                       3006         688 (23%)         447 (15%)          1135 (38%)
Mountain                      61          15 (25%)            3 (5%)            18 (30%)
Mountain Valley             1434         451 (31%)         155 (11%)           606 (42%)

TABLE 6 Station drop out by topography type

There wasn't a shift into valleys as McKitrick supposes; rather, mountain valley sites were dropped. Thermometers left the flatlands and the mountain valleys. That resulted in a slight decrease in the overall altitude.

That brings us to McKitrick's third critical claim: that the dropping of thermometers overweights places more likely to suffer from urbanization and differential land use. "Low level sites tend to be influenced by agriculture, urbanization and other land use modifications." The primary concern that Dr. McKitrick voices is that the statistical integrity of the data may have been compromised. That claim needs to be turned into a testable hypothesis. What exactly has been compromised? We can think of two possible concerns. The first is that by dropping higher altitude mountain valley stations one is dropping stations that are colder. Since temperature decreases with altitude this would seem to be a reasonable concern. However, it is not. Some people make this claim, but McKitrick does not, because he is aware that the anomaly method prevents this kind of bias. When we create a global anomaly we prevent this kind of bias from entering the calculation by expressing each station's measurements as departures from that station's own mean. Thus, a station located at 4000 m may be at -5C, but if that station is always at -5C its anomaly will be zero. Likewise, a station below sea level in Death Valley that is constantly 110F will also have an anomaly of zero. The anomaly captures the departure from the mean of that station.

What this means is that as long as high altitude stations warm or cool at the same rate as low altitude stations, removing them or adding them will not bias the result.
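A minimal sketch of the point, with toy numbers rather than GHCN data: a station that never changes contributes zero anomaly regardless of how cold or hot its absolute readings are.

```python
def anomalies(series):
    """Express a station's readings as departures from its own mean."""
    mean = sum(series) / len(series)
    return [t - mean for t in series]

high_cold = [-5.0, -5.0, -5.0]   # 4000 m station, constantly -5 C
low_hot   = [43.0, 43.0, 43.0]   # Death Valley station, constantly ~110 F (43 C)
print(anomalies(high_cold))  # [0.0, 0.0, 0.0]
print(anomalies(low_hot))    # [0.0, 0.0, 0.0]
```

Only if the two stations warm or cool at different rates do their anomalies diverge, which is exactly what the tests below check.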

To answer the question of whether dropping or adding higher altitude stations impacts the trend we have several analytical approaches. First, we could add stations back in. But we cannot add back GHCN stations that were discontinued. The alternative is to add stations from other databases. Those studies indicate that adding additional stations does not change the trends:

http://www.yaleclimatemediaforum.org/2010/08/an-alternative-land-temperature-record-may-help-allay-critics-data-concerns/

http://moyhu.blogspot.com/2010/07/using-templs-on-alternative-land.html

http://moyhu.blogspot.com/2010/07/arctic-trends-using-gsod-temperature.html

http://moyhu.blogspot.com/2010/07/revisiting-bolivia.html

http://moyhu.blogspot.com/2010/07/global-landocean-gsod-and-ghcn-data.html

The other approach is to randomly remove more stations from GHCN and measure the effect. If we fear that GHCN has biased the sample by dropping higher altitude stations, we can drop more stations and measure the effect. There are two ways to do this: a Monte Carlo approach, and an approach that divides the existing data into subsets.

Nick Stokes has conducted the Monte Carlo experiments. In his approach stations are randomly removed and global averages are recomputed. Stations were removed based on a randomization approach that preferentially removed high altitude stations. This test gives us an estimate of the standard error as well.

Period          Trend of All   Re-Sampled   s.d.
1900-2009             0.0731       0.0723   0.00179
1979-2009             0.2512       0.2462   0.00324
Mean Altitude          392 m        331 m

Table 7 Monte Carlo test of altitude sensitivity

This particular test consists of selecting all the stations whose series end after 1990. There are 4814 such stations. The sensitivity to altitude reduction was tested by preferentially removing higher altitude stations at random. The results indicate little to no interaction between altitude and temperature trend in the stations whose series end after 1990.
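A sketch of this kind of Monte Carlo test, for illustration only (this is not Stokes's actual code, and the altitude-weighted drop probability is an assumption): each trial removes stations with probability increasing with altitude and averages the trends of the survivors.

```python
import random
import statistics

def resample_trend(stations, n_trials=200, seed=0):
    """stations: list of (altitude_m, trend) tuples.

    Each trial drops a station with probability proportional to its
    altitude (capped at 0.5), then averages the trends of the survivors.
    Returns (mean of trial averages, spread across trials).
    """
    rng = random.Random(seed)
    max_alt = max(alt for alt, _ in stations)
    trial_means = []
    for _ in range(n_trials):
        kept = [trend for alt, trend in stations
                if rng.random() > 0.5 * alt / max_alt]  # high sites dropped more often
        if kept:
            trial_means.append(statistics.mean(kept))
    return statistics.mean(trial_means), statistics.pstdev(trial_means)

# If every station has the same trend, dropping high sites cannot bias it.
flat = [(alt, 0.25) for alt in range(0, 4000, 40)]
mean, sd = resample_trend(flat)
print(round(mean, 4), round(sd, 6))  # 0.25 0.0
```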

The other approach, dividing the sample, was carried out in two different ways by Zeke Hausfather and Steven Mosher. Hausfather approached the problem using a paired approach. Grid cells are selected for processing if they have stations both above and below 300 m. This eliminates cells that are represented by a single station. Series are then constructed for the stations that lie above 300 m and below 300 m.

Period      Elevation > 300 m   Elevation < 300 m
1900-2009                0.04                0.05
1960-2009                0.23                0.19
1978-2009                0.34                0.28

Table 8. Comparison of trend versus altitude for paired station testing

FIGURE 5: Comparison of temperature anomaly for above-mean and below-mean stations

This test indicates that higher elevation stations tend to see higher rates of warming rather than lower rates of warming. Thus, dropping them does not bias the temperature record upward. The concern lies in the other direction. If anything the evidence points to this: dropping higher altitude stations post-1990 has led to a small underestimation of the warming trend.
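The paired-cell selection can be sketched as follows (the data layout is an assumption for illustration): only grid cells with stations on both sides of the 300 m cutoff are used, so the two elevation series sample the same regions.

```python
def paired_trends(cells, cutoff=300.0):
    """cells: {cell_id: [(elevation_m, trend), ...]}.

    Returns (mean trend above cutoff, mean trend at/below cutoff),
    using only cells that contain both elevation classes.
    """
    above, below = [], []
    for stations in cells.values():
        hi = [t for e, t in stations if e > cutoff]
        lo = [t for e, t in stations if e <= cutoff]
        if hi and lo:                    # require both classes in the cell
            above.extend(hi)
            below.extend(lo)
    return sum(above) / len(above), sum(below) / len(below)

cells = {
    "cell_a": [(520.0, 0.34), (110.0, 0.28)],
    "cell_b": [(700.0, 0.30), (50.0, 0.26), (250.0, 0.30)],
    "cell_c": [(1200.0, 0.40)],          # excluded: no low-elevation station
}
print(paired_trends(cells))
```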

Finally, Mosher, extending the work of Broberg, tested the sensitivity to altitude by dividing the existing sample by raw altitude and by topography:

  1. A series containing all stations.
  2. A series of lower-altitude stations (altitude < 200 m).
  3. A series of higher-altitude stations (altitude > 300 m).
  4. All stations in mountain valleys.
  5. A series of stations at very high altitude (altitude > 400 m).

The results of that test are shown below

FIGURE 6: Global anomaly. Smoothing performed for display purposes only with a 21-point binomial filter

The purple series is the highest altitude stations; the red series is the lower elevation series; green is the mountain valley stations. A cursory look at the "trend" indicates that the higher elevation stations warm slightly faster than the lower elevation stations, confirming Hausfather. Dropping higher elevation stations, if it has any effect whatsoever, works to lower the average. Stations at lower altitudes tend to warm less rapidly than stations at higher elevations. So, quite the opposite of what people assume, the dropping of higher altitude stations is more likely to underestimate the warming rather than overestimate it.
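The subsetting behind these series can be sketched as follows (field names are illustrative, though GHCN v2 metadata does encode topography as FL/HI/MT/MV):

```python
def altitude_subsets(inventory):
    """inventory: list of dicts with 'elev' (meters) and 'topo' keys.

    Returns the five station subsets described earlier.
    """
    return {
        "all":               list(inventory),
        "low (<200m)":       [s for s in inventory if s["elev"] < 200],
        "high (>300m)":      [s for s in inventory if s["elev"] > 300],
        "mountain_valley":   [s for s in inventory if s["topo"] == "MV"],
        "very_high (>400m)": [s for s in inventory if s["elev"] > 400],
    }

toy = [{"elev": 50, "topo": "FL"}, {"elev": 350, "topo": "MV"},
       {"elev": 900, "topo": "MT"}, {"elev": 150, "topo": "HI"}]
print({k: len(v) for k, v in altitude_subsets(toy).items()})
```

A separate anomaly series is then computed for each subset; if altitude interacted with trend, the series would diverge.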

Conclusion:

The distribution of altitude does change with time in GHCN v2.mean data. That change does not signal a march of thermometers to places with higher rates of warming. The decrease in altitude is not associated with a move toward or away from coasts. The decrease is not clearly associated with a move away from mountainous regions and into valleys, but rather with a movement out of mountain valley and flatland regions. Yet mountain valleys do not warm or cool in any differential manner. Changing altitude does not bias the final trends in any appreciable way.

Regardless of the differential characteristics associated with higher elevation, a change in temperature trend is not clearly or demonstrably one of them. For now, we have no evidence whatsoever that marching thermometers up and down hills makes any contribution to an overestimation of the warming trend.

Dr. McKitrick presented a series of concerns with GHCN. We have eliminated the concern over changes in the distribution of altitude. That merits a correction to his paper. The concerns he raised about latitude, airports, and UHI will be addressed in forthcoming pieces. Given the preliminary work done on airports and latitude to date, we can confidently say that the entire debate will come down to two basic issues: UHI and adjustments; the issues over latitude changes and sampling at airports will fold into those discussions. So, here is where the debate stands. The concerns that people have had about methodology have been addressed. As McKitrick notes, the various independent methods get the same answers. The concern about altitude bias has been addressed. As we've argued before, the real issues with the temperature series are the metadata, their related microsite and UHI issues, and adjustments made prior to entry in the GHCN database.

Special thanks to Ron Broberg for editorial support.

References:

McKitrick, Ross. A Critical Review of Global Surface Temperature Data Products. July 26, 2010.

August 19, 2010 6:09 pm

Steven Mosher:
“First there is no such thing as raw data. All data is processed. Second, I too would like to get my hands on the records made of the actual instrument readings. Until such time, we have what we have.”
Until I can understand what the "processing" of the data entails and why it's "processed", I can never have an understanding of what is going on here. Nobody can. You certainly can't, Mr. Mosher.
Andrew

Rick Bradford
August 19, 2010 6:15 pm

Why, if global warming is held to be so important an issue, are we collecting much less data than before?
Cost-cutting, perhaps? On “the greatest moral challenge of our time”? Surely not.
Or have hubristic climate scientists decided that their computer models are so accurate at interpolation that it is no longer necessary to make observations with the same rigor as before?

Frank Mosher
August 19, 2010 6:29 pm

Good work, Steven. I appreciate all your hard work. I also marvel at E.M. Smith's analysis. IMHO, when people tune in to the weather forecast, they are primarily interested in the projected high. Most people don't care if the low tonight in Fair Oaks is 52 or 54. But in calculating the average, there is a huge difference. I believe that using an average of the high/low to calculate temperature trends gives a misleading impression to the public, i.e. 94 as the high for Mon. and Tues., but a low of 52 for Mon. and 54 for Tues., gives 1 degree of "warming". Really? Not to most people. fm

intrepid_wanders
August 19, 2010 6:43 pm

Steven Mosher,
“5°c, it is a possible instantaneous error in one day, it is not a certainty and it is not systematic.”
Even with a 5deg C three sigma, how in the world are reports produced in standard divisions of 0.1/0.25/0.5? Beside the obvious, natural/unnatural (measurement error) variation "swamps" EVERYTHING including altitude, UHI, ocean effects, planetary effects, etc. I feel that without a gauge R&R of the temperature network, as well as a correlation with/to individual measurement gauges on the network, your comprehensive and very clever group of analysts' work is "moot".
In my industry, all the product would have been recalled, and our customer would be looking for another supplier.

DeNihilist
August 19, 2010 6:47 pm

Thanx guys. Love seeing the change in Gestalt that occurred last November continue on.

Dan Murphy
August 19, 2010 6:54 pm

Steven,
Thank you for your replies. I've just returned home from the office; please give me time to take care of animals and chores before I can take the proper amount of time to read and digest your replies. Respectfully, Dan

Marcia, Marcia
August 19, 2010 7:09 pm

What this means is that as long as high altitude stations warm or cool at the same rate as low altitude stations, removing them or adding them will not bias the result.
It will not bias the anomaly. But it will bias the actual temperature.
I think this post will confuse many people.

Marcia, Marcia
August 19, 2010 7:20 pm

He (i.e.McKitrick) doesn’t because he is aware that the anomaly method prevents this kind of bias.
Would you please prove this is what McKitrick was thinking.

August 19, 2010 7:43 pm

See my post on the First Difference Method of computing temperature indices on CA at http://climateaudit.org/2010/08/19/the-first-difference-method/ .

BillyBob
August 19, 2010 7:44 pm

The trouble with anomalies is setting an artificial normal of zero.
When you do that variations of .1 or .5 or .6 degrees looks huge.
All you have to do is look at Central England Temperature over the last 300 years and realize variations of 3.5C – 30 to 40% – from the normal of 9C is not unusual over time frames shorter than what some people claim for the whole of the last century.
And that's with thermometers much less contaminated by UHI than the airport temperatures we now measure to the exclusion of rural areas.
http://en.wikipedia.org/wiki/File:CET_Full_Temperature_Yearly.png
Whatever warming there may be (and I doubt the max NOW is higher than the 1930's) is purely part of the natural cycle of the end of the last ice age.
And it will all end when the next ice age starts or when some nutbar "scientists" are allowed to shoot SO2 into the stratosphere.
GHCN is unreliable and contaminated by UHI.

Paul
August 19, 2010 7:59 pm

The authors claim that since high altitude stations have higher trends, the loss of those stations only tends to artificially lower rather than raise the overall trend. McKitrick explains why the loss of stations-at-altitude might matter when he writes: "Since low-altitude sites tend to be more influenced by agriculture, urbanization and other land surface modification." The authors cite and reprint that remark.
Unfortunately, this essay fails to demonstrate a lack of bias due to altitude changes. It purports to respond to McKitrick’s claim but does not.
Here’s why:
McKitrick is quoted as claiming that lower altitude stations have trends in greater excess of their GHG warming–due to land use changes. We can state that idea as the following decomposition:

StationTrend = GHGTrend + LandUseTrend.

The global average trend is going to be something of the form:

GlobalTrend = weight1 * LowStationTrend + weight2 * AtAltitudeStationTrend.

If we substitute the variables, we see:

GlobalTrend = weight1 * (GHGTrend + LandUseTrend) + weight2 * AtAltitudeStationTrend.

Clearly, as weight1 increases and weight2 decreases, the LandUseTrend is assigned a bigger weight and therefore may bias the GlobalTrend upward if the LandUseTrend is larger than the difference between high and low altitude trends.
Ergo, the conclusion of the authors: “Dr. McKitrick presented a series of concerns with GHCN. We have eliminated the concern over changes in the distribution of altitude.” is overstated.
They have only demonstrated that it's not enough for the LandUseTrend to be positive in order to bias the GlobalTrend up; it's necessary for the LandUseTrend to be larger in magnitude than the difference in trends between low and high altitude stations.
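To illustrate with hypothetical numbers (a sketch of the weighting argument above; none of these trend values are measured):

```python
def global_trend(w_low, ghg, land_use, at_alt):
    """Weighted blend of low-altitude (GHG + land use) and at-altitude trends."""
    return w_low * (ghg + land_use) + (1.0 - w_low) * at_alt

# Hypothetical components: the land-use term exceeds the high/low gap,
# so shifting weight toward low stations raises the blended trend even
# though the at-altitude trend is itself higher than the GHG component.
before = global_trend(0.5, ghg=0.10, land_use=0.05, at_alt=0.12)
after  = global_trend(0.8, ghg=0.10, land_use=0.05, at_alt=0.12)
print(round(before, 3), round(after, 3))  # 0.135 0.144
```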

Amino Acids in Meteorites
August 19, 2010 8:08 pm

Paragraph below Figure 5:
This test indicates that higher elevation stations tend to see higher rates of warming rather than lower rates of warming. Thus, dropping them, does not bias the temperature record upward. The concern lies in the other direction. If anything the evidence points to this: dropping higher altitude stations post 1990 has lead to a small underestimation of the warming trend.
“Thus, dropping them, does not bias the temperature record upward.”
The wording of this may be wrong. It may be better worded:
“Thus, dropping them, does not bias the temperature anomaly record upward.”
Taking cooler location stations out of the record will potentially bias the actual temperature upward while not statistically significantly affecting the anomaly. Global warming is about 1/10ths of a degree. Removing cooler stations can make 1/10ths of a degree difference.

DR
August 19, 2010 8:26 pm

Steve Mosher said:

LOW altitude stations WARM AT THE SAME RATE as HIGH altitude. its simple physics. Think about it. If the worls warms 10c over time, do you think that WARMING can be confined? the warm low sites get warmer and the high cool sites get warmer. and they warm at the SAME RATE over TIME.

John Christy said:

Detailed temperature reconstructions were generated for the developed San Joaquin Valley of California as well as the adjacent foothills of the Sierra. The daytime temperatures of both regions show virtually no change over the past 100 years, while the nighttime temperatures indicate the developed Valley has warmed significantly while the undeveloped Sierra foothills have not.

Have you done a detailed analysis to test John Christy’s findings?

Rex from NZ
August 19, 2010 8:35 pm

Please … what are: GRINS … ?

Amino Acids in Meteorites
August 19, 2010 8:37 pm

Those studies indicate that adding addition stations does not change the trends
I don’t think there is a lot of arguing over anomaly (except in GISS) from either side of this issue.
But anomaly is not an important issue in the media. We only hear about hottest year ever this, hottest decade ever that, in temperatures, not in anomalies. Dropped stations in cooler areas can create 1/100ths to 1/10ths degrees of warming. I don’t hear any reports in the media saying anomaly this and that. Slight differences in trend that are not statistically significant in the trend/anomaly and are only measured in 1/100ths or 1/10ths of a degree in actual temperature. And again, actual temperature is what global warming is all about, and what warmest this and that reported in the media is all about.

Alvin
August 19, 2010 8:57 pm

E.M.Smith says:
August 19, 2010 at 10:52 am
So, IMHO, we have crappy data and get crappy results from it. Admiring the uniformity of the crappiness does not yield much comfort.

NOM!

davidmhoffer
August 19, 2010 9:01 pm

As much as I agree with the criticisms of this article, I have to agree with the conclusions with one exception.
Some time ago I took the GISS gridded data and looked at the trend with the grid points that lost coverage after 1990 due to station drop out removed from the entire data series. To my surprise (being a hardcore skeptic) the result was less warming, not more. I confirmed this by trending the dropped grid points prior to being dropped, and sure enough, they exhibited a slightly higher warming trend (at least until they dropped out) than the grid points that remained. I cannot conclude from that that there was anything nefarious about which stations were dropped and when.
I do want to comment however on the use of anomalies. As Mr Mosher contends, the use of anomalies solves a great number of problems in comparing trends and data from a wide variety of sites at different altitudes, latitudes, and so on. But the use of anomalies does create a new problem, and that is one of perspective. We are debating global temperature changes in terms of tenths of a degree per decade; sometimes hundredths of a degree are all that define a "record high year" compared to the previous record.
On a scale of -1 to +1 degrees, 0.6 degrees per century of warming looms large. To put it in perspective, take the temperature of a city like Moscow, or Winnipeg, or Edmonton and plot the annual temperatures and then plot them again with an extra 0.6 degrees added to every data point. Those cities have annual temperature ranges of 70 degrees or more, and when you plot the data in that fashion, the significance of the warming trend becomes very muted. To make the point further, plot the annual temperature swings of those cities on a Kelvin scale with a range from zero to 320. While the annual variation is still visible, a few tenths of a degree of warming over a period of a century just disappears from view. In fact, from that perspective, the marvel of our planet is just how incredibly stable the overall global temperature actually is.
As a skeptic I firmly believe that the earth has been warming since the LIA, but well within natural variability, and almost all beneficially.

FredG
August 19, 2010 9:14 pm

Steven Mosher,
Your models are writing checks reality can’t cash…
p.s. didn’t see the error bars in any of your plots.

899
August 19, 2010 9:18 pm

George E. Smith says:
August 19, 2010 at 1:40 pm
“”” Rex from NZ says:
August 19, 2010 at 11:26 am
Can someone clarify two things for me: (1) How many times a day is
the temperature recorded for the stations (in general), [–snip–]
Well probably they are read just once per day but apparently with a max/min thermometer, which gives you two numbers per day; but evidently at no particular time. [–snip rest–]
George,
Well, I dunno.
It really would depend upon the time of day, wouldn’t it?
What if the temp. were to be taken at the most likely times of min and max?
That would be more honest, would it not?
But WHEN is the key here.
WHEN?
Does ANYONE really know when ALL those temp sensors are actually read?
Are there ANY time tables for each and every device?
For all we know they might be read whenever …
And you know what that will produce, don’t you?