The Big Valley: Altitude Bias in GHCN

Foreword: The focus of this essay is strictly the altitude placement/change of GHCN stations. While challenge and debate of the topic are encouraged, please don’t let the discussion drift into other side issues. As noted in the conclusion, there remain two significant issues that have not been fully addressed in GHCN. I believe a focus on those issues (particularly UHI) will best serve to advance the science and understanding of what GHCN in its current form is measuring and presenting, post-processing. – Anthony

Tibet valley, China. Image from Asiagrace.com

By Steven Mosher, Zeke Hausfather, and Nick Stokes

Recently on WUWT Dr. McKitrick raised several issues with regard to the quality of the GHCN temperature database. However, McKitrick does note that the methods of computing a global anomaly average are sound. That is essentially what Zeke Hausfather and I showed in our last WUWT post: several independent researchers are able to calculate the global anomaly average with very little difference between their results.

GISS, NCDC, CRU, JeffId/RomanM, Tamino, ClearClimateCode,  Zeke Hausfather, Chad Herman, Ron Broberg,  Residual Analysis, and MoshTemp all generally agree. Given the GHCN data, the answer one gets about the pace of global warming is not in serious dispute. Whether one extrapolates as GISS does or not, whether one uses a least squares approach or a spatial averaging approach, whether one selects a 2 degree bin or a 5 degree bin, whether one uses an anomaly period of 1961-90 or 1953-1982, the answer is the same for virtually all practical purposes. Debates about methodology are either a distraction from the global warming issues at hand or they are specialist questions that entertain a few of us. Those specialist discussions may refine the answer or express our confidence in the result more explicitly, but the methods all work and agree to a high degree.

As we noted before, the discussion should therefore turn to, and remain focused on, the data issues. How good is GHCN as a database and how serious are its shortcomings? As with any dataset, those of us who analyze data for a living look for several things: errors, bias, the sampling characteristics, and adjustments. Dr. McKitrick’s recent paper covers several topics relating to the makeup of, and changes in, the GHCN temperature data. In particular he covers changes over time in the sampling of GHCN stations. He repeats a familiar note: over time the stations representing the temperature data set have changed. There is, as most people know, a fall-off in stations reporting shortly after 1990 and then again in 2005. To be sure, there are other issues that he raises as well. Those issues, such as UHI, will not be addressed here. Instead, the focus will be on one particular issue: altitude. We confine our discussion to that narrow point in order to remove misunderstandings and refocus the issue where it rightly belongs.

McKitrick writes:

Figure 1-8 shows the mean altitude above sea level in the GHCN record. The steady increase is consistent with a move inland of the network coverage, and also increased sampling in mountainous locations. The sample collapse in 1990 is clearly visible as a drop not only in numbers but also in altitude, implying the remote high-altitude sites tended to be lost in favour of sites in valley and coastal locations. This happened a second time in 2005. Since low-altitude sites tend to be more influenced by agriculture, urbanization and other land surface modification, the failure to maintain consistent altitude of the sample detracts from its statistical continuity.

There are several claims here.

  1. The increase in altitude is consistent with a move inland and out of valleys
  2. The increase in altitude is consistent with more sampling in mountainous locations.
  3. Low level sites tend to be influenced by agriculture, urbanization and other land use modifications

A simple study of the metadata available in the GHCN database shows that the stations that were dropped do not have the characteristics that McKitrick supposes. As Nick Stokes documents, the loss of stations is better explained by the loss of coverage in certain countries than by any direct effort to drop high-altitude stations. McKitrick also gets the topography specifics wrong. He supposes that the drop in thermometers shifts the data out of mountainous inland areas and into valleys and low-lying coastal areas, areas dominated by urbanization and land-use changes. A cursory look at the metadata shows that supposition is not entirely accurate.

There are two significant periods when stations are dropped: after 1990 and again in 2005, as Stokes shows below.

FIGURE 1: Station drop and average altitude of stations.

The decrease in altitude is not caused by a move into valleys, lowlands and coastal areas. As the following figures show, the percentage of coastal stations is stable, mountainous stations are still represented, and the altitude loss more likely comes from the move out of mountain valleys.

A simple summary of the total inventory shows this:

ALL STATIONS Count Total Percent
Coastal 2180 7280 29.95
Lake 443 7280 6.09
Inland 4657 7280 63.97

TABLE 1: Count of stations by proximity class (coastal, lake, inland), entire inventory

The greatest drops in stations occur in the 1990-1995 period and again around 2005, as shown above. McKitrick supposes that the drop in altitude means a heavier weighting for coastal stations. The data do not support this.

Dropped Stations 1990-95 Count Total Percent
Coastal 487 1609 30.27
Lake 86 1609 5.34
Inland 1036 1609 64.39

Dropped Stations 2005-06 Count Total Percent
Coastal 104 1109 9.38
Lake 77 1109 6.94
Inland 928 1109 83.68

TABLE 2: Count of dropped stations by proximity class

The great march of the thermometers was not a trip to the beach. Neither was the drop in altitude the result of losing a higher percentage of  “mountainous” stations.

FIGURE 2: Distribution of Altitude for the entire GHCN Inventory

Minimum 1st Qu Median Mean 3rd Qu Max NA
-224.0 38.0 192.0 419.9 533.0 4670 142

TABLE 3: Descriptive statistics for altitude (m) of the entire GHCN inventory

We can assess the claim about the march of thermometers down the mountains in two ways: first, by looking at the actual altitude distribution of the dropped stations, and second, by using the topography classifications recorded in the metadata.

FIGURE 3 Distribution of altitude for stations dropped in 1990-95

Minimum 1st Qu Median Mean 3rd Qu Max NA
-21.0 40.0 183.0 441 589.2 4613.0 29

TABLE 4: Descriptive statistics for altitude (m) of stations dropped in 1990-95

The character of the stations dropped in the 2005 time frame is slightly different. That distribution is depicted below.

FIGURE 4 Distribution of altitude for stations dropped in 2005-06

Minimum 1st Qu Median Mean 3rd Qu Max NA
-59.0 143.0 291.0 509.7 681.0 2763.0 0

TABLE 5: Descriptive statistics for altitude (m) of stations dropped in 2005-06

The mean altitude of the dropped stations is slightly higher than that of the average station. That hardly supports the contention of thermometers marching out of the mountains. We can put this issue to rest with the following observation from the metadata. GHCN metadata captures the topography surrounding each station. There are four classifications, FL, HI, MT and MV: flat, hilly, mountain and mountain valley. The table below hints at what was unique about the dropout.

Type Entire Dataset Dropped 1990-95 Dropped 2005-06 Total of two major movements
Flat 2779 455 (16%) 504 (23%) 959 (43%)
Hilly 3006 688 (23%) 447 (15%) 1135 (38%)
Mountain 61 15 (25%) 3 (5%) 18 (30%)
Mountain Valley 1434 451(31%) 155 (11%) 606 (42%)

TABLE 6: Station dropout by topography type (percentages are of each class’s total)

There wasn’t a shift into valleys, as McKitrick supposes; rather, mountain valley sites were dropped. Thermometers left the flatlands and the mountain valleys, and that resulted in a slight decrease in the overall mean altitude.
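For readers who want to reproduce this kind of breakdown, a rough sketch follows. It is not the authors’ code: it assumes the GHCN v2 inventory has already been parsed into a pandas DataFrame with hypothetical columns id and topo (the FL/HI/MT/MV codes), and that the ids of stations dropped in a given period have been determined separately from the v2.mean records.

```python
import pandas as pd

def dropout_by_topo(inv: pd.DataFrame, dropped_ids: set) -> pd.DataFrame:
    """Cross-tabulate station drop-out by topography class.

    inv         : parsed GHCN v2 inventory with assumed columns 'id' and
                  'topo' (codes FL, HI, MT, MV).
    dropped_ids : ids of stations that stop reporting in the period of interest.
    """
    totals = inv.groupby("topo")["id"].nunique().rename("all_stations")
    dropped = (inv[inv["id"].isin(dropped_ids)]
               .groupby("topo")["id"].nunique()
               .rename("dropped"))
    out = pd.concat([totals, dropped], axis=1).fillna(0).astype(int)
    # Percentage of each topography class that was dropped, as in Table 6.
    out["pct_of_class"] = (100 * out["dropped"] / out["all_stations"]).round(1)
    return out

# Hypothetical usage (the parser and the dropped-id set are not shown):
# inv = parse_v2_inventory("v2.temperature.inv")
# print(dropout_by_topo(inv, dropped_ids_1990_95))
```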

That brings us to McKitrick’s third critical claim: that the dropping of thermometers overweights places more likely to suffer from urbanization and differential land use. “Low level sites tend to be influenced by agriculture, urbanization and other land use modifications.” The primary concern that Dr. McKitrick voices is that the statistical integrity of the data may have been compromised. That claim needs to be turned into a testable hypothesis: what exactly has been compromised? We can think of two possible concerns. The first is that by dropping higher altitude mountain valley stations one is dropping stations that are colder. Since temperature decreases with altitude, this would seem to be a reasonable concern. However, it is not. Some people make this claim, but McKitrick does not, because he is aware that the anomaly method prevents this kind of bias. When we create a global anomaly we prevent this kind of bias from entering the calculation by expressing each station’s measurements as departures from that station’s own mean. Thus, a station located at 4000 m may sit at -5 C, but if that station is always at -5 C its anomaly will be zero. Likewise, a station below sea level in Death Valley that is constantly at 110 F will also have an anomaly of zero. The anomaly captures the departure from the mean of that station.
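To make the mechanics concrete, here is a minimal sketch of the anomaly calculation. It is not the authors’ code; it assumes monthly station data already loaded into a pandas DataFrame with hypothetical columns station, year, month and temp, and a 1961-90 base period.

```python
import pandas as pd

def monthly_anomalies(df: pd.DataFrame, base=(1961, 1990)) -> pd.DataFrame:
    """Convert absolute temperatures to anomalies relative to each station's
    own monthly means over the base period (column names are assumed)."""
    in_base = df["year"].between(*base)
    # Baseline climatology: one mean per station per calendar month.
    clim = (df[in_base]
            .groupby(["station", "month"])["temp"]
            .mean()
            .rename("clim"))
    out = df.join(clim, on=["station", "month"])
    out["anom"] = out["temp"] - out["clim"]
    return out

# A 4000 m station that sits at -5 C every month and a below-sea-level desert
# station that sits at 43 C every month both get anomalies of exactly zero,
# so adding or dropping either one cannot, by itself, shift the global average.
```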

What this means is that as long as high altitude stations warm or cool at the same rate as low altitude stations, removing or adding them will not bias the result. The second possible concern, then, is whether high altitude stations in fact warm or cool at a different rate.

To answer the question of whether dropping or adding higher altitude stations impacts the trend, we have several analytical approaches. First, we could add stations back in. But we can’t add back GHCN stations that were discontinued, so the alternative is to add stations from other databases. Those studies indicate that adding additional stations does not change the trends:

http://www.yaleclimatemediaforum.org/2010/08/an-alternative-land-temperature-record-may-help-allay-critics-data-concerns/

http://moyhu.blogspot.com/2010/07/using-templs-on-alternative-land.html

http://moyhu.blogspot.com/2010/07/arctic-trends-using-gsod-temperature.html

http://moyhu.blogspot.com/2010/07/revisiting-bolivia.html

http://moyhu.blogspot.com/2010/07/global-landocean-gsod-and-ghcn-data.html

The other approach is to randomly remove more stations from GHCN and measure the effect. If we fear that GHCN has biased the sample by dropping higher altitude stations, we can drop more stations and measure the effect. There are two ways to do this: a Monte Carlo approach, and an approach that divides the existing data into subsets.

Nick Stokes has conducted the Monte Carlo experiments. In his approach stations are randomly removed and global averages are recomputed, with the randomization preferentially removing high-altitude stations. This test also gives us an estimate of the standard error.

Period Trend (all stations) Trend (re-sampled) s.d.
1900-2009 0.0731 0.0723 0.00179
1979-2009 0.2512 0.2462 0.00324
Mean Altitude 392m 331m

TABLE 7: Monte Carlo test of altitude sensitivity

This particular test consists of selecting all the stations whose series end after 1990; there are 4814 such stations. The sensitivity to altitude was tested by preferentially removing higher-altitude stations at random. The results indicate little to no interaction between altitude and temperature trend in the very stations whose series end after 1990.
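A sketch of the flavour of such a Monte Carlo test is given below. It is not Stokes’ actual code; the linear altitude weighting, the drop fraction and the helper global_trend() are all assumptions made for illustration.

```python
import numpy as np

def altitude_biased_resample(station_ids, altitudes, drop_frac=0.3, seed=None):
    """Randomly drop ~drop_frac of the stations, with higher-altitude stations
    more likely to be dropped (the weighting scheme here is an assumption)."""
    rng = np.random.default_rng(seed)
    alt = np.asarray(altitudes, dtype=float)
    weights = np.clip(alt - alt.min(), 1.0, None)   # higher altitude -> larger weight
    p_drop = weights / weights.sum()
    n_drop = int(drop_frac * len(station_ids))
    drop_idx = rng.choice(len(station_ids), size=n_drop, replace=False, p=p_drop)
    keep = np.ones(len(station_ids), dtype=bool)
    keep[drop_idx] = False
    return [sid for sid, k in zip(station_ids, keep) if k]

# Hypothetical driver: recompute the global trend for many resamples and
# compare the spread with the all-station trend (global_trend() is assumed).
# trends = [global_trend(altitude_biased_resample(ids, alts, seed=i))
#           for i in range(100)]
# print(np.mean(trends), np.std(trends))
```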

The other approach, dividing the sample, was carried out in two different ways by Zeke Hausfather and Steven Mosher. Hausfather approached the problem using a paired approach: grid cells are selected for processing only if they have stations both above and below 300 m, which eliminates cells represented by a single station. Separate series are then constructed for the stations that lie above 300 m and those below 300 m.
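Roughly, the paired test can be sketched as follows, assuming station anomalies (e.g. from the earlier snippet) have been merged with station latitude, longitude and altitude. The 5-degree cells and 300 m split follow the description above, but the column names and structure are illustrative, not Hausfather’s actual code.

```python
import numpy as np
import pandas as pd

def paired_elevation_series(anoms: pd.DataFrame, cell_deg=5.0, split_m=300.0):
    """Build above/below-300m anomaly series using only grid cells that
    contain stations on both sides of the split. Assumed columns:
    'lat', 'lon', 'alt', 'year', 'anom'."""
    df = anoms.copy()
    df["cell"] = (np.floor(df["lat"] / cell_deg).astype(int).astype(str) + "_"
                  + np.floor(df["lon"] / cell_deg).astype(int).astype(str))
    df["high"] = df["alt"] > split_m

    # Keep only cells that have at least one station above AND below the split.
    has_both = df.groupby("cell")["high"].nunique() == 2
    df = df[df["cell"].isin(has_both[has_both].index)]

    # Average within each cell/year for each elevation class, then across
    # cells, so that station-rich cells do not dominate the series.
    cell_means = df.groupby(["cell", "year", "high"])["anom"].mean()
    return cell_means.groupby(["year", "high"]).mean().unstack("high")
```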

Period Elevation > 300m Elevation < 300m
1900-2009 0.04 0.05
1960-2009 0.23 0.19
1978-2009 0.34 0.28

TABLE 8: Comparison of trend versus altitude for paired-station testing

FIGURE 5: Comparison of temperature anomaly for above-mean and below-mean altitude stations

This test indicates that higher elevation stations tend to see higher, not lower, rates of warming. Thus, dropping them does not bias the temperature record upward; the concern lies in the other direction. If anything, the evidence points to this: dropping higher altitude stations post 1990 has led to a small underestimation of the warming trend.

Finally, Mosher, extending the work of Broberg, tested the sensitivity to altitude by dividing the existing sample by raw altitude and by topography, as follows (a rough sketch of the subsetting appears after the list):

  1. A series containing all stations.
  2. A series of lower-altitude stations (altitude < 200 m).
  3. A series of higher-altitude stations (altitude > 300 m).
  4. All stations in mountain valleys.
  5. A series of stations at very high altitude (altitude > 400 m).
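A minimal sketch of that subsetting, under the same assumptions about a parsed inventory DataFrame as in the earlier snippets (column names are illustrative, not the actual MoshTemp code):

```python
import pandas as pd

def altitude_subsets(inv: pd.DataFrame) -> dict:
    """Return station-id sets for each test series; 'alt' is elevation in
    metres and 'topo' holds the FL/HI/MT/MV code (assumed column names)."""
    return {
        "all":                set(inv["id"]),
        "low_lt_200m":        set(inv.loc[inv["alt"] < 200, "id"]),
        "high_gt_300m":       set(inv.loc[inv["alt"] > 300, "id"]),
        "mountain_valley":    set(inv.loc[inv["topo"] == "MV", "id"]),
        "very_high_gt_400m":  set(inv.loc[inv["alt"] > 400, "id"]),
    }

# Each id set is then fed to the same global-anomaly routine and the
# resulting series compared, as in Figure 6.
```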

The results of that test are shown below

FIGURE 6: Global anomaly. Smoothing performed for display purposes only, with a 21-point binomial filter.

The purple series is the highest-altitude stations, the red series is the lower-elevation stations, and green is the mountain valley stations. A cursory look at the trends indicates that the higher-elevation stations warm slightly faster than the lower-elevation stations, confirming Hausfather’s result. Dropping higher-elevation stations, if it has any effect whatsoever, works to lower the average. Stations at lower altitudes tend to warm less rapidly than stations at higher elevations. So, quite the opposite of what people assume, dropping higher-altitude stations is more likely to underestimate the warming than to overestimate it.

Conclusion:

The distribution of altitude does change with time in the GHCN v2.mean data. That change does not signal a march of thermometers to places with higher rates of warming. The decrease in altitude is not associated with a move toward or away from coasts. Nor is it clearly associated with a move away from mountainous regions and into valleys; rather, it reflects a movement out of mountain valley and flatland regions. Yet mountain valleys do not warm or cool in any differential manner. Changing altitude does not bias the final trends in any appreciable way.

Whatever differential characteristics may be associated with higher elevation, a difference in temperature trend is not clearly or demonstrably one of them. For now, we have no evidence whatsoever that marching thermometers up and down hills makes any contribution to an overestimation of the warming trend.

Dr. McKitrick presented a series of concerns with GHCN. We have eliminated the concern over changes in the distribution of altitude; that merits a correction to his paper. The concerns he raised about latitude, airports and UHI will be addressed in forthcoming pieces. Given the preliminary work done to date on airports (and here) and latitude, we can confidently say that the entire debate will come down to two basic issues, UHI and adjustments; the issues over latitude changes and sampling at airports will fold into those discussions. So, here is where the debate stands. The concerns that people have had about methodology have been addressed: as McKitrick notes, the various independent methods get the same answers. The concern about altitude bias has been addressed. As we’ve argued before, the real issues with the temperature series are the metadata, the related microsite and UHI issues, and the adjustments made prior to entry in the GHCN database.

Special thanks to Ron Broberg for editorial support.

References:

A Critical Review of Global Surface Temperature Data Products. Ross McKitrick, Ph.D. July 26, 2010


161 Comments
richard telford
August 19, 2010 8:38 am

Good work. I appreciate skeptics who are prepared to test if alleged biases are of practical importance. If only more would make the effort.

Eric Anderson
August 19, 2010 8:42 am

Very informative piece, and helpful in focusing the issues.
On behalf of those who respect Dr. McKitrick’s overall work, as I do, I trust that he will offer a correction or clarification, to the extent warranted, once he has had a chance to review.
So the remaining issues are: (i) metadata, (ii) microsite issues, (iii) UHI issues, and (iv) adjustments. Hmmm. Not to wave a crystal ball or anything, but I suspect (iii) and (iv) will be the most meaningful . . .

latitude
August 19, 2010 8:47 am

Nope, Steve you didn’t convince me.
The biggest drop in altitude correlates with the biggest increase in temps. Around 1998.
When you’re talking about reading temps in 1/100ths and 1/10ths of a degree, you left out the biggest part: the time of day that those thermometers are read. Lower altitudes are going to show a greater change/swing in temps every day and be more affected by UHI.

Gina Becker
August 19, 2010 8:49 am

As you say, methods for computing global averages are sound. Of course.
It’s the data quality, and the many correction methods that are abjectly wrong and abused, that cause problems. Read the papers that justify the correction methods, note the many assumptions, note the error that should be associated with these assumptions, note the predominant effect of each method (to lower the values of past temperatures).
The data is junk, and the science is corrupt. It’s very sad.

Ken Hall
August 19, 2010 8:56 am

It is not only altitude that affects temps, but when hundreds of stations closed in Canada and Siberia… Surely, having lost 60% of the stations, most of them from cold countries, leaving stations in hotter regions, that must increase the average for the remaining stations.
[REPLY – But one can get around that problem by gridding the data. There are, however, other serious issues concerning Siberian stations other than the mere number that are there. ~ Evan]

C James
August 19, 2010 9:11 am

Steven…..I know you and Zeke and Nick are all really bright guys. I know E.M. Smith is a really bright guy too but his work shows that the “march of thermometers” has had an impact on the temperature trends: http://chiefio.wordpress.com/2009/11/03/ghcn-the-global-analysis/.
The real question to me is why are all of you bright guys spending so much time on verifying (or not) that the use of bad data, regardless of methodology, produces similar (or not) results? Why isn’t there a concerted effort on everyone’s part to go back to the raw data and start over? What is the point in spending so much time, effort and research dollars on studying questionable data? Then again, perhaps you don’t agree that it is questionable.

D. Robinson
August 19, 2010 9:12 am

As mentioned by latitude, there is an ‘apparent’ correlation between station count and rate of temperature change if you graph them together. Visually this has always made me raise an eyebrow. Has anybody done a temp study for only the long-lived stations that made it through the purge(s)? Any links?
Because imo there should not be a significant difference in trend between this and what is graphed using the entire GHCN database.
Thanks, good post!

DR
August 19, 2010 9:31 am

Dr. John Christy has done specific research on land use change and its effects on LT and surface temperature, one of which McKitrick cites in his paper. It disagrees with the conclusion of this essay.
http://climateaudit.org/2009/07/19/christy-et-al-2009-surface-temperature-variations-in-east-africa/
With due respect, not a single one of your entries at WUWT or at Lucia’s has addressed the specific issues raised by Christy et al (his is but one, there are several). Until those issues are addressed and refuted, all that’s being done in these exercises is replication of the data, NOT analysis of the individual stations. It isn’t just microsite and UHI at play here.
Agreement amongst models is nothing to get excited about.

latitude
August 19, 2010 9:33 am

“The concerns that people have had about methodology have been addressed. As McKitrick notes, the various independent methods get the same answers”
I don’t trust it.
What I would like to see is each dropped station’s temp records analyzed.
I don’t think for one minute that any station was dropped at “random”.
I think there was a specific reason for each station that was dropped.

Gnomish
August 19, 2010 9:36 am

Hey, Mr. Mosher-
“GISS, NCDC, CRU, JeffId/RomanM, Tamino, ClearClimateCode, Zeke Hausfather, Chad Herman, Ron Broberg, Residual Analysis, and MoshTemp all generally agree. Given the GHCN data, the answer one gets about the pace of global warming is not in serious dispute.”
I expanded your method and used your consensus theory on another real world problem:
I took a bitmap of Botticelli’s ‘Printemps’ and used the finest possible AlGorithm to average the pixel colors. Guess what! It’s gray! Botticelli painted everything in gray, the same color as your elephant – the one that’s very like a poutine.

M Bell
August 19, 2010 9:42 am

Has anyone else clicked onto the source website for that lovely picture? If so have you noticed that the pictures look so very different, yet are clearly identical.
Relevance to the topic at hand is: facts are tricky things, never the same when you cast a different light over them.

Steve Keohane
August 19, 2010 9:57 am

It seems easier to me to just look at the percentages of the stations for the sitings,
pre-1990 vs. post 2006 which did not change: 38.2% vs 39.9% Flat, 41.3% vs. 41.0% Hilly, .008% vs. .009% Mountain and 19.7% vs. 18.1% Mountain Valleys. One could argue the Valleys are under-represented by 5% post-2006. I am curious about the differentiation of mountains vs. valleys, as the top of Mt. Washington would not qualify as a valley in elevation in Western Colorado, let alone the flatlands in the eastern part of the state at 1700 meters. 99% of the population in the Roaring Fork Valley lives in a Mountain Valley, yet this is designated as high altitude as it is 1900 meters at its lowest point, and Aspen is still a valley town at the high end of this valley although its altitude is 2600 meters.
latitude says: August 19, 2010 at 8:47 am: “Lower altitudes are going to show a greater change/swing in temps every day and be more affected by UHI.”
The day/night temperature swing is probably greater at altitude. I’ve been in Colorado for almost 40 years, and we get 30-50°F day to night change virtually every day. The inhibiting factor is cloud cover and or humidity*, both of which should be more predominant at lower and coastal climes.
*Absolute water content, not RH%
I still find this disconcerting: http://i27.tinypic.com/14b6tqo.jpg

Vince Causey
August 19, 2010 9:59 am

It’s good to see these sorts of issues being dealt with at last. Only question, why has it taken a team of amateurs to do what the professionals should have done in the first place?
BTW, I would be interested in what E.M Smith has to say, but for now, good work.

richard telford
August 19, 2010 10:04 am

Ken Hall says:
August 19, 2010 at 8:56 am
Surely having lost 60% of the stations and most of those from cold countries, leaving stations in hotter regions, that must increase the average for the remaining stations.
———-
No. This would happen if the analysis were a mean of the raw temperature data. But it isn’t. It’s the mean of the anomalies. All stations, everywhere, have an average anomaly of zero over the normal period (e.g. 1961-1990).
The contrary problem may exist: if there is Arctic amplification, dropping Arctic sites will drop the sites with the strongest trends, reducing the global trend.

Tom Scharf
August 19, 2010 10:06 am

Is there a temperature graph that compares the trends up to 1990, 2005 of all the stations that were rejected, compared to all the stations that were kept?

GeoChemist
August 19, 2010 10:08 am

“pace of global warming is not in serious dispute” ….. NO…..the trend shown in measured surface temperatures is not in dispute. The use of this metric as a measure of “global warming” is in serious dispute. Ocean heat content is clearly the better metric.

anopheles
August 19, 2010 10:12 am

How about doing this from the other direction? Class the stations by trend, rising temps, falling temps, maybe general shape of the curve, and then find, or try to find, what attributes they have in common. And of course how they are treated by GHCN.

BillyBob
August 19, 2010 10:13 am

If Nick Stokes says its so … I know it ain’t.

Randy
August 19, 2010 10:18 am

Table two has an error in calculation for the flat Terrain type. According to the way the other three lines are calculated the percentages should be 16%, 18%, and 35%, at least if my rpn calculator and my finger works correctly.
Type Entire Dataset Dropped after90-95 Dropped 2005-06 Total of two major movements
Flat 2779 455 (16%) 504 (23%) 959 (43%)

Randy
August 19, 2010 10:20 am

Sorry that was Table 6.

BillyBob
August 19, 2010 10:21 am

” Given the GHCN data, the answer one gets about the pace of global warming is not in serious dispute. ”
Considering that all of the GHCN anomaly calculators use the mean:
1) The min could be going up
2) The max could be going up
3) Or it could be a combination of both
Having looked at the raw GHCN data, I can say the max is not going up. It is the min.
Therefore it is UHI.
As a proxy for whether the max is going up or the min, may i point out that 25 of the 50 state temperature max records were set in the 1930’s.
The 1930’s were the hottest decade (if you care about max temperatures).
It may be true that the mean has gone up because the min has gone up.
But that is nothing to worry about.

Jim G
August 19, 2010 10:35 am

Here in Wyoming the colder air settles into the valleys on the plains (hundreds of feet of elevation change in some of these draws and stream valleys here) summer and winter. Temperature inversions also affect even the high mountains, where normally temperature decreases with elevation except during inversions, when it can be much warmer at higher elevations. These occur regularly here. As Latitude said, time of day will also affect these readings, as will the amount of sunshine, which is high here, averaging 270 days a year. Not sure what the change in elevation of temperature reading sites would do overall other than make the data inconsistent with past readings.
The valley in the photo looks like a classic river-cut formation without meander, though there is not enough of the countryside to say for sure regarding the hills on each side. Shape of these valleys could also affect temperature greatly.

James Evans
August 19, 2010 10:47 am

I think this collaboration is extremely encouraging. Congratulations to the three of you.

Ken Harvey
August 19, 2010 10:49 am

Can we identify one data set from one single station that covers many years continuous to the present, that is widely regarded as accurate to a degree beyond reproach and is free of outside bias and adjustment? Such a data set, if it exists, covering just one single location, would give us a better indication of global changes than anything that has yet been published. With good numbers one can do accurate sums. With bad numbers one can do nothing but waste one’s time. There is no possible way to calculate the value of x where the values of its individual components are uncertain.

E.M.Smith
Editor
August 19, 2010 10:52 am

Two major issues with the assertions in the above posting.
1) Using data that ‘cuts off’ in 1990 does not capture the major issue which is that the way the data are handled post 1990 introduces a ‘hockey blade’ to the anomaly series. This is accompanied by a change of ‘duplicate number’ in the GHCN.
2) It ignores the impact of long duration weather changes, such as 30 and 60 year ocean cycles, as it will work through changes of the average VOLATILITY of the stations in the set. Ignoring volatility changes while looking at other attributes in isolation will fail.
http://chiefio.wordpress.com/2010/08/04/smiths-volatility-surmise/
The fact that a bunch of ways of calculating an average of an intensive variable from the same dataset gives similar results does not improve the quality of the dataset nor does it address that an average of an intensive variable is rather meaningless. The basic mathematics is wrong, so any arithmetic done on it is pointless.
http://chiefio.wordpress.com/2010/07/17/derivative-of-integral-chaos-is-agw/
So we have a bunch of folks doing number games for amusement and claiming to find truth. They aren’t.
So for starters, pick some individual long lived stations and look at their actual temperatures. When you do that you find either little to no “warming” or amounts consistent with UHI and / or for very long lived stations a rise out of the LIA. Sticking 92% or so of present GHCN thermometers at airports in the USA (and similar percentages in France and ROW) doesn’t help. Tarmac stays hotter than a grass field, regardless of number of flights.
So, IMHO, we have crappy data and get crappy results from it. Admiring the uniformity of the crappiness does not yield much comfort.
