Guest post by Lance Wallace
Last week (Aug 30), Anthony Watts posted my analysis of the errors in estimating true mean temperatures due to the use of the (Tmin+Tmax)/2 approach widely used in thousands of temperature measuring stations worldwide: http://wattsupwiththat.com/2012/08/30/errors-in-estimating-temperatures-using-the-average-of-tmax-and-tmin-analysis-of-the-uscrn-temperature-stations/. The errors were determined using the 125 stations in NOAA's recently established US Climate Reference Network (USCRN) of very high-quality temperature measuring stations. Some highlights of the findings were:
A majority of the sites had biases that were consistent throughout the years and across all seasons of the year.
The 10-90% range was about -0.5 C to +0.5 C. (Negative values indicate underestimates of the true temperature due to using the (Tmin+Tmax)/2, or "Tminmax", approach.)
Two parameters, latitude and relative humidity, were fairly powerful influences on the direction and magnitude of the bias, explaining about 30% of the observed variance in the monthly averages. Geographic influences were also strong, with coastal sites typically overestimating true temperature and continental sites underestimating it.
A better approach than the Tminmax method may be to use observations at fixed hours, which would eliminate the problem of the time of observation of the temperature extremes. One common algorithm is to use measurements at 6 AM, noon, 6 PM, and midnight. We will describe this method as 6121824. A second approach, used in Germany for many years, was to use measurements at 7 AM, 2 PM, and 9 PM (71421), or in some cases to give the 9 PM measurement double weight (7142121). (h/t to Michael Limburg for the information on the German algorithm.)
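The four algorithms can be sketched in a few lines of Python. This is illustrative only: it assumes a list of 24 hourly mean temperatures indexed from midnight (index 0), which may not match the actual file convention, and the "24" in the 6121824 name is treated as midnight.

```python
def tminmax(hourly):
    """Traditional (Tmin + Tmax) / 2 estimate of the daily mean."""
    return (min(hourly) + max(hourly)) / 2.0

def fixed_hours(hourly, hours, weights=None):
    """Average of readings at the given clock hours, optionally weighted."""
    if weights is None:
        weights = [1.0] * len(hours)
    total = sum(w * hourly[h % 24] for h, w in zip(hours, weights))
    return total / sum(weights)

def estimate_all(hourly):
    """Apply all four algorithms discussed in the text to one day of data."""
    return {
        "Tminmax": tminmax(hourly),
        "6121824": fixed_hours(hourly, [6, 12, 18, 0]),         # 6 AM, noon, 6 PM, midnight
        "71421":   fixed_hours(hourly, [7, 14, 21]),            # 7 AM, 2 PM, 9 PM
        "7142121": fixed_hours(hourly, [7, 14, 21], [1, 1, 2]), # 9 PM counted twice
    }
```

The error ("DeltaT") for a given day is then each estimate minus the average of all 24 hourly values, which serves as the true daily mean.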
How do these methods compare to the Tminmax method? Do they lower the error? Would latitude and RH and geographic conditions continue to be predictors of their errors, or would other parameters be important? In this Part II of this study, we attempt to answer these questions, using again the USCRN as a high-quality test-bed.
In Part I, two datasets from the NOAA site ftp://ftp.ncdc.noaa.gov/pub/data/uscrn/products/ were employed—the daily and monthly datasets, with about 360,000 station-days and 12,000 station-months, respectively. For our purposes here, we also need the hourly dataset, with about 8.2 million records. This was obtained (again with help from the NOAA database manager Scott Embler) on Sept. 4, 2012. These three datasets are all available from me at email@example.com.
The hourly dataset provides the maximum, minimum, and mean temperature for each hour. Also recorded are precipitation (mm), solar radiation flux (W/m2), and RH (%). Since the RH measurements were added several years after the start of the network, only about a third of the hours (2.8 million), days (120,000) and months (3600) have RH values.
A first look confirms that 3 or 4 measurements per day are better than two (Figure 1). The entire range of the 6121824 method almost fits into the interquartile range of the Tminmax method (-0.2 to +0.2 C).
Figure 1. Errors in using four algorithms to estimate true mean temperature. Values are monthly averages across all months of service for 125 stations in the USCRN.
A measure of the monthly error is provided by the distribution of the absolute errors (Table 1). The Tminmax method is clearly inferior by this measure, having about 3 times the absolute error of the 6121824 method. The two German methods are intermediate at close to 0.2 C.
Table 1. Distribution of absolute errors for 4 algorithms.
| Valid N | Mean Abs. Error | Std. Dev. | 25%ile | Median | 75%ile | Maximum |
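The summary statistics in Table 1 can be reproduced from the monthly errors of any one method with a short sketch like the following (the function and field names are mine, not NOAA's):

```python
import statistics

def abs_error_summary(monthly_errors):
    """Distribution of absolute monthly errors, as in Table 1."""
    abserr = sorted(abs(e) for e in monthly_errors)
    # statistics.quantiles with n=4 returns the 25th, 50th, and 75th percentiles
    q25, med, q75 = statistics.quantiles(abserr, n=4)
    return {
        "n": len(abserr),
        "mean": statistics.mean(abserr),
        "sd": statistics.stdev(abserr),
        "p25": q25,
        "median": med,
        "p75": q75,
        "max": abserr[-1],
    }
```

Run over the ~11,000 station-months for each algorithm in turn, this yields one row of the table per method.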
We can compare methods across years or across seasons for any given site. The error for a given method was often about the same across all four seasons, although the bias across methods could be quite large (Figure 2). Errors across years were even more stable, but again with large biases across the methods (Figures 3 & 4).
Figure 2. Errors (C) by season at Durham NC. DeltaT is the error from the Tminmax method.
Figure 3. Errors (C) by year at Gadsden AL.
Figure 4. Errors (C) at Newton GA.
In Part I, I provided a map of the error from the Tminmax method. That map (updated to include 4 new Alaskan stations and an additional month of August 2012) is reproduced here as Figure 5. The strong geographic effect is immediately apparent, with the overestimates (blue) located along the Pacific Coast and in the Deep South, while underestimates (red) are in the higher and drier western half of the continent as well as along the very northernmost tier of states from Maine to Washington.
Figure 5. DeltaT at 121 USCRN stations. Colors are quartiles. Red: -0.67 to -0.20 C. Gold: -0.20 to -0.02 C. Green: -0.02 to +0.21 C. Blue: +0.21 to +1.35 C.
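The quartile coloring used in Figure 5 (and in the maps that follow) amounts to binning each station's DeltaT into one of four equal-count groups. A minimal sketch, assuming the lowest quartile is colored red and the highest blue as in the caption:

```python
import statistics

def quartile_colors(deltas):
    """Assign each station's DeltaT to a map color by quartile."""
    q1, q2, q3 = statistics.quantiles(deltas, n=4)  # quartile cut points
    def color(v):
        if v <= q1:
            return "red"
        if v <= q2:
            return "gold"
        if v <= q3:
            return "green"
        return "blue"
    return [color(v) for v in deltas]
```

Note that because the bins are quartiles of each method's own errors, the color scale differs from map to map, as the caption ranges show.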
The next three Figures (Figures 6-8) map the errors of the three algorithms discussed in this post: the 4-point 6121824 algorithm as used in the ISH network and the two 3-point algorithms used in Germany (71421 and 7142121). The 4-point algorithm (Figure 6) does not show the well-demarcated geographic clusters of the Tminmax method. There is a cluster of overestimates (blue) in the farmland of the Midwest from North Dakota to Texas. Just to the west of them, however, there is a set of strong underestimates (red) from Montana through Colorado to New Mexico.
Figure 6. DeltaT 6121824 at 125 USCRN stations. Colors are quartiles. Red: -0.24 to -0.07 C. Gold: -0.07 to -0.02 C. Green: -0.02 to +0.02 C. Blue: +0.02 to +0.25 C.
The 3-point method 71421 (Figure 7) shows something of a latitude-longitude dependence, with the strongest overestimates (blue) mostly in the North and West. This algorithm is rather heavily biased toward positive errors, so that even the red dots include some overestimates along with strong underestimates.
Figure 7. DeltaT 71421 at 125 USCRN stations. Colors are quartiles. Red: -0.21 to +0.08 C. Gold: +0.08 to +0.13 C. Green: +0.13 to +0.20 C. Blue: +0.20 to +0.45 C.
The errors in method 7142121 with the doubled 9 PM measurement (Figure 8) show a cluster of strong underestimates (red) in the Deep South and along the Atlantic Coast from Florida to the Carolinas. Here the green dots are the best estimates (between -0.04 and +0.03 C), but they are spread throughout most of the country with the exception of the Deep South.
Figure 8. DeltaT 7142121 at 125 USCRN stations. Colors are quartiles. Red: -0.41 to -0.17 C. Gold: -0.17 to -0.04 C. Green: -0.04 to +0.03 C. Blue: +0.03 to +0.43 C.
As in Part I, a multiple regression was performed to detect which measured parameters might have an effect on the error associated with a given method. There are six available parameters: latitude, longitude, elevation, precipitation, solar radiation, and RH. Since some of these may be collinear, it is important to determine whether they are sufficiently related to cause errors in the multiple regression. The best way to do this is probably the test devised by Belsley, Kuh, and Welsch (1980), which has been incorporated into SAS (PROC REG/COLLIN). Not knowing SAS, nor having access to someone who does, I tried factor analysis instead, as implemented in Statistica v11 (Table 2). Two variables with heavy loadings on Factor 1 were solar radiation and RH (with opposite signs); Factor 2 was dominated by latitude and longitude. Since the earlier regressions showed that RH was generally stronger than solar radiation, and latitude stronger than longitude, the two weaker variables were left out of some regressions to see whether the sign and magnitude of the other parameters would change markedly. However, little change was noticed. Therefore the multiple regressions presented here include all six variables.
Table 2. Factor analysis of 6 explanatory variables.
| Factor 1 | Factor 2 |
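For readers without Statistica or SAS, a cruder collinearity screen, simply flagging predictor pairs with high absolute correlation, can be run in a few lines. This is a stand-in for, not an implementation of, the Belsley-Kuh-Welsch test or the factor analysis above, and the variable names are placeholders:

```python
import numpy as np

def high_correlations(X, names, threshold=0.7):
    """Flag pairs of predictor columns whose absolute correlation
    meets or exceeds the threshold."""
    R = np.corrcoef(X, rowvar=False)  # correlation matrix of the columns
    k = len(names)
    return [(names[i], names[j], float(R[i, j]))
            for i in range(k)
            for j in range(i + 1, k)
            if abs(R[i, j]) >= threshold]
```

Applied to the six station parameters, such a screen would be expected to flag the solar radiation/RH pair that loaded (with opposite signs) on Factor 1.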
Following are the multiple regressions on the errors due to the four different methods (Tables 3-6). Table 3 is a slightly modified (addition of stations in Alaska and Hawaii plus one additional month) version of the corresponding table for the Tminmax errors in Part I. As in Part I, the updated regression shows about equal effects of latitude and RH, accounting for nearly all of the 29% R2 value. The maps in Part I and Figure 5 above showed the powerful effect of the coastal stations (overestimates) and the Western Continental stations (underestimates).
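The regressions themselves are ordinary least squares, which can be sketched with numpy. In the actual analysis, X would hold the six station parameters and y the monthly errors for one method; this is a generic OLS sketch, not the Statistica output:

```python
import numpy as np

def ols_r2(X, y):
    """Fit y ~ intercept + X by ordinary least squares.
    Returns the coefficient vector (intercept first) and R^2."""
    A = np.column_stack([np.ones(len(y)), X])  # design matrix with intercept
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    return coef, 1.0 - ss_res / ss_tot
```

An R2 of 0.29, as for the Tminmax errors, means the six parameters explain 29% of the variance in the monthly errors, leaving 71% unexplained.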
The six measured parameters had far less effect on the method using four equally-spaced hourly measurements (Table 4). In this case, solar radiation had the strongest effect, with an increase in sunlight leading to larger underestimates. However, the R2 was very small, at about 6%.
The strongest effect on the 71421 method was latitude, and it was in the opposite direction of the effect as noted for the Tminmax method (Table 5). Overall, however, the R2 was similarly low, at about 7%.
The method that double-counted the 9 PM measurement was similar in one respect to the Tminmax results, with the two main parameters being RH and latitude, both close to equal in explanatory power (t values of +18 and -18.6) (Table 6). However, the sign of each was opposite to that in the Tminmax results. The R2 value of 17% was quite a bit higher than for the other two fixed-hour methods, but lower than for Tminmax.
A clear finding from this analysis is that the multipoint methods are better than the Tminmax method at estimating the true temperature. In fact, a nice result is that the 2-point method (Tminmax) had a mean absolute error of about 0.3 C, the 3-point methods about 0.2 C, and the 4-point method brought the mean absolute error down to 0.1 C. However, these are averages across all 125 sites and 11,000 months, so errors can be quite a bit larger for individual sites, as shown in some of the figures above.
Although one could guess, based on the multiple regression results, that higher-latitude sites using the Tminmax method would be more likely to underestimate the true temperature, and coastal sites to overestimate it, the R2 was small enough (29%) that only a ground-truth investigation could determine the precise sign and magnitude of the error at a given site. It might also be argued that even determining the size of the error at the present time would not tell us what the error was historically. However, the great stability across the years shown by these sites suggests that a proper measurement today could in fact predict past performance for many stations with stable locations and measurement methods.
With respect to the 4-point method, a second network, the Integrated Surface Hourly (ISH) network, uses this approach: ftp://ftp.ncdc.noaa.gov/pub/data/inventories/ISH-HISTORY.TXT. This network apparently has some thousands of stations, although I am not sure how many are of the same high quality as the USCRN stations. Based on these findings, one could expect the errors in this network to be considerably smaller than the errors at stations using the Tminmax method. However, the multiple regressions here give little indication of the direction and magnitude of the error at any individual station. Therefore, for this network as for other stations, a proper series of measurements over several years would be needed to estimate the magnitude and direction of the error at a given station. If the basic finding here, that such errors are highly repeatable over the years, applies to many or most stations, then such an approach could go far toward establishing the actual temperature field of the world even at much earlier times, when only a limited set of measurements (subject to errors of the magnitude and direction found here) was available.
None of the temperature measurement algorithms was without error. The traditional Tminmax method was the worst, with a mean absolute error of about 0.3 C. The 3-point German methods (71421 and 7142121) had a mean absolute error of about 0.2 C, and the 4-point (6121824) method a mean absolute error of about 0.1 C. The Tminmax method is strongly affected by latitude and RH, whereas the other methods are less affected by these variables.
All methods were very stable from year to year at most sites. There was somewhat more variation by season, but at a majority of sites each method kept the same sign of error (i.e., consistently over- or under-estimated the true mean temperature) across all four seasons and all years.
For a given site, it was difficult to predict which of the three fixed-time methods might over- or under-estimate the true mean temperature. Even the Tminmax method performed better than all the others for some sites.
The use of the USCRN network to study these methods was advantageous in offering one of the highest-quality networks available. However, it is of course limited to the US, with a limited latitude and longitude range. It would be of interest to extend this analysis to a more globally representative group of stations. For example, would stations at polar and tropical latitudes confirm the latitude dependence found here, and perhaps show even larger underestimates? Would coastal sites around the world continue to overestimate true mean temperatures? How would poor-quality sites, such as those affected by the urban heat island (UHI) or other effects, depend on these parameters compared to high-quality sites? And if large areas around the globe were found to be over- or under-estimating true mean temperatures due to the algorithm employed, how might that affect global climate models (GCMs), which may be tuned to slightly wrong historical temperature fields?