For some time I’ve been critical (and will likely remain so) of the data preparation techniques associated with the NASA GISS global temperature data set. The adjustments, the errors, the train-wreck FORTRAN code used to collate it, and the interpolation of data in the polar regions where no weather stations exist have given me so little confidence in the data that I’ve begun to treat it as an outlier.
Lucia, however, makes a compelling argument for not discarding it, but for treating it as part of a group of data sets. She also applies some compelling logical tests that give insight into the entire collection of data sets. As a result, I’m going to temper my view of the GISS data a bit and look at how it may be useful in the comparisons she makes.
Here is her analysis:
a guest post by Lucia Liljegren
Trends in the global mean surface temperature for five groups (GISS, HadCRUT, NOAA/NCDC, UAH MSU, and RSS) were calculated from Jan 2001 to May 2008 using Ordinary Least Squares (OLS), with error bars computed by the method in Lee & Lund, and using Cochrane-Orcutt. The trends were compared to the IPCC AR4’s projected central tendency of 2 C/century for the first few decades of this century.
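For readers who want to experiment, the OLS step with a red-noise correction can be sketched in a few lines. This is my own minimal sketch, not the code used for the analysis: it inflates the naive OLS slope variance by the standard AR(1) factor (1 + rho)/(1 - rho), which captures the spirit of the Lee & Lund correction (their paper refines this further).

```python
import numpy as np

def ols_trend_ar1_ci(y, dt=1/12.0):
    """OLS trend of a monthly series with a ~95% half-width inflated for
    AR(1) red noise, in the spirit of Lee & Lund (2004).

    Returns (slope, half_width) in units of y per year."""
    n = len(y)
    t = np.arange(n) * dt
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    # Lag-1 autocorrelation of the residuals
    rho = np.corrcoef(resid[:-1], resid[1:])[0, 1]
    # Naive OLS variance of the slope, then the AR(1) inflation factor
    var_naive = resid.var(ddof=2) / np.sum((t - t.mean()) ** 2)
    half_width = 1.96 * np.sqrt(var_naive * (1 + rho) / (1 - rho))
    return slope, half_width
```

With 89 months of data (Jan 2001 through May 2008) and strongly autocorrelated residuals, the inflation factor can easily double the naive half-width, which is why the confidence intervals above are so wide.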
The following results for mean trends and 95% confidence intervals were obtained:
- Ordinary Least Squares, average of data sets: The temperature trend is -0.7 C/century ± 2.3 C/century. This is inconsistent with the IPCC AR4 projection of 2 C/century to a confidence of 95%, and the projection is considered falsified based on this specific test.
- Cochrane-Orcutt, average of data sets: The temperature trend is -1.4 C/century ± 2.0 C/century. This is inconsistent with the IPCC AR4 projection of 2 C/century to a confidence of 95% and is considered falsified based on this specific test for an AR(1) process.
- OLS, individual data sets: All except GISS Land/Ocean result in negative trends. The maximum and minimum trends reported were 0.007 C/century and -1.28 C/century for GISS Land/Ocean and UAH MSU respectively. Based on this test, the IPCC AR4 2 C/century projection is rejected to a confidence of 95% when compared to HadCrut, NOAA, and RSS MSU data. It is not rejected based on comparison to GISS or UAH MSU.
- Cochrane-Orcutt, individual data sets: All individual data sets result in negative trends. The IPCC AR4 2 C/century projection is falsified by each set individually.
- The null hypothesis of 0 C/century cannot yet be excluded based on data collected since 2001. This does not mean warming has stopped; it only means that the uncertainty in the trend is too large to exclude 0 C/century based on data since 2001. Bar-and-whiskers charts showing the range of trends falling inside the ±95% uncertainty intervals for selected start dates are discussed in Trends in Global Mean Surface Temperature: Bars and Whiskers Through May.
The OLS trends for the mean and the C-O trends for the individual groups are compared to the data in the figure immediately below:
Figure 1: The IPCC projected trend is illustrated in brown. The Cochrane-Orcutt trend for the average of all five data sets is illustrated in orange, with ±95% confidence intervals in hazy orange. The OLS trend for the average of all five data sets is illustrated in lavender, with ±95% uncertainty bounds in hazy lavender. Individual data sets were fit using Cochrane-Orcutt and are shown.
Discussion of Figure 1
The individual weather data in figure 1 are scattered and show non-monotonic variations as a function of time. This is expected for weather data; some bloggers like to refer to this scatter as “weather noise”. In the AR4, the IPCC projected a monotonically increasing level in the ensemble average of the weather, often called the climate trend. For the first three decades of the century, the central tendency of the climate trend was projected to vary approximately linearly at a rate of 2 C/century. This is illustrated in brown.
The best estimates for the linear trend consistent with the noisy weather data were computed using Cochrane-Orcutt (CO), illustrated in orange, and Ordinary Least Squares (OLS) illustrated in lavender.
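A minimal sketch of the Cochrane-Orcutt procedure may help readers unfamiliar with it. This is my own illustration, not the code behind the figure: the method alternates between estimating the lag-1 autocorrelation rho of the residuals and refitting the quasi-differenced series, so the slope is estimated on data from which the AR(1) structure has been removed.

```python
import numpy as np

def cochrane_orcutt(y, dt=1/12.0, n_iter=25):
    """Minimal Cochrane-Orcutt sketch: alternate between estimating the AR(1)
    coefficient rho of the residuals and refitting the quasi-differenced data.
    Returns (slope, rho); slope is in units of y per year."""
    n = len(y)
    t = np.arange(n) * dt
    slope, intercept = np.polyfit(t, y, 1)  # start from plain OLS
    rho = 0.0
    for _ in range(n_iter):
        resid = y - (slope * t + intercept)
        rho = np.corrcoef(resid[:-1], resid[1:])[0, 1]
        # Quasi-difference removes the AR(1) structure:
        #   y*_i = y_i - rho * y_{i-1},  t*_i = t_i - rho * t_{i-1}
        y_star = y[1:] - rho * y[:-1]
        t_star = t[1:] - rho * t[:-1]
        slope, c = np.polyfit(t_star, y_star, 1)
        intercept = c / (1 - rho)  # recover the original-scale intercept
    return slope, rho
```

Because the quasi-differenced residuals are closer to white noise, the standard OLS confidence interval on the transformed fit is more trustworthy than one computed on the raw series.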
Results for individual hypothesis tests
Some individual bloggers have expressed a strong preference for one particular data set or another. Like Atmoz, I prefer not to drop any widely used metric from consideration. However, because some individuals prefer to examine results for each group separately, I also apply the technique to describe the current results of two hypothesis tests based on each individual measurement system.
The first hypothesis tested, treated as the “null”, is the IPCC’s projection of 2 C/century. Currently, this is rejected at 95% confidence under ordinary least squares (OLS) using data from three of the five services, but it is not rejected for UAH or GISS. The hypothesis is rejected against all five services when tested using C-O fits.
The second hypothesis tested is the “denier’s hypothesis” of 0 C/century. This hypothesis cannot be rejected using data starting in 2001. Given the strong rejection with historic data, and the large uncertainty in the determination of the trend, this “fail to reject” result is likely due to “type 2” or “beta” error.
That is: the “fail to reject” is likely a false negative. False negatives, or failures to reject false hypotheses, are the most common error when hypotheses are tested using noisy data.
Results for individual tests are tabulated below:
| Group | OLS Trend (C/century) | Reject 2 C/century? | Reject 0 C/century? | CO Trend (C/century) | Reject 2 C/century? | Reject 0 C/century? |
|---|---|---|---|---|---|---|
| Average of 5 | -0.7 ± 2.3 | Reject | Fail to reject | -1.4 ± 2.0 | Reject | Fail to reject |
| GISS | 0.0 ± 2.3 | Fail to reject | Fail to reject | -0.4 ± 2.0 | Reject | Fail to reject |
| HadCRUT | -1.2 ± 1.9 | Reject | Fail to reject | -1.6 ± 1.6 | Reject | Fail to reject |
| NOAA | -0.1 ± 1.7 | Reject | Fail to reject | -0.3 ± 1.5 | Reject | Fail to reject |
| RSS MSU | -1.3 ± 2.3 | Reject | Fail to reject | -2.1 ± 2.1 | Reject | Fail to reject |
| UAH MSU | -0.8 ± 3.6 | Fail to reject | Fail to reject | -2.0 ± 3.1 | Reject | Fail to reject |
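The reject / fail-to-reject entries in the table follow a simple two-sided rule: a hypothesized trend is rejected when it falls outside the estimated trend plus or minus the 95% half-width. A sketch (the function name is mine):

```python
def verdict(trend, half_width, hypothesized):
    """Two-sided test at ~95% confidence: reject the hypothesized trend
    if it lies outside the interval trend +/- half_width."""
    return "Reject" if abs(trend - hypothesized) > half_width else "Fail to reject"

# Reproducing the "Average of 5" OLS row (-0.7 +/- 2.3):
print(verdict(-0.7, 2.3, 2.0))  # |-0.7 - 2.0| = 2.7 > 2.3 -> "Reject"
print(verdict(-0.7, 2.3, 0.0))  # |-0.7 - 0.0| = 0.7 < 2.3 -> "Fail to reject"
```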
The possibility of False Positives
In the context of this test, rejecting a hypothesis when it is true is a false positive. All statistical tests involve some assumptions; those underlying this test assume we can correct for red noise in the residuals to OLS using one of two methods: A) the method recommended in Lee & Lund, or B) Cochrane-Orcutt, a well known statistical method for time series exhibiting red noise. If these methods are valid and used to test data at 95% confidence, we expect to incorrectly reject true hypotheses 5% of the time. (Note, however, that rejections found in February, March, April, and May do not count separately, as the rejections are correlated with each other, being largely based on the same data.)
Given the results we have found, the 2 C/century projection for the first few decades of this century is not borne out by the current weather data. It appears inconsistent with the underlying trends that could plausibly describe the particular weather trajectory we have seen.
There are some caveats that have been raised in the blog-o-sphere. There has been some debate over methods to calculate uncertainty intervals and/or whether one can test hypotheses using short data sets. I have been examining a variety of possible reasons. I find:
- In August 2007, in a post entitled “Garbage Is Forever”, Tamino used and defended the use of OLS adjusted for red noise to perform hypothesis tests on short data sets, going into some detail in his response to criticism by Carrick, where Tamino stated:
For a reasonable perspective on the application of linear regression in the presence of autocorrelated noise see Lee & Lund 2004, Biometrika 91, 240–245. Your claims that it’s “pretty crazy, from a statistics perspective” and “L2 is only reliable, when the unfit variability in the data looks like Gaussian white noise” raises serious doubts about your statistical sophistication.
In later posts, once this method began falsifying the IPCC AR4 projection of 2 C/century, Tamino appears to have changed his mind about the validity of this method, possibly suggesting the uncertainty intervals are too high.
The results here simply show what anyone would obtain using this method: according to this method, the 2 C/century projection is falsified. Meanwhile, re-application to the data since 2000 indicates there has been no significant warming since 2000, as illustrated here.
- Gavin Schmidt suggested that “internal variability (weather!)” results in a standard error of 2.1 C/century in 8 year trends; this is roughly twice the standard error obtained using the method of Lee & Lund, above. To determine whether this magnitude of variability makes sense, I calculated the variability of 8 year trends in the full thermometer record, including volcanic eruptions and the measurement noise due to the “bucket to jet inlet” transition. I also computed the variability during a relatively long historic period with no volcanic eruptions. The standard error of 2.1 C/century suggested by Gavin’s method exceeds both the variability in the thermometer record for the real earth including volcanic periods and that for periods without volcanic eruptions. (The standard error in 8 year trends computed during periods with no volcanic eruptions is approximately 0.9 C/century, which is smaller than that estimated for the current data.) I attribute the unphysically large spread in 8 year trends displayed by the climate models to the fact that the model runs include
a) different historical forcings, some including volcanic eruptions and some not. This results in variability in initial conditions across model runs that does not exist on the real earth;
b) different forcings during any year in the 20th century; some include solar forcing, some do not;
c) different parameterizations across models and
d) possibly, the inability of some individual models to reproduce the actual characteristics of real-earth weather noise. This is discussed in Distribution of 8 Year OLS Trends: What do the data say?
- Atmoz has suggested the flat trend is due to ENSO, and JohnV suggested considering the effect of the solar cycle. The issue of ENSO and remaining correlation in lagged residuals has been discussed in previous posts, and the solar cycle is addressed here.
- The variability of all 8 year trends that can be computed in the thermometer record is 1.9 C/century; computing from a set spaced at 100 month intervals results in a standard error of 1.4 C/century. These represent the upper bound of standard errors that can be justified based on the empirical record. The variability includes features other than “weather noise”: for example, volcanic eruptions, non-linear variations in forcing due to GHGs, and measurement uncertainty, including the “bucket to jet inlet” transition noise. So, these represent the upper limit on variability in experimentally determined 8 year trends. Those who adhere to these bounds will conclude the current trends fall inside the uncertainty intervals for the data. If the current measurement uncertainty is as large as that experienced during the “bucket to jet inlet” transition associated with World War II, they are entirely correct.
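The empirical spread of 8 year trends quoted above can be computed by sliding a 96 month window along a record and collecting the OLS slope of each window. A hedged sketch (the function and parameter names are mine, and `series` stands in for a real monthly anomaly record):

```python
import numpy as np

def eight_year_trend_spread(series, window=96, dt=1/12.0, step=1):
    """Standard deviation of OLS slopes over all `window`-month segments of a
    monthly series; step=100 approximates non-overlapping 8 year windows."""
    t = np.arange(window) * dt
    slopes = [np.polyfit(t, series[i:i + window], 1)[0]
              for i in range(0, len(series) - window + 1, step)]
    return np.std(slopes)
```

With `step=1` the windows overlap heavily and the slopes are highly correlated, which is why the spacing of the sampled windows (every month versus every 100 months) changes the estimated spread.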
After consideration of the various suggestions about uncertainty intervals, the issues of ENSO, solar cycles, and other features, and the magnitude of the pre-existing trend, I think overall the data indicate:
- It is quite likely the IPCC projection for an underlying climate trend of 2 C/century exceeds the current underlying trend. I cannot speculate on the reasons for the overestimate; they may include some combination of a poor forecast of emissions when developing the SRES, the effect of inaccurate initial conditions for the computations of the 20th century, inaccuracy in the GCMs themselves, or other factors.
- It remains likely that the warming experienced over the past century will resume. While the 2 C/century projection is falsified using both OLS and C-O, the flat trend is entirely consistent with the previously experienced warming trend. In fact, additional analysis (which I have not shown) would indicate the current trend is not inconsistent with the rate of warming seen during the late 90s. It is entirely possible that natural factors, including volcanic eruptions depressing the temperature during the early 90s, caused a positive excursion in the 10 year trend during that period. Meanwhile, the PDO flip may be causing a negative excursion affecting the current trend. These sorts of excursions from the mean trend are entirely consistent with historic data. Warming remains consistent with the data. As the theory attributing the warming to GHGs appears sound and predates the warming of the 80s and 90s, I am confident it will resume.
What will happen over the next few years
As Atmoz warns, we should expect to see the central tendency of trends move around over the next few years. What one might expect is that, going forward, we will see the trend slowly oscillate about the mean, but eventually the magnitude of the oscillation will decay.
One of my motives in blogging this is to show this oscillation and decay over time, and to permit doubters to see the positive trend resume.
I will now set off on the sort of rampant speculation permitted to bloggers. When the next El Nino arrives, we will see a period where the trends go positive. Given trends from the 70s through the 90s, and current trends, it seems plausible to me that, using the methods I describe here, we will experience some 89 month OLS trends of 3 C/century to 4 C/century or even greater sometime during the next El Nino. At which point, someone will likely blog about it the moment that 89 month trend occurs.
This result will be entirely consistent with the current findings. An OLS (or CO) trend of 3-4 C/century is likely even if the true trend is less than 2 C/century, and even if CO and OLS do give accurate uncertainty intervals.
What seems unlikely? I’d need to do more precise calculations to find a firm dividing line between consistent and inconsistent. For now, I’ll suggest that unless there is a) a stratospheric volcanic eruption, b) the much anticipated release of methane from the permafrost, or c) a sudden revision in the method the agencies use to estimate GMST, I doubt we’ll see an 89 month trend greater than 4.6 C/century within the next five years. (I won’t go further because I have no idea what anyone is emitting into the atmosphere!)
Meanwhile, what is the magnitude of the trend for the first three decades of this century? That cannot be known with precision for many years. All I can say is: the current data strongly indicate the current underlying trend is less than 2 C/century, and likely less than 1.6 C/century!