AGW Bombshell? A new paper shows statistical tests for global warming fail to find statistically significant anthropogenic forcing

From the journal Earth System Dynamics, billed as “An Interactive Open Access Journal of the European Geosciences Union”, comes this paper, which suggests that the posited AGW forcing effect simply isn’t statistically significant in the observations, but other natural forcings are.

“…We show that although these anthropogenic forcings share a common stochastic trend, this trend is empirically independent of the stochastic trend in temperature and solar irradiance. Therefore, greenhouse gas forcing, aerosols, solar irradiance and global temperature are not polynomially cointegrated. This implies that recent global warming is not statistically significantly related to anthropogenic forcing. On the other hand, we find that greenhouse gas forcing might have had a temporary effect on global temperature.”

This is a most interesting paper, and potentially a bombshell, because the authors have taken virtually all of the significant observational datasets (including GISS and BEST), along with solar irradiance from Lean and Rind, plus CO2, CH4, N2O, aerosol, and even water vapor data, and put them all to statistical tests (including Lucia’s favorite, the unit root test) against forcing equations. Amazingly, it seems that they have almost entirely ruled out anthropogenic forcing in the observational data, but, allowing for the possibility that they could be wrong, they say:

“…our rejection of AGW is not absolute; it might be a false positive, and we cannot rule out the possibility that recent global warming has an anthropogenic footprint. However, this possibility is very small, and is not statistically significant at conventional levels.”

I expect folks like Tamino (aka Grant Foster) and other hotheaded statistics wonks will mount an attack on why their premise and tests are no good, but at the same time I look for other, less biased stats folks to weigh in and see how well it holds up. My sense is that the authors, Beenstock et al., have done a pretty good job of ruling out ways they may have fooled themselves. My thanks to Andre Bijkerk and Joanna Ballard for bringing this paper to my attention on Facebook.

The abstract and excerpts from the paper, along with a link to the full PDF, follow.

Polynomial cointegration tests of anthropogenic impact on global warming

M. Beenstock1, Y. Reingewertz1, and N. Paldor2

1Department of Economics, the Hebrew University of Jerusalem, Mount Scopus Campus, Jerusalem, Israel

2Fredy and Nadine Institute of Earth Sciences, the Hebrew University of Jerusalem, Edmond J. Safra campus, Givat Ram, Jerusalem, Israel

Abstract.

We use statistical methods for nonstationary time series to test the anthropogenic interpretation of global warming (AGW), according to which an increase in atmospheric greenhouse gas concentrations raised global temperature in the 20th century. Specifically, the methodology of polynomial cointegration is used to test AGW since during the observation period (1880–2007) global temperature and solar irradiance are stationary in 1st differences whereas greenhouse gases and aerosol forcings are stationary in 2nd differences. We show that although these anthropogenic forcings share a common stochastic trend, this trend is empirically independent of the stochastic trend in temperature and solar irradiance. Therefore, greenhouse gas forcing, aerosols, solar irradiance and global temperature are not polynomially cointegrated. This implies that recent global warming is not statistically significantly related to anthropogenic forcing. On the other hand, we find that greenhouse gas forcing might have had a temporary effect on global temperature.

Introduction

Considering the complexity and variety of the processes that affect Earth’s climate, it is not surprising that a completely satisfactory and accepted account of all the changes that occurred in the last century (e.g. temperature changes in the vast area of the Tropics, the balance of CO2 input into the atmosphere, changes in aerosol concentration and size and changes in solar radiation) has yet to be reached (IPCC, AR4, 2007). Of particular interest to the present study are those processes involved in the greenhouse effect, whereby some of the longwave radiation emitted by Earth is re-absorbed by some of the molecules that make up the atmosphere, such as (in decreasing order of importance): water vapor, carbon dioxide, methane and nitrous oxide (IPCC, 2007). Even though the most important greenhouse gas is water vapor, the dynamics of its flux in and out of the atmosphere by evaporation, condensation and subsequent precipitation are not understood well enough to be explicitly and exactly quantified. While much of the scientific research into the causes of global warming has been carried out using calibrated general circulation models (GCMs), since 1997 a new branch of scientific inquiry has developed in which observations of climate change are tested statistically by the method of cointegration (Kaufmann and Stern, 1997, 2002; Stern and Kaufmann, 1999, 2000; Kaufmann et al., 2006a,b; Liu and Rodriguez, 2005; Mills, 2009). The method of cointegration, developed in the closing decades of the 20th century, is intended to test for the spurious regression phenomenon in non-stationary time series (Phillips, 1986; Engle and Granger, 1987). Non-stationarity arises when the sample moments of a time series (mean, variance, covariance) depend on time. Regression relationships are spurious when unrelated non-stationary time series appear to be significantly correlated because they happen to have time trends.
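[To make the spurious-regression idea concrete, here is a minimal editorial sketch, not from the paper: two independent random walks with drift, which share nothing but a time trend, can still produce a large R2 in a naive regression. The seed, drift and sample length are arbitrary choices.]

```python
# Illustration of spurious regression: two INDEPENDENT random walks
# with drift appear strongly "related" under ordinary least squares.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 128  # roughly the length of the 1880-2007 sample

x = np.cumsum(rng.normal(loc=0.05, size=n))  # independent random walk
y = np.cumsum(rng.normal(loc=0.05, size=n))  # another, unrelated walk

fit = sm.OLS(y, sm.add_constant(x)).fit()
print(f"R^2 = {fit.rsquared:.2f}")  # often very high despite no causal link
```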

The method of cointegration has been successful in detecting spurious relationships in economic time series data.

Indeed, cointegration has become the standard econometric tool for testing hypotheses with nonstationary data (Maddala, 2001; Greene, 2012). As noted, climatologists too have used cointegration to analyse nonstationary climate data (Kaufmann and Stern, 1997). Cointegration theory is based on the simple notion that time series might be highly correlated even though there is no causal relation between them. For the relation to be genuine, the residuals from a regression between these time series must be stationary, in which case the time series are “cointegrated”. Since stationary residuals mean-revert to zero, there must be a genuine long-term relationship between the series, which move together over time because they share a common trend. If, on the other hand, the residuals are nonstationary, the residuals do not mean-revert to zero, the time series do not share a common trend, and the relationship between them is spurious because the time series are not cointegrated. Indeed, the R2 from a regression between nonstationary time series may be as high as 0.99, yet the relation may nonetheless be spurious.
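[The residual-based test described in this paragraph is the Engle–Granger two-step procedure. A hedged sketch with synthetic data follows; the series names and noise scales are illustrative, and statsmodels’ coint() wraps the same logic with critical values adjusted for the estimated regression.]

```python
# Engle-Granger logic: regress one series on the other, then test the
# residuals for a unit root. Stationary residuals => cointegration.
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
n = 128
trend = np.cumsum(rng.normal(size=n))        # a shared stochastic trend
y1 = trend + rng.normal(scale=0.5, size=n)   # both series track the same
y2 = trend + rng.normal(scale=0.5, size=n)   # trend, so they cointegrate

resid = sm.OLS(y1, sm.add_constant(y2)).fit().resid
stat, pvalue = adfuller(resid)[:2]
print(f"ADF on residuals: {stat:.2f} (p = {pvalue:.3f})")
# Small p: residuals mean-revert, so the relationship is genuine.
```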

The method of cointegration originally developed by Engle and Granger (1987) assumes that the nonstationary data are stationary in changes, or first-differences. For example, temperature might be increasing over time, and is therefore nonstationary, but the change in temperature is stationary. In the 1990s cointegration theory was extended to the case in which some of the variables have to be differenced twice (i.e. the time series of the change in the change) before they become stationary. This extension is commonly known as polynomial cointegration. Previous analyses of the non-stationarity of climatic time series (e.g. Kaufmann and Stern, 2002; Kaufmann et al., 2006a; Stern and Kaufmann, 1999) have demonstrated that global temperature and solar irradiance are stationary in first differences, whereas greenhouse gases (GHG, hereafter) are stationary in second differences. In the present study we apply the method of polynomial cointegration to test the hypothesis that global warming since 1850 was caused by various anthropogenic phenomena. Our results show that GHG forcings and other anthropogenic phenomena do not polynomially cointegrate with global temperature and solar irradiance. Therefore, despite the high correlation between anthropogenic forcings, solar irradiance and global temperature, AGW is not statistically significant. The perceived statistical relation between temperature and anthropogenic forcings is therefore a spurious regression phenomenon.
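[In code, classifying a series as I(1) or I(2) amounts to differencing until a unit-root test rejects. A minimal editorial sketch, assuming a 5% level and using ADF alone; the paper’s actual classification uses several tests in concert, as described in its Sect. 3.]

```python
# Difference a series until the ADF test rejects its unit-root null;
# the number of differences needed is the order of integration d.
import numpy as np
from statsmodels.tsa.stattools import adfuller

def integration_order(x, max_d=3, alpha=0.05):
    for d in range(max_d + 1):
        if adfuller(np.diff(x, n=d))[1] < alpha:  # [1] is the p-value
            return d
    return None  # not classified within max_d differences

rng = np.random.default_rng(1)
i2_series = np.cumsum(np.cumsum(rng.normal(size=300)))  # built to be I(2)
print(integration_order(i2_series))  # expected: 2
```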

Data and methods

We use annual data (1850–2007) on greenhouse gas (CO2, CH4 and N2O) concentrations and forcings, as well as on forcings for aerosols (black carbon, reflective tropospheric aerosols). We also use annual data (1880–2007) on solar irradiance, water vapor (1880–2003) and global mean temperature (sea and land combined, 1880–2007). These widely used secondary data are obtained from NASA-GISS (Hansen et al., 1999, 2001). Details of these data may be found in the Data Appendix.

We carry out robustness checks using new reconstructions for solar irradiance from Lean and Rind (2009), for globally averaged temperature from Mann et al. (2008) and for global land surface temperature (1850–2007) from the Berkeley Earth Surface Temperature Study.

Key time series are shown in Fig. 1, where panels a and b show the radiative forcings for three major GHGs, while panel c shows solar irradiance and global temperature. All these variables display positive time trends. However, the time trends in panels a and b appear more nonlinear than their counterparts in panel c. Indeed, statistical tests reported below reveal that the trends in panel c are linear, whereas the trends in panels a and b are quadratic. The trend in solar irradiance has weakened since 1970, while the trend in temperature weakened temporarily in the 1950s and 1960s.
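[The linear-vs-quadratic distinction can be checked by fitting a quadratic trend and testing its second-order coefficient. A sketch on synthetic stand-in data; the real series and the authors’ exact trend tests are in the paper.]

```python
# Fit y = a + b*t + c*t^2 and test whether c is significant;
# a significant c indicates a quadratic rather than linear trend.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
t = np.arange(128, dtype=float)
y = 0.01 * t + 0.002 * t**2 + rng.normal(scale=2.0, size=t.size)

X = sm.add_constant(np.column_stack([t, t**2]))
fit = sm.OLS(y, X).fit()
print(f"p-value on t^2 term: {fit.pvalues[2]:.4f}")  # small => quadratic
```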

The statistical analysis of nonstationary time series, such as those in Fig. 1, has two natural stages. The first consists of unit root tests in which the data are classified by their order and type of nonstationarity. If the data are nonstationary, sample moments such as means, variances and covariances depend upon when the data are sampled, in which event least squares and maximum likelihood estimates of parameters may be spurious. In the second stage, these nonstationary data are used to test hypotheses using the method of cointegration, which is designed to distinguish between genuine and spurious relationships between time series. Since these methods may be unfamiliar to readers of Earth System Dynamics, we provide an overview of key concepts and tests.


Fig. 1. Time series of the changes that occurred in several variables that affect or represent climate changes during the 20th century. (a) Radiative forcings (rf, in units of W m−2) during 1880 to 2007 of CH4 (methane) and CO2 (carbon dioxide); (b) same period as in panel (a) but for nitrous oxide (N2O); (c) solar irradiance (left ordinate, units of W m−2) and annual global temperature (right ordinate, units of °C) during 1880–2003.

[…]

3 Results

3.1 Time series properties of the data

Informal inspection of Fig. 1 suggests that the time series properties of greenhouse gas forcings (panels a and b) are visibly different to those for temperature and solar irradiance (panel c). In panels a and b there is evidence of acceleration, whereas in panel c the two time series appear more stable. In Fig. 2 we plot rfCO2 in first differences, which confirms by eye that rfCO2 is not I(1), particularly since 1940. Similar figures are available for other greenhouse gas forcings. In this section we establish the important result that whereas the first differences of temperature and solar irradiance are trend free, the first differences of the greenhouse gas forcings are not. This is consistent with our central claim that anthropogenic forcings are I(2), whereas temperature and solar irradiance are I(1).


Fig. 2. Time series of the first differences of rfCO2.

What we see informally is borne out by the formal statistical tests for the variables in Table 1.


Although the KPSS and DF-type statistics (ADF, PP and DF-GLS) test different null hypotheses, we successively increase d until they concur. If they concur when d = 1, we classify the variable as I(1), or difference stationary. For the anthropogenic variables concurrence occurs when d = 2. Since the DF-type tests and the KPSS tests reject that these variables are I(1) but do not reject that they are I(2), there is no dilemma here. Matters might have been different if, according to the DF-type tests, these anthropogenic variables were I(1) but according to KPSS they were I(2).
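[The concurrence rule can be expressed directly: raise d until ADF (null: unit root) rejects and KPSS (null: stationarity) does not. A minimal editorial sketch at an assumed 5% level; the paper also brings in PP and DF-GLS.]

```python
# Classify d by requiring ADF and KPSS to agree that the d-th
# difference is stationary (ADF rejects, KPSS fails to reject).
import numpy as np
from statsmodels.tsa.stattools import adfuller, kpss

def concur_order(x, max_d=3, alpha=0.05):
    for d in range(max_d + 1):
        s = np.diff(x, n=d)
        adf_p = adfuller(s)[1]             # small p: no unit root
        kpss_p = kpss(s, nlags="auto")[1]  # large p: stationary
        if adf_p < alpha and kpss_p > alpha:
            return d
    return None  # the tests never concurred

rng = np.random.default_rng(3)
print(concur_order(np.cumsum(rng.normal(size=300))))  # I(1) walk: expect 1
```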

The required number of augmentations for ADF is moot. The frequently used Schwert criterion uses a standard formula based solely on the number of observations, which is inefficient because it may waste degrees of freedom. As mentioned, we prefer instead to augment the ADF test until its residuals become serially independent according to a Lagrange multiplier (LM) test. In most cases 4 augmentations are needed; however, in the cases of rfCO2, rfN2O and stratospheric H2O, 8 augmentations are needed. In any case, the classification is robust with respect to augmentations in the range of 2–10. Therefore, we do not think that the number of augmentations affects our classifications. The KPSS and Phillips–Perron statistics use the standard nonparametric Newey-West criterion for calculating robust standard errors. In practice we find that these statistics use about 4 autocorrelations, which is similar to our LM procedure for determining the number of augmentations for ADF.
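[The augmentation rule the authors describe (add ADF lags until an LM test finds no remaining serial correlation in the residuals of the test regression) can be sketched as follows. This is an editorial illustration, not the authors’ code; regresults=True is the statsmodels option that exposes the underlying ADF regression.]

```python
# Increase the number of ADF augmentations p until a Lagrange
# multiplier test no longer detects serial correlation in the
# residuals of the ADF test regression.
import numpy as np
from statsmodels.tsa.stattools import adfuller
from statsmodels.stats.diagnostic import acorr_lm

def adf_lm_augmented(x, max_lags=10, alpha=0.05):
    for p in range(1, max_lags + 1):
        stat, pval, _, store = adfuller(x, maxlag=p, autolag=None,
                                        regresults=True)
        if acorr_lm(store.resols.resid, nlags=4)[1] > alpha:
            return stat, pval, p  # residuals serially independent at p lags
    return stat, pval, max_lags   # fall back to the largest p tried

rng = np.random.default_rng(4)
stat, pval, p = adf_lm_augmented(np.cumsum(rng.normal(size=300)))
print(f"ADF = {stat:.2f}, p = {pval:.3f}, augmentations = {p}")
```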

[…]

Discussion

We have shown that anthropogenic forcings do not polynomially cointegrate with global temperature and solar irradiance. Therefore, data for 1880–2007 do not support the anthropogenic interpretation of global warming during this period. This key result is shown graphically in Fig. 3 where the vertical axis measures the component of global temperature that is unexplained by solar irradiance according to our estimates. In panel a the horizontal axis measures the anomaly in the anthropogenic trend when the latter is derived from forcings of carbon dioxide, methane and nitrous oxide. In panel b the horizontal axis measures this anthropogenic anomaly when apart from these greenhouse gas forcings, it includes tropospheric aerosols and black carbon. Panels a and b both show that there is no relationship between temperature and the anthropogenic anomaly, once the warming effect of solar irradiance is taken into consideration.

However, we find that greenhouse gas forcings might have a temporary effect on global temperature. This result is illustrated in panel c of Fig. 3 in which the horizontal axis measures the change in the estimated anthropogenic trend. Panel c clearly shows that there is a positive relationship between temperature and the change in the anthropogenic anomaly once the warming effect of solar irradiance is taken into consideration.


Fig. 3. Statistical association between (scatter plot of) the anthropogenic anomaly (abscissa) and the net temperature effect (i.e. temperature minus the estimated solar irradiance effect; ordinate). Panels (a)–(c) display the results of models 1 and 2 in Table 3 and of Eq. (13), respectively. The anthropogenic trend anomaly sums the weighted radiative forcings of the greenhouse gases (CO2, CH4 and N2O). The net temperature effect (as defined above) is calculated by subtracting from the observed temperature in a specific year the product of the solar irradiance in that year and the coefficient obtained from the regression of the particular model equation: 1.763 in the case of model 1 (a); 1.806 in the case of model 2 (b); and 1.508 in the case of Eq. (13) (c).

Currently, most of the evidence supporting AGW theory is obtained by calibration methods and the simulation of GCMs. Calibration shows, e.g. Crowley (2000), that to explain the increase in temperature in the 20th century, and especially since 1970, it is necessary to specify a sufficiently strong anthropogenic effect. However, calibrators do not report tests for the statistical significance of this effect, nor do they check whether the effect is spurious. The implication of our results is that the permanent effect is not statistically significant. Nevertheless, there seems to be a temporary anthropogenic effect. If the effect is temporary rather than permanent, a doubling, say, of carbon emissions would have no long-run effect on Earth’s temperature, but it would increase it temporarily for some decades. Indeed, the increase in temperature during 1975–1995 and its subsequent stability are in our view related in this way to the acceleration in carbon emissions during the second half of the 20th century (Fig. 2). The policy implications of this result are major since an effect which is temporary is less serious than one that is permanent.

The fact that since the mid 19th century Earth’s temperature is unrelated to anthropogenic forcings does not contravene the laws of thermodynamics, greenhouse theory, or any other physical theory. Given the complexity of Earth’s climate, and our incomplete understanding of it, it is difficult to attribute the main cause of global warming in the 20th century to carbon emissions and other anthropogenic phenomena. This is not an argument about physics, but an argument about data interpretation. Do climate developments during the relatively recent past justify the interpretation that global warming was induced by anthropogenics during this period? Had Earth’s temperature not increased in the 20th century despite the increase in anthropogenic forcings (as was the case during the second half of the 19th century), this would not have constituted evidence against greenhouse theory. However, our results challenge the data interpretation that since 1880 global warming was caused by anthropogenic phenomena.

Nor does the fact that during this period anthropogenic forcings are I(2), i.e. stationary in second differences, whereas Earth’s temperature and solar irradiance are I(1), i.e. stationary in first differences, contravene any physical theory. For physical reasons it might be expected that over the millennia these variables should share the same order of integration; they should all be I(1) or all I(2), otherwise there would be persistent energy imbalance. However, during the last 150 yr there is no physical reason why these variables should share the same order of integration. The fact that they do not share the same order of integration over this period nevertheless means that scientists who make strong interpretations about the anthropogenic causes of recent global warming should be cautious. Our polynomial cointegration tests challenge their interpretation of the data.

Finally, all statistical tests are probabilistic and depend on the specification of the model. Type 1 error refers to the probability of rejecting a hypothesis when it is true (false positive) and type 2 error refers to the probability of not rejecting a hypothesis when it is false (false negative). In our case the type 1 error is very small because anthropogenic forcing is I (1) with very low probability, and temperature is polynomially cointegrated with very low probability. Also we have experimented with a variety of model specifications and estimation methodologies. This means, however, that as with all hypotheses, our rejection of AGW is not absolute; it might be a false positive, and we cannot rule out the possibility that recent global warming has an anthropogenic footprint. However, this possibility is very small, and is not statistically significant at conventional levels.

Full paper: http://www.earth-syst-dynam.net/3/173/2012/esd-3-173-2012.pdf

Data Appendix.


298 Comments
aaron
January 4, 2013 6:13 am

“do not”

richardscourtney
January 4, 2013 6:21 am

davidmhoffer
Your post at January 3, 2013 at 9:47 pm says

Bart says:
January 3, 2013 at 8:35 pm
>>>>>>>>>>>>>>>>>>
Wow. Folks, anyone who skipped through Bart’s post because it was long and technical…. I highly recommend going back and looking at those graphs.

I strongly agree that Bart’s graphs are very informative – everybody needs to see them – but they don’t provide the complete ‘answer’ which Bart assumes.
In my post at January 4, 2013 at 5:52 am I wrote

The equilibrium state of the carbon cycle system defines the stable distribution of CO2 among the compartments of the system. And at any moment the system is adjusting towards that stable distribution. But the equilibrium state is not a constant: it varies at all time scales.
Any change to the equilibrium state of the carbon cycle system induces a change to the amount of CO2 in the atmosphere. Indeed, this is seen as the ‘seasonal variation’ in the Mauna Loa data. However, some of the mechanisms for exchange between the compartments have rate constants of years and decades. Hence, it takes decades for the system to adjust to an altered equilibrium state.

Bart’s graphs show how the short-term processes immediately respond to the altered system state induced by temperature change.
At issue is the long-term trend in rising atmospheric CO2 concentration.
The dynamics of the system show that the carbon cycle can easily sequester ALL annual CO2 emission (both natural and anthropogenic) of each year, but the long-term rise shows that they don’t. At issue is why they don’t.
The reason for the long-term rise in atmospheric CO2 is probably that some mechanisms of the climate system take decades to fully adjust to an altered system state. Indeed, the ice core records indicate that some mechanisms take centuries to adjust.
There are many possible reasons why the equilibrium state of the carbon cycle has changed: most possibilities are natural phenomena, but the anthropogenic emission is one (improbable) possible reason.
Richard

Joe
January 4, 2013 6:25 am

richard telford says:
January 3, 2013 at 4:47 pm
[…] If Beenstock et al’s method cannot find the relationship between CO2 and temperature in the model, then it cannot be trusted if it cannot find the relationship in the real world.
Sorry, Richard, but that’s a complete logical fallacy and displays a serious misunderstanding about the nature of scientific testing (whether statistical or physical). There are many valid tests which have asymmetric reliability for positive and negative results.
That’s why two different types of error (type 1 and type 2) exist. As long as the result a test gives is of the type for which the test is reliable, it doesn’t matter at all what the likelihood of false results of the other type is.
In this case, what that means is that the analysis may well falsely indicate “a relationship” in random data but won’t (or is very unlikely to) indicate “no relationship” in data where causality does exist. So getting a result of “no relationship” in this case is a reliable indication that the data are NOT connected even though it would NOT have been reliable indication that they were connected if it had given a result of “relationship”.

Resourceguy
January 4, 2013 7:25 am

To those negative comments on my methodological reference to the world’s central banks, perhaps you are also confused between regulatory and legislative loopholes in the financial sector and central bank operation. In that sense it is much like the assessment of climate variables in which there is disagreement on what happened even in hindsight. I stand by the soundness of the statistical technique in the paper and its common use in other research fields.

richard telford
January 4, 2013 7:31 am

Joe says:
January 4, 2013 at 6:25 am
richard telford says:
January 3, 2013 at 4:47 pm
[…] If Beenstock et al’s method cannot find the relationship between CO2 and temperature in the model, then it cannot be trusted if it cannot find the relationship in the real world.
Sorry, Richard, but that’s a complete logical fallacy and displays a serious misunderstanding about the nature of scientific testing (whether statistical or physical). There are many valid tests which have asymmetric reliability for positive and negative results.
—————
Since there is nothing wrong with what you wrote, and I don’t say anything about asymmetrical reliability, I can only assume that you misunderstood what I wrote.
Beenstock et al. did not explore the Type II error rate of their method. Therefore, when they find no relationship, how sure can we be that there really is no relationship, and not that the apparent absence of a relationship is because their method has little statistical power? I would not be in the least surprised if their method had little power. They would not be the first people to proclaim an important negative result while using a low-powered method.
I am simply proposing a means by which the Type II error rate of their method could be established. If they could demonstrate that their method had high power on artificial data, more credibility could be given to their analysis on real data.
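[What Telford is proposing could look something like the following Monte Carlo sketch. This is an editorial illustration with made-up parameters, not his code: simulate data in which temperature truly tracks an I(2) forcing, apply a residual-based cointegration test, and count how often the true relationship is detected.]

```python
# Estimate the power (1 - Type II error rate) of a residual-based
# cointegration test on synthetic data with a TRUE relationship.
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(5)
n, trials, alpha = 128, 500, 0.05
hits = 0
for _ in range(trials):
    forcing = np.cumsum(np.cumsum(rng.normal(scale=0.02, size=n)))  # I(2)
    temp = 0.5 * forcing + rng.normal(scale=0.1, size=n)  # genuine link
    resid = sm.OLS(temp, sm.add_constant(forcing)).fit().resid
    hits += adfuller(resid)[1] < alpha  # did the test detect the link?
print(f"estimated power ≈ {hits / trials:.2f}")
```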

DeWitt Payne
January 4, 2013 7:31 am

richardscourtney,
E&E will publish pretty much anything. Saying your paper is peer reviewed does not put it in the same league as the papers in, for example, Wigley and Schimel. Gerlich & Tscheuchner’s falsification paper and Miskolczi’s papers were similarly peer reviewed. They’re still wrong. In the end, many peer reviewed papers in the mainstream journals will turn out to be wrong. Sturgeon’s Law (or Revelation) is that 90% of everything is crud. I haven’t read your paper, but I’m betting that you cite Beck and/or Jaworowski. If that is the case, then your paper is definitely in the 90% category.
The fact is that human emissions of CO2 are more than enough to explain the increase in atmospheric CO2. And the model using only human CO2 emission fits the observed levels very well. Any natural process would have to alter that relationship. The only alteration observed is the so-called missing sink. That caused a reduction in the rate of atmospheric CO2 concentration increase, not an increase in the rate.

Joe
January 4, 2013 7:47 am

DeWitt Payne says:
January 4, 2013 at 7:31 am
The fact is that human emissions of CO2 are more than enough to explain the increase in atmospheric CO2. And the model using only human CO2 emission fits the observed levels very well.
The model of the world being flat was more than enough to explain the observation that sailors never returned from over the horizon. Didn’t make it right though!

DeWitt Payne
January 4, 2013 7:55 am

Joe,

In this case, what that means is that the analysis may well falsely indicate “a relationship” in random data but won’t (or is very unlikely to) indicate “no relationship” in data where causality does exist. So getting a result of “no relationship” in this case is a reliable indication that the data are NOT connected even though it would NOT have been reliable indication that they were connected if it had given a result of “relationship”.

I suggest you research the various unit root tests and cointegration theory. Beenstock et al. do not find that there is no relationship. They find that any relationship must be spurious because of the structure of the time series. But time series testing for unit roots has problems when there is a non-linear deterministic trend in the data. The tests will find unit roots when none are actually present. Worse, there is no consensus on whether or how to remove deterministic trends before testing.
In fact, there is good reason to believe that the unforced temperature series cannot have d > 0.5, the limit for long-term persistence. Thus, finding values of d ~ 1 should have been a red flag that the tests were being improperly applied and/or that the series was probably being forced. But Beenstock et al. are economists, not physical scientists.
You really should read the article that was linked earlier. It’s a complete rejection of cointegration theory in no uncertain terms.
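[The point above about nonlinear deterministic trends can be illustrated with a hedged editorial sketch on synthetic data, assuming a 5% level: a stationary series around a quadratic trend can look like a unit root to a test that only de-trends linearly.]

```python
# A trend-stationary series with a quadratic trend: de-trending
# linearly ('ct') often fails to reject the false unit-root null,
# while allowing a quadratic trend ('ctt') rejects it.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(6)
t = np.arange(300, dtype=float)
y = 1e-3 * t**2 + rng.normal(size=t.size)  # no unit root by construction

print(adfuller(y, regression='ct')[1])   # typically large: spurious unit root
print(adfuller(y, regression='ctt')[1])  # typically small: correctly rejects
```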

richardscourtney
January 4, 2013 8:25 am

DeWitt Payne:
Your post at January 4, 2013 at 7:31 am consists solely of more unsubstantiated assertion from you.
Clearly, facts and evidence have no possibility of breaking through the armour you have put around your beliefs. You are entitled to believe whatever you want, but I prefer to consider the science and what it indicates.
You say

The fact is that human emissions of CO2 are more than enough to explain the increase in atmospheric CO2. And the model using only human CO2 emission fits the observed levels very well.

Yes, the “human emissions of CO2 are more than enough to explain the increase in atmospheric CO2”: I said that. And I also stated the fact that many natural effects also explain the increase in atmospheric CO2 much better, but you ignore that fact because it does not fit with what you want to believe.
And you don’t say which model you mean when you say “the model using only human CO2 emission fits the observed levels very well”. If you mean the Bern Model then it doesn’t fit the observed levels: it requires unjustifiable smoothing of the data to make it fit.
Our paper provides three models which each uses only human CO2 emission and each fits the observed levels perfectly within the measurement errors and with no smoothing. But so what? Our paper also provides three models which each has the change induced by a different natural cause and they each also fit the observed levels perfectly within the measurement errors and with no smoothing.
As I said,

Data that fits all the possible causes is not evidence for the true cause.

I want to know the true cause(s).
Reality is what it is, and your beliefs cannot change reality whatever it is.
Richard

richardscourtney
January 4, 2013 8:42 am

DeWitt Payne:
This is an addendum to my reply to your post at January 4, 2013 at 7:31 am as substantiation of my claim concerning your beliefs.
You say to me

I haven’t read your paper, but I’m betting that you cite Beck and/or Jaworowski. If that is the case, then your paper is definitely in the 90% category.

Our paper mentions neither Beck nor Jaworowski.
You would have been able to assess the paper if you had read it.
Your words I quote here are an example of you ‘making stuff up’ in a fallacious attempt to justify your fallacious assertions.
If you had an argument worth making then you would make it instead of inventing things in your mind as self-serving justification of your assertions. Those assertions can only be beliefs because they are based solely on assertions justified by untrue assumptions.
Richard

Philip Shehan
January 4, 2013 8:43 am

Quoting from the paper:
“3.1 Time series properties of the data
Informal inspection of Fig. 1 suggests that the time series properties of greenhouse gas forcings (panels a and b) are visibly different to those for temperature and solar irradiance (panel c). In panels a and b there is evidence of acceleration, whereas in panel c the two time series appear more stable.”
Informal inspection of the temperature data of panel c does show acceleration, matching that of the greenhouse gas forcing plots in a and b. The temperature rise appears less dramatic due to different scaling factors used in the 3 plots, but the acceleration of the temperature in the last 40 years compared to the previous 80 is clear to the naked eye. This is confirmed by a formal fit of temperature data to a nonlinear equation.

Philip Shehan
January 4, 2013 8:44 am

Apologies for not including the nonlinear plot in the previous post.
http://www.skepticalscience.com/pics/AMTI.png

brians356
January 4, 2013 8:55 am

A little late to the party here, but my friend in DC (must remain anonymous, but is an energy division lead economist for a prominent three-letter agency) says:
“This is interesting. I have no idea what climate change modelers have done but if their claims of causality in an empirical sense have not taken tests for stationarity of the underlying time series then the regression results would be possibly meaningless. A big if. This is not new statistics and I doubt folks have ignored it. But, like I said, I do not know what the empirical climate models have done.”
Any germane comments welcome.

Joe
January 4, 2013 9:03 am

DeWitt Payne says:
January 4, 2013 at 7:55 am
lots of irrelevant stuff
In case you hadn’t noticed, my post was nothing to do with the validity or otherwise of the paper’s findings. I’ll leave that up to people far more qualified than me (or, likely, you) to determine.
But Richard Telford’s original post (which I ignored the fallacy in) and his follow-up contained a very basic logical fallacy: that a test can’t be any good unless it provides reliable results in both directions – hence his concern about type 2 error levels in the original, when type 2 errors play no part in the validity regarding type 1.
I considered building a nice analogy to demonstrate the flaw in his reasoning but decided it was easier, and more relevant, to explain it in terms of the paper under discussion. To explain the logical flaw in requiring tests to have equal (or even known) errors of both types didn’t require any discussion about whether or not the analysis in the paper is appropriate. Indeed, introducing such discussion would only obfuscate the point I was explaining to Mr Telford.
Perhaps you should try to fully understand what people are saying before you expect them to accept your own points. After all, given your apparent mis-comprehension of my post, one does wonder how much you might actually comprehend (as opposed to simply repeating from somewhere) the far more technical matters that you’re using to criticise the paper?

MattS
January 4, 2013 9:07 am

richardscourtney,
“Those assertions can only be beliefs because they are based solely on assertions justified by untrue assumptions.”
They would still only be beliefs even if they were backed by true assumptions. It only matters that what backs the assertion is an assumption rather than evidence.
🙂

richardscourtney
January 4, 2013 9:14 am

MattS:
re your post addressed to me at January 4, 2013 at 9:07 am.
Yes, of course you are right. I stand corrected. Thank you.
Richard

January 4, 2013 9:22 am

“While much of the scientific research into the causes of global warming has been carried out using calibrated general circulation models (GCMs), since 1997 a new branch of scientific inquiry has developed in which observations of climate change are tested statistically by the method of cointegration.”
Gee, a new branch of inquiry based on observations – this is the Achilles heel that the hockey team will exploit in debunking this upstart idea.

Bart
January 4, 2013 9:58 am

JazzyT says:
January 4, 2013 at 12:12 am
“there will be a 180 degree phase shift due to the 24-month average”
The WoodForTrees site automatically shifts the moving average to have zero phase offset.
“But anthropogenic CO2 would show up mostly as a steady value (an offset) in the derivative of CO2, if it’s the relatively constant rise that people seem to think that it is, and that the Keeling curve indicates.”
Anthropogenic CO2 would show up as a trend in the CO2 derivative, because production has been steadily increasing. There is no room for such an additional term, because the slope is already accounted for by the temperature relationship.
“Measurements of carbon isotopes in atmospheric CO2 show a decreasing concentration of C-13, consistent with the notion that increased CO2 arises from these anthropogenic sources.”
“Consistent with” is not proof. The derivative relationship I have shown reveals that the consistency is spurious happenstance.
richardscourtney says:
January 4, 2013 at 6:21 am
“…but they don’t provide the complete ‘answer’ which Bart assumes.”
We’ve been over this many times and are not going to agree. But, for the record, I do not assume, I observe. The match is virtually perfect and seamless across the observable frequency spread. It is clear that temperature is in the driver’s seat.
DeWitt Payne says:
January 4, 2013 at 7:31 am
“The fact is that human emissions of CO2 are more than enough to explain the increase in atmospheric CO2.”
The fact is, this tells you nothing about whether it is responsible for it, only whether it could be.
“And the model using only human CO2 emission fits the observed levels very well.”
It fits very poorly in the fine detail. As I show, the model using temperature only fits the observed levels very well, too. But across all frequencies, not just in the quadratic term.
“Any natural process would have to alter that relationship.”
The relationship is spurious. It is happenstance. And, it is not at all an unlikely thing to have two increasing time series match a low order polynomial when you can add an arbitrary offset and scaling.
MattS says:
January 4, 2013 at 9:07 am
“richardscourtney,
“Those assertions can only be beliefs because they are based solely on assertions justified by untrue assumptions.”

And, so we reach a state in which an erroneous conclusion propagates from an initial erroneous conclusion which gets all but forgotten, and is always referred to, but never reexamined. A review of Feynman’s recounting of the measurement of electron charge might be in order. Nobody wanted to go too far from Millikan’s value. Scientists are social animals, too, and they often seek safety in the herd.

Tom in Indy
January 4, 2013 10:13 am

Philip
Given your claim that temperature change is accelerating over the last 40 years, the lack of acceleration over the last 15 years, nearly 40% of the period in question, contradicts your claim. Try fitting a linear trend, a concave trend and your convex trend to the data for the period 1970-2012 and report back with the R2. I have a hunch that your convex trend will produce the poorest fit.

richardscourtney
January 4, 2013 10:27 am

Bart:
I am replying to a comment in your post at January 4, 2013 at 9:58 am for the information of others. You say

richardscourtney says:
January 4, 2013 at 6:21 am

…but they don’t provide the complete ‘answer’ which Bart assumes.

We’ve been over this many times and are not going to agree. But, for the record, I do not assume, I observe. The match is virtually perfect and seamless across the observable frequency spread. It is clear that temperature is in the driver’s seat.

Yes, we have “been over this many times” and it is clear that we “are not going to agree”.
I have quoted your view here and my view is explained in my post at January 4, 2013 at 6:21 am which you cite.
There are those (e.g. DeWitt Payne) who state certainty that the recent rise in atmospheric CO2 concentration has an anthropogenic cause. And there are others (e.g. yourself) who state certainty that the recent rise in atmospheric CO2 concentration has a natural cause.
I remain ‘on the fence’ about the causality until I see data which convinces me to ‘get off the fence’ on one side. Your data convinces you but not me that I should ‘get off the fence’ on your side.
Richard

newcanf
January 4, 2013 11:03 am

DeWitt Payne.
You are jumping the shark here. Cointegration is a longstanding and mainstream method; Granger and Engle won the Nobel prize for their work in pioneering the field. To rebut this, you repeatedly link to a single paper in a minor finance journal (as a financial professional, I have never even heard of the journal). According to Google Scholar, the paper has been cited a total of three times – all three by the author himself! Given the hundreds of papers published on cointegration in any year, this is a not very impressive achievement. So you disparage E&E, but somehow place significant reliance on this fringe paper in a fringe journal. Not very consistent.

Philip Shehan
January 4, 2013 11:37 am

Sorry if this is a repost but I think I messed up the first attempt.
Tom,
I was commenting on the authors’ statement about their figure presenting their data from 1880 to the present. That is their chosen data set.
As a scientist I am used to looking at such graphs, but believe that even “informal” examination by the untrained eye can discern an accelerating trend in the data. In case some people were having trouble, I simply suggested concentrating on the last 40 years of data compared to the previous 80 (actually 90) and “informally” making a linear fit with the mind’s eye. This impression can be confirmed by actual linear fits to temperature data for the period 1880 to the present, 1880 to 1969, and 1970 to the present.
http://www.woodfortrees.org/plot/gistemp-dts/from:1880/to:2013/plot/gistemp-dts/from:1970/to:2013/trend/plot/gistemp-dts/from:1880/to:1969/trend/plot/gistemp-dts/to:1880/to:2013/trend.
The fit for all the data is clearly inferior to the nonlinear fit. (Unfortunately the R2 values for the linear fits are not given, nor is the function for the nonlinear plot, but it appears to be a second-order polynomial or exponential.)
http://www.skepticalscience.com/pics/AMTI.png
With regard to a nonlinear fit for the past fifteen years: temperature data is much noisier than greenhouse gas concentration, as the former is also dependent on factors such as solar output, volcanic eruptions, and El Niño and La Niña events, to name some of the most significant. Temperature trends must be analysed over multidecadal time periods. The noisy data means that the linear function from 1970 to the present is reasonable but is inferior to the nonlinear fit over the longer period.

D Böehm
January 4, 2013 12:17 pm

Philip Shehan says:
“Informal inspection of the temperature data of panel c does show acceleration…”
Wrong.
But I knew this would happen. As Werner Brozek repeatedly shows in great detail and based on extensive data sets, there is no recent acceleration of global temperatures. Faced with that undeniable fact, the alarmist crowd has one of two choices:
1. Admit that despite the rise in CO2, there has been no acceleration of global temperatures, and reassess their failed conjecture, or…
2. Lie about it.
Global temperatures are not accelerating. In fact, as the WFT chart shows, global warming has stopped for the past decade and a half. Claiming that global temperatures are “accelerating” when the data shows otherwise is pure mendacity.

Philip Shehan
January 4, 2013 1:02 pm

D.Boehm,
All I can do is redirect you to my 11.37 PM post.
Again, I am specifically analysing the data, and the claims made for it from 1880 by the paper’s authors. I have tried to be polite, but since you are implying I am a liar: only wilful self-delusion, ignorance or dishonesty can lead you to ignore mathematical analysis of the entire data set and cherry-pick a 5% segment of the total data carefully selected to begin with the extreme El Niño southern summer of 1997–98, which in no way invalidates the 130 year trend.
Why didn’t you pick the 15 year period between 1940 and 1955 to prove that temperatures from 1880 to present have been dropping?
http://www.woodfortrees.org/plot/gistemp-dts/from:1940/to:1955/plot/gistemp-dts/from:1940/to:1955/trend

Matthew R Marler
January 4, 2013 1:06 pm

When the paper was first put up on Beenstock’s web page I bought a couple of books on the topic of non-linear co-integrated vector autoregressive (VAR) processes. Linear co-integrated VAR processes have been studied for decades. Except for the possibility of programming errors (and I hope that the authors follow a recently and widely but not universally promoted standard of putting all of their code, data, intermediate results, etc on line), I have two criticisms of the paper:
1. The standard: it is really hard to infer causation from vector time series without interventions (interventions can be conducted in chemical process control, where the VAR processes have been used with success). All they have shown, with that caveat in mind, is that it is possible, contrary to a claim by IPCC AR4, to create and estimate a reasonable model for 20th century temperature change that gives little or no weight to CO2 changes. In a sense, this is a complicated counterpoise to Vaughan Pratt’s modeling of a few weeks ago, in which he showed that, assuming a functional form for the CO2 effect, he could estimate a filter to reveal that functional form. In each case, by enlarging the total field of functions under consideration, you can generally get a model to justify any a priori chosen conclusion.
2. I would like to have seen more graphs displaying the estimated non-linear relationships between the measured variables at each time point and the full set of variables at each lag: model and data. This is among the things that I hope they provide on line, but if they put up their data, model and results it will be possible for others (maybe I will) to produce those plots.
I think the paper is a solid contribution to the topic of modeling multivariate climate data. Now that their model has been published, its predictions can be updated as new data on CO2 concentrations and solar indices become available, and we can see how well it does on “1-year ahead”, “5-year ahead” and “10-year ahead” predictions without changing model parameters. As with all of the other models, if the mean square prediction error is small enough, we may begin to rely on its predictions.
