AGW Bombshell? A new paper shows statistical tests for global warming fail to find statistically significant anthropogenic forcing

From the journal Earth System Dynamics, billed as “An Interactive Open Access Journal of the European Geosciences Union”, comes this paper, which suggests that the posited AGW forcing effects simply aren’t statistically significant in the observations, but other natural forcings are.

“…We show that although these anthropogenic forcings share a common stochastic trend, this trend is empirically independent of the stochastic trend in temperature and solar irradiance. Therefore, greenhouse gas forcing, aerosols, solar irradiance and global temperature are not polynomially cointegrated. This implies that recent global warming is not statistically significantly related to anthropogenic forcing. On the other hand, we find that greenhouse gas forcing might have had a temporary effect on global temperature.”

This is a most interesting paper, and potentially a bombshell, because they have taken virtually all of the significant observational datasets (including GISS and BEST) along with solar irradiance from Lean and Rind, and CO2, CH4, N2O, aerosols, and even water vapor data and put them all to statistical tests (including Lucia’s favorite, the unit root test) against forcing equations. Amazingly, it seems that they have almost entirely ruled out anthropogenic forcing in the observational data, but, allowing for the possibility that they could be wrong, they say:

“…our rejection of AGW is not absolute; it might be a false positive, and we cannot rule out the possibility that recent global warming has an anthropogenic footprint. However, this possibility is very small, and is not statistically significant at conventional levels.”

I expect folks like Tamino (aka Grant Foster) and other hotheaded statistics wonks will begin an attack on why their premise and tests are no good, but at the same time I look for other less biased stats folks to weigh in and see how well it holds up. My sense of this is that the authors, Beenstock et al., have done a pretty good job of ruling out ways they may have fooled themselves. My thanks to Andre Bijkerk and Joanna Ballard for bringing this paper to my attention on Facebook.

The abstract and excerpts from the paper, along with a link to the full PDF, follow.

Polynomial cointegration tests of anthropogenic impact on global warming

M. Beenstock1, Y. Reingewertz1, and N. Paldor2

1Department of Economics, the Hebrew University of Jerusalem, Mount Scopus Campus, Jerusalem, Israel

2Fredy and Nadine Institute of Earth Sciences, the Hebrew University of Jerusalem, Edmond J. Safra campus, Givat Ram, Jerusalem, Israel

Abstract.

We use statistical methods for nonstationary time series to test the anthropogenic interpretation of global warming (AGW), according to which an increase in atmospheric greenhouse gas concentrations raised global temperature in the 20th century. Specifically, the methodology of polynomial cointegration is used to test AGW since during the observation period (1880–2007) global temperature and solar irradiance are stationary in 1st differences whereas greenhouse gases and aerosol forcings are stationary in 2nd differences. We show that although these anthropogenic forcings share a common stochastic trend, this trend is empirically independent of the stochastic trend in temperature and solar irradiance. Therefore, greenhouse gas forcing, aerosols, solar irradiance and global temperature are not polynomially cointegrated. This implies that recent global warming is not statistically significantly related to anthropogenic forcing. On the other hand, we find that greenhouse gas forcing might have had a temporary effect on global temperature.

Introduction

Considering the complexity and variety of the processes that affect Earth’s climate, it is not surprising that a completely satisfactory and accepted account of all the changes that occurred in the last century (e.g. temperature changes in the vast area of the Tropics, the balance of CO2 input into the atmosphere, changes in aerosol concentration and size and changes in solar radiation) has yet to be reached (IPCC, AR4, 2007). Of particular interest to the present study are those processes involved in the greenhouse effect, whereby some of the longwave radiation emitted by Earth is re-absorbed by some of the molecules that make up the atmosphere, such as (in decreasing order of importance): water vapor, carbon dioxide, methane and nitrous oxide (IPCC, 2007). Even though the most important greenhouse gas is water vapor, the dynamics of its flux in and out of the atmosphere by evaporation, condensation and subsequent precipitation are not understood well enough to be explicitly and exactly quantified. While much of the scientific research into the causes of global warming has been carried out using calibrated general circulation models (GCMs), since 1997 a new branch of scientific inquiry has developed in which observations of climate change are tested statistically by the method of cointegration (Kaufmann and Stern, 1997, 2002; Stern and Kaufmann, 1999, 2000; Kaufmann et al., 2006a,b; Liu and Rodriguez, 2005; Mills, 2009). The method of cointegration, developed in the closing decades of the 20th century, is intended to test for the spurious regression phenomena in non-stationary time series (Phillips, 1986; Engle and Granger, 1987). Non-stationarity arises when the sample moments of a time series (mean, variance, covariance) depend on time. Regression relationships are spurious when unrelated non-stationary time series appear to be significantly correlated because they happen to have time trends.

The method of cointegration has been successful in detecting spurious relationships in economic time series data.

Indeed, cointegration has become the standard econometric tool for testing hypotheses with nonstationary data (Maddala, 2001; Greene, 2012). As noted, climatologists too have used cointegration to analyse nonstationary climate data (Kaufmann and Stern, 1997). Cointegration theory is based on the simple notion that time series might be highly correlated even though there is no causal relation between them. For the relation to be genuine, the residuals from a regression between these time series must be stationary, in which case the time series are “cointegrated”. Since stationary residuals mean-revert to zero, there must be a genuine long-term relationship between the series, which move together over time because they share a common trend. If, on the other hand, the residuals are nonstationary, the residuals do not mean-revert to zero, the time series do not share a common trend, and the relationship between them is spurious because the time series are not cointegrated. Indeed, the R2 from a regression between nonstationary time series may be as high as 0.99, yet the relation may nonetheless be spurious.
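As a toy illustration of the spurious regression problem (mine, not the paper’s): regressing one simulated random walk on another, independent one typically yields an impressively high R² and t-statistic, while the nonstationary residuals give the game away.

```python
# Toy spurious-regression demo (illustration only, not from the paper).
# Two independent random walks typically show a high R^2 and t-statistic,
# but their regression residuals are nonstationary, so they do not cointegrate.
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
n = 500
x = np.cumsum(rng.normal(size=n))  # random walk, I(1)
y = np.cumsum(rng.normal(size=n))  # independent random walk, I(1)

ols = sm.OLS(y, sm.add_constant(x)).fit()
print(f"R^2 = {ols.rsquared:.2f}, t on x = {ols.tvalues[1]:.1f}")

# Engle-Granger-style check: stationary residuals would indicate cointegration.
p_resid = adfuller(ols.resid)[1]
print(f"ADF p-value on residuals = {p_resid:.2f}")  # usually large here: spurious
```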

The method of cointegration originally developed by Engle and Granger (1987) assumes that the nonstationary data are stationary in changes, or first-differences. For example, temperature might be increasing over time, and is therefore nonstationary, but the change in temperature is stationary. In the 1990s cointegration theory was extended to the case in which some of the variables have to be differenced twice (i.e. the time series of the change in the change) before they become stationary. This extension is commonly known as polynomial cointegration. Previous analyses of the non-stationarity of climatic time series (e.g. Kaufmann and Stern, 2002; Kaufmann et al., 2006a; Stern and Kaufmann, 1999) have demonstrated that global temperature and solar irradiance are stationary in first differences, whereas greenhouse gases (GHG, hereafter) are stationary in second differences. In the present study we apply the method of polynomial cointegration to test the hypothesis that global warming since 1850 was caused by various anthropogenic phenomena. Our results show that GHG forcings and other anthropogenic phenomena do not polynomially cointegrate with global temperature and solar irradiance. Therefore, despite the high correlation between anthropogenic forcings, solar irradiance and global temperature, AGW is not statistically significant. The perceived statistical relation between temperature and anthropogenic forcings is therefore a spurious regression phenomenon.
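The I(1) versus I(2) distinction can be checked mechanically. A hedged sketch on synthetic data (my example, not the paper’s series): difference each series until an ADF test rejects a unit root.

```python
# Sketch: classify integration order by differencing until ADF rejects a unit
# root (synthetic series; the paper's own tests and settings are not reproduced).
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(1)
i1 = np.cumsum(rng.normal(size=400))             # I(1): random walk
i2 = np.cumsum(np.cumsum(rng.normal(size=400)))  # I(2): doubly integrated noise

for name, s in [("i1", i1), ("i2", i2)]:
    for d in range(3):
        p = adfuller(np.diff(s, n=d))[1]  # n=0 leaves the series unchanged
        print(f"{name}, d={d}: ADF p = {p:.3f}")
# In typical runs, i1 rejects the unit root at d=1; i2 only at d=2.
```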

Data and methods

We use annual data (1850–2007) on greenhouse gas (CO2, CH4 and N2O) concentrations and forcings, as well as on forcings for aerosols (black carbon, reflective tropospheric aerosols). We also use annual data (1880–2007) on solar irradiance, water vapor (1880–2003) and global mean temperature (sea and land combined, 1880–2007). These widely used secondary data are obtained from NASA-GISS (Hansen et al., 1999, 2001). Details of these data may be found in the Data Appendix.

We carry out robustness checks using new reconstructions for solar irradiance from Lean and Rind (2009), for globally averaged temperature from Mann et al. (2008) and for global land surface temperature (1850–2007) from the Berkeley Earth Surface Temperature Study.

Key time series are shown in Fig. 1, where panels a and b show the radiative forcings for three major GHGs, while panel c shows solar irradiance and global temperature. All these variables display positive time trends. However, the time trends in panels a and b appear more nonlinear than their counterparts in panel c. Indeed, statistical tests reported below reveal that the trends in panel c are linear, whereas the trends in panels a and b are quadratic. The trend in solar irradiance weakened since 1970, while the trend in temperature weakened temporarily in the 1950s and 1960s.

The statistical analysis of nonstationary time series, such as those in Fig. 1, has two natural stages. The first consists of unit root tests in which the data are classified by their order and type of nonstationarity. If the data are nonstationary, sample moments such as means, variances and covariances depend upon when the data are sampled, in which event least squares and maximum likelihood estimates of parameters may be spurious. In the second stage, these nonstationary data are used to test hypotheses using the method of cointegration, which is designed to distinguish between genuine and spurious relationships between time series. Since these methods may be unfamiliar to readers of Earth System Dynamics, we provide an overview of key concepts and tests.


Fig. 1. Time series of the changes that occurred in several variables that affect or represent climate changes during the 20th century. (a) Radiative forcings (rf, in units of W m−2) during 1880–2007 of CH4 (methane) and CO2 (carbon dioxide); (b) same period as in panel (a) but for nitrous oxide (N2O); (c) solar irradiance (left ordinate, units of W m−2) and annual global temperature (right ordinate, units of °C) during 1880–2003.

[…]

3 Results

3.1 Time series properties of the data

Informal inspection of Fig. 1 suggests that the time series properties of greenhouse gas forcings (panels a and b) are visibly different to those for temperature and solar irradiance (panel c). In panels a and b there is evidence of acceleration, whereas in panel c the two time series appear more stable. In Fig. 2 we plot rfCO2 in first differences, which confirms by eye that rfCO2 is not I(1), particularly since 1940. Similar figures are available for other greenhouse gas forcings. In this section we establish the important result that whereas the first differences of temperature and solar irradiance are trend free, the first differences of the greenhouse gas forcings are not. This is consistent with our central claim that anthropogenic forcings are I(2), whereas temperature and solar irradiance are I(1).


Fig. 2. Time series of the first differences of rfCO2.

What we see informally is borne out by the formal statistical tests for the variables in Table 1.


Although the KPSS and DF-type statistics (ADF, PP and DF-GLS) test different null hypotheses, we successively increase d until they concur. If they concur when d = 1, we classify the variable as I(1), or difference stationary. For the anthropogenic variables, concurrence occurs when d = 2. Since the DF-type tests and the KPSS tests reject that these variables are I(1) but do not reject that they are I(2), there is no dilemma here. Matters might have been different if, according to the DF-type tests, these anthropogenic variables were I(1) but, according to KPSS, they were I(2).
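In code, the concurrence rule reads roughly as follows. This is a sketch of the logic only, assuming statsmodels defaults; the paper’s exact deterministic terms and lag choices are not reproduced here.

```python
# Sketch of the concurrence rule: raise d until the ADF test (null: unit root)
# rejects and the KPSS test (null: stationarity) does not reject.
import numpy as np
from statsmodels.tsa.stattools import adfuller, kpss

def integration_order(series, max_d=3, alpha=0.05):
    s = np.asarray(series, dtype=float)
    for d in range(max_d + 1):
        x = np.diff(s, n=d)
        adf_rejects = adfuller(x)[1] < alpha             # evidence against a unit root
        kpss_rejects = kpss(x, nlags="auto")[1] < alpha  # evidence against stationarity
        if adf_rejects and not kpss_rejects:
            return d  # the tests concur: the d-th difference is stationary
    return None  # no concurrence up to max_d
```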

The required number of augmentations for ADF is moot. The frequently used Schwert criterion uses a standard formula based solely on the number of observations, which is inefficient because it may waste degrees of freedom. As mentioned, we prefer instead to augment the ADF test until its residuals become serially independent according to a Lagrange multiplier (LM) test. In most cases 4 augmentations are needed; however, in the cases of rfCO2, rfN2O and stratospheric H2O, 8 augmentations are needed. In any case, the classification is robust with respect to augmentations in the range of 2–10. Therefore, we do not think that the number of augmentations affects our classifications. The KPSS and Phillips–Perron statistics use the standard nonparametric Newey–West criteria for calculating robust standard errors. In practice we find that these statistics use about 4 autocorrelations, which is similar to our LM procedure for determining the number of augmentations for ADF.
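The LM-based augmentation rule might look like the following sketch; the LM lag count of 4 and the 5% level are my assumptions, not the paper’s stated settings.

```python
# Sketch: add ADF lags until a Lagrange multiplier test finds no serial
# correlation left in the ADF regression residuals.
from statsmodels.tsa.stattools import adfuller
from statsmodels.stats.diagnostic import acorr_lm

def adf_augmentations(x, max_lags=10, alpha=0.05):
    for p in range(max_lags + 1):
        # store=True / regresults=True expose the underlying ADF regression fit
        *_, res = adfuller(x, maxlag=p, autolag=None, store=True, regresults=True)
        if acorr_lm(res.resols.resid, nlags=4)[1] > alpha:
            return p  # residuals look serially independent with p augmentations
    return max_lags
```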

[…]

Discussion

We have shown that anthropogenic forcings do not polynomially cointegrate with global temperature and solar irradiance. Therefore, data for 1880–2007 do not support the anthropogenic interpretation of global warming during this period. This key result is shown graphically in Fig. 3 where the vertical axis measures the component of global temperature that is unexplained by solar irradiance according to our estimates. In panel a the horizontal axis measures the anomaly in the anthropogenic trend when the latter is derived from forcings of carbon dioxide, methane and nitrous oxide. In panel b the horizontal axis measures this anthropogenic anomaly when apart from these greenhouse gas forcings, it includes tropospheric aerosols and black carbon. Panels a and b both show that there is no relationship between temperature and the anthropogenic anomaly, once the warming effect of solar irradiance is taken into consideration.

However, we find that greenhouse gas forcings might have a temporary effect on global temperature. This result is illustrated in panel c of Fig. 3 in which the horizontal axis measures the change in the estimated anthropogenic trend. Panel c clearly shows that there is a positive relationship between temperature and the change in the anthropogenic anomaly once the warming effect of solar irradiance is taken into consideration.


Fig. 3. Statistical association between (scatter plot of) the anthropogenic anomaly (abscissa) and the net temperature effect (i.e. temperature minus the estimated solar irradiance effect; ordinate). Panels (a)–(c) display the results of models 1 and 2 in Table 3 and Eq. (13), respectively. The anthropogenic trend anomaly sums the weighted radiative forcings of the greenhouse gases (CO2, CH4 and N2O). The net temperature effect (as defined above) is calculated by subtracting from the observed temperature in a specific year the product of the solar irradiance in that year and the coefficient obtained from the regression of the particular model equation: 1.763 in the case of model 1 (a); 1.806 in the case of model 2 (b); and 1.508 in the case of Eq. (13) (c).

Currently, most of the evidence supporting AGW theory is obtained by calibration methods and the simulation of GCMs. Calibration shows, e.g. Crowley (2000), that to explain the increase in temperature in the 20th century, and especially since 1970, it is necessary to specify a sufficiently strong anthropogenic effect. However, calibrators do not report tests for the statistical significance of this effect, nor do they check whether the effect is spurious. The implication of our results is that the permanent effect is not statistically significant. Nevertheless, there seems to be a temporary anthropogenic effect. If the effect is temporary rather than permanent, a doubling, say, of carbon emissions would have no long-run effect on Earth’s temperature, but it would increase it temporarily for some decades. Indeed, the increase in temperature during 1975–1995 and its subsequent stability are in our view related in this way to the acceleration in carbon emissions during the second half of the 20th century (Fig. 2). The policy implications of this result are major since an effect which is temporary is less serious than one that is permanent.

The fact that since the mid 19th century Earth’s temperature is unrelated to anthropogenic forcings does not contravene the laws of thermodynamics, greenhouse theory, or any other physical theory. Given the complexity of Earth’s climate, and our incomplete understanding of it, it is difficult to attribute to carbon emissions and other anthropogenic phenomena the main cause for global warming in the 20th century. This is not an argument about physics, but an argument about data interpretation. Do climate developments during the relatively recent past justify the interpretation that global warming was induced by anthropogenics during this period? Had Earth’s temperature not increased in the 20th century despite the increase in anthropogenic forcings (as was the case during the second half of the 19th century), this would not have constituted evidence against greenhouse theory. However, our results challenge the data interpretation that since 1880 global warming was caused by anthropogenic phenomena.

Nor does the fact that during this period anthropogenic forcings are I(2), i.e. stationary in second differences, whereas Earth’s temperature and solar irradiance are I(1), i.e. stationary in first differences, contravene any physical theory. For physical reasons it might be expected that over the millennia these variables should share the same order of integration; they should all be I(1) or all I(2), otherwise there would be a persistent energy imbalance. During the last 150 yr, however, there is no physical reason why these variables should share the same order of integration. But the fact that they do not share the same order of integration over this period means that scientists who make strong interpretations about the anthropogenic causes of recent global warming should be cautious. Our polynomial cointegration tests challenge their interpretation of the data.

Finally, all statistical tests are probabilistic and depend on the specification of the model. Type 1 error refers to the probability of rejecting a hypothesis when it is true (false positive) and type 2 error refers to the probability of not rejecting a hypothesis when it is false (false negative). In our case the type 1 error is very small because anthropogenic forcing is I (1) with very low probability, and temperature is polynomially cointegrated with very low probability. Also we have experimented with a variety of model specifications and estimation methodologies. This means, however, that as with all hypotheses, our rejection of AGW is not absolute; it might be a false positive, and we cannot rule out the possibility that recent global warming has an anthropogenic footprint. However, this possibility is very small, and is not statistically significant at conventional levels.

Full paper: http://www.earth-syst-dynam.net/3/173/2012/esd-3-173-2012.pdf




298 Comments
January 3, 2013 6:05 pm

Mervyn says:
January 3, 2013 at 5:22 pm
concentrate on natural variability and theories such as Henrik Svensmark’s cloud theory, which is backed by observational data
Actually it is not: http://www.leif.org/EOS/swsc120049-GCR-Clouds.pdf

Adam
January 3, 2013 6:12 pm

The paper is from Israel. So following the logic of the alarmists anybody who disagrees with it is a Holocaust denying anti Semite who loves Hitler.

willb
January 3, 2013 6:37 pm

DeWitt Payne says:
January 3, 2013 at 4:47 pm
“And then there’s Ferdinand Englebeen’s page:”
I don’t know what Ferdinand Englebeen is smoking, but the Mass Balance argument for attributing the increase in atmospheric carbon dioxide to anthropogenic causes is anything but overwhelming. The argument seems to be based on the premise that the natural CO2 fluxes into and out of the atmosphere remain unchanged regardless of atmospheric CO2 concentration. IMHO this is not particularly good logic. It also violates Le Chatelier’s principle.
According to the carbon cycle theory, there are natural carbon fluxes into and out of the atmosphere that are on-going and continuous. The argument for Mass Balance goes something like this: When humans add X amount of CO2 to the atmosphere in any given year, these natural fluxes adjust themselves such that X/2 of the added amount is removed and X/2 remains (forever, or at least for a very long time). Because the amount of additional CO2 is less than the amount added by humans, Mass Balance says the increase must be due solely to anthropogenic CO2.
Suppose in the following year there is no anthropogenic CO2 added to the atmosphere. What happens to the X/2 quantity of anthropogenic CO2 from the previous year that is still in the atmosphere? According to Mass Balance, this added CO2 remains as a permanent increase to the atmospheric CO2 concentration. What happens to the natural fluxes? They presumably stay in balance, as they were in the year before the anthropogenic CO2 addition. Mass Balance therefore seems to be saying the fluxes into and out of the atmosphere will remain the same as they were two years ago, before anthropogenic CO2 was added. This despite the fact that the atmospheric CO2 concentration has increased.
So where is the equilibrium shift that, according to Le Chatelier’s principle, counteracts this increase in concentration? Or does Le Chatelier’s principle not apply in this case?

John Mason
January 3, 2013 6:43 pm

The obvious gets a paper. Of course there ‘might’ have been a ‘temporary’ effect when the CO2 rise and the temp rise happened to correspond.
Any common-sense observer has seen that our rise in temps in the later part of the 20th century continues a non-remarkable trend since the end of the Little Ice Age.
I expect many more papers like this to give some face-saving backing away from the prior positions of dangerous AGW. I still find it silly, this couching of even a paper like this, saying this doesn’t mean there isn’t a temporary AGW effect, rather than just saying the models have not been rigorously tested with statistics.

David Jay
January 3, 2013 7:24 pm

Wait, can guys from Israel be “deniers”?
I get so confused…

Rob Ricket
January 3, 2013 7:39 pm

Friends, I’m just a layman with a limited understanding of advanced statistics, but it seems that the biggest bone of contention lies with the claim that carbon forcing has a limited shelf life. Does this not essentially affirm the warmist position regarding the warming effect of GHGs?
If the findings are valid, then forcing will continue for forty years after GHGs stabilize. Alternatively, how can it be claimed on one hand that forcing occurs for a limited period, but does not cointegrate with global temp?
If the statistical methodology is correct, then the paper essentially proves that one of the data sets is inaccurate. Obviously, the lowest-hanging fruit is Mann’s proxy data.
Red, I’ll take Michael Mann for $250.

Henry Clark
January 3, 2013 7:54 pm

Figure 1 uses untrue temperature data, including falsely depicting 2007 as 0.6 degrees Celsius warmer than 1980, when the actual figure from satellite data was not more than 0.3 degrees warmer: not more than half as much warming over that period. Partially correcting the graph improves the relationship of temperature with solar activity, turning the graph of figure 1 into http://s9.postimage.org/yrkytofyn/fixedplotb.jpg
One of the most common fallacies in much of what gets called science these days is style over substance. Like GIGO, superficially formal writing and numbers can be only misleading illusions, falsely impressing viewers yet irrelevant when the basic data and assumptions are off. What happens is a little like the story of the Emperor’s New Clothes: Most people fear seeming unsophisticated and hesitate to criticize such.
However, one of the most important things I ever learned (in another context) was not just to calculate but to state my assumptions before calculation, recognizing that the internal correctness of the math itself was simply irrelevant unless the starting assumptions were correct.
Implicit unstated assumptions in this paper include (incorrectly) treating Hansen’s GISS as a trustworthy temperature source. Actually such is utterly compromised.*
* (A simple example is http://www.giss.nasa.gov/research/briefs/hansen_07/fig1x.gif versus http://data.giss.nasa.gov/gistemp/graphs_v3/Fig.D.gif , where the former shows the 5-year mean of U.S. temperature in the high point of the 1980s was 0.4 degrees Celsius cooler than in the 1930s but the latter is fudged to make the same less than 0.1 degrees Celsius apart).
Get a statistics expert to analyze the paper in a conventional manner, and the illusion of a sophisticated criticism can be increased. But unless they are intelligent enough and sufficiently far from typical naivety (or bias) to criticize starting assumptions and input data rather than treating them as a given, such would be inferior to even my quick casual comments here. (Peer review can be junk due to narrow-minded analysis; an example in another context is how a paper was published and applauded for claiming a 40% decline in phytoplankton over the past several decades, when so much as a look at fish catches, let alone contradictions from other plankton measurements, would show such to be BS, a bit like Mann’s hockey stick was not properly flagged for blatant contradiction to about everything of relevance published before the era of politicized science).
With that said, despite an incorrect sun versus temperature depiction, this paper happens to be correct on lack of correlation of temperature with CO2, though that can be seen in other ways more blatantly, like http://wattsupwiththat.com/2012/04/11/does-co2-correlate-with-temperature-history-a-look-at-multiple-timescales-in-the-context-of-the-shakun-et-al-paper/
The paper may be helpful in a way. Much of the CAGW movement’s more naive follower population is comprised of people who utterly fall for superficial appeals to authority, so perhaps the contradiction with CAGW claims could help disrupt their mindsets. But it is style over substance, especially in the context of how multiple climate forcings combine (especially solar / GCR variation like http://s13.postimage.org/ka0rmuwgn/gcrclouds.gif and http://s10.postimage.org/l9gokvp09/composite.jpg though with El Nino echos of past ocean heat back to the atmosphere).

January 3, 2013 7:58 pm

says:
January 3, 2013 at 10:50 am
________________
Nice analogy

DeWitt Payne
January 3, 2013 8:02 pm

willb,

The argument for Mass Balance goes something like this: When humans add X amount of CO2 to the atmosphere in any given year, these natural fluxes adjust themselves such that X/2 of the added amount is removed and X/2 remains (forever, or at least for a very long time). Because the amount of additional CO2 is less than the amount added by humans, Mass Balance says the increase must be due solely to anthropogenic CO2.

That’s not how it works. I suggest you read my article from The Air Vent to see how the Bern model works in practice. For example, the amount that remains in the atmosphere isn’t half. That’s the apparent value because anthropogenic emission is continually increasing. If human emissions were to cease instantly, the atmospheric CO2 concentration would decay to a level that would be the preindustrial level plus about 15% of the total amount emitted. The initial decay would be rapid, but it would take hundreds to thousands of years to reach a new steady state because the full mixing of the ocean takes that long. This graph is about what the CO2 level would have done if all human emissions ceased in 2005. The new steady state value would be about 320 ppmv. And you’re neglecting the isotope ratio and oxygen concentration data that is in agreement with the source of the increase being mostly fossil fuel combustion.

DeWitt Payne
January 3, 2013 8:05 pm

That should be 22% not 15% so about 335 ppmv, not 320 ppmv, at steady state.
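For readers unfamiliar with the Bern-type impulse response being described, here is a minimal sketch using the commonly cited Bern TAR coefficients, in which a0 = 0.217 is the fraction of a CO2 pulse that effectively never decays on these timescales, consistent with the ~22% figure above. The parameters are illustrative, not DeWitt’s own fit.

```python
# Illustrative Bern-type impulse response (commonly cited Bern TAR coefficients).
import numpy as np

def airborne_fraction(t_years):
    """Fraction of an emitted CO2 pulse still airborne after t years."""
    a   = [0.217, 0.259, 0.338, 0.186]   # pulse weights (sum to 1)
    tau = [np.inf, 172.9, 18.51, 1.186]  # e-folding times in years
    t = np.asarray(t_years, dtype=float)
    return sum(ai * np.exp(-t / ti) for ai, ti in zip(a, tau))

print(airborne_fraction([1, 10, 100, 1000]))  # fast early decay, ~0.22 floor
```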

Rob Ricket
January 3, 2013 8:17 pm

Good catch Henry.

Henry Clark
January 3, 2013 8:24 pm

Rob Ricket: Thanks.

Bart
January 3, 2013 8:35 pm

DeWitt Payne says:
January 3, 2013 at 2:15 pm
“Atmospheric CO2 concentration is not a random variable. It is almost completely deterministic. There is measurement error and year to year variability, but those factors are small compared to the deterministic change. We know where it comes from and how much is emitted each year.”
And, we know that part (the anthropogenic input) has negligible impact on the overall concentration. The data show that atmospheric CO2 concentration is almost completely driven by surface temperatures. In this WoodForTrees plot, it is clear that CO2 is dominated by an affine-in-temperature differential equation of the form
dCO2/dt = k*(T – To)
where “k” is a coupling constant, “T” is the global temperature anomaly, and “To” is an equilibrium temperature. Here is another such comparison with GISTEMP. Any of the major temperature sets will generally do, as they are all more-or-less affinely related. Any time you have a continuous flow into and out of a system which can be modulated by a particular variable, you can get an integral relationship of this sort for the residual which gets left behind.
Here is an example of what you get when you integrate the relationship. Clearly, there is very little room for human influence on the measured concentration. It is simply inconsistent with the data. Some quibble that the linear term is an artifact of the choice of “To”, and that provides most of the match in the integrated output. But, there has to be some value of “To”, because the temperature anomaly “T” is measured relative to an arbitrary baseline. More importantly, it has no effect on the slope of the CO2 rate of change, which matches the temperature slope quite well, when you choose “k” to match the variation, the peaks and valleys, between the time series. And, since the anthropogenic input rate itself has a slope, there is, again, no room for it to any level of significance.
Since differentiation necessarily imparts a 90 degree phase advance, it follows that coincidence between the peaks and valleys in the temperature and the rate of change of CO2 implies that CO2 lags temperature, and therefore the direction of causality is temperature-to-CO2. Or, one may consider that on a more elementary level, it would be absurd to argue that the temperature depends on the rate at which CO2 is changing, and not the overall level and, again, we conclude the direction of causality is temperature-to-CO2.
It also necessarily follows that the Earth’s mechanisms for sequestering CO2 have been grossly underestimated, the residence time conversely grossly overestimated, and the natural flows into the system grossly underestimated, as well. It is hardly surprising given that such estimates have been largely paper exercises without closed loop confirmation. This is what happens when you guess at an answer, and decide if it is right or not by taking a vote: Fiasco.
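For concreteness, the relation Bart proposes can be integrated numerically. In this sketch the values of k, To and the temperature series are placeholders, not fitted to any data.

```python
# Placeholder integration of the relation dCO2/dt = k*(T - To), annual steps.
import numpy as np

def integrate_co2(temps, k=2.0, To=-0.5, co2_0=315.0):
    """CO2[i+1] = CO2[i] + k*(T[i] - To); k in ppm/yr per deg C (illustrative)."""
    co2 = [co2_0]
    for T in temps:
        co2.append(co2[-1] + k * (T - To))
    return np.array(co2)

T = np.linspace(-0.3, 0.6, 50)  # hypothetical 50-yr anomaly series, deg C
print(integrate_co2(T)[-1])     # implied end-of-period CO2 level, ppm
```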

Mooloo
January 3, 2013 8:50 pm

If human emissions were to cease instantly, the atmospheric CO2 concentration would decay to a level that would be the preindustrial level plus about 15% of the total amount emitted. The initial decay would be rapid, but it would take hundreds to thousands of years to reach a new steady state because the full mixing of the ocean takes that long.
So the sinks get rid of most of it quickly, but “know” to leave the other 15% for thousands of years? That is beyond ridiculous.
If this method were correct, then each time a serious volcanic eruption produced a significant amount of surplus CO2 over normal, 15% of the excess would fail to be absorbed. Over time, then, CO2 would inexorably rise.
That “it would take hundreds to thousands of years to reach a new steady state” is a red herring. The excess would be absorbed into quicker sinks and only then slowly equilibrate with the slower ones. That one sink is not yet in equilibrium has no bearing on whether the other ones are mopping up any significant excess.
The only way the slowness of the oceans will have any effect is if the other sinks are full.

Bart
January 3, 2013 8:50 pm

DeWitt Payne says:
January 3, 2013 at 8:02 pm
These fallacious arguments have been hammered out numerous times on these boards, often with the participation of Ferdinand. The “mass balance” argument begs the question – it only holds if you a priori assume that the source of the rise observed in the 20th century is attributable to humans. Here is a repeat of a previous response many, many moons ago to others:
———————————————————-

Let
M = measured concentration
A = anthropogenic emissions
N = natural emissions
U = natural uptake
We know M = A + N – U. We measure M. We calculate A. From that, we know N-U, and we know that A is approximately twice M, so we know N-U is negative. As you say, it is a net sink.
But, that’s all we know. We do not know N or U individually.
The reservoirs expand in response to both natural and anthropogenic emissions. This is the nature of a DYNAMIC SYSTEM.
Thus, we can take U as composed of two terms:
UA = natural uptake of anthropogenic emissions
UN = natural uptake of natural emissions
So, we only know N-UA-UN. Suppose UA = A. Then M = N – UN, N is greater than UN, and the rise is entirely natural. Equality would never be precisely the case, but it depends on the sequestration time. If that time is arbitrarily small, then it is possible, to within an arbitrarily small deviation, to have UA = A. We simply do not know. As the sequestration time increases, anthropogenic emissions induce a greater share of the measured concentration. But, we do not know the sequestration time.
This is a DYNAMIC SYSTEM. It actively responds to changing inputs. You cannot do a static analysis on such a system and expect generally, or even usually, to get the right answer.

————————————————
This was written prior to my discovery (Allan MacRae, who posts here occasionally, has claim to discovering it first, I hasten to say) of the strong and compelling correlation between the rate of CO2 and temperatures. We do, in fact, know more, with this added bit of information. We know that UA is quite close to A, and we know that atmospheric CO2 concentration is largely temperature driven.
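The under-determination in the bookkeeping above is easy to see with invented numbers (all fluxes in GtC/yr): many different (N, U) pairs reproduce the same observed M and A.

```python
# Toy numbers for the identity M = A + N - U discussed above. Observing M and A
# pins down only the net natural flux N - U, not N and U separately.
A = 8.0      # anthropogenic emissions (calculated)
M = 4.0      # observed atmospheric increase
net = M - A  # N - U = -4.0: nature is a net sink
for N in (100.0, 150.0, 200.0):  # wildly different gross natural emissions
    U = N - net                  # uptake consistent with the same observations
    print(f"N = {N:.0f}, U = {U:.0f} -> M = {A + N - U:.0f}")  # always 4
```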

davidmhoffer
January 3, 2013 9:47 pm

Bart says:
January 3, 2013 at 8:35 pm
>>>>>>>>>>>>>>>>>>
Wow. Folks, anyone who skipped through Bart’s post because it was long and technical…. I highly recommend going back and looking at those graphs.

Climate Ace
January 3, 2013 10:58 pm

Adam
The paper is from Israel. So following the logic of the alarmists anybody who disagrees with it is a Holocaust denying anti Semite who loves Hitler.
You should be disgusted with yourself, belittling the Holocaust and anti-Semitism like you do. The Holocaust is not, repeat NOT, a climate argument toy.
I am disgusted that the moderator lets comments like that through.
[Reply: we moderate with a light touch here. Heavier moderation leads to censorship. You have the right to respond to comments you disagree with. — mod.]

January 3, 2013 11:11 pm

Dear aaron and Steveta_uk,
be sure that I’ve noticed that the word “fail” is pretty much the only English word that is being used in these contexts these days. It’s cultural, indeed. But I am still convinced that it affects the listeners’ and readers’ thinking because they inevitably create an emotional association of the possible results with “good” and “evil” or “success” and “failure”.
So yes, my comment was a recommendation to change the culture and favorite formulations in the English language… Incidentally, there are many contexts in which I think that exactly the opposite “emotional message” is appropriate and desirable. In those cases, I replace the word “fail” by “refuse”. For example, statistical tests refused to find a global warming smoking gun in this case. This sounds like someone wanted these tests to do a dirty job but these tests have some human rights and they just didn’t want to obey. 😉 They refused because there’s ultimately no empirically detectable CO2-caused global warming anywhere, after all.
Aside from “fail” and “refuse”, there also exist more neutral verbs, obviously.
All the best
Lubos

JazzyT
January 4, 2013 12:12 am

Bart says:
January 3, 2013 at 8:35 pm

And, we know that part (the anthropogenic input) has negligible impact on the overall concentration. The data show that atmospheric CO2 concentration is almost completely driven by surface temperatures. In this WoodForTrees plot, it is clear that CO2 is dominated by an affine-in-temperature differential equation of the form
dCO2/dt = k*(T – To)
where “k” is a coupling constant, “T” is the global temperature anomaly, and “To” is an equilibrium temperature. Here is another such comparison with GISTEMP.

The Keeling curve shows a steady rise with a roughly sinusoidal pattern superimposed on it. (It’s available in many places; here is one of them: http://scrippsco2.ucsd.edu/program_history/keeling_curve_lessons_3.html) This sinusoid has a period of one year and an amplitude of about 5 ppm. The standard interpretation for this is that there is a long-term rise in CO2 concentrations due to fossil fuel burning, and a series of short-term seasonal variations due to uptake of CO2 during spring and summer, and a release of CO2 from decaying plant matter in the fall and winter. It’s not surprising that these natural processes would be well correlated with temperature, both in terms of the seasons themselves, and also for warmer or cooler years. For these, CO2 should follow Northern Hemisphere temperature (there’s more land, and more plant life, in the NH) but with a 180 degree phase shift, and then perhaps some lag on top of that.

Since differentiation necessarily imparts a 90 degree phase advance, it follows that coincidence between the peaks and valleys in the temperature and the rate of change of CO2 implies that CO2 lags temperature, and therefore the direction of causality is temperature-to-CO2. Or, one may consider that on a more elementary level, it would be absurd to argue that the temperature depends on the rate at which CO2 is changing, and not the overall level and, again, we conclude the direction of causality is temperature-to-CO2.

The derivative of CO2 concentration will mostly follow the seasonal variations, but these are themselves averaged out by a 24-month running average in the graphs that Bart referenced. The 90-degree phase shift will show up (although for some higher frequencies, with period less than one year, there will be a 180 degree phase shift due to the 24-month average). However, these higher frequencies will actually show up as lower frequencies (see Nyquist-Shannon sampling theorem). A glance at the curve shows that these lower frequencies should not cause too much trouble, but it would be a good idea to check to be sure. As for the time domain, going from one point to the next in the series of the derivative of CO2 concentration represents moving the average, i.e., adding a data value a year in the future while dropping one a year in the past. Finally, the first graph uses Hadcrut4sh, which is a Southern hemisphere data set, and so adds a 180 degree phase shift in temperature, for comparison with the CO2 fluctuations, which are predominantly a Northern hemisphere phenomenon.
It’s not obvious what to make of the resulting comparisons, in terms of fluctuations, phase shifts, etc. What is obvious, however, is that the shorter-term fluctuations that dominate the derivative of CO2 concentration (fluctuations in terms of a few years) should show the effects of similar variations in natural processes. They should also show short-term variations in anthropogenic CO2, if there actually are any. It’s to be expected that natural processes, governed at least partly by temperature, would show up in the derivative of CO2. But anthropogenic CO2 would show up mostly as a steady value (an offset) in the derivative of CO2, if it’s the relatively constant rise that people seem to think that it is, and that the Keeling curve indicates.
DeWitt Payne says:
January 3, 2013 at 8:02 pm
“And you’re neglecting the isotope ratio and oxygen concentration data that is in agreement with the source of the increase being mostly fossil fuel combustion.”
A lot of people will have a very hard time taking seriously any discussion about atmospheric CO2 concentration if it neglects isotope concentrations. Since plants take up C-12 in preference to C-13, either deforestation or burning of fossil fuels (ancient plants) tends to put more C-12 into the atmosphere, along with less C-13. Measurements of carbon isotopes in atmospheric CO2 show a decreasing concentration of C-13, consistent with the notion that increased CO2 arises from these anthropogenic sources.

January 4, 2013 12:49 am

Yes but…
If a gas mixture that contains CO2 is irradiated by a source at, say, 15°C, it will absorb more or less energy in relation to the CO2 concentration. To enable the release of this absorbed energy to the environment (outer space), the source temperature must change, e.g. increase by approx 0.54°C if the CO2 concentration doubles. This is primary forcing, just physics.
As it is undisputed that CO2 concentration went up: where did the additional absorbed energy go if no global temperature change is correlated with it?
The paper is very interesting because it points to a need for other interpretations of climate change (or no change) than the monomaniac AGW theory.
As the authors write in their conclusion: This is not an argument about physics, but an argument about data interpretation.

DirkH
January 4, 2013 1:27 am

richard telford says:
January 3, 2013 at 4:47 pm
“I am glad I amused you, but how realistically the GCMs model climate is irrelevant for the type of analysis I am proposing. What is relevant is that there is a time series of global temperatures and a time series of CO2 and other forcings that generated this temperature series in the model. If Beenstock et al’s method cannot find the relationship between CO2 and temperature in the model, then it cannot be trusted to find the relationship in the real world.”
Go ahead, show that this guy was wrong.
http://en.wikipedia.org/wiki/Granger_causality

LazyTeenager
January 4, 2013 2:25 am

I don’t understand this paper, but my instincts are saying it’s fiddling with statistics that are divorced from the physics of the system.
Might be illuminating to build a fake black box model that includes some degree of causality, originating from multiple sources and mushed up with random variation and multiple response time scales. Then apply this same kind of analysis to see if it correctly identifies the underlying multiple causes. If it can’t the methodology is broken.

Steveta_uk
January 4, 2013 4:44 am

Lubos, I love the idea of a test refusing to produce the required result.
It’s similar to the use of the word “but” in logical statements. “A AND NOT B” gives exactly the same result as “A BUT NOT B”. But somehow using “BUT” implies a level of disappointment. Like “Christmas but no presents.”

Alan D McIntire
January 4, 2013 5:38 am

“Nevertheless, there seems to be a temporary anthropogenic effect. If the effect is temporary rather than permanent, a doubling, say, of carbon emissions would have no long-run effect on Earth’s temperature, but it would increase it temporarily for some decades.”
“Indeed, the increase in temperature during 1975–1995 and its subsequent stability are in our view related in this way to the acceleration in carbon emissions during the second half of the 20th century.”
I suspect that this temporary “jump” is due to measurement bias. We measure temperatures where people LIVE and are producing energy, not in the uninhabited boondocks. When economic activity increases, we use more energy, which ultimately winds up as “waste” heat. That temporary jump is a result of measuring the increase in local waste heat produced. No increase in economic activity implies no increase in waste heat, therefore no increase in temperatures regardless of what CO2 does.

richardscourtney
January 4, 2013 5:52 am

DeWitt Payne:
My post at January 3, 2013 at 2:48 pm asked you to justify your silly assertion that you knew the cause of recent rise in atmospheric CO2 concentration and it said

I don’t know if the cause of the recent rise in atmospheric CO2 concentration is entirely anthropogenic, or entirely natural, or a combination of anthropogenic and natural causes. But I want to know.
At present it is not possible to know the cause of the recent rise in atmospheric CO2 concentration, and people who think they “know” the cause are mistaken because at present the available data can be modeled as being entirely caused by each of a variety of causes both anthropogenic and natural.
(ref. Rorsch A, Courtney RS & Thoenes D, ‘The Interaction of Climate Change and the Carbon Dioxide Cycle’ E&E v16no2 (2005) ).

Your reply to me at January 3, 2013 at 4:32 pm says in total

E&E and you’re a co-author? Pull the other one.
As long as we’re self-referencing: http://noconsensus.wordpress.com/2010/03/04/where-has-all-the-carbon-gone/

So, you ignore peer reviewed work because I contributed to it, and you cite your blog post which is twaddle.
The isotope data shows a change in the correct direction for it to have been induced by the anthropogenic emission (there is a 50:50 chance that it would be in the correct direction) but it has the wrong magnitude by a factor of ~6 if it were induced by the anthropogenic emission. There is no reason to suppose that any of the isotope change was induced by the anthropogenic emission when most of it cannot have been.
The facts are that the recent rise in atmospheric CO2 concentration can be modeled in a variety of ways as having a purely natural or a purely anthropogenic cause.
Each of the models in our paper matches the available empirical data without use of any ‘fiddle-factor’ such as the ‘5-year smoothing’ the UN Intergovernmental Panel on Climate Change (IPCC) uses to get the Bern Model to agree with the empirical data.
So, if one of the six models of our paper is adopted then there is a 5:1 probability that the choice is wrong. And other models are probably also possible. And the six models each give a different indication of future atmospheric CO2 concentration for the same future anthropogenic emission of carbon dioxide. Three of our models assumed a purely anthropogenic cause of the recent rise in atmospheric CO2 concentration and the other three assumed a purely natural cause.
Data that fits all the possible causes is not evidence for the true cause. Data that only fits the true cause would be evidence of the true cause. But the findings in our paper demonstrate that there is no data that only fits either an anthropogenic or a natural cause of the recent rise in atmospheric CO2 concentration. Hence, the only factual statements that can be made on the true cause of the recent rise in atmospheric CO2 concentration are
(a) the recent rise in atmospheric CO2 concentration may have an anthropogenic cause, or a natural cause, or some combination of anthropogenic and natural causes,
but
(b) there is no evidence that the recent rise in atmospheric CO2 concentration has a mostly anthropogenic cause or a mostly natural cause.
Indeed, since you don’t want to read the paper, I will mention a volcanic possibility which the paper does not mention but disproves the certainty with which you delude yourself.
CO2 is in various compartments of the carbon cycle system, and it is exchanged between them. Almost all of the CO2 is in the deep oceans. Much is in the upper ocean surface layer. Much is in the biosphere. Some is in the atmosphere. etc..
The equilibrium state of the carbon cycle system defines the stable distribution of CO2 among the compartments of the system. And at any moment the system is adjusting towards that stable distribution. But the equilibrium state is not a constant: it varies at all time scales.
Any change to the equilibrium state of the carbon cycle system induces a change to the amount of CO2 in the atmosphere. Indeed, this is seen as the ‘seasonal variation’ in the Mauna Loa data. However, some of the mechanisms for exchange between the compartments have rate constants of years and decades. Hence, it takes decades for the system to adjust to an altered equilibrium state.
The observed increase of atmospheric CO2 over recent decades could be an effect of such a change to the equilibrium state. If so, then the cause of the change is not known.
One such unknown variable is volcanic emission of sulphur ions below the sea decades or centuries ago.
The thermohaline circulation carries ocean water through the deeps for centuries before those waters return to ocean surface. The water acquires sulphur ions as it passes undersea volcanoes and it carries that sulphur with it to the ocean surface layer decades or centuries later. The resulting change to sulphur in the ocean surface layer alters the pH of the layer.
An alteration of ocean surface layer pH alters the equilibrium concentration of atmospheric CO2.
A reduction in surface layer pH of only 0.1 (which is much too small to be detectable) would induce more than all of the change in atmospheric CO2 concentration (290 ppmv to ~400 ppmv) that has happened since before the industrial revolution.
I don’t know if this volcanic effect has happened, and I doubt that it has. But it demonstrates how changed equilibrium conditions could have had the observed recent effect on atmospheric CO2 concentration whether or not there was a change in temperature and whether or not the anthropogenic CO2 emission existed.
Simply, you are wrong. And it seems you are willfully wrong.
Richard