AGW Bombshell? A new paper shows statistical tests for global warming fail to find statistically significant anthropogenic forcing

From the journal Earth System Dynamics, billed as “An Interactive Open Access Journal of the European Geosciences Union,” comes this paper, which suggests that the posited AGW forcing effect simply isn’t statistically significant in the observations, but other natural forcings are.

“…We show that although these anthropogenic forcings share a common stochastic trend, this trend is empirically independent of the stochastic trend in temperature and solar irradiance. Therefore, greenhouse gas forcing, aerosols, solar irradiance and global temperature are not polynomially cointegrated. This implies that recent global warming is not statistically significantly related to anthropogenic forcing. On the other hand, we find that greenhouse gas forcing might have had a temporary effect on global temperature.”

This is a most interesting paper, and potentially a bombshell, because they have taken virtually all of the significant observational datasets (including GISS and BEST) along with solar irradiance from Lean and Rind, and CO2, CH4, N2O, aerosols, and even water vapor data and put them all to statistical tests (including Lucia’s favorite, the unit root test) against forcing equations. Amazingly, it seems that they have almost entirely ruled out anthropogenic forcing in the observational data, but allowing for the possibility they could be wrong, say:

“…our rejection of AGW is not absolute; it might be a false positive, and we cannot rule out the possibility that recent global warming has an anthropogenic footprint. However, this possibility is very small, and is not statistically significant at conventional levels.”

I expect folks like Tamino (aka Grant Foster) and other hotheaded statistics wonks will begin an attack on why their premise and tests are no good, but at the same time I look for other less biased stats folks to weigh in and see how well it holds up. My sense is that Beenstock et al. have done a pretty good job of ruling out ways they may have fooled themselves. My thanks to Andre Bijkerk and Joanna Ballard for bringing this paper to my attention on Facebook.

The abstract and excerpts from the paper, along with a link to the full PDF, follow.

Polynomial cointegration tests of anthropogenic impact on global warming

M. Beenstock (1), Y. Reingewertz (1), and N. Paldor (2)

(1) Department of Economics, the Hebrew University of Jerusalem, Mount Scopus Campus, Jerusalem, Israel

(2) Fredy and Nadine Herrmann Institute of Earth Sciences, the Hebrew University of Jerusalem, Edmond J. Safra Campus, Givat Ram, Jerusalem, Israel

Abstract.

We use statistical methods for nonstationary time series to test the anthropogenic interpretation of global warming (AGW), according to which an increase in atmospheric greenhouse gas concentrations raised global temperature in the 20th century. Specifically, the methodology of polynomial cointegration is used to test AGW since during the observation period (1880–2007) global temperature and solar irradiance are stationary in 1st differences whereas greenhouse gases and aerosol forcings are stationary in 2nd differences. We show that although these anthropogenic forcings share a common stochastic trend, this trend is empirically independent of the stochastic trend in temperature and solar irradiance. Therefore, greenhouse gas forcing, aerosols, solar irradiance and global temperature are not polynomially cointegrated. This implies that recent global warming is not statistically significantly related to anthropogenic forcing. On the other hand, we find that greenhouse gas forcing might have had a temporary effect on global temperature.

Introduction

Considering the complexity and variety of the processes that affect Earth’s climate, it is not surprising that a completely satisfactory and accepted account of all the changes that occurred in the last century (e.g. temperature changes in the vast area of the Tropics, the balance of CO2 input into the atmosphere, changes in aerosol concentration and size and changes in solar radiation) has yet to be reached (IPCC, AR4, 2007). Of particular interest to the present study are those processes involved in the greenhouse effect, whereby some of the longwave radiation emitted by Earth is re-absorbed by some of the molecules that make up the atmosphere, such as (in decreasing order of importance): water vapor, carbon dioxide, methane and nitrous oxide (IPCC, 2007). Even though the most important greenhouse gas is water vapor, the dynamics of its flux in and out of the atmosphere by evaporation, condensation and subsequent precipitation are not understood well enough to be explicitly and exactly quantified. While much of the scientific research into the causes of global warming has been carried out using calibrated general circulation models (GCMs), since 1997 a new branch of scientific inquiry has developed in which observations of climate change are tested statistically by the method of cointegration (Kaufmann and Stern, 1997, 2002; Stern and Kaufmann, 1999, 2000; Kaufmann et al., 2006a,b; Liu and Rodriguez, 2005; Mills, 2009). The method of cointegration, developed in the closing decades of the 20th century, is intended to test for the spurious regression phenomenon in non-stationary time series (Phillips, 1986; Engle and Granger, 1987). Non-stationarity arises when the sample moments of a time series (mean, variance, covariance) depend on time. Regression relationships are spurious when unrelated non-stationary time series appear to be significantly correlated because they happen to have time trends.
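The spurious-regression phenomenon is easy to reproduce numerically. The sketch below (illustrative code, not from the paper) draws pairs of completely independent random walks and counts how often they appear strongly correlated, compared with pairs of stationary white noise:

```python
import numpy as np

rng = np.random.default_rng(0)

def share_high_corr(make_series, n_pairs=500, n=150, cutoff=0.5):
    """Fraction of independent series pairs with |correlation| above cutoff."""
    hits = 0
    for _ in range(n_pairs):
        x, y = make_series(n), make_series(n)
        if abs(np.corrcoef(x, y)[0, 1]) > cutoff:
            hits += 1
    return hits / n_pairs

# Nonstationary series: random walks (cumulative sums of white noise).
walk = lambda n: np.cumsum(rng.standard_normal(n))
# Stationary series: plain white noise.
noise = lambda n: rng.standard_normal(n)

print("random walks:", share_high_corr(walk))   # a large fraction look "correlated"
print("white noise: ", share_high_corr(noise))  # almost none do
```

Nothing links the two walks in each pair, yet a substantial share of them show |r| > 0.5, which is exactly the trap that cointegration tests are designed to catch.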

The method of cointegration has been successful in detecting spurious relationships in economic time series data.

Indeed, cointegration has become the standard econometric tool for testing hypotheses with nonstationary data (Maddala, 2001; Greene, 2012). As noted, climatologists too have used cointegration to analyse nonstationary climate data (Kaufmann and Stern, 1997). Cointegration theory is based on the simple notion that time series might be highly correlated even though there is no causal relation between them. For the relation to be genuine, the residuals from a regression between these time series must be stationary, in which case the time series are “cointegrated”. Since stationary residuals mean-revert to zero, there must be a genuine long-term relationship between the series, which move together over time because they share a common trend. If, on the other hand, the residuals are nonstationary, the residuals do not mean-revert to zero, the time series do not share a common trend, and the relationship between them is spurious because the time series are not cointegrated. Indeed, the R2 from a regression between nonstationary time series may be as high as 0.99, yet the relation may nonetheless be spurious.
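The Engle–Granger logic described above can be sketched with toy series (an illustration, not the authors’ procedure; the Dickey–Fuller-style statistic below is deliberately crude):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
x = np.cumsum(rng.standard_normal(n))        # an I(1) random-walk "driver"

y_coint = 2.0 * x + rng.standard_normal(n)   # shares x's stochastic trend: cointegrated
y_spur  = np.cumsum(rng.standard_normal(n))  # independent walk: any fit is spurious

def resid(y, x):
    """Residuals from the OLS regression of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

def df_coef(e):
    """Crude Dickey-Fuller coefficient: regress the change in e on its lagged level.
    Near -1 for mean-reverting residuals, near 0 for a random walk."""
    de, lag = np.diff(e), e[:-1]
    return np.dot(lag, de) / np.dot(lag, lag)

print("cointegrated pair:", round(df_coef(resid(y_coint, x)), 3))  # strongly negative
print("spurious pair:    ", round(df_coef(resid(y_spur, x)), 3))   # near zero
```

Stationary, mean-reverting residuals pull the coefficient toward -1; nonstationary residuals leave it near zero, flagging the relationship as spurious.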

The method of cointegration originally developed by Engle and Granger (1987) assumes that the nonstationary data are stationary in changes, or first differences. For example, temperature might be increasing over time, and is therefore nonstationary, but the change in temperature is stationary. In the 1990s cointegration theory was extended to the case in which some of the variables have to be differenced twice (i.e. the time series of the change in the change) before they become stationary. This extension is commonly known as polynomial cointegration. Previous analyses of the non-stationarity of climatic time series (e.g. Kaufmann and Stern, 2002; Kaufmann et al., 2006a; Stern and Kaufmann, 1999) have demonstrated that global temperature and solar irradiance are stationary in first differences, whereas greenhouse gases (GHG, hereafter) are stationary in second differences. In the present study we apply the method of polynomial cointegration to test the hypothesis that global warming since 1850 was caused by various anthropogenic phenomena. Our results show that GHG forcings and other anthropogenic phenomena do not polynomially cointegrate with global temperature and solar irradiance. Therefore, despite the high correlation between anthropogenic forcings, solar irradiance and global temperature, AGW is not statistically significant. The perceived statistical relation between temperature and anthropogenic forcings is therefore a spurious regression phenomenon.
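The I(1) versus I(2) distinction the authors rely on can be illustrated with synthetic series (a toy sketch; the “trend test” here is just an OLS slope, not the paper’s battery of unit root tests). A noisy linear trend is trend-free after one difference, while a noisy quadratic trend, mimicking the described behaviour of GHG forcings, still trends after one difference and needs two:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(158)  # annual steps, 1850-2007, as in the paper's sample

linear    = 0.05 * t + rng.standard_normal(158)      # like temperature/irradiance
quadratic = 0.01 * t**2 + rng.standard_normal(158)   # like GHG forcing

def trend_slope(series):
    """OLS slope of the series against time; near 0 means trend-free."""
    return np.polyfit(np.arange(len(series)), series, 1)[0]

for name, s in [("linear", linear), ("quadratic", quadratic)]:
    print(name,
          "1st-diff slope:", round(trend_slope(np.diff(s)), 4),
          "2nd-diff slope:", round(trend_slope(np.diff(s, n=2)), 4))
```

The first difference of the quadratic series retains a clear upward trend, while its second difference does not — the informal signature of an I(2)-style series.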

Data and methods

We use annual data (1850–2007) on greenhouse gas (CO2, CH4 and N2O) concentrations and forcings, as well as on forcings for aerosols (black carbon, reflective tropospheric aerosols). We also use annual data (1880–2007) on solar irradiance, water vapor (1880–2003) and global mean temperature (sea and land combined 1880–2007). These widely used secondary data are obtained from NASA-GISS (Hansen et al., 1999, 2001). Details of these data may be found in the Data Appendix.

We carry out robustness checks using new reconstructions for solar irradiance from Lean and Rind (2009), for globally averaged temperature from Mann et al. (2008) and for global land surface temperature (1850–2007) from the Berkeley Earth Surface Temperature Study.

Key time series are shown in Fig. 1 where panels a and b show the radiative forcings for three major GHGs, while panel c shows solar irradiance and global temperature. All these variables display positive time trends. However, the time trends in panels a and b appear more nonlinear than their counterparts in panel c. Indeed, statistical tests reported below reveal that the trends in panel c are linear, whereas the trends in panels a and b are quadratic. The trend in solar irradiance has weakened since 1970, while the trend in temperature weakened temporarily in the 1950s and 1960s.

The statistical analysis of nonstationary time series, such as those in Fig. 1, has two natural stages. The first consists of unit root tests in which the data are classified by their order and type of nonstationarity. If the data are nonstationary, sample moments such as means, variances and covariances depend upon when the data are sampled, in which event least squares and maximum likelihood estimates of parameters may be spurious. In the second stage, these nonstationary data are used to test hypotheses using the method of cointegration, which is designed to distinguish between genuine and spurious relationships between time series. Since these methods may be unfamiliar to readers of Earth System Dynamics, we provide an overview of key concepts and tests.


Fig. 1. Time series of the changes that occurred in several variables that affect or represent climate changes during the 20th century. (a) Radiative forcings (rf, in units of W m−2) during 1880 to 2007 of CH4 (methane) and CO2 (carbon dioxide); (b) same period as in panel (a) but for nitrous oxide (N2O); (c) solar irradiance (left ordinate, units of W m−2) and annual global temperature (right ordinate, units of ◦C) during 1880–2003.

[…]

3 Results

3.1 Time series properties of the data

Informal inspection of Fig. 1 suggests that the time series properties of greenhouse gas forcings (panels a and b) are visibly different from those for temperature and solar irradiance (panel c). In panels a and b there is evidence of acceleration, whereas in panel c the two time series appear more stable. In Fig. 2 we plot rfCO2 in first differences, which confirms by eye that rfCO2 is not I(1), particularly since 1940. Similar figures are available for other greenhouse gas forcings. In this section we establish the important result that whereas the first differences of temperature and solar irradiance are trend free, the first differences of the greenhouse gas forcings are not. This is consistent with our central claim that anthropogenic forcings are I(2), whereas temperature and solar irradiance are I(1).


Fig. 2. Time series of the first differences of rfCO2.

What we see informally is borne out by the formal statistical tests for the variables in Table 1.

[Table 1: unit root test results (image not reproduced)]

Although the KPSS and DF-type statistics (ADF, PP and DF-GLS) test different null hypotheses, we successively increase d until they concur. If they concur when d = 1, we classify the variable as I(1), or difference stationary. For the anthropogenic variables concurrence occurs when d = 2. Since the DF-type tests and the KPSS tests reject that these variables are I(1) but do not reject that they are I(2), there is no dilemma here. Matters might have been different if according to the DF-type tests these anthropogenic variables were I(1) but according to KPSS they were I(2).

The required number of augmentations for ADF is moot. The frequently used Schwert criterion uses a standard formula based solely on the number of observations, which is inefficient because it may waste degrees of freedom. As mentioned, we prefer instead to augment the ADF test until its residuals become serially independent according to a Lagrange multiplier (LM) test. In most cases 4 augmentations are needed; however, in the cases of rfCO2, rfN2O and stratospheric H2O, 8 augmentations are needed. In any case, the classification is robust with respect to augmentations in the range of 2–10. Therefore, we do not think that the number of augmentations affects our classifications. The KPSS and Phillips–Perron statistics use the standard nonparametric Newey–West criteria for calculating robust standard errors. In practice we find that these statistics use about 4 autocorrelations, which is similar to our LM procedure for determining the number of augmentations for ADF.
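For readers unfamiliar with the mechanics: an ADF regression explains the change in a series by its lagged level plus k lagged changes (the “augmentations”), and the t-statistic on the lagged level is compared with Dickey–Fuller critical values (roughly -2.9 at the 5 % level with a constant). A bare-bones sketch follows (illustrative only; real work should use a vetted implementation such as statsmodels’ adfuller, and the critical value quoted is approximate):

```python
import numpy as np

def adf_tstat(y, k=4):
    """t-statistic on b in the ADF regression
    dy_t = a + b*y_{t-1} + c_1*dy_{t-1} + ... + c_k*dy_{t-k} + e_t.
    Strongly negative values reject a unit root."""
    dy = np.diff(y)
    X, z = [], []
    for t in range(k, len(dy)):
        X.append([1.0, y[t]] + list(dy[t-k:t][::-1]))  # constant, level, k lagged diffs
        z.append(dy[t])
    X, z = np.asarray(X), np.asarray(z)
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ beta
    sigma2 = resid @ resid / (len(z) - X.shape[1])
    se_b = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se_b

rng = np.random.default_rng(3)
e = rng.standard_normal(300)
walk = np.cumsum(e)               # unit root: expect no rejection
ar = np.zeros(300)
for i in range(1, 300):
    ar[i] = 0.5 * ar[i-1] + e[i]  # stationary AR(1): expect strong rejection

print("random walk:  ", round(adf_tstat(walk), 2))
print("stationary AR:", round(adf_tstat(ar), 2))
```

The stationary series produces a t-statistic far below the critical value while the random walk does not, which is the classification logic the paper applies at successive orders of differencing.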

[…]

Discussion

We have shown that anthropogenic forcings do not polynomially cointegrate with global temperature and solar irradiance. Therefore, data for 1880–2007 do not support the anthropogenic interpretation of global warming during this period. This key result is shown graphically in Fig. 3 where the vertical axis measures the component of global temperature that is unexplained by solar irradiance according to our estimates. In panel a the horizontal axis measures the anomaly in the anthropogenic trend when the latter is derived from forcings of carbon dioxide, methane and nitrous oxide. In panel b the horizontal axis measures this anthropogenic anomaly when apart from these greenhouse gas forcings, it includes tropospheric aerosols and black carbon. Panels a and b both show that there is no relationship between temperature and the anthropogenic anomaly, once the warming effect of solar irradiance is taken into consideration.

However, we find that greenhouse gas forcings might have a temporary effect on global temperature. This result is illustrated in panel c of Fig. 3 in which the horizontal axis measures the change in the estimated anthropogenic trend. Panel c clearly shows that there is a positive relationship between temperature and the change in the anthropogenic anomaly once the warming effect of solar irradiance is taken into consideration.


Fig. 3. Statistical association (scatter plots) between the anthropogenic anomaly (abscissa) and the net temperature effect, i.e. temperature minus the estimated solar irradiance effect (ordinate). Panels (a)–(c) display the results of models 1 and 2 in Table 3 and of Eq. (13), respectively. The anthropogenic trend anomaly sums the weighted radiative forcings of the greenhouse gases (CO2, CH4 and N2O). The net temperature effect (as defined above) is calculated by subtracting from the observed temperature in a specific year the product of the solar irradiance in that year and the coefficient obtained from the regression of the particular model equation: 1.763 in the case of model 1 (a); 1.806 in the case of model 2 (b); and 1.508 in the case of Eq. (13) (c).

Currently, most of the evidence supporting AGW theory is obtained by calibration methods and the simulation of GCMs. Calibration shows, e.g. Crowley (2000), that to explain the increase in temperature in the 20th century, and especially since 1970, it is necessary to specify a sufficiently strong anthropogenic effect. However, calibrators do not report tests for the statistical significance of this effect, nor do they check whether the effect is spurious. The implication of our results is that the permanent effect is not statistically significant. Nevertheless, there seems to be a temporary anthropogenic effect. If the effect is temporary rather than permanent, a doubling, say, of carbon emissions would have no long-run effect on Earth’s temperature, but it would increase it temporarily for some decades. Indeed, the increase in temperature during 1975–1995 and its subsequent stability are in our view related in this way to the acceleration in carbon emissions during the second half of the 20th century (Fig. 2). The policy implications of this result are major since an effect which is temporary is less serious than one that is permanent.

The fact that since the mid-19th century Earth’s temperature has been unrelated to anthropogenic forcings does not contravene the laws of thermodynamics, greenhouse theory, or any other physical theory. Given the complexity of Earth’s climate, and our incomplete understanding of it, it is difficult to attribute the main cause of global warming in the 20th century to carbon emissions and other anthropogenic phenomena. This is not an argument about physics, but an argument about data interpretation. Do climate developments during the relatively recent past justify the interpretation that global warming was induced by anthropogenics during this period? Had Earth’s temperature not increased in the 20th century despite the increase in anthropogenic forcings (as was the case during the second half of the 19th century), this would not have constituted evidence against greenhouse theory. However, our results challenge the data interpretation that since 1880 global warming was caused by anthropogenic phenomena.

Nor does the fact that during this period anthropogenic forcings are I(2), i.e. stationary in second differences, whereas Earth’s temperature and solar irradiance are I(1), i.e. stationary in first differences, contravene any physical theory. For physical reasons it might be expected that over the millennia these variables should share the same order of integration; they should all be I(1) or all I(2), otherwise there would be persistent energy imbalance. However, during the last 150 yr there is no physical reason why these variables should share the same order of integration. The fact that they do not share the same order of integration over this period nevertheless means that scientists who make strong interpretations about the anthropogenic causes of recent global warming should be cautious. Our polynomial cointegration tests challenge their interpretation of the data.

Finally, all statistical tests are probabilistic and depend on the specification of the model. Type 1 error refers to the probability of rejecting a hypothesis when it is true (false positive) and type 2 error refers to the probability of not rejecting a hypothesis when it is false (false negative). In our case the type 1 error is very small because anthropogenic forcing is I (1) with very low probability, and temperature is polynomially cointegrated with very low probability. Also we have experimented with a variety of model specifications and estimation methodologies. This means, however, that as with all hypotheses, our rejection of AGW is not absolute; it might be a false positive, and we cannot rule out the possibility that recent global warming has an anthropogenic footprint. However, this possibility is very small, and is not statistically significant at conventional levels.

Full paper: http://www.earth-syst-dynam.net/3/173/2012/esd-3-173-2012.pdf

Data Appendix.


298 Comments
Billy Ruff'n
January 3, 2013 12:31 pm

This should give Steve McIntyre something to do when he gets home from Asia. It will be interesting to see what his take is on the statistical methods employed.

Bob
January 3, 2013 12:32 pm

Anthony, do what you can to keep this paper from Mosher – he might just wax ineloquently.

January 3, 2013 12:32 pm

One of many problems I have with reconstructions of things like irradiance and temperature is that they never seem to come with an error analysis. The curves above (wherever they are from) are no exception — we see a simple jiggy line that is supposed to be “temperature”, or “irradiance”, or “CO_2 level” back to 1880, with no error bars at all.
Yet those error bars have to exist, and have to be rather large for anything inferred by means of proxies or measurements with crude instrumentation at the beginning of the 20th century or earlier.
It would actually be really lovely to have honest error bars — with any reasonable interpretation of error — both in the figures and, one would expect, in use in the statistical analysis of the timeseries. Otherwise one has a compounding of assumptions and errors.
Lacking both error bars and a single set of solar data that the entire community endorses within those error bars (so that the error bars reflect among other things disagreement within that community) it is going to be impossible to create a statistical study of solar state and global climate that means anything at all. This (lack of a) model then becomes an important Bayesian prior in further statistical analysis of possible causes, because the amount of warming one attributes to CO_2 clearly has to depend to some extent on how much you attribute to insolation, so if the latter is uncertain the former is even more uncertain (and vice versa). Errors tend to grow like SE = \sqrt(SE_1^2 + SE_2^2) after all, and that’s for simple linear models with favorable assumptions — it can be much worse.
Leif, is there a set of solar data that everybody in the solar community endorses (or at least, has their disagreements duly entered in the form of additional uncertainty in the claims)? Or is it really just a matter of flipping coins and grabbing a paper at random? Picking the paper (either way) that produces the conclusions you want to assert?
It would be very useful to see this computation redone not for the 1880–2012 data, but only for the e.g. 1979–2012 data that is moderately reliable. The idea is actually a good one and I’ve looked at it myself — the problem can be reduced to looking at global temperature and the e.g. Mauna Loa CO_2 curve and comparing them — there are obviously lots of ways of mapping the one monotonic function (CO_2) into a linear (over this timescale) model for temperature plus noise, but the noise is then many times larger than the trend, the fit interval is short, and one ignores completely the dynamics of the noise. It is clear at a glance, however, that there is no short-run correlation between changes in temperature and CO_2 level — only the weak trend over the entire timeseries, which puts 2012 as the 9th warmest year in 33, remarkably close to both mean and median. I don’t need to do a Student’s t-test to measure p to tell you that p for the null hypothesis is not going to be reassuringly low over the entire interval.
They should also attempt a nonstationary timeseries analysis, treating the temperature like a Hurst-Kolmogorov variable with a possibly directed stochastic noise term with respect to the hypothesized drivers.
rgb

January 3, 2013 12:38 pm

Very thorough, a brilliant paper.

Resourceguy
January 3, 2013 1:14 pm

It’s about time someone stepped up to do this kind of unit root test. There are thousands of such studies published all the time on other topics.

Stephen Richards
January 3, 2013 1:22 pm

richard telford says:
January 3, 2013 at 9:47 am
It is a great shame that they did not test if their methods had any statistical power. This would have been easy to do using GCM output and would have greatly strengthened their paper.

You clown !!!

Resourceguy
January 3, 2013 1:34 pm

To all the negative commentators above, I will remind you that ALL of the top research departments of the world’s central banks use this methodology and result format.

January 3, 2013 1:48 pm

rgbatduke says:
January 3, 2013 at 12:32 pm
Leif, is there a set of solar data that everybody in the solar community endorses (or at least, has their disagreements duly entered in in the form of additional uncertainty in the claims)? Or is it really just a matter of flipping coins and grabbing a paper at random? Picking the paper (either way) that produces the conclusions you want to assert?
There is no such set(s). I am leading two workshops [involving the foremost researchers in the solar community] to produce such sets:
1) http://www.leif.org/research/Svalgaard_ISSI_Proposal_Base.pdf
2) http://ssnworkshop.wikia.com/wiki/Home
Our work is not finished yet, although we do have some preliminary findings [basically what I have been talking about for some time here on WUWT].
Amazingly, there is some resistance among our ‘users’ to our attempt to create a vetted, agreed upon data set. It seems to be most convenient to have several [and wrong] sets to pick from to support everyone’s pet conclusions.

January 3, 2013 1:48 pm

This is a most interesting paper, and potentially a bombshell, because they have taken virtually all of the significant observational datasets (including GISS and BEST) along with solar irradiance from Lean and Rind, and CO2, CH4, N2O, aerosols, and even water vapor data and put them all to statistical tests (including Lucia’s favorite, the unit root test) against forcing equations. Amazingly, it seems that they have almost entirely ruled out anthropogenic forcing in the observational data, but allowing for the possibility they could be wrong, say:
“…our rejection of AGW is not absolute; it might be a false positive, and we cannot rule out the possibility that recent global warming has an anthropogenic footprint. However, this possibility is very small, and is not statistically significant at conventional levels.”

==============================================================================
I’ll note that I’m a layman here, but I wonder what they would have concluded if Watts et al. had been included?

EternalOptimist
January 3, 2013 1:51 pm

We are pretty sure. But we are not 100% sure.
And if you want to prove us wrong – here’s how.
How long have I waited to hear this. And, guys, I don’t even care if you are wrong; just that admission that you are not walking on water is fantastic. Reality at last.

Other_Andy
January 3, 2013 2:00 pm

Resourceguy says:
“To all the negative commentators above, I will remind you that ALL of the top research departments of the world’s central banks use this methodology and result format.”
With the ‘banking collapse’ a few years ago and the state of the world economy at the moment, that should at least give you pause for thought.

DeWitt Payne
January 3, 2013 2:15 pm

Resourceguy,

I will remind you that ALL of the top research departments of the world’s central banks use this methodology and result format.

And looking at the global economy, I would say it’s working really well. /sarc
Just one fundamental flaw of many. Atmospheric CO2 concentration is not a random variable. It is almost completely deterministic. There is measurement error and year to year variability, but those factors are small compared to the deterministic change. We know where it comes from and how much is emitted each year. Applying a unit root test to this data without removing the deterministic trend is therefore invalid.

Climate Ace
January 3, 2013 2:36 pm

Did they forget to polynomially cointegrate ocean heat?

richardscourtney
January 3, 2013 2:48 pm

DeWitt Payne:
At January 3, 2013 at 2:15 pm you assert

Atmospheric CO2 concentration is not a random variable. It is almost completely deterministic. There is measurement error and year to year variability, but those factors are small compared to the deterministic change. We know where it comes from and how much is emitted each year. Applying a unit root test to this data without removing the deterministic trend is therefore invalid.

You “know where it comes from”? Really? How?
I don’t know if the cause of the recent rise in atmospheric CO2 concentration is entirely anthropogenic, or entirely natural, or a combination of anthropogenic and natural causes. But I want to know.
At present it is not possible to know the cause of the recent rise in atmospheric CO2 concentration, and people who think they “know” the cause are mistaken because at present the available data can be modeled as being entirely caused by each of a variety of causes both anthropogenic and natural.
(ref. Rorsch A, Courtney RS & Thoenes D, ‘The Interaction of Climate Change and the Carbon Dioxide Cycle’ E&E v16no2 (2005) ).
The econometrics paper under discussion may or may not be found to contain many flaws (time will tell) but it displays a refreshing willingness to admit we don’t “know” anything about the climate issue with certainty.
Richard

DR
January 3, 2013 3:05 pm

SORCE Science Meeting, 18-19th September, 2012
http://lasp.colorado.edu/sorce/news/2012ScienceMeeting/docs/presentations/S2-01_Ineson_sorce2012.pdf

SIM measured a decline in ultraviolet from 2004-2007 that is a factor of 4 to 6 times larger than typical previous estimates

rollsthepaul
January 3, 2013 3:51 pm

Could all of this, have only achieved a firm grasp of the obvious?

Alan Millar
January 3, 2013 4:03 pm

Well, this is a good look at things from the statistical angle. The technique is robust but, as we all know, the period of reliable data is far too short to draw any conclusions.
It is not proof of anything, but it adds to the debate and does demonstrate the uncertainty that is present in climate science at the moment.
The result could have been predicted just by eyeballing the various data sets actually. There is clearly not a good correlation between CO2, temperature, aerosols and black carbon since 1880.
However, it is always useful to have this backed up by robust methodology.
Alan

DeWitt Payne
January 3, 2013 4:32 pm

richardscourtney,
E&E and you’re a co-author? Pull the other one.
As long as we’re self-referencing: http://noconsensus.wordpress.com/2010/03/04/where-has-all-the-carbon-gone/

January 3, 2013 4:34 pm

Anthony:
Andre Bijkerk and Joanna Ballard might be two of the best, but perhaps credit for bringing Beenstock et al. to WUWT-land should go to the following contributor to Tips and Notes:
Brian H says:
December 9, 2012 at 10:19 pm
h/t DirkH;
http://economics.huji.ac.il/facultye/beenstock/Nature_Paper091209.pdf
This means, crucially, that a doubling of greenhouse gas forcings does not permanently increase global temperature.
From there, it was easy to track down the published paper, also as noted soon thereafter in Tips and Notes.

January 3, 2013 4:47 pm

DirkH says:
January 3, 2013 at 12:06 pm
richard telford says:
January 3, 2013 at 9:47 am
“It is a great shame that they did not test if their methods had any statistical power. This would have been easy to do using GCM output and would have greatly strengthened their paper.”
Thanks, that made me laugh.
Richard, have the computer kiddies in the modeling departments learned how to model convective fronts in the meantime?
———————–
I am glad I amused you, but how realistically the GCMs model climate is irrelevant for the type of analysis I am proposing. What is relevant is that there is a time series of global temperatures and a time series of CO2 and other forcings that generated this temperature series in the model. If Beenstock et al.’s method cannot find the relationship between CO2 and temperature in the model, then it cannot be trusted when it fails to find the relationship in the real world.

DeWitt Payne
January 3, 2013 4:47 pm

And then there’s Ferdinand Engelbeen’s page: http://www.ferdinand-engelbeen.be/klimaat/co2_measurements.html#The_mass_balance
Also, The Carbon Cycle, T.M.L. Wigley and D.S. Schimel ed., Cambridge University Press, 2000.
The evidence, as opposed to speculative assertions, is overwhelming that humans have caused the atmospheric CO2 level to increase from the preindustrial level of 280 ppmv.

D Böehm
January 3, 2013 5:01 pm

DeWitt Payne,
True. Human activity has added [harmless, beneficial] CO2 to the atmosphere. It is still a very tiny trace gas, and it will never be a significant part of the atmosphere.
And yet, the planet has been up to 8ºC warmer repeatedly in the geologic past, without regard to CO2 levels. We are currently in a geologically cool period [top of the chart]. Ferdinand Engelbeen has also stated that the rise in CO2 is harmless. Thus, there is no need whatever to reduce CO2 to pre-industrial levels.
On net balance, more CO2 is better for the biosphere, and there is no verifiable, scientifically testable global harm as a result of the rise in CO2. Really, it’s all good.

Mervyn
January 3, 2013 5:22 pm

How many more such studies will it take before the United Nations instructs the IPCC to abandon its quest of proving anthropogenic global warming and to concentrate on natural variability and theories such as Henrik Svensmark’s cloud theory, which is backed by observational data and by experimentation?

January 3, 2013 6:01 pm

The evidence, as opposed to speculative assertions, is overwhelming that humans have caused the atmospheric CO2 level to increase from the preindustrial level of 280 ppmv.

I don’t think anyone is arguing that humans have or haven’t added CO2 to the atmosphere. The argument is over how sensitive the climate is to it. The IPCC climate sensitivity numbers are basically speculative and observations over time have shown them to be false. We get about 1C of warming for each doubling of CO2 in the atmosphere. Most of that warming from pre-industrial levels has already happened. CO2 impact is logarithmic and each ton of CO2 added to the atmosphere has LESS impact than the ton before it had. To get 1C of warming from today’s level, we would have to double atmospheric CO2 from today’s level. The “feedbacks” that the IPCC has speculated to exist haven’t turned up in real life. We are spending hundreds of billions and fleecing the populations of this planet based on predictions of a fairytale.
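The logarithmic point can be made concrete with the standard simplified forcing expression ΔF = 5.35 ln(C/C0) W/m² (Myhre et al., 1998). A minimal sketch of the arithmetic, where the ~395 ppm “current” concentration is an illustrative assumption for the date of this thread:

```python
import math

def co2_forcing(c, c0=280.0):
    """Simplified CO2 radiative forcing (Myhre et al. 1998):
    dF = 5.35 * ln(C/C0) in W/m^2, relative to preindustrial c0."""
    return 5.35 * math.log(c / c0)

per_doubling = co2_forcing(560.0)       # forcing of a full doubling, ~3.7 W/m^2
so_far = co2_forcing(395.0)             # forcing realized at ~395 ppm
fraction_realized = so_far / per_doubling
```

By this arithmetic, roughly half of the forcing of the first doubling (280 → 560 ppm) had already been realized at ~395 ppm, which is the sense in which each additional ton of CO2 has less radiative impact than the one before it.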
Wake me up when we get to 1600 ppm of CO2 … but we will never make it that far. China currently has 30 nuclear power plants under construction in various phases of completion. US CO2 emissions are falling, and China’s emissions will begin to fall in about another 10 years as more of their nuclear generation comes online. Excess CO2 above Earth’s equilibrium amount begins to come out of the atmosphere as soon as emissions stop. I doubt we will even get to 800 ppm, let alone 1600. But more importantly, nobody has shown a good reason to even reduce CO2 emissions. Why should we? Nobody has shown what PORTION of warming since the end of the LIA is due to CO2 to my satisfaction. They are simply running around talking about what “could” happen in the future based purely on climate models programmed with speculative feedbacks to CO2 increases.
It’s theft. It’s a racket. It is robbery.

January 3, 2013 6:04 pm

Mervyn says:
January 3, 2013 at 5:22 pm
concentrate on natural variability and theories such as Henrik Svensmark’s cloud theory, which is backed by observational data
Actually it is not: http://www.leif.org/EOS/swsc120049-GCR-Clouds.pdf
“it is clear that there is no robust evidence of a widespread link between the cosmic ray flux and clouds” and
“In this paper we have examined the evidence of a CR-cloud relationship from direct and indirect observations of cloud recorded from satellite- and ground-based measurement techniques. Overall, the current satellite cloud datasets do not provide evidence supporting the existence of a solar-cloud link. Although some positive evidence exists in ground-based studies, these are all from highly localized data and are suggested to operate via global electric circuit based mechanisms: the effects of which may depend on numerous factors and vary greatly from one location to the next. Consequently, it is unclear what the overall implications of these localized findings are. By virtue of a lack of strong evidence detected from the numerous satellite- and ground-based studies, it is clear that if a solar-cloud link exists the effects are likely to be low amplitude and could not have contributed appreciably to recent [anthropogenic] climate changes.”