New paper on mathematical analysis of GHG

Polynomial Cointegration Tests of the Anthropogenic Theory of Global Warming

Michael Beenstock and Yaniv Reingewertz – Department of Economics, The Hebrew University, Mount Scopus, Israel.

Abstract:

We use statistical methods designed for nonstationary time series to test the anthropogenic theory of global warming (AGW). This theory predicts that an increase in atmospheric greenhouse gas concentrations increases global temperature permanently. Specifically, the methodology of polynomial cointegration is used to test AGW when global temperature and solar irradiance are stationary in 1st differences, whereas greenhouse gas forcings (CO2, CH4 and N2O) are stationary in 2nd differences.

We show that although greenhouse gas forcings share a common stochastic trend, this trend is empirically independent of the stochastic trend in temperature and solar irradiance. Therefore, greenhouse gas forcings, global temperature and solar irradiance are not polynomially cointegrated, and AGW is refuted. Although we reject AGW, we find that greenhouse gas forcings have a temporary effect on global temperature. Because the greenhouse effect is temporary rather than permanent, predictions of significant global warming in the 21st century by IPCC are not supported by the data.

Paper here (PDF)
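For readers who want to play with the kind of unit-root testing the paper leans on, here is a minimal Python sketch (simulated series only; the function name and thresholds are my own, and this is not the authors' actual procedure or data) of how one might classify a series as I(1) or I(2) by running ADF tests on successive differences.

```python
# Illustrative only: classify a simulated series by how many differences an
# ADF test needs before it rejects a unit root.  The data below are random
# walks, NOT the temperature or forcing series analysed in the paper.
import numpy as np
from statsmodels.tsa.stattools import adfuller

def integration_order(x, max_d=3, alpha=0.05):
    """Smallest d such that the d-th difference rejects a unit root (rough heuristic)."""
    for d in range(max_d + 1):
        series = np.diff(x, n=d) if d > 0 else np.asarray(x)
        if adfuller(series, autolag="AIC")[1] < alpha:
            return d
    return None  # no rejection up to max_d differences

rng = np.random.default_rng(0)
shocks = rng.normal(size=500)
random_walk = np.cumsum(shocks)        # an I(1) example
double_walk = np.cumsum(random_walk)   # an I(2) example

print("simulated I(1) series classified as I(%s)" % integration_order(random_walk))
print("simulated I(2) series classified as I(%s)" % integration_order(double_walk))
```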



281 Comments
Bart
February 18, 2010 1:14 pm

davidmhoffer (12:33:51) :
“The damn (sic, or maybe not 🙂 analogy was similarly designed to illustrate a concept, and in fact, for the amount of water flowing past the dam[] , the long term average is in fact the same over the long haul, and raising or lowering the dam[] in fact has a huge but temporary effect on the flow rate (like the paper suggests).”
Interesting article here. Perhaps a better point could be made if you updated your analogy to say that extending the width of the dam has no steady state effect at all, whether of flow rate or of retention of water.

DirkH
February 18, 2010 1:14 pm

“DeWitt Payne (07:53:35) :
Another question: Is the choice really just between I(0), I(1), I(2) … I(n) where n is an integer only? Why not I(0.9)? ”
They talk about “first differences”, “2nd differences” etc… the analogon to 1st and 2nd differential in a discrete time series. You can’t differentiate 0.9 times…
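For concreteness, a tiny Python sketch of what first and second differences of a discrete series look like (the quadratic series is just a placeholder):

```python
# First and second differences of a discrete series, the analogue of the
# first and second derivative.  Quadratic placeholder data.
import numpy as np

t = np.arange(10.0)
y = 0.5 * t**2

print(np.diff(y, n=1))   # first differences: t + 0.5, roughly the derivative t
print(np.diff(y, n=2))   # second differences: constant 1.0, the second derivative
```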

Bart
February 18, 2010 1:16 pm

Bart (13:14:14)
Maybe “depth” would be a better word than width. I mean, of course, the dimension in the direction of flow.

DirkH
February 18, 2010 1:17 pm

“DirkH (13:14:58) :
[…]
They talk about “first differences”, “2nd differences” etc… the analogon to 1st and 2nd differential”
Sorry, I mean “1st and 2nd derivative”; I don’t know if “differential” has the same meaning in English.

Nick Stokes
February 18, 2010 1:19 pm

Re: VS (Feb 18 02:29),
I missed your earlier post. But it’s confusing. You say there that “Beenstock and Reingewertz (2009) confirm I(1)”,
but now you agree that, no, they didn’t; they relied on earlier results and don’t quote any test statistics or uncertainty intervals. But “earlier results” would seem to be Kaufmann and Stern, which you are quite critical of.
It seems to me the key issue is not whether I(1) is an adequate approximant for temperature, but whether I(2) can be ruled out. I don’t see that anyone has tested that. It seems to me that the temperature plot has a lot of structure which doesn’t correspond to any polynomial order, and there will be a lot of difficulty in uniquely associating it with any I(n).
One could also note that the K&S references are up to ten or more years old, and there has been a lot of temperature measurement since then.

February 18, 2010 3:45 pm

Would taking the square of the temperature change anything? In thermodynamics T^2 is a measure of energy, which is what we’re looking for.
Also, if I can find the data, could someone do the same analysis of global temperature versus human sin? Some people seem to think there’s a direct link.

davidmhoffer
February 18, 2010 7:39 pm

Bart (13:14:14) :
“Interesting article here. Perhaps a better point could be made if you updated your analogy to say that extending the width of the dam has no steady state effect at all, whether of flow rate or of retention of water.”
Interesting article, thanks for the link.
Not quite what the paper says and not quite my point. How about this? Go down to a pond and throw in a fist-sized rock. You will be able to SEE the ripples across the surface of the pond. With a keen eye you could even measure them with a decent ruler. In theory, the rock is now lying at the bottom of the pond, so the level of the pond has to be higher. It is. But the ripples from throwing the rock in are WAY out of proportion to the increase in the level of the pond. That’s what the authors of the paper were getting at. The increase in CO2 causes ripples, but the total change in temperature is tiny.

DeWitt Payne
February 18, 2010 7:55 pm

Re: DirkH (Feb 18 13:17),
Sorry, but you’re wrong. There is such a thing as fractional calculus. More to the point:
The first order autoregressive model, y(t) = a1*y(t-1) + epsilon(t), where epsilon(t) is a serially uncorrelated stochastic process with mean zero and constant variance, has a unit root when a1 = 1. In this example, the characteristic equation is m − a1 = 0, whose root is m = a1, so the unit root corresponds to m = 1. But a1 does not have to be an integer, it can be any real number. In which case, differencing of any integer order will not make the series stationary. If a1 is close to 1, then the ADF test will fail to reject the null hypothesis that a1 is equal to one. I think.
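A quick simulation of that last point (my own sketch, with made-up sample sizes, not DeWitt's code): when a1 is close to but below 1, the ADF test rarely rejects the unit-root null at this sample length.

```python
# Sketch: power of the ADF test against near-unit-root AR(1) alternatives.
import numpy as np
from statsmodels.tsa.stattools import adfuller

def simulate_ar1(a1, n, rng):
    y = np.zeros(n)
    eps = rng.normal(size=n)
    for t in range(1, n):
        y[t] = a1 * y[t - 1] + eps[t]
    return y

rng = np.random.default_rng(42)
n_obs, trials = 150, 200
for a1 in (0.5, 0.95, 0.99, 1.0):
    rejections = sum(
        adfuller(simulate_ar1(a1, n_obs, rng), autolag="AIC")[1] < 0.05
        for _ in range(trials)
    )
    print(f"a1 = {a1:4.2f}: unit root rejected in {rejections}/{trials} runs")
```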

George Turner
February 18, 2010 8:03 pm

Yes, but that analogy opens you up for the obvious retort, “but there are six billion people throwing rocks in the pond, every day! It’ll fill up in a few weeks!”
To anticipate counter arguments, you have to learn to think like a person who would bomb a city with polar bears.

davidmhoffer
February 18, 2010 8:20 pm

George Turner (20:03:30) :
Yes, but that analogy opens you up for the obvious retort, “but there are six billion people throwing rocks in the pond, every day! It’ll fill up in a few weeks!”
Fallen into my trap… and so fast too!
Yes let’s scale the pond up to the size of a planet. 6 billion people throwing a fraction of a grain of sand into the ocean every day. How have they done so far? Well, since 1920 they’ve gone from 0.000280 of the atmosphere all the way to (shudder) 0.000380 of the atmosphere. Omigosh! 0.000100! They’ll have that sucker filled up in a few hundred years! panic! everyone panic!
I predict they run out of oil before the glaciers melt. Ha!

February 18, 2010 8:27 pm

BTW I live in Winnipeg where polar bear sculptures appear all over the city every year. I have never seen one being put in place, or sculpted or anything. I just assumed that people were putting them up when I wasn’t paying attention. I had no idea that they were BOMBS! I will keep an eye on the sky from now on.
When I first read your post I thought you meant real polar bears. I think getting them into the bomb bay might be harder than you think. I’ve been nose to nose with a black bear a couple of times, and polar bears are a LOT bigger. And they eat seals which I have been told my physique resembles. So I have a certain amount of antipathy toward the concept of trying to put one into a bomb bay.

February 18, 2010 8:36 pm

and further to the record, Winnipeg is a winter city where surviving 10 days in succession of -30 is a badge of honor. That said, we have no sceptics here. We call them pessimists.

George Turner
February 18, 2010 10:04 pm

Davidmhoffer,
But your argument used decimal numbers. Your opponents don’t do decimals. 🙂
BTW, have you ever heard of the top-secret bat-bomb project in WW-II? Prior to confidence in the A-bomb, a bunch of researchers (and a famous actor, and a tiger mascot) were on a project that was going to drop millions of Texas free-tailed bats on Japanese cities, each with a tiny little incendiary device glued to its fur. The bats were kept air-conditioned so they stayed torpid, then at altitude the cold temperatures would keep them that way. They were to be dropped in extending egg-crate carriers with a parachute. Once they reached lower altitudes the warmer air would wake them up, and they’d fly out and roost under the eaves and in the nooks and crannies of Japanese houses.
During an airbase photoshoot with about a dozen of the little guys the photographer took too long, the bats warmed up, and they flew away, setting fire to the base. Due to the secrecy of the project, the Air Corps couldn’t let the local fire department in and the whole base went up in flames.
Not long after that the A-bomb test worked and the project was cancelled, because there was no way we could intimidate Stalin with bombers loaded with little bats.
Bat Bomb at Amazon.

Bart
February 18, 2010 11:35 pm

davidmhoffer (19:39:49) :
Not quite what the paper says…
Perhaps not precisely. The comment at “perryalger Feb 17, 10:49 AM” made an impression on me, and I guess I was cuing off of that. At 15 microns, he says, CO2 is already absorbing all reflected radiation, so adding more is akin to increasing the depth of the dam.
DeWitt Payne (19:55:28) :
“But a1 does not have to be an integer, it can be any real number. In which case, differencing of any integer order will not make the series stationary.”
Your point is correct – fractional derivatives do exist. However, in your example, you need to be more precise. If the equation is assumed to have been initialized in the infinite past, and the absolute value of a1 is less than unity (these requirements mean the output is bounded and there is no exponentially decaying remnant of the initial state to contend with), it is wide sense stationary. If the pdf of epsilon(t) is Normal, then it is strict sense stationary. See this page on autoregressive processes.
I think you mean “differencing of any integer order will not make the increments independent.”
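A small simulation of the conditions Bart describes (my own sketch, with |a1| < 1 and a burn-in standing in for initialization in the infinite past): the process settles to a stable mean and a variance near sigma^2 / (1 - a1^2).

```python
# Sketch: an AR(1) with |a1| < 1, after a long burn-in (approximating
# initialization in the infinite past), has a stable mean and a variance
# near the theoretical value sigma^2 / (1 - a1^2).
import numpy as np

rng = np.random.default_rng(1)
a1, sigma, n, burn = 0.8, 1.0, 50_000, 2_000

y = np.zeros(n + burn)
eps = rng.normal(scale=sigma, size=n + burn)
for t in range(1, n + burn):
    y[t] = a1 * y[t - 1] + eps[t]
y = y[burn:]                                   # discard the transient

first_half, second_half = np.split(y, 2)
print("means:    ", round(first_half.mean(), 3), round(second_half.mean(), 3))
print("variances:", round(first_half.var(), 3), round(second_half.var(), 3))
print("theory:   ", round(sigma**2 / (1 - a1**2), 3))   # 2.778 for a1 = 0.8
```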

Editor
February 18, 2010 11:44 pm

Your point is correct – fractional derivatives do exist.

An honest question. My understanding is that the difference between a “first derivative” and a “first difference” is that the former is done on a continuous function, and the latter on a discrete dataset.
First, is that the case? Second, while fractional derivatives exist, does fractional differencing exist?
Thanks,
w.

DirkH
February 19, 2010 3:34 am

Willis:
“Second, while fractional derivatives exist, does fractional differencing exist?”

World of wonders, it seems so, Willis:
http://www.research.ibm.com/people/h/hosking/abs.pub06.html
“Abstract. The family of autoregressive integrated moving-average processes, widely used in time series analysis, is generalized by permitting the degree of differencing to take fractional values. […]”
Looks like a really wicked operator to me – a discrete approximation of a fractional derivative maybe. I’m fascinated. The next question would of course be: Has econometrics or statistics already taken advantage of that?
Yes, indeed:
http://ideas.repec.org/p/boc/bocoec/317.html
“Fractional Differencing Modeling and Forecasting of Eurocurrency Deposit Rates”
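Fractional differencing is easy to sketch directly (my own illustration, not Hosking's code): expand (1 - L)^d as a binomial series and apply the truncated weights.

```python
# Sketch of fractional differencing: apply (1 - L)^d via its binomial
# expansion, truncated to n_weights terms.  Illustrative only.
import numpy as np

def frac_diff(x, d, n_weights=100):
    w = np.zeros(n_weights)
    w[0] = 1.0
    for k in range(1, n_weights):
        w[k] = w[k - 1] * (k - 1 - d) / k      # pi_k = (-1)^k * C(d, k)
    out = np.empty(len(x))
    for t in range(len(x)):
        lags = x[max(0, t - n_weights + 1): t + 1][::-1]   # x_t, x_{t-1}, ...
        out[t] = np.dot(w[:len(lags)], lags)
    return out

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=500))       # a random walk, i.e. d = 1
print(frac_diff(x, d=1.0)[1:6])           # ordinary first differences
print(frac_diff(x, d=0.9)[1:6])           # the series "differenced 0.9 times"
```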

February 19, 2010 7:26 am

George Turner;
BTW, have you ever heard of the top-secret bat-bomb project in WW-II?>
I had not heard that one, sounds fascinating. Just far-fetched enough, yet logical enough, to prompt additional reading. Reminds me of the Russians in WW-II going with a similar strategy. They rounded up dogs and only fed them under a tank. Once they had the dogs trained to eat only under a tank, they starved them for a while, strapped magnetic mines to their backs, and released them into the face of a Nazi tank advance. Instead of racing toward the Nazi tanks, they scattered. Turned out they could tell the difference between a Nazi tank and a Russian tank and they had been trained with Russian tanks. The Russians had to shoot every dog on sight for a month and they lost quite a few of their own tanks in the process.

DeWitt Payne
February 19, 2010 8:04 am

Re: Bart (Feb 18 23:35),

I think you mean “differencing of any integer order will not make the increments independent.”

Thanks for the correction.
The question still remains, though. Are the conclusions of B&R in their paper still valid if the time series aren’t I(0), I(1) or I(2) because there are no unit roots? If a series is noisy enough, is it even possible to determine whether it’s I(2), given that each differencing decreases the signal-to-noise ratio (the variances are additive)? The treatment of error in regressions of autocorrelated series that I am familiar with at places like The Blackboard and Climate Audit has been to reduce the number of degrees of freedom based on the lag(1) correlation coefficient, which is usually less than 1.
Another question: Does conversion to an anomaly affect the integration order of a time series?

VS
February 19, 2010 9:30 am

DirkH (03:34:08) :
That’s very interesting! I wasn’t aware of that method (but, in my defense, time series analysis is not really my field :). The only issue is that it takes the problem of interpreting the coefficients (already a big hurdle in TSA) to a whole new level.
In any case, perhaps that’s something I might try on climate series, once I find the time for it… it looks like it takes some programming to implement.
DeWitt Payne (08:04:56) :
It doesn’t really matter how ‘noisy’ a series is for detecting a stochastic trend, if by noise you mean white noise (i.e. idiosyncratic error).
You are right that differencing ‘kills the signal’, but only if performed on a series which doesn’t contain a stochastic trend in the first place. If the series is in fact a random walk, that ‘signal’ wasn’t there to start with 🙂 Luckily we have tests to help us out there..
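A toy contrast of the two cases (my own sketch; the series are simulated, not climate data): differencing a trend-stationary series removes the deterministic trend and roughly doubles the noise variance, while differencing a random walk simply hands back the underlying shocks.

```python
# Toy contrast: differencing a trend-stationary series versus a random walk.
import numpy as np

rng = np.random.default_rng(3)
noise = rng.normal(size=1000)

trend_stationary = 0.01 * np.arange(1000) + noise   # deterministic trend + noise
random_walk = np.cumsum(noise)                       # stochastic trend only

# The differenced trend-stationary series is just a constant plus differenced
# noise: the trend "signal" is gone and the variance roughly doubles.
print("var of diff(trend-stationary):", round(np.diff(trend_stationary).var(), 3))
# The differenced random walk is simply the original shocks.
print("var of diff(random walk):     ", round(np.diff(random_walk).var(), 3))
```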

MikeN
February 19, 2010 12:38 pm

So is TomP the new proxy for RealClimate, the blog that deigns to ignore WUWT?

DeWitt Payne
February 19, 2010 1:03 pm

Re: VS (Feb 19 09:30),
Is it true that the impulse response of a random walk or I(1) series is to fail to return to the trend (effectively a step change)? If so, then how can the temperature series be I(1) regardless of the ADF statistic when there are many examples of impulses that do indeed return to the trend (ENSO events and volcano eruptions specifically)? The lower stratosphere series looks at first glance like a step change response to volcano eruptions, but on closer examination it looks more like a warming pulse that decays rapidly on top of a cooling pulse that decays at a much slower rate. With any luck the graph will display. Otherwise, the link is here.
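A minimal simulation of the distinction in that first question (my own sketch, not the stratosphere data): a one-off shock to a unit-root process persists as a step, while the same shock to a stationary AR(1) decays back toward the mean.

```python
# Impulse response: random walk (unit root) versus stationary AR(1).
import numpy as np

n, shock_at, shock_size = 200, 100, 5.0
eps = np.zeros(n)
eps[shock_at] = shock_size   # a single one-off shock; other noise suppressed

# (a) Random walk / unit-root process: the shock persists as a permanent step.
rw = np.cumsum(eps)

# (b) Stationary AR(1) with coefficient 0.9: the shock decays back to zero.
ar = np.zeros(n)
for t in range(1, n):
    ar[t] = 0.9 * ar[t - 1] + eps[t]

print("random walk 50 steps after the shock:", rw[shock_at + 50])           # 5.0
print("AR(0.9)     50 steps after the shock:", round(ar[shock_at + 50], 4))  # ~0.026
```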

DeWitt Payne
February 19, 2010 7:51 pm

A couple of references on the problems dealing with cointegration of time series with near unit roots:
http://www.informaworld.com/smpp/content%7Econtent=a713692127&db=all
Abstract:
This paper argues that the predominant method of estimating equilibrium relationships in macroeconometric models, namely the VECM system of Johansen, is severely flawed if the underlying variables are distributed as near unit root processes. Researchers may apply cointegration techniques to these processes, as the power of rejecting near unit roots using standard unit root tests is extremely low. Using Monte Carlo analysis, problematic behaviour of cointegration analysis is found in detecting the true underlying form of the connection between the near unit root processes. Furthermore the connecting vector is imprecisely estimated, resulting in problematic inference for error correction models.
http://www.federalreserve.gov/pubs/ifdp/2007/907/ifdp907.pdf
Methods of inference based on a unit root assumption in the data are typically not robust to even small deviations from this assumption. In this paper, we propose robust procedures for a residual-based test of cointegration when the data are generated by a near unit root process. A Bonferroni method is used to address the uncertainty regarding the exact degree of persistence in the process. We thus provide a method for valid inference in multivariate near unit root processes where standard cointegration tests may be subject to substantial size distortions and standard OLS inference may lead to spurious results. Empirical illustrations are given by: (i) a re-examination of the Fisher hypothesis, and (ii) a test of the validity of the cointegrating relationship between aggregate consumption, asset holdings, and labor income, which has attracted a great deal of attention in the recent finance literature.
These papers would seem to cast doubt on the validity of the conclusions of the B&R paper as it seems to be based on the conclusion that the temperature series is identically I(1), which it clearly isn’t.

DirkH
February 20, 2010 2:04 am

“DeWitt Payne (13:03:12) :
Re: VS (Feb 19 09:30),
Is it true that the impulse response of a random walk or I(1) series is to fail to return to the trend (effectively a step change)?”
I don’t think so. I’m trying to think about this in terms of signal processing and I hope I’m making sense…
I(1) is first differences, right? So that would be a FIR filter that does z(t) = i(t) – i(t-1), z = output signal, i = input signal. This removes the DC component completely and has a high-pass characteristic, letting high frequencies pass and damping low frequencies, with no specific f0.
So I would think that I(1) would return to the trend. I think the random walk you mention would be I(0). Does this make sense?
Talking about fractional differencing: for the moment, I see that as a method of using linear combinations of I(0), I(1) and I(2) (more if you need) or in SP terms a FIR filter like
z(t) = a * i(t) + b * i(t-1) + c * i(t-2)
– kind of like linearly mixing the signal, first difference and second difference, slowly shifting the impulse response from an I(1) characteristic to an I(2) characteristic, for instance. I hope you get the picture; it sounds less magical if you think about it in terms of a sound engineer who uses an equalizer to modify the frequency characteristic of a signal.
So if we are not certain whether temperature can safely be said to be I(1), we could model it as such a combination, same for CO2. I don’t know at the moment, though, whether tests for Granger causality can be done with such an approach or whether it is meaningful to do so.
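A short numerical check of that signal-processing picture (my own sketch; the third filter's coefficients are arbitrary): the first- and second-difference filters have zero response at DC and pass high frequencies, while a generic three-tap mix need not kill DC.

```python
# Magnitude response of a few FIR filters at frequencies 0 .. pi (Nyquist).
import numpy as np

def fir_response(coeffs, n_freqs=5):
    w = np.linspace(0, np.pi, n_freqs)
    z = np.exp(-1j * np.outer(w, np.arange(len(coeffs))))
    return np.abs(z @ np.asarray(coeffs, dtype=float))

filters = [("first difference  [1, -1]",    [1.0, -1.0]),
           ("second difference [1, -2, 1]", [1.0, -2.0, 1.0]),
           ("generic 3-tap     [a, b, c]",  [0.5, -0.4, 0.1])]
for name, coeffs in filters:
    # first printed value is the response at DC; the difference filters are zero there
    print(name, "->", np.round(fir_response(coeffs), 3))
```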

VS
February 20, 2010 4:55 am

DeWitt Payne (13:03:12) (and DirkH (02:04:55)):
The series you posted is ‘detrended’ (i.e. you gave me the error_hat(t) of y(t)=a+b*t+error_hat(t)), which makes the errors unsuitable for inference. Whether the series actually has a deterministic or stochastic trend is the whole question we are trying to answer with these unit root tests.
Also, about the error correction (what you are talking about when you wrote ‘returning to a trend’): A cointegrated relationship implies that, while the two (or more) series are random walks, they share a common random walk component, or a stochastic trend if you will.
This in turn implies an error correction mechanism, where deviations from the (common) stochastic trend are corrected over the longer run.
So, for example, let’s say that temperatures and solar irradiance (both I(1)) are cointegrated, and therefore share a common stochastic trend. Now, the implication of this is the following: a deviation of, say, temperatures from this stochastic trend can happen due to an exogenous shock, but in the mid-to-long run, it will converge back to its long run equilibrium relationship (i.e. the common stochastic trend) with solar irradiance.
Note that we do not have to ‘explain’ the stochastic trend in order to make this statistical inference, we’re just measuring, and that’s the beauty of it.
The following should get you started on ECM’s:
http://ricardo.ecn.wfu.edu/~cottrell/ecn215/error_corr_2004.pdf
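To make the error-correction idea concrete, here is a minimal Engle-Granger-style sketch on simulated data (my own illustration; the variable labels are placeholders, and the paper itself uses polynomial cointegration rather than this two-step recipe): regress one I(1) series on the other, check the residuals for stationarity, then let the lagged residual pull the differenced series back toward the long-run relation.

```python
# Engle-Granger-style sketch of cointegration and error correction on
# simulated data.  Variable labels are placeholders, not the climate series,
# and this is NOT the paper's polynomial cointegration procedure.
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
n = 500
common_trend = np.cumsum(rng.normal(size=n))      # shared stochastic trend
x = common_trend + rng.normal(size=n)              # stand-in for "solar irradiance"
y = 0.5 * common_trend + rng.normal(size=n)        # stand-in for "temperature"

# Step 1: long-run regression; stationary residuals indicate cointegration
# (a proper test would use Engle-Granger critical values, not plain ADF ones).
long_run = sm.OLS(y, sm.add_constant(x)).fit()
resid = long_run.resid
print("ADF p-value on residuals:", round(adfuller(resid, autolag="AIC")[1], 4))

# Step 2: error-correction model; the lagged residual measures the deviation
# from the long-run relation and should pull y back (negative coefficient).
dy, dx = np.diff(y), np.diff(x)
ecm = sm.OLS(dy, sm.add_constant(np.column_stack([dx, resid[:-1]]))).fit()
print("error-correction coefficient:", round(ecm.params[-1], 3))
```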
DeWitt Payne (19:51:18) :
There are always problems when you deviate from your base assumptions.
However, in light of all the tests which point to an I(1) process, it is a bit irresponsible to conclude that temperature is not I(1). 🙂 I mean, what do you have the tests for then?
But OK, let us assume that you are right and that temperature is ‘near unit root’. Loosely put, this implies that temperatures are I(0) with very high persistence (which could hypothetically be the case, see the overview I posted at: VS (11:10:03)). In turn, this would imply that the alleged link with GHGs, which are I(2), is even less likely.
They also state: “Researchers may apply cointegration techniques to these processes, as the power of rejecting near unit roots using standard unit root tests is extremely low”
Again, this implies that a cointegrating relationship is even LESS likely, further contradicting the AGWH. I don’t think these papers invalidate the BR approach per se (they reject the relationships, remember). Rather, they invalidate the Kaufmann et al approach.
DirkH (02:04:55) :
“Talking about fractional differencing: For the moment, i see that as a method of using linear combinations of I(0), I(1) and I(2) (more if you need) or in SP terms a FIR filter like z(t) = a * i(t) + b * i(t-1) + c * i(t-2)”
I don’t quite understand what you mean by a ‘linear combination of I(0), I(1) and I(2)’, as the notation given doesn’t correspond to what you wrote below (you wrote z(t) = (a + b*L + c*L^2)*i(t), where L is the lag operator).
I(d) is a characteristic of a series (i.e. how many times you have to difference the original series to obtain a stationary series), not the actual differencing (that would be the (1-L) operation :).
If you in fact meant a cointegrating relationship between I(1) (temperature, solar irradiance) and I(2) (GHG’s) variables: well, this is exactly what BR test for when they perform polynomial cointegration tests! 🙂

DirkH
February 20, 2010 6:01 am

“VS (04:55:47) :
[…]
If you in fact meant a cointegrating relationship between I(1) (temperature, solar irradiance) and I(2) (GHG’s) variables: well, this is exactly what BR test for when they perform polynomial cointegration tests!”
Thanks for the clarification, VS. Sloppy wording on my side. I’m just trying to get a grip on the concept of fractional differencing. I see it as a spectral thing. How many times you have to difference a series actually just means how often you apply the same filter, so it’s nothing too magical…
