New paper on mathematical analysis of GHG

Polynomial Cointegration Tests of the Anthropogenic Theory of Global Warming

Michael Beenstock and Yaniv Reingewertz – Department of Economics, The Hebrew University, Mount Scopus, Israel.

Abstract:

We use statistical methods designed for nonstationary time series to test the anthropogenic theory of global warming (AGW). This theory predicts that an increase in atmospheric greenhouse gas concentrations increases global temperature permanently. Specifically, the methodology of polynomial cointegration is used to test AGW when global temperature and solar irradiance are stationary in 1st differences, whereas greenhouse gas forcings (CO2, CH4 and N2O) are stationary in 2nd differences.

We show that although greenhouse gas forcings share a common stochastic trend, this trend is empirically independent of the stochastic trend in temperature and solar irradiance. Therefore, greenhouse gas forcings, global temperature and solar irradiance are not polynomially cointegrated, and AGW is refuted. Although we reject AGW, we find that greenhouse gas forcings have a temporary effect on global temperature. Because the greenhouse effect is temporary rather than permanent, predictions of significant global warming in the 21st century by IPCC are not supported by the data.

Paper here (PDF)
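For readers unfamiliar with the abstract’s “stationary in 1st/2nd differences” language: a series is I(d) if it must be differenced d times to become stationary, and this is typically established with Augmented Dickey–Fuller (ADF) tests. A minimal sketch in Python, on synthetic stand-in series (not the paper’s actual data):

```python
# Sketch: classify a series as I(0), I(1) or I(2) by differencing until an
# ADF test rejects the unit-root null. Synthetic data stands in for the
# paper's temperature and greenhouse gas forcing series.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
n = 150
temp = np.cumsum(rng.normal(size=n))               # random walk: I(1)
rf_co2 = np.cumsum(np.cumsum(rng.normal(size=n)))  # doubly integrated: I(2)

def order_of_integration(x, max_d=2, alpha=0.05):
    """Smallest number of differences after which ADF rejects a unit root."""
    for d in range(max_d + 1):
        if adfuller(x, autolag="AIC")[1] < alpha:  # index 1 is the p-value
            return d
        x = np.diff(x)
    return max_d + 1  # still nonstationary after max_d differences

print("temperature stand-in is I(%d)" % order_of_integration(temp))   # 1
print("rfCO2 stand-in is I(%d)" % order_of_integration(rf_co2))       # 2
```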

281 Comments
Tom P
February 16, 2010 4:33 pm

George Turner (14:54:09) :
“But we don’t have a complete understanding of all the physical factors that may come into play. Letting the data speak for itself instead of torturing it until it shows what you expect to see is probably the safer method.”
But you don’t need a complete understanding of all the physical factors to produce some constraints on the possible relationships between the parameters. Those constraints, though, can be very useful in making sense of the data while making no assumptions about the relationship you’re trying to establish. What you recommend as a “safer method” can in fact result in an unphysical solution based on oversimplified statistical methods.
“One of the problems I have with the simple greenhouse effect is that it perfectly models our atmosphere as a semi-transparent solid.”
None of the models used today make such a gross simplification.
VS (15:20:41)
“There is a lot of ‘distance’ between experimental physical results, and climate models. I reckon it’s about the same as the distance between macroeconomic models and things we know about people and the nature of their preferences.”
No, people are much more complicated than the Earth’s climate! Of course now that all but a few accept there is some level of influence of humans on climate, even that relative simplicity is being lost.

NickB.
February 16, 2010 5:11 pm

Tom P,
One difference between time-series analysis in econometrics and physical science is the additional constraint in the latter that any relationship has to conform to known physical laws. Kaufmann and Stern recognise this, Beenstock and Reingewertz apparently don’t.
I’m really hoping I misread that, because it almost sounded like you said that a statistical analysis done for physical science has to give an answer that fits preconceived notions, and if not, it is invalid. Whereas if it is wrestled into its proper result, then it’s good, solid science. Please clarify!
As much as everyone loves swinging the “it’s a simple physics problem” hammer, once CO2 gets out of the lab and out there the whole problem becomes something completely different. Anyone who has studied Economics can probably brainstorm hundreds of examples of logical theoretical linking between variables that are quite literally lost in the noise once you try and find them in the wild. The whole point of this exercise and discipline is to come at it from the other direction, find solid signals, and then use that to support/refute existing empirical theory, or even come up with new theories.
A counterintuitive (sorry, but I cannot use the term aphysical to describe this) result for this type of analysis should be seen as a good thing – it means we’re probably going to learn something new here (assuming, of course, that their approach is solid, which does seem to be the case per VS’ reviews)… and like I said earlier, if the physical science on AGW is as solid as you think it is, the result could also mean the data is bad.
That said, there is absolutely nothing I have seen so far to indicate that there is any grounds for refuting/rejecting this analysis.

JDN
February 16, 2010 7:30 pm

DirkH (00:04:18) :
“JDN (20:43:35) :
[…]
summarize their independent variables, justify their choices of variables (e.g. they introduce rfCO2 without explanation”
“In Table 1 we provide details of the classification procedure for the radiative
forcing of CO2 (rfCO2).”
Questions?
—————————-
Oh, just the typical reviewers comments:
1) Define radiative forcing of CO2.
1a) Is it taken from theory or measured somehow? Is it animal, vegetable or mineral? Seriously, this paper is so bad it’s not even wrong…. literally. Unless someone is exactly you or someone in a very tight field for whom this jargon is second nature, there is no possibility of checking its accuracy. If you were to come back to this article 20 years from now, do you really expect to understand it yourself?
1b) Does it belong to a particular theory? I would imagine there are hundreds of definitions. Let’s just nail this one thing down. Also, why did you select this one definition?
2) Can you state a hypothesis in terms which don’t involve “unit root of the variable”? This bit of jargon is specific to your chosen method, non-stationary time series. In science, we have hypotheses where the mathematics is brought in to solve the hypothesis. In other words, connect your mathematics to the real world.
3) “In Table 1 we provide details of the classification procedure for the radiative
forcing of CO2 (rfCO2).” This statement is untrue. You don’t provide any detail, you just list some tests. Procedures involve either explaining data input, manipulation, data sorting/output or *clearly* citing references which use exactly the same procedures or something similar.
4) What do non-stationary time series mean to you (the author) (REF?)
5) “The method of cointegration is designed to test hypotheses with time series data that are non-stationary to the same order…” Really, are they designed for that purpose? Well, do they actually succeed? What conditions must be met for that success and how did you check whether those conditions were met?
5a) For those of us who don’t feel like crawling through all this jargon, what is the method of cointegration and why should we believe anything that comes out of that method? Since this will certainly be the first time hearing about it for many people, how about mentioning some successful application of the method and not just the fact that people are using it. People do a lot of silly things.
And there’s more to be sure, so, if you’re still reading this thread, how about some answers?

NickB.
February 16, 2010 8:10 pm

Dave Tufte
Thanks for the summary explanation. I’m sure you’ve forgotten more on this subject than I ever learned, but from one rusty economist to a sharp one… I’m glad to see my suspicions confirmed or denied by someone else from our area of study.
Tom P
I didn’t mean to pile on there, and FWIW I hadn’t seen your most recent reply when I posted. What I was really trying to get at is that there are times (as always assuming the methods and approach are solid and unbiased) when a counterintuitive result might result in a new and interesting insight… and one that might not be apparent on first review.

Alan Wilkinson
February 16, 2010 8:25 pm

Very interesting discussion. I concur that it is foolish to ignore well established theory from other fields here.
However, since it comes to the wrong conclusions, there is little chance the paper will be published in Nature under present management.

VS
February 17, 2010 2:39 am

Dave Tufte (15:50:00) :
Yes, you are right when you stress that the basic problem is that GHG’s are I(2) and temperature is I(1). Asymptotically (if the sample size, n, goes to infinity) these two stochastic trends are independent.
But then you state:
“First, disregard the polynomial part. That’s a modelling tweak that probably doesn’t matter too much (if anything, it makes me think they fished a bit and thus doubt the conclusions).”
Beenstock and Reingewertz indeed use a ‘model tweak’ that would allow for these two series to be cointegrated on the next level. But that ‘model tweak’ is not in itself something trivial. Polynomial cointegration was first described by Yoo (1986), and a solid body of literature has since been developed on the topic, with contributors including the likes of Johansen (!) and Granger (!). This is the established way to deal with I(2)/I(1) relationships.
So, the difference between Kaufmann et al (2006) and Beenstock and Reingewertz (2009) is the following:
**Kaufmann et al (2006) attempt to cointegrate temperature, which is I(1), directly with the sum of radiative forcing, which is I(2) (equation (3), p. 255). This is plain wrong.
**Beenstock and Reingewertz (2009) first cointegrate the various greenhouse gases down to an I(1) variable, and then attempt to cointegrate temperature with this I(1) variable. This is the correct approach.
They find that solar irradiance is the most important factor determining temperature levels (what a surprise).
“This shows that the first differences of greenhouse gases are empirically important but not their levels. The most important variable is solar irradiance. Dropping this variable, but retaining the first differences of the greenhouse gas forcings, adversely affects all three cointegration test statistics”
They then proceed to show what happens if you ignore the different order of integration, like Kaufmann et al (2006) did.
“Haldrup’s (1994) critical value of the cointegration test statistic when there are three I(2) variables and two I(1) variables is about -4.25. Therefore equation (4) is clearly not polynomially cointegrated, and the conclusions of these studies regarding the effect of rfCO2 on global temperature are incorrect and spurious.”
Specifically, they argue that the following conclusion, by Kaufmann et al (2006) on p.255, is spurious:
“The ADF statistic strongly rejects (P < 0.01) the null hypothesis that the residual contains a stochastic trend, regardless of the lag length used in Equation (2) (Table I), which indicates that the variables in (3) cointegrate."
Let me stress the most important point here. By incorrectly applying these procedures, Kaufmann et al. (2006) conclude that an increase in CO2 has a permanent effect on temperatures. Beenstock and Reingewertz (2009), by correctly applying the procedure, conclude that it is in fact only temporary.
So, you are right to state that the variables being I(1) and I(2), respectively, is the main issue. However, the procedure described by Beenstock and Reingewertz (2009) is not just a ‘model tweak’.
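To make the two-step logic concrete, here is a rough Engle–Granger-style sketch on synthetic data. It only mimics the shape of B&R’s argument: their actual procedure uses Haldrup’s I(2) critical values, and the ADF p-value printed below uses standard rather than cointegration critical values, so treat it as illustration only.

```python
# Sketch of the I(2) -> I(1) -> temperature logic described above. Synthetic
# series; Engle-Granger-style regressions stand in for B&R's procedure.
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(1)
n = 150
shared_i2 = np.cumsum(np.cumsum(rng.normal(size=n)))     # common I(2) trend
rf_co2 = shared_i2 + np.cumsum(rng.normal(size=n))       # I(2), shares trend
rf_ch4 = 0.5 * shared_i2 + np.cumsum(rng.normal(size=n))
temp = np.cumsum(rng.normal(size=n))                     # independent I(1) walk

# Step 1: if the I(2) forcings share a stochastic trend, the residual from
# regressing one on the other cancels that trend and is only I(1).
gas_i1 = sm.OLS(rf_co2, sm.add_constant(rf_ch4)).fit().resid

# Step 2: test whether temperature cointegrates with that I(1) combination.
resid = sm.OLS(temp, sm.add_constant(gas_i1)).fit().resid
print("ADF p-value on final residual: %.3f" % adfuller(resid)[1])
# A large p-value = no cointegration: the analogue of B&R's negative finding.
```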

VS
February 17, 2010 3:38 am

JDN (19:30:34) :
For the record, Beenstock and Reingewertz (2009) use NASA’s GISS data.
Also, with all due respect, the questions you asked are not ‘typical reviewer comments’, but rather comments from somebody unfamiliar with the methods employed. Now, there is nothing wrong with that in itself, so here are some answers to your questions:
1) The same way Kaufman et al (2006), and everybody else in the literature, define it.
1a) See answer above. As for the ‘jargon’, it’s statistical modeling. I can understand that coming from science it can be a bit frustrating to encounter (mathematical) language and jargon that you are not familiar with, but that’s just the result of 60 years of development in a field. I can’t read quantum field theory papers either, and you don’t see me dismissing their findings because of that.
Also, in all honesty, I find the language and methods used by Beenstock and Reingewertz (2009) far clearer than those used by Kaufmann et al (2006). To start with, they report all their tests (unlike Kaufmann et al, where I had to dig through the whole paper to infer that, somewhere in the corner, they take their main variable of interest, GLOBL, to be I(1); see the entire discussion above).
1b) They are testing a correlation here. I don’t really see your point (DirkH (14:21:44) explained the idea behind it well, I think). They first reject the specifications used before on the basis of test results, and then they try to find a correlation structure, any correlation structure, which could plausibly be in there somewhere. In short, they are testing a necessary, rather than sufficient, (statistical) condition for causality.
If you cannot detect a nudge in temperatures, on any level, due to CO2 forcing, then what on Earth is all the CO2 fuss about?
2) Well, we find that temperature levels are a random walk (see answer 4 below), and that GHG’s first differences are a random walk. Does that help?
3) When you apply a regular t-test on the difference of means, do you also refer to all the literature on the topic? The test procedures in Table 1 are the results of standard methods (i.e. Augmented Dickey-Fuller tests), used by every single paper on the topic I read in the past few days, to establish the order of integration. If you are interested in the theory behind them, you can start with this:
http://nobelprize.org/nobel_prizes/economics/laureates/2003/ecoadv.pdf
4) Here’s the most simple example of a nonstationary time series:
Y(t)=rho*Y(t-1) + error(t)
with rho=1 (which is what makes the series nonstationary)
where the error term is independently and identically distributed. This is called a random walk. The series is integrated of the first order, or has a unit root if you wish (i.e. the value of the coefficient of Y(t-1), rho, is equal to unity, making it a random walk).
If you then take first differences, you get:
D_Y(t)= error(t)
which is a stationary series, hence Y(t) is I(1). The issue here is that once rho=1, the series Y(t) is non-stationary, making standard statistical inference on the value of rho invalid. We therefore have to apply non-standard tests (e.g. the ADF test equation) to distinguish between the cases rho=1 and |rho|<1 (i.e. between the series being I(1) or I(0)). (A runnable version of this example follows at the end of this comment.)
5) Again, there is a whole body of literature on the topic, and a Nobel Prize has been handed out for work on it. In fact, all those test statistics you were referring to, are tests to see whether conditions for cointegration are met. Kaufmann et al (2006) inappropriately test those conditions.
5a) Good question, people do a lot of silly things. Cointegration is currently the standard approach in nonstationary time series analysis (and all the papers I read so far on the topic, and my own tests, are quite conclusive on the point that we are dealing with a bunch of nonstationary series).
So, we might doubt the merit/value of the entire field of time series statistics… but I doubt that that will get us anywhere in the current discussion.
Hope this helps 🙂
PS. It is quite curious that Kaufmann et al (2006) passed peer review with that error included, especially since we have known for over 20 years that the method they employed is incorrect. Perhaps it's because they submitted it to 'Climate Change' rather than the 'Journal of Econometrics'…?
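And for anyone who wants to see point 4 in action, a runnable counterpart (synthetic data, standard statsmodels ADF test):

```python
# Point 4 above, made runnable: a random walk fails to reject the ADF
# unit-root null in levels but rejects it in first differences -- which is
# what 'Y(t) is I(1)' means operationally.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(42)
y = np.cumsum(rng.normal(size=160))  # Y(t) = Y(t-1) + error(t), i.e. rho = 1

print("p-value, levels:      %.3f" % adfuller(y)[1])           # large
print("p-value, differences: %.3f" % adfuller(np.diff(y))[1])  # near zero
```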

February 17, 2010 6:25 am

VS (03:38:56) :
JDN (19:30:34) :
For the record, Beenstock and Reingewertz (2009) use NASA’s GISS data.
Also, with all due respect, the questions you asked are not ‘typical reviewer comments’, but rather comments from somebody unfamiliar with the methods employed. Now, there is nothing wrong with that in itself, so here are some answers to your questions:
1) The same way Kaufman et al (2006), and everybody else in the literature, define it.
1a) See answer above. As for the ‘jargon’, it’s statistical modeling. I can understand that coming from science it can be a bit frustrating to encounter (mathematical) language and jargon that you are not familiar with, but that’s just the result of 60 years of development in a field. I can’t read quantum field theory papers either, and you don’t see me dismissing their findings because of that.

But if you publish a paper using such jargon in another field where it hasn’t previously been used, you have an obligation to explain it. I’m sure you’d expect the same if someone tried to publish in the ‘Journal of Econometrics’ using quantum field terminology…?

Roger Knights
February 17, 2010 6:36 am

TYPO:

They are claiming that if the height of the damn dam goes up …

Alan Wilkinson
February 17, 2010 1:08 pm

VS, many thanks for your very helpful expositions.

NickB.
February 17, 2010 2:31 pm

Alan,
Agreed – nice job VS! Really solid complex statistics truly is an art form, and specific to Econometrics analyses, one that is apparently missing from this climate discussion. Thanks!

Roger Abel
February 17, 2010 6:15 pm

The AGW crusaders depend on the truthfulness of the temperature measurements made over the last 30 years. The raw data has been passed on through NOAA, and the manipulated data delivered to the IPCC and the world comes from NCDC, GISS and CRU.
The Climategate letters/data and the Anthony Watts/Joseph D’Aleo report, “Surface Temperature Records: Policy Driven Deception?”, state that both the measurement stations and the measurement values are HEAVILY TAMPERED WITH !!! Please dig into this, and realise that nothing in the IPCC reports holds water if these temperatures cannot be relied upon.
BOTTOM LINE: Measurement stations are badly sited. Those picked for the raw dataset are heavily biased towards the warmest places on earth (towns, coasts, airports). The northern and southern regions of the earth are barely represented in the data collected for mean temperature analyses. The adjustments done by CRU, NCDC and GISS are furthermore biased towards warming trends by manipulating the data (not correcting it!!!)
THE RISING TEMPERATURES ARE MAN-MAD(e) -orchestrated by IPCC & Associates!
Underwater volcanoes close to the South Pole and under the North Pole are surely melting ice -NO DOUBT! Himalayan glaciers are melting due to man-made deforestation and agricultural development in their neighbourhood. Changes to the Earth’s magnetic field due to pole shifts are confusing animals so they are getting lost. And the IPCC tells us that this is evidence for Global Warming???
This is a political/ideological campaign with NO BASIS in real science. It has been developed behind the curtains since the fifties, with the goal of replacing the oil market with a CO2 market and new energy sources. This is done cleverly by forcing people to pay for it through taxes imposed on us through FEAR (basic instincts easily override sense -our brains are made that way, for survival). Those at the top making money on the oil-driven market today will continue to make money on a CO2-driven market tomorrow -and they are getting away without paying for the transition too 😮
NO DOUBT there are clever “brains” behind this… and they have the money to put behind the making of this AGW deception too. Most people don’t care, because they don’t understand… and of course these “brains” KNOW that too!
-As for this statistical analysis against CO2… it’s printed out and put into my HUUGE pile of debunking-AGW evidence yet to be read and understood :-/
Kindly, Roger Abel,
Norway -with “Press Freedom” that stays silent on any contradictions to AGW and keeps the people without knowledge of fraudulent political actions !!! There is still NO debate on any AGW contradictions in the media here in Norway 🙁

Nick Stokes
February 17, 2010 7:32 pm

Re: VS (Feb 17 03:38),
I couldn’t see anywhere in the B&R paper where they showed that temperature was I(1), and not I(2). What test is used? It seems to me that recent temperature history doesn’t conform satisfactorily to any polynomial model. Nor did I see solar I(1) tested.

Richard Sharpe
February 17, 2010 8:06 pm

NickB. (14:31:13) said:

Alan,
Agreed – nice job VS! Really solid complex statistics truly is an art form, and specific to Econometrics analyses, one that is apparently missing from this climate discussion. Thanks!

Yes, and I seem to remember Steve McIntyre pointing out that methods from Econometrics are more solid than those from Climate Science and perhaps ought to be explored 🙂

JDN
February 17, 2010 8:20 pm

VS (03:38:56) :
You know what’s absolutely hilarious… when I went to look up Kaufmann (2006) in the reference section of Beenstock & Reingewertz (2009), I discovered that they listed the wrong journal (Climate Change -> Climatic Change) and the wrong page numbers (248 -> 249). They haven’t even proofed their reference section for publication.
Seriously, though, why not just spell out the definition of radiative forcing of CO2? Why make me go through journals to get it? It was a simple question, and you blew me off.
According to Kaufmann (2006), radiative forcing has units of W/m^2, making it irradiance (a physics standard unit taught in every optics course). There are two tiny, eensy, weensy little problems with this definition: 1) The units are wrong for something called a “forcing”, and 2) the units are wrong for something called a “forcing”. I realize that it’s the same objection twice, but I thought it was so important it was worth repeating. They don’t address how things get forced, or the fact that radiative heat transfer should have units of W/m^3 for the atmosphere, plants & water but W/m^2 for reflective surfaces. Is the planet nothing but surface? Additionally, a forcing should have units of rate of change of something per unit of something else. And that’s just for starters. I guarantee you they have oversimplified the problem to make it work with their technique. Kaufmann (2006) also had the most amusing term: “explanatory variable”. It’s such an abuse of jargon, it’s hilarious. Don’t tell me it’s standard terminology; I don’t want to know.
You put up a link to the fact that this jargon-riddled, so-called statistical method has won a Nobel Prize in economics. Well, our buddy Al Gore has one of those too. Ipso facto, he must be a genius. That ought to blow your mind.
Well, this has really been educational. If I ever learn non-stationary time series, it will be to prove its worthlessness. After its so-called method is applied, all you end up doing is “proving” that some variable undergoes a random walk. Such a feat is just not possible without some more substantial mathematics. I say that this entire statistical technique is on marshy ground based solely on the fact that encryption is not threatened by your technique. I want to remind you that mathematicians regularly excoriate physicists for their sloppy use of mathematics. I suspect that if you look through the literature, there is some statistician jumping up and down about how useless this method is. That’s why it’s so filled up with jargon, to cover its rotting body.
So, once again, I condemn this paper, the entire technique it’s based upon, and the attendant bad physics. I’ve seen enough and don’t feel obligated to plunge into its swampy morass any further. I suppose if someone wants to correct the gaping, fundamental problems with this paper, I might have another look, but, life is too short to waste it on people who are deliberately obfuscating their methods with misappropriated terminology and an infinite regress of definitions.

Alan Wilkinson
February 17, 2010 8:49 pm

JDN, I think more listening and less frothing would improve both your health and knowledge.
There is no physics in the paper. It is an exploration of a claimed causal relationship given two data series. The expertise applied is appropriate to the scope of the paper.

Editor
February 17, 2010 9:14 pm

JDN (20:20:17)

… According to Kaufmann (2006), radiative forcing has units of W/m^2, making it irradiance (a physics standard unit taught in every optics course). There are two tiny, eensy, weensy little problems with this definition: 1) The units are wrong for something called a “forcing”, and 2) the units are wrong for something called a “forcing”. I realize that it’s the same objection twice, but I thought it was so important it was worth repeating. They don’t address how things get forced, or the fact that radiative heat transfer should have units of W/m^3 for the atmosphere, plants & water but W/m^2 for reflective surfaces. Is the planet nothing but surface? Additionally, a forcing should have units of rate of change of something per unit of something else. And that’s just for starters. I guarantee you they have oversimplified the problem to make it work with their technique. Kaufmann (2006) also had the most amusing term: “explanatory variable”. It’s such an abuse of jargon, it’s hilarious. Don’t tell me it’s standard terminology; I don’t want to know.

For historical reasons, everyone in the field calls it a “forcing”. A number of people have made your comment over the last decade or so, and the comment is 100% correct.
The first time someone made this point, it was interesting. The tenth time, not so much. For you to make the same point for the 4,323rd time is tendentious. Every field has its own jargon, its own often-strange use of terms. Many times these terms do not have their normal, everyday meaning. So what? The authors of this paper call it “forcing”, which is what everyone in the field, from Phil Jones to Steve McIntyre and every single other scientist, calls it. You don’t like it? … Tough. After a decade, complaining about it just makes you look out of touch.

You put up a link to the fact that this jargon-riddled, so-called statistical method has won a nobel prize in economics. Well, our buddy Al Gore has one of those too. Ipso facto, he must be a genius. That ought to blow your mind.

For someone who seems to be pedantic about small points, perhaps you could give us the details on Al Gore’s Nobel Prize in Economics …
For your future reference, there are two kinds of “Nobel Prizes”, which are given by entirely different groups of people. The first kind of Nobel is given to brilliant scientists and scholars for Chemistry, Physics, Economics, and the like. The second kind is the “Nobel Peace Prize”, which is given to peaceful folks like Yasser Arafat and overweight carbon millionaires like Al Gore. If you haven’t noticed the difference between the science prizes and the peace prize at this late date, you need more help than we can offer you here.

DeWitt Payne
February 17, 2010 9:24 pm

It looks like B&R used the Lean, Beer and Bradley TSI reconstruction (reference xi). Other researchers, Leif Svalgaard for example, don’t agree, and think L,B&B significantly overestimate the solar variability. Treating the forcings from CO2, N2O and CH4 as independent variables while leaving out any contribution from aerosols, land use changes and other forcings seems questionable to me.

DeWitt Payne
February 17, 2010 9:57 pm

And more to the point, while temperature may have been I(1) when changes in solar activity were conceded by all to be the principal driving force, is it still? It looks like B&R analyzed from 1850 to 2006. For something like 2/3 to 3/4 of that time, solar would have been dominant. It’s the temperature increase from 1970 on that is supposed to be most heavily influenced by well-mixed ghg’s. If we restrict the analysis to 1970 on, are ghg’s still I(2)?

Alan Wilkinson
February 18, 2010 12:17 am

DeWitt Payne, if CO2 only influenced temperature after 1970, what has it been doing since 1750:
http://cdiac.ornl.gov/trends/co2/graphics/lawdome.gif
And don’t forget its effect is supposed to be logarithmic – i.e. becoming less effective per delta.
The AGW claim is that CO2 impact will extrapolate to catastrophic levels. The data is saying temperature is not following CO2 sufficiently rapidly to support that claim. Unless you can point to major discontinuities in other factors for the future compared with the past there isn’t much of a leg left for AGW to stand on.

VS
February 18, 2010 2:12 am

JDN (20:20:17) :
I’m really not knowledgeable enough to comment on the use and definitions of radiative forcing variables. The definition used by both Kaufmann et al (2006) and Beenstock and Reingewertz (2009) is in line with what everybody else does, though. I’m sorry if it seemed like I was ‘blowing you off’, but I was more interested in the alleged statistical relationships.
Then you wrote: “Additionally, a forcing should have units of rate of change of something per unit of something else.”
For the record, forcing is indeed defined as a rate of change by Kaufmann et al (2006), and presumably then also by Beenstock and Reingewertz (see below for the references they use for their data).
On page 269, Kaufmann et al (2006) give equation (20):
RFCO2_t = 6.3 * ln(CO2_t / CO2_1860)
which, if you look carefully, is a ‘rate of change’ relative to a given base year (namely 1860). It is furthermore in line with the definition given here (with C0 being CO2_1860):
http://en.wikipedia.org/wiki/Radiative_forcing#Example_calculations
Except that the ‘multiplier’ given there is 5.35 and K2006 employs 6.3. I don’t know what that last point implies in general, but for Kaufmann’s model it implies a relatively stronger impact of CO2 in the aggregate radiative forcing variable (RFAGG, eq. (23), p. 269) than would be the case if 5.35 were used as the factor.
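To put rough numbers on that multiplier point, here is the formula evaluated with both coefficients; the concentrations are round illustrative values, not the series either paper uses:

```python
# K2006's equation (20) with both candidate multipliers, on round numbers.
import math

co2_1860, co2_now = 286.0, 386.0  # ppm; illustrative values only

for k in (6.3, 5.35):
    rf = k * math.log(co2_now / co2_1860)
    print("k = %.2f -> rfCO2 = %.2f W/m^2" % (k, rf))
# 6.3 / 5.35 is about 1.18, so K2006's choice scales the CO2 contribution to
# the aggregate forcing variable up by roughly 18% relative to using 5.35.
```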
You also stated “I suspect that if you look through the literature, there is some statistician jumping up and down about how useless this method is.”
Oh I’m sure you can find more than one, and that’s what makes econometrics a scientific discipline. Lack of people ‘jumping up and down’ would be far more worrisome.
DeWitt Payne (21:57:31) :
That’s a very good point, and Beenstock and Reingewertz considered, and tested, that possibility:
“We also check whether rfCO2 is I(1) subject to a structural break. A break in the stochastic trend of rfCO2 might create the impression that d = 2 when in fact its true value is 1. We apply the test suggested by Clemente, Montanas and Reyes (1998) (CMR). The CMR statistic (which is the ADF statistic allowing for a break) for the first difference of rfCO2 is -3.877. The break occurs in 1964, but since the critical value of the CMR statistic is -4.27 we can safely reject the hypothesis that rfCO2 is I(1) with a break in its stochastic trend.”
…but we’re all good 😉
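As an aside, statsmodels does not implement the Clemente-Montañés-Reyes test quoted above, but its Zivot-Andrews test is a related unit-root test that allows one structural break, which is enough to sketch the idea on synthetic data:

```python
# Unit-root test allowing one structural break: Zivot-Andrews here, standing
# in for the Clemente-Montanes-Reyes test that B&R actually use.
import numpy as np
from statsmodels.tsa.stattools import zivot_andrews

rng = np.random.default_rng(7)
y = np.cumsum(rng.normal(size=160))  # random walk
y[80:] += 3.0                        # level shift halfway: the 'break'

stat, pval, crit, baselag, bpidx = zivot_andrews(y, regression="c")
print("ZA stat %.3f, p-value %.3f, estimated break index %d"
      % (stat, pval, bpidx))
# Failing to reject the unit root even with a break allowed is the same kind
# of evidence B&R cite for rfCO2 being I(2) rather than I(1) with a break.
```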
———————–
For JDN:
References to data/variables used by Beenstock and Reingewertz (2009)
Hansen, J.,Ruedy, R., Glascoe, J. & Sato,M. GISS analysis of surface temperature change. Journal of Geophysical Research 104, 30997-31002 (1999).
Hansen, J. et al. A closer look at United States and global surface temperature change. Journal of Geophysical Research 106, 23947-23963 (2001).

VS
February 18, 2010 2:29 am

Nick Stokes (19:32:35) on temperature (not) being I(1).
Take a look at the discussion above, specifically VS (11:10:03).
Also, BR2009 state that they confirm previous findings that temperature and solar irradiance are I(1), and they state that again in Table 2. The test statistics are missing, though, but I suspect that’s because they didn’t think this was a point of dispute (again, see discussion above :).

DeWitt Payne
February 18, 2010 6:19 am

Re: VS (Feb 18 02:12),
But did they or anyone else test the temperature series for a break?
I’d also like to see the technique applied to one or more climate model temperature series where we know for a fact that ghg’s influence the temperature. If climate model temperature series are I(2), that would cast some doubt on their validity. OTOH, if they are I(1) and the ghg forcing is I(2), then there’s something wrong with how the test is being performed.

DeWitt Payne
February 18, 2010 7:53 am

Another question: Is the choice really just between I(0), I(1), I(2) … I(n) where n is an integer only? Why not I(0.9)? My quick and dirty research on the topic says that an I(1) time series doesn’t show recovery to the trend after a shock, that is, it’s a random walk. But one can see at a glance from the temperature series that shocks such as the Pinatubo eruption and ENSO events do show recovery to the trend. So what is it that I don’t understand correctly here?
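For what it’s worth, that question has a standard answer in time-series econometrics: fractional integration (ARFIMA models), where d need not be an integer, and 0.5 < d < 1 gives exactly the “nonstationary but shocks eventually decay” behaviour described. A minimal simulation sketch, purely illustrative:

```python
# Simulating a fractionally integrated I(d) series with d = 0.9: the
# in-between case asked about above (an ARFIMA(0, d, 0) process).
import numpy as np

def frac_integrate(eps, d):
    """Apply (1 - L)^(-d) to white noise via its MA(infinity) expansion."""
    n = len(eps)
    psi = np.empty(n)
    psi[0] = 1.0
    for k in range(1, n):
        # psi_k = Gamma(k + d) / (Gamma(d) * k!), built up recursively
        psi[k] = psi[k - 1] * (k - 1 + d) / k
    # x_t = sum over k of psi_k * eps_{t-k}
    return np.array([psi[: t + 1][::-1] @ eps[: t + 1] for t in range(n)])

rng = np.random.default_rng(3)
x = frac_integrate(rng.normal(size=500), d=0.9)
# Unlike a pure random walk (d = 1), the impulse response here decays --
# hyperbolically and slowly -- matching the 'recovery after Pinatubo'
# intuition, while the series still looks nonstationary over short samples.
print("std of first 100 obs: %.2f, std of last 100 obs: %.2f"
      % (x[:100].std(), x[-100:].std()))
```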

DirkH
February 18, 2010 1:12 pm

“JDN (20:20:17) :
[…]
Well, this has really been educational. If I ever learn non-stationary time series, it will be to prove its worthlessness. After its so-called method is applied, all you end up doing is “proving” that some variable undergoes a random walk.[…]”
Now that’s what I call blowing off steam. If I didn’t know this was JDN I would have assumed it was Gavin S. himself.
VS and Dave Tufte, thanks to the both of you for your calm and very illustrative writing! I’m a software engineer and the only contact I’ve had with time series so far has been simple signal-processing algorithms, but this thread has become the most fascinating reading for me, and it looks like econometrics has a lot of insight to offer me!
For me, the fact that Beenstock and Reingewertz do *NOT* have to rely on any assumed physical mechanism is the very strength of their approach. It creates a constraint that any supposed physical mechanism has to fulfill to be a valid candidate. Really a nail in the coffin for CO2 as the major climate driver IMHO.
I also think that B&R harmonize wonderfully with Ferenc Miskolczi’s theory. It all makes sense.
