Polynomial Cointegration Tests of the Anthropogenic Theory of Global Warming
Michael Beenstock and Yaniv Reingewertz – Department of Economics, The Hebrew University, Mount Scopus, Israel.
Abstract:
We use statistical methods designed for nonstationary time series to test the anthropogenic theory of global warming (AGW). This theory predicts that an increase in atmospheric greenhouse gas concentrations increases global temperature permanently. Specifically, the methodology of polynomial cointegration is used to test AGW when global temperature and solar irradiance are stationary in 1st differences, whereas greenhouse gas forcings (CO2, CH4 and N2O) are stationary in 2nd differences.
We show that although greenhouse gas forcings share a common stochastic trend, this trend is empirically independent of the stochastic trend in temperature and solar irradiance. Therefore, greenhouse gas forcings, global temperature and solar irradiance are not polynomially cointegrated, and AGW is refuted. Although we reject AGW, we find that greenhouse gas forcings have a temporary effect on global temperature. Because the greenhouse effect is temporary rather than permanent, predictions of significant global warming in the 21st century by IPCC are not supported by the data.
Paper here (PDF)
davidmhoffer,
Yes, seriously. Most models are simplified versions of the shallow water equations developed by a guy who wore a powdered wig and tube socks. The exception is MIT’s model which, as I’ve been told, is pure Navier-Stokes equations, which is great except that the Navier-Stokes equations are invalid under conditions of evaporation or condensation (i.e., the weather).
Fluid flow is a very difficult problem, and weather flows depend on tiny differences which would disappear into the round-off error of an airplane wing plowing along at hundreds of miles an hour, a flow which itself assigns empirical fudge factors for turbulence.
As one research paper pointed out, the equations used to model fluid flow are reversible, so they’re perfectly happy unmixing a fluid. They can have time flow forward or backward with equal accuracy, which means that, as currently formulated, they ignore effects that maintain important laws of thermodynamics. In short, they ignore entropy, turbulence, and other crucial phenomena, but do at least provide a passable guess at some short-term or large-scale behaviors.
So they’re not completely useless, just not accurate, and there is no way to achieve accuracy as currently mathematically formulated.
George Turner;
thanks for answering, but I am flabbergasted. I knew they ignored a lot of thermodynamics and entropy and so on which I have been trying to explain to some people, but this revelation has my head spinning in disbelief. Think about this series of questions.
Q. What area of the earth do we have the least data about?
A. Arctic Zones
Q. What area of the earth shows the greatest variance from mean temperature?
A. Arctic Zones
Q. Which parts of the earth radiate the most energy into space compared to what they retain from solar?
A. Arctic Zones.
Q. If there were errors made in constructing the models, they would be most difficult to discern when applying the results to the areas we have the least data to correlate to, which would be?
A. Arctic Zones.
So, if we started with a 2D model and extrapolated it onto a sphere, the accuracy of the model would be greatest at the equatorial regions and worst at the Arctic Zones. You know the Arctic Zones… the ones with the most temperature variability, the most negative feedback, and the least data with which to figure out if the model was right or not. So the models are by default the most inaccurate at the very spot where we need the most accuracy to figure out whether net heating or cooling is happening and whether the model is right or not.
Should I assume the modelers are not able to figure this out themselves, or should I assume they got a bad feeling about what they might find out when they clean it up?
davidmhoffer (20:13:02) :
I think you’ve taken the term radiative forcing too literally. From wikipedia (not the best source… got it):
“In climate science, radiative forcing is loosely defined as the change in net irradiance at the atmospheric boundary between the troposphere and the stratosphere (the tropopause). Net irradiance is the difference between the incoming radiation energy and the outgoing radiation energy in a given climate system and is measured in Watts per square meter. ”
So AGW is not saying CO2 generates that energy. It’s saying it’s changing the transmission rate of the energy between parts of the atmosphere, i.e., increasing energy storage (something you agree it can do).
This effect of CO2 does not violate thermodynamics. CO2 slows escape of energy, so outgoing radiation is REDUCED by CO2. Since incoming radiation is still constant (relatively), then there is energy buildup in the atmosphere until the incoming/outgoing radiation balances again. More energy in the atmosphere = higher temps.
As a skeptic, I agree with all of this. Where I think AGW went wrong is in calculating the effect. I believe that other atmospheric processes produce a negative feedback, not a positive one as AGW requires.
The idea that CO2 is I(2) is interesting. Since the CO2 is not removed from the air, perhaps it is an effect of the generation of the CO2 and not the CO2 itself. So what is it about burning fossil fuels that would have a temperature effect with a half-life of one year? The chemical energy released? The black carbon?
Or maybe the correlation isn’t physical, but socio-economic. I’ll bet the increase in the rate of CO2 emissions is highly correlated with the UHI. Every time we build a building or add a road, increasing UHI, we also increase the rate of CO2 emissions to power the building or fuel the car that drives on the road. That might make urbanization an I(1) variable, proving the temperature record is badly contaminated by UHI.
Some Guy (16:59:14) : You asked about sources of CO2 ppm measurements other than Mauna Loa. I wondered the same yesterday and did some digging. There are several other such studies in existence, and I am mighty relieved to report that they show (a) the same ppm level to within about 10 ppm, (b) the same upward trend, and (c) the same annual variation.
Why do I say “relieved”? Because there is a danger, having been so deceived by the likes of Mann and Jones, that we will cease believing in anything. Of course, that would be a gross overreaction; uberscepticism. We “sceptics”, if we accept such a label, are not naysayers; we just want to differentiate between untested and tested hypotheses. Between, on the one hand, plausible and well-presented fallacies and, on the other, repeatable solid confirmable truths.
Looking at the Mauna Loa graph, I found it rather too tidy, rather too regular. I intuited that measurement error and natural variation ought to make the shape rather more chaotic. Having found independent verification, I am glad to report that my intuition was wrong; I conclude that the Mauna Loa dataset is clean and honest.
@Tom P (17:45:14) :
“I’d like to see your results from the whole period of GISTEMP, 1880 to present, and Hadcrut, from 1850. Truncation of a series will tend to hide any I(2) behaviour. Both full series show very similar positive second derivatives.”
Tom, where do Kaufmann et al (2006) state that temperature is not I(1)?
I’ve scrolled through their paper (I didn’t read it carefully, I admit; the baseless hypothesizing was too much to handle while digesting lunch ;), and the ADF stat for GLOBL (the temperature series) is not listed. They do however find that RFAGG is I(1) in Table 1.
They then proceed to estimate a cointegration relationship between GLOBL and RFAGG, which implies that they think that GLOBL is I(1), because otherwise there couldn’t be a cointegration relationship to start with.
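The logic here — cointegration presupposes that the series share the same order of integration — can be illustrated with a standard Engle–Granger residual-based check on simulated data. This is only a sketch with made-up series (not the polynomial cointegration test of the paper, and not real climate data): two I(1) series are cointegrated when some linear combination of them is stationary, i.e. their stochastic trends cancel.

```python
import numpy as np

def df_tstat(y):
    # t-statistic on rho in the Dickey-Fuller regression:
    # dy_t = alpha + rho * y_{t-1} + e_t (no augmentation lags)
    dy, ylag = np.diff(y), y[:-1]
    X = np.column_stack([np.ones(len(ylag)), ylag])
    b, *_ = np.linalg.lstsq(X, dy, rcond=None)
    r = dy - X @ b
    cov = (r @ r / (len(dy) - 2)) * np.linalg.inv(X.T @ X)
    return b[1] / np.sqrt(cov[1, 1])

rng = np.random.default_rng(7)
n = 400
x = np.cumsum(rng.standard_normal(n))        # an I(1) "forcing" series
y_coint = 2.0 * x + rng.standard_normal(n)   # shares x's stochastic trend
y_indep = np.cumsum(rng.standard_normal(n))  # an independent I(1) series

def resid_df(y):
    # Engle-Granger: OLS of y on x, then a DF test on the residuals
    X = np.column_stack([np.ones(n), x])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return df_tstat(y - X @ b)

t_coint = resid_df(y_coint)  # far below the ~ -3.4 critical value: cointegrated
t_indep = resid_df(y_indep)  # typically insignificant: no common trend
```

An I(2) series regressed on an I(1) series leaves an I(2) residual no matter what the coefficient is, which is why matching integration orders is a precondition for this kind of test.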
Furthermore, in an earlier paper, Kaufmann and Stern (2002), write:
“Consistent with the results of previous research (Woodward and Gray, 1995; Bloomfield and Nychka, 1992; Stern and Kaufmann, 1999), the results in Table 1 show that the temperature data are I(0) or I(1).”
Table 1 in Kaufmann and Stern (2002) furthermore gives the ADF stats for the temperature series NHEM and SHEM.
Northern Hemisphere (test statistic):
Level (don’t reject H0): -2.85
First difference (reject H0): -11.67
Conclusion: I(1)
Southern Hemisphere (test statistic):
Level (reject H0): -3.55
Conclusion: I(0)
The employed significance level is 0.05 (at 0.1, both series would be I(1)). In any case, there seems to be no concrete evidence of temperature being I(2).
I also went on to look through Kaufmann and Stern (2002)’s references, listed above. Note that I had no (free) access to Bloomfield and Nychka (1992), so I didn’t check that one.
Woodward and Gray (1995): In section 6 they reject the H0 that the series is I(0), but for some curious reason they fail to continue by testing for a unit root in the differenced series.
Kaufmann and Stern (1999): In section 3.3 they perform the univariate unit root tests, with some modifications, and conclude that “The KPSS test shows that all the temperature series are I(1).” They furthermore state that these tests were performed on the longest series available.
Finally, I took the data you suggested (CRUTEM3, GISSTEMP, HADCRUT) and performed the tests, employing the same methodology as earlier (e.g. SIC-based lag selection). For all three datasets the conclusion is unambiguous: global temperatures are I(1).
** CRUTEM3, global mean, 1850-2008:
Level series, ADF test statistic (p-value): -0.329923 (0.9164)
First difference series, ADF test statistic (p-value): -13.06345 (0.0000)
Conclusion: I(1)
** GISSTEMP, global mean, 1881-2008:
Level series, ADF test statistic (p-value): -0.168613 (0.6234)
First difference series, ADF test statistic (p-value): -11.53925 (0.0000)
Conclusion: I(1)
** HADCRUT, global mean, 1850-2008:
Level series, ADF test statistic (p-value): -1.061592 (0.2597)
First difference series, ADF test statistic (p-value): -11.45482 (0.0000)
Conclusion: I(1)
…
So, can we agree that temperature is I(1)? 🙂
PS. I notice that some people in this thread are taking the results in the paper and then figuring out how to 'fit' a physical model to them. Note that this kind of practice (i.e. data mining) is heavily frowned upon in the econometrics community: you first come up with your hypothesis, and then you test it. That's the only clean way to go about it.
PPS.
I just saw that I used the ‘Means based on Land-surface air temperature anomalies only’ version of GISSTEMP. So I ran the test again for ‘Combined land-surface air and sea-surface water temperature anomalies’.
** GISSTEMP, combined, global mean, 1881-2008:
Level series, ADF test statistic (p-value): -0.543388 (0.4722)
First difference series, ADF test statistic (p-value): -5.585529 (0.0000)
Conclusion: …this one is I(1) too.
PPPS. Excuse the spam, but the results for that last test should read:
Levels: -0.301710 (0.5752)
First differences: -10.84587 (0.0000)
The conclusion doesn’t change.
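The levels-then-differences procedure VS describes can be reproduced in a few lines. This is only a sketch on a simulated random walk with a lag-free Dickey–Fuller regression, not the SIC-lag-selected ADF runs above, but the decision rule is the same: fail to reject a unit root in levels, reject it in first differences, conclude I(1).

```python
import numpy as np

def df_tstat(y):
    # t-statistic on rho in the Dickey-Fuller regression:
    # dy_t = alpha + rho * y_{t-1} + e_t (no augmentation lags)
    dy, ylag = np.diff(y), y[:-1]
    X = np.column_stack([np.ones(len(ylag)), ylag])
    b, *_ = np.linalg.lstsq(X, dy, rcond=None)
    r = dy - X @ b
    cov = (r @ r / (len(dy) - 2)) * np.linalg.inv(X.T @ X)
    return b[1] / np.sqrt(cov[1, 1])

rng = np.random.default_rng(42)
walk = np.cumsum(rng.standard_normal(300))  # an I(1) series by construction

t_level = df_tstat(walk)           # usually above the 5% critical value (~ -2.86)
t_diff = df_tstat(np.diff(walk))   # strongly below it: the differences are stationary
```

With real temperature data, lag selection and the drift/trend specification matter, which is exactly why VS reports those choices.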
VS (04:16:37) :
“Where do Kaufmann et al (2006) state that temperature is not I(1)?”
It’s on page 272:
“Consistent with our argument that temperature itself is not I(1), the increase in solar activity has little effect beyond the first year.”
Stern and Kaufmann (1999) give some very useful background concerning the (mis)use of univariate tests to determine the stationarity order of temperature, and caveats which seem to have been ignored by Beenstock and Reingewertz:
“Statistical theory suggests that it will be difficult to detect an I(2) trend in a noisy time series such as global and hemispheric temperature series especially when the series is inappropriately approximated as a purely autoregressive process (Hamilton, 1994; Schwert, 1989; Phillips and Perron, 1988; Kim and Schmidt, 1990; Harvey, 1993; Pantula, 1991). An alternative approach is to model the I(2) trend and noise processes separately using the structural time series approach promoted by Harvey (1989).”
Stern and Kaufmann suggest how Beenstock and Reingewertz might have gone awry in their analysis.
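The Stern and Kaufmann caveat is easy to demonstrate numerically: bury a weak I(2) trend in observation noise, and a simple Dickey–Fuller test on the first differences will reject a unit root, misclassifying the series as I(1). A sketch on simulated data (the shock and noise scales below are arbitrary assumptions, chosen so the noise swamps the trend):

```python
import numpy as np

def df_tstat(y):
    # t-statistic on rho in: dy_t = alpha + rho * y_{t-1} + e_t
    dy, ylag = np.diff(y), y[:-1]
    X = np.column_stack([np.ones(len(ylag)), ylag])
    b, *_ = np.linalg.lstsq(X, dy, rcond=None)
    r = dy - X @ b
    cov = (r @ r / (len(dy) - 2)) * np.linalg.inv(X.T @ X)
    return b[1] / np.sqrt(cov[1, 1])

rng = np.random.default_rng(0)
n = 300
trend = np.cumsum(np.cumsum(0.01 * rng.standard_normal(n)))  # weak I(2) trend
series = trend + rng.standard_normal(n)                      # swamped by noise

# The differenced noise dominates, so the test rejects a unit root in the
# first differences even though the trend component alone would make them I(1).
t_diff = df_tstat(np.diff(series))
```

This is the sense in which a noisy series can hide I(2) behaviour from a purely autoregressive test, which is the substance of the quoted caveat.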
Street;
This effect of CO2 does not violate thermodynamics. CO2 slows escape of energy, so outgoing radiation is REDUCED by CO2. Since incoming radiation is still constant (relatively), then there is energy buildup in the atmosphere until the incoming/outgoing radiation balances again. More energy in the atmosphere = higher temps>
1. The outgoing radiation is in fact reduced. TEMPORARILY
2. As you said, there is energy build up until incoming and outgoing balance….which is why the reduced outgoing radiation from CO2 increase is TEMPORARY
3. The process by which the complexities of the system as a whole fluctuate to arrive at a new equilibrium produces oscillations in temperature that are TEMPORARY.
4. When a new equilibrium point is reached, the steady state will be a higher temperature, but the increase is minor in comparison to the TEMPORARY oscillations.
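Points 1–4 can be sketched with a zero-dimensional energy-balance model: apply a step forcing and watch the outgoing-flux deficit decay as temperature relaxes to a new, slightly higher equilibrium. The heat capacity and absorbed-flux values below are illustrative assumptions, not numbers from the thread; only the 3.7 W/m^2 forcing figure is quoted in the discussion.

```python
SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
ABSORBED = 240.0     # assumed absorbed solar flux, W m^-2
T0 = 288.0           # initial equilibrium temperature, K
EPS = ABSORBED / (SIGMA * T0**4)   # effective emissivity so T0 is an equilibrium
C = 2.0e8            # assumed mixed-layer heat capacity, J m^-2 K^-1
FORCING = 3.7        # step forcing from the thread, W m^-2

dt = 86400.0         # one-day Euler steps
T = T0
for _ in range(50 * 365):          # integrate ~50 years
    imbalance = ABSORBED + FORCING - EPS * SIGMA * T**4  # decays toward zero
    T += dt * imbalance / C

# Analytic new equilibrium: outgoing again matches incoming, at a higher T
T_new = ((ABSORBED + FORCING) / (EPS * SIGMA)) ** 0.25
```

With these assumptions the new steady state sits about 1.1 K above 288 K — the no-feedback figure quoted elsewhere in the thread — and the reduction in outgoing radiation is indeed temporary, while the elevated temperature persists as long as the forcing does.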
davidmhoffer (20:13:02) :
Bart:
The key to falsifying the AGW hypothesis is to show that either the “capacitance” is not changing as a result of anthropogenic emissions, or that the magnitude of that change due to anthropogenic emissions is insignificant, not in denying well established radiative physics>
You are correct and missing the point all at the same time. Yes the lake fills up and so there is potential energy stored in the lake. And yes, that slice of CO2 charges the earth capacitor making it hotter. BUT:
The AGW Hypothesis is that doubling CO2 adds 3.7 watts/m2 radiated toward the earth’s surface, resulting in a direct rise of 1.1 degrees. At a mean earth temperature of 288 K (15 C) a temperature increase of 1.1 degrees would result in a rise in earth radiance to outer space of 6.1 watts/m2. We can delve into ever more detailed analysis of the who-said-what, who-meant-what variety, but that math does not work. In order to achieve the proposed temperature increase, the AGW Hypothesis is built on the assumption that there is a long term positive feedback from CO2 that exceeds the resulting long term negative feedbacks. In the most ridiculous version, a tipping point driving runaway warming happens. THIS is the point I am trying to make about the physics.
And what you’ve succeeded in doing is reveal that you don’t understand the physics!
At a mean earth temperature of 288 K (15 C) a temperature increase of 1.1 degrees would result in a rise in earth radiance to outer space of 6.1 watts/m2.
This is not true, the correct version would be: At a mean earth temperature of 288 K (15 C) a temperature increase of 1.1 degrees would result in a rise in earth radiance back into the atmosphere of 6.1 watts/m2. Not all of that radiance will make it back into space because of absorption, scattering etc., according to Trenberth about 60.2% (235/390) of the surface radiance makes it into space so that’s 0.602*6.1W/m^2 = 3.7W/m^2!
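The arithmetic being traded here is easy to check with the Stefan–Boltzmann law. A back-of-envelope sketch, using the 235/390 ≈ 0.602 transmission fraction attributed to Trenberth above (exact answers differ slightly from the quoted 6.1 and 3.7 because of rounding in the thread):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
T0, DT = 288.0, 1.1

# extra surface radiance from a 1.1 K warming at 288 K
extra_surface = SIGMA * ((T0 + DT)**4 - T0**4)   # ~6.0 W/m^2 (the "6.1" quoted)

# fraction of surface radiance reaching space, per Trenberth's 235/390
to_space = (235.0 / 390.0) * extra_surface       # ~3.6 W/m^2 (the "3.7" quoted)
```

So the 6.1 W/m^2 surface figure and the 3.7 W/m^2 top-of-atmosphere figure are the same quantity viewed at different levels, which is Phil’s point.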
davidmhoffer (06:20:11) :
Please keep in mind, I’m just clarifying how the physics works. I disagree with AGW about the magnitude of the effects. I am also not talking about what this paper implies as no one has suggested a physical mechanism by which CO2 would have such a fast effect. I believe it may exist, but we can’t really discuss it until someone figures out what it is…..
“1. The outgoing radiation is in fact reduced. TEMPORARILY”
According to AGW, the reduction persists until the temperature goes up to a level that increases outgoing radiation to balance the input. In that sense it is temporary, but the energy storage (temp) of the atmosphere persists as long as the CO2 is there. The physical mechanism for this in AGW is a long-term process.
“2. As you said, there is energy build up until incoming and outgoing balance….which is why the reduced outgoing radiation from CO2 increase is TEMPORARY”
Same as #1.
“3. The process by which the complexities of the system as a whole fluctuate to arrive at a new equilibrium produce oscillations in temperature that are TEMPORARY.”
That statement cannot be evaluated until we know what processes we are talking about. The radiative physics alone would imply the temperature would persist as long as the CO2 persists. With negative feedback, I believe this temp increase would be small, but it would exist. If you’re talking about the results of this paper, again we don’t know the physical mechanism that results in CO2 being an I(2).
“4. When a new equilibrium point is reached, the steady state will be a higher temperature that is a minor increase in comparison to the TEMPORARY oscillations.”
If you’re basing that on this paper, then I agree that it may be true. However, without a physical mechanism, all we have are some interesting statistics.
Phil;
This is not true, the correct version would be: At a mean earth temperature of 288 K (15 C) a temperature increase of 1.1 degrees would result in a rise in earth radiance back into the atmosphere of 6.1 watts/m2. Not all of that radiance will make it back into space because of absorption, scattering etc., according to Trenberth about 60.2% (235/390) of the surface radiance makes it into space so that’s 0.602*6.1W/m^2 = 3.7W/m^2>
Yes! So there’s an “extra” 3.7 w/m2 going down, and an “extra” 3.7 w/m2 going up… which nets to… doing the math in my head here… zero. So the amount of energy going into the system equals exactly the amount coming out over the long term. Except wait a second… if the CO2 is already heated up enough to re-radiate an “extra” 3.7 w/m2 down, then it is ALSO hot enough to re-radiate an “extra” 3.7 w/m2 up. So it didn’t even need a boost from earth radiance at all. Now that of course is not how it would happen; I’m just making a point. The temperature gradient would change, and the intensity of the new curve would be interesting to understand because it would result in different temp changes at different layers. But the energy balance must be zero in the long term, and the AGW theories are not consistent with that.
@grumpy old man (13:43:10)
The humidity theory was a reference to the post by Gary Palmgren (16:15:27) that summarized Miskolczi’s theory as such:
Miskolczi claims that the semitransparent nature of the atmosphere in contact with an essentially infinite source of greenhouse gas in the form of water vapor from the oceans is in a state of dynamic equilibrium. As CO2 increases, a little water vapor rains out to keep the net optical density of the atmosphere constant. Remarkably, radiosonde data shows that the humidity above 300 mb has decreased over the last 50 years as CO2 has gone up. This fact rejects all of the GCMs that assume constant relative humidity (which is, or was, all of them).
@Some Guy (16:59:14)
@Brent Hargreaves (03:48:13)
I don’t have the link handy but there was a link here to a report about “lumpy” CO2 distributions detected by one of the NASA satellites – the “lumpy” part was very much overstated (delta was in the neighborhood of 6 or 8 ppm if I remember correctly, definitely within the 10 ppm Brent referenced) – but the average distribution was well in line with the Mauna Loa observations.
So the theory that CO2 is quickly and relatively well distributed through the atmosphere seems to be confirmed, and the Mauna Loa data seems to be a valid representation of atmospheric CO2 levels as a whole.
davidmhoffer (08:19:57) :
Phil;
This is not true, the correct version would be: At a mean earth temperature of 288 K (15 C) a temperature increase of 1.1 degrees would result in a rise in earth radiance back into the atmosphere of 6.1 watts/m2. Not all of that radiance will make it back into space because of absorption, scattering etc., according to Trenberth about 60.2% (235/390) of the surface radiance makes it into space so that’s 0.602*6.1W/m^2 = 3.7W/m^2>
Yes! So there’s an “extra” 3.7 w/m2 going down, and an “extra” 3.7 w/m2 going up… which nets to… doing the math in my head here… zero. So the amount of energy going into the system equals exactly the amount coming out over the long term. Except wait a second… if the CO2 is already heated up enough to re-radiate an “extra” 3.7 w/m2 down, then it is ALSO hot enough to re-radiate an “extra” 3.7 w/m2 up. So it didn’t even need a boost from earth radiance at all. Now that of course is not how it would happen; I’m just making a point. The temperature gradient would change, and the intensity of the new curve would be interesting to understand because it would result in different temp changes at different layers. But the energy balance must be zero in the long term, and the AGW theories are not consistent with that.
Again you reveal your ignorance, as that is exactly the expectation of AGW theory: increase the CO2 concentration in the atmosphere and the surface temperature must increase to balance the energy fluxes at the top of the atmosphere!
Thank you Nick B. for the humidity info. I wish I understood what the mechanism was for causing the water vapor to decrease when CO2 rises.
Now that I have a dog in this hunt (http://www.2bc3.com/warming.html), I’ll change to my real name Lon Hocker, from my accurate name “grumpy old man”.
davidm and phil,
To be picky, the average surface temperature isn’t sufficient to calculate radiance, because radiance is based on the fourth power of the temperature of each part of the surface used to compute the average.
If half the planet (night side) is at absolute zero and half (day side) is at twice 288 K, the radiance is sigma*(1/2*0^4 + 1/2*(2*T)^4) = 8*sigma*T^4, where T is the average temperature and the 1/2’s are there because the planet’s area is half cold and half hot. So instead of radiating 390 W/m^2 it radiates about 3,120 W/m^2, but the average temperature is EXACTLY the same.
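The point that mean temperature alone doesn’t determine radiance checks out numerically. A quick sketch of the deliberately extreme two-temperature planet from the comment:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
T = 288.0

uniform = SIGMA * T**4  # a uniform planet at 288 K radiates ~390 W/m^2

# half the surface at 0 K, half at 2T: the same mean temperature of T,
# but eight times the area-averaged radiance
extreme = 0.5 * SIGMA * 0.0**4 + 0.5 * SIGMA * (2 * T)**4  # ~3120 W/m^2
```

Because T^4 is convex, any spread of temperatures around the same mean raises the average radiance; the half-cold/half-hot case just makes the effect dramatic.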
Lon Hocker (09:55:45)
Welcome sir! The first rule of Fight Club is you do not talk about Fight Club 😉
A quick note on Miskolczi: the last time I looked at least, the underlying physics for his theory is very complex, and there have been allegations made by the RC crowd of fundamental flaws in his approach. Again, AFAIK, these critiques have not been responded to. His theory does seem to match the observed data better than the prevailing consensus approach used in the GCMs, so there very well could be something groundbreaking here. Here’s the WUWT thread on him: http://wattsupwiththat.com/2008/06/26/debate-thread-miskolczi-semi-transparent-atmosphere-model/
@Tom P
OK, I see I have to read the paper more carefully. On page 252, Kaufmann et al (2006) write:
“Instead, the results indicate that the time series for temperature, anthropogenic emissions of CO2 and their atmospheric concentrations contain a stochastic trend.”
Like everybody else, they find that temperature and CO2 are not I(0). On page 253 they then infer (second part of Table I) that CO2 is I(2). They don’t publish the test statistics for the temperature series though, which is awkward, especially since it is the most important variable in their analysis.
They assert that if they find a cointegrating relationship between temperatures and RFAGG (aggregated radiative forcing, including CO2), they will prove that human activity, by causing RFAGG to have a stochastic trend (through economic development and such), causes temperature to have a stochastic trend too.
“Rather, the stochastic trends in temperature reflect the stochastic trends in the radiative forcing of greenhouse gases and anthropogenic sulfur emissions. These trends are like “fingerprints” that can be used to identify the effect of radiative forcing on temperature.”
So their statement on page 271, that “consistent with our argument that temperature itself is not I(1), the increase in solar activity has little effect beyond the first year,” is actually their hypothesis. Namely, that the stochastic trend in temperature can be explained by the stochastic trend in GHG’s. They actually refer to GLOBL (the global mean temperature series, so not ‘temperature in itself’, according to them) as being I(1) on page 267. This is opinion, not data-driven fact.
Kaufmann and Stern, over and over again (I’ve now read enough of their papers), bend over backward to allow for some kind of relationship between GHG’s and temperatures within a cointegration framework. It’s clear that they have their preferred hypothesis, and that they are attempting to find a matching method. However, as both Beenstock and Reingewertz (2009) and Kaufmann and Stern (2000) state, the issue with GHG’s and temperatures is that one is I(1) and the other is I(2).
“Normally, this difference would be sufficient to reject the hypothesis that global temperature is related to the radiative forcing of greenhouse gases, since I(1) and I(2) variables are asymptotically independent” BR2009
“The univariate tests indicate that the temperature data are I(1) while the trace gases are I(2). That is, the gases contain stochastic slope components that are not present in the temperature series. This result implies that there cannot be a linear long-run relation between gases and temperature. These univariate tests are not, however, conclusive.” KS2000
They go on to present their model, but then fail to acknowledge that they are trying to cointegrate the two stochastic trends of different order (i.e. GHG’s and temperature) as if nothing is wrong. Beenstock and Reingewertz state the following about Kaufmann and Stern (2006):
“Others noticed that they [GHG’s] are I(2) variables, but inappropriately used standard cointegration tests instead of polynomial cointegration tests”
Beenstock and Reingewertz (2009) then say, ‘OK, we’ll give the AGWH the benefit of the doubt’ and take temperature to be I(1) and allow GHG’s to be I(2) and then test, very generally, for polynomial cointegration. This would allow for a correlation structure between temperatures and CO2.
And then they reject that relationship. So, simply put, it boils down to whom you believe. Is temperature I(1) or I(2) or what? Here are the results of my little literature review on the nature of temperature data:
** Woodward and Gray (1995)
– reject I(0), don’t test for I(1)
** Kaufmann and Stern (1999)
– confirm I(1) for all series
** Kaufmann and Stern (2000)
– ADF and KPSS tests indicate I(1) for NHEM, SHEM and GLOB
– PP and SP tests indicate I(0) for NHEM, SHEM and GLOB
** Kaufmann and Stern (2002)
– confirm I(1) for NHEM
– find I(0) for SHEM
** Beenstock and Reingewertz (2009)
– confirm I(1)
I also managed to replicate the tests using four different datasets (two versions of GISSTEMP, HADCRUT, CRUTEM3), and found, in all instances, with or without drift in the test equation, that the series are I(1).
So I guess temperature is I(1), right? And GHG’s I(2)? And then the Beenstock and Reingewertz approach is far more correct than the Kaufmann and Stern (2006) approach (which is plain wrong if temperature and GHG’s are not of the same order of integration)?
Then I guess I’ll go with Beenstock and Reingewertz.
PS. After reading three Kaufmann and Stern papers I have to say that they grant themselves a lot of leeway in picking which test results are ‘conclusive’ and which can be ignored. Most of the time, the results that can be ignored are the ones that are in conflict with hypothesized physical relationships within the AGWH… the same hypothesized physical relationships they are trying to test.
Very strange methodology.
Phil. (09:48:27) :
“Again you reveal your ignorance …”
Having dealt with this guy in the past, I can say this is beyond the snark of the Pot vis a vis the Kettle. Don’t let him waste your time.
davidmhoffer (20:13:02) :
“In order to achieve the proposed temperature increase the AGW Hypothesis is built on the assumption that there is a long term positive feedback from CO2 that exceeds the resulting long term negative feedbacks.”
I believe you have it backwards. The AGW hypothesis relies on short term positive feedback, particularly with water vapor, to amplify the increase in the analogous capacitance or the height of the dam, but long term, the T^4 feedback forms a dominating outer feedback loop to keep temperature BIBO stable overall. At least, this is the consensus view, though the wilder-eyed prophets of doom preach that positive feedback of melting permafrost, etc., will overwhelm T^4 and send us spiraling down the path of a runaway greenhouse. I agree, based on past history, that this scenario is dubious at best. But, falsification of the runaway hypothesis will not incapacitate the AGW beast.
“…by raising a slight technicality and then claiming…”
Then, please stop saying things like “the amount of energy going from Sun to Earth will equal exactly the amount of energy being radiated back by the Earth.” Say, the power or the energy flux. And, “But the height of the dam[] makes no difference at all.” It does make a difference. More CO2 in the air will raise temperatures on Earth, though I agree, to a much lesser extent than has been claimed.
Bart
Then, please stop saying things like “the amount of energy going from Sun to Earth will equal exactly the amount of energy being radiated back by the Earth.” Say, the power or the energy flux. And, “But the height of the dam[] makes no difference at all.” It does make a difference. More CO2 in the air will raise temperatures on Earth, though I agree, to a much lesser extent than has been claimed.>
power being energy per unit of time; fine with your terms of reference, but the conversation started out as a generalization regarding energy balance, and the wording I used was sufficient to illustrate the concept, not build a model. The dam analogy was similarly designed to illustrate a concept, and in fact, the amount of water flowing past the dam is, on average, the same over the long haul, while raising or lowering the dam has a huge but temporary effect on the flow rate (like the paper suggests). If we were talking about a really big dam and a really tiny river that would be somewhat different, but the Sun is a really big river and CO2 a teeny weeny dam. “No difference” is pretty much correct in the context it was presented. Technically there are 1024 bytes in a kilobyte; for practical purposes 1000 is close enough.
I’m not trying to build an analogy here that can be converted to a computer model; I’m trying to illustrate some basic concepts. The scaremongers leave the impression that CO2 generates new power input. It doesn’t. It messes with the flow of the power already there by damming it up. But it’s a teeny weeny dam.
VS (11:10:03)
“After reading three Kaufmann and Stern papers I have to say that they grant themselves a lot of leeway in picking which test results are ‘conclusive’ and which can be ignored. Most of the time, the results that can be ignored are the ones that are in conflict with hypothesized physical relationships within the AGWH… the same hypothesized physical relationships they are trying to test.”
One difference between time-series analysis in econometrics and physical science is the additional constraint in the latter that any relationship has to conform to known physical laws. Kaufmann and Stern recognise this, Beenstock and Reingewertz apparently don’t.
Hence, when univariate analysis comes up with an unphysical I(1) relationship, Kaufmann and Stern re-examine the assumptions behind the statistics and come up with an improved and more sensitive approach. Beenstock and Reingewertz just plough on regardless.
Econometrics is not my field but Kaufmann looks like he’s considerably more highly cited than Beenstock.
TomP,
But we don’t have a complete understanding of all the physical factors that may come into play. Letting the data speak for itself instead of torturing it until it shows what you expect to see is probably the safer method.
For example:
Increased CO2 means faster plant metabolism, and plants also take up large amounts of water vapor, a more important greenhouse gas.
Increased CO2 affects algae growth and thus may make tiny changes in the oceans’ albedo or IR emissivity.
Increased CO2 might have these effects differentially in nutrient-rich/warm environments, making tiny changes to the atmospheric circulation and the relative latitudinal cloud coverage and distribution.
CO2 blocks downward IR from the sun, keeping it from directly melting snow (snow is a great IR absorber/emitter but is highly reflective in the visible spectrum). This might change polar circulation patterns.
Perhaps as radiative cooling slows down, convective cooling speeds up. Perhaps above a certain threshold this makes clouds more likely to form thunderstorms, which emit upward radiation via sprites, X-rays, and gamma rays. Perhaps this modifies the global electric charge distribution, affecting everything from surface evaporation to cloud formation to the ions in the mesosphere out to a couple of Earth radii.
One of the problems I have with the simple greenhouse effect is that it perfectly models our atmosphere as a semi-transparent solid. If it’s a perfect model of a solid, it’s bound to be a far less than perfect model of a gas, especially a wet gas.
@Tom P
You wrote:
“One difference between time-series analysis in econometrics and physical science is the additional constraint in the latter that any relationship has to conform to known physical laws. Kaufmann and Stern recognise this, Beenstock and Reingewertz apparently don’t.”
Are you suggesting that the (estimated) model presented on p. 269 (eq. 13-22) of Kaufmann et al (2006) comes anywhere near ‘conforming to known laws of physics’? It can at best be described as a hypothetical approximation or educated guess.
There is a lot of ‘distance’ between experimental physical results, and climate models. I reckon it’s about the same as the distance between macroeconomic models and things we know about people and the nature of their preferences.
Also note that time series analysis is a sub-field of econometrics; there are no two variants of TSA, as the laws of probability don’t change when the interpretation of your coefficients does.
Finally, establishing a cointegration relationship between I(1) and I(2) series via a regular ADF test (i.e. Kaufmann et al 2006) doesn’t conform to the known laws of mathematics and probability theory. Perhaps Kaufmann and Stern could begin by recognizing that, first.
Ahem. I meant: perhaps Kaufmann, Kauppi and Stock could begin by recognizing that, first. 🙂
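To make the I(1)-versus-I(2) distinction above concrete: an I(2) series has to be differenced twice before it becomes stationary, so its first difference still wanders like a random walk, and techniques built for I(1) levels don’t carry over. A minimal pure-Python sketch (the seed and series lengths are arbitrary illustrations, not anything from the papers):

```python
import random

random.seed(3)

shocks = [random.gauss(0.0, 1.0) for _ in range(500)]

# I(1): cumulative sum of the shocks.  I(2): cumulative sum of the I(1) series.
i1, total = [], 0.0
for s in shocks:
    total += s
    i1.append(total)
i2, total = [], 0.0
for v in i1:
    total += v
    i2.append(total)

d1 = [b - a for a, b in zip(i2, i2[1:])]  # first difference of I(2): still a random walk
d2 = [b - a for a, b in zip(d1, d1[1:])]  # second difference: back to iid shocks

# The once-differenced series still wanders over a far wider range than the
# stationary twice-differenced one -- one differencing is not enough for I(2).
print(max(d1) - min(d1), max(d2) - min(d2))
```

Note that `d2[k]` is exactly `shocks[k+2]`: differencing twice undoes the double summation, which is what “integrated of order 2” means.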
I think I can fill in a lot of blanks in this discussion.
First, someone mentioned qualifications. I am an economist who has actually been hired by an environmental economics Ph.D. program to teach stuff like cointegration to their students. I know the theory and techniques in the paper, and how it would be applied to this data (in fact, I did something like this as a class exercise in Spring 2000 at Tulane).
Second, I’ve been looking at GW data casually since 1988. I’m not a skeptic: I’ve thought since around 1990 that a lot of the GW statistical relationships did not, and could not be made to, make sense. Data is like building blocks, and the statistics is how it all goes together – if it looks like a Frankenstein monster you’ve got a problem.
Third, the reason you should pay at least some attention to economists in these discussions is that, like climatology, our field is largely non-experimental. We’re used to looking at a past data set and opining on what it can and can’t be consistent with.
So, here’s a primer on the point of this paper.
First, disregard the polynomial part. That’s a modelling tweak that probably doesn’t matter too much (if anything, it makes me think they fished a bit and thus doubt the conclusions).
Second, integration is used here in the sense of a summation. Specifically, that an observation is the sum of its past value plus some forcing behavior. It isn’t clear that CO2 data is integrated. But it seems plausible: today’s concentration arises from yesterday’s concentration plus today’s changes. Now, that isn’t being used as a definitional identity: it’s a claim that the CO2 from yesterday (for the most part) didn’t go anywhere, which seems plausible. The same is probably true of temperature measurements: temperature today depends on temperature yesterday because that heat didn’t completely dissipate. Both of these merely mean that the observed data depends on the past history of the same data. Nothing controversial there.
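The “integration as summation” idea can be sketched in a few lines of pure Python: an I(1) series is just the running sum of its own shocks, so today’s level is yesterday’s level plus today’s change, and first-differencing recovers the stationary shocks (all numbers here are illustrative, not the paper’s data):

```python
import random

random.seed(1)

def integrated_series(n, sigma=1.0):
    """An I(1) series: each value is the previous value plus a fresh shock."""
    level, out = 0.0, []
    for _ in range(n):
        level += random.gauss(0.0, sigma)  # today's change
        out.append(level)                  # today's level = sum of all past changes
    return out

y = integrated_series(200)

# First differences recover the shocks, which are stationary with mean near zero.
diffs = [b - a for a, b in zip(y, y[1:])]
print(sum(diffs) / len(diffs))
```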
Third, the problem with integrated variables is spurious correlation. This is where two integrated variables have a high correlation coefficient, but that number is meaningless because the data series aren’t related to each other (example below). In particular, this means (calling Al Gore) that plots of the data series against each other look like they’re related. So, here’s an example of two data series that must be unrelated, but which are each integrated, and which will match up nicely on a scatter plot: a time series of the number of total wins the Yankees have on every calendar day of a season against the same series for the Red Sox. Obviously, these aren’t related (other than on days when one team beats the other). And yet if you graphed them against each other, they both start at 0, and progress together up through 20, 40, 70, perhaps even 90 – almost in lock step. So, if CO2 concentration and temperature (however measured) are integrated, spurious correlation is a potential outcome. A famous econometrics paper from the 1980’s showed that the R-squared of spuriously correlated series actually has an expected value close to 0.5.
Fourth, the only way two integrated series can be related to each other without spurious correlation is if they are cointegrated. This is statistically testable with any pair of series – whether you are a climatologist, physical scientist, or whatever. No one should be knocking this: it should be a standard part of the statistical tools used by climatologists on most of their data. It isn’t.
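What a cointegration test actually checks can be sketched in pure Python rather than a statistics package: regress one series on the other and ask whether the residuals are stationary, here via a bare-bones Dickey-Fuller-style t-statistic. The simulated data, the 2.0 coefficient, and the cutoff mentioned in the comment are illustrative assumptions, not the paper’s procedure:

```python
import random

random.seed(7)

# x is a random walk; y shares x's stochastic trend plus stationary noise,
# so the pair is cointegrated by construction.
n, level, x = 500, 0.0, []
for _ in range(n):
    level += random.gauss(0.0, 1.0)
    x.append(level)
y = [2.0 * xi + random.gauss(0.0, 1.0) for xi in x]

def ols(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((a - c) * (d - my) for a, c, d in zip(x, [mx] * n, y)) / \
        sum((a - mx) ** 2 for a in x)
    return b, my - b * mx

# Step 1: regress y on x; the residuals are the candidate stationary combination.
b, a = ols(x, y)
e = [yi - (a + b * xi) for xi, yi in zip(x, y)]

# Step 2: Dickey-Fuller-style regression of delta e_t on e_{t-1}.  A strongly
# negative t-statistic means the residuals mean-revert, i.e. cointegration.
lag = e[:-1]
de = [e[t + 1] - e[t] for t in range(len(e) - 1)]
rho = sum(l * d for l, d in zip(lag, de)) / sum(l * l for l in lag)
resid = [d - rho * l for l, d in zip(lag, de)]
s2 = sum(r * r for r in resid) / (len(resid) - 1)
t_stat = rho / (s2 / sum(l * l for l in lag)) ** 0.5
print(t_stat)  # far below conventional (roughly -3.3) cutoffs -> cointegrated
```

Run the same two steps on two independent random walks and the t-statistic typically stays near zero: the residuals wander, and the “relationship” is exposed as spurious.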
Lastly, all of this can also be done with sets of data rather than pairs, it just makes the math more complex.
What these guys are showing is that, for the data they used, the series are integrated (of different orders), are not cointegrated, and therefore any apparent relation between them must be spurious. The rest is just details.
Methodologically, the appropriate way to criticize this result is to find appropriate data that do appear to be cointegrated. This is not something that people are doing out in the literature. Having done this with some climate data myself over the years, I have a sneaking suspicion that it’s because none of it is cointegrated.
A couple of extra thoughts: not one, but two Nobel prizes in economics have been awarded for the theory of integration and cointegration. We’re all quite sure there will be at least one more.
Also, the lack of cointegration between a series and a revision of the same series is actually a huge result. Cointegration of revisions is a standard way to tell if you’ve done something moronic in your revision. The fact that a series fails this is a huge red flag.