New paper on mathematical analysis of GHG

Polynomial Cointegration Tests of the Anthropogenic Theory of Global Warming

Michael Beenstock and Yaniv Reingewertz – Department of Economics, The Hebrew University, Mount Scopus, Israel.

Abstract:

We use statistical methods designed for nonstationary time series to test the anthropogenic theory of global warming (AGW). This theory predicts that an increase in atmospheric greenhouse gas concentrations increases global temperature permanently. Specifically, the methodology of polynomial cointegration is used to test AGW when global temperature and solar irradiance are stationary in 1st differences, whereas greenhouse gas forcings (CO2, CH4 and N2O) are stationary in 2nd differences.

We show that although greenhouse gas forcings share a common stochastic trend, this trend is empirically independent of the stochastic trend in temperature and solar irradiance. Therefore, greenhouse gas forcings, global temperature and solar irradiance are not polynomially cointegrated, and AGW is refuted. Although we reject AGW, we find that greenhouse gas forcings have a temporary effect on global temperature. Because the greenhouse effect is temporary rather than permanent, predictions of significant global warming in the 21st century by IPCC are not supported by the data.

Paper here (PDF)

281 Comments
February 14, 2010 1:12 pm

Thank you for one more “nail in the coffin”……
Roger

Richard M
February 14, 2010 1:17 pm

OT, Dr. Spencer has a new post that shows the Jan. temperature variation across the world. What I see does not seem to agree with some reports at WUWT. Particularly, it has Canada completely above average and I seem to remember several posts complaining of cold temps in central Canada. It also begs the question of where the cold air came from that led to the cooler temps in the US.

Stephen Wilde
February 14, 2010 1:18 pm

If true, another paper supportive of my suggestion that more greenhouse gases cause the hydrological cycle to speed up, thus neutralising the effect on tropospheric temperatures and maintaining the basic equilibrium between sea surface and surface air temperatures dictated by sun and oceans as they interact over time.

DCC
February 14, 2010 1:21 pm

I think you caused the economics.huji.ac.il web site to crash 🙂

February 14, 2010 1:24 pm

Interesting indeed. What is really amazing though is the slew of papers that are surfacing now that would not have done so even a few months ago or at best would have been met with scorn and derision. This sort of balance has been suppressed for too long.

Hank
February 14, 2010 1:32 pm

So what’s the status of this paper? Submitted? Published? What journal?

George Turner
February 14, 2010 1:42 pm

Well they are economists, not journalists who write for the WWF, so there’s little chance their paper will survive the IPCC ‘review’ process.

February 14, 2010 1:50 pm

Viscount Monckton analyzes global warming numbers: click

February 14, 2010 1:51 pm


Hank (13:32:44) :
So what’s the status of this paper? Submitted? Published? What journal?

Hank – Hank whats-his-name of RC fame maybe?
In any case, you can find it here Hank:
http://economics.huji.ac.il/facultye/beenstock/Nature_Paper091209.pdf
It was slow on loading for me via a Google link.

February 14, 2010 1:51 pm

Someone showed me this paper a few days ago.
I’m not much good with statistics – but I’m hoping that among the excitement generated a few people can explain what exactly these statisticians were testing against.
Because if CO2 had a direct link to surface temperature, we could just run a simple/complex statistical test – which seems to be what they did.
But if CO2 and lots of other effects link to surface temperature then how can they test it? What are they testing?
For example – not that I am convinced by the argument – but the modeling community says that when they run their models with the effects of CO2 AND aerosols, they can explain the last 100 years of climate history. (Seems like a necessary but not sufficient proof of climate models..)
Did this paper test against that theory? Including aerosols? Because I couldn’t find any mention of it.
If not, they are too late, the climate modeling community has already rejected the theory that this paper appears to reject.

Britannic no-see-um
February 14, 2010 1:53 pm

Might I be so bold as to suggest that alarmist politicians, at least around here, are stationary in differences of the 3rd kind.

R. Gates
February 14, 2010 1:53 pm

From both a mathematical standpoint, and the very marginal “science” involved in this paper…it is pure crap.

pat
February 14, 2010 1:54 pm

meanwhile, let’s not imagine BBC is changing its spots:
if you listen to the program (link at top of summary), you will hear Dr. Brown repeatedly say we MUST reduce our carbon footprint, and therefore electronics in cars, however faulty, are not all that dangerous:
12 Feb: BBC: Science in Action
**Modern cars, software and safety
Dr Colin Brown is the Engineering Director at the Institute of Mechanical Engineers in the UK. He explained why the brakes on the Prius were causing problems..
**Potatoes and Climate change
In Peru, in the Andes, the potato is a vital, staple crop. Due to climate change, in particular a change in rain patterns, crop yields have been falling over the past few years. Now scientists, from all around the world have been working on different strategies to fix the problem…
http://www.bbc.co.uk/programmes/p0063zcn
as Delingpole said on his blog yesterday:
Climategate: the official cover-up continues
“If there’s one thing that stinks even more than Climategate, it’s the attempts we’re seeing everywhere from the IPCC and Penn State University to the BBC to pretend that nothing seriously bad has happened, that “the science” is still “settled”, and that it’s perfectly OK for the authorities go on throwing loads more of our money at a problem that doesn’t exist.”
http://blogs.telegraph.co.uk/news/jamesdelingpole/100025934/climategate-the-official-cover-up-continues/

rbateman
February 14, 2010 1:55 pm

Richard M (13:17:03) :
OT, Dr. Spencer has a new post that shows the Jan. temperature variation across the world. What I see does not seem to agree with some reports at WUWT. Particularly, it has Canada completely above average and I seem to remember several posts complaining of cold temps in central Canada. It also begs the question of where the cold air came from that led to the cooler temps in the US.

Was wondering where that cold air came from myself. I would have thought that it came from the Pole, but the westward motion of the jet streams and the warm anomaly over Canada seems to preclude it getting there.
Maybe it just fell out of the sky.

rbateman
February 14, 2010 1:58 pm

Stephen Wilde (13:18:21) :
How about biological cycle speeding up? More CO2 = more bacteria, protozoa, plankton, etc. If I leave the contents of my refrigerator out in the warm air, it gets eaten whether I am able to get to it first or not.
If the bugs don’t eat it, it ain’t food.

Doug in Seattle
February 14, 2010 2:03 pm

I wonder if this paper will get published. What is linked looks like a submission rather than a pre-print.

Tom P
February 14, 2010 2:05 pm

This is certainly the first work to state “it is not the level of greenhouse gas forcings that matters, but the change in the level.” There is no physical basis for such a behaviour, and none is even suggested in this paper. I take it the authors are economists, not scientists.
The pdf is entitled Nature_Paper091209. However, given that Beenstock is releasing this now, I assume Nature are not planning on publishing it.

February 14, 2010 2:10 pm

Hmm. They come out swinging in the abstract with, “Therefore, greenhouse gas forcings, global temperature and solar irradiance are not polynomially cointegrated, and AGW is refuted.”

David
February 14, 2010 2:13 pm

Why is it in all the books that I have read on the subject of climate change, all the newspaper articles etc. etc., I have not seen any mention of Precession? As I understand it (and I am not in any way qualified) all the calculations for a planetary rise in temperature are inaccurate unless Precession is taken into account as a base figure. Astronomers have known of Precession for two thousand years. Navigators have made allowance for Precession for hundreds of years. Precession is tabled in the navigator’s bible, Norie’s Tables. So why do we read nothing of it? Can any qualified person explain?

JDN
February 14, 2010 2:13 pm

Where the hell do these guys get off using “nonstationary time series” and “methodology of polynomial cointegration”? Looking through their paper, they say things like “The method of cointegration is designed to test hypotheses with time series data that are non-stationary to the same order, and to avoid the pitfall of spurious regression.” So what. How was it designed? Under what circumstances will this design succeed in modeling reality and how might it fail? And this is supposed to be a Nature paper of general interest? How are they establishing causality where others have failed? There is no discussion nor any proof. They simply assert that they are correct on the strength of tests that they don’t explain.
The statements they make are uninterpretable to anyone but them and a small group of people who like this methodology. If I were to spend a couple weeks figuring out what their ridiculous jargon means, I suspect it could be rewritten using much simpler mathematics. I’ve done this before with other fields, but I’m really getting sick of it. These authors are doing their level best to be priests of climatology. I condemn their efforts and have zero confidence in their conclusions until they put forward a convincing argument that other people can follow.

Jean Parisot
February 14, 2010 2:15 pm

Maybe it just fell out of the sky.
Maybe the heat didn’t?

Doug Badgero
February 14, 2010 2:17 pm

I second @Hank (13:32:44) what is the status of this paper? Also, what does stationary in 1st or 2nd differences mean as described in the paper? In googling the terms it seems to simply mean that the 1st derivative is a constant or the second derivative is a constant, respectively. Any help?

mamapajamas
February 14, 2010 2:19 pm

science of doom,
From the abstract, I got the impression that they were testing against the overly simplistic idea that CO2 causes global warming, which is where the overwhelming majority of GHG news stories center, and which is the level of understanding that has been given to governments all over the world (therefore we must all cease and desist producing CO2 altogether, and give these great scientists a huge grant to continue). THAT is what the paper is showing to be bogus: the very thing that “everybody knows” about climate.

Stephen Wilde
February 14, 2010 2:20 pm

rbateman (13:58:21)
“How about biological cycle speeding up? More C02 = more bacteria, protozoa, plankton, etc. If I leave the contents of my refrigerator out in the warm air, it gets eaten whether I am able to get to it first or not.
If the bugs don’t eat it, it ain’t food.”
If you can show that a change in the speed of the biological cycle can shift the air circulation patterns in the same way that changing sea surface temperatures seem able to do then yes, I’d go along with that 🙂
As for the paper at the head of this thread it does seem a bit ‘thin’ on detail so we’ll just have to wait and see what others say when they’ve deconstructed it.

February 14, 2010 2:20 pm

I can just see the headline in the Daily Mail
“Greenhouse gas forcings, global temperature and solar irradiance are not polynomially cointegrated!”
Bu like Hank I don’t see any indication the paper has been published in a journal.

DirkH
February 14, 2010 2:21 pm

“Tom P (14:05:38) :
This is certainly the first work to state “it is not the level of greenhouse gas forcings that matters, but the change in the level.” There is no physical basis for such a behaviour, and none is even suggested in this paper.”
May I help you? A given level of greenhouse gas forcings leads to a certain equilibrium temperature. If you want a higher temperature you need to add greenhouse gases (assuming there were no negative water vapour feedback, this could even work). Simple enough?

Frederick Davies
February 14, 2010 2:23 pm

The link points to a PDF named Nature_Paper091209.pdf, so guess where it has been submitted…
You would probably need a Statistics expert to translate the details, but in page 10 it says: “During the second half of the 20th century greenhouse gas forcings accelerated due in particular to increased carbon emissions. Our model predicts that this effect will be temporary unless these forcing continue to accelerate. Since carbon emissions depend on the level of global economic activity, this continued acceleration would unreasonably imply faster economic growth in the 21st century than in the 20th. Our results also imply that cutting carbon emissions will only induce a short-term reduction in global temperature, leaving no long run effect.”
Ouch!
The conclusion is quite direct too.

Stephen Wilde
February 14, 2010 2:25 pm

Tom P (14:05:38)
The physical basis would be that it takes a while for the neutralising process to catch up so there is only a measurable effect whilst the change in rate of emissions is still in progress.
It’s too foggy for me to be too confident about it at the moment though the results would be convenient.

kadaka
February 14, 2010 2:26 pm

Smokey (13:50:09) :
Viscount Monckton analyzes global warming numbers: click

After clicking:

Climategate: Viscount Monckton Takes a Victory Lap

Huh?

Doug Badgero
February 14, 2010 2:29 pm

While I am not qualified to meaningfully critique this paper I would not discount mathematical methods entirely. The climate is thought to be deterministically chaotic and the various forcings and feedbacks are obviously deterministic.

February 14, 2010 2:31 pm

From the paper, solar irradiance contributed 0.40°C (74%) of the 0.54°C warming from 1880-2000, and CO2 contributed 0.09°C (17%) (and note man-made CO2 emissions only constitute 3-4% of the atmospheric CO2; therefore the man-made CO2 contribution to temperature would be 0.0036°C):
Contributions to Global Warming in the 20th Century (°C)
                        1940-2000   1880-2000
Solar irradiance          0.17        0.40
rfCO2                     0.20        0.09
rfCH4                     0.11        0.03
rfN2O                     0.002       0.03
Total                     0.48        0.54
Change in temperature     0.43        0.54

dp
February 14, 2010 2:32 pm

If I understand this paper correctly, and I wouldn’t bet that I do, the part of global temperature influenced by GHG will remain fixed if the rate of increase of GHG is fixed. If the rate of increase is increased or reduced then the global temperature will go up or down, following the GHG trend. This seems a bit too simple which usually means I need to read it again.
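dp’s reading can be sketched numerically. This is a hypothetical toy model, not the paper’s estimated equation; the gain of 5 and the forcing path are invented purely for illustration of the “change in the level matters” claim:

```python
import numpy as np

t = np.arange(200)
f = 0.02 * t                                 # forcing rising at a constant rate
# Toy response: temperature reacts to the CHANGE in forcing, not its level
dT_effect = 5.0 * np.diff(f, prepend=f[0])

# With a linear forcing the first difference is constant, so the
# temperature effect settles at a fixed level instead of trending.
```

Only if the forcing accelerates (so its first difference itself keeps growing) does this toy effect keep rising, which matches the paper’s remark that the effect is temporary unless forcings continue to accelerate.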

Richard M
February 14, 2010 2:33 pm

R. Gates (13:53:54) :
From both a mathematical standpoint, and the very marginal “science” involved in this paper…it is pure crap.
Translation: I don’t understand the paper and it doesn’t agree with my worldview, so it “must” be a piece of crap.
BTW, I don’t understand the paper either, but I will await the opinion of those who do understand the statistics used.

Richard Telford
February 14, 2010 2:43 pm

I never cease to be amazed how credulous climate skeptics are. Constantly forgetting that the theory of AGW depends on an understanding of radiative physics, and hyping the most dubious of analyses. Like this paper. The findings of which are blatently aphysical. I defy anyone to come up with a reasonable physical mechanism to explain this “our results clearly indicate that it is not the level of greenhouse gas forcings that matters, but the change in the level”

February 14, 2010 2:51 pm

Slightly off-topic but, I think, highly relevant.
I wonder if we would ever hear Beethoven’s symphonies if they were subject to the peer-review process? Some Salieri would opine that “from both a musicological standpoint, and the very marginal “harmony” involved in Beethoven’s scores…they are pure crap.” His peers would applaud, because, you see, they couldn’t hope to be Beethoven’s peers, could they?
I suspect that Mozart was peer-reviewed by Salieri and his peers. It is known that a second-rate composer Hasse remarked that “if Mozart is to live another 10 years, we all shall end up penniless.”
Franz Schubert was also peer-reviewed to oblivion during his life. He always submitted his compositions to all kinds of competitions, and never won. Anybody remembers the winners, by any chance? His namesake, professor Franz Schubert from Berlin, even threatened to sue him for the insult of attributing “that crap” to his noble, peer-reviewed name.
Wonderful thing, this peer-review process! It eradicates talent and daring thought in embryo, and perpetually protects the well-being of the well-connected mediocrity. You want to kill something — science, music, art, culture, education, anything? Institutionalize it, make it dependent on government subsidies, and make any publication subject to peer review.
Et voila! It’s dead.

debreuil
February 14, 2010 2:52 pm

Richard M
Central Canada (Manitoba) has been mostly extremely cold or extremely warm this winter. Recently it has been about seasonal, but Nov was very warm, Dec very cold, then Jan half and half (of the extremes), IIRC.

rbateman
February 14, 2010 2:52 pm

Jean Parisot (14:15:07) :
Maybe it just fell out of the sky.
Maybe the heat didn’t?

Now, we have moved the problem from the area stated to the area blocking.
The question was not what caused the warmer winter anomaly in Central Canada, but how the colder winter anomaly got down to the Central US without a clear path.
Heat, when it comes to depicting anomalies, is extremely misleading.
If (for example) the temperatures in Central Canada were -10F (normally -20F)
and the Central US was -10F (normally 1F) the anomaly picture is misleading.
Which is why I really don’t care to see them without the accompanying real temp maps.
The current forced heat agenda wants to beat the world up with a “look only at the HOT anomaly, and pay no attention to the man behind the anomaly curtain.”
“Who dares come before the Great and Terrible C0z?”

Turning Tide
February 14, 2010 2:53 pm

Richard Telford: who is being “credulous”? Most of the posts commenting on the paper are questioning its status or disputing its claims. Methinks you’re seeing what you want to see.

jorgekafkazar
February 14, 2010 2:54 pm

Tom P (14:05:38) : “I take it the authors are economists, not scientists.”
What?! Not members of the high priesthood? This is fully in accord with the established traditions of climatology. The primary qualification is to be politically correct. Otherwise, why would the head of the IPCC be a transportation engineer and author of smutty books?

bob
February 14, 2010 2:55 pm

Where are the data tables?

February 14, 2010 2:56 pm

Richard Telford:
“Blatently aphysical”? It seems to me that your remarks are, first of all, “blatantly illiterate.”

February 14, 2010 2:58 pm

I “read” the paper, but please don’t ask me!

Michael Jankowski
February 14, 2010 2:58 pm

JDN,
Looks like you’ll need to go through references i (which was a Nature article), v, and vi.

KTWO
February 14, 2010 3:02 pm

Not heretics? Or deviants better not discussed. Abominations?
Well, as someone said, they are still economists. So they have not defected, sinned, or mutated into beasts.
They must be “ignorant beyond belief” instead.
We shall see. The PDF doesn’t seem to tell enough about the code used or the data. I don’t see that as reason for concern; it will show up. Or not.
I look forward to what the experts will tell us.

Mike Ramsey
February 14, 2010 3:04 pm

Finally, we have estimated equation (2) using revised and extended (to 2006) data for solar irradiance.xx Prior to 1980 these data were based on various proxy measures. Data since 1980 are based on instrumental measures from satellites. Whereas the data in NASA GISS used 15 years of satellite data, the revised data use 26 years. We note that the revised data behave differently to the original in that the ratio of revised to original decreases during 1850 to 1950 but increases subsequently. Also, surprisingly, the revised series is not cointegrated with the original. We have focused on the original data since these were used by others who claimed that global temperature is cointegrated with solar irradiance and greenhouse gas forcings.
When we use the revised data, equation (2) ceases to be cointegrated. This happens because, as noted, the revised data are quite different to the original. The revised data confirm that greenhouse gas forcing do not polynomially cointegrate with global temperature. However, they also reject the hypothesis that global temperature varies directly with the change in greenhouse gas forcings, and indeed, that solar irradiance is a driver of climate change.
What do the authors mean when they say that “the revised series is not cointegrated with the original” ?  What is being implied here about the quality of the two data sets?
Mike Ramsey

February 14, 2010 3:04 pm

Frederick Davies:

The link points to a PDF named Nature_Paper091209.pdf, so guess where it has been submitted…

And accepted?
A Google scholar search shows nothing, and a Nature.com search shows nothing.
In his resume webpage – http://economics.huji.ac.il/facultye/beenstock/cv.pdf – it is listed under “current working papers”.

jorgekafkazar
February 14, 2010 3:05 pm

JDN (14:13:26) : “Where the hell do these guys get off using “nonstationary time series” and “methodology of polynomial cointegration”? Looking through their paper, they say things like “The method of cointegration is designed to test hypotheses with time series data that are non-stationary to the same order, and to avoid the pitfall of spurious regression….” I condemn their efforts and have zero confidence in their conclusions until the put forward a convincing argument that other people can follow.”
Part of the risk of visiting science blogs is that you may run into terminology peculiar to areas of science of which you are partly or totally ignorant. Most people either look it up or let it go, instead of flaunting their ignorance in public.

Daniel H
February 14, 2010 3:06 pm

I’m reading it now… so far so good. I’ve been saying the same thing about polynomial cointegration for years but no one would listen (okay, not really). However, this looks like a draft. Was it really accepted by Nature? If so, has hell frozen over?

mikep
February 14, 2010 3:08 pm

Before people get too excited about the methods involved you should be aware that there is a huge literature in econometrics on co-integration, and Granger was given the Nobel prize in econometrics for introducing the concept. The basic idea is that if time series are not stationary then the standard regression techniques will often give good-looking but spurious relationships. Granger and Newbold found, for example, that two random walks would often give high and apparently significant R2 if correlated with each other, even though there was no real relationship. What Granger showed was that it was sensible to try and explain stationary variables with a combination of non-stationary variables provided that the combination of non-stationary variables was itself stationary. There is a nice light-hearted explanation by Michael Murray, originally in The American Statistician I think, here
http://www.ulrich-fritsche.net/Material/murray1994.pdf
The first thing any econometrician does these days is to examine any time series to see whether it is stationary or not. The jargon is no worse than in any other field. It’s just a bit unfamiliar.
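The Granger–Newbold point mikep describes is easy to reproduce. A minimal simulation (illustrative only, not taken from the paper) regresses one random walk on another, independent one and counts how often the slope looks “significant”:

```python
import numpy as np

rng = np.random.default_rng(0)

def spurious_t(n=100):
    """OLS t-statistic of the slope when regressing one random walk
    on a second, completely independent random walk."""
    x = np.cumsum(rng.normal(size=n))
    y = np.cumsum(rng.normal(size=n))
    X = np.column_stack([np.ones(n), x])          # intercept + regressor
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS fit
    resid = y - X @ beta
    s2 = resid @ resid / (n - 2)                  # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)             # coefficient covariance
    return beta[1] / np.sqrt(cov[1, 1])

# Fraction of 500 independent pairs whose slope appears "significant" at 5%:
frac = np.mean([abs(spurious_t()) > 1.96 for _ in range(500)])
print(f"{frac:.0%} of unrelated random-walk pairs look significant")
```

In a correctly specified setting that fraction would be about 5%; for independent random walks it is typically well over half. That is the spurious-regression pitfall the cointegration machinery is built to avoid.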

rbateman
February 14, 2010 3:08 pm

Stephen Wilde (14:20:11) :
I was going along a line even simpler: Coyotes and rabbits.
If you have an explosion of rabbit pop, the Coyotes prosper.
When the rabbits thin out, so do the Coyotes.
If we release a lot of prehistoric biofuels, there should be a biological response to consume them.
If they are not biofuels in origin, biological forces won’t eat them, geology will, and it’s really slow unless the oceans get really cold fast.

kadaka
February 14, 2010 3:09 pm

pat (13:54:28) :
(…)
**Potatoes and Climate change
In Peru, in the Andes, the potato is a vital, staple crop. Due to climate change, in particular a change in rain patterns, crop yields have been falling over the past few years. Now scientists, from all around the world have been working on different strategies to fix the problem…
http://www.bbc.co.uk/programmes/p0063zcn
(…)

An article about that was linked to in this WUWT post.
Peru’s mountain people face fight for survival in a bitter winter

Climate change campaigners and development NGOs say that the failure of Copenhagen has signed the death warrant for hundreds of thousands of the world’s poorest and that a quarter of a million children will die before world leaders meet again to try to thrash out another deal at the United Nations next climate change conference in Mexico in December. Among them may be these children of the high mountains.

In the article you’ll find a major cause of death is lack of medicine. As in any medicine. The one clinic mentioned had just aspirin. People are dying from lack of antibiotics. They are dying from lack of cough syrup.
How much medicine could be bought with the sums spent on AR4? With what has been spent and will be spent chasing “green energy”? People are dying NOW, but to “Save The Planet!” these “climate change campaigners” are willing to spend into the trillions of dollars to slay the fearsome CAGW dragon, which they assure us will save humanity hundreds of years from now. Will there be any of these Peruvian natives left to be saved then?

Bart
February 14, 2010 3:11 pm

Doug Badgero (14:17:35) :
“Also, what does stationary in 1st or 2nd differences mean as described in the paper? In googling the terms it seems to simply mean that the 1st derivative is a constant or the second derivative is a constant, respectively. Any help?”
I would expect it means the process is stationary in a stochastic sense in 1st or 2nd differences, as e.g., Brownian motion is stationary in 1st differences.
Richard Telford (14:43:26) :
“I never cease to be amazed how credulous climate skeptics are.”
I never cease to be amazed at how AGW enthusiasts believe failure to suppress information is equivalent to endorsing it.
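Bart’s reading is the standard one, and it can be made concrete numerically (a constructed illustration of the terminology, not anything from the paper): integrating white noise once gives an I(1) series such as a random walk, integrating twice gives an I(2) series, and differencing recovers the stationary shocks.

```python
import numpy as np

rng = np.random.default_rng(42)
e = rng.normal(size=1000)   # white noise: already stationary, I(0)
x = np.cumsum(e)            # random walk: stationary in 1st differences, I(1)
y = np.cumsum(x)            # doubly integrated series, I(2)

# Differencing undoes the integration and recovers the stationary shocks:
d1x = np.diff(x)            # equals e[1:], an I(0) series
d2y = np.diff(y, n=2)       # equals e[2:], an I(0) series
```

This is the distinction the abstract draws: temperature behaves like x (one difference suffices), while the greenhouse gas forcings behave like y (two differences are needed).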

February 14, 2010 3:11 pm

If people have questions or doubts about the paper let me suggest to them what is always suggested to me.
1. read the paper it is all explained there
2. If you have any questions write the authors.
3. The data and the algorithms are all explained in the text, if you think it’s wrong then go do your own science.
All kidding aside, on its face, taken at face value, it’s an interesting analysis of the time series. But:
A. it purports to use a time series of data that many people question. You can’t have it both ways. Well, you can try.
B. It would appear to be at odds with physical theory. Which leaves us a choice: accept the physical theory and hunt for the mistake in the paper. OR accept the paper and reject a large body of science. This choice is determined by pragmatic values. Extant physical theory wins.
C. Aerosols missing from the analysis. Not sure what impact that has and of course people will speculate along predetermined lines.
Side notes: comments about when, where, and if the paper is published look silly to me having read all the climategate mails. Publishing a paper in a pal reviewed journal that does not require posting of the code and data is a meaningless exercise after the revelations of climategate. So, ask these guys to publish their data as used and their code as run. If they do, check it. If they don’t, their papers are as worthless as the words that pass for science in today’s pal reviewed journals that don’t require publication of turnkey replication packages.

February 14, 2010 3:15 pm

I’m with pRadio at the top that, thank heavens, this is another nail in the AGW coffin. But then we still will have to deal with “guerilla” thermofascists in industrial and political positions of power as well as the general public, people who are superstitious as far as science is concerned and whose beliefs regarding AGW bear no relation to the findings of real science.

Alan Wilkinson
February 14, 2010 3:15 pm

I defy anyone to come up with a reasonable physical mechanism to explain this “our results clearly indicate that it is not the level of greenhouse gas forcings that matters, but the change in the level”
If there are adaptive negative feedbacks with lags this could well be the case. Changes in ENSO patterns, for instance, or growth in biosphere activity due to increased CO2. Forests are cooler than deserts.
If the data shows this conclusively, the physics will eventually follow.
The AGW hypothesis itself is a transitory one – the planet eventually settles back to a radiation balance and the argument is about the rearrangement of heat within the surface-atmosphere system when this happens.

George Turner
February 14, 2010 3:16 pm

Richard Telford,
It means there’s a lag for something like the oceans to catch up and compensate.
It’s like the inflation rate. The inflation rate doesn’t actually matter as long as it’s constant because everyone can factor it in. It’s the change in the inflation rate that burns some people and makes other rich as they are forced to adjust to the new rate, depending on how they’ve invested and how their compensation is calculated.
On the scientific point, if global warming could be simply explained with radiative physics then this wouldn’t be the 15th year without any statistically significant warming (according to Phil Jones, former head of CRU).

Hank Henry
February 14, 2010 3:28 pm

_Jim-
Hank – Hank whats-his-name of RC fame maybe?
No that’s not me – for some reason I forgot the last half of my regular nick which is Hank Henry. I rarely go to RC anymore, and when I do I tread lightly. It’s funny how much time those guys waste getting their message just so.

Mike Ramsey
February 14, 2010 3:29 pm

David (14:13:02) :
Why is it in all the books that I have read on the subject of climate change; all the newspaper articles etc etc, I have not seen any mention of Precession. As I understand it (and I am not in any way qualified) all the calculations for a planetary rise in temperature are inaccurate unless Precession is taken into account as a base figure.
David,
It takes the earth 26,000 years to complete one precession cycle. Obliquity takes 40,000 years to complete a cycle. The contribution of a change in precession and/or obliquity to climate over a hundred-year period is correspondingly minuscule.
 http://www.sciencecourseware.org/eec/GlobalWarming/Tutorials/Milankovitch/
Mike Ramsey
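A back-of-envelope check of the timescales Mike Ramsey cites makes the point numerically:

```python
# Fraction of an orbital cycle that elapses in one century.
precession_years = 26_000
obliquity_years = 40_000
century = 100

precession_fraction = century / precession_years  # under 0.4% of a cycle
obliquity_fraction = century / obliquity_years    # 0.25% of a cycle
```

So over the instrumental record, precession and obliquity are effectively constant backgrounds, which is presumably why climate-change books do not dwell on them.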

Mark
February 14, 2010 3:30 pm

Has this paper been peer reviewed?

February 14, 2010 3:36 pm

DCC (13:21:52) :
I think you caused the economics.huji.ac.il web site to crash 🙂

Better them that CA! 🙂

February 14, 2010 3:37 pm

That would be “rather than”, not “that”.

Jimbo
February 14, 2010 3:41 pm

OT – but concerns the “robust and rigorous” claim and glaciers.

“The scientist at the centre of the storm over mistakes by the UN’s climate change panel has broken his silence on the affair to defend his report as “robust and rigorous“.
……..
[Martin] Parry was co-chair of the Intergovernmental Panel on Climate Change’s (IPCC) working group on impacts, which produced a 2007 report that included the false claim that Himalayan glaciers would melt by 2035.”

Peter of Sydney
February 14, 2010 3:42 pm

If the findings of this paper are correct (I presume they are until further notice) then it’s more than just another nail in the AGW coffin. It’s a direct nuclear blast that destroys it completely, and there’s nothing left to even have the need for a coffin. It is now time for the likes of Al Gore, Rudd, Obama, Jones, Mann, etc. to call a truce and re-think the whole AGW debate. Otherwise, they should now be called upon to defend their agendas and theories in a court of law and if found guilty be punished accordingly.

David, UK
February 14, 2010 3:42 pm

Common sense should tell one that the earth cannot experience permanent (or ‘runaway’) warming due to anything as trivial as a tiny rise in a natural gas. The planet would not have existed as it has for the last few billion years if that were the case. We’d have all drowned under 100 feet of water before we’d gotten around to evolving.
@ Richard Telford: “Constantly forgetting that the theory of AGW depends on an understanding of radiative physics.”
Actually, it depends on a lot more than that. But in the absence of such understanding (given that there is so much we just don’t know) it depends on faith.

Robert Austin
February 14, 2010 3:43 pm

Re: Richard Telford (Feb 14 14:43),
So who’s credulous? I haven’t seen much blatant credulity on this thread so far.

David L. Hagen
February 14, 2010 3:45 pm

See: Beenstock & Reingewertz’s Presentation
Beenstock and Reingewertz do not include the effect of water as the most important greenhouse gas. It will be interesting to apply these statistical tests to the full greenhouse model including water. The conventional GHG models assume rising CO2 increases ocean temperatures which increase absolute humidity. Contrast Ferenc Miskolczi’s planetary greenhouse theory which finds the total atmospheric absorption will remain about constant.

Pa Annoyed
February 14, 2010 3:47 pm

Their basic claim is that CO2 follows the statistical pattern you’d get if you took a stationary time series (one where the distribution is constant over time) and integrated it twice. Temperature, they say, matches a stationary series integrated only once. These are two completely different sorts of behaviour that can’t remain correlated for long – so any correlations you do see must be spurious.
The cointegration stuff is about bending over backwards to allow for statistical oddities, because there are some exceptions to the above rule in which variables can be genuinely related even though they appear to have different orders. Cointegration occurs when two time series are both non-stationary, but there is a stationary linear combination of the variables (or their nth differences) – i.e. the difference between them has a constant distribution. This sort of thing can happen if there are certain sorts of feedback mechanisms. So they perform some additional tests to eliminate this possibility, which take up the bulk of the paper.
If anyone wants to check it, I’d start off thinking about their first claim, that the time series are of different orders meaning that the correlation is (without cointegration) necessarily spurious. I’d be concerned that all they’re testing is whether the trend in temperature is entirely down to CO2, which given the noisy jagged temperature and the smooth quadratic Keeling curve it is pretty obviously not. If you have a fast first-order process on top of a slow second order one, it could look first order to tests, although it had a strong second order component. They do cite some earlier papers which might be worth examining. (e.g. v, vi.)
But if that first argument does work, the second part with the cointegration test will probably go through on the same reasoning. However, I’m not a specialist at this sort of thing – you need to find a statistician and ask.
Basically, it’s saying that there is some sort of unknown feedback mechanism, adjusting some other quantity, that compensates for the effect of changes in CO2 within a few years. i.e. CO2 warms the atmosphere, which permanently releases something else, which cools the atmosphere again.
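Pa Annoyed’s point about integration orders can be sketched numerically. The following is only a toy illustration in Python/NumPy, not anything from the paper: the variance-ratio check is a crude stand-in for a formal unit-root test such as ADF, and the "wanders" threshold of 10 is an arbitrary choice for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
eps = rng.normal(size=5000)             # stationary white noise

i1 = np.cumsum(eps)                     # I(1): integrated once (like temperature, per the paper)
i2 = np.cumsum(np.cumsum(eps))          # I(2): integrated twice (like the GHG forcings, per the paper)

def wanders(x):
    # Crude unit-root check: a drifting (nonstationary) series has a variance
    # much larger than the variance of its own first differences. A real
    # analysis would use an ADF or similar test; this is only illustrative.
    return np.var(x) / np.var(np.diff(x)) > 10

print(wanders(i1))              # the I(1) series drifts
print(wanders(np.diff(i1)))     # its first difference is stationary
print(wanders(np.diff(i2)))     # the I(2) series still drifts after one difference
print(wanders(np.diff(i2, 2)))  # it needs a second difference to become stationary
```

Because the two series need different numbers of differencings to become stationary, any apparent correlation between their levels cannot persist, which is the starting premise of the paper’s argument.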

February 14, 2010 3:54 pm

steven mosher


..B. It would appear to be at odds with physical theory..
..C. Aerosols missing from the analysis. Not sure what impact that has and of course people will speculate along predetermined lines.

The point is that the IPCC, climate scientists, all of the people involved in climate modeling have already explicitly made this point.
You can’t explain the temperature changes in the last hundred years by CO2 changes alone.
As I said, I have my own skepticism about models.
But this new paper appears – on the face of it – and I only read it once – to agree with the IPCC!
Therefore, not at odds with “physical theory”.
So it is amazing to see all the cheering!

Syl
February 14, 2010 4:03 pm

scienceofdoom
“For example – not that I am convinced by the argument – but the modeling community says that when they run their models with the effects of CO2 AND aerosols, they can explain the last 100 years of climate history. (Seems like a necessary but not sufficient proof of climate models..)”
They only do this by assuming the warming is CO2 because the models cannot explain the warming otherwise. Bottom line: the models do not understand the magnitude of natural variation and are thus stuck with a tautology.
I do agree, however, in not understanding exactly what this paper is doing. There seems to be something missing or something sideways. IOW, something I can’t articulate bugs me about it.

TanGeng
February 14, 2010 4:07 pm

If this paper is correct, then there must be some kind of rapid negative feedback mechanism that virtually eliminates all of the warming from greenhouse gas increases. It’s a mathematical approach, so I think it requires some careful consideration.
A very strange idea indeed.

George Turner
February 14, 2010 4:09 pm

So if the effect of the absolute CO2 level is temporary, not long term, would it be going too far to say that CO2 affects “weather, not climate”?
Just had to throw that out there. 🙂

Gary Palmgren
February 14, 2010 4:15 pm

As an amateur I do not understand this paper. Nor do I understand Miskolczi’s paper on a constant optical density for the atmosphere. However, I believe I got the gist of both of these papers and I believe they reinforce each other.
Miskolczi claims that the semitransparent nature of the atmosphere in contact with an essentially infinite source of greenhouse gas in the form of water vapor from the oceans is in a state of dynamic equilibrium. As CO2 increases, a little water vapor rains out to keep the net optical density of the atmosphere constant. Remarkably, radiosonde data shows that the humidity above 300 mb has decreased over the last 50 years as CO2 has gone up. This fact rejects all of the GCMs that assume constant relative humidity (which is, or was, all of them).
This new paper by Michael Beenstock and Yaniv Reingewertz looks at the statistics of the changes in temperature, solar irradiance, and CO2 and finds that the first and second derivatives do not match in a series of sophisticated tests that I do not understand. Taking it on faith that they do know their statistics, I find it remarkable that they find that changes in the rate of CO2 emissions cause a short-term rise in temperature for only a few years. This is consistent with Miskolczi, who would certainly allow a short-term change before equilibrium is re-established.
Here is my model of how it could work. CO2 absorbs IR from the ground. Due to the long decay time it collides with other molecules before it re-emits the IR. Each level of the atmosphere is heated by the extra absorption and CO2 only emits IR in agreement with the local temperature. Adding heat to the lower atmosphere will drive more convection and the lapse rate will continue to a slightly higher altitude. The tropopause becomes higher and colder. The stratosphere becomes dryer as the dew point at the tropopause becomes lower. Net effect: Constant optical density per Miskolczi and the statistics of temperature rise do not follow GHG per this paper.

February 14, 2010 4:21 pm

In a shock revelation, the organization known as the IPCC said they agree with the new unpublished paper by Beenstock and Reingewertz.
A high level IPCC scientist admitted last night:
We hid the results away so that no one could find them
Finally, forced by angry public pressure to release the location, this top scientist pointed to an obscure website called http://www.ipcc.ch.
Now that the media have had a chance to examine the revelation, they found this shocking comment buried away in the Executive Summary of Chapter 8 “Model Evaluation” (2001 Third Assessment Report):

Confidence in the ability of models to project future climates is increased by the ability of several models to reproduce the warming trend in 20th century surface air temperature when driven by radiative forcing due to increasing greenhouse gases and sulphate aerosols. However, only idealised scenarios of only sulphate aerosols have been used.

And later, even harder to find without actually reading it:

As noted in the SAR, the inclusion of the direct effect of sulphate aerosols is important since the radiative forcing associated with 20th century greenhouse gas increase alone tends to overestimate the 20th century warming in most models. Groups that have included a representation of the direct effects of sulphate aerosols have found that their model generally reproduces the observed trend in the instrumental surface air temperature warming, thereby suggesting that their combination of model climate sensitivity and oceanic heat uptake is not unrealistic (see Chapter 9, Section 9.2.1 and Figure 9.7).

-More shocking revelations of stuff the IPCC never told anyone, coming up next hour..

February 14, 2010 4:21 pm

So what’s the McIntyre view on this paper?

JonesII
February 14, 2010 4:22 pm

It´s the SUN….! PERIOD.

Cement a friend
February 14, 2010 4:39 pm

The paper may have the correct conclusions from the data used. However, it is very clear that the temperature data from NASA GISS has been manipulated and is false. Further, the CO2 data, and the assumptions and data about CH4 are also false. There is, also, doubt about changes of and what constitutes solar radiation (eg magnetic and particle fluxes).
I am sure that if the authors used the correct data, they would find that there is no relation, short or long term, between atmospheric temperatures and the factors assumed to be forcings (which in itself is a stupid term), i.e. so-called greenhouse gases other than H2O in its various physical forms.
I would suggest that the paper has been poorly peer reviewed by people who do not understand the science and technology of measurement and heat transfer. The authors should have at least mentioned doubts about the data and the effect on conclusions of uncertainty.

Alan Wilkinson
February 14, 2010 4:42 pm

The salient point of this analysis is that in order to determine what CO2 will do in the future in a complex system it is more instructive to look at what it has done in the past in that system rather than to argue on the basis of a single theoretical physical effect component.
Data always trumps theory.

Pa Annoyed
February 14, 2010 4:46 pm

Gary,
Main issue with Miskolczi is his rather peculiar use of the Virial theorem. The particular form he uses is only valid when all interactions between particles are inverse-square attraction, but the interactions between atmosphere and the ground are repulsive and not inverse-square, so the usage would seem to be invalid. And the way he presents it is by drawing some sort of vague analogy between radically different concepts, so it’s impossible to tell exactly what he meant. I’ve never seen any answer from Miskolczi on that.
But yes, the sort of feedback he proposes would possibly lead to the sort of relationship identified in this paper. But then, so would a thousand other possibilities.
It is a capital error to theorise without data. We simply don’t know, and we’re not going to spoil things by guessing. Providing evidence of such a feedback would be sufficient – figuring out what causes it can come later.

royfomr
February 14, 2010 4:54 pm

Beware of Geeks bearing gifts!
However high-falutin the language of these theorists may sound, and however much I’d love to believe their findings, I’m ‘sceptical’! Perhaps I may be turning into a “suspicionist”
If the past few decades have shown us anything that took root in our awareness then, for me, it’s been the willingness of so many to clamber aboard bandwagons!
Yesterday’s gravy-train rewarded its passengers for rooting out the sin of SeeOTwo (in science, CO2, but phonetically rendered into a format that multitudes of the scientifically unaware, but easily duped, could cope with)
Today, work in progress, the popular paradigm appears to be shifting. Possibly Man is not the primary driver as once thought!
I’m not impressed by the prognostications of currently resident Robust-Man, Gavin, but neither am I interested in him being replaced by an equivalent contrarian!
Forget the models, today’s or tomorrow’s! Listen to the data.
That’s why I go to CA to sit before SMc, Mosh, Bender et al. Ditto for WUWT, AV and BH etc.
Beware of Geeks bearing Gifts.

William
February 14, 2010 4:58 pm

Pa Annoyed (15:47:29) :
“Basically, it’s saying that there is some sort of unknown feedback mechanism, adjusting some other quantity, that compensates for the effect of changes in CO2 within a few years. i.e. CO2 warms the atmosphere, which permanently releases something else, which cools the atmosphere again.”
TanGeng (16:07:30) :
“if this paper is correct then there must be some kind of rapid negative feedback mechanism that virtually eliminates all of the warming from greenhouse gas increases. It’s a mathematical approach so I think it requires some careful consideration.”
I think you will find the culprit is water vapour, providing a negative feedback.

B. Smith
February 14, 2010 5:06 pm

R. Gates (13:53:54) :
From both a mathematical standpoint, and the very marginal “science” involved in this paper…it is pure crap.
_____________________________________________________________________________
Everyone here is entitled to their own opinion. The problem here for me is, I don’t know your background, so how can I assign a relative value to your opinions? You may well be right, for all I know.
That being said, which disciplines of science and/or mathematics do you hold advanced degrees in? What facts are you basing your very strong opinion on? Did you already find and read all of the supportive works cited in the paper? Where did the authors err, in your analysis? Inquiring minds want to know!
_________________________________________________________________________
I have noticed that with very few exceptions, most posters and guest authors here do not state their qualifications. For instance, I am considered to be a reasonably intelligent fellow with a degree in Political Science. While I may not be able to follow all of the advanced math and science, I can follow the authors’ reasoning and supporting arguments. As always, the devil is in the details. I need to hear what other qualified mathematicians or scientists in the fields related to climate science have to say about the paper; people who understand the math and the jargon so they can opine with some authority. Perhaps R. Gates is so qualified, but I don’t know that.
One can draw an informed opinion from analyzing the concurring and dissenting arguments of people well qualified in the subject matter. However well informed one may be, one is still a layman and not a qualified scientist. I am no different from our elected policy makers in this regard. With much of the supporting “settled” science in question and the integrity and veracity of many of the scientists involved compromised, a prudent approach would be to affect a Missouri attitude (Show Me, i.e. skeptical).
No disrespect meant to R. Gates or anyone else here. I just like to know who’s who.

AnonyMoose
February 14, 2010 5:10 pm

Bob Tisdale – “and AGW is refuted” is just shop talk and doesn’t mean that any refutation took place. Mathematicians often use the term “refuted” to refer to “a good way to deal with a problem”, rather than something that is “secret”, and so there is nothing problematic in this at all.

February 14, 2010 5:16 pm

At the end of the day, what Miskolczi said was an increase in CO2 would cause a temporary fluctuation that would settle out to an equilibrium about the same as before. What this paper says is that an increase in CO2 would cause a temporary fluctuation that would settle out to an equilibrium about the same as before.
The notion that this paper defies physics is correct… provided that it is physics as per the IPCC and others who argue that an increase in CO2 is a positive feedback that will in the long term exceed negative feedback. If it’s physics governed by the laws of thermodynamics, including conservation of energy, then it supports the physics.

Larry
February 14, 2010 5:24 pm

I really am amazed at how the warmist trolls in this post come out immediately attacking something they obviously don’t understand and don’t have the credentials to understand. Let time take its course and we will see if this is refuted or clarified by those mathematicians, physicists, and climatologists who should understand it. I only remember polynomials from my high school algebra class, so I don’t know what the heck “polynomial cointegration” is, either.

DCC
February 14, 2010 5:32 pm

This paper purports to tease out the “CO2 signal” (or lack of one.) So far, so good, because I cannot think of any other person or paper who even made a serious stab at determining what portion of climate variability is due to CO2. Climate models certainly don’t count; they start with a given and work backwards.
However, the jury is out. We need to know if their mathematical treatment applies in this case and, if so, did they do it right.

Ron de Haan
February 14, 2010 5:34 pm

Miskolczi Interview
“Former NASA scientist defends theory refuting global warming doctrine“
http://www.heliogenic.net/2010/02/13/miskolczi-interview/

February 14, 2010 5:39 pm

I can help explain what it means for the first or second derivative to be constant.
Consider the equation of a line, Y = mX + b, where m is the slope of the line. The first derivative dY/dx results in m, a constant. Thus, the equation is first order, because the first derivative (dY/dx) yields a constant. The term ‘b’ disappears, as the derivative of a constant is, by definition, zero.
But, when this is done with a curve, such as a parabola, the first derivative yields the equation of a straight line. The parabola equation is y = ax^2 + bx + c. The first derivative is then dY/dx = 2a x + b. We must then take the second derivative to obtain a constant. Thus the equation is second order.
What the paper appears to state is that portions of the problem are first-order and other portions are second-order.
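The same point in discrete form: first differences of a linear series are already constant, while a quadratic series needs a second round of differencing. A minimal NumPy sketch (the coefficients here are arbitrary, chosen only for illustration):

```python
import numpy as np

x = np.arange(10.0)
line = 3 * x + 2                 # y = mx + b, with m = 3
parab = 0.5 * x**2 + 3 * x + 2   # y = ax^2 + bx + c, with a = 0.5

d_line = np.diff(line)       # discrete analogue of dY/dx: constant (= m = 3.0)
d_parab = np.diff(parab)     # linear in x, not yet constant
dd_parab = np.diff(parab, 2) # constant (= 2a = 1.0)

print(d_line)    # all 3.0
print(dd_parab)  # all 1.0
```

In the paper’s time-series language, the line behaves like an I(1) process (one differencing suffices) and the parabola like an I(2) process (two differencings are needed).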

Stephan
February 14, 2010 5:44 pm

My father was an eminent meteorologist/physicist, working as an expert for the WMO (he published 3 papers in Nature). I remember way back in 1998 he said that it (AGW) was a tax grab; how’s that for foresight? I remember him saying the extraordinary temps in Europe that year were nothing out of the ordinary (they occurred occasionally every 30-40 years). Of course he knew the science was c### even then. Hopefully this site will not be required soon, sorry Anthony. Maybe you can keep going as a general climate/weather site!

February 14, 2010 5:49 pm

Hmm. Polynomial cointegration, eh? Common sense gives the same answer.

David S
February 14, 2010 5:54 pm

pRadio (13:12:39) :
“Thank you for one more “nail in the coffin”……
Roger”
This coffin has so many nails in it that its weight should cause it to sink into the earth like a lead weight into quicksand. But there are powerful people who hope to profit from it, who are keeping it afloat.

Michael
February 14, 2010 5:57 pm

Man-made global warming caused 9/11.
“(CNN) — The thin wisps of condensation that trail jet airliners have a significant influence on the climate, according to scientists who studied U.S. skies during a rare interruption in national air traffic after the September 11 terrorist attacks.
During the three-day commercial flight hiatus, when the artificial clouds known as contrails all but disappeared, the variations in high and low temperatures increased by 1.1 degrees Celsius (2 degrees Fahrenheit) each day, said meteorological researchers.”
9/11 Study: Air Traffic Affects Climate
http://archives.cnn.com/2002/TECH/science/08/07/contrails.climate/index.html

February 14, 2010 6:05 pm

Ice/snow cover, Northern Hemisphere, 2009 v 2010: click

aMINO aCIDS iN mETEORITES
February 14, 2010 6:11 pm

From the Conclusion, page 13 of the pdf:
We have shown that greenhouse gas forcings do not polynomially cointegrate with global temperature and solar irradiance. Therefore, previous claims that carbon emissions permanently increase global temperature are false.
http://economics.huji.ac.il/facultye/beenstock/Nature_Paper091209.pdf

aMINO aCIDS iN mETEORITES
February 14, 2010 6:15 pm

I would like to see another thread on this very interesting work from Ferenc Miskolczi. Interesting interview:
…..“greenhouse constant” is the total infrared optical thickness of the atmosphere, and its theoretical value is 1.87…..The computations involved the processing of 300 radiosonde observations, using a state-of-the-art, line-by-line radiative transfer code…….the global average infrared optical thickness turned out to be 1.87, agreeing with theoretical expectations.
http://www.examiner.com/x-32936-Seminole-County-Environmental-News-Examiner~y2010m2d12-Former-NASA-scientist-defends-theory-refuting-global-warming-doctrine
🙂

February 14, 2010 6:17 pm

[quote bob (14:55:55) :]
Where are the data tables?
[/quote]

They’re at the end of the article.
Anyway, I’ll label this paper as “interesting”. That the rate of change in CO2 matters is certainly a new idea. But I’ll wait till I see someone come up with a physical process that occurs when the rate of change of CO2 changes before I take the findings of this paper seriously.

George Turner
February 14, 2010 6:20 pm

ScienceOfDoom,
Confidence in ability of the models to predict future climate was decreased when someone leaked the source code and comments, especially the liberal use of hard coded fudge factors to reproduce any curve they wanted.

February 14, 2010 6:21 pm

I’m really amused by some of the comments regarding the data. The time series of cointegrated comments over the last 15 years sounds like this:
Warmists – this data shows that CO2 is heating up the planet
Skeptics – that data is all messed up
Warmists – is not!
Skeptics – is too!
New Paper – the data shows that CO2 is NOT heating up the planet
Skeptics – that data is all messed up
Warmists – that data is all messed up
Skeptics – yeah, that… huh? what?
I also think it would be very interesting to see this analysis done again, but with a better data set because I think the methodology itself makes sense.

Joe
February 14, 2010 6:22 pm

JonesII (16:22:37) :
It´s the SUN….! PERIOD.
If it were truly the sun, then it would only affect heat and cold, not the evaporation or precipitation that is increasing.
Joe

Bob Duncan
February 14, 2010 6:25 pm

David L. Hagen (15:45:26)
_______________________________________
As you noted, the Beenstock and Reingewertz paper did not include water vapor in their calculations, and you refer to the recent work by Miskolczi in which water vapor is a central issue. I would like to suggest that the Miskolczi model and the rationale underlying Willis Eschenbach’s Thermostat hypothesis may in fact have a lot in common. Miskolczi sees the water effect as stabilizing the IR layer to a constant value, while Eschenbach thinks of cloud formation, which depends on water vapor, as the fundamental “governor” of global temperature. My own experience as a sailor in the Caribbean resonated very strongly with the effects described by Eschenbach. My belief is that the Earth is a huge heat engine about which we know little. I feel that both Miskolczi and Eschenbach are on the right track to helping our understanding.

NickB.
February 14, 2010 6:35 pm

If I understand it correctly, it should not matter whether aerosols, cloud behavior, land use, etc. are factored into this analysis – the analysis looks for relationships between specific variables, and the behavior of the system as a whole does not need to be understood to do this. What gets me is that I really thought someone would have done this already(?)
If not, that means the GCMs were built on theory alone, most likely with a heavy dose of tweaking to get their behavior to match the historical data. If this really is the case, I have given the RC crowd WAY too much credit…
My Econometrics is very rusty, so feel free to shoot me down if I’m mis-stating.
A couple of random thoughts in parting: 1.) a lack of correlation in violation of established physical relationships (assuming, incorrectly of course, all other things being equal) could also point to the data being crap; 2.) since it hasn’t been mentioned, the quick feedback that neutralizes CO2’s temperature effect could also be the cloud behavior proposed by Lindzen

February 14, 2010 6:42 pm


Michael (17:57:08) :
Man-made global warming caused 9/11.

Do you think perhaps your thesis should be totally reversed, for both ’cause and effect’ and the direction of effect, vis-a-vis:
“9/11 caused man-made global cooling ”

“During the three-day commercial flight hiatus … the variations in high and low temperatures increased by 1.1 degrees Celsius (2 degrees Fahrenheit) each day, said meteorological researchers.”

.
.

Alan Wilkinson
February 14, 2010 6:42 pm

“Beenstock and Reingewertz paper did not include water vapor in their calculations”
As I understand the paper this is not very relevant/accurate. They matched the observed temperature to CO2 concentration and found a mis-match in trends from what should be expected if the AGW models were realistic.
It doesn’t matter what effect water vapour is having if the paper shows that climate doesn’t respond to CO2 in the way previously advertised. The paper is looking at the effect of CO2 in combination with everything else that is happening including water vapour effects.

February 14, 2010 6:43 pm

George Turner
My point is not to defend any GCM models. I would be amazed if they could predict the future.
And who knows, I might find out that buried away in that paper they actually reviewed the results of an actual climate model. Or the derivative of a climate model.
It just seems like the Coleman report all over again – disproving the IPCC by proving something the IPCC agrees with.
Call me old-fashioned but I find it more enjoyable to see a discussion where the presenter actually knows what his opponents believe and demonstrates the flaw in those arguments.
I’m clearly in a minority though, but that’s ok.

Gary Palmgren
February 14, 2010 6:52 pm

“Pa Annoyed (16:46:44) :It is a capital error to theorize without data.
As someone working in an industrial laboratory, it is silly to collect data without some sort of theory. You must at least believe the data has some sort of meaning. I have learned a number of times that the data will often refute the theory. Actually this almost always turns into a great learning experience. It has happened so many times we often try not only what we think will improve the product but also the exact opposite.
I’ve got a new model. It is now up to me to see if there is some data on the height of the tropopause over the last 50 years. I would not even know to look for this without some type of theory.
The only mistake I could make would be to assume my model is true and then make all kinds of announcements about the consequences if it were true. This is what the climate modelers have done. Now we know that every single climate model that assumes constant relative humidity is wrong. Yet they have been scaring people with thermageddon while ignoring that their models have been proven wrong. It’s time to go and tell all of the biologists that the papers that assumed dramatically increasing temperatures are pretty much useless. Oops.

red432
February 14, 2010 6:52 pm

I wouldn’t put any more faith in this paper than I would have if it pointed in some other direction. Sounds like mumbo jumbo to me. “If all the economists in the world were laid end to end they would still point in different directions.”

Hank Hancock
February 14, 2010 7:33 pm

I think if the study is looked at from the context of emergence in complex systems – coherence of multiple cooperant forces – Beenstock and Reingewertz’s approach makes sense. Here’s my take on it…
Characteristics presented (or being observed) tend to be observable only while the cooperant forces remain in a stationary relationship or a time series relationship where the size scale and phase of the relationship remains reasonably the same. Because a stable system is dominated by negative feedbacks or regulators, it is highly unlikely that the relationship of the cooperant forces can remain the same for long periods of time if one or more cooperant forces are the product of disequilibrium.
According to the AGW theory, GHG change forces the system out of equilibrium, resulting in long term temperature rise. If AGW theory is true then GHG change must be cointegrated with temperature trend and solar irradiance (among other cooperant forces).
Beenstock and Reingewertz look primarily at greenhouse gas forcings, global temperature, and solar irradiance, in their tests for cointegration. Cointegration testing doesn’t require that all involved cooperant forces be identified, rather only several that can be quantified are needed.
The argument that some have made about water vapor not being included isn’t important to the outcome of the analysis. If water vapor is one of the cooperant forces (and it most surely is), then its effect doesn’t change the test for polynomial cointegration of GHGs with temperature trend and solar irradiance. It changes only the size scale of the relationship.
If this study is correct, then temperature rise is emergent and will occur only as long as it takes for the system to return to the old or a new equilibrium. That time period is dependent on feedback response times and the causal relation across the different scales, phases, and time series of the cooperant forces.
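The two-step residual-based cointegration test being discussed can be sketched in a toy form. This is only an Engle-Granger-style illustration in Python/NumPy under invented data: the "wander ratio" is a crude stand-in for a proper residual unit-root test, and nothing here reproduces the paper’s polynomial-cointegration machinery.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4000
trend = np.cumsum(rng.normal(size=n))   # a shared stochastic trend

x = trend + rng.normal(size=n)          # two series driven by the same trend...
y = 2.0 * trend + rng.normal(size=n)    # ...are cointegrated: y - 2x is stationary

z = np.cumsum(rng.normal(size=n))       # an independent trend: no cointegration with x

def resid(b, a):
    # Engle-Granger step 1: OLS regression of b on a
    slope, intercept = np.polyfit(a, b, 1)
    # step 2: examine the residual; if it is stationary, b and a cointegrate
    return b - (slope * a + intercept)

def wander_ratio(r):
    # small ratio -> residual is stationary (cointegration);
    # large ratio -> residual still drifts (spurious regression)
    return np.var(r) / np.var(np.diff(r))

print(wander_ratio(resid(y, x)))  # small: the shared trend cancels in the residual
print(wander_ratio(resid(z, x)))  # much larger: the regression is spurious
```

The paper’s claim, in these terms, is that the regression of temperature on the GHG forcings leaves a residual of the second kind: the stochastic trend in the forcings does not cancel against the trend in temperature.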

February 14, 2010 7:51 pm

I wouldn’t put any more faith in this paper than I would have if it pointed in some other direction. Sounds like mumbo jumbo to me>
I think some people are nervous about these results or rejecting them outright because the analysis technique is so foreign to how we look at data, but this technique is not new. I first came across it 20+ years ago when I was introducing this revolutionary computer technology called a graphics terminal (yes! you can draw lines on the screen!) to researchers. One of my customers was a geophysicist who was trying to locate micro-seismic events in three dimensions from an array of sensors. I remember him showing me some of his data capture from the sensors on the screen. There was a squiggle and he’d show me the math to determine how far away and how deep (back then I could still do the math…. now not so much). Then another squiggle which he identified as a heavy truck going by on the nearby highway. Another squiggle was from heavy artillery at a military base a few miles away. Then a BIG squiggle. What was that? Using similar techniques he showed that the big squiggle was a seismic event, a truck going by, and an artillery strike all at the same time. He could figure out how much of the squiggle was seismic event and how much was “other stuff”.
One of the first big events he captured was too far away for him to calculate location, just distance, but we knew it was big. The next morning the quake that clobbered Mexico City was on the news, so about 1985 I think?

steven
February 14, 2010 8:04 pm

I’m sorry but statistics don’t replace logic, they substantiate or refute it. This just doesn’t sound logical as long as you accept the GHG theory. Or perhaps I just don’t have a clue what they are talking about.

Street
February 14, 2010 8:08 pm

It may seem strange having a couple of economists using one of their statistical techniques on climate, but both sciences are about finding cause and effect in a massively chaotic system. Data-wise, I see more similarities than differences, and I think economics is a little more mature on the statistics side…..
Does anyone know any other studies that can account for CO2 only having a temporary effect? Could that be attributable to the black carbon issue or is a 1 year half-life too long for that?

fishhead
February 14, 2010 8:08 pm

Who found this paper? It seems odd, if it really is to be published in Nature, that it could be released NOW, before publication. Most journals I know, especially Nature, place an embargo on when a paper can be publicized. This right now would seem to be a big no-no. Secondly, it looks like it is a pdf of a word file from 12/21/09. Awfully fast turnaround time to get a paper accepted, especially of this magnitude. If, in fact, it was submitted in December that makes it all the more incredible since trying to find anyone, much less willing reviewers, in December is like herding cats. I haven’t read the paper, and most likely won’t since math ain’t my bag, baby. Still, the provenance of this seems odd.

Jean Parisot
February 14, 2010 8:27 pm

rbateman – I have some serious concerns with the spatial nature of both the surface temperature record and now the CO2 record. These cyclical anomalies do not seem to distribute randomly – should the surface temperature record be “gridded” as if natural temperature variation were a random function or a local characteristic (e.g. UHI)?

February 14, 2010 8:30 pm

I reckon that our Gav thinks that this is a load of crap.
Which, to my way of thinking, will be the proof of it.

February 14, 2010 8:35 pm

ed432 (18:52:40) :
Sounds like mumbo jumbo to me.
To me too! And the underlying assumption is that the data are good. We know that the sunspot numbers, TSI, and likely the temperature as well are not well constrained and that there is significant doubt about the long-term variation of all of them. In view of that, it is doubtful that statistical tests have much meaning [either way].

February 14, 2010 8:39 pm

Smokey (18:05:28) : Ice/snow cover, Northern Hemisphere, 2009 v 2010: click
Scary. Extrapolate that out 5 years.

JDN
February 14, 2010 8:43 pm

jorgekafkazar (15:05:34) :
JDN (14:13:26) : “Where the hell do these guys get off using “nonstationary time series” and “methodology of polynomial cointegration”? … [snip]
Part of the risk of visiting science blogs is that you may run into terminology peculiar to areas of science of which you are partly or totally ignorant. Most people either look it up or let it go, instead of flaunting their ignorance in public.
Actually, I’m flaunting my disgust. I’m a scientist of various training with a heavy emphasis on mathematics and biology at this point, so I’m accustomed to being ignorant of one subject or another. This paper is a case of people deliberately trying to obfuscate their work. You see it all the time in mathematics. At this point, if people refuse to write clearly and try to disguise their methods, I’m not interested in looking it up. They need to summarize their independent variables, justify their choices of variables (e.g. they introduce rfCO2 without explanation… it could be anything), explain why they are using linearized equations for those variables instead of raw data, provide some explanation on how their statistical method (something not useful for establishing causality) overturns predictions that have explicit causality (AGW calculations for all their flaws still pretend to be causal), etc., etc., & etc. As far as I’m concerned, this paper reads like the Sokal hoax.

savethesharks
February 14, 2010 9:39 pm

rbateman (14:52:50) :
Yes. BRAVO.

February 14, 2010 9:45 pm

I hadn’t known, before reading this paper, that it is already known that CO2 is I(2) and temperature is I(1). In economics, a finding like that is normally taken to mean “Game Over” for any theory that says a permanent increase in the level of CO2 will cause a permanent increase in temperature. The theory has failed at the very first empirical hurdle. I am surprised that this result has not been more widely publicised. It’s a big problem for the CO2 theory.
(What Beenstock and Reingewertz do is try to help the CO2 theory jump over that first hurdle, by getting some help from solar irradiance. Even with help, the CO2 theory fails to jump the first hurdle. But solar irradiance can jump the first hurdle.)

February 14, 2010 10:30 pm

Summary of the paper for non-economists:
Temperature has increased linearly. CO2 has increased exponentially. If CO2 had driven temperature, temperature would have increased exponentially too. It has not. Therefore, CO2 does not drive temperature.
That’s what Beenstock and Reingewertz say.
I disagree, because they do not account for the fact that the ocean warms only very slowly.

Syl
February 14, 2010 10:44 pm

mikep (15:08:23) :
http://www.ulrich-fritsche.net/Material/murray1994.pdf
Thank you for this link!!
This should be read (short, clear, sweet) before tackling the subject of the post.

Editor
February 14, 2010 11:07 pm

JDN (14:13:26)

… The statements they make are uninterpretable to anyone but them and a small group of people who like this methodology. If I were to spend a couple weeks figuring out what their ridiculous jargon means, I suspect it could be rewritten using much simpler mathematics. I’ve done this before with other fields, but, I’m really getting sick of it. These authors are doing their level best to be priests of climatology. I condemn their efforts and have zero confidence in their conclusions until they put forward a convincing argument that other people can follow.

Ah, yes, the famous “I can’t understand it so it must be wrong” argument …

Alan Wilkinson
February 14, 2010 11:26 pm

Richard Tol: “Temperature has increased linearly. CO2 has increased exponentially. If CO2 had driven temperature, temperature would have increased exponentially too.”
That sounds a dubious statement. CO2 is subject to saturation effects. Therefore temperature should increase as a logarithmic function of CO2. If CO2 is increasing exponentially, wouldn’t that make temperature linear, as found?
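Wilkinson’s point is easy to check numerically: if the response is logarithmic in concentration, an exponential concentration path gives an exactly linear response. A toy sketch in Python (the 0.5%/yr growth rate and 280 ppm start are illustrative only, not the observed CO2 record):

```python
import numpy as np

years = np.arange(1900, 2011)
# Hypothetical exponential CO2 path: 280 ppm growing at 0.5%/yr (illustrative only)
co2 = 280.0 * np.exp(0.005 * (years - years[0]))

# Logarithmic saturation: response proportional to ln(CO2 / initial CO2)
response = np.log(co2 / co2[0])

# ln(exp(k*t)) = k*t, so every yearly increment is identical: a linear rise
increments = np.diff(response)
print(np.allclose(increments, increments[0]))  # True
```

So an exponential forcing agent with logarithmic saturation is at least arithmetically consistent with a linear temperature trend.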

Ron Cram
February 14, 2010 11:35 pm

I would love to see CAGW convincingly refuted, but this paper does not inspire confidence. The URL has Nature_Paper in it, but I can guarantee you Nature did not approve this for publication. Possibly it was submitted to Nature, which means exactly nothing. Also, I hate the fact the authors use the term “robust.”
Someone ought to be able to figure out how to do the type of testing they are suggesting here, but I am not convinced they have it right.
Another paper which seems to use some similar tests claims “significant (dangerous) anthropogenic interference with the climate system has already occurred.” See http://www.springerlink.com/content/h0tx44h508602755/

DirkH
February 15, 2010 12:04 am

“JDN (20:43:35) :
[…]
summarize their independent variables, justify their choices of variables (e.x. they introduce rfCO2 without explanation”
“In Table 1 we provide details of the classification procedure for the radiative
forcing of CO2 (rfCO2).”
Questions?

Ian
February 15, 2010 12:32 am

Putting a paper such as this on WUWT adds fuel to the fire from RealClimate, Tamino et al. There doesn’t seem to be an identifiable Journal title, volume part or page number on the pdf and I can’t find anything except Nature-09 on the net. I doubt very much if this paper is destined for Nature, not because it is anti-AGW but because it isn’t that flash. Possibly it is a trawl by Nature to provide info on upcoming papers but that’s just a guess. I don’t think you do your cause any good at all with stuff like this, it provides too much ammunition for those who seek to discredit your blog.

February 15, 2010 12:37 am

Don’t want to sound like a broken record but I probably do.
It would be wonderful to see an analysis of actual climate model outputs against reality.
Rather than a fictional result (CO2 correlated to temperature), which everyone, including the IPCC, agrees won’t produce good results.
A real climate model has more than CO2 affecting climate.
Surely someone can produce an analysis.

February 15, 2010 1:02 am

Wilkinson
I simplified too much. The radiative forcing of carbon dioxide is logarithmic in the concentration. The authors correct for that.
Still, rfCO2 is I(2) — in this and other studies — that is, the second partial derivative is positive. Temperature is only I(1) — that is, the first partial derivative is positive but the second is not.
I(2) cannot Granger cause I(1).
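For readers unfamiliar with the notation: I(1)/I(2) just counts how many times a series must be differenced before it becomes stationary. A minimal numpy illustration with simulated noise (nothing to do with the actual forcing data):

```python
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(size=5000)   # stationary innovations

walk = np.cumsum(noise)         # I(1): one difference recovers stationarity
double = np.cumsum(walk)        # I(2): two differences are needed

# Differencing undoes the accumulation exactly
print(np.allclose(np.diff(walk), noise[1:]))         # True
print(np.allclose(np.diff(double, n=2), noise[2:]))  # True
```

Tol’s point is that a stochastic trend which accumulates twice (like `double`) cannot be the permanent driver of one that accumulates only once (like `walk`); the orders of integration have to match before cointegration is even possible.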

Pete H
February 15, 2010 1:04 am

R. Gates (13:53:54) :
“From both a mathematical standpoint, and the very marginal “science” involved in this paper…it is pure crap.”
Actually, if they were using a computer with available “climate data”, then I totally agree with you!
The one thing I have learnt in the last few years is that we are light years away from being able to model chaotic systems, especially when one ….fudges the figures but as usual I will leave it to Lord S. McIntyre to debunk the numbers.
Honest! The Queen will be over to Canada very soon to get the blade out on S.M. if only to shut up “He who Talks to Plants”!

Tom P
February 15, 2010 2:57 am

Richard Tol (01:02:24) :
I’m surprised you say mean global temperatures are only I(1). A polynomial fit to GISTEMP, 1880 to date, shows a positive coefficient in the quadratic term:
http://img638.imageshack.us/img638/7125/gistemppoly.png
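One caution about reading the order of integration off a polynomial fit: over a finite sample, even a pure I(1) random walk with no acceleration at all will almost always return a nonzero quadratic coefficient. A quick sketch (a simulated series, not GISTEMP; scale and length are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(130)                                     # roughly 130 "years"
walk = np.cumsum(rng.normal(scale=0.1, size=t.size))   # driftless I(1) toy series

# Quadratic least-squares fit: coefficients for t^2, t, and the constant
c2, c1, c0 = np.polyfit(t, walk, deg=2)
print(c2)  # nonzero in virtually every realisation, despite no true curvature
```

So a nonzero quadratic term in a fit is consistent with I(1) as well as I(2); a formal unit root test on the differences is needed to distinguish them.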

Pa Annoyed
February 15, 2010 3:30 am

“Still, rfCO2 is I(2) — in this and other studies — that is, the second partial derivative is positive.”
It means the second difference is stationary, not necessarily positive.
It is true that if temperature really is I(1) then it cannot be driven by CO2. But if you take an I(1) process and add a very small amplitude I(2), the result is strictly speaking I(2), but looks I(1) on short time scales. You need to collect a huge amount of data for the I(2) behaviour to come out.
I’m beginning to think that what this paper says is that the null hypothesis – of any correlation being spurious/temporary – cannot be rejected on the evidence available. That’s not the same as the positive statement of saying the correlation is entirely spurious/temporary. It’s also nothing to get excited about.
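Pa Annoyed’s point can be made concrete: add a tiny I(2) component to an ordinary I(1) walk and it is invisible over a short window but dominant over a long one. A toy simulation (the scale factor and sample sizes are arbitrary choices, not estimates from climate data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
i1 = np.cumsum(rng.normal(size=n))                    # ordinary I(1) random walk
i2 = 1e-4 * np.cumsum(np.cumsum(rng.normal(size=n)))  # tiny I(2) component
series = i1 + i2                                      # strictly I(2) overall

short = slice(0, 200)
# Over a short window the I(2) part is buried in the I(1) noise...
print(np.std(i2[short]) < np.std(i1[short]))  # True
# ...but given enough data it dominates completely
print(np.std(i2) > np.std(i1))                # True
```

On the first 200 observations `series` is statistically indistinguishable from the plain walk, which is exactly why a short record may fail to reveal a small accumulating component.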

Disputin
February 15, 2010 3:51 am

As a sea surveyor (looking at seabed rather than crawling through bilges looking for rust holes like a marine surveyor) with a degree in Marine Biology and Biochemistry, my statistics are not what they should be and “cointegration” is way over my head, but it does make sense in a general sort of way. I’m reminded of Le Chatelier’s Principle in chemistry, wherein a system in equilibrium will respond to a perturbing influence in such a way as to restore equilibrium. A couple of years ago New Scientist (when I was still subscribing) published a puff-piece for AGW replete with graphics that showed that the anthropogenic flux of CO2 was about 7% of the annual natural flux. Now I know of no demonstrably stable system which is so sensitive to such a small perturbation.
It strikes me that the GCMs all start from the assumption that CO2 is a major contributor to atmospheric temperatures by means of a known ability to absorb and re-emit IR radiation. That part of the science is certainly settled, but what isn’t is the overall effect of such in the vast, turbulent and probably chaotic atmosphere in general.
Were one to start from the data (and as both Pa Annoyed (16:46:44) and Sherlock Holmes pointed out, “it is a capital error to theorise without data”, although it is perfectly permissible and indeed essential to hypothesise), one might note that as the world gets warmer CO2 levels are seen to increase and vice versa, with a 400-1,000 year lag in each case. This indicates that CO2 is most unlikely to be the cause of temperature rise. Instead, since CO2 is emitted by soil organisms, the oceans, peat bogs and permafrost as temperatures rise, the a priori assumption must be that the balance point of the system shifts towards higher CO2 levels with higher temperatures, and any sudden injection of CO2 will cause a temporary spike in the atmospheric CO2 levels before a near equilibrium is restored. This paper, so far as I can understand it, seems consistent with this idea. Comments please.

Editor
February 15, 2010 4:08 am

Ian (00:32:39)

Putting a paper such as this on WUWT adds fuel to the fire from RealClimate, Tamino et al. There doesn’t seem to be an identifiable Journal title, volume part or page number on the pdf and I can’t find anything except Nature-09 on the net. I doubt very much if this paper is destined for Nature, not because it is anti-AGW but because it isn’t that flash. Possibly it is a trawl by Nature to provide info on upcoming papers but that’s just a guess. I don’t think you do your cause any good at all with stuff like this, it provides too much ammunition for those who seek to discredit your blog.

Oh my god, you mean it’s not peer-reviewed??? Burn it immediately.
Ian, you (and Tamino and the like) miss the point entirely. The reason we put papers like this up here is not to claim that they are right. It is to find out if they are right.
Unlike you, we don’t give a shit where it was published. Instead, we care if it is true. Only by submitting a paper to the full glare of public examination can we determine if it is worth keeping.
But we certainly don’t decide that on whether it is a nature trawl, or whether it is published or peer reviewed. Instead, we call on the extended community to comment and find fault with the substantive aspects of the paper.
Now, a complex paper like this, only some people can comment on it. But I understand what they are saying. I don’t have an opinion yet on the validity of the paper, but I find the approach fascinating. A number of people have already either raised objections, or validated, sections of the paper and the ideas therein.
That’s the process of public science. Yes, I know it is not pretty … but it is damned effective, much more so than the pathetic process that passes for peer review in these parlous times …

tallbloke
February 15, 2010 4:44 am

Willis Eschenbach (04:08:40) :
That’s the process of public science. Yes, I know it is not pretty … but it is damned effective, much more so than the pathetic process that passes for peer review in these parlous times …

Seeing that a paper was published in certain reputable journals would probably shuffle it higher up my intray if the subject matter was relevant to my interests.
Nature isn’t one of them.
If I thought it was important I’d then bring it here for an airing. WUWT provides a much more thorough and transparent process.

Tom P
February 15, 2010 5:10 am

Pa Annoyed (03:30:38) :
The paper states:
“We confirm previous findings (ii,iii,v,vi,iix) that the radiative forcings of greenhouse gases (CO2, CH4 and N2O) are stationary in second differences (i.e. I(2)) while global temperature and solar irradiance are stationary in first differences (i.e. I(1)).”
There is no analysis given in this paper that temperatures are I(1) – it is just asserted. As for the previous findings, if you look at the latest Kaufmann work they cite, it actually says “temperature itself is not I(1)”, hardly what this paper claims to confirm.
This is a sloppy and obscurantist piece of work.

Merv Hobden
February 15, 2010 5:19 am

Gary Palmgren,
I am very sympathetic to your viewpoint, as I do understand the physics, and have actually carried out infrared absorption and reflection measurements on a variety of materials, liquids and gases. I also understand the non-linear dynamics of systems, and how difficult it is to make sense of complex time series data where dynamic non-linearities are involved. One discovery I made was that much of the published data for infrared absorption and its magnitude is unreliable, because many of those who carried out the experiments had failed to separate the effects of absorption and reflection in their experimental method. So-called absorption and ‘extinction’ coefficients can be found to have large errors – the optical density and refractive index are subject to the density and the admixture of other gases, in particular water vapor. I therefore consider that the Miscolczi paper contains some good sense, as does the Beenstock paper’s analysis of the data. What is worrying is that I can find no current experimental science – the measurement of gas columns in a laboratory environment, in a correctly designed experiment. The case seems to rest on Laplacian computer models, and the interpretation of remote sensor data that appears to support the model. This is not good science, and would bring howls of protest from an older generation of physicists. To find good experimental results you might as well take down R.W. Wood’s ‘Physical Optics’ of 1911, which does describe well conducted experimental results on gaseous absorption, in both the near and far infrared.
My own opinion is that water vapor is such a powerful blocker of long wave IR – mainly by reflection, because of reststrahlen and the difference of refractive index – that CO2 effects are orders of magnitude lower. A simple water cell, with 1/2″ of water in it, will completely block the longwave IR from a 300W tungsten lamp. And the water does not boil, as the mechanism is reflective, not absorptive. Below the cut off wavelength of 4.5 microns, the water is completely transparent – the visible and near IR pass straight through. You can put your hand in front of the cell – there is some heating from near IR absorption, but little serious effect. Without the cell – Ouch!
Professor Richard Wood was one of the last great experimental physicists – he was the American Faraday. And I think that a major problem now is that experimental physics is almost a dead art – except in areas such as the biological sciences. Most postgrads in physics reach for a textbook and start computer modelling – they would not dream of entertaining a physical experiment. The great English physicist, Lord Rayleigh, told his students that if you could not demonstrate the laws of physics with simple experimental apparatus, you did not understand them! A lesson we will have to re-learn.

VS
February 15, 2010 5:51 am

‘Street (20:08:02)’ put it very well when he made the comparison between economics and climatology.
Methodologically speaking, climatological modeling looks much more like economic modeling than modeling in the exact sciences. Climatology, just like economics, relies mainly on non-experimental data to verify its hypotheses (i.e. models).
The main differences are:
(1) economists know econometrics, and climatologists just wing it as they go along (ask E. Wegman or J.S. Armstrong).
(2) economists have access to many different time series. This is quite important if one realizes that, in (statistical) time series modeling, an observed series is treated as a single drawing from an (unknown) data generating process.
In broad terms, the paper posted above empirically tests whether we find CO2 effects on temperature. The time series methods employed assume very little about the underlying data generating process; it is a statistical procedure, much akin to measuring a correlation. Very simply put: rather than looking at the relationship between levels in the variables studied, one looks at the relationship between growth rates (or changes in growth rates), which should be just as solid if the hypothesized relationship between levels is valid.
The conditions tested are furthermore necessary for causality, rather than sufficient. This paper therefore rejects the necessary conditions for measurable causality. I say ‘measurable’ because it is quite plausible that CO2 levels impact temperature in some way. It is however very unlikely that they impact them in such a straightforward and unambiguous manner as proposed by Mann et al.
Furthermore, methodologically speaking, what amounts to ‘scientific proof’ of a hypothesis in climatology is very disturbing. The argument propagated by both Mann (in his recent Washington Post editorial) and Jones (in his recent BBC interview), is that Man-Made CO2 is the culprit because the coefficient is significant in their regression.
Note that this is also their ONLY empirical/fact-based argument, the rest of their ‘arguments’ are hypotheses (i.e. models) and anecdotes.
Mann wrote: “Scientific evidence for the reality of human-caused climate change includes independently replicated data documenting the extent of warming; unprecedented melting of glaciers; rises in global sea levels; increasingly widespread continental drought; and models that predict all of these things but only when human impacts are included.”
Rough translation:
(1) We tried to explain the variance in temperatures with a model including total CO2, the coefficient was insignificant.
(2) We then split total CO2 into Natural and Man-Made CO2, where the latter series (which represents only a small part of total CO2) is equal to 0 for most of the last millennium, and shows a positive trend since the industrial revolution.
(3) In our new regression, the positive Man-Made CO2 trend (now entered separately from Natural CO2) coincides with the modern warming trend, and therefore ‘sucks up’ the variance. The Man-Made CO2 variable then ‘explains’ the Modern Warming.
I furthermore bet that the coefficient on the Man-Made CO2 variable is very high. This shouldn’t surprise anybody. You are dividing a warming trend by a (relatively) small Man-Made CO2 trend, so the outcome will be a very high amount of ‘warming’ per unit of Man-Made CO2.
Do note that it is only of marginal importance how exactly Man-Made CO2 enters the right hand side of the equation. As long as the hypothesized relationship is positive, it is very likely to be ‘validated’ by regressing two positive trends on each other.
From this it should also be clear why Michael Mann really wants to eliminate the Medieval Warm Period from the official temperature record. If the temperature record is flat (i.e. has low variance) before the Modern Warming Period, his Man-Made CO2 variable will have an even stronger and more statistically significant effect (the temperature variance in the past won’t ‘disturb’ his two-trend regression).
..
This type of analysis wouldn’t pass as a Bachelor thesis in econometrics at the university where I graduated.
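The “two positive trends regressed on each other” problem VS describes is the classic spurious regression of Granger and Newbold, and it takes only a few lines to reproduce: regress one random walk on another, completely independent, walk, and the levels fit can look impressive while the fit in first differences correctly shows nothing. A toy sketch (simulated walks with arbitrary seed and length; this is not Mann’s actual regression):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
x = np.cumsum(rng.normal(size=n))   # two independent random walks:
y = np.cumsum(rng.normal(size=n))   # no causal link by construction

# OLS of y on x in levels
A = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
r2_levels = 1 - np.var(y - A @ beta) / np.var(y)

# The same relationship tested in first differences
r2_diff = np.corrcoef(np.diff(x), np.diff(y))[0, 1] ** 2

print(r2_levels)  # often large, purely because both series wander
print(r2_diff)    # near zero: the honest answer
```

Run this over many seeds and the levels R² is frequently “significant”; differencing first is precisely the discipline the cointegration literature imposes to avoid that trap.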

Pa Annoyed
February 15, 2010 6:14 am

Tom P,
The paper says “Consistent with our argument that temperature itself is not I(1),…” which isn’t quite the same. They’re trying to argue that heat does not accumulate, but is determined directly by the external forcings at the time, some of which do accumulate, and therefore temperature is due to those forcings.
However, I can’t see where in the paper they examine whether temperature is I(1). The nearest I’ve found is a comment to the effect that if there was a stochastic trend in temperature, it would be unstable, and would not keep returning to an equilibrium level. But that statement depends on the timescale you examine it over. In the 50-100 year short-term it will look very much like a stochastic trend.
There’s a lot of “consistent with” going on, which looks like affirming the consequent.
But you’re right. Both papers fall short of showing all their working.

February 15, 2010 6:24 am

P / Pa Annoyed
I was just trying to summarise the paper for people who do not know econometrics.
The paper does not volunteer many details, but Beenstock is a fine econometrician. Besides, the core argument — the different orders of integration — is well-established.
The innovation of the paper is the polynomial cointegration.

Gary Pearse
February 15, 2010 7:04 am

Hmm. I’m a sceptic but I don’t need this kind of stuff to augment my arsenal. This has too much of a “presto” feel to it. I think that what there is of climate science has been spread too thinly. If we don’t really know half of what drives climate, a hermetically sealed mathematical proof of anything by the buy-low-sell-high science isn’t going to make climate science settled any more than the political scientists were able to. With AGW on the run, we are going to get a bandwagon effect and carloads of crap like we got when AGW was the plat du jour (farmers noting that their sheep were getting smaller so they would cool better, etc).

Steve Keohane
February 15, 2010 7:29 am

Joe (18:22:04) : JonesII (16:22:37) : It’s the SUN….! PERIOD.
“If it was truly the sun then it would only affect heat and cold. Not evaporation or precipitation, which are increasing.”

Where does your information on increasing evaporation/precipitation come from? The atmosphere is drying out: http://i38.tinypic.com/30bedtg.jpg

Tom P
February 15, 2010 7:43 am

Richard Tol (06:24:26) :
Do you know of any evidence that establishes temperature as I(1)?
Beenstock may be a fine econometrician, but he appears to be directly contradicting Kaufmann who does seem to have done the seminal work in this field. I don’t see how they can both be right.

red432
February 15, 2010 7:43 am

Leif Svalgaard (20:35:49) :
red432 (18:52:40) :
Sounds like mumbo jumbo to me.
To me too !
Of course the arrival of mumbo jumbo conjurors in the skeptical camp is pretty good evidence that the tides of intellectual fashion are turning, perhaps decisively. These guys may not know how to predict climate, but they know how to get papers published…

February 15, 2010 7:53 am

A car drives down a road. Careful measurements are taken to determine the altitude of the top of the car and the top of the road. From time to time the car hits a bump in the road. This causes a slight increase in altitude of the top of the car followed by diminishing oscillations due to the suspension system. Someone hits the brakes. The front end of the car dips, the rear rises, oscillations damp out to zero. An extra person gets into the car. Slightly lower altitude until they get out. Road goes over a hill. At bottom of hill, altitude at top of car starts to rise but the change in angle causes a slight compression of the suspension that oscillates and damps out. At top of hill, the reverse, as slight decompression happens. At bottom of hill, slight compression.
Now take reams and reams and reams of such data. Fix the altitude of the road and the altitude of the top of the car as stationary 1st order. All other inputs as 2nd order. Conclusions one would draw about things that affect the altitude of the top of the car:
1. Bump in the road causes an oscillation in altitude, but no long term change.
2. A person getting in the car makes a long term change, but it goes away when the person is removed.
3. Changes in incline of the road (bottom, crest and top of hill) cause both temporary oscillations in the altitude of the car, as well as long term changes in the altitude of the car that mirror the altitude of the road.
4. When several changes occur together (change in incline, bump in the road, stunt person leaping from one vehicle to another) the combination of oscillations can be much larger, but any conclusions about long term altitude trends must be derived by removing the stationary 1st order to reveal the balance of change due to the 2nd order.
So… what this technique arrives at is to show that a bump in the road causes an oscillation in car altitude but no long term change. If one were to measure during a period of the initial oscillation only, one would incorrectly conclude that the bump had established an upward trend that could be extrapolated until the cars wheels were no longer touching the road. Changes in altitude of the road itself (the hill) also induce temporary oscillations as well as long term altitude changes, as does the stunt person leaping onto the car.
Unfortunately for the zealot demanding smoother roads (built by him of course) the bump in the road will not launch the car into space. Gravity works. The paper simply shows that when the primary drivers which are the irradiance of the sun and the temperature of the earth are isolated as first order, 2nd order inputs such as CO2 increases appear as temporary oscillations, not long term changes. Thermodynamics works.
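The analogy can be put in one equation: a damped (stationary) response forgets a one-off shock, while only a sustained forcing shifts its level permanently. A toy sketch of that distinction (the coefficient 0.8 is illustrative, not estimated from any climate data):

```python
import numpy as np

def respond(forcing, phi=0.8):
    """Damped system: r[t] = phi*r[t-1] + forcing[t], with |phi| < 1."""
    r = np.zeros(len(forcing))
    for t in range(1, len(forcing)):
        r[t] = phi * r[t - 1] + forcing[t]
    return r

n = 100
pulse = np.zeros(n); pulse[10] = 1.0   # the bump in the road: a one-off shock
step = np.zeros(n); step[10:] = 1.0    # a sustained change in forcing

r_pulse = respond(pulse)
r_step = respond(step)

print(abs(r_pulse[-1]) < 1e-6)                 # True: the bump has died away
print(abs(r_step[-1] - 1 / (1 - 0.8)) < 1e-3)  # True: settles at a new level, 5.0
```

Whether the climate response to GHG forcing behaves like the pulse case or the step case is exactly the question the paper’s temporary-versus-permanent distinction is trying to settle.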

Don Keiller
February 15, 2010 7:57 am

And I bet that the statistics used to analyse tree ring data for the Hockey Stick, falls into the same trap.
Stationary statistical methods used on non-stationary data, leading to spurious correlation.
Didn’t Steve McIntyre say this some time ago?

February 15, 2010 8:07 am

P
Beenstock and Reingewertz write “We confirm previous findings” (about the order of integration) and refer to Kaufmann and co. Why do you think there is disagreement on this point?
B&R critique previous studies at the top of page 3.

February 15, 2010 8:24 am

Just to follow up on this, I plotted the 1st and 2nd derivatives of monthly CO2. The 1st derivative gives you a wave that matches monthly changes to the solar ephemeris, which is the distance between the sun and earth.
The 2nd derivative gives you basically the same thing, but with a longer sequence wave repeating in the series. This longer wave generally matches the up and down trends of both cosmic rays and monthly sunspot groups, at least over the period Sept 1983 to Dec 1995. Between cosmic rays and sunspot groups, I’d give an edge to cosmic rays, as both the cosmic ray trend and the 2nd deriv CO2 trend are flat, whereas the sunspot trend edges up slightly.
So basically, I’d say if this paper has hit upon something, it’s that there’s a correlation between the 1st deriv of CO2 and the solar ephemeris, and the 2nd deriv of CO2 and reversed cosmic rays.
Anyone wanting to repeat this analysis or explore it further can get the data I used from my Climate Scientist Starter Kit here:
https://sourceforge.net/projects/cssk/files/Climate%20Scientist%20Starter%20Kit.zip/download
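The differencing itself is a one-liner for anyone wanting to replicate the exercise. Here is the shape of the computation on a synthetic stand-in for the monthly CO2 series (a linear trend plus a 12-month seasonal cycle; the parameters are invented, not fitted to Mauna Loa):

```python
import numpy as np

months = np.arange(601)  # 50 complete 12-month cycles
co2 = 320 + 0.012 * months + 3.0 * np.sin(2 * np.pi * months / 12)

d1 = np.diff(co2)        # 1st difference: seasonal wave around the trend rate
d2 = np.diff(co2, n=2)   # 2nd difference: zero-mean seasonal wave only

print(d1.mean())              # ≈ 0.012, the underlying monthly trend rate
print(abs(d2.mean()) < 1e-9)  # True: differencing twice removes the trend entirely
```

Note how a linear trend survives one difference as a constant offset but vanishes entirely after two; that is why the first difference shows a wave around a nonzero mean and the second a wave around zero.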

February 15, 2010 8:33 am

scienceofdoom (00:37:45) : “It would be wonderful to see an analysis of actual climate model outputs against reality.”
see http://hockeyschtick.blogspot.com/2009/12/22-climate-models-v-actual-observations.html
and http://hockeyschtick.blogspot.com/2010/01/highlights-from-john-christys-ipcc.html
Tom P (02:57:42): “A polynomial fit to GISTEMP, 1880 to date, shows a positive coefficient in the quadratic term”
the coefficient in the quadratic term of 5E-05 is so small that it’s essentially zero

February 15, 2010 8:48 am

Hi all. Matt Briggs here.
I’ve not had time to read all the comments, but I have, on the request of an email, read the paper.
I do not find it especially interesting and cannot support its conclusion “that greenhouse gas forcings do not polynomially cointegrate with global temperature and solar irradiance.”
What the authors did was fit a particular kind of time series model to a couple of sets of data (not the only sets). They came to the conclusion that, since their model more or less fit their sets of data, the “previous claims that carbon emissions permanently increase global temperature are false.”
There are many kinds of time series models; some will fit better than others. Regardless of how close any of them fit any set of data, no measures of fit of those models can disprove—or prove—the AGW theory.
Statistical models can add weight to the evidence that the AGW theory is true or false, but none can ever prove it false, as the authors have claimed.
Too, their model is not especially convincing and rather divorced from the physics. Also, the more a statistical model strays from the physics, the less weight we should give it.
Last, like nearly all similar analyses, the authors do not account for the uncertainty in the sets of data. These numbers aren’t measured perfectly, and that uncertainty should be carried through and expressed in the final results.

February 15, 2010 8:49 am

David M. Hoffer,
The paper simply shows that when the primary drivers which are the irradiance of the sun and the temperature of the earth are isolated as first order, 2nd order inputs such as CO2 increases appear as temporary oscillations, not long term changes.
Thank you for this clear and concise summary. There is nothing in this article that would prompt a person familiar with the basics of calculus to call it “mumbo jumbo.”
I am not a mathematician but I’ve read this paper with interest, and everything seemed to me quite clear and reasonable. It would be more understandable, perhaps, if the authors used more traditional English terms such as “1st and 2nd derivatives” instead of “1st and 2nd differences” — but again, English is not their native language.

George Turner
February 15, 2010 8:51 am

davidmhoffer,
But the car ride is politicized and there’s a guy citing tipping point theory saying, “There’s a bomb on the bus. If the bus goes below 50 the bomb will explode. What do you do, hotshot?!”
I agree with your analogy, though.
If there’s no discernible statistically valid correlation between CO2 and temperature, then continued claims that CO2 increases will cause temperatures to rise are falsified. AGW people may wave their hands, citing physical theory and complexity as reasons why no such correlation can be found (which is like mumbling “God works in mysterious ways!”), but for their warnings that CO2 will devastate the planet through increasing temperatures there MUST be a detectable correlation between CO2 and temperature. This paper used accepted and sophisticated statistical techniques and failed to find such a correlation.

February 15, 2010 9:03 am

[quote mattstat (08:48:17) :]
Too, their model is not especially convincing and rather divorced from the physics. Also, the more a statistical model strays from the physics, the less weight we should give it.
[/quote]

A post I made just above yours (which was probably approved by the Mod at the same time yours was) does show a correlation between the 1st and 2nd derivatives of CO2 and the solar ephemeris and cosmic rays, respectively.
I’m not saying this proves their paper has a meaningful relationship to physics, but it does suggest that further exploration in that direction may be useful.

George Turner
February 15, 2010 9:04 am

mattstat,
I disagree that the more a statistical model strays from the physics, the less weight we should give it.
The whole debate about global warming boils down to this simple model.
CO2 increase -> black box -> equilibrium surface temperature increase.
With sufficient experiments you could reduce it to a simple graph of CO2 level versus long term global average surface temperature. The lack of that one simple graph is why governments are spending billions – with the end result of one day producing that graph.
One way to unequivocally produce it would be to vary atmospheric CO2 levels over thousands of years with a sweeping frequency modulation, then use a Fourier transform to find the same sweeping frequency component in the temperature records. But that would take thousands of years.
The current data set is so limited that rough curve fits are too easily produced by dreaming up a couple of confounding effects whose magnitude is essentially a guess, allowing everyone to use their favorite fudge factors to get a better looking fit and reducing the science to arguments about whose fudge factor model is best.
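George Turner’s frequency-sweep idea can be shown in miniature. The sketch below uses a fixed (not swept) forcing frequency and invented numbers – it is purely illustrative, not climate data – but it demonstrates the principle: a periodic forcing buried in noise shows up as a spectral peak at the forcing frequency.

```python
import numpy as np

# Toy version of the forcing-detection idea: drive a noisy "response"
# series with a sinusoidal forcing at a known frequency, then look for
# that frequency in the spectrum. All parameters are illustrative.
rng = np.random.default_rng(0)

N = 1024                  # number of samples ("years")
k_force = 50              # forcing completes 50 cycles over the record
t = np.arange(N)
forcing = np.sin(2 * np.pi * k_force * t / N)
noise = rng.normal(0, 0.3, N)        # stand-in for internal variability
response = forcing + noise

spectrum = np.abs(np.fft.rfft(response))
k_detected = int(np.argmax(spectrum[1:]) + 1)   # skip the DC bin
print(k_detected)   # the forcing frequency stands out above the noise floor
```

With a genuinely swept forcing one would match the sweep pattern instead of a single bin, but the detection logic is the same.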

February 15, 2010 9:04 am

mattstat:
Too, their model is not especially convincing and rather divorced from the physics.>
Excellent point. Could you perhaps elaborate as to HOW it is divorced from the physics? Or do you consider confirming the laws of thermodynamics to be a divorce?

Ron Cram
February 15, 2010 9:07 am

Willis,
It is fine to post a paper here to determine if it is true, but the paper was not really introduced that way. It was introduced more like “Game over. We have the paper that disproves AGW.” Indeed, that is the claim the authors make.
It would have been better if the paper had been introduced as “Here’s an interesting paper that draws very strong conclusions. It has not been published yet or accepted for publication, but let’s look at the data, methods and conclusions and see if the paper has value.”
In pursuit of that goal, I have provided a link to a paper which claims to use some of the same tests. Only this paper claims the tests prove AGW. It would be interesting to see an analysis of why the same tests get such different results. See http://www.springerlink.com/content/h0tx44h508602755/

George Turner
February 15, 2010 9:09 am

Alexander Feht,
I assume that economists use the word “differences” instead of “derivatives” because economic data comes in daily, monthly, and quarterly clumps and so to a mathematician it isn’t really smooth and differentiable.
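The differences/derivatives correspondence George Turner describes is easy to see on a hypothetical annually sampled series (this example is mine, not from the paper): for a quadratic, the first difference grows linearly like the first derivative, and the second difference is constant like the second derivative.

```python
import numpy as np

# "1st and 2nd differences" are the discrete analogues of derivatives.
# For y = t^2 sampled at integer t, the first difference behaves like
# dy/dt = 2t and the second difference is constant, like d2y/dt2 = 2.
# This is the intuition behind a series being "stationary in 2nd
# differences": differencing twice removes the quadratic-like growth.
t = np.arange(10)
y = t ** 2

d1 = np.diff(y)        # [1, 3, 5, ...] -- grows linearly, roughly 2t
d2 = np.diff(y, n=2)   # [2, 2, 2, ...] -- constant
print(d1, d2)
```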

February 15, 2010 9:14 am

George Turner;
If there’s no discernable statistically valid correlation between CO2 and temperature, then continued claims that CO2 increases will cause temperatures to rise is falsified>
Not quite. What they said is that there IS a statistically valid correlation between CO2 INCREASE and temperature. What they show is that the amount of increase is like a bump in the road… so the increase causes an oscillation, but no long term change. Hence their comments about the rate of increase being important, like the size of a bump in the road. But it is the RATE OF INCREASE that defines the size of the bump in the road, and hence the oscillations in temperature, which will damp out to zero. Not the amount of CO2 itself.
Which fits exactly with the physics.

Tom P
February 15, 2010 9:17 am

Richard Tol (08:07:07) :
“Beenstock and Reingewertz write “We confirm previous findings” (about the order of integration) and refer to Kaufmann and co. Why do you think there is disagreement on this point?”
See my comment at (05:10:31). Kaufmann argues in work cited that temperature is not I(1), and neither would a simple polynomial fit suggest this order of integration.

Tom P
February 15, 2010 9:22 am

Mark Sawusch (08:33:37) :
“the coefficient in the quadratic term of 5E-05 is so small that it’s essentially zero”
Not at all. It’s just that the fit is to year-squared values that are very high (e.g. 1900^2). Hence the coefficient will necessarily be small, but that doesn’t make it essentially zero.
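Tom P’s scaling point can be checked with a line of arithmetic (using only the coefficient value quoted in the thread): a quadratic coefficient is invariant under shifting the time axis, and its curvature effect over the record is the deviation of the fitted curve from a straight line, which is not “essentially zero” next to roughly 1 K of total change.

```python
# Arithmetic check of the scaling argument. For f(t) = a*t^2 plus any
# linear term, the maximum deviation of the curve from its chord over a
# span of years is a*(span/2)^2 -- the quoted a = 5e-05 therefore implies
# about 0.2 K of curvature over the 1880-2009 GISTEMP record.
a = 5e-05                         # quadratic coefficient quoted above
span = 2009 - 1880                # years in the fit

bowing = a * (span / 2) ** 2      # deviation from a straight line, in K
print(round(bowing, 2))           # about 0.21
```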

Phil.
February 15, 2010 9:29 am

Merv Hobden (05:19:43) :
My own opinion is that water vapor is such a powerful blocker of long wave IR – mainly by reflection, because of reststrahlen and the difference of refractive index – that CO2 effects are orders of magnitude lower. A simple water cell with 1/2″ of water in it will completely block the longwave IR from a 300W tungsten lamp. And the water does not boil, as the mechanism is reflective, not absorptive. Below the cut-off wavelength of 4.5 microns, the water is completely transparent – the visible and near IR pass straight through. You can put your hand in front of the cell – there is some heating from near IR absorption, but little serious effect. Without the cell – Ouch!
Professor Richard Wood was one of the last great experimental physicists – he was the American Faraday. And I think that a major problem now is that experimental physics is almost a dead art – except in areas such as the biological sciences. Most postgrads in physics reach for a textbook and start computer modelling – they would not dream of entertaining a physical experiment. The great English physicist, Lord Rayleigh, told his students that if you could not demonstrate the laws of physics with simple experimental apparatus, you did not understand them! A lesson we will have to re-learn.

Apparently advice you should take to heart! The experiment you describe has absolutely no relevance to the GH effect – absorption and reflection of solar radiation by liquid water, maybe. The brightness temperature of your tungsten lamp is probably ~3000K, so there is negligible emission beyond 5μm; a source at ~300K would be more relevant for the GH effect. Of course, your cell is probably glass, so it would block longer IR anyway.

VS
February 15, 2010 9:34 am

P
I took the series magicjava posted (global mean temperature, 1900/01-2009/03) and averaged the non-normalized data for each year (I was too lazy to make monthly/seasonal adjustments).
Augmented Dickey Fuller test (no intercept, 3 lags, selection based on SIC) on annual global mean temperature. Note that the H0 of the ADF test is that the series in fact has a unit root.
ADF test stat: -1.134301
(1% critical value, -2.587172)
One sided, MacKinnon (1996), p-value < 0.2323
Augmented Dickey Fuller test (no intercept, 2 lags, selection based on SIC) on the first difference of annual global mean temperature:
ADF test stat: -9.720368
(1% critical value, still, -2.587172)
One sided p-value < 0.0000
At first sight, temperature seems to be I(1), as the authors claim.

grumpy old man
February 15, 2010 9:50 am

It appears that the rate of increase of CO2 causes the temperature rise (see also http://www.2bc3.com/warming.html). What is the feedback mechanism that makes the increase temporary?

wsbriggs
February 15, 2010 10:08 am

I would like to point out that there is a difference between econometrics, and economics. Econometrics is concerned with numerical analysis and hypotheses, which may or may not be valid.
WRT the findings on the data they examined, I find it interesting that no correlation has been found, despite the best efforts of the warmists to make the case for warming so convincing by modifying the data. Should the methodology they used withstand critical scrutiny, it would seem to be a powerful way of detecting bad data.

February 15, 2010 10:17 am

Thank you matt briggs!
A note to regulars. Just because you like the result of a paper don’t lose your skepticism. A statistical argument without a physical theory is nothing more than numerology.
The other thing is this. Theories, especially long accepted theories, are not in practice falsified by one observation or paper. People may wish it to be so, but in point of fact they are not. It takes a lot more work than that to remove a theory from the throne.

February 15, 2010 10:22 am

science of doom: you can go get the results yourself; they are available – terabytes. Hop over to Lucia’s; she works that problem, and some of her regulars as well (chad).

February 15, 2010 10:25 am

grumpy old man
It appears that the rate of increase of CO2 causes the temperature rise (see also http://www.2bc3.com/warming.html) What is the feedback mechanism that makes the increase temporary?>
Think of two planes suspended in space. We’ll call one of them Sun, which is radiating heat to the other Earth. At first Earth heats up. As it does so, it starts radiating heat back. It reaches an equilibrium temperature at which the amount of energy it radiates out exactly equals the amount of energy being radiated in by the Sun.
Now think of a nearly impossibly thin slice of the earth plane being converted to CO2. The amount of energy from the Sun to Earth hasn’t changed. But the CO2 resists the emission of energy from the Earth, so temperatures start to rise since the amount of energy in the system per unit mass is increasing. But this sets off a chain of other events. The amount of energy radiated by the earth plane rises a lot faster than its temperature (it goes up with the temp in degrees K to the power of 4). That thin slice of CO2 is part of the earth plane, so it heats up too, and it radiates in all directions, some back to earth and some away from earth.
When the system stops oscillating, the amount of energy going from Sun to Earth will equal exactly the amount of energy being radiated back by the Earth. Hence, the change in the amount of CO2 “slice” causes a temporary oscillation, but no long term temperature change.
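The relaxation story above can be put in a toy zero-dimensional energy balance: fixed absorbed solar input, Stefan-Boltzmann emission, and a temporary “extra blanket” that briefly cuts emission. The temperature rises while the perturbation lasts, then returns to the same equilibrium. All numbers below are illustrative, not a climate model.

```python
# Toy energy balance: C dT/dt = S - blanket * sigma * T^4.
# A temporary reduction in emission (blanket < 1) warms the surface;
# removing it lets T relax back to the original equilibrium (S/sigma)^0.25.
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
S = 240.0                # absorbed solar flux, W m^-2 (illustrative)
C = 1.0e7                # heat capacity, J m^-2 K^-1 (illustrative)
DT = 86400.0             # time step: one day, in seconds

T = (S / SIGMA) ** 0.25  # start at equilibrium, about 255 K
history = []
for day in range(4000):
    blanket = 0.95 if 500 <= day < 600 else 1.0   # temporary perturbation
    T += DT * (S - blanket * SIGMA * T**4) / C    # explicit Euler step
    history.append(T)

# T bumps up by a few K during days 500-599, then decays back to ~255 K.
print(round(history[599], 1), round(history[-1], 1))
```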

February 15, 2010 10:34 am

davidmhoffer (18:21:08) :
I love that little logic knot as well. my favorite version is this.
Skeptic1: the temperature data is all messed up.
Warmist: is not.
Skeptic1: is too.
Skeptic2: Here I show how sunspots correlate with the temperature data.
Warmist: Your pal said the data was screwed up.
Skeptic2: err, ya but.
There are of course ways to wriggle out of this knot, but the fundamental question of the accuracy of the record remains, and all analysis that uses that data is subject to caveats. GIGO for one, GIGO for all.

February 15, 2010 10:36 am

To use yet another analogy, suppose you have a river flowing at a constant rate. Build across it a dam that you can raise and lower. When you raise the dam, the lake fills up. Downstream from the dam, the flow decreases… for a while. Once the lake fills up and gets to the top of the dam, the flow rate downstream goes back to the same amount it was before. Now lower the dam. As the lake empties, the amount of water flowing downstream goes up… for a while. Once the lake level gets back down to the dam level, the rate of flow downstream goes back to the same level as before.
So… raising and lowering the dam causes temporary fluctuations in the flow of water below the dam. But the height of the dam makes no difference at all. You could raise and lower the dam all you want; over the long term, the flow would average exactly the same. So, the height of the dam means nothing to the average flow rate. Changing the height of the dam causes a temporary change in flow rate which depends on how fast or slowly the height of the dam is changed.
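The dam analogy is simple enough to simulate in a few lines (a toy model with made-up units): constant inflow, outflow only as spill over the crest, and a dam raised once mid-run. Downstream flow dips while the reservoir refills, then recovers exactly to the inflow rate.

```python
# Toy reservoir: raising the dam causes a temporary drop in outflow,
# but the long-run outflow always returns to the inflow rate.
inflow = 10.0            # water units per step (illustrative)
crest = 100.0            # dam height in water units
level = 100.0            # reservoir starts full to the crest

outflows = []
for step in range(300):
    if step == 50:
        crest = 150.0                    # raise the dam once
    level += inflow
    spill = max(0.0, level - crest)      # only water above the crest leaves
    level -= spill
    outflows.append(spill)

print(min(outflows[50:]), outflows[-1])  # dips to 0.0, recovers to 10.0
```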

February 15, 2010 10:45 am

WRT the physicality of the argument. I want to show you a little logic trick you may find useful. (Hat tip to GE Moore)
Suppose that you read a long detailed statistics paper that claimed the earth was flat. What would you conclude? and why?
Assume you don’t know statistics. Or assume you know statistics but can’t find any error in the paper. Can you rationally reject this paper and its conclusions even IF you can’t find the error? Why?
How can I reject this paper without even checking it?

red432
February 15, 2010 10:47 am

George Turner (09:04:22) :

The current data set is so limited that rough curve fits are too easily produced by dreaming up a couple of confounding effects whose magnitude is essentially a guess, allowing everyone to use their favorite fudge factors to get a better looking fit and reducing the science to arguments about whose fudge factor model is best.

Yeah, I’d like to see statistical analysis done as double blind experiments where the statisticians don’t know which data sets are the actual data sets, and which are concocted data sets perhaps derived by summing together data sets from obviously unrelated domains. Then when the conclusions correctly identify the real data sets and argue that there is a relationship, rejecting the others, there might be something to it. But even this wouldn’t help if the “real” data sets themselves are possibly bogus, as in this case.

February 15, 2010 10:59 am

Hmmm; I note a lot of generic deriding comments (obfuscating language and similar ilk) and slights at the authors. These comments come across as textbook examples of the demean and distract method of avoiding science. AKA the AGW Religious Ecstasy kneejerk response.
With a little web surfing:
Contrary to slights; Michael Beenstock is not unknown nor is he a lightweight.
http://ideas.repec.org/e/pbe130.html
There is a powerpoint presentation that goes along with the paper; http://www.patrickminford.net/Business_Topics/BusinessTopicsCOctober2009.pdf
I read the paper and powerpoint presentation as direct challenge to the AGW central CO2 forcing issues. The author’s names and email addresses are available; I assume they welcome discussion, that is, IF you have identified their mathematical errors or have relevant counter arguments.

VS
February 15, 2010 11:20 am

@steven mosher (10:22:30) :
“A statistical argument without a physical theory is nothing more than numerology.”
You are absolutely right, but a physical theory (or any theory for that matter), that is not supported by facts (or data/statistical argument) is nothing more than a false opinion.
What the authors of the paper are simply attempting to show, is that the data is not matching the theory.
Furthermore, you state:
“Theories, especially long accepted theories, are not in practice falsified by one observation or paper. ”
In this context, this ‘one’ observation, is the only observation we have, namely the development of our temperature record. In a statistical sense, the realization of our temperature series is considered a ‘single observation’ or a single draw from the data generating process we are trying to uncover.
….and you have to admit that you are in trouble if your only observation doesn’t match your hypothesis… 🙂

NickB.
February 15, 2010 11:26 am

grumpy old man (09:50:44) :
What is the feedback mechanism that makes the increase temporary?
The two suspects seem to be humidity levels and cloud formation behavior.
George Turner (09:04:22) :
The current data set is so limited that rough curve fits are too easily produced by dreaming up a couple of confounding effects whose magnitude is essentially a guess, allowing everyone to use their favorite fudge factors to get a better looking fit and reducing the science to arguments about whose fudge factor model is best.
Well said – that is a very frightening proposition if GCMs have actually been produced this way.
Ron Cram (09:07:37) :
this paper claims the tests prove AGW. It would be interesting to see an analysis of why the same tests get such different results. See http://www.springerlink.com/content/h0tx44h508602755/
Very interesting! Abstract from the linked paper:
Abstract To characterize observed global and hemispheric temperatures, previous studies have proposed two types of data-generating processes, namely, random walk and trend-stationary, offering contrasting views regarding how the climate system works. Here we present an analysis of the time series properties of global and hemispheric temperatures using modern econometric techniques. Results show that: The temperature series can be better described as trend-stationary processes with a one-time permanent shock which cannot be interpreted as part of the natural variability; climate change has affected the mean of the processes but not their variability; it has manifested in two stages in global and Northern Hemisphere temperatures during the last century, while a second stage is yet possible in the Southern Hemisphere; in terms of Article 2 of the Framework Convention on Climate Change it can be argued that significant (dangerous) anthropogenic interference with the climate system has already occurred.
Perhaps the difference is due to the GES paper only looking at CO2 and temperature, while the BR paper looks at CO2, irradiance and temperature. I think, in theory at least, econometrics techniques should give consistent results between a 2 variable analysis and a 3 variable analysis (assuming proper methods, same data set, same time frames, etc)… but for a sec let me eyeball and summarize the trends using NOAA’s summary data (assuming of course that the data is correct):
Temperature – long term upward trend since bottoming out around 1910, stalled from around 1945 to 1977/78 (http://www.climatewatch.noaa.gov/2009/articles/climate-change-global-temperature)
CO2 – exponential upward trend since the 1950’s (http://www.climatewatch.noaa.gov/2009/articles/climate-change-atmospheric-carbon-dioxide)
Irradiance – long term upward trend since the early 1900’s (http://www.climatewatch.noaa.gov/2009/articles/climate-change-incoming-sunlight)
Personally, I am quite at a loss to understand how the increase between 1910 and 1945 (~.5 C), and 1977/78-current (a smidge under .6 C) fit with a constant up-trend in irradiance over the entire period that accounts for 10% or less of warming, and an exponentially increasing level of CO2 from 1960-on – which is, as I understand it, the official explanation… but anyway
I would be very interested to see the timeframes for the two analyses. As strange as it is, I can imagine that both analyses could be technically correct if the timeframe for GES was 1960-on and BR was from 1910-on… which would then beg the question which one is more correct(?)

February 15, 2010 11:38 am

Steven Mosher
Suppose that you read a long detailed statistics paper that claimed the earth was flat. What would you conclude?>
Through direct observation it is clear the earth is flat, which the statistics you allude to above confirm. However, statistics also show that it is possible to travel in one direction and arrive back at your starting point. Since both statistical analyses are correct, I must conclude that reality is derived from a combination of the two.
I conclude therefore that the earth is a six sided cube. I offer as additional supporting evidence the statistical analysis provided by others showing the MWP as confined to Europe, which is further confined to a single face of the six sided cube, and would clearly constrain energy flows from long wave radiation as these travel at near light speed and could not turn the corner at cube edge.

George E. Smith
February 15, 2010 11:44 am

“”” Merv Hobden (05:19:43) :
My own opinion is that water vapor is such a powerful blocker of long wave IR – mainly by reflection, because of reststrahlen and the difference of refractive index, that CO2 effects are orders of magnitude lower. A simple water cell, with 1/2″ of water in it will completely block the longwave IR from a 300W tungsten lamp. And, the water does not boil, as the mechanism is reflective, not absortive. Below the cut off wavelength of 4.5microns, the water is completely transparent – the visible and near IR pass straight through. You can put your hand in front of the cell – there is some heating from near IR absorption, but little serious effect. Without the cell -Ouch! “””
Well Merv, I don’t know where you get your physical properties of water or water vapor from; but they don’t jibe with anything that I find in standard handbooks.
Far from being completely transparent below 4.5 microns, water does in fact have very significant absorption bands below 4.5 microns, and one particular band at 3.0 microns, has almost ten times the absorption coefficient that water demonstrates in the range, from about 4.0 microns to beyond 10 microns. Other bands at 2 and 1 microns are very well known; and water vapor starts having significant impact on for example the solar spectrum, at wavelengths from about 760 nm.
As to your belief; well, you call it “your own opinion” that water’s interaction with LWIR is primarily reflective; standard textbooks (such as “The Infra-Red Handbook”) show no untoward reflection from water out to at least 13 microns; that is, no reflectance that is not fully explained by normal polarised Fresnel reflection, which changes over that wavelength range in accordance with the refractive index variation. So it is never as much as 5% reflectance at normal incidence, and is mostly under 3% over that range.
Water vapor is a major greenhouse gas; THE major one, and water in its liquid and solid phases also has major climate and weather related effects; clouds for instance.
But I’m with Phil; your water cell demonstration is hardly indicative of what actually happens in the real atmosphere (or oceans either).

February 15, 2010 12:11 pm

P
Kaufman et al. (2006) indeed says that temperature is not I(1). It does not say what temperature is then: I(0), I(2), FI, trend-stationary? More importantly, Kaufman and colleagues say so, but they do not provide any evidence.

Ron Cram
February 15, 2010 12:51 pm

NickB,
Thank you for some reasonable comments. The time frame analyzed could be a big reason for the difference in the two papers.
As an aside, if you use CRU data, I think the warming from 1910-1945 is slightly greater than the warming from 1975-2005.
I would love to see a thorough and convincing refutation of CAGW.

Alan Wilkinson
February 15, 2010 1:10 pm

“Steven Mosher
Suppose that you read a long detailed statistics paper that claimed the earth was flat. What would you conclude?”
Let’s make that more realistic. Suppose you did an experiment that showed sub-atomic entities were waves. Then you did another that showed they were particles. What would you conclude?

grumpy old man
February 15, 2010 1:43 pm

NickB. (11:26:56) :
grumpy old man (09:50:44) :
What is the feedback mechanism that makes the increase temporary?
The two suspects seem to be humidity levels and cloud formation behavior.
———
Makes sense, clouds would seem to be the answer. This would indicate that the system is very stable, and would explain why we have not had runaway warming in the past. I do not see how humidity levels could be directly responsible, though.

George Turner
February 15, 2010 2:01 pm

George Smith,
What gets really weird is when you put water vapor into a huge, tall column, apply a strong gravitational force. For this experiment you need to find a water planet and roll it into the lab, which can be difficult and expensive.
Anyway, when you illuminate it with the solar spectrum your results are well within accepted parameters for a while, and then you start getting evaporation, condensation, coronal discharges, arcing, and strong emissions of X-rays and gamma rays. The X-ray and gamma ray discharges last up to 3.5 milliseconds and can have a pulse repetition frequency of over 30 Hertz, making a horrible buzzing interference with your lab equipment.
You’d think an Earth temperature black body wouldn’t be emitting gamma rays unless you blast it with an energy beam from the Death Star, but the physics of thunderstorms is frustratingly elusive.

February 15, 2010 2:07 pm

Let’s make that more realistic. Suppose you did an experiment that showed sub-atomic entities were waves. Then you did another that showed they were particles. What would you conclude?>
That they’re wavicles?

Alan Wilkinson
February 15, 2010 2:32 pm

davidmhoffer, probably. My point is that verifiable contrary results that cannot be explained require thought rather than dismissal.

Roger Knights
February 15, 2010 2:48 pm

What would you conclude?

It’s Miller time.

George Turner
February 15, 2010 2:54 pm

Davidmhoffer,
I think he’s refering to the famous Schrodinger’s physicist experiment. You put a physicist in a box. When you open the box he’ll say either that light is a particle or that it’s a wave, but until you open the box he’s saying that light is BOTH a particle and a wave. It has something to do with the superposition of quantum states or probabilities, or something. It’s been a while since I’ve taken a physics class. All I remember is that we shoved the professor into a box and ran off to a keg party.

February 15, 2010 3:10 pm

I knew the flat earth part of that would trip folks up. Lemme ’splain another way.
You get the gist: it’s the Moorian argument. The point I would make is that many people approach these problems in the “common sense” manner.
(hmm, maybe it was Wittgenstein, whatever)
Especially when operating in the normal science paradigm. Radiative physics is a given (or rather it’s given a secure status owing to a variety of factors, like we design crap that actually works by using modtran etc.). When somebody shows me a paper that claims to contradict it, it’s usually a waste of time to even read it. I can confidently (with some measure of probability that I don’t need to compute) just reject it on its face as being wrong.
That’s the tactic at least.

February 15, 2010 4:06 pm

Steven Mosher,
Especially when operating in the normal science paradigm. Radiative physics is a given (or rather it’s given a secure status owing to a variety of factors, like we design crap that actually works by using modtran etc.). When somebody shows me a paper that claims to contradict it, it’s usually a waste of time to even read it. I can confidently (with some measure of probability that I don’t need to compute) just reject it on its face as being wrong>
On one level I agree, but in the AGW debate I think that’s poor tactics. The bulk of the people in this forum have some level of knowledge of basic physics and math. For you to dismiss a paper such as the one you refer to above for your own purposes is one thing. To dismiss it in a forum such as this is not far off. But the vast majority of those swayed by the AGW arguments are not so technical, and so are easily persuaded by well worded, but wrong, arguments. Winning the debate is just as much about getting the science right as it is about understanding what arguments the public at large finds persuasive and how to correct the impression that the wrong ones leave. The world went from flat to round to flat to round many times in history. Those who knew the facts were overwhelmed by the tide of opinion informed by junk science such as that with which AR4 is riddled.
At the peak of the tech bubble, friends would bring me investment prospectuses and ask for my opinion (having been in that industry all my life). I recall one brought to me by a lawyer of considerable intelligence and repute. After reading it I told him this: “This is the best prospectus I have ever read. It is detailed, covers every business question one might ask, has an excellent business plan and demonstrates good balance between revenue projections and cost containment.” So I should invest? he asked. I responded, “Well, as good as the prospectus is, they don’t actually have a product. There’s 200 pages of business case here, but no product. Not even a proposed product. Where does the money go for R&D when there isn’t a product, and where does the revenue come from when there’s nothing to sell?”
I managed to keep an unscrupulous investment scheme from picking my lawyer’s pockets. He phoned me a year later to report that they raised $40 million and then went bankrupt. The problem with AGW is that they’re not trying to pick a few people’s pockets, they’re trying to pick everyone’s pockets. And if they have a report that shows the earth is flat… and people are accepting it… then yeah, I have to read it and be able to refute it as my small part of keeping their hands out of my pockets.
Although I continue to be of the opinion that the flat earth is consistent with a six sided cube.

February 15, 2010 4:24 pm

If this paper does get published please let me know – populartechnology (at) gmail (dot) com
I would like to add it to the list,
500 Peer-Reviewed Papers Supporting Skepticism of “Man-Made” Global Warming
Yes it does give the paper more credibility if it is published. No it does not matter if it is published in Nature or Science, there are plenty of reputable climate journals to get it published in.

Tom P
February 15, 2010 4:29 pm

Richard Tol (12:11:33) :
It would hardly be I(0) so I assume Kaufmann believes it should be at least I(2). You’re familiar with the statistics: what do you calculate it to be?

February 15, 2010 4:29 pm

steven mosher:

..you can go get the results yourself. they are available.
terabytes. Hop over to Lucia’s she works that problem and some of her regulars as well.

Of course, I follow Lucia’s work, it’s excellent.
I’m more about understanding the physics than the statistics. I hope others will perform statistical analysis of the terabytes.
It would be interesting to see some work which covers:
a) analysis of some individual models forecast vs reality for temperature predictions 2001-2010 (rather than ensembles – how good is one model?)
b) analysis of specific models forecast vs reality for other outputs, 2001-2010:
– ocean temps in specific regions
– humidity
– cloud cover
– albedo
– tropopause height in different locations
c) analysis of the “hindcast results” of specific models, 1950-2000:
– regional temps
– ocean temps in specific regions
– humidity
– cloud cover
– albedo
– tropopause height in different locations
And in a journal would be even better but not essential of course.
Now this would be interesting.
Maybe there are papers around which have done this (I know about the Douglass and Christy paper); I have been looking at other areas so far.

George Turner
February 15, 2010 4:45 pm

Steven Mosher,
What’s more interesting is that the Earth IS flat in most climate models. It’s wrapped around a sphere but the underlying mathematics is still a flat (and 2D) non-rotating surface with Coriolis forces added artificially. Instead of vertical coordinates they use pressure coordinates to make the math easier, but as objects move “up and down” they don’t conserve their momentum, they tend to stay over the same spot on the surface. Gravity is also constant with altitude.
I don’t think a finer mesh is going to get any such model to output an accurate plot of equilibrium temperature versus CO2 level.

February 15, 2010 4:59 pm

Is CO2 being measured anywhere besides Mauna Loa? Isn’t it a little disingenuous to extrapolate global CO2 levels from the measurements at a volcano?
According to the USGS (http://hvo.wr.usgs.gov/maunaloa/current/monitoringdata.html), “gathering SO2 and CO2 data while Mauna Loa is quiet is important to establish normal background levels for gases emitted from the volcano.”
If they’re still establishing normal background levels, then any variation so far tells us… what, exactly?
Is there another CO2 monitoring facility at Mauna Loa? I mean, one that’s not intended to monitor volcanic gasses? Please tell me there is.

February 15, 2010 5:27 pm

George Turner
What’s more interesting is that the Earth IS flat in most climate models. It’s wrapped around a sphere but the underlying mathematics is still a flat (and 2D) non-rotating surface with Coriolis forces added artificially>
Seriously? My six-sided cube would therefore be an improvement? OK, all joking aside… seriously?

February 15, 2010 5:36 pm

Some Guy:
Excellent point!

Tom P
February 15, 2010 5:45 pm

VS (09:34:02) :
I’d like to see your results from the whole period of GISTEMP, 1880 to present, and Hadcrut, from 1850. Truncation of a series will tend to hide any I(2) behaviour. Both full series show very similar positive second derivatives.

Merv Hobden
February 15, 2010 6:09 pm

Phil, George,
Firstly, the water cell has very thin windows, made from large microscope cover glasses.
Secondly, a tungsten lamp is a broad band emitter with less than 2% of its energy input transformed into the visible. 48% of its input is transformed into the near IR, the rest comes out as ‘heat’ – longwave IR to well above 5micron. So a 300W tungsten lamp outputs 144W in the near IR, and the water cell was used to isolate this spectral area.
Thirdly, the water absorption spectrum is continuous, from 2E-4 1/cm in the visible through to 5E3 1/cm at 10microns. There are peaks at around 3micron, 2 micron, and 1.5 micron, but they are generally part of what is a logarithmic increase with wavelength, unlike the discrete absorption lines seen with CO2. This spectral response continues over the full longwave range, only dropping by an order of magnitude from 10 micron into what we call the TeraHertz region.
If we clear the measurement of the reflectivity due to the difference of refractive index of the glass, water, and back into air, the difference is only about 10% at normal incidence. I did check for Fabry-Perot resonance, due to the thickness of the cell wall – it was not apparent, and with no water in the cell the ouch! factor was still very significant.
Water cells have been used in microscopy for at least 150 years, to protect valuable objectives from heat radiation. There are now specialist glasses to perform that function, but most of these block out the near IR as well as longwave IR. If ordinary soda glass alone was effective, I am sure that our Victorian ancestors would have made good use of it – but that was the crown glass in their objectives after all. Glass can be used in greenhouses at the relatively low intensity of the transformation from the near IR to longwave IR, however absorptive glass would rapidly perish if you used it alone in front of an intense long-wave IR source. In the above experiment close to 100W of heat was going somewhere – less the reflection and absorption of the bulb envelope – if the glass in the cell absorbed it, the water would boil, if the water absorbed it, it would boil.
Radiation back into space from the heated surface of the planet covers the range 5 – 60microns. What the experiment demonstrates is the huge difference between water, and gases such as CO2. Water vapor is very little different to water in its spectral response, the lower molecular density decreases the effect, and the effect is further diminished at lower pressures, but this also applies to CO2.
What I would like to see is the experimental method used to establish the ‘reflectivity’ of CO2 – it does not seem to be apparent – most sources only state the absorption, as is the case for water. Is this just a theoretical calculation stuffed into a computer model, or is it based on some real science? I would love to find out!

Bart
February 15, 2010 6:54 pm

davidmhoffer (10:25:53) :
“When the system stops oscillating, the amount of energy going from Sun to Earth will equal exactly the amount of energy being radiated back by the Earth. Hence, the change in the amount of CO2 “slice” causes a temporary oscillation, but no long term temperature change.”
No. You are still confusing power and energy. The power flows stabilize to the same level, but you have charged up the “capacitance” energy of the Earth, and it is therefore hotter.
davidmhoffer (10:36:46) :
” Once the lake fills up and gets to the top of the dam, the flow rate downstream goes to the same amount it was before.”
Exactly! The flow rate is the same, but there is now the potential energy of the mass of water stored behind the dam.
The key to falsifying the AGW hypothesis is to show that either the “capacitance” is not changing as a result of anthropogenic emissions, or that the magnitude of that change due to anthropogenic emissions is insignificant, not in denying well established radiative physics.

Bart
February 15, 2010 7:05 pm

steven mosher (10:34:08) :
Skeptic2: Here I show how sunspots correlate with the temperature data.
Warmist: Your pal said the data was screwed up.

Skeptic2: But, you said it wasn’t.

February 15, 2010 8:13 pm

Bart:
The key to falsifying the AGW hypothesis is to show that either the “capacitance” is not changing as a result of anthropogenic emissions, or that the magnitude of that change due to anthropogenic emissions is insignificant, not in denying well established radiative physics>
You are correct and missing the point all at the same time. Yes the lake fills up and so there is potential energy stored in the lake. And yes, that slice of CO2 charges the earth capacitor making it hotter. BUT:
The AGW Hypothesis is that doubling CO2 adds 3.7 watts/m2 being radiated toward earth surface resulting in a direct rise of 1.1 degrees. At a mean earth temperature of 288 K (15 C) a temperature increase of 1.1 degrees would result in a rise in earth radiance to outer space of 6.1 watts/m2. We can delve into ever more detailed analysis of the who said what who meant what variety, but that math does not work. In order to achieve the proposed temperature increase the AGW Hypothesis is built on the assumption that there is a long term positive feedback from CO2 that exceeds the resulting long term negative feedbacks. In the most ridiculous version a tipping point driving runaway warming happens. THIS is the point I am trying to make about the physics.
They are claiming that if the height of the dam goes up one meter, the height of the water will go up two meters. They are claiming that the CO2 “radiates” the surface with 3.7 watts/m2 when all it can do is slow down, temporarily, the amount of energy being radiated out by the planet to space. My point is that there IS NO EXTRA 3.7 w/m2 from CO2; there’s no energy generated by CO2 in the first place. Raising the dam doesn’t create water that flows from the dam into the lake, does it? That’s the AGW claim! They attribute to the dam the ability to create water just as they attribute to the CO2 the ability to create 3.7 w/m2. They compound attributing to CO2 the ability to create power with overestimating the temperature change by assuming a linear temperature increase versus power input while ignoring an exponential increase in radiance as a negative feedback. They compound THAT by assuming that the acceleration of fossil fuel consumption from 1950 to 1990 will continue… not the rise in consumption, the ACCELERATION in the rise of consumption. Well, we ought to be at about 300 million bbl per day by now. OOOPS, we’re only at 100; the acceleration went away.
I was making a point, and my point stands. Your point that technically there’s a bit of energy held back that results in a slight temp increase? Fine. CO2 still doesn’t create power/energy/calories/watts/ergs/btu/hp, and any explanation that says it does is malarkey. There are better words, but the mods will let malarkey through. The insertion of an extra layer of CO2 creates a fluctuation that damps out, and the amplitude of the oscillation is out of proportion to the final change in temperature. That’s what the original paper that started this thread said, that’s what I was trying to explain, AND THAT’S WHY THE REAL WORLD ISN’T DOING WHAT THE CLIMATE MODELS PREDICTED.
I find your attempt to distract people from the crux of the argument by raising a slight technicality and then claiming that I am denying well-established physics to be… a warmist tactic that is disingenuous to the core. Again, other words are more accurate, but I am sticking to ones the mods will allow.

George Turner
February 15, 2010 9:46 pm

davidmhoffer,
Yes, seriously. Most models are simplified versions of the shallow-water equations developed by a guy who wore a powdered wig and tube socks. The exception is MIT’s model, which, as I’ve been told, is pure Navier-Stokes, which is great except that the Navier-Stokes equations are invalid under conditions of evaporation or condensation (i.e., the weather).
Fluid flow is a very difficult problem, and weather flows depend on tiny differences which would disappear into the round-off error of an airplane wing plowing along at a hundred miles an hour – a flow which itself is assigned empirical fudge factors for turbulence.
As one research paper pointed out, the equations used to model fluid flow are reversible, so they’re perfectly happy unmixing a fluid. They can have time flow forward or backward with equal accuracy, which means that, as currently formulated, they ignore effects that maintain important laws of thermodynamics. In short, they ignore entropy, turbulence, and other crucial phenomena, but they do at least provide a passable guess at some short-term or large-scale behaviors.
So they’re not completely useless, just not accurate, and there’s no way to achieve accuracy as currently mathematically formulated.

February 15, 2010 10:05 pm

George Turner;
thanks for answering, but I am flabbergasted. I knew they ignored a lot of thermodynamics and entropy and so on which I have been trying to explain to some people, but this revelation has my head spinning in disbelief. Think about this series of questions.
Q. What area of the earth do we have the least data about?
A. Arctic Zones
Q. What area of the earth shows the greatest variance from mean temp?
A. Arctic Zones
Q. Which parts of the earth radiate the most energy into space compared to what they retain from solar?
A. Arctic Zones.
Q. If there were errors made in constructing the models, they would be most difficult to discern when applying the results to the areas we have the least data about to correlate to, which would be?
A. Arctic Zones.
So, if we started with a 2D model and extrapolated it onto a sphere, the accuracy of the model would be greatest at the equatorial regions and worst at the Arctic Zones. You know the Arctic Zones… the ones with the most temp variability, the most negative feedback, and the least data with which to figure out if the model was right or not. So the models are by default the most inaccurate at the very spot where we need the most accuracy to figure out whether net heating or cooling is happening and whether the model is right or not.
Should I assume the modelers are not able to figure this out themselves, or should I assume they got a bad feeling about what they might find out when they clean it up?

Street
February 15, 2010 10:32 pm

davidmhoffer (20:13:02) :
I think you’ve taken the term radiative forcing too literally. From wikipedia (not the best source… got it):
“In climate science, radiative forcing is loosely defined as the change in net irradiance at the atmospheric boundary between the troposphere and the stratosphere (the tropopause). Net irradiance is the difference between the incoming radiation energy and the outgoing radiation energy in a given climate system and is measured in Watts per square meter. ”
So AGW is not saying CO2 generates that energy. It’s saying it’s changing the transmission rate of the energy between parts of the atmosphere, i.e. increasing energy storage (something you agree it can do).
This effect of CO2 does not violate thermodynamics. CO2 slows escape of energy, so outgoing radiation is REDUCED by CO2. Since incoming radiation is still constant (relatively), then there is energy buildup in the atmosphere until the incoming/outgoing radiation balances again. More energy in the atmosphere = higher temps.
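The balance described here can be made concrete with a toy zero-dimensional model (my own sketch, not from anyone in this thread or from a GCM; the heat capacity C is purely illustrative): absorbed solar input is held fixed, emission rises as T^4, and a constant forcing F stands in for the reduced outgoing radiation. Temperature climbs until outgoing again matches incoming, then stays at the new level for as long as F persists.

```python
# Toy zero-dimensional energy balance (illustrative numbers, not a real model).
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
C = 1.0e8                # heat capacity per unit area, J m^-2 K^-1 (made up)
ABSORBED = 240.0         # absorbed solar, W/m^2 (roughly Earth's value)
F = 3.7                  # forcing, W/m^2 (the canonical CO2-doubling figure)

def equilibrium_temp(forcing):
    """Temperature at which emission balances absorption plus forcing."""
    return ((ABSORBED + forcing) / SIGMA) ** 0.25

t = equilibrium_temp(0.0)            # start in balance (~255 K effective temp)
dt = 86400.0                         # one-day time steps
for _ in range(365 * 50):            # integrate forward 50 years
    net_in = ABSORBED - (SIGMA * t ** 4 - F)   # forcing reduces net outgoing
    t += dt * net_in / C

print(f"new steady state: {t:.2f} K (analytic: {equilibrium_temp(F):.2f} K)")
```

The temperature settles roughly 1 K higher and stays there; the heat capacity controls only how fast it gets there, not where it ends up.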
As a skeptic, I agree with all of this. Where I think AGW went wrong is in calculating the effect. I believe that other atmospheric processes produce a negative feedback, not a positive one as AGW requires.
The idea that CO2 may be I(2) is interesting. Since the CO2 is not removed from the air, perhaps the temperature response is an effect of the generation of the CO2 and not of the CO2 itself. So what is it about burning fossil fuels that would have a temperature effect with a half-life of 1 year? The chemical energy released? The black carbon?
Or maybe the correlation isn’t physical, but socio-economic. I’ll bet the increase in the rate of CO2 emissions is highly correlated with the UHI. Every time we build a building or add a road, increasing UHI, we also increase the rate of CO2 emissions to power the building or fuel the car that drives on the road. That might make urbanization an I(1) variable, proving the temperature record is badly contaminated by UHI.

Brent Hargreaves
February 16, 2010 3:48 am

Some Guy (16:59:14) : You asked about sources of CO2 PPMs other than Mauna Loa. I wondered the same yesterday and did some digging. There are several other such studies in existence, and I am mighty relieved to report that they show (a) The same PPM to within about 10 ppm (b) The same upward trend (c) The same annual variation.
Why do I say “relieved”? Because there is a danger, having been so deceived by the likes of Mann and Jones, that we will cease believing in anything. Of course, that would be a gross overreaction; uberscepticism. We “sceptics”, if we accept such a label, are not naysayers; we just want to differentiate between untested and tested hypotheses. Between, on the one hand, plausible and well-presented fallacies and, on the other, repeatable solid confirmable truths.
Looking at the Mauna Loa graph, I found it rather too tidy, rather too regular. I intuited that measurement error and natural variation ought to make the shape rather more chaotic. Having found independent verification, I am glad to report that my intuition was wrong; I conclude that the Mauna Loa dataset is clean and honest.

VS
February 16, 2010 4:16 am

Tom P (17:45:14) :
“I’d like to see your results from the whole period of GISTEMP, 1880 to present, and Hadcrut, from 1850. Truncation of a series will tend to hide any I(2) behaviour. Both full series show very similar positive second derivatives.”
Tom, where do Kaufmann et al (2006) state that temperature is not I(1)?
I’ve scrolled through their paper (I didn’t read it carefully, I admit; the baseless hypothesizing was too much to handle while digesting lunch ;), and the ADF stat for GLOBL (the temp series) is not listed. They do, however, find that RFAGG is I(1) in Table 1.
They then proceed to estimate a cointegration relationship between GLOBL and RFAGG, which implies that they think that GLOBL is I(1), because otherwise there couldn’t be a cointegration relationship to start with.
Furthermore, in an earlier paper, Kaufmann and Stern (2002), write:
“Consistent with the results of previous research (Woodward and Gray, 1995; Bloomfield and Nychka, 1992; Stern and Kaufmann, 1999), the results in Table 1 show that the temperature data are I(0) or I(1).”
Table 1 in Kaufmann and Stern (2002) furthermore gives the ADF stats for the temperature series NHEM and SHEM.
Northern Hemisphere (test statistic)
Level (don’t reject h0) -2.85.
First difference (reject h0) -11.67
Conclusion: I(1)
Southern Hemisphere (test statistic)
Level (reject h0) -3.55
Conclusion: I(0)
Where the employed significance level is 0.05 (with 0.1 both series would be I(1)). In any case, there seems to be no concrete evidence of temperature being I(2).
I also went on to look through Kaufmann and Stern (2002)’s references, listed above. Note that I had no (free) access to Bloomfield and Nychka (1992), so I didn’t check that one.
Woodward and Gray (1995): In section 6 they reject the H0 that the series is I(0), but for some curious reason they fail to continue testing for a unit root in the I(1) series.
Stern and Kaufmann (1999): In section 3.3 they perform the univariate unit root tests, with some modifications, and conclude that “The KPSS test shows that all the temperature series are I(1).” They furthermore state that these tests were performed on the longest series available.
Finally, I took the data you suggested (CRUTEM3, GISTEMP, HADCRUT) and performed the tests. I employed the same methodology as earlier (e.g. SIC-based lag selection). For all three datasets the conclusion is unambiguous: global temperatures are I(1).
** CRUTEM3, global mean, 1850-2008:
Level series, ADF test statistic (p-value):
-0.329923 (0.9164)
First difference series, ADF test statistic (p-value):
-13.06345 (0.0000)
Conclusion: I(1)
** GISTEMP, global mean, 1881-2008:
Level series, ADF test statistic (p-value):
-0.168613 (0.6234)
First difference series, ADF test statistic (p-value):
-11.53925 (0.0000)
Conclusion: I(1)
** HADCRUT, global mean, 1850-2008:
Level series, ADF test statistic (p-value):
-1.061592 (0.2597)
First difference series, ADF test statistic (p-value):
-11.45482 (0.0000)
Conclusion: I(1)

So, can we agree that temperature is I(1)? 🙂
PS. I notice that some people in this thread are taking the results in the paper and then figuring out how to 'fit' a physical model to them. Note that this kind of practice (i.e. data mining) is heavily frowned upon in the econometrics community: you first come up with your hypothesis, and then you test it. That's the only clean way to go about it.

VS
February 16, 2010 4:31 am

PPS.
I just saw that I used the ‘Means based on Land-surface air temperature anomalies only’ version of GISTEMP. So I ran the test again for ‘Combined land-surface air and sea-surface water temperature anomalies’.
** GISTEMP, combined, global mean, 1881-2008:
Level series, ADF test statistic (p-value):
-0.543388 (0.4722)
First difference series, ADF test statistic (p-value):
-5.585529 (0.0000)
Conclusion: …this one is I(1) too.

VS
February 16, 2010 5:09 am

PPPS. Excuse the spam, but the results for that last test should read:
Levels: -0.301710 (0.5752)
First differences: -10.84587 (0.0000)
The conclusion doesn’t change.

Tom P
February 16, 2010 5:51 am

VS (04:16:37) :
“Where do Kaufmann et al (2006) state that temperature is not I(1)?”
It’s on page 272:
“Consistent with our argument that temperature itself is not I(1), the increase in solar activity has little effect beyond the first year.”
Stern and Kaufmann (1999) give some very useful background concerning the (mis)use of univariate tests to determine the stationarity order of temperature, and caveats which seems to have been ignored by Beenstock and Reingewertz:
“Statistical theory suggests that it will be difficult to detect an I(2) trend in a noisy time series such as global and hemispheric temperature series especially when the series is inappropriately approximated as a purely autoregressive process (Hamilton, 1994; Schwert, 1989; Phillips and Perron, 1988; Kim and Schmidt, 1990; Harvey, 1993; Pantula, 1991). An alternative approach is to model the I(2) trend and noise processes separately using the structural time series approach promoted by Harvey (1989).”
Stern and Kaufmann suggest how Beenstock and Reingewertz might have gone awry in their analysis.

February 16, 2010 6:20 am

Street;
This effect of CO2 does not violate thermodynamics. CO2 slows escape of energy, so outgoing radiation is REDUCED by CO2. Since incoming radiation is still constant (relatively), then there is energy buildup in the atmosphere until the incoming/outgoing radiation balances again. More energy in the atmosphere = higher temps>
1. The outgoing radiation is in fact reduced. TEMPORARILY
2. As you said, there is energy build up until incoming and outgoing balance….which is why the reduced outgoing radiation from CO2 increase is TEMPORARY
3. The process by which the complexities of the system as a whole fluctuate to arrive at a new equilibrium produce oscillations in temperature that are TEMPORARY.
4. When a new equilibrium point is reached, the steady state will be a higher temperature that is a minor increase in comparison to the TEMPORARY oscillations.

Phil.
February 16, 2010 7:21 am

davidmhoffer (20:13:02) :
Bart:
The key to falsifying the AGW hypothesis is to show that either the “capacitance” is not changing as a result of anthropogenic emissions, or that the magnitude of that change due to anthropogenic emissions is insignificant, not in denying well established radiative physics>
You are correct and missing the point all at the same time. Yes the lake fills up and so there is potential energy stored in the lake. And yes, that slice of CO2 charges the earth capacitor making it hotter. BUT:
The AGW Hypothesis is that doubling CO2 adds 3.7 watts/m2 being radiated toward earth surface resulting in a direct rise of 1.1 degrees. At a mean earth temperature of 288 K (15 C) a temperature increase of 1.1 degrees would result in a rise in earth radiance to outer space of 6.1 watts/m2. We can delve into ever more detailed analysis of the who said what who meant what variety, but that math does not work. In order to achieve the proposed temperature increase the AGW Hypothesis is built on the assumption that there is a long term positive feedback from CO2 that exceeds the resulting long term negative feedbacks. In the most ridiculous version a tipping point driving runaway warming happens. THIS is the point I am trying to make about the physics.

And what you’ve succeeded in doing is reveal that you don’t understand the physics!
At a mean earth temperature of 288 K (15 C) a temperature increase of 1.1 degrees would result in a rise in earth radiance to outer space of 6.1 watts/m2.
This is not true, the correct version would be: At a mean earth temperature of 288 K (15 C) a temperature increase of 1.1 degrees would result in a rise in earth radiance back into the atmosphere of 6.1 watts/m2. Not all of that radiance will make it back into space because of absorption, scattering etc., according to Trenberth about 60.2% (235/390) of the surface radiance makes it into space so that’s 0.602*6.1W/m^2 = 3.7W/m^2!
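The arithmetic above is easy to check with the Stefan–Boltzmann law (my own sketch; the 235/390 escape fraction is the Trenberth figure quoted in the comment):

```python
# Check: extra blackbody emission from a 1.1 K rise at 288 K, and the
# fraction of it that reaches space per Trenberth's 235/390 budget.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiance(t_kelvin):
    """Blackbody emission, W/m^2."""
    return SIGMA * t_kelvin ** 4

delta = radiance(288.0 + 1.1) - radiance(288.0)  # extra surface emission
to_space = (235.0 / 390.0) * delta               # portion escaping to space

print(f"extra surface radiance: {delta:.2f} W/m^2")    # ~6.0
print(f"escaping to space:      {to_space:.2f} W/m^2") # ~3.6
```

Rounding aside (6.0 here vs. the 6.1 quoted), the escaping flux does come back to roughly the 3.7 W/m^2 forcing figure.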

Street
February 16, 2010 8:05 am

davidmhoffer (06:20:11) :
Please keep in mind, I’m just clarifying how the physics works. I disagree with AGW about the magnitude of the effects. I am also not talking about what this paper implies as no one has suggested a physical mechanism by which CO2 would have such a fast effect. I believe it may exist, but we can’t really discuss it until someone figures out what it is…..
“1. The outgoing radiation is in fact reduced. TEMPORARILY”
According to AGW, the reduction persists until the temperature goes up to a level that increases outgoing radiation to balance the input. In that sense it is temporary, but the energy storage (temp) of the atmosphere persists as long as the CO2 is there. The physical mechanism for this in AGW is a long-term process.
“2. As you said, there is energy build up until incoming and outgoing balance….which is why the reduced outgoing radiation from CO2 increase is TEMPORARY”
Same as #1.
“3. The process by which the complexities of the system as a whole fluctuate to arrive at a new equilibrium produce oscillations in temperature that are TEMPORARY.”
That statement cannot be evaluated until we know what processes we are talking about. The radiative physics alone would imply the temperature increase would persist as long as the CO2 persists. With negative feedback, I believe this temp increase would be small, but it would exist. If you’re talking about the results of this paper, again, we don’t know the physical mechanism that results in CO2 being an I(2).
“4. When a new equilibrium point is reached, the steady state will be a higher temperature that is a minor increase in comparison to the TEMPORARY oscillations.”
If you’re basing that on this paper, then I agree that it may be true. However, without a physical mechanism, all we have are some interesting statistics.

February 16, 2010 8:19 am

Phil;
This is not true, the correct version would be: At a mean earth temperature of 288 K (15 C) a temperature increase of 1.1 degrees would result in a rise in earth radiance back into the atmosphere of 6.1 watts/m2. Not all of that radiance will make it back into space because of absorption, scattering etc., according to Trenberth about 60.2% (235/390) of the surface radiance makes it into space so that’s 0.602*6.1W/m^2 = 3.7W/m^2>
Yes! So there’s an “extra” 3.7 w/m2 going down, and an “extra” 3.7 w/m2 going up… which nets to… doing the math in my head here… zero. So the amount of energy going into the system equals exactly the amount coming out over the long term. Except wait a second… if the CO2 is already heated up enough to re-radiate an “extra” 3.7 w/m2 down, then it is ALSO hot enough to re-radiate an “extra” 3.7 w/m2 up, so it didn’t even need a boost from earth radiance at all. Now that of course is not how it would happen; I’m just making a point. The temperature gradient would change, and the intensity of the new curve would be interesting to understand because it would result in different temp changes at different layers. But the energy balance must be zero in the long term, and the AGW theories are not consistent with that.

NickB.
February 16, 2010 8:20 am

@ grumpy old man (13:43:10)
The humidity theory was a reference to the post by Gary Palmgren (16:15:27) that summarized Miskolczi’s theory as such:
Miskolczi claims that the semitransparent nature of the atmosphere in contact with an essentially infinite source of greenhouse gas in the form of water vapor from the oceans is in a state of dynamic equilibrium. As CO2 increases, a little water vapor rains out to keep the net optical density of the atmosphere constant. Remarkably, radiosonde data shows that the humidity above 300 mb has decreased over the last 50 years as CO2 has gone up. This fact rejects all of the GCMs that assume constant relative humidity (which is, or was, all of them).
@ Some Guy (16:59:14)
@ Brent Hargreaves (03:48:13)
I don’t have the link handy but there was a link here to a report about “lumpy” CO2 distributions detected by one of the NASA satellites – the “lumpy” part was very much overstated (delta was in the neighborhood of 6 or 8 ppm if I remember correctly, definitely within the 10 ppm Brent referenced) – but the average distribution was well in line with the Mauna Loa observations.
So the theory that CO2 is quickly and relatively well distributed through the atmosphere seems to be confirmed, and the Mauna Loa data seems to be a valid representation of atmospheric CO2 levels as a whole.

Phil.
February 16, 2010 9:48 am

davidmhoffer (08:19:57) :
Phil;
This is not true, the correct version would be: At a mean earth temperature of 288 K (15 C) a temperature increase of 1.1 degrees would result in a rise in earth radiance back into the atmosphere of 6.1 watts/m2. Not all of that radiance will make it back into space because of absorption, scattering etc., according to Trenberth about 60.2% (235/390) of the surface radiance makes it into space so that’s 0.602*6.1W/m^2 = 3.7W/m^2>
Yes! So there’s an “extra” 3.7 w/m2 going down, and an “extra” 3.7 w/m2 going up… which nets to… doing the math in my head here… zero. So the amount of energy going into the system equals exactly the amount coming out over the long term. Except wait a second… if the CO2 is already heated up enough to re-radiate an “extra” 3.7 w/m2 down, then it is ALSO hot enough to re-radiate an “extra” 3.7 w/m2 up, so it didn’t even need a boost from earth radiance at all. Now that of course is not how it would happen; I’m just making a point. The temperature gradient would change, and the intensity of the new curve would be interesting to understand because it would result in different temp changes at different layers. But the energy balance must be zero in the long term, and the AGW theories are not consistent with that.

Again you reveal your ignorance, as that is exactly the expectation of AGW theory: increase the CO2 concentration in the atmosphere and the surface temperature must increase to balance the energy fluxes at the top of the atmosphere!

Lon Hocker
February 16, 2010 9:55 am

Thank you Nick B. for the humidity info. I wish I understood what the mechanism was for causing the water vapor to decrease when CO2 rises.
Now that I have a dog in this hunt (http://www.2bc3.com/warming.html), I’ll change to my real name Lon Hocker, from my accurate name “grumpy old man”.

George Turner
February 16, 2010 10:17 am

davidm and phil,
To be picky, the average surface temperature isn’t sufficient to calculate radiance, because radiance depends on the fourth power of the temperature of each part of the surface used to compute the average.
If half the planet (night side) is at absolute zero and half (day side) is at twice 288 K, the radiance is sigma*(1/2*0^4 + 1/2*(2*T)^4) = 8*sigma*T^4, where T is the average temperature and the 1/2’s are there because half the planet’s area is cold and half is hot. So instead of radiating 390 W/m^2 it radiates 3,120 W/m^2, but the average temperature is EXACTLY the same.
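A two-patch sketch (mine) makes the point numerically, since radiance averages over T^4, not over T:

```python
# Two planets with the same average temperature but very different emission.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def mean_radiance(patch_temps):
    """Mean blackbody emission over equal-area patches, W/m^2."""
    return sum(SIGMA * t ** 4 for t in patch_temps) / len(patch_temps)

uniform = mean_radiance([288.0, 288.0])  # whole planet at 288 K
extreme = mean_radiance([0.0, 576.0])    # half at 0 K, half at 2 x 288 K

print(f"uniform: {uniform:.0f} W/m^2, extreme: {extreme:.0f} W/m^2")
# Both planets average 288 K, but the second emits 8 times as much.
```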

NickB.
February 16, 2010 10:48 am

Lon Hocker (09:55:45)
Welcome sir! The first rule of Fight Club is you do not talk about Fight Club 😉
A quick note on Miskolczi: the last time I looked, at least, the underlying physics for his theory is very complex, and there have been allegations made by the RC crowd of fundamental flaws in his approach. Again, AFAIK, these critiques have not been responded to. His theory does seem to match the observed data better than the prevailing consensus approach used in the GCMs, so there very well could be something groundbreaking here. Here’s the WUWT thread on him: http://wattsupwiththat.com/2008/06/26/debate-thread-miskolczi-semi-transparent-atmosphere-model/

VS
February 16, 2010 11:10 am

Tom P,
OK, I see I have to read the paper more carefully. On page 252, Kaufmann et al (2006) write:
“Instead, the results indicate that the time series for temperature, anthropogenic emissions of CO2 and their atmospheric concentrations contain a stochastic trend.”
Like everybody else, they find that temperature and CO2 are not I(0). On page 253 they then infer (second part of Table I) that CO2 is I(2). They don’t publish the test statistics for the temperature series, though, which is awkward, especially since it is the most important variable in their analysis.
They assert that if they find a cointegrating relationship between temperatures and RFAGG (aggregated radiative forcing, including CO2), they will prove that human activity, by causing RFAGG to have a stochastic trend (through economic development and such), causes temperature to have a stochastic trend too.
“Rather, the stochastic trends in temperature reflect the stochastic trends in the radiative forcing of greenhouse gases and anthropogenic sulfur emissions. These trends are like “fingerprints” that can be used to identify the effect of radiative forcing on temperature.”
So their statement on page 271, that “consistent with our argument that temperature itself is not I(1), the increase in solar activity has little effect beyond the first year.” is actually their hypothesis. Namely, that the stochastic trend in temperature can be explained by the stochastic trend in GHG’s. They actually refer to GLOBL (the global mean temperature series, so not ‘temperature in itself’, according to them) as being I(1) on page 267. This is opinion, not data driven fact.
Kaufmann and Stern, over and over again (I’ve now read enough of their papers) bend over backward to allow for some kind of relationship between GHG’s and temperatures within a cointegration framework. It’s clear that they have their preferred hypothesis, and that they are attempting to find a matching method. However, as both Beenstock and Reingewertz (2009), and Kaufmann and Stern (2000) state, the issue with GHG’s and temperatures is that one is I(1) and the other is I(2).
“Normally, this difference would be sufficient to reject the hypothesis that global temperature is related to the radiative forcing of greenhouse gases, since I(1) and I(2) variables are asymptotically independent” BR2009
“The univariate tests indicate that the temperature data are I(1) while the trace gases are I(2). That is, the gases contain stochastic slope components that are not present in the temperature series. This result implies that there cannot be a linear long-run relation between gases and temperature. These univariate tests are not, however, conclusive.” KS2000
They go on to present their model, but then fail to acknowledge that they are trying to cointegrate the two stochastic trends of different order (i.e. GHG’s and temperature) as if nothing is wrong. Beenstock and Reingewertz state the following about Kaufmann and Stern (2006):
“Others noticed that they [GHG’s] are I(2) variables, but inappropriately used standard cointegration tests instead of polynomial cointegration tests”
Beenstock and Reingewertz (2009) then say, ‘OK, we’ll give the AGWH the benefit of the doubt’ and take temperature to be I(1) and allow GHG’s to be I(2) and then test, very generally, for polynomial cointegration. This would allow for a correlation structure between temperatures and CO2.
And then they reject that relationship. So, simply put, it boils down to whom you believe. Is temperature I(1) or I(2) or what? Here are the results of my little literature review on the nature of temperature data:
** Woodward and Grey (1995)
– reject I(0), don’t test for I(1)
** Kaufmann and Stern (1999)
– confirm I(1) for all series
** Kaufmann and Stern (2000)
– ADF and KPSS tests indicate I(1) for NHEM, SHEM and GLOB
– PP and SP tests indicate I(0) for NHEM, SHEM and GLOB
** Kaufmann and Stern (2002)
– confirm I(1) for NHEM
– find I(0) for SHEM
** Beenstock and Reingewertz (2009)
– confirm I(1)
I also managed to replicate the tests using four different datasets (two versions of GISSTEMP, HADCRUT, CRUTEM3), and found, in all instances, with or without drift in the test equation, that the series are I(1).
So I guess temperature is I(1), right? And GHG’s I(2)? And then the Beenstock and Reingewertz approach is far more correct than the Kaufmann and Stern (2006) approach (which is plain wrong if temperature and GHG’s are not of the same I(p))?
Then I guess I’ll go with Beenstock and Reingewertz.
PS. After reading three Kaufmann and Stern papers I have to say that they grant themselves a lot of leeway in picking which test results are ‘conclusive’ and which can be ignored. Most of the time, the results that can be ignored are the ones that are in conflict with hypothesized physical relationships within the AGWH… the same hypothesized physical relationships they are trying to test.
Very strange methodology.

Bart
February 16, 2010 11:46 am

Phil. (09:48:27) :
“Again you reveal your ignorance …”
Having dealt with this guy in the past, I can say this is beyond the snark of the Pot vis a vis the Kettle. Don’t let him waste your time.
davidmhoffer (20:13:02) :
“In order to achieve the proposed temperature increase the AGW Hypothesis is built on the assumption that there is a long term positive feedback from CO2 that exceeds the resulting long term negative feedbacks.”
I believe you have it backwards. The AGW hypothesis relies on short term positive feedback, particularly with water vapor, to amplify the increase in the analogous capacitance or the height of the dam, but long term, the T^4 feedback forms a dominating outer feedback loop to keep temperature BIBO stable overall. At least, this is the consensus view, though the wilder-eyed prophets of doom preach that positive feedback of melting permafrost, etc., will overwhelm T^4 and send us spiraling down the path of a runaway greenhouse. I agree, based on past history, that this scenario is dubious at best. But, falsification of the runaway hypothesis will not incapacitate the AGW beast.
“…by raising a slight technicality and then claiming…”
Then, please stop saying things like “the amount of energy going from Sun to Earth will equal exactly the amount of energy being radiated back by the Earth.” Say, the power or the energy flux. And, “But the height of the dam[] makes no difference at all.” It does make a difference. More CO2 in the air will raise temperatures on Earth, though I agree, to a much lesser extent than has been claimed.

February 16, 2010 12:33 pm

Bart
Then, please stop saying things like “the amount of energy going from Sun to Earth will equal exactly the amount of energy being radiated back by the Earth.” Say, the power or the energy flux. And, “But the height of the dam[] makes no difference at all.” It does make a difference. More CO2 in the air will raise temperatures on Earth, though I agree, to a much lesser extent than has been claimed.>
Power being energy per unit of time, fine, I’ll use your terms of reference, but the conversation started out as a generalization regarding energy balance, and the wording I used was sufficient to illustrate the concept, not build a model. The dam analogy was similarly designed to illustrate a concept, and in fact, for the amount of water flowing past the dam, the long term average is the same over the long haul, and raising or lowering the dam has a huge but temporary effect on the flow rate (like the paper suggests). If we were talking about a really big dam and a really tiny river that would be somewhat different, but the Sun is a really big river and CO2 a teeny weeny dam. “No difference” is pretty much correct in the context it was presented. Technically there are 1024 bytes in a kilobyte; for practical purposes 1000 is close enough.
I’m not trying to build an analogy here that can be converted to a computer model, I’m trying to illustrate some basic concepts. The scaremongers leave the impression that CO2 generates new power input. It doesn’t. It messes with the flow of the power already there by damming it up. But it’s a teeny weeny dam.

Tom P
February 16, 2010 1:12 pm

VS (11:10:03)
“After reading three Kaufmann and Stern papers I have to say that they grant themselves a lot of leeway in picking which test results are ‘conclusive’ and which can be ignored. Most of the time, the results that can be ignored are the ones that are in conflict with hypothesized physical relationships within the AGWH… the same hypothesized physical relationships they are trying to test.”
One difference between time-series analysis in econometrics and physical science is the additional constraint in the latter that any relationship has to conform to known physical laws. Kaufmann and Stern recognise this, Beenstock and Reingewertz apparently don’t.
Hence, when univariate analysis comes up with an unphysical I(1) relationship, Kaufmann and Stern re-examine the assumptions behind the statistics and come up with an improved and more sensitive approach. Beenstock and Reingewertz just plough on regardless.
Econometrics is not my field but Kaufmann looks like he’s considerably more highly cited than Beenstock.

George Turner
February 16, 2010 2:54 pm

TomP,
But we don’t have a complete understanding of all the physical factors that may come into play. Letting the data speak for itself, instead of torturing it until it shows what you expect to see, is probably the safer method.
For example:
Increased CO2 means a faster plant metabolism, and plants also take up large amounts of water vapor, a more important greenhouse gas.
Increased CO2 affects algae growth and thus may make tiny changes in the oceans’ albedo or IR emissivity.
Increased CO2 might have these effects differentially in nutrient-rich/warm environments, making tiny changes to the atmospheric circulation and the relative latitudinal cloud coverage and distribution.
CO2 blocks downward IR from the sun, keeping it from directly melting snow (snow is a great IR absorber/emitter but is highly reflective in the visible spectrum). This might change polar circulation patterns.
Perhaps as radiative cooling slows down, convective cooling speeds up. Perhaps above a certain threshold this makes clouds more likely to form thunderstorms, which emit upward radiation via sprites, X-rays, and gamma rays. Perhaps this modifies the global electric charge distribution, affecting everything from surface evaporation to cloud formation to the ions in the mesosphere out to a couple of Earth radii.
One of the problems I have with the simple greenhouse effect is that it perfectly models our atmosphere as a semi-transparent solid. If it’s a perfect model of a solid, it’s bound to be a far less than perfect model of a gas, especially a wet gas.

VS
February 16, 2010 3:20 pm

Tom P
You wrote:
“One difference between time-series analysis in econometrics and physical science is the additional constraint in the latter that any relationship has to conform to known physical laws. Kaufmann and Stern recognise this, Beenstock and Reingewertz apparently don’t.”
Are you suggesting that the (estimated) model presented on p. 269 (eq. 13-22) of Kaufmann et al (2006) comes anywhere near ‘conforming to known physical laws’? It can at best be described as a hypothetical approximation or educated guess.
There is a lot of ‘distance’ between experimental physical results, and climate models. I reckon it’s about the same as the distance between macroeconomic models and things we know about people and the nature of their preferences.
Also note that time series analysis is a sub-field of econometrics; there are no two variants of TSA, as probability laws don’t change when the interpretation of your coefficients does.
Finally, establishing a cointegration relationship between I(1) and I(2) series via a regular ADF test (as Kaufmann et al 2006 do) doesn’t ‘conform to known laws’ of mathematics and probability theory. Perhaps Kaufmann and Stern could begin by recognizing that, first.

VS
February 16, 2010 3:24 pm

Ahem. I meant: perhaps Kaufmann, Kauppi and Stock could begin by recognizing that, first. 🙂

February 16, 2010 3:50 pm

I think I can fill in a lot of blanks in this discussion.
First, someone mentioned qualifications. I am an economist who has actually been hired by an environmental economics Ph.D. program to teach stuff like cointegration to their students. I know the theory and techniques in the paper, and how it would be applied to this data (in fact, I did something like this as a class exercise in Spring 2000 at Tulane).
Second, I’ve been looking at GW data casually since 1988. I’m not a skeptic: I’ve thought since around 1990 that a lot of the GW statistical relationships did not, and could not be made to, make sense. Data is like building blocks, and the statistics is how it all goes together – if it looks like a Frankenstein monster you’ve got a problem.
Third, the reason you should pay at least some attention to economists in these discussions is that, like climatology, our field is largely non-experimental. We’re used to looking at a past data set and opining on what it can and can’t be consistent with.
So, here’s a primer on the point of this paper.
First, disregard the polynomial part. That’s a modelling tweak that probably doesn’t matter too much (if anything, it makes me think they fished a bit and thus doubt the conclusions).
Second, integration is used here in the sense of a summation. Specifically, that an observation is the sum of its past value plus some forcing behavior. It isn’t clear that CO2 data is integrated. But it seems plausible: today’s concentration arises from yesterday’s concentration plus today’s changes. Now, that isn’t being used as a definitional identity: it’s a claim that the CO2 from yesterday (for the most part) didn’t go anywhere, which seems plausible. The same is probably true of temperature measurements: temperature today depends on temperature yesterday because that heat didn’t completely dissipate. Both of these merely mean that the observed data depends on the past history of the same data. Nothing controversial there.
Third, the problem with integrated variables is spurious correlation. This is where two integrated variables have a high correlation coefficient, but that number is meaningless because the data series aren’t related to each other (example below). In particular, this means (calling Al Gore) that plots of the data series against each other look like they’re related. So, here’s an example of two data series that must be unrelated, but which are each integrated, and which will match up nicely on a scatter plot: a time series of the number of total wins the Yankees have on every calendar day of a season against the same series for the Red Sox. Obviously, these aren’t related (other than on days when one team beats the other). And yet if you graphed them against each other, they both start at 0, and progress together up through 20, 40, 70, perhaps even 90 – almost in lock step. So, if CO2 concentration and temperature (however measured) are integrated, spurious correlation is a potential outcome. A famous econometrics paper from the 1980’s showed that the R-squared of spuriously correlated series actually has an expected value close to 0.5.
Fourth, the only way two integrated series can be related to each other without spurious correlation is if they are cointegrated. This is statistically testable with any pair of series – whether you are a climatologist, physical scientist, or whatever. No one should be knocking this: it should be a standard part of the statistical tools used by climatologists on most of their data. It isn’t.
Lastly, all of this can also be done with sets of data rather than pairs, it just makes the math more complex.
What these guys are showing is that, for the data they used, the series are integrated (of different orders), are not cointegrated, and therefore any apparent relation between them must be spurious. The rest is just details.
Methodologically, the appropriate way to criticize this result is to find appropriate data that do appear to be cointegrated. This is not something that people are doing out in the literature. Having done this with some climate data myself over the years, I have a sneaking suspicion that it’s because none of it is cointegrated.
A couple of extra thoughts: not one, but two Nobel prizes in economics have been awarded for the theory of integration and cointegration. We’re all quite sure there will be at least one more.
Also, the lack of cointegration between a series and a revision of the same series is actually a huge result. Cointegration of revisions is a standard way to tell if you’ve done something moronic in your revision. The fact that a series fails this is a huge red flag.
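[Dave Tufte’s spurious-correlation point is easy to demonstrate numerically. The sketch below is a toy simulation of my own, not anything from the paper: it regresses pairs of *independent* random walks on each other and compares the average R-squared with that of independent stationary series.]

```python
import numpy as np

rng = np.random.default_rng(0)

def r_squared(x, y):
    """R^2 of a simple OLS regression of y on x (with intercept)."""
    r = np.corrcoef(x, y)[0, 1]
    return r * r

n, sims = 300, 500
rw_r2, iid_r2 = [], []
for _ in range(sims):
    # Two *independent* integrated (I(1)) series: random walks
    x = np.cumsum(rng.standard_normal(n))
    y = np.cumsum(rng.standard_normal(n))
    rw_r2.append(r_squared(x, y))
    # Two independent stationary (iid) series for comparison
    iid_r2.append(r_squared(rng.standard_normal(n),
                            rng.standard_normal(n)))

# Independent random walks routinely look "related";
# independent stationary series do not.
print(np.mean(rw_r2))   # far from zero despite no relationship
print(np.mean(iid_r2))  # close to 1/n, i.e. near zero
```

The point of the comparison: with stationary data a large R-squared is evidence of a relationship, but with integrated data it is expected even between series that are unrelated by construction.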

Tom P
February 16, 2010 4:33 pm

George Turner (14:54:09) :
“But we don’t have a complete understanding of all the physical factors that may come into play. Letting the data speak for itself instead of torturing it until is shows what you expect to see is probably the safer method.”
But you don’t need a complete understanding of all the physical factors to produce some constraints on the possible relationships between the parameters. Those constraints, though, can be very useful in making sense of the data while making no assumptions about the relationship you’re trying to establish. What you recommend as a “safer method” can in fact result in an unphysical solution based on oversimplified statistical methods.
“One of the problems I have with the simple greenhouse effect is that it perfectly models our atmosphere as a semi-transparent solid.”
None of the models used today make such a gross simplification.
VS (15:20:41)
“There is a lot of ‘distance’ between experimental physical results, and climate models. I reckon it’s about the same as the distance between macroeconomic models and things we know about people and the nature of their preferences.”
No, people are much more complicated than the Earth’s climate! Of course now that all but a few accept there is some level of influence of humans on climate, even that relative simplicity is being lost.

NickB.
February 16, 2010 5:11 pm

Tom P
One difference between time-series analysis in econometrics and physical science is the additional constraint in the latter that any relationship has to conform to known physical laws. Kaufmann and Stern recognise this, Beenstock and Reingewertz apparently don’t.
I’m really hoping I misread that because it almost sounded like you said that a statistical analysis done for physical science has to give an answer that fits preconceived notions and if not it is invalid. While if it is wrestled into its proper result then its good, solid science. Please clarify!
As much as everyone loves swinging the “it’s a simple physics problem” hammer, once CO2 gets out of the lab and out there the whole problem becomes something completely different. Anyone who has studied Economics can probably brainstorm hundreds of examples of logical theoretical linking between variables that are quite literally lost in the noise once you try and find them in the wild. The whole point of this exercise and discipline is to come at it from the other direction, find solid signals, and then use that to support/refute existing empirical theory, or even come up with new theories.
A counterintuitive (sorry, but I cannot use the term aphysical to describe this) result for this type of analysis should be seen as a good thing – it means we’re probably going to learn something new here (assuming, of course, that their approach is solid, which does seem to be the case per VS’ reviews)… and like I said earlier, if the physical science on AGW is as solid as you think it is, the result could also mean the data is bad.
That said, there is absolutely nothing I have seen so far to indicate that there is any grounds for refuting/rejecting this analysis.

JDN
February 16, 2010 7:30 pm

DirkH (00:04:18) :
“JDN (20:43:35) :
[…]
summarize their independent variables, justify their choices of variables (e.x. they introduce rfCO2 without explanation”
“In Table 1 we provide details of the classification procedure for the radiative
forcing of CO2 (rfCO2).”
Questions?
—————————-
Oh, just the typical reviewers comments:
1) Define radiative forcing of CO2.
1a) Is it taken from theory or measured somehow? Is it animal, vegetable or mineral? Seriously, this paper is so bad it’s not even wrong…. literally. Unless someone is exactly you or someone in a very tight field for whom this jargon is second nature, there is no possibility of checking its accuracy. If you were to come back to this article 20 years from now, do you really expect to understand it yourself?
1b) Does it belong to a particular theory. I would imagine there are hundreds of definitions. Let’s just nail this one thing down. Also, why did you select this one definition?
2) Can you state a hypothesis in terms which don’t involve “unit root of the variable”? This bit of jargon is specific to your chosen method, non-stationary time series. In science, we have hypotheses where the mathematics is brought in to solve the hypothesis. In other words, connect your mathematics to the real world.
3) “In Table 1 we provide details of the classification procedure for the radiative
forcing of CO2 (rfCO2).” This statement is untrue. You don’t provide any detail, you just list some tests. Procedures involve either explaining data input, manipulation, data sorting/output or *clearly* citing references which use exactly the same procedures or something similar.
4) What do non-stationary time series mean to you (the author) (REF?)
5) “The method of cointegration is designed to test hypotheses with time series data that are non-stationary to the same order…” Really, are they designed for that purpose? Well, do they actually succeed? What conditions must be met for that success and how did you check whether those conditions were met?
5a) For those of us who don’t feel like crawling through all this jargon, what is the method of cointegration and why should we believe anything that comes out of that method? Since this will certainly be the first time hearing about it for many people, how about mentioning some successful application of the method and not just the fact that people are using it. People do a lot of silly things.
And there’s more to be sure, so, if you’re still reading this thread, how about some answers?

NickB.
February 16, 2010 8:10 pm

@ Dave Tufte
Thanks for the summary explanation. I’m sure you’ve forgotten more on this subject than I ever learned, but from one rusty economist to a sharp one… I’m glad to see my suspicions confirmed or denied by someone else from our area of study.
@ Tom P
I didn’t mean to pile on there, and FWIW I hadn’t seen your most recent reply when I posted. What I was really trying to get at is that there are times (as always assuming the methods and approach are solid and unbiased) when a counterintuitive result might result in a new and interesting insight… and one that might not be apparent on first review.

Alan Wilkinson
February 16, 2010 8:25 pm

Very interesting discussion. I concur that it is foolish to ignore well established theory from other fields here.
However, since it comes to the wrong conclusions there is little chance the paper will be published in Nature under present management.

VS
February 17, 2010 2:39 am

Dave Tufte (15:50:00) :
Yes, you are right when you stress that the basic problem is that GHG’s are I(2) and temperature is I(1). Asymptotically (if the sample size, n, goes to infinity) these two stochastic trends are independent.
But then you state:
“First, disregard the polynomial part. That’s a modelling tweak that probably doesn’t matter too much (if anything, it makes me think they fished a bit and thus doubt the conclusions).”
Beenstock and Reingewertz indeed use a ‘model tweak’ that would allow for these two series to be cointegrated on the next level. But that ‘model tweak’ is not in itself something trivial. Polynomial cointegration was first described by Yoo (1986), and since then a solid body of literature has been developed on the topic, with contributors including the likes of Johansen (!) and Granger (!). This is the established way to deal with I(2)/I(1) relationships.
So, the difference between Kaufmann et al (2006) and Beenstock and Reingewertz (2009) is the following:
**Kaufmann et al (2006) attempt to cointegrate temperature, I(1), with the sum of radiative forcing, which is I(2) (equation (3), p. 255). This is plain wrong.
**Beenstock and Reingewertz (2009) first cointegrate the various greenhouse gases down to an I(1) variable, and then attempt to cointegrate temperature with this I(1) variable. This is the correct approach.
They find that solar irradiance is the most important factor determining temperature levels (what a surprise).
“This shows that the first differences of greenhouse gases are empirically important but not their levels. The most important variable is solar irradiance. Dropping this variable, but retaining the first differences of the greenhouse gas forcings, adversely affects all three cointegration test statistics”
They then proceed to show what happens if you ignore the different order of integration, like Kaufmann et al (2006) did.
“Haldrup’s (1994) critical value of the cointegration test statistic when there are three I(2) variables and two I(1) variables is about -4.25. Therefore equation (4) is clearly not polynomially cointegrated, and the conclusions of these studies regarding the effect of rfCO2 on global temperature are incorrect and spurious.”
Specifically, they argue that the following conclusion, by Kaufmann et al (2006) on p.255, is spurious:
“The ADF statistic strongly rejects (P < 0.01) the null hypothesis that the residual contains a stochastic trend, regardless of the lag length used in Equation (2) (Table I), which indicates that the variables in (3) cointegrate."
Let me stress the most important point here. By incorrectly applying these procedures Kaufmann et al. (2006) conclude that an increase in CO2 has a permanent effect on temperatures. Beenstock and Reingewertz (2009), by correctly applying the procedure, conclude that it is in fact only temporary.
So, you are right to state that the variables being I(1) and I(2) respectively is the main issue. However, the procedure described by Beenstock and Reingewertz (2009) is not just a ‘model tweak’.
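[To make the two-step logic concrete, here is a toy construction — entirely synthetic data, not the paper’s series or its exact method, just the idea: two I(2) “forcing” series sharing a common I(2) trend, so a linear combination of them is only I(1), which can then cointegrate with an I(1) “temperature” series. A simple Dickey-Fuller-style t-statistic (5% critical value around -2.86; values near zero mean nonstationary) is used to judge stationarity.]

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2000

def df_stat(y):
    """Dickey-Fuller t-statistic: regress diff(y) on a constant and
    lagged y; strongly negative values indicate stationarity."""
    dy, ylag = np.diff(y), y[:-1]
    X = np.column_stack([np.ones(len(ylag)), ylag])
    beta, _, _, _ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

z = np.cumsum(np.cumsum(rng.standard_normal(n)))  # common I(2) trend
g1 = z + np.cumsum(rng.standard_normal(n))        # "gas 1": I(2)
g2 = z + np.cumsum(rng.standard_normal(n))        # "gas 2": I(2)
combo = g1 - g2                # the common I(2) trend cancels -> I(1)
temp = combo + rng.standard_normal(n)             # I(1) "temperature"

print(df_stat(g1))              # not very negative: nonstationary level
print(df_stat(combo))           # still nonstationary: an I(1) level
print(df_stat(np.diff(combo)))  # strongly negative: stationary
print(df_stat(temp - combo))    # strongly negative: "cointegrated"
```

Regressing temp directly on g1 and g2 as levels, the way a standard cointegration test would, is exactly the I(1)-on-I(2) mismatch being criticized here.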

VS
February 17, 2010 3:38 am

JDN (19:30:34) :
For the record, Beenstock and Reingewertz (2009) use NASA’s GISS data.
Also, with all due respect, the questions you asked are not ‘typical reviewer comments’, but rather comments from somebody unfamiliar with the methods employed. Now, there is nothing wrong with that in itself, so here are some answers to your questions:
1) The same way Kaufmann et al (2006), and everybody else in the literature, define it.
1a) See answer above. As for the ‘jargon’, it’s statistical modeling. I can understand that coming from science it can be a bit frustrating to encounter (mathematical) language and jargon that you are not familiar with, but that’s just the result of 60 years of development in a field. I can’t read quantum field theory papers either, and you don’t see me dismissing their findings because of that.
Also, in all honesty, I find the language and methods used by Beenstock and Reingewertz (2009) far more clear than those used by Kaufmann et al (2006). To start with, they report all their tests (unlike Kaufmann et al, where I had to dig through the whole paper to infer that, somewhere in the corner, they take their main variable of interest, GLOBL, to be I(1); see the entire discussion above).
1b) They are testing a correlation here. I don’t really see your point (DirkH (14:21:44) explained the idea behind it well, I think). They first reject the specifications used before on the basis of test results, and then they try to find a correlation structure, any correlation structure, which could plausibly be in there somewhere. In short, they are testing a necessary, rather than sufficient, (statistical) condition for causality.
If you cannot detect a nudge in temperatures, on any level, due to CO2 forcing, then what on Earth is all the CO2 fuss about?
2) Well, we find that temperature levels are a random walk (see answer 4 below), and that GHG’s first differences are a random walk. Does that help?
3) When you apply a regular t-test on the difference of means, do you also refer to all the literature on the topic? The test procedures in Table 1 are the results of standard methods (I.e. Augmented Dickey Fuller tests), used by every single paper on the topic I read in the past few days, to establish the order of integration. If you are interested in the theory behind them, you can start with this:
http://nobelprize.org/nobel_prizes/economics/laureates/2003/ecoadv.pdf
4) Here’s the most simple example of a nonstationary time series:
Y(t)=rho*Y(t-1) + error(t)
with rho=1 (which is what makes the series nonstationary)
where the error term is identically and independently distributed. This is called a random walk. The series is integrated of the first order, or has a unit root if you wish (i.e. the value of the coefficient of Y(t-1), rho, is equal to unity, making it a random walk).
If you then take first differences, you get:
D_Y(t)= error(t)
which is a stationary series, hence Y(t) is I(1). The issue here is, that once rho=1, the series Y(t) is non-stationary, making standard statistical inference on the value of rho, invalid. We therefore have to apply non-standard tests (e.g. ADF test equation) to distinguish between the cases rho=1 and |rho|<1 (i.e. between the series being I(1) or I(0)).
5) Again, there is a whole body of literature on the topic, and a Nobel Prize has been handed out for work on it. In fact, all those test statistics you were referring to, are tests to see whether conditions for cointegration are met. Kaufmann et al (2006) inappropriately test those conditions.
5a) Good question, people do a lot of silly things. Cointegration is currently the standard approach in nonstationary time series analysis (and all the papers I read so far on the topic, and my own tests, are quite conclusive on the point that we are dealing with a bunch of nonstationary series).
So, we might doubt the merit/value of the entire field of time series statistics… but I doubt that that will get us anywhere in the current discussion.
Hope this helps 🙂
PS. It is quite curious that Kaufmann et al (2006) passed peer review with that error included, especially since we have known for over 20 years that the method they employed is incorrect. Perhaps it’s because they submitted it to ‘Climatic Change’ rather than the ‘Journal of Econometrics’…?
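[VS’s random-walk example in point 4 can be simulated directly. A minimal sketch on synthetic data, with rho set to exactly 1 as in his equation: the OLS estimate of rho sits at unity, and first-differencing leaves an uncorrelated series.]

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000

# Y(t) = rho*Y(t-1) + error(t) with rho = 1: a pure random walk
y = np.cumsum(rng.standard_normal(n))

# OLS estimate of rho in the regression of Y(t) on Y(t-1)
rho_hat = (y[1:] @ y[:-1]) / (y[:-1] @ y[:-1])

# First differences recover the errors: D_Y(t) = error(t)
d = np.diff(y)
acf1 = np.corrcoef(d[1:], d[:-1])[0, 1]  # lag-1 autocorrelation

print(rho_hat)  # very close to 1: the level series is I(1)
print(acf1)     # near 0: the differenced series is stationary noise
```

The catch VS describes is that when rho really is 1, the usual t-test on rho_hat is invalid because its distribution is non-standard, which is exactly why Dickey-Fuller-type critical values are needed.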

Phil.
February 17, 2010 6:25 am

VS (03:38:56) :
JDN (19:30:34) :
For the record, Beenstock and Reingewerts (2009) use NASA’s GIS data.
Also, with all due respect, the questions you asked are not ‘typical reviewer comments’, but rather comments from somebody unfamiliar with the methods employed. Now, there is nothing wrong with that an sich, so here are some answers to your questions:
1) The same way Kaufmann et al (2006), and everybody else in the literature, define it.
1a) See answer above. As for the ‘jargon’, it’s statistical modeling. I can understand that coming from science it can be a bit frustrating to encounter (mathematical) language and jargon that you are not familiar with, but that’s just the result of 60 years of development in a field. I can’t read quantum field theory papers either, and you don’t see me dismissing their findings because of that.

But if you publish a paper using such jargon in another field where it hasn’t previously been used, you have an obligation to explain it. I’m sure you’d expect the same if someone tried to publish in the ‘Journal of Econometrics’ using quantum field terminology…?

Roger Knights
February 17, 2010 6:36 am

TYPO:

They are claiming that if the height of the damn dam goes up …

Alan Wilkinson
February 17, 2010 1:08 pm

VS, many thanks for your very helpful expositions.

NickB.
February 17, 2010 2:31 pm

Alan,
Agreed – nice job VS! Really solid complex statistics truly is an art form, and specific to Econometrics analyses, one that is apparently missing from this climate discussion. Thanks!

Roger Abel
February 17, 2010 6:15 pm

The AGW crusaders depend on the truthfulness of the temperature measurements made over the last 30 years. The raw data has been passed on through NOAA, and the manipulated data delivered to the IPCC and the world comes from NCDC, GISS and CRU.
The climategate letters/data and the Anthony Watts/Joseph D’Aleo report, “Surface Temperature Records: Policy Driven Deceptions?”, state that both the measurement stations and the measurement values are HEAVILY TAMPERED WITH !!! Please dig into this, and realise that nothing in the IPCC reports holds water if these temperatures cannot be relied upon.
BOTTOM LINE: Measurement stations are badly sited. Those picked for the raw dataset are heavily biased towards the warmest places on earth (towns, coasts, airports). The far northern and southern areas of the earth are barely represented in the data used for mean temperature analyses. The adjustments made by CRU, NCDC and GISS are furthermore biased towards warming trends by manipulating the data (not correcting it!!!)
THE RISING TEMPERATURES ARE MAN-MAD(e) -orchestrated by IPCC & Associates!
Underwater volcanoes close to the south pole and under the north pole are surely melting ice -NO DOUBT! Himalayan glaciers are melting due to man-made deforestation and agricultural development in their neighbourhood. The changes to the Earth's magnetic field due to pole shifts are confusing animals so they are getting lost. And the IPCC is telling us this is evidence for Global Warming???
This is a political/ideological campaign with NO BASIS in real science. It has been developed behind the curtains since the fifties, with the goal of replacing the oil market with a CO2 market and new energy sources. This is done cleverly by forcing people to pay for it through taxes imposed on us through FEAR (basic instincts easily override sense -our brains are made that way for survival). Those on top making money on the oil-driven market today will continue to make money on a CO2-driven market tomorrow -and they are getting away without paying for the transition too 😮
NO DOUBT there are clever “brains” behind this…and they have the money to put behind the making of this AGW deception too. Most people don’t care, because they don’t understand…and of course these “brains” KNOW that too!
-As to this statistical analysis against CO2…it’s printed out and put into my HUUGE pile of AGW-debunking evidence yet to be read and understood :-/
Kindly, Roger Abel,
Norway -with “Press Freedom” that stays silent on any contradictions to AGW and keeps the people without knowledge of fraudulent political actions !!! There is still NO debate on any AGW contradictions in the media here in Norway 🙁

February 17, 2010 7:32 pm

Re: VS (Feb 17 03:38),
I couldn’t see anywhere in the B&R paper where they showed that temperature was I(1), and not I(2). What test is used? It seems to me that recent temperature history doesn’t conform satisfactorily to any polynomial model. Nor did I see solar I(1) tested.

Richard Sharpe
February 17, 2010 8:06 pm

NickB. (14:31:13) said:

Alan,
Agreed – nice job VS! Really solid complex statistics truly is an art form, and specific to Econometrics analyses, one that is apparently missing from this climate discussion. Thanks!

Yes, and I seem to remember Steve McIntyre pointing out that methods from Econometrics are more solid than those from Climate Science and are perhaps worth exploring 🙂

JDN
February 17, 2010 8:20 pm

VS (03:38:56) :
You know what’s absolutely hilarious… when I went to look up Kaufmann (2006) in the reference section of Beenstock & Reingewertz (2009), I discovered that they listed the wrong journal (Climate Change -> Climatic Change) and the wrong page numbers (248 -> 249). They haven’t even proofed their reference section for publication.
Seriously, though, why not just spell out the definition of radiative forcing of CO2? Why make me go through journals to get it? It was a simple question, and, you blew me off.
According to Kaufmann (2006), radiative forcing has units of W/m^2, making it irradiance (a standard physics unit taught in every optics course). There are two tiny, eensy, weensy little problems with this definition, 1) The units are wrong for something called a “forcing”, and 2) the units are wrong for something called a “forcing”. I realize that it’s the same objection twice, but, I thought it was so important it was worth repeating. They don’t address how things get forced or the fact that radiative heat transfer should have units of W/m^3 for the atmosphere, plants & water but W/m^2 for reflective surfaces. Is the planet nothing but surface? Additionally, a forcing should have units of rate of change of something per unit of something else. And that’s just for starters. I guarantee you they have oversimplified the problem to make it work with their technique. Kaufmann (2006) also had the most amusing term: “explanatory variable”. It’s such an abuse of jargon, it’s hilarious. Don’t tell me it’s standard terminology; I don’t want to know.
You put up a link to the fact that this jargon-riddled, so-called statistical method has won a nobel prize in economics. Well, our buddy Al Gore has one of those too. Ipso facto, he must be a genius. That ought to blow your mind.
Well, this has really been educational. If I ever learn non-stationary time series, it will be to prove its worthlessness. After its so-called method is applied, all you end up doing is “proving” that some variable undergoes a random walk. Such a feat is just not possible without some more substantial mathematics. I say that this entire statistical technique is on marshy ground based solely on the fact that encryption is not threatened by your technique. I want to remind you that mathematicians regularly excoriate physicists on their sloppy use of mathematics. I suspect that if you look through the literature, there is some statistician jumping up and down about how useless this method is. That’s why it’s so filled-up with jargon, to cover its rotting body.
So, once again, I condemn this paper, the entire technique it’s based upon, and the attendant bad physics. I’ve seen enough and don’t feel obligated to plunge into its swampy morass any further. I suppose if someone wants to correct the gaping, fundamental problems with this paper, I might have another look, but, life is too short to waste it on people who are deliberately obfuscating their methods with misappropriated terminology and an infinite regress of definitions.

Alan Wilkinson
February 17, 2010 8:49 pm

JDN, I think more listening and less frothing would improve both your health and knowledge.
There is no physics in the paper. It is an exploration of a claimed causal relationship given two data series. The expertise applied is appropriate to the scope of the paper.

Editor
February 17, 2010 9:14 pm

JDN (20:20:17)

… According to Kaufmann (2006), radiative forcing has unit of W/m^2, making it irradiance (a physics standard unit taught in every optics course). There are two tiny, eensy, weensy little problems with this definition, 1) The units are wrong for something called a “forcing”, and 2) the units are wrong for something called a “forcing”. I realize that it’s the same objection twice, but, I thought it was so important it was worth repeating. They don’t address how things get forced or the fact that radiative heat transfer should have units of W/m^3 for the atmosphere, plants & water but W/m^2 for reflective surfaces. Is the planet nothing but surface? Additionally, a forcing should have units of rate of change of something per unit of something else. And that’s just for starters. I guarantee you they have oversimplified the problem to make it work with their technique. Kaufmann (2006) also had the most amusing term: “explanatory variable”. It’s such an abuse of jargon, it’s hilarious. Don’t tell me it’s standard terminology; I don’t want to know.

For historical reasons, everyone in the field calls it a “forcing”. A number of people have made your comment over the last decade or so, and the comment is 100% correct.
The first time someone made this point, it was interesting. The tenth time, not so much. For you to make the same point for the 4,323rd time is tendentious. Every field has its own jargon, its own often-strange use of terms. Many times these terms do not have their normal, everyday meaning. So what? The authors of this paper call it “forcing”, which is what everyone in the field, from Phil Jones to Steve McIntyre and every single other scientist in the field calls it. You don’t like it? … Tough. After a decade, complaining about it just makes you look out of touch.

You put up a link to the fact that this jargon-riddled, so-called statistical method has won a nobel prize in economics. Well, our buddy Al Gore has one of those too. Ipso facto, he must be a genius. That ought to blow your mind.

For someone who seems to be pedantic about small points, perhaps you could give us the details on Al Gore’s Nobel Prize in Economics …
For your future reference, there are two kinds of “Nobel Prizes”, which are given by entirely different groups of people. The first kind of Nobel is given to brilliant mathematicians and scientists for Chemistry, Economics, Physics, Mathematics, and the like. The second kind is the “Nobel Peace Prize”, which is given to peaceful folks like Yassir Arafat and overweight carbon millionaires like Al Gore. If you haven’t noticed the difference between the science prizes and the peace prize at this late date, you need more help than we can offer you here.

DeWitt Payne
February 17, 2010 9:24 pm

It looks like B&R used the Lean, Beer and Bradley TSI reconstruction (reference xi). Other researchers, Leif Svalgaard, e.g., don’t agree and think L,B&B significantly overestimate the solar variability. Treating the forcings from CO2, N2O and CH4 as independent variables while leaving out any contribution from aerosols, land use changes and other forcings seems questionable to me.

DeWitt Payne
February 17, 2010 9:57 pm

And more to the point, while temperature may have been I(1) when changes in solar activity were conceded by all to be the principal driving force, is it still? It looks like B&R analyzed from 1850 to 2006. For something like 2/3 to 3/4 of that time, solar would have been dominant. It’s the temperature increase from 1970 on that is supposed to be most heavily influenced by well-mixed ghg’s. If we restrict the analysis to 1970 on, are ghg’s still I(2)?

Alan Wilkinson
February 18, 2010 12:17 am

DeWitt Payne, if CO2 only influenced temperature after 1970, what has it been doing since 1750:
http://cdiac.ornl.gov/trends/co2/graphics/lawdome.gif
And don’t forget its effect is supposed to be logarithmic – i.e. becoming less effective per delta.
The AGW claim is that CO2 impact will extrapolate to catastrophic levels. The data is saying temperature is not following CO2 sufficiently rapidly to support that claim. Unless you can point to major discontinuities in other factors for the future compared with the past there isn’t much of a leg left for AGW to stand on.

VS
February 18, 2010 2:12 am

JDN (20:20:17) :
I’m really not knowledgeable enough to comment on the use and definitions of radiative forcing variables. The definition used by both Kaufmann et al (2006) and Beenstock and Reingewertz (2009) is in line with what everybody else does, though. I’m sorry if it seemed like I was ‘blowing you off’, but I was more interested in the alleged statistical relationships.
Then you wrote: “Additionally, a forcing should have units of rate of change of something per unit of something else.”
For the record, forcing is indeed defined as a rate of change by Kaufmann et al (2006), and presumably then also by Beenstock and Reingewertz (see below for the references they use for their data).
On page 269, Kaufmann et al (2006) give equation (20):
RF_CO2(t) = 6.3 · ln( CO2(t) / CO2(1860) )
which, if you look carefully, is a ‘rate of change’ with reference to a given base year (namely 1860). It is furthermore in line with the definition given here (with C0 being CO2(1860)):
http://en.wikipedia.org/wiki/Radiative_forcing#Example_calculations
Except that the ‘multiplier’ given there is 5.35 and K2006 employs 6.3. I don’t know what that last point implies in general, but for Kaufmann’s model it implies a relatively stronger impact of CO2 in the aggregate radiative forcing variable (RFAGG, eq. (23), p. 269) than would be the case if 5.35 were used as the factor.
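For concreteness, equation (20) is a one-liner in code. A minimal sketch (the 286 ppm base-year value and the 385 ppm modern value are illustrative numbers of mine, not taken from either paper):

```python
import math

def rf_co2(c_ppm, c0_ppm=286.0, k=6.3):
    """Radiative forcing of CO2 relative to a base-year concentration C0,
    RF = k * ln(C / C0), as in Kaufmann et al (2006) eq. (20).
    The more common IPCC-style coefficient would be k = 5.35."""
    return k * math.log(c_ppm / c0_ppm)

# Forcing is zero at the base-year concentration by construction,
# and grows only logarithmically as concentration rises.
print(rf_co2(286.0))              # base year: 0 W/m^2
print(rf_co2(385.0))              # Kaufmann's 6.3 multiplier
print(rf_co2(385.0, k=5.35))     # the 5.35 multiplier, for comparison
```

This makes the point in the thread visible: the same concentration series yields a proportionally larger forcing under the 6.3 factor than under 5.35.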
You also stated “I suspect that if you look through the literature, there is some statistician jumping up and down about how useless this method is.”
Oh I’m sure you can find more than one, and that’s what makes econometrics a scientific discipline. Lack of people ‘jumping up and down’ would be far more worrisome.
DeWitt Payne (21:57:31) :
That’s a very good point, and Beenstock and Reingewertz considered, and tested, that possibility:
“We also check whether rfCO2 is I(1) subject to a structural break. A break in the stochastic trend of rfCO2 might create the impression that d = 2 when in fact its true value is 1. We apply the test suggested by Clemente, Montanas and Reyes (1998) (CMR). The CMR statistic (which is the ADF statistic allowing for a break) for the first difference of rfCO2 is -3.877. The break occurs in 1964, but since the critical value of the CMR statistic is -4.27 we can safely reject the hypothesis that rfCO2 is I(1) with a break in its stochastic trend.”
…but we’re all good 😉
———————–
For JDN:
References to data/variables used by Beenstock and Reingewertz (2009)
Hansen, J.,Ruedy, R., Glascoe, J. & Sato,M. GISS analysis of surface temperature change. Journal of Geophysical Research 104, 30997-31002 (1999).
Hansen, J. et al. A closer look at United States and global surface temperature change. Journal of Geophysical Research 106, 23947-23963 (2001).

VS
February 18, 2010 2:29 am

Nick Stokes (19:32:35) on temperature (not) being I(1).
Take a look at the discussion above, specifically VS (11:10:03).
Also, BR2009 state that they confirm previous findings that temperature and solar irradiance are I(1), and they state that again in Table 2. The test statistics are missing though, but I suspect that’s because they didn’t think this was a point of dispute (again, see discussion above :).

DeWitt Payne
February 18, 2010 6:19 am

Re: VS (Feb 18 02:12),
But did they or anyone else test the temperature series for a break?
I’d also like to see the technique applied to one or more climate model temperature series where we know for a fact that ghg’s influence the temperature. If climate model temperature series are I(2), that would cast some doubt on their validity. OTOH, if they are I(1) and the ghg forcing is I(2), then there’s something wrong with how the test is being performed.

DeWitt Payne
February 18, 2010 7:53 am

Another question: Is the choice really just between I(0), I(1), I(2) … I(n) where n is an integer only? Why not I(0.9)? My quick and dirty research on the topic says that an I(1) time series doesn’t show recovery to the trend after a shock, that is it’s a random walk. But one can see at a glance with the temperature series that shocks such as the Pinatubo eruption and ENSO events do show recovery to the trend. So what is it that I don’t understand correctly here?

DirkH
February 18, 2010 1:12 pm

“JDN (20:20:17) :
[…]
Well, this has really been educational. If I ever learn non-stationary time series, it will be to prove its worthlessness. After its so-called method is applied, all you end up doing is “proving” that some variable undergoes a random walk.[…]”
Now that’s what i call blowing off steam. If i didn’t know this is JDN i would have assumed it’s Gavin S. himself.
VS and Dave Tufte, thanks to the both of you for your calm and very illustrative writing! I’m a software engineer and the only things i had to do with time series by now were simple signal processing algorithms, but this thread has become the most fascinating reading for me, and it looks like econometrics has a lot of insight for me to offer!
For me, the fact that Beenstock and Reingewertz do *NOT* have to rely on any assumed physical mechanism is the very strength of their approach. It creates a constraint that any supposed physical mechanism has to fulfill to be a valid candidate. Really a nail in the coffin for CO2 as the major climate driver IMHO.
I also think that B&R harmonize wonderfully with Ferenc Miskolczi’s theory. It all makes sense.

Bart
February 18, 2010 1:14 pm

davidmhoffer (12:33:51) :
“The damn (sic, or maybe not 🙂 analogy was similarly designed to illustrate a concept, and in fact, for the amount of water flowing past the dam[] , the long term average is in fact the same over the long haul, and raising or lowering the dam[] in fact has a huge but temporary effect on the flow rate (like the paper suggests).”
Interesting article here. Perhaps a better point could be made if you updated your analogy to say that extending the width of the dam has no steady state effect at all, whether of flow rate or of retention of water.

DirkH
February 18, 2010 1:14 pm

“DeWitt Payne (07:53:35) :
Another question: Is the choice really just between I(0), I(1), I(2) … I(n) where n is an integer only? Why not I(0.9)? ”
They talk about “first differences”, “2nd differences” etc… the analogon to 1st and 2nd differential in a discrete time series. You can’t differentiate 0.9 times…

Bart
February 18, 2010 1:16 pm

Bart (13:14:14)
Maybe “depth” would be a better word than width. I mean, of course, the dimension in the direction of flow.

DirkH
February 18, 2010 1:17 pm

“DirkH (13:14:58) :
[…]
They talk about “first differences”, “2nd differences” etc… the analogon to 1st and 2nd differential”
Sorry, i mean “1st and 2nd derivative”, i don’t know if “differential” has the same meaning in english.

February 18, 2010 1:19 pm

Re: VS (Feb 18 02:29),
I missed your earlier post. But it’s confusing. You say there that “Beenstock and Reingewertz (2009)- confirm I(1)”
but now you agree that, no, they didn’t, but relied on earlier results, and don’t quote any test statistics or uncertainty intervals. But “earlier results” would seem to be Kaufmann and Stern, which you are quite critical of.
It seems to me the key issue is not whether I(1) is an adequate approximant for temperature, but whether I(2) can be ruled out. I don’t see that anyone has tested that. It seems to me that the temperature plot has a lot of structure which doesn’t correspond to any polynomial order, and there will be a lot of difficulty in uniquely associating it with any I(n).
One could also note that the K&S references are up to ten or more years old, and there has been a lot of temperature measurement since then.

George Turner
February 18, 2010 3:45 pm

Would taking the square of the temperature change anything? In thermodynamics T^2 is a measure of energy, which is what we’re looking for.
Also, if I can find the data, could someone do the same analysis of global temperature versus human sin? Some people seem to think there’s a direct link.

February 18, 2010 7:39 pm

Bart (13:14:14) :
Interesting article here. Perhaps a better point could be made if you updated your analogy to say that extending the width of the dam has no steady state effect at all, whether of flow rate or of retention of water>
interesting article, thanks for the link.
Not quite what the paper says and not quite my point. How about this? Go down to a pond and throw in a fist-sized rock. You will be able to SEE the ripples across the surface of the pond. With a keen eye you could even measure them with a decent ruler. Now in theory, the rock is now lying at the bottom of the pond, and so the level in the pond has to be higher. It is. But the ripples from throwing the rock in are WAY out of proportion to the increase in the level of the pond. That’s what the authors of the paper were getting at. The increase in CO2 causes ripples, but the total change in temperature is tiny.

DeWitt Payne
February 18, 2010 7:55 pm

Re: DirkH (Feb 18 13:17),
Sorry, but you’re wrong. There is such a thing as fractional calculus. More to the point:
The first order autoregressive model, y(t) = a1·y(t-1) + epsilon(t), where epsilon(t) is a serially uncorrelated stochastic process with mean zero and constant variance, has a unit root when a1 = 1. In this example, the characteristic equation is m − a1 = 0; when a1 = 1, its root is m = 1. But a1 does not have to be an integer, it can be any real number. In which case, differencing of any integer order will not make the series stationary. If a1 is close to 1, then the ADF test will fail to reject the null hypothesis that a1 is equal to one. I think.
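DeWitt’s AR(1) model is easy to simulate; a short numpy sketch (the series length and coefficients are my arbitrary choices) illustrates why a near-unit root is so hard to tell from a true unit root in a sample the length of the instrumental temperature record:

```python
import numpy as np

def simulate_ar1(a1, n, seed=0):
    """Simulate y(t) = a1*y(t-1) + eps(t), eps ~ N(0,1), with y(0) = eps(0)."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(n)
    y = np.empty(n)
    y[0] = eps[0]
    for t in range(1, n):
        y[t] = a1 * y[t - 1] + eps[t]
    return y

# a1 = 1 is a random walk (unit root): its variance grows without bound.
# a1 = 0.98 is stationary, yet over ~150 annual observations the two
# paths look much alike, which is why ADF-type tests have low power here.
random_walk = simulate_ar1(1.0, 150)
near_unit   = simulate_ar1(0.98, 150)
```

Note that with a1 = 1 the simulation is exactly a cumulative sum of the shocks, i.e. a random walk.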

George Turner
February 18, 2010 8:03 pm

Yes, but that analogy opens you up for the obvious retort, “but there are six billion people throwing rocks in the pond, every day! It’ll fill up in a few weeks!”
To anticipate counter arguments, you have to learn to think like a person who would bomb a city with polar bears.

February 18, 2010 8:20 pm

George Turner (20:03:30) :
Yes, but that analogy opens you up for the obvious retort, “but there are six billion people throwing rocks in the pond, every day! It’ll fill up in a few weeks!”
Fallen into my trap… and so fast too!
Yes let’s scale the pond up to the size of a planet. 6 billion people throwing a fraction of a grain of sand into the ocean every day. How have they done so far? Well, since 1920 they’ve gone from 0.000280 of the atmosphere all the way to (shudder) 0.000380 of the atmosphere. Omigosh! 0.000100! They’ll have that sucker filled up in a few hundred years! panic! everyone panic!
I predict they run out of oil before the glaciers melt. Ha!

February 18, 2010 8:27 pm

BTW I live in Winnipeg where polar bear sculptures appear all over the city every year. I have never seen one being put in place, or sculpted or anything. I just assumed that people were putting them up when I wasn’t paying attention. I had no idea that they were BOMBS! I will keep an eye on the sky from now on.
When I first read your post I thought you meant real polar bears. I think getting them into the bomb bay might be harder than you think. I’ve been nose to nose with a black bear a couple of times, and polar bears are a LOT bigger. And they eat seals which I have been told my physique resembles. So I have a certain amount of antipathy toward the concept of trying to put one into a bomb bay.

February 18, 2010 8:36 pm

and further to the record, Winnipeg is a winter city where surviving 10 days in succession of -30 is a badge of honor. That said, we have no sceptics here. We call them pessimists.

George Turner
February 18, 2010 10:04 pm

Davidmhoffer,
But your argument used decimal numbers. Your opponents don’t do decimals. 🙂
BTW, have you ever heard of the top-secret bat-bomb project in WW-II? Prior to confidence in the A-bomb, a bunch of researchers (and a famous actor, and a tiger mascot) were on a project that was going to drop millions of Texas free-tailed bats on Japanese cities, each with a tiny little incendiary device glued to its fur. The bats were kept air-conditioned so they stayed torpid, then at altitude the cold temperatures would keep them that way. They were to be dropped in extending egg-crate carriers with a parachute. Once they reached lower altitudes the warmer air would wake them up, and they’d fly out and roost under the eaves and in the nooks and crannies of Japanese houses.
During an airbase photoshoot with about a dozen of the little guys the photographer took too long, the bats warmed up, and they flew away, setting fire to the base. Due to the secrecy of the project, the Air Corps couldn’t let the local fire department in and the whole base went up in flames.
Not long after that the A-bomb test worked and the project was cancelled, because there was no way we could intimidate Stalin with bombers loaded with little bats.
Bat Bomb at Amazon.

Bart
February 18, 2010 11:35 pm

davidmhoffer (19:39:49) :
Not quite what the paper says…
Perhaps not precisely. The comment at “perryalger Feb 17, 10:49 AM” made an impression on me, and I guess I was cuing off of that. At 15 microns, he says, CO2 is already absorbing all reflected radiation, so adding more is akin to increasing the depth of the dam.
DeWitt Payne (19:55:28) :
“But a1 does not have to be an integer, it can be any real number. In which case, differencing of any integer order will not make the series stationary.”
Your point is correct – fractional derivatives do exist. However, in your example, you need to be more precise. If the equation is assumed to have been initialized in the infinite past, and the absolute value of a1 is less than unity (these requirements mean the output is bounded and there is no exponentially decaying remnant of the initial state to contend with), it is wide sense stationary. If the pdf of epsilon(t) is Normal, then it is strict sense stationary. See this page on autoregressive processes.
I think you mean “differencing of any integer order will not make the increments independent.”

Editor
February 18, 2010 11:44 pm

Your point is correct – fractional derivatives do exist.

An honest question. My understanding that the difference between a “first derivative” and a “first difference” is that the former is done on a continuous function, and the latter on a discrete dataset.
First, is that the case? Second, while fractional derivatives exist, does fractional differencing exist?
Thanks,
w.

DirkH
February 19, 2010 3:34 am

Willis:
“Second, while fractional derivatives exist, does fractional differencing exist?

World of wonders, it seems so, Willis:
http://www.research.ibm.com/people/h/hosking/abs.pub06.html
“Abstract. The family of autoregressive integrated moving-average processes, widely used in time series analysis, is generalized by permitting the degree of differencing to take fractional values. […]”
Looks like a really wicked operator to me – a discrete approximation of a fractional derivative maybe. I’m fascinated. The next question would of course be: Has econometrics or statistics already taken advantage of that?
Yes, indeed:
http://ideas.repec.org/p/boc/bocoec/317.html
“Fractional Differencing Modeling and Forecasting of Eurocurrency Deposit Rates”
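For the curious: the fractional differencing operator (1 − L)^d from Hosking’s abstract expands into a simple weight recursion, w_0 = 1, w_j = −w_{j−1}·(d − j + 1)/j. A numpy sketch (truncating the expansion at the start of the series; the function names are mine):

```python
import numpy as np

def frac_diff_weights(d, k):
    """First k weights of (1 - L)^d from the binomial expansion,
    via the recursion w_0 = 1, w_j = -w_{j-1} * (d - j + 1) / j."""
    w = [1.0]
    for j in range(1, k):
        w.append(-w[-1] * (d - j + 1) / j)
    return np.array(w)

def frac_diff(x, d):
    """Apply (1 - L)^d to series x, truncating the weights at t = 0."""
    x = np.asarray(x, dtype=float)
    w = frac_diff_weights(d, len(x))
    # output[t] = sum_{j=0..t} w_j * x[t-j]
    return np.array([np.dot(w[:t + 1], x[t::-1]) for t in range(len(x))])
```

For d = 1 the weights collapse to (1, −1, 0, 0, …), i.e. ordinary first differences, while non-integer d gives the slowly decaying weights that make I(0.9)-type behaviour meaningful.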

February 19, 2010 7:26 am

George Turner;
BTW, have you ever heard of the top-secret bat-bomb project in WW-II?>
I had not heard that one, sounds fascinating. Just far-fetched enough, yet logical enough, to prompt additional reading. Reminds me of the Russians in WW-II going with a similar strategy. They rounded up dogs and only fed them under a tank. Once they had the dogs trained to eat only under a tank, they starved them for a while, strapped magnetic mines to their backs, and released them into the face of a Nazi tank advance. Instead of racing toward the Nazi tanks, they scattered. Turned out they could tell the difference between a Nazi tank and a Russian tank and they had been trained with Russian tanks. The Russians had to shoot every dog on sight for a month and they lost quite a few of their own tanks in the process.

DeWitt Payne
February 19, 2010 8:04 am

Re: Bart (Feb 18 23:35),

I think you mean “differencing of any integer order will not make the increments independent.”

Thanks for the correction.
The question still remains, though. Are the conclusions of B&R in their paper still valid if the time series aren’t I(0), I(1) or I(2) because there are no unit roots? If a series is noisy enough, is it even possible to determine if it’s I(2) because each differencing decreases the signal to noise ratio because the variances are additive? The treatment of error in regressions of autocorrelated series that I am familiar with at places like The Blackboard and Climate Audit has been to reduce the number of degrees of freedom based on the lag(1) correlation coefficient, which is usually less than 1.
Another question: Does conversion to an anomaly affect the integration order of a time series?
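The degrees-of-freedom adjustment DeWitt mentions is the standard AR(1) correction n_eff = n·(1 − r1)/(1 + r1), with r1 the lag-1 autocorrelation coefficient. A small numpy sketch (the function name is mine):

```python
import numpy as np

def effective_n(x):
    """Effective sample size under AR(1) autocorrelation:
    n_eff = n * (1 - r1) / (1 + r1), r1 = lag-1 autocorrelation."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    r1 = np.dot(x[:-1], x[1:]) / np.dot(x, x)
    return len(x) * (1.0 - r1) / (1.0 + r1)

# White noise has r1 near 0, so n_eff stays close to n; a random walk
# has r1 very near 1, so almost all the nominal degrees of freedom vanish.
noise = np.random.default_rng(0).standard_normal(1000)
walk  = np.cumsum(noise)
```

This also shows the limit of the approach DeWitt describes: as r1 approaches 1 (the unit-root case), n_eff collapses toward zero and the correction stops being a useful fix.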

VS
February 19, 2010 9:30 am

DirkH (03:34:08) :
That’s very interesting! I wasn’t aware of that method (but, in my defense, time series analysis is not really my field :).. the only issue is that it takes the problem of interpreting the coefficients (already a big hurdle in TSA) to a whole new level.
In any case, perhaps that’s something I might try on climate series, once I find the time for it… it looks like it takes some programming to implement.
DeWitt Payne (08:04:56) :
It doesn’t really matter how ‘noisy’ a series is for detecting a stochastic trend, if by noise you mean white noise (i.e. idiosyncratic error).
You are right that differencing ‘kills the signal’, but only if performed on a series which doesn’t contain a stochastic trend in the first place. If the series is in fact a random walk, that ‘signal’ wasn’t there to start with 🙂 Luckily we have tests to help us out there..

MikeN
February 19, 2010 12:38 pm

So is TomP the new proxy for RealClimate, the blog that deigns to ignore WUWT?

DeWitt Payne
February 19, 2010 1:03 pm

Re: VS (Feb 19 09:30),
Is it true that the impulse response of a random walk or I(1) series is to fail to return to the trend (effectively a step change)? If so, then how can the temperature series be I(1) regardless of the ADF statistic when there are many examples of impulses that do indeed return to the trend (ENSO events and volcano eruptions specifically)? The lower stratosphere series looks at first glance like a step change response to volcano eruptions, but on closer examination it looks more like a warming pulse that decays rapidly on top of a cooling pulse that decays at a much slower rate. With any luck the graph will display. Otherwise, the link is here.

DeWitt Payne
February 19, 2010 7:51 pm

A couple of references on the problems dealing with cointegration of time series with near unit roots:
http://www.informaworld.com/smpp/content%7Econtent=a713692127&db=all
Abstract:
This paper argues that the predominant method of estimating equilibrium relationships in macroeconometric models, namely the VECM system of Johansen, is severely flawed if the underlying variables are distributed as near unit root processes. Researchers may apply cointegration techniques to these processes, as the power of rejecting near unit roots using standard unit root tests is extremely low. Using Monte Carlo analysis, problematic behaviour of cointegration analysis is found in detecting the true underlying form of the connection between the near unit root processes. Furthermore the connecting vector is imprecisely estimated, resulting in problematic inference for error correction models.
http://www.federalreserve.gov/pubs/ifdp/2007/907/ifdp907.pdf
Methods of inference based on a unit root assumption in the data are typically not robust to even small deviations from this assumption. In this paper, we propose robust procedures for a residual-based test of cointegration when the data are generated by a near unit root process. A Bonferroni method is used to address the uncertainty regarding the exact degree of persistence in the process. We thus provide a method for valid inference in multivariate near unit root processes where standard cointegration tests may be subject to substantial size distortions and standard OLS inference may lead to spurious results. Empirical illustrations are given by: (i) a re-examination of the Fisher hypothesis, and (ii) a test of the validity of the cointegrating relationship between aggregate consumption, asset holdings, and labor income, which has attracted a great deal of attention in the recent finance literature.
These papers would seem to cast doubt on the validity of the conclusions of the B&R paper as it seems to be based on the conclusion that the temperature series is identically I(1), which it clearly isn’t.

DirkH
February 20, 2010 2:04 am

“DeWitt Payne (13:03:12) :
Re: VS (Feb 19 09:30),
Is it true that the impulse response of a random walk or I(1) series is to fail to return to the trend (effectively a step change)?”
I don’t think so. I’m trying to think about this in terms of signal processing, and I hope I’m making sense…
I(1) is first differences, right? So that would be a FIR filter that does z(t) = i(t) – i(t-1), with z the output signal and i the input signal. This removes the DC component completely and has a high-pass characteristic, letting high frequencies pass and damping low frequencies with no specific f0.
So I would think that I(1) would return to the trend. I think the random walk you mention would be I(0). Does this make sense?
Talking about fractional differencing: for the moment, I see it as a method of using linear combinations of I(0), I(1) and I(2) (more if you need) or, in SP terms, a FIR filter like
z(t) = a * i(t) + b * i(t-1) + c * i(t-2)
– kind of like linearly mixing the signal, its first difference and its second difference, slowly shifting the impulse response from an I(1) characteristic to an I(2) characteristic, for instance. I hope you get the picture; it sounds less magical if you think of it in terms of a sound engineer who uses an equalizer to modify the frequency characteristic of a signal.
So if we are not certain whether temperature can safely be said to be I(1) we could model it as such a combination, same for CO2. I don’t know at the moment, though, whether tests for Granger causality can be done with such an approach or whether it is meaningful to do so.
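The filter picture above can be checked numerically. Here is a minimal numpy sketch (my own illustration, not from the thread): the first-difference filter z(t) = i(t) – i(t-1) reduces a linear trend to a constant, and its frequency response |1 – exp(-2πif)| is zero at DC and maximal at the Nyquist frequency, i.e. a high-pass.

```python
import numpy as np

# First difference as a FIR filter: z[t] = i[t] - i[t-1]
t = np.arange(100)
ramp = 3.0 + 0.5 * t          # deterministic linear trend ("DC" level plus drift)
z = np.diff(ramp)             # differencing turns the trend into the constant slope 0.5

# Frequency response of the (1 - L) filter: H(f) = 1 - exp(-2j*pi*f)
f = np.linspace(0.0, 0.5, 6)  # from DC up to the Nyquist frequency (cycles/sample)
gain = np.abs(1.0 - np.exp(-2j * np.pi * f))

print(z[:3])                  # [0.5 0.5 0.5] -> the DC level 3.0 is gone
# gain[0] is 0 (DC removed), gain at Nyquist is 2 (high frequencies amplified)
```

This confirms the high-pass intuition; whether a given geophysical series is best modelled this way is of course exactly what the unit root tests in the paper are about.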

VS
February 20, 2010 4:55 am

DeWitt Payne (13:03:12) (and DirkH (02:04:55)):
The series you posted is ‘detrended’ (i.e. you gave me the error_hat(t) of y(t)=a+b*t+error_hat(t)), which makes the errors unsuitable for inference. Whether the series actually has a deterministic or stochastic trend is the whole question we are trying to answer with these unit root tests.
Also, about the error correction (what you are talking about when you wrote ‘returning to a trend’): A cointegrated relationship implies that, while the two (or more) series are random walks, they share a common random walk component, or a stochastic trend if you will.
This in its turn implies an error correction mechanism, where deviations from the (common) stochastic trend are corrected over the longer run.
So, for example, let’s say that temperatures and solar irradiance (both I(1)) are cointegrated, and therefore share a common stochastic trend. Now, the implication of this is the following: a deviation of, say, temperatures from this stochastic trend can happen due to an exogenous shock, but in the mid-long run, it will converge back to its long run equilibrium relationship (i.e. the common stochastic trend) with solar irradiance.
Note that we do not have to ‘explain’ the stochastic trend in order to make this statistical inference, we’re just measuring, and that’s the beauty of it.
The following should get you started on ECM’s:
http://ricardo.ecn.wfu.edu/~cottrell/ecn215/error_corr_2004.pdf
DeWitt Payne (19:51:18) :
There are always problems when you deviate from your base assumptions.
However, in light of all the tests which point to an I(1) process, it is a bit irresponsible to conclude that temperature is not I(1) 🙂 I mean, what do you have the tests for then?
But OK, let us assume that you are right and that temperature is ‘near unit root’. Loosely put, this implies that temperatures are I(0) with very high persistence (which could hypothetically be the case; see the overview I posted at VS (11:10:03)). In its turn this would imply that the alleged link with GHGs, which are I(2), is even less likely.
They also state: “Researchers may apply cointegration techniques to these processes, as the power of rejecting near unit roots using standard unit root tests is extremely low”
Again, this implies that a cointegrating relationship is even LESS likely, further contradicting the AGWH. I don’t think these papers invalidate the BR approach per se (they reject the relationships, remember). Rather, they invalidate the Kaufmann et al approach.
DirkH (02:04:55) :
“Talking about fractional differencing: For the moment, i see that as a method of using linear combinations of I(0), I(1) and I(2) (more if you need) or in SP terms a FIR filter like z(t) = a * i(t) + b * i(t-1) + c * i(t-2)”
I don’t quite understand what you mean by a ‘linear combination of I(0), I(1) and I(2)’, as the notation given doesn’t correspond to what you wrote below (you wrote z(t)=(a+b*L+c*L^2)*i(t), where L is the lag operator).
I(d) is a characteristic of a series (i.e. how many times you have to difference the original series to obtain a stationary series), not the actual differencing (that would be the (1-L) operation :).
If you in fact meant a cointegrating relationship between I(1) (temperature, solar irradiance) and I(2) (GHG’s) variables: well, this is exactly what BR test for when they perform polynomial cointegration tests! 🙂

DirkH
February 20, 2010 6:01 am

“VS (04:55:47) :
[…]
If you in fact meant a cointegrating relationship between I(1) (temperature, solar irradiance) and I(2) (GHG’s) variables: well, this is exactly what BR test for when they perform polynomial cointegration tests!”
Thanks for the clarification, VS. Sloppy wording on my side. I’m just trying to get a grip on the concept of fractional differencing. I see it as a spectral thing: how many times you have to difference a series just means how often you apply the same filter, so it’s nothing too magical…
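The ‘apply the same filter d times’ intuition generalizes neatly: fractional differencing expands (1-L)^d into a FIR filter via the binomial series, so d interpolates between no differencing and ordinary first differencing. A small sketch of the standard weight recursion (my own illustration, not from the thread):

```python
import numpy as np

def frac_diff_weights(d, k):
    """First k+1 filter weights of (1 - L)**d via the binomial recursion."""
    w = [1.0]
    for i in range(1, k + 1):
        w.append(w[-1] * (i - 1 - d) / i)  # w_i = w_{i-1} * (i - 1 - d) / i
    return np.array(w)

# d = 1 recovers the ordinary first-difference filter [1, -1, 0, 0, ...]
print(frac_diff_weights(1, 4))

# For fractional d the weights decay slowly: a long-memory filter
# sitting between the I(0) and I(1) cases
print(frac_diff_weights(0.4, 4))   # 1, -0.4, -0.12, -0.064, -0.0416
```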

phlogiston
February 21, 2010 10:07 am

Alexander Feht (14:51:50)
Your comments on the peer-review process are very apt. Best post I’ve read for a while!

Alan D McIntire
February 21, 2010 2:43 pm

My 2 cents’ worth: the paper was stating that changes in the rate of CO2 increase affect temperature, not absolute changes in CO2.
My guess is that the temperature increases being detected are due to changes in economic activity, and the UHI bias in recording temperatures:
http://www.uoguelph.ca/~rmckitri/research/gdptemp.html
There theoretically SHOULD be an increase in temperatures with any increase in CO2, but since the forcing is logarithmic, and supposing there is a negative feedback from clouds and water vapor, the fraction of the temperature increase caused by changes in CO2 level could easily be below the threshold of measurability.
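The ‘logarithmic’ point can be made concrete with the standard simplified CO2 forcing expression ΔF = 5.35·ln(C/C0) W/m² (Myhre et al., 1998); the sketch below is my own illustration, not from the comment:

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified CO2 radiative forcing in W/m^2 (Myhre et al., 1998)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Each doubling adds the same ~3.7 W/m^2, so linear growth in forcing
# requires exponential growth in concentration.
f_2x = co2_forcing(560.0)    # one doubling:  ~3.71 W/m^2
f_4x = co2_forcing(1120.0)   # two doublings: ~7.42 W/m^2
```

Whether that forcing translates into a permanent temperature response is, of course, exactly what the cointegration tests in the paper dispute.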

VS
March 9, 2010 12:17 am

If anybody is interested in a more elaborate discussion of this paper (yes it’s possible ;), as well as a couple of other issues regarding (mis)use of statistics within climate science, you are welcome to take a look at this thread here:
http://ourchangingclimate.wordpress.com/2010/03/01/global-average-temperature-increase-giss-hadcru-and-ncdc-compared/
Joshua Halpern’s (aka Eli Rabbett’s) amateurish attempt at a ‘refutation’ of the BR paper is also refuted there.

Ed Snack
March 18, 2010 2:33 am

Steve Mosher, in this thread you exhibit a disturbing lack of understanding of what is being tested here. Your point of view appears to be “CO2 levels in the atmosphere affect global temperature as a given resulting from basic physics”. Thus you reject a paper which attempts to test that as a hypothesis, because to you it is not a hypothesis but a given truth. In this you’re acting just like Tamino, whose main reaction appears to be sticking his fingers in his ears and shouting “lalalalala I can’t HEAR YOU” at the top of his voice, egged on by the witless f*wits who inhabit his site.
Surely, if such a cause and effect relationship exists, then appropriate tests must show it. B&R use certain sets of data to demonstrate, using what appear to be the best techniques currently known, that no such relationship exists. You then deny the paper validity because that cannot be so, which is entirely unscientific. Surely the appropriate scientific response is to examine what was done. B&R use NASA/GISS as a temperature record; is that a valid record? I for one would suspect that it may contain serious overstatements of actual temperature rises; could that be a cause of problems? The CO2 measures seem less controversial, so leave them; what about the methods and statistics? My knowledge of time series is limited (as it apparently is for almost everyone commenting, including yourself), but from such reading as is available, B&R’s treatment is, as far as I can tell, indeed in line with currently accepted best practice, at least for a majority of those working in the field. The tests are also repeatable; VS’s work would appear to confirm their work neatly.
In experimental physics, if such a relationship were to be tested and such results came out, one could re-run the experiments a number of times (or at least, usually one can), but if the data consistently rejects your theory, it hardly seems reasonable behaviour to reject the data in favour of the theory. Here we have tests on an unrepeatable data series, although we can try other versions of the data. Assuming the data and methods are correct, isn’t the prime lesson that we don’t understand the behaviour of the atmosphere as well as you are apparently claiming?
You are claiming that you know the behaviour of the atmosphere so well that it must be true that increased CO2 generates additional heat. I instead assert that you don’t know what you think you know; what you have in fact is a theory that additional CO2 should cause increased temperatures if the feedbacks are clearly understood and don’t interfere. I suggest that you don’t in fact know enough about feedbacks or the atmosphere, and that this paper suggests (only suggests, and subject to verification) that your theory is wrong. That doesn’t mean that CO2 can’t increase the temperature, or that it doesn’t in a laboratory situation, but it may mean that in the real world it doesn’t.
My first thought would be that maybe the way the temperature records have been bodged (deliberately or otherwise) leads to the incorrect derivation of temperature as I(1); perhaps the real record is I(2), and there is a relationship.
B&R’s paper has some very useful features, amongst which is that many of the important postulates are easily tested. Temperature, as used, is clearly I(1), and the tests are appropriately used, so the result is difficult to simply laugh off. Claiming that the results are “unphysical” is simply doing the “lalalala I can’t hear you” trick. The results are perfectly acceptable, as long as you accept that you (and we, generally) don’t actually know the climate as well as we would like to think.

kim
March 18, 2010 2:39 am

VS continues to burn barns at his link above to Bart’s.
====================

April 15, 2010 4:01 am

I know that I am very late to the party, but I’ve just written up a blogpost on the B&R paper:
http://stochastictrend.blogspot.com/2010/04/polynomial-cointegration-and-global.html