Tisdale on model initialization in wake of the leaked IPCC draft

Should Climate Models Be Initialized To Replicate The Multidecadal Variability Of The Instrument Temperature Record During The 20th Century?

Guest post by Bob Tisdale

The coupled climate models used to hindcast past and project future climate in the IPCC’s 2007 report (AR4) were not initialized so that they could reproduce the multidecadal variations that exist in the global temperature record. This has been known for years. For those who weren’t aware of it, refer to the Nature Climate Feedback post Predictions of climate, written by Kevin Trenberth.

The question this post asks is: should the IPCC’s coupled climate models be initialized so that they reproduce the multidecadal variability that exists in the instrument-based global temperature records of the past 100 years and project those multidecadal variations into the future?

Coincidentally, as I finished writing this post, I discovered Benny Peiser’s post with the title Leaked IPCC Draft: Climate Change Signals Expected To Be Relatively Small Over Coming 20-30 Years at WattsUpWithThat. It includes a link to the following quote from Richard Black of BBC News:

And for the future, the [IPCC] draft gives even less succour to those seeking here a new mandate for urgent action on greenhouse gas emissions, declaring: “Uncertainty in the sign of projected changes in climate extremes over the coming two to three decades is relatively large because climate change signals are expected to be relatively small compared to natural climate variability”.

That’s IPCC speak, and it really doesn’t say they’re expecting global surface temperatures to flatten for the next two or three decades. And we have already found that at least one of the climate models submitted to the CMIP5 archive for inclusion in the IPCC’s AR5 does not reproduce a multidecadal temperature signal. In other words, that model shows no skill at matching the multidecadal temperature variations of the 20th Century. So the question still stands:

Should IPCC climate models be initialized so that they replicate the multidecadal variability of the instrument temperature record during the past 100 years and project those multidecadal variations into the future?

In the post An Initial Look At The Hindcasts Of The NCAR CCSM4 Coupled Climate Model, after illustrating that the NCAR CCSM4 (from the CMIP5 Archive, being used for the upcoming IPCC AR5) does not reproduce the multidecadal variations of the instrument temperature record of the 20th Century, I included the following discussion under the heading of NOTE ON MULTIDECADAL VARIABILITY OF THE MODELS:

…And when the models don’t resemble the global temperature observations, inasmuch as the models do not have the multidecadal variations of the instrument temperature record, the layman becomes wary. They casually research and discover that natural multidecadal variations have stopped the global warming in the past for 30 years, and they believe it can happen again. Also, the layman can see very clearly that the models have latched onto a portion of the natural warming trends, and that the models have projected upwards from there, continuing the naturally higher multidecadal trend, without considering the potential for a future flattening for two or three or four decades. In short, to the layman, the models appear bogus.

To help clarify those statements and to present them using Sea Surface Temperatures, the source of the multidecadal variability, I’ve prepared Figure 1. It compares observations to climate model outputs for the period of 1910 to year-to-date 2011. The Global Sea Surface Temperature anomaly dataset is HADISST. The model output is the model mean for the hindcasts and projections of the coupled climate models of Sea Surface Temperature anomalies that were prepared for the Fourth Assessment Report (AR4) from the Intergovernmental Panel on Climate Change (IPCC) published in 2007. As shown, the period of 1975 to 2000 is really the only multidecadal period when the models come close to matching the observed data. The two datasets diverge before and after that period.

Figure 1

Refer to Animation 1 for a further clarification. (It’s a 4-frame gif animation, with 15 seconds between frames.) It compares the linear trends of the Global Sea Surface Temperature anomaly observations to the model mean, same two datasets, for the periods of 1910 to 1945, 1945 to 1975, and 1975 to 2000. Sure does look like the models were programmed to latch onto that 1975 to 2000 portion of the data, which is an upward swing in the natural multidecadal variations.

Animation 1

A NOTE ABOUT BASE YEARS: Before somebody asks, I used the period of 1910 to 1940 as base years for anomalies. This period was chosen for an animation that I removed and posted separately. The base years make sense for the graphs included in that animation. But I used the same base years for the graphs that remain in this post, which is why all of the data has been shifted up from where you would normally expect to see it.

Figure 2 includes the linear trends of the Global Sea Surface Temperature observations from 1910 to 2010 and from 1975 to 2000, and it includes the trend for the model mean of the IPCC AR4 projection from 2000 to 2099. The data for the IPCC AR4 hindcast from 1910 to 2000 is also illustrated. The three trends are presented to show the disparity between them. The long-term (100-year) trend in the observations is only 0.054 deg C/decade. And keeping in mind that the trends for the models and observations were basically identical for the period of 1975 to 2000 (and approximately the same as the early warming period of 1910 to 1945), the high-end (short-term) trend for a warming period during those 100 years of observations is about twice the long-term trend, or approximately 0.11 deg C per decade. And then there’s the model forecast from 2000 to 2099. Its trend appears to go off on a tangent, skyrocketing at a pace that’s almost twice as high as the high-end short-term trend from the observations. The model trend is 0.2 deg C per decade. I said in the earlier post, “the layman can see very clearly that the models have latched onto a portion of the natural warming trends, and that the models have projected upwards from there, continuing the naturally higher multidecadal trend, without considering the potential for a future flattening for two or three or four decades.” The models not only continued that trend, they increased it substantially, and they’ve clearly overlooked the fact that there is a multidecadal component to the instrument temperature record for Sea Surface Temperatures. The IPCC projection looks bogus to anyone who takes the time to plot it. It really does.

Figure 2
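For readers who want to check this kind of trend arithmetic themselves, here is a minimal sketch in Python. The series is synthetic (a small background trend plus a multidecadal cosine), standing in for the annual HadISST anomalies, so it illustrates only the method, not the actual numbers:

```python
import numpy as np

# Synthetic stand-in for annual global SST anomalies, 1910-2010:
# a 0.054 deg C/decade background trend plus a 60-year cycle.
# This is illustrative data, NOT the HadISST record.
years = np.arange(1910, 2011)
sst = 0.0054 * (years - 1910) + 0.12 * np.cos(2 * np.pi * (years - 1940) / 60.0)

def trend_per_decade(yrs, series, start, end):
    """Least-squares linear trend over [start, end], in deg C per decade."""
    mask = (yrs >= start) & (yrs <= end)
    slope = np.polyfit(yrs[mask], series[mask], 1)[0]  # deg C per year
    return slope * 10.0

for a, b in [(1910, 1945), (1945, 1975), (1975, 2000), (1910, 2010)]:
    print(f"{a}-{b}: {trend_per_decade(years, sst, a, b):+.3f} deg C/decade")
```

Run against the real data from the Climate Explorer, the same function will reproduce the short-term versus long-term disparity discussed above.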

CLOSING

The climate models used by the IPCC appear to be missing a number of components that produce the natural multidecadal signal that exists in the instrument-based Sea Surface Temperature record. And if these multidecadal components continue to exist over the next century at similar frequencies and magnitudes, future Sea Surface Temperature observations could fall well short of those projected by the models.

SOURCES

Both the HADISST Sea Surface Temperature data and the IPCC AR4 Hindcast/Projection (TOS) data used in this post are available through the KNMI Climate Explorer. The HADISST data is found at the Monthly observations webpage, and the model data is found at the Monthly CMIP3+ scenario runs webpage. I converted the monthly data to annual averages for this post to simplify the graphs and discussions. And again, the period of 1910 to 1940 was used as the base years for the anomalies.
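The monthly-to-annual conversion and the anomaly calculation described above can be sketched as follows. This is a minimal illustration with helper names of my own choosing; the input is any flat monthly series (12 values per year) such as those downloaded from the Climate Explorer:

```python
import numpy as np

def annual_means(monthly, start_year):
    """Average a flat monthly series (12 values per year) into annual means."""
    n_years = len(monthly) // 12
    annual = (np.asarray(monthly[:n_years * 12], dtype=float)
                .reshape(n_years, 12)
                .mean(axis=1))
    years = np.arange(start_year, start_year + n_years)
    return years, annual

def to_anomalies(years, annual, base_start=1910, base_end=1940):
    """Subtract the mean over the base period (1910-1940 here, as in the post)."""
    base = (years >= base_start) & (years <= base_end)
    return annual - annual[base].mean()
```

Changing base_start/base_end simply shifts the whole curve up or down, which is why the 1910 to 1940 choice moves the data from where you would normally expect to see it without affecting any of the trends.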

ABOUT: Bob Tisdale – Climate Observations

John B
November 15, 2011 9:23 am

Jim,
A better way of putting it: the net effect of an individual oscillation “evens out” to zero over a long enough timescale.
John
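John B’s point is easy to check numerically: a pure oscillation averages to essentially zero over whole periods, but not over a fraction of one, which is why the window you examine matters. A minimal illustration with a made-up 60-year, 0.12 deg C cycle:

```python
import numpy as np

# A pure oscillation: 60-year period, 0.12 deg C amplitude (made-up numbers).
t = np.arange(0, 600)  # 600 years = ten full periods
osc = 0.12 * np.cos(2 * np.pi * t / 60.0)

print(abs(osc.mean()))    # effectively zero over ten full periods
print(osc[15:45].mean())  # clearly nonzero over the negative half of one cycle
```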

Disko Troop
November 15, 2011 9:29 am

Max, how can you say such a thing? I was one of the guys that ran a maritime mobile weather station. We religiously threw the bucket over the side, or gave it to the apprentice to do, or phoned the engine room for sea inlet temps, depending on the weather or how much time we had. Sometimes we hauled an uninsulated bucket of water 40 feet to the bridge; other times we heaved it in over the rail 6 feet above the water. The sea inlet would be 54 feet deep at max draft and 22 feet deep in ballast. The thermometer was a mercury one calibrated in WHOLE degrees, not halves, tenths or hundredths. I have to agree with you: manipulating data which was never designed to be used for trending is a fool’s game. The statistical premise, that if you build a high enough pile of garbage it will somehow, above a certain height, magically transmute into gold dust, is a typical academic fallacy. There were many occasions when we would be the ONLY reporting ship in the entire South Indian Ocean, yet today I see these pretty pictures of temperature trends and computerised charts, and hear people telling me that they can identify a trend of 0.7 degrees in a hundred and fifty years. Honest Guv… me computer says it so it must be right. Utter garbage.

Gail Combs
November 15, 2011 9:32 am

Max Hugoson says:
November 15, 2011 at 7:38 am
…The “sea surface temperatures” are based, prior to the 80′s or 90′s or even the ARGO BUOYS, on fundamentally a FICTION.
Ships logs? Guys throwing BUCKETS over the side? Calibration, consistency, QUALITY ASSURANCE? Absolutely lacking. Those data have been manipulated to show WHAT THEY WANTED. I think they are completely BOGUS.
I think others should QUESTION THE SOURCE OF THIS HIGHLY PROCESSED DATA and not accept it at face value.
__________________________________
I read here at WUWT that someone who had done this type of measurement at sea is going back and getting the actual records to look at. He had asked for help but I do not have the pointer bookmarked.
Perhaps someone else does. (He noted that the data was actually taken in a “rigorous manner” at least by UK seamen.)

Gail Combs
November 15, 2011 9:38 am

John B says:
November 15, 2011 at 7:58 am
….. No, I did not contradict myself. Here is my point:
The models do not include multidecadal oscillations. They do not need to in order to model long term trends. Your Figure 1 shows this. Your Animation 1 is cherry picking, presumably aimed at showing otherwise
_______________________________
So what you are saying is the model long term trends, which are always shown as a straight line headed off the paper, are correct and the earth is going to become VERY VERY hot??
You are also saying that any influence from “natural variability” is minor and should be ignored???
What is your basis for such belief?

November 15, 2011 9:41 am

Tisdale:
In online disputes with warmists, I’ve encountered the claim that the global SST is rising. I haven’t been able to find a chart that I could link to that would counter that–there isn’t one here on WUWT. Is there one anywhere? (Ideally one provided by some supposedly neutral official or academic source.)
If such a one exists, I hope it gets added to this site’s Reference section.

Jiri Moudry
November 15, 2011 9:53 am

We know enough to make 100-year climate predictions but not enough to make a 100-hour weather forecast. That’s a settled science.

November 15, 2011 10:15 am

Hi Bob,
The models are not programmed to “latch” onto the period 1975-2000.
The models produce absolute temperatures in C (all over the place); anomalies are created from those.
Guess the reference period and the effect it will have.
You have to pick an alignment period.

Steve Garcia
November 15, 2011 10:19 am

@Theo Goodwin November 15, 2011 at 7:54 am:

The big lessons are clear to the many who made comments above. The modelers put all their eggs in the CO2 basket. CO2 concentration in the atmosphere increases linearly, at least given their relatively simple minded assumptions. So, the warming had to go up linearly as in the 1975 to 2000 period. In other words, they treated CO2 and its effects on radiation from the sun as the only natural processes that required modeling. Now they are being forced to admit that other natural processes must be treated as important in their own right. The sum total of all those natural processes make up most of what is called natural variability.

These are all things that should have been addressed back in the late 1980s, before drawing any conclusions. If some of these forcings/processes were not known then, then the ones that were known should have been addressed. To have to be dragged, kicking and screaming, to address them at this late date is an utter scandal.
My VERY FIRST approach to this subject in the 1990s was to go out looking for such studies – looking for the ones that falsified ALL other possible forcings, as individual forcings and also as possible combined forcings. When I didn’t find them, I knew this was a case of the Emperor’s New Clothes. Thank Allah and God and Rama and all the other gods, present and past, that Anthony and Steve M, in particular, and Bob and Willis, too, have kept at it and held their feet to the fire.

John B
November 15, 2011 10:20 am

Gail Combs says:
November 15, 2011 at 9:38 am
John B says:
November 15, 2011 at 7:58 am
….. No, I did not contradict myself. Here is my point:
The models do not include multidecadal oscillations. They do not need to in order to model long term trends. Your Figure 1 shows this. Your Animation 1 is cherry picking, presumably aimed at showing otherwise
_______________________________
So what you are saying is the model long term trends, which are always shown as a straight line headed off the paper, are correct and the earth is going to become VERY VERY hot??
You are also saying that any influence from “natural variability” is minor and should be ignored???
What is your basis for such belief?
————————–
Pretty much, with a few provisos…
Model long term trends are only shown as straight lines by the likes of those who want to discredit them, but they do indeed show the Earth becoming hot (whether it is “VERY VERY hot” depends on how hot you like things). How hot depends mainly on future CO2 emissions.
It is not that “natural variability” is small, rather that it does not show a trend over the timescales we are interested in (decades to centuries). AMO, PDO, ENSO all have an ‘O’ because they are oscillations. And there are no natural variabilities that can plausibly explain the trends in observations. For example, GCRs: even if it could be shown that GCRs help clouds form, there has been no trend in GCRs that would explain post-industrial temperature trends. And so on.
And the basis for my belief? Well, it’s not a belief, it is an acceptance of the science. The physics says CO2+feedbacks will cause warming; observations and models confirm it. Science has looked for (and in detail at) alternative explanations, but none of them stack up. Yes, the science could be wrong… and you could have a winning lottery ticket in your hand. I don’t think either is a very safe bet.

John B
November 15, 2011 10:24 am

Jiri Moudry says:
November 15, 2011 at 9:53 am
We know enough to make 100-year climate predictions but not enough to make a 100-hour weather forecast. That’s a settled science.
———————–
Exactly! In the same way that I do not know if it will be warmer next Tuesday than it was today, but I am pretty sure it will be warmer in July.

polistra
November 15, 2011 10:43 am

Climate models should not exist.
Period.
Observe nature.
Period.

Warren in Minnesota
November 15, 2011 10:47 am

Stephen Wilde says:
November 15, 2011 at 7:35 am…
Anything else would be subsumed within those parameters.
Needless to say the current models are not well designed in any of those areas.

I had thought of three of the four parameters that you list, but I hadn’t thought of point three: the latitudinal positions. However, I would add the outside influences of aerosols such as volcanic eruptions.

Keith
November 15, 2011 10:53 am

If the climate models are so good, have been tested to destruction and incorporate everything that is relevant and material, then it must’ve occurred to the modellers to perform runs removing one factor one at a time, i.e. does the model show skill if volcanic aerosols are removed, if variations in solar TSI are removed, solar magnetic flux, manmade aerosols, UV/EUV, oceanic cycles, cloud cover, atmospheric water vapour, etc. If manmade CO2 is dominant, then there should still be a moderate-to-high degree of skill displayed.
We’re often told that only through including man’s CO2 emissions are we able to recreate 20th century temperature trends. OK then, in papers that show 20th century temp recreations by one or numerous models when CO2 is removed as a forcing, are we informed as to what forcings/factors remain and their assigned weightings? If so, are all suspected factors incorporated and, by adjusting weightings, is it impossible to get closer to measured temp trends than the CO2-driven model versions?
I won’t be stunned if there is a model version that does a much better job than anything else published by limiting CO2 to a very minor role and focusing on ALL solar activity, global cloud cover, volcanic aerosols and oceanic cycles. I WILL be stunned, though, if such a model version were stuck with and its results and methodology ever published.
Bob, excellent work as ever.

John B
November 15, 2011 11:12 am


Is this the kind of thing you are looking for?
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch9s9-2-2.html
So yes, they have looked at forcings in isolation and in combination. Lots of people have looked at exactly that.

Editor
November 15, 2011 11:12 am

John B says: “Yes, I read the post. No, I did not contradict myself.”
Anyone reading your comments can see you’ve contradicted yourself. In your November 15, 2011 at 3:31 am comment you wrote, “The hindcast shows that the models replicate past climate pretty well over scales of decades to centuries.” “Pretty well” is subjective, but anyone looking at Animation 1 above can see that this is incorrect. Then in your November 15, 2011 at 3:38 am comment you wrote, “But the models do not include multidecadal oscillations, and were pretty good, though obviously they missed the spike due to the 1998 El Nino.” That appears to be a contradiction, John B. One would think that for the models to “replicate past climate pretty well over scales of decades to centuries,” multidecadal variability would be included in those timeframes.
Back to your November 15, 2011 at 7:58 am comment: There you wrote, “The models do not include mulitdecadal oscillations. They do not need to in order to model long term trends. Your Figure 1 shows this.”
Again, John B, you continue to overlook the topic of this post. The post is not about long-term trends. They’re not being discussed on this thread. The title of my post is, Should Climate Models Be Initialized To Replicate The Multidecadal Variability Of The Instrument Temperature Record During The 20th Century? That’s the subject discussed in the post. A similar question is asked twice in it, which is why I asked you earlier if you had read it.
You continued, “Your Animation 1 is cherry picking, presumably aimed at showing otherwise”
Cherry picking? You actually made me laugh with that one, John B. Cherry picking? Really? There is a well-known multidecadal signal in the instrument temperature record. It’s even acknowledged by the IPCC in AR4. In Chapter 3, page 249, they state, “Clearly, the changes [in Global Temperature] are not linear and can also be characterized as level prior to about 1915, a warming to about 1945, leveling out or even a slight decrease until the 1970s, and a fairly linear upward trend since then (Figure 3.6 and FAQ 3.1).”
If I had wanted to cherry pick, I would have presented this:
http://i40.tinypic.com/o7n23s.jpg
Note that the mid-century flat spell is now slightly negative. And of course a couple more moderate El Nino events, followed by back-to-back La Nina events, as have happened since about 2005, would lower the trend after 1998 even more. I could have cherry picked for this post but I didn’t.
Just in case you missed the link, refer to this post and click on the link marked “IMAGINE, IF YOU WILL…” toward the bottom of it. That’s the worst case scenario the IPCC is facing because they don’t consider multidecadal variability:
http://bobtisdale.wordpress.com/2011/11/14/imagine-if-you-will/
Now, if you’re not aware, climate model outputs of SST data bear no likeness to observations over the past 30 years or so. Refer to the following two posts. They’ve also been cross posted here at WUWT. I’m sure you’re aware of the implications of that with respect to atmospheric circulation, since much of atmospheric circulation is dependent on the oceans:
http://bobtisdale.wordpress.com/2011/04/10/part-1-%e2%80%93-satellite-era-sea-surface-temperature-versus-ipcc-hindcastprojections/
And:
http://bobtisdale.wordpress.com/2011/04/19/492/
I have the feeling you also need to be made aware that the SST data for the past 30 years does not support the hypothesis of AGW. In the following two posts, I discuss and illustrate that fact pretty well. They too have been cross posted here. The first one starts off with an introductory discussion of ENSO. My guess is you misunderstand the El Nino-Southern Oscillation as well:
http://bobtisdale.wordpress.com/2011/07/26/enso-indices-do-not-represent-the-process-of-enso-or-its-impact-on-global-temperature/
And:
http://bobtisdale.wordpress.com/2011/08/07/supplement-to-enso-indices-do-not-represent-the-process-of-enso-or-its-impact-on-global-temperature/

Matt
November 15, 2011 11:12 am

TBear,
No, this post does not say that the climate models are crap. Crap can be used as fertilizer and thus has some redeeming value. The climate models, on the other hand…

November 15, 2011 11:49 am

http://climexp.knmi.nl/data/tcet_mean1a.png
This is the longest instrumental record, CET. Can anyone run the model for the given area backwards to see the result? Only when all the ups and downs are replayed can we claim: “Sun it is not, nor clouds or aerosols, so it must be CO2, because what else?”

P. Solar
November 15, 2011 12:07 pm

Here are a couple of simple models that show the relative magnitudes of the cyclic and quadratic components (the quadratic being the result of an exponential rise in CO2 concentration).
Basic method is to fit a straight line to dT/dt to account for CO2 and try to characterise the majority of the residual with cosines.
Even one ~58y cosine plus a quadratic is better than just about any supercomputer model.
CO2 emissions:
http://tinypic.com/r/r76l4h/5
trivial model
http://tinypic.com/r/2nrn24m/5
better model (also shows Scaffeta)
http://tinypic.com/r/2dw924i/5
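For anyone curious what P. Solar’s “trivial model” amounts to, here is a sketch: fit a quadratic plus one ~58-year cycle by ordinary least squares, using a cosine/sine pair so the unknown phase enters linearly. The data below are synthetic, generated from exactly such a quadratic-plus-cycle, so this demonstrates only the fitting method, not a result about real temperatures:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1880, 2011).astype(float)

# Synthetic "temperature anomalies": quadratic rise plus a 58-year cycle plus noise.
truth = 4e-5 * (years - 1880) ** 2 + 0.1 * np.cos(2 * np.pi * (years - 1940) / 58.0)
obs = truth + rng.normal(0.0, 0.05, years.size)

# Design matrix: constant, quadratic, and a fixed-period cosine/sine pair
# (the sine absorbs the unknown phase, keeping the problem linear).
t = years - 1880
X = np.column_stack([
    np.ones_like(t),
    t ** 2,
    np.cos(2 * np.pi * years / 58.0),
    np.sin(2 * np.pi * years / 58.0),
])
coef, *_ = np.linalg.lstsq(X, obs, rcond=None)
amp = np.hypot(coef[2], coef[3])  # recovered cycle amplitude
print(f"quadratic coef: {coef[1]:.2e}, cycle amplitude: {amp:.3f} deg C")
```

Since the fit recovers whatever structure the basis can express, nothing here establishes that a 58-year cycle is physically present in the real record; that would require the observational series and out-of-sample testing.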

November 15, 2011 12:08 pm

Keith:
“If the climate models are so good, have been tested to destruction and incorporate everything that is relevant and material, then it must’ve occurred to the modellers to perform runs removing one factor one at a time, i.e. does the model show skill if volcanic aerosols are removed, if variations in solar TSI are removed, solar magnetic flux, manmade aerosols, UV/EUV, oceanic cycles, cloud cover, atmospheric water vapour, etc. If manmade CO2 is dominant, then there should still be a moderate-to-high degree of skill displayed.”
You clearly don’t understand how climate models work, how they are tested, and how attribution is done.
1. Does the model show skill if volcanic aerosols are removed? This has been explicitly tested by seeing how models respond to volcanic eruptions. It is one of the better (but still not perfect) aspects of the models. Also, the entire sulphur cycle can be flipped on and off. Skill improves when it’s on.
2. Variations in solar TSI: TSI is also tested. The biggest issue with TSI is NOT how the models handle it, but rather A. the historical forcings and B. projecting the future. In the runs that everybody is looking at, some of the models projected flat TSI going forward, flat from a high baseline. This leads models to overestimate in the short term, which they have done.
3. Solar magnetic flux: to incorporate a physical cause you need physics that connects the variable to other variables in the model. Missing physics.
4. Manmade aerosols: yup, they are in there. You can see the response to adding them or not.
5. UV/EUV: this is an area that some models will cover better than others, based on their atmospheric chemistry modules.
6. Oceanic cycles: this is an OUTPUT of the models, not an input you vary! Many people make this mistake. They think that the EMERGENT properties of the system should somehow be inputs. They are not. What Bob is showing you is that the output of the models does not capture the emergent properties perfectly.
7. Cloud cover: cloud cover is not an input, you don’t vary it. Cloud cover is an output.
8. Atmospheric water vapour: this is also an output. Although NASA did one test where they ZEROED the water vapor. Naturally, the model responded correctly and water vapor returned to the atmosphere.
CO2. Here is the simple fact. If you run the models without CO2 forcing they perform poorly in hindcast. They miss the current warming. If you include CO2 the models do better.
Why?
Simple. GHGs cause warming. More GHGs, more warming. Ask Anthony, Willis, Lindzen, Christy, Spencer, Monckton: all skeptics (with backgrounds in physics or good reading skills) understand that more GHGs means a warmer planet.
Does that mean that all the warming is caused by CO2? No.
It means exactly what it shows: without CO2, the models get an F. With CO2, the models get a C or B. Perfect? Hardly. Do they confirm (not prove) that our core understanding is correct? Yes. Should they be used to set policy?
That is a whole different question.

P. Solar
November 15, 2011 12:14 pm

>>
The models do not include mulitdecadal oscillations. They do not need to in order to model long term trends.
>>
No, but if you use “the last 50y of 20th c” as your reference period and ignore the cyclic components you are going to confound GH warming and natural cycles and head off in the wrong direction after y2k.

Editor
November 15, 2011 12:22 pm

Ross Sheehy – “I suppose they could always just whack in a giant sine curve and then adjust the parameters to pretend they know what is actually going on in the world. Even that seems too hard.”
Try this:
http://members.westnet.com.au/jonas1/HadleyCurveFit20111114.jpg
I would suggest that it gives a much better clue than fitting a straight line to the last 30 years of the 20thC.
Braddles – “It’s been said before, but every post on modelling should say it again: any model that uses curve fitting has no predictive value.”
Correct. So the above graph does indeed have no predictive value. One value it does have, however, is that it demonstrates very clearly that any straight line or curve fitted to 30-odd years of data cannot possibly have any predictive value. Another of its values is that it can point to possible actual influences on temperature which can then be investigated – once there is a mechanism the rules change. See Vukcevic’s link
http://www.vukcevic.talktalk.net/theAMO.htm
“Global importance of the AMO is underlined by the recent Berkeley Earth Project:
We find that the strongest cross-correlation of the decadal fluctuations in ( global ) land surface temperature is not with ENSO but with the AMO.
I would have expected PDO or PDO+AMO, but the point is made – natural factors drive global temperature far more than is understood by the IPCC.
John B – The models do not work by latching on to trends. They work by simulating the effects of known physics and various forcings on a simplified, gridded model of the atmosphere and oceans.
For “various” read “selected”. Even that’s being generous, try : For “various” read “CO2”. And they do latch on to trends, that’s how they calibrate the models – look in the IPCC report for the words “parametrization” and “constrained by observation”.

Jim Masterson
November 15, 2011 12:34 pm

>>
John B says:
November 15, 2011 at 9:23 am
Jim,
A better way of putting it: the net effect of an individual oscillation “evens out” to zero over a long enough timescale.
John
<<
In physics, the usual purpose of models is to investigate and discover the processes that are really happening. Apparently the purpose of climate models is to dumb them down so they only show that CO2 is the boogeyman.
>>
John B says:
November 15, 2011 at 10:20 am
And the basis for my belief? Well, it’s not a belief, it is an acceptance of the science. The physics says CO2+feedbacks will cause warming, observations and models confirm it. Science has looked for (and in detail at) alternative explanations, but not of them stack up.
<<
Even Trenberth’s simple cartoon energy model requires that the atmosphere warm faster than the surface. It’s a physical requirement of the feedback model used in the cartoon. The problem appears to be lack of imagination–not lack of alternative explanations. Albedo change (that stays well within the current albedo error ranges) explains the surface warming while warming the atmosphere by the correct, lesser amount.
Jim

P. Solar
November 15, 2011 12:43 pm

“7 cloud cover: cloud cover is not an input, you dont vary it. cloud cover is an output”
Not quite true. They have minimal understanding of cloud formation and precipitation and therefore cannot model the physics. Instead they “parametrise” it. At that point it becomes an input.
Currently used “parameters” cause the models to produce a climate sensitivity that is questionable, to say the least.

P. Solar
November 15, 2011 12:45 pm

>> The physics says CO2+feedbacks will cause warming
No, the feedbacks are pure speculation, not science.