A new paper published in Ecological Modelling finds that climate sensitivity to doubled CO2 concentrations is significantly lower than estimates from the IPCC and from climate models, which “utilize uncertain historical data and make various assumptions about forcings.” The author instead uses a ‘minimal model’ with the fewest possible assumptions and the least data uncertainty to derive a transient climate sensitivity of only 1.093 °C:
“A minimal model was used that has the fewest possible assumptions and the least data uncertainty. Using only the historical surface temperature record, the periodic temperature oscillations often associated with the Pacific Decadal Oscillation and Atlantic Multidecadal Oscillation were estimated and subtracted from the surface temperature data, leaving a linear warming trend identified as an anthropogenic signal. This estimated rate of warming was related to the fraction of a log CO2 doubling from 1959 to 2013 to give an estimated transient sensitivity of 1.093 °C (0.96–1.23 °C 95% confidence limits) and equilibrium climate sensitivity of 1.99 °C (1.75–2.23 °C). It is argued that higher estimates derived from climate models are incorrect because they disagree with empirical estimates.”
Otto et al. find that equilibrium climate sensitivity [over the next several centuries] is only ~1.3 times greater than transient climate sensitivity, so the estimated transient sensitivity of 1.093 °C could correspond to as little as ~1.4 °C equilibrium sensitivity, less than half the implied IPCC central estimate in AR5 of ~3.3 °C.
Moreover, this paper does not assume any solar forcing or solar amplification mechanisms. The integral of solar activity plus ocean oscillations explains ~95% of global temperature change over the past 400 years. Including potential solar forcing in the ‘minimal model’ could reduce the estimated climate sensitivity to CO2 even further.
- Empirical estimates of climate sensitivity are highly uncertain.
- Anthropogenic warming was estimated by signal decomposition.
- Warming and forcing were equated in the time domain to obtain sensitivity.
- Estimated sensitivity is 1.093 °C (transient) and 1.99 °C (equilibrium).
- Empirical study sensitivity estimates fall below those based on GCMs [General Circulation Models].
Abstract
Climate sensitivity summarizes the net effect of a change in forcing on Earth’s surface temperature. Estimates based on energy balance calculations give generally lower values for sensitivity (< 2 °C per doubling of forcing) than those based on general circulation models, but utilize uncertain historical data and make various assumptions about forcings. A minimal model was used that has the fewest possible assumptions and the least data uncertainty. Using only the historical surface temperature record, the periodic temperature oscillations often associated with the Pacific Decadal Oscillation and Atlantic Multidecadal Oscillation were estimated and subtracted from the surface temperature data, leaving a linear warming trend identified as an anthropogenic signal. This estimated rate of warming was related to the fraction of a log CO2 doubling from 1959 to 2013 to give an estimated transient sensitivity of 1.093 °C (0.96–1.23 °C 95% confidence limits) and equilibrium climate sensitivity of 1.99 °C (1.75–2.23 °C). It is argued that higher estimates derived from climate models are incorrect because they disagree with empirical estimates.
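The procedure the abstract describes is simple enough to sketch in a few lines. The snippet below is not the paper's code: it is a minimal illustration on synthetic data, assuming ~60-year and ~20-year sinusoids and approximate Mauna Loa CO2 values for 1959 and 2013, of how one might subtract fitted oscillations, fit a linear trend to the residual, and scale the 1959–2013 warming by the fraction of a CO2 doubling realized over that period.

```python
# Minimal sketch of the approach described in the abstract (illustrative only).
# Steps: fit ~60- and ~20-year sinusoids, subtract them, fit a linear trend
# to the residual, then scale the 1959-2013 warming by the fraction of a
# CO2 doubling realized over that interval.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1880, 2014)

# Synthetic "observed" anomaly: trend + 60-yr and 20-yr cycles + noise
temps = (0.007 * (years - 1880)
         + 0.12 * np.cos(2 * np.pi * (years - 1940) / 60.0)
         + 0.04 * np.cos(2 * np.pi * years / 20.0)
         + rng.normal(0, 0.08, years.size))

# Design matrix: intercept, trend, and sin/cos pairs for the assumed periods
cols = [np.ones_like(years, float), years - years[0]]
for period in (60.0, 20.0):
    cols.append(np.sin(2 * np.pi * years / period))
    cols.append(np.cos(2 * np.pi * years / period))
X = np.column_stack(cols)
beta, *_ = np.linalg.lstsq(X, temps, rcond=None)

# Remove the fitted oscillations, keeping intercept + trend + noise
residual = temps - X[:, 2:] @ beta[2:]

# Linear trend of the residual over 1959-2013 (deg C per year)
mask = (years >= 1959) & (years <= 2013)
slope = np.polyfit(years[mask], residual[mask], 1)[0]
warming = slope * (2013 - 1959)

# Fraction of a CO2 doubling realized over 1959-2013
# (approximate annual-mean Mauna Loa values)
co2_1959, co2_2013 = 316.0, 396.5
fraction = np.log(co2_2013 / co2_1959) / np.log(2.0)

tcr = warming / fraction
print(f"Implied transient sensitivity ~ {tcr:.2f} deg C per doubling")
```

With a real surface temperature series in place of the synthetic one, the final scaling step is where a figure like the paper's 1.093 °C would come from; with synthetic data the printed number only illustrates the arithmetic.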


Isn’t 1 °C around what would be expected from CO2 warming with no feedbacks?
Hilarious 🙂
Very close to Arrhenius’s final estimate. If the current calculation is correct, then it seems climate science took about 100 years to arrive back where it started. 😉
Past atmospheric CO2 content is very poorly covered. When measurements were taken, they were mostly not believed. All studies seem to take the Callendar 1948 paper as gospel, but is it? No, because of all the readings available to him then, he put a ceiling of 285 ppmv as the maximum. Unfortunately, 50% of the readings then exceeded 285 ppmv, some by more than twice that. This is not science but biased reporting.
They show no evidence that the new model with the higher figures is correct and improved. After the “step change” we have much more data, so it is more likely to be the correct number.
“leaving a linear warming trend identified as an anthropogenic signal.”
While attempting to account for natural variability is a good step, I don’t see the justification for assuming the resulting “trend” is AGW.
Firstly, the world has been warming for several hundred years; clearly there is a long-term process independent of AGW.
Secondly, the CO2 effect is not calculated to be linear in concentration (forcing scales with the logarithm of concentration).
“Using only the historical surface temperature record, the periodic temperature oscillations often associated with the Pacific Decadal Oscillation and Atlantic Multidecadal Oscillation were estimated and subtracted from the surface temperature data”
Subtracted the PDO? Wrong to do that.
Calling on Mr. Tisdale!
This is just another attempt at ‘accelerated cosine warming’:
http://climategrog.wordpress.com/?attachment_id=209
There are roughly two and a half cycles in the sample; all they are doing is taking the difference between an (inappropriate) linear fit to those two and a half cycles and the recent rise.
http://climategrog.wordpress.com/?attachment_id=987
The second graph reproduced here is a joke. Having removed the fitted cyclic variability, they then _selectively_ fit a linear ‘trend’ to 1950-2010. What about the rest of the data?
There is a huge deviation of this fitted trend from the early part of the “residual” line, which is dropping as fast as the fitted trend is rising. That means there is non-AGW variability that has still not been removed before a supposed AGW signal is selectively fitted to the data.
Even with those huge errors in their method, they arrive at a value half of what the IPCC claims.
I suppose it’s progress of a sort: inch by inch they are giving ground, but it’s going to take decades before they finally admit CO2 sensitivity is sod_all +/- 50%.
I think Lindzen & Choi 2011 is probably the most accurate assessment yet and it is the outlier.
They were so far off anyone else’s work that I originally dismissed their result. I suspect a lot of people, even being objective, may have the same reaction, but the more I actually do calculations and look at the evidence, the more I’m convinced that they are correct.
http://climategrog.wordpress.com/?attachment_id=884
Lindzen & Choi 2011: “On the Observational Determination of Climate Sensitivity and Its Implications”
http://www-eaps.mit.edu/faculty/lindzen/236-Lindzen-Choi-2011.pdf
Lindzen and Choi are the only ones who don’t make the mistake of doing linear regression on a scatter plot, a basic but ubiquitous error that reduces the slope and thus gives an artificially high value of CS:
http://climategrog.wordpress.com/2014/03/08/on-inappropriate-use-of-ols/
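For what it's worth, the attenuation effect referred to here (regression dilution: noise in the x-variable biases an OLS slope toward zero) is easy to demonstrate with synthetic numbers. The sketch below is not Lindzen & Choi's analysis; the "true" feedback of 3.3 W/m² per K, the noise levels, and the canonical 3.7 W/m² forcing per doubling are all assumed for illustration. Since sensitivity is inversely proportional to the fitted feedback slope, an attenuated slope inflates the inferred sensitivity.

```python
# Regression dilution: measurement noise in the x-variable (here, a
# temperature anomaly) biases the OLS slope toward zero. Because the
# feedback parameter lambda is the slope of outgoing flux vs temperature,
# and sensitivity ~ F_2x / lambda, an attenuated slope inflates sensitivity.
import numpy as np

rng = np.random.default_rng(1)
n = 500
true_lambda = 3.3          # W/m^2 per K ("true" feedback, assumed)
f2x = 3.7                  # W/m^2 forcing per CO2 doubling (canonical value)

dT_true = rng.normal(0, 0.3, n)                        # true T fluctuations, K
flux = true_lambda * dT_true + rng.normal(0, 0.5, n)   # noisy flux response
dT_obs = dT_true + rng.normal(0, 0.3, n)               # T with measurement noise

# OLS of flux on the noisy temperature -> attenuated slope
slope_ols = np.polyfit(dT_obs, flux, 1)[0]

print(f"true lambda   : {true_lambda:.2f} -> sensitivity {f2x / true_lambda:.2f} K")
print(f"OLS (noisy x) : {slope_ols:.2f} -> sensitivity {f2x / slope_ols:.2f} K")

# Expected attenuation factor: var(true) / (var(true) + var(noise))
atten = 0.3**2 / (0.3**2 + 0.3**2)
print(f"expected attenuation ~ {atten:.2f}")
```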
This paper jibes with my comment about the last one. If 1950 to the present shows the hockey stick but does NOT contain the lengthy “periodic pauses” which were found to be present throughout the record, then the models overestimate man-made warming significantly. This paper finds the same thing.
If they were to take the linear trend from 1880-1940 (i.e. peak to peak of the harmonic components they have selected) it may make more sense. They could compare that to the 1940-2000 slope.
1840-1950 is just one and a half cycles of “cosine warming”.
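The peak-to-peak point can be checked with a toy calculation: for a signal that is a steady trend plus a 60-year cosine peaking near 1940 and 2000, a linear fit over a peak-to-peak window (1880–1940 or 1940–2000) recovers the underlying trend, while a window such as 1950–2010, which runs from just past a peak through the 1970 trough and over the 2000 peak, folds part of the oscillation into the apparent trend. The amplitudes and phases below are assumed purely for illustration.

```python
# Toy illustration: a fixed linear trend plus a 60-year cosine.
# A linear fit over a peak-to-peak window recovers the underlying trend;
# a window not aligned with the cycle folds part of the oscillation into
# the apparent trend.
import numpy as np

years = np.arange(1850, 2014)
true_trend = 0.005                                     # deg C / yr (assumed)
signal = (true_trend * (years - years[0])
          + 0.12 * np.cos(2 * np.pi * (years - 1940) / 60.0))  # peaks 1940, 2000

def window_trend(y0, y1):
    m = (years >= y0) & (years <= y1)
    return np.polyfit(years[m], signal[m], 1)[0]

for y0, y1 in [(1880, 1940), (1940, 2000), (1950, 2010), (1970, 2013)]:
    print(f"{y0}-{y1}: fitted trend {window_trend(y0, y1)*100:.2f} deg C/century "
          f"(true {true_trend*100:.2f})")
```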
This is rather refreshing:
Since the temperatures have been higher before, how can anyone conclude that the current warming isn’t just a natural return to higher temperatures as part of the recovery from the LIA?
In 2002 we wrote:
“Without these speculated positive feedbacks, even a doubling of CO2 concentration would lead to a theoretical warming of only approximately 1 °C.”
http://www.apega.ca/Members/Publications/peggs/WEB11_02/kyoto_pt.htm
BUT:
1. There are negative feedbacks; ECS, if it exists at all, is much less than 1 °C, say ~0.2 °C.
2. BUT CO2 LAGS temperature at all measured time scales, so perhaps we should not be talking about ECS (temperature sensitivity to atmospheric CO2), but rather the sensitivity of CO2 to temperature.
http://icecap.us/index.php/go/joes-blog/carbon_dioxide_in_not_the_primary_cause_of_global_warming_the_future_can_no
P.S. Greg Goodman – good comments here and on another thread. Good man. 🙂
Craig Loehle, it appears you removed the multidecadal cycles from global surface temperatures using a statistical model, not AMO and PDO data. Is that correct?
This might get snipped, but I’ll take a chance.
On the 60/20-year harmonic cycles employed:
Climate-related indices are a mish-mash of frequencies. A selection I used here (some time ago, for another purpose) doesn’t show any noticeable presence of a 60-year component, while the 20-year one is likely to be the solar magnetic field (Hale) cycle of ~22 years, which shows a very strong presence. Another bunch is concentrated around the luni-solar period of about 18 years.
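For anyone who wants to check that sort of claim on an index of their choice, a simple periodogram shows which periods carry power in an annually sampled series. In the sketch below, `index.csv` is only a placeholder for a two-column year/value file; nothing is implied about which peaks any particular index actually contains.

```python
# Generic periodogram of an annually sampled climate index.
# 'index.csv' is a placeholder (two columns: year, value); the peaks it
# reveals depend entirely on the series supplied.
import numpy as np

data = np.loadtxt("index.csv", delimiter=",")   # columns: year, value
values = data[:, 1] - data[:, 1].mean()         # remove the mean
n = values.size

spectrum = np.abs(np.fft.rfft(values)) ** 2     # raw power spectrum
freqs = np.fft.rfftfreq(n, d=1.0)               # cycles per year (annual step)

# Report the strongest periods between 2 and n/2 years
mask = (freqs > 2.0 / n) & (freqs < 0.5)
periods = 1.0 / freqs[mask]
power = spectrum[mask]
for p, s in sorted(zip(periods, power), key=lambda t: -t[1])[:5]:
    print(f"period ~ {p:5.1f} yr   power {s:.3g}")
```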
Just a reminder, while there is a “consensus” that warming is happening and humans are a cause, there is absolutely NO CONSENSUS on the value of climate sensitivity.
In the IPCC’s own words (AR5 SPM, Note 16): “No best estimate for equilibrium climate sensitivity can now be given because of a lack of agreement on values across assessed lines of evidence and studies”
“Lack of Agreement” is exactly the opposite of “Consensus”.
MY THEORY (critiques welcomed):
One potential source of disagreement between empirical, paleo, and GCM estimates for Climate Sensitivity (and Transient Climate Response) could be the recent unexpected behaviour of Antarctic sea ice.
Models (and common sense) suggest that as the planet warms, sea ice should recede. We’ve seen that in the Arctic, but the Antarctic has, for no apparent reason, done the opposite. I don’t know of any models that predicted an increase in southern sea ice, nor of any paleo instances of this occurring.
This recent unexpected behaviour would have the effect of changing the sign of ice albedo (IA) feedback, at least in the southern hemisphere, from positive to negative, and probably from positive to neutral for the planet as a whole.
The lower feedback in the southern hemisphere could explain the slower rate of warming observed there, and the overall lower warming observed recently, including the lower estimates of Climate Sensitivity (and TCR) derived from empirical methods.
I have never been able to understand what ”equilibrium sensitivity” really is. It is supposed to be the effect of the injection of a quantity of CO2 into the atmosphere after the temperature change has reached equilibrium with the oceans. However this equilibration can’t be complete until there has been a complete turnover of the deep ocean waters through the thermohaline circulation, which takes on the order of 1000 years, and by that time all, or almost all, the CO2 will be gone from the atmosphere, into the biosphere, into the ocean waters, into carbonate minerals or into (inert) organic bottom deposits in lakes and oceans.
This decline of the CO2 level is universally modelled as an exponential decay that leaves a substantial proportion of the CO2 in the atmosphere indefinitely. I once asked a very eminent climate scientist how this could be true, since it implied that CO2 levels could only increase and never decrease over geologic times, while we know that on the whole the opposite is true. He told me that the model was only meant to be used over relatively short timespans, perhaps up to a couple of centuries.
So the equilibrium sensitivity is calculated by using a formula that explicitly is known not to be applicable to calculating the equilibrium sensitivity. I suppose that is about par for the course in ‘climate science’.
By the way: no I won’t tell the name of the scientist. He is already in enough trouble for occasionally deviating from the party line.
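For context on the “exponential decay that leaves a substantial proportion of the CO2 in the atmosphere”: the decay of a pulse is commonly parameterized as a constant term plus a sum of decaying exponentials (a Bern-type impulse response). The coefficients below are approximate AR4-era values and are shown only to make the functional form concrete; the constant a0 term is exactly the part that never decays, which is what makes the formula questionable when pushed to millennial timescales.

```python
# Bern-type impulse response for a CO2 pulse: a constant term plus three
# decaying exponentials. Coefficients are approximate AR4-era values,
# shown only to illustrate the functional form being discussed.
import numpy as np

a0 = 0.217                       # fraction that never decays in this formula
a = [0.259, 0.338, 0.186]        # amplitudes of the decaying terms
tau = [172.9, 18.51, 1.186]      # e-folding times in years

def airborne_fraction(t_years):
    """Fraction of an instantaneous CO2 pulse remaining after t years."""
    t = np.asarray(t_years, dtype=float)
    return a0 + sum(ai * np.exp(-t / ti) for ai, ti in zip(a, tau))

for t in (10, 50, 100, 500, 1000):
    print(f"after {t:5d} yr: {float(airborne_fraction(t)):.2f} of the pulse remains")
```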
…saving me from having to make similar comments. Such as: what about running the data back to (say) 12,000 BCE? 500,000 BCE? 5,000,000 BCE? Not to establish the correspondence with CO_2, as we don’t have accurate global CO_2 estimates for more than the last 30+ years, perhaps even 60 years, but certainly not for 150 years (and well into that stretch we learn more from proxies and might as well shoot for the longer time stretches), but to establish a) the natural variability of the climate and b) to demonstrate trivially that there is no visible cyclic variability to remove in the general temperature record. There is no viable physical model that explains the cycles they are fitting that has the slightest predictivity across the geological or proxy-derived temperature record, and in a nonlinear chaotic system such as climate, the decadal oscillations are probably quasiparticle structure that appears, disappears, and spontaneously alters in any sort of chaotic sequence.
This is just a more complex form of numerology of the sort that is regularly argued on WUWT pages. 1000 year cycles. 60 year cycles. 100,000 year cycles. 26,000 year cycles. Jupiter, Saturn, the Moon. ENSO. PDO. AMO. NAO. The solar cycle. We cannot build a credible deterministic or explanatory model for the climate because it is an insanely difficult computational problem, so we are reduced to numerology in one or two dimensions to explain a fractal, chaotic phenomenon with an enormous dimensionality.
I sometimes weep for my species. Oh wait! The clouds outside look just like a giant whale at this very moment! Whales are lucky! I’m gonna go buy some lottery tickets.
What could go wrong?
rgb
Reblogged this on Centinel2012 and commented:
This matches very closely with several other recent papers that show CO2 sensitivity at 1.0 degree C or less!
This is making a lot of sense – listen up!
“Moreover, this paper does not assume any solar forcing or solar amplification mechanisms. The integral of solar activity plus ocean oscillations explains ~95% of global temperature change over the past 400 years. Including potential solar forcing in the ‘minimal model’ could reduce the estimated climate sensitivity to CO2 even further.”
1. It is impractical to measure climate sensitivity at the present time. Therefore ALL numeric values quoted are nothing more than guesses, and have no place in science or physics.
2. No-one, and I mean no-one, has measured a CO2 signal in any modern temperature/time graph. So it follows that there is significant data suggesting that the true value of climate sensitivity, however defined, is indistinguishable from zero.
All estimates of climate sensitivity have no place in true physics. Anyone who presumes to call him/herself a physicist should shout it from the rooftops that ALL and ANY numeric values of climate sensitivity which ANYONE quotes are utterly and completely meaningless.
Three significant figures, LOL.
They should talk to Mosher’s friends in Antarctica about using this algorithm instead of the Bootstrap one. These computer modelling algorithms are much better than the one used to count white pixels. Apparently the error of the Bootstrap algorithm could be the size of Alaska, maybe.
rgbatduke says: July 23, 2014 at 5:28 am
The clouds outside look just like a giant whale at this very moment! Whales are lucky!
Professor Brown,
your colleague Professor Phil Jones of the University of East Anglia ‘saw’ a superior one the other day, just down the road from the CRU.
http://i.telegraph.co.uk/multimedia/archive/02983/gods-face0_2983566k.jpg