Depending on whom you talk to, climate sensitivity is either underestimated or overestimated. In this case, a model study suggests forcing uncertainty has been underestimated. One thing is clear: science does not yet know for certain what the true climate sensitivity to CO2 forcing is.
There is a new paper from Tanaka et al. (download PDF here) describing how forcing uncertainty may be underestimated. Like the story of Sisyphus, an atmospheric system with negative feedbacks will roll heat back down the hill; with positive feedbacks, the further uphill you go, the easier it is to heat up. The question is: which is it?
Insufficient Forcing Uncertainty Underestimates the Risk of High Climate Sensitivity

ABSTRACT
Uncertainty in climate sensitivity is a fundamental problem for projections of the future climate. Equilibrium climate sensitivity is defined as the asymptotic response of global-mean surface air temperature to a doubling of the atmospheric CO2 concentration from the preindustrial level (≈ 280 ppm). In spite of various efforts to estimate its value, climate sensitivity is still not well constrained. Here we show that the probability of high climate sensitivity is higher than previously thought because uncertainty in historical radiative forcing has not been sufficiently considered. The greater the uncertainty that is considered for radiative forcing, the more difficult it is to rule out high climate sensitivity, although low climate sensitivity (< 2°C) remains unlikely. We call for further research on how best to represent forcing uncertainty.
CONCLUDING REMARKS
Our ACC2 inversion approach has indicated that by including more uncertainty in
radiative forcing, the probability of high climate sensitivity becomes higher, although low climate sensitivity (< 2°C) remains very unlikely. Thus in order to quantify the uncertainty in high climate sensitivity, it is of paramount importance to represent forcing uncertainty correctly, neither as restrictive as in the forcing scaling approach (as in previous studies) nor as free as in the missing forcing approach. Estimating the autocorrelation structure of missing forcing is still an issue in the missing forcing approach. We qualitatively demonstrate the importance of forcing uncertainty in estimating climate sensitivity – however, the question is still open as to how to appropriately represent the forcing uncertainty.
h/t and thanks to Leif Svalgaard
The “missing forcing” is produced groundwater from slow-to-recharge aquifers that is not in equilibrium with the atmosphere for the first cycle. This “new” water amounts to about 800 cubic kilometers per year, about 70% of which is used for food and forage irrigation. The potential energy in the water is changed into kinetic energy at constant temperature by evapotranspiration. The rising water vapor is condensed at lower temperatures. The condensation process converts the kinetic energy back into potential energy, giving up the absorbed latent heat as specific heat, thereby increasing the temperature of the atmosphere.
After the first 10-day cycle, the produced groundwater becomes part of the hydrological cycle and thus adds no more heat; however, the production of groundwater from slow-to-recharge aquifers continues and slowly increases each year. The rate of increase in groundwater production has slowed somewhat in recent years.
The condensed new water adds about 2.5 mm to the level of the oceans each year.
I understand why the water vapor created by the hydrological cycle is not accounted for in the AGW models, but leaving out the new water (about 7% of which comes from burning fossil fuels) is a serious oversight in my view. The new water vapor adds much more heat to the atmosphere than is reflected in global warming, so there must be a “temperature relief valve” in the atmosphere, perhaps in the tropopause, where the relative humidity is decreasing due to increasing carbon dioxide lowering the partial pressure of the mixture.
Also not considered by the AGW models is water vapor from power and industrial plant mechanical draft (and natural draft in the case of nuclear plants) cooling towers. These cooling towers accelerate evaporation and create aerosols inside the towers, which may create different cloud structures. Unless the water supply was from groundwater produced from slow to recharge aquifers, cooling towers would not increase the level of the oceans.
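The sea-level figure quoted above is easy to sanity-check. A minimal sketch, using the commenter's 800 km^3/yr volume; the ocean area is a standard value assumed here, not from the comment:

```python
# Sanity check: ~800 km^3/yr of "new" water spread over the ocean surface.
GROUNDWATER_KM3_PER_YR = 800.0   # km^3/yr, the commenter's estimate
OCEAN_AREA_KM2 = 3.61e8          # km^2, approximate global ocean area (assumed)

rise_mm = GROUNDWATER_KM3_PER_YR / OCEAN_AREA_KM2 * 1e6   # km -> mm
print(f"implied sea-level rise: {rise_mm:.1f} mm/yr")     # ~2.2 mm/yr
```

This gives roughly 2.2 mm/yr, in the same ballpark as the 2.5 mm claimed; the difference presumably reflects slightly different assumed volumes or ocean areas.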
Joel Shore, the highest CO2 sensitivity one can get from the PaleoClimate record is 1.5C per doubling of CO2.
There are a few periods which indicate a higher sensitivity number, but most periods indicate a figure much lower than 1.5C per doubling.
Let’s look at 450 million years ago. CO2 at 4,500 ppm, 4 doublings; solar forcing 5% lower than today, or 4.5 W/m^2, or 1.2C; estimated temperature at the time +2.0C from today, or just 0.8C per doubling after netting out the CO2 and solar changes.
Let’s look at 35 million years ago. CO2 at 1,400 ppm, 2.5 doublings; estimated temperature at the time +2.0C from today, or 0.8C per doubling again.
I can keep going with about 500 million other years if you want.
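Bill's doubling arithmetic can be checked in a few lines. This is only a sketch of the numbers as stated, assuming the standard 280 ppm preindustrial baseline; the +1.2C solar adjustment for 450 Mya is taken directly from the comment:

```python
import math

PREINDUSTRIAL_PPM = 280.0  # standard preindustrial CO2 level (assumed baseline)

def doublings(ppm):
    """Number of CO2 doublings relative to the preindustrial level."""
    return math.log2(ppm / PREINDUSTRIAL_PPM)

# 450 Mya: ~4500 ppm CO2, temperature +2.0 C, plus 1.2 C credited back
# for the 5% fainter Sun, so ~3.2 C is attributed to CO2 in total.
d_450 = doublings(4500.0)               # ~4.0 doublings
sens_450 = (2.0 + 1.2) / d_450          # ~0.8 C per doubling

# 35 Mya: ~1400 ppm CO2, temperature +2.0 C, no solar adjustment given.
d_35 = doublings(1400.0)                # ~2.3 doublings (the comment says 2.5)
sens_35 = 2.0 / d_35                    # ~0.86 C per doubling

print(f"450 Mya: {d_450:.1f} doublings, {sens_450:.2f} C/doubling")
print(f"35 Mya:  {d_35:.1f} doublings, {sens_35:.2f} C/doubling")
```

A strict log2 count gives ~2.3 doublings at 1,400 ppm rather than the 2.5 quoted, which nudges the second figure up to ~0.86 C per doubling; either way it stays well under 1.5 C.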
Here we show that the probability of high climate sensitivity is higher than previously thought because uncertainty in historical radiative forcing has not been sufficiently considered.
As for the solar forcing: visible activity on the Sun has been counted from the start, with the method changed in midstream. Actual measurements of visible activity on the Sun began somewhat later, have suffered from changes in what is measured and how it is measured, and include dropped efforts leaving large gaps.
Let me make clear that there is a fundamental difference between counting the number of objects to fit into a scaling system and measuring the size of those objects.
When it comes to active regions, sunspots, flares, etc. on the Sun, it makes no difference whether counting or measuring: zero is zero.
So, when activity is moving upwards, counting takes a back seat to measuring. Since the ratio of area to count is in a constant state of change, and the types of phenomena on the Sun also change in relation to each other, there are no magic ratios that can be applied backwards to the counts.
The only way to do that is to dig up all drawings/photos and measure them in the same way.
For a picture of the record and its gaps/uncertainties:
http://www.robertb.darkhorizons.org/SC24/GrDebSfoSnUPF.PNG
I’m still at it trying to link together the most consistent measurements w/overlaps. It’s not pretty.
Darell C. Phillips (11:27:20) :
I had to go look when you brought it up. Thanks, I am not going to sleep tonight.
The watermelon isn’t floating, just sitting on a hard surface right below the waterline. Ditto the cat. Very few cats like water, but a rare one does. Has the SPCA seen this?
If CO2 really has a great effect, why are we here? In times long past there was lots of CO2. How did any life survive? If CO2 is the only thing driving warming, it should have been really hot back then. Unless there are other factors? Unknown factors? Things that the models don’t consider. Perish the thought!
To claim that, because we don’t know, the real value must be worse than what we previously thought we knew is almost as dumb as someone saying “We had to spend a lot of money to keep from going bankrupt.”
Oh, wait …
Leif Svalgaard (11:01:25) :
So in percentages you have a swing of 7% and a solar cycle effect of 0.1% [or almost a hundred times smaller]. That translates into a quarter [S-B law, not round vs. flat] temperature change, so 1.7% of 285K = 5K annual swing and 0.025% of 285K = 0.07K for the solar cycle effect [also almost a hundred times smaller].
Ok, but do you agree that the mid day tropical sea surface sees the full ~1.4W/m^2(PMOD) or ~2.2W/m^2(ACRIM) variation over the solar cycle?
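Leif's quarter-power arithmetic quoted above can be reproduced directly. Under the Stefan-Boltzmann law, equilibrium temperature scales as the fourth root of irradiance, so a fractional irradiance change dS/S maps to a fractional temperature change of roughly (1/4)(dS/S):

```python
# Quarter-power Stefan-Boltzmann scaling: T goes as S^(1/4), so a
# fractional change dS/S in irradiance gives a temperature change
# of about (1/4) * (dS/S) * T.
T_MEAN = 285.0  # K, the mean surface temperature used in the comment

def temp_swing(frac_change):
    """Temperature response to a fractional irradiance change."""
    return T_MEAN * frac_change / 4.0

annual = temp_swing(0.07)   # ~7% annual TSI swing (orbital eccentricity)
cycle = temp_swing(0.001)   # ~0.1% swing over the solar cycle

print(f"annual swing: {annual:.2f} K")   # ~5 K, as quoted
print(f"solar cycle:  {cycle:.3f} K")    # ~0.07 K, as quoted
```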
tallbloke (12:12:02) :
Ok, but do you agree that the mid day tropical sea surface sees the full ~1.4W/m^2(PMOD) or ~2.2W/m^2(ACRIM) variation over the solar cycle?
No, there is this little thing called the albedo, which takes a 31% [or so] bite out of the radiation, and the solar cycle variation is more like 1.2 W/m2, so we are talking about a tiny 0.69*1.2 = 0.8 W/m2 variation at midday. Integrated over the day, that falls to 0.3 W/m2.
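Leif's albedo and day-averaging numbers can be checked with a short sketch. The 1/pi day-average factor (a cosine zenith-angle profile at the equator, darkness included) is an assumption of this sketch, not stated in the comment:

```python
import math

TSI_CYCLE = 1.2   # W/m^2, solar-cycle TSI variation, Leif's figure
ALBEDO = 0.31     # planetary albedo, "31% [or so]"

# Absorbed variation at local noon with the Sun overhead.
midday = (1 - ALBEDO) * TSI_CYCLE     # ~0.83 W/m^2

# Averaging cos(zenith angle) over 24 h at the equator (12 h of darkness
# included) divides the noon value by pi.
daily_mean = midday / math.pi         # ~0.26 W/m^2

print(f"midday: {midday:.2f} W/m^2, daily mean: {daily_mean:.2f} W/m^2")
```

Under this particular geometry the day-averaged figure comes out nearer 0.26 than 0.3 W/m^2; Leif's 0.3 presumably reflects a slightly different averaging.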
There’s a post on Real Climate about forcing and unknowns — Swanson’s
“Warming, interrupted: Much ado about natural variability”.
I tried to post there but for some reason my attempts vanish — they must have a very fierce spam filter. Or something. This is what I thought about that paper; it’s also relevant to the subject here, together with the curious way these papers make assumptions which are not spelled out fully but which influence the logic.
“Somewhere buried in Google are my thoughts on this — no doubt Professor Swanson will be delighted to find that I came to much the same conclusion as he did. My preferred metric is Hadcrut: there is a linear increase from 1910 to 1939 and a similar line from 1970 to the 90’s el Nino, about .17 deg/decade.
I cannot share his confidence that this fits neatly into the current forcing hypothesis. Forcing in the first linear period must have been very different from that during the second, not least because, as I have been told, the real CO2 effect only kicked in with full force during the ’60s. Also, his logic that the sensitivity remains the same depends on the assumption that any other effect is small and short-lived compared to that from CO2. If there is an as yet undefined forcing which operates more-or-less continuously during both periods, then all bets are off. Something, not CO2, was warming the planet from 1910 onwards. The similarity with the post-war warming may be coincidence, of course, but it doesn’t seem necessary to multiply the causes when assuming the same effect is in operation is so much more economical.
The Folland and Parker SST correction, in my opinion, is ill-judged. Without it, the two linear warming spells stand out clearly, as does the miniature PETM that is the period 1939 to ‘46. On the clean graph it looks as if the planet were controlling itself into a steady .17 deg/decade, yielding to extra-large warming hits but, rolling with the punches, resuming from where it left off when the punching stops.
Now all we have to do is explain the WWII temperature excursion and the huge 97/98 el Nino spike. It’s a pity the latter did not occur just after the Gulf War. If it had, I could have done that too.”
JF
“”” Robert Austin (11:23:07) :
David (08:56:20) :
The role of water in earth’s climate is certainly the most central and least understood factor. Water exists on earth in three phases, it has a complex absorption spectrum, it stores and releases vast quantities of heat energy, and it has great effects on albedo in the form of clouds, snow and ice. Without the assumption of a positive feedback from water, the vaunted computer models show no frightening scenarios. CO2 is a simple gas and a climatological open book by comparison. Conclusion: nobody is close to pinning down the role of water in climate regulation. “””
“Least Understood factor ? ” By whom ?
I would say the water factor is well understood, so I suggest you reduce the number of people you choose to include among your nobodies who aren’t close to pinning down the role of water; you need to reduce it by at least one, anyway. And I’m not responsible for anybody else’s lack of understanding.
George
This is incorrect since forcing is never expected to be zero when one expects a logarithmic relationship of CO2 to temperature.
You have a log curve (most likely) in this case. However, you have to multiply your log curve by some sensitivity value. In strictly mathematical terms it could be zero.
But give me ANY monotonically rising curve multiplied by a sensitivity and the lower bound is zero.
Now of course for a log curve the effect is undefined at zero. But for engineering work (and that is as close as measured climate gets) it could be considered zero. But if it makes you happy I will just say that at concentrations of 1E-666 ppmv the value is very close to zero, at any reasonable sensitivity (say up to 20 kW m^-2). Not counting quantum effects. (Where in the universe is that CO2 molecule anyway? Does it even exist?)
So really, below a certain concentration the CO2 molecule doesn’t even exist anywhere in the universe. Mathematics has its limitations. It is however a thing of great beauty and sometimes it even has practical applications in bounded ranges.
Yeah. I know. Spoken like an engineer.
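For reference, the "log curve times a sensitivity" being discussed is usually written with the simplified Myhre et al. (1998) expression, dF = 5.35 ln(C/C0) W/m^2. A minimal sketch; the formula and coefficient are textbook values, not taken from the thread:

```python
import math

# Standard simplified CO2 forcing expression (Myhre et al. 1998), shown
# here to make the "logarithmic curve times a sensitivity" point concrete.
ALPHA = 5.35   # W/m^2, standard coefficient
C0 = 280.0     # ppm, preindustrial reference concentration

def co2_forcing(ppm):
    """Radiative forcing relative to the preindustrial concentration."""
    return ALPHA * math.log(ppm / C0)

print(f"doubling (560 ppm): {co2_forcing(560):.2f} W/m^2")  # ~3.7 W/m^2
print(f"1 ppm: {co2_forcing(1.0):.1f} W/m^2")  # strongly negative: log blows up near zero
```

The formula makes M. Simon's point visible: the logarithm is undefined at exactly zero and diverges as the concentration approaches it, so the "engineering zero" only works over a bounded range.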
Joel Shore:
“A positive feedback does not necessarily lead to a run-away effect. It has to be sufficiently strong to do so. If it is not, it simply leads to amplification of the warming.”
No. This is just a gross reversal of cause and effect. Try adding the effect of positive feedback to the result of each iteration. This is elsewhere known as “Compound interest.”
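Compounding the feedback on each increment, as suggested, gives a geometric series, and it can be made concrete. A minimal sketch (the feedback fractions and the 1 C no-feedback warming are illustrative, not from the thread): the series stays finite for |f| < 1 and runs away only when f >= 1.

```python
# Iterating a linear feedback "compound interest" style: each pass feeds
# back a fraction f of the previous increment.
def feedback_sum(initial, f, iterations=200):
    total, increment = 0.0, initial
    for _ in range(iterations):
        total += increment
        increment *= f   # the feedback acts on the latest increment only
    return total

print(feedback_sum(1.0, 0.5))         # converges to 2.0 = 1/(1 - 0.5)
print(feedback_sum(1.0, 0.9, 2000))   # converges to 10.0, still finite
print(feedback_sum(1.0, 1.1, 50))     # f >= 1: grows without bound (runaway)
```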
Why is it that the LOD (length of day) forecast of temperatures and fish catches works nicely for the UN’s FAO and not for the UN’s IPCC?
I keep insisting on this issue to call your attention to it because it is a case of UN vs. UN.
The paper is divided into 12 PDFs at:
ftp://ftp.fao.org/docrep/fao/005/y2787e/
Boudu (00:26:46) :
Well, that’s settled then.
My thoughts exactly.
Allan M (14:09:24) :
You’re wasting your time with that one. Warmists don’t appreciate that positive feedback also acts on the signal fed back; I’ve had the argument too many times now.
DaveE.
Allan M,
Quite right. There is a lot of confusion between amplification and positive feedback. Climate “science” is full of it (and you can take that in several ways, most of which are true of those currently involved).
Leif Svalgaard (12:49:19) :
tallbloke (12:12:02) :
Ok, but do you agree that the mid day tropical sea surface sees the full ~1.4W/m^2(PMOD) or ~2.2W/m^2(ACRIM) variation over the solar cycle?
No, there is this little thing called the albedo, which takes a 31% [or so] bite out of the radiation, and the solar cycle variation is more like 1.2 W/m2, so we are talking about a tiny 0.69*1.2 = 0.8 W/m2 variation at midday. Integrated over the day, that falls to 0.3 W/m2.
Or about 1.4 W/m^2 according to the ACRIM/Neptune data. Which isn’t far off the 1.7 W/m^2 the IPCC claims for CO2, which is of course overinflated by their failure to account for positive phases of oceanic cycles 1975-2005.
And there is hardly any cloud over the Pacific warm pool most of the time when it’s in heat-absorbing mode during the top of solar cycles, so that makes it a bigger forcing than just about anything else, on an ~11-year cyclic basis.
But if TSI follows the sunspot number fairly reliably, the increase in sunspot numbers during the 20th century is going to have quite an effect on longer-term climate trends too, especially if ACRIM’s measurements of absolute solar output are nearer the truth than PMOD’s.
What independent means of calibration are there? Hoyt’s survey of measurements on aerosols must span several solar cycles. Can they be used to estimate the difference in solar radiation between solar max and min? How were the satellites’ sensors calibrated?
Bill Illis (11:42:22) :
Joel Shore, the highest CO2 sensitivity one can get from the PaleoClimate record is 1.5C per doubling of CO2.
Let’s look at 450 million years ago….
Bill, calculating CO2 sensitivity based on paleoclimate MILLIONS of years ago is not as straightforward as you seem to think.
The layout of the continents was much different, and the uncertainties in the proxies get bigger the further back you go.
Conditions during the last glacial maximum (CO2 levels, layout of the continents, extent of ice sheets, dust levels, etc) are much better known. Using the LGM, CO2 sensitivities are calculated to be about 3C +/- 1.5C, basically the same sensitivities the models give.
But not a chemical engineer, M. Simon. The concentration of CO2 in the atmosphere never gets to zero no matter how hard you try to make it go there; log(0.0) stays undefined. Also, such graphs should be put on semi-log paper so one doesn’t assume the curve should be treated as reaching zero. Old ChemE trick.
The wide range of climate “sensitivity” to doubling of CO2 will not be narrowed by any study that conflates a purely internal capacitive effect–which is all that is involved in the Earth’s “greenhouse”–with external forcing. Capacitors produce no power on their own; they can only store and discharge energy.
Somebody help me out here.
We hear a lot about absorption spectra, radiative spectra, ad nauseam.
Could it just possibly be that the primary mode of heat transfer from our lonely little CO2 molecule is not radiation?
Since CO2 is such a well-mixed gas, methinks convection may have a tiny bit to say about the result. That is seemingly ignored.
Chris V. (14:51:26) :
Using the LGM, CO2 sensitivities are calculated to be about 3C +/- 1.5C, basically the same sensitivities the models give.
That’s an error range big enough to make both you and Bill right.
And the models don’t give that as an output in any meaningful sense of the word; they close the radiation budget with that figure because everything else is fitted to it.
“Conditions during the last glacial maximum (CO2 levels, layout of the continents, extent of ice sheets, dust levels, etc) are much better known. Using the LGM, CO2 sensitivities are calculated to be about 3C +/- 1.5C, basically the same sensitivities the models give.”
Would not a cold, dry atmosphere necessitate a higher forcing value for CO2?
Chris V. (14:51:26) :
Using the LGM, CO2 sensitivities are calculated to be about 3C +/- 1.5C, basically the same sensitivities the models give.
Those numbers indicate that CO2 was responsible for 1.9C of the 5.0C change in temperatures during the ice ages.
So the majority of the temperature change, 3.1C, is due to other factors.
How do we know that the other factors are not in fact responsible for 4.0C and CO2 was only responsible for 1.0C (or the same 1.5C per doubling).
The climate models are programmed to give 3.0C per doubling. The fact that a climate model result indicates the sensitivity is 3.0C per doubling is hardly an unexpected result. A climate model is not proof when it is programmed to give that result.
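Bill's attribution split can be reproduced with the quoted 3C-per-doubling figure. A minimal sketch, assuming typical ice-core CO2 endpoints of ~180 ppm glacial and ~280 ppm interglacial (values assumed here, not stated in the comment):

```python
import math

# Splitting the ~5 C glacial-interglacial change using the quoted
# 3 C-per-doubling sensitivity. CO2 endpoints are assumed ice-core values.
SENSITIVITY = 3.0          # C per doubling, the quoted LGM estimate
GLACIAL_PPM = 180.0        # assumed glacial CO2 level
INTERGLACIAL_PPM = 280.0   # assumed interglacial CO2 level
TOTAL_CHANGE = 5.0         # C, glacial-interglacial temperature change

co2_share = SENSITIVITY * math.log2(INTERGLACIAL_PPM / GLACIAL_PPM)
other = TOTAL_CHANGE - co2_share
print(f"CO2 share: {co2_share:.1f} C, other factors: {other:.1f} C")  # ~1.9 C vs ~3.1 C
```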
Bill Illis (17:30:19) :
The CO2 sensitivity numbers include the effects of feedbacks.
The climate models are not programmed to give 3C per doubling. In fact, some have higher sensitivities, some lower, but they cluster around 3.
The trick with AGW is to distract onlookers from the fact that the empirical evidence doesn’t support the hypothesis. So get a cute picture of a cat rolling a watermelon and speak only of assumptions based on computer models based on statistics derived from suspect sources….
It’s the shiny things in life that amaze the confused.