Depending on who you talk to, climate sensitivity is either underestimated or overestimated. In this case, a model suggests forcing uncertainty is underestimated. One thing is clear: science does not yet know for certain what the true climate sensitivity to CO2 forcing is.
There is a new paper from Tanaka et al. (download PDF here) that describes how forcing uncertainty may be underestimated. Like the story of Sisyphus, an atmospheric system with negative feedbacks will roll heat back down the hill. With positive feedbacks, it gets easier to heat up the further uphill you go. The question is: which is it?
Insufficient Forcing Uncertainty Underestimates the Risk of High Climate Sensitivity

ABSTRACT
Uncertainty in climate sensitivity is a fundamental problem for projections of the future climate. Equilibrium climate sensitivity is defined as the asymptotic response of global-mean surface air temperature to a doubling of the atmospheric CO2 concentration from the preindustrial level (≈ 280 ppm). In spite of various efforts to estimate its value, climate sensitivity is still not well constrained. Here we show that the probability of high climate sensitivity is higher than previously thought because uncertainty in historical radiative forcing has not been sufficiently considered. The greater the uncertainty that is considered for radiative forcing, the more difficult it is to rule out high climate sensitivity, although low climate sensitivity (< 2°C) remains unlikely. We call for further research on how best to represent forcing uncertainty.
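To make the definition in the abstract concrete: equilibrium sensitivity is the warming per doubling of CO2, so under the standard assumption that warming scales with the number of doublings above the preindustrial ~280 ppm, a quick sketch (the sensitivity value of 3.0 °C here is purely illustrative, not from the paper) looks like this:

```python
import math

def equilibrium_warming(co2_ppm, sensitivity_per_doubling, c0_ppm=280.0):
    """Equilibrium warming (deg C) for a given CO2 level, assuming warming
    is proportional to the number of doublings above preindustrial c0."""
    doublings = math.log(co2_ppm / c0_ppm) / math.log(2.0)
    return sensitivity_per_doubling * doublings

# By construction, a doubling (560 ppm) returns the sensitivity itself:
print(equilibrium_warming(560.0, 3.0))  # 3.0
```

The whole debate in the paper is over the value of `sensitivity_per_doubling`, which this relation takes as given.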
CONCLUDING REMARKS
Our ACC2 inversion approach has indicated that by including more uncertainty in
radiative forcing, the probability of high climate sensitivity becomes higher, although low climate sensitivity (< 2°C) remains very unlikely. Thus in order to quantify the uncertainty in high climate sensitivity, it is of paramount importance to represent forcing uncertainty correctly, neither as restrictive as in the forcing scaling approach (as in previous studies) nor as free as in the missing forcing approach. Estimating the autocorrelation structure of missing forcing is still an issue in the missing forcing approach. We qualitatively demonstrate the importance of forcing uncertainty in estimating climate sensitivity – however, the question is still open as to how to appropriately represent the forcing uncertainty.
h/t and thanks to Leif Svalgaard
[Figure: “Insufficient Forcing Uncertainty Underestimates the Risk of High Climate Sensitivity”]
Typical alarmist BS.
If you write a serious scientific report, please use language that can be understood by intelligent people.
This is written for civil servants, who also write endlessly long sentences with an “empty” message.
I like the picture of the cat though!
Never thought they like to play with melons as I do.
If I have understood the climate models, the radiative forcing of CO2 already builds in a threefold feedback from water vapour – and it is generally admitted that CO2 alone has a low impact, with a sensitivity between 0.5 and 1.5 C for the doubling. So everything depends upon how this water vapour behaves – in particular whether it creates more cloud. Cloud behaviour is the least understood climate science and the greatest uncertainty in the models. Taking the satellite period of records of cloud from 1983, which also coincides with the major global warming signal, cloud thinning of 5% between 1983-2000 and resultant additional short-wave flux to the ocean surface can account for at least 80% of the signal. Clouds thickened by 2% in 2001- since maintained, and ocean heat storage flatlined, along with sea-level rise from about 2004.
Thus, whether natural or anthropogenic, it is cloud changes that have driven the signal. Cloud changes resultant on shifts in atmospheric pressure and winds are the consequences of several ocean-basin cycles – all of which peaked positive (warm) between 1995-2005 and are now turning to their cool cycle.
The fact that these natural cycles can now over-ride the model’s projections (and undoubtedly contributed to the rising signal in the first place) shows clearly that the sensitivity of the models has been over-estimated. These new estimates of uncertainty only point in the opposite direction because there are no revisions of the models used – if the researchers had updated their models to include natural cycles (as a number of modelling centres are now doing) they might have come to an opposite conclusion.
In the UK, at Hadley, for example, revisions are under way – but within what is called the mid-range modelling group (2030), whilst the long-range group (2050 and beyond) still takes no account of cycles and present nice straight lines for the upcoming Copenhagen meeting as well as UK policy makers. The mid-range group expects cooling over the next decade – largely due to the shift in the Atlantic Multidecadal Oscillation – but this does not get out to the British press.
In this week’s Sunday Times, we thus have government agriculturalists recommending that our farmers consider planting olive trees in advance of 4 degrees Celsius by 2050. If planted now, I guarantee they’d be killed by frost as the second hard winter strikes in what may be two or three decades of hard winters.
This is the current level of idiocy operating in relation to the uncertain science – and it is compounded by the scientists themselves colluding with the political agenda in advance of Copenhagen.
Ahhhh, probability. Brings out the gambler in me. Even if you accept more of this modelled nonsense based on WAGs it doesn’t change reality.
You are more likely to get two pair than three of a kind playing poker but our universe is more like a single hand of poker. It’s already been dealt. If the physical laws lead to a three of a kind, then the probabilities that two pair is more likely are meaningless.
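The two-pair-versus-trips frequency claim is easy to check with a quick Monte Carlo sketch (standard 52-card deck, fair deal, no wild cards):

```python
import random
from collections import Counter

def classify(hand):
    """Label a 5-card hand by its rank multiplicities (pair types only)."""
    counts = sorted(Counter(rank for rank, suit in hand).values(), reverse=True)
    if counts[:2] == [3, 1]:
        return "three of a kind"
    if counts[:2] == [2, 2]:
        return "two pair"
    return "other"

deck = [(rank, suit) for rank in range(13) for suit in range(4)]
random.seed(0)
tallies = Counter(classify(random.sample(deck, 5)) for _ in range(200_000))
# Two pair (true odds ~4.75%) turns up roughly twice as often as
# three of a kind (~2.11%) in a fair deal.
print(tallies["two pair"], tallies["three of a kind"])
```

Of course, as the comment says, the simulation tells you nothing about the one hand the universe actually dealt.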
Did you notice that the simulations all cut off at year 2000?
The authors include the 1997 super El Nino temperature spike, but make no attempt to assess “missing forcing” in the relatively flat post 2002 period.
—-
“Figure 2.1 shows that low climate sensitivity is not supported even with the missing forcing approach because the missing forcing goes beyond its 2σ uncertainty range to explain the warming in the late 20th century.”
Not clear from the paper how they determined the “2σ uncertainty range”. A cynical thought on that statement might be .. if we believed that the missing forcing was that large, we would be out of the AGW/GCM business.
One thing is certain. Cats, at times, are known to roll watermelons at the water’s edge. As for the rest… I think ‘disputin’ makes the most sense.
I note that this comment on the Vinland map claims “If the map is genuine it provides evidence of a relatively ice-free Arctic in the Medieval Warm Period that allowed the Vikings to sail unimpeded to North America.”
http://climateresearchnews.com/
The really interesting question is: how do you get a cat to roll a watermelon in a lake?
I think that they should have included a statement as to why they stop at 2000. It could be that they are constrained by the available data, which they could state; otherwise I am sure someone will accuse them of cherry-picking.
Also it is pretty clear that they are using annual values throughout (the 251 degrees of freedom in Fig 1 is one per year, 1750-2000 inclusive). It may be a small point, but diffusion models are sensitive to granularity. The heat uptake calculation involves a second-order differential “d^2Temp/dt^2” term, making it more sensitive to granularity than other terms like the outbound flux calculation, which only has to consider Temp(t). If this is important, it would most likely show up with transient responses like those produced by volcanoes. They could be using monthly values of SST where available and annual values for the forcings, but it does not seem to be stated.
Finally, if anyone can track down the data they used I would be grateful. BTW, I did not think we had individual forcing data for all the gases and aerosols used in their model even for the 20th century, let alone going back to 1750.
Alexander Harvey
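The granularity point above can be illustrated with a toy one-box energy-balance model (nothing to do with ACC2's actual numerics; every parameter value here is made up): forward-Euler integration of dT/dt = (F − λT)/C through a one-year volcanic-style forcing pulse gives a noticeably different transient peak at annual versus monthly steps.

```python
def integrate(dt_years, lam=1.2, heat_cap=8.0, t_end=20.0):
    """Forward-Euler integration of dT/dt = (F - lam*T)/heat_cap with a
    one-year negative forcing pulse; returns the coldest excursion."""
    n = int(t_end / dt_years)
    temp, peak = 0.0, 0.0
    for i in range(n):
        t = i * dt_years
        forcing = -3.0 if 5.0 <= t < 6.0 else 0.0  # volcanic-style pulse, W/m^2
        temp += dt_years * (forcing - lam * temp) / heat_cap
        peak = min(peak, temp)
    return peak

annual = integrate(1.0)
monthly = integrate(1.0 / 12.0)
# The coarse annual step overshoots the cold peak relative to monthly steps.
print(annual, monthly)
```

The effect is small for smooth forcings but shows up exactly where the comment predicts: in sharp transients.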
To take the gambling analogy one step further. A model of 5 card draw poker could be exactly right except for one small error. If deuces are wild then it turns out 3 of a kind is more likely than two pair.
This shows how missing one simple component of the problem can throw off the results completely.
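The deuces-wild claim holds up in a quick Monte Carlo sketch. This is a simplification that ranks hands by multiplicities only, ignoring straights and flushes; those are rare enough that the comparison below is unaffected (rank 0 stands in for the deuce).

```python
import random
from collections import Counter

def classify_wild(hand, wild_rank=0):
    """Best pair-type hand with one rank wild; straights/flushes ignored."""
    wilds = sum(1 for rank, suit in hand if rank == wild_rank)
    counts = sorted(Counter(rank for rank, suit in hand
                            if rank != wild_rank).values(), reverse=True) or [0]
    best = counts[0] + wilds          # wilds always join the biggest group
    if best >= 4:
        return "other"                # quads or five of a kind
    if best == 3:
        if len(counts) > 1 and counts[1] == 2:
            return "other"            # full house outranks trips
        return "three of a kind"
    if counts[:2] == [2, 2]:
        return "two pair"             # only possible with zero wilds
    return "other"

deck = [(rank, suit) for rank in range(13) for suit in range(4)]
random.seed(0)
tallies = Counter(classify_wild(random.sample(deck, 5)) for _ in range(200_000))
# With deuces wild the ordering flips: trips now outnumber two pair,
# because any wild turns a natural pair into trips or better.
print(tallies["three of a kind"], tallies["two pair"])
```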
I hope to get time to read the paper, but so far it appears to be a peer-reviewed statement that the authors don’t know enough to draw a real conclusion.
If there is so much uncertainty, why does the IPCC force all the models to play from the same playbook?
Here is a quote from the selection criteria for a climate model to be included in the IPCC’s 2007 Fourth Assessment Report.
“Criterion 1: Consistency with global projections. They should be consistent with a broad range of global warming projections based on increased concentrations of greenhouse gases. This range is variously cited as 1.4°C to 5.8°C by 2100, or 1.5°C to 4.5°C for a doubling of atmospheric CO2 concentration (otherwise known as the “equilibrium climate sensitivity”). ”
http://www.ipcc-data.org/ddc_scen_selection.html
Normally, when a field of science faces uncertainty, they try hard to limit or nail down that uncertainty through experimental measurements and/or just accepting that the basic data probably indicates the truth.
In climate science, it seems that they’d rather just leave the uncertainty in place and/or rewrite the basic data so that it matches “the selection criteria”.
Leif Svalgaard (03:45:18) :
tallbloke (23:34:55) :
Does Leif Svalgaard agree with the characterisation of solar forcing displayed in the graph shown, and please would he explain what the red solar curve is representing in case I misunderstand it.
The red curve shows some representation of the solar cycle and does not seem too much out of whack. It has the smallest amplitudes of all the forcings, so is in line with what I would expect.
Leif, I know that the TSI from the sun at ~1365.2 to ~1366.6 W/m^2 is divided by four to get the average insolation over the earth’s surface sorted out, but isn’t the tropical area right under the sun going to feel the full effect of the ~1.4 (PMOD) to ~2.2 W/m^2 (Neptune/ACRIM) swing between solar max and solar min, rather than a quarter of it? Plus, of course, the 8 W/m^2 swing induced by earth’s eccentric orbit.
I know the magnitude of the swing doesn’t necessarily relate to the ‘forcing’ calculated for modelling purposes from poorly understood data, but as a physical fact?
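For readers following the divide-by-four point: it is pure geometry. The earth intercepts sunlight on its cross-section (πR²) but the energy is averaged over the full sphere (4πR²), while the instantaneous flux at any spot scales with the cosine of the solar zenith angle, so the subsolar tropics do momentarily see the full TSI. A sketch (the TSI value is a rough cycle-mean used only for illustration):

```python
import math

TSI = 1365.9  # W/m^2, rough value for illustration only

def toa_flux(zenith_deg):
    """Instantaneous top-of-atmosphere flux at a given solar zenith angle."""
    return TSI * max(0.0, math.cos(math.radians(zenith_deg)))

global_mean = TSI / 4.0  # pi R^2 intercepted / 4 pi R^2 averaged
# Full TSI at the subsolar point (zenith 0), zero at the terminator (90),
# versus ~341 W/m^2 as the global average.
print(toa_flux(0.0), toa_flux(90.0), global_mean)
```

So both statements are true at once: the global-mean forcing swing is a quarter of the TSI swing, yet local noon in the tropics sees the whole thing.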
Solar forcing has been underestimated by models, as indicated by Scafetta in his conference at the EPA, where he said that the FAO, a UN agency, using LOD (length of day) to predict climate and future fish catches, openly contradicts the IPCC’s affirmations.
You can download the complete paper (12 PDF documents) from:
ftp://ftp.fao.org/docrep/fao/005/y2787e/
There you will find a completely reasonable temperature forecast.
I have a hard time getting beyond “Insufficient … Uncertainty Underestimates the Risk of … Sensitivity “
Roddy Baird is right. Billions of Dollars spent, and nobody does a simple experiment.
A Pox on all the Houses. ALL the houses.
“We call for further research on how best to represent forcing uncertainty.”
Or:
“Give us more money so that we can keep our cushy academic jobs during the present economic crisis, and continue to learn a little bit more about something which is proving to be more and more insignificant.”
“The greater the uncertainty that is considered for radiative forcing, the more difficult it is to rule out high climate sensitivity, although low climate sensitivity (< 2°C) remains unlikely."
The reason is that the lower bound is zero while the upper end is unbounded.
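That asymmetry falls out of the arithmetic: sensitivity is roughly S = F_2x/λ, with F_2x ≈ 3.7 W/m² of forcing per doubling and λ the net feedback parameter, so a symmetric uncertainty in λ maps to a right-skewed uncertainty in S. A sketch with made-up distribution parameters (not the paper’s):

```python
import random

random.seed(1)
F2X = 3.7  # W/m^2 per CO2 doubling (common approximate value)

# Symmetric (Gaussian) uncertainty in the feedback parameter lambda,
# floored away from zero to keep S = F2X / lambda finite.
samples = sorted(F2X / max(random.gauss(1.2, 0.3), 0.2)
                 for _ in range(100_000))
p5, median, p95 = samples[5_000], samples[50_000], samples[95_000]
# The upper tail (p95 - median) stretches much farther than the lower
# tail (median - p5): high sensitivity is hard to rule out.
print(p5, median, p95)
```

Dividing by an uncertain number near zero is exactly why the distribution is bounded below but fat-tailed above.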
Gary: Very carefully!
.
:o)
.
I tell you one thing is certain, AGW is horse crap (Actually, no. Horse crap is useful). So I say AGW is…a lot of hot political air!
How do you know a politician is telling lies? Their mouth is moving and sounds come out.
How do you know “human induced climate change” is a scam? It is used in political campaigns. Rudd in Australia and Obama in the US. What surprises me is the number of people who support it… until recently of course. Those who “voted” for KRudd747 and Obamassiah… are slow to learn, if at all.
Perhaps the title should be ‘Certain Uncertainty’.
More from the “We have to treat uncertainty as evidence of certainty” school of thought.
Take out the precautionary principle and AGW falls to pieces. It’s all just speculation.
Gary (05:38:06) :
The really interesting question is: how do you get a cat to roll a watermelon in a lake?
Good question. Maybe it’s just a model?
I believe these boys have about got themselves treed.
Louis Hissink (03:45:59) :
The simple answer is that CO2 has nothing to do with temperature.
Exactly. They are simply assuming CO2 “forces” climate. Climate can’t be sensitive to a phantom, so the paper’s discussion about the climate’s sensitivity to CO2 is moot and, frankly, ridiculous.
I dutifully read the linked Tanaka et al. paper. This is my summary impression:
First, take particular models with their working assumptions about forcings intact: (a) a sum of greenhouse forcings with uncertainties; (b) other forcings (like aerosols) parameterized with no uncertainties, and (c) “missing” forcings which comprise that which the models assume they don’t know.
Next, plug in actual data and assume no uncertainties. This is bad because the result is a low climate sensitivity. Because the models are really designed to have a high sensitivity, probabilistic analysis based on the models reports that this low sensitivity is improbable–imagine my surprise!! Therefore, because the working assumptions of the models can’t be wrong, and there is no obvious missing component, there must be more … uncertainty.
Last, we discover by formalizing this uncertainty in an inversion approach, there is the two-fold benefit of (a) saving appearances such that even when existing available data fail to confirm the preferred higher sensitivity, the result still falls within the new uncertainty range and (b) it also adds a really scary higher-than-we-thought upper range even though there is little or no indication that it’s actually happening now.
Shorter version: It is nature’s fault that the models suck and it really is worse than we thought.