Updated climate sensitivity estimates using aerosol-adjusted forcings and various ocean heat uptake estimates

Guest essay by Nic Lewis

The Otto et al. paper has received a great deal of attention in recent days. While the paper’s estimate of transient climate response was low, the equilibrium/effective climate sensitivity figure was actually slightly higher than that in some other recent studies based on instrumental observations. Here, Nic Lewis notes that this is largely due to the paper’s use of the Domingues et al. upper ocean (0–700 m) dataset, which assesses recent ocean warming to be faster than other studies in the field. He examines the effects of updating the Otto et al. results from 2009 to 2012 using different upper ocean (0–700 m) datasets, with surprising results.

Last December I published an article here entitled ‘Why doesn’t the AR5 SOD’s climate sensitivity range reflect its new aerosol estimates?‘ (Lewis, 2012). In it I used a heat-balance (energy-budget) approach based on changes in mean global temperature, forcing and Earth system heat uptake (ΔT, ΔF and ΔQ) between 1871–80 and 2002–11. I used the RCP 4.5 radiative forcings dataset (Meinshausen et al, 2011), which is available in .xls format here, conformed it with solar forcing and volcanic observations post 2006 and adjusted its aerosol forcing to reflect purely satellite-observation-based estimates of recent aerosol forcing.

I estimated equilibrium climate sensitivity (ECS) at 1.6°C, with a 5–95% uncertainty range of 1.0–2.8°C. I did not state any estimate for transient climate response (TCR), which is based on the change in temperature over a 70-year period of linearly increasing forcing and takes no account of heat uptake. However, a TCR estimate was implicit in the data I gave, if one assumes that the evolution of forcing over the long period involved approximates a 70-year ramp. This is reasonable, since net forcing has grown substantially faster from the mid-twentieth century on than previously. On that basis, my best estimate for TCR was 1.3°C. Repeating the calculations in Appendix 3 of my original article without the heat uptake term gives a 5–95% range for TCR of 0.9–2.0°C.

The ECS and TCR estimates are based on the formulae:

(1) ECS = F ΔT / (ΔF − ΔQ) and (2) TCR = F ΔT / ΔF

where F is the radiative forcing corresponding to a doubling of atmospheric CO2 concentrations.
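As a concrete illustration of equations (1) and (2), here is a minimal sketch in Python. The input values are purely illustrative placeholders, not the actual data used in either study:

```python
# Energy-budget estimates of climate sensitivity from observed changes.
# All input values below are illustrative placeholders, not the actual
# Lewis (2012) or Otto et al. (2013) data.

F_2x = 3.44   # W/m2, forcing for doubled CO2 (the CMIP5-based figure cited below)
dT = 0.75     # K, change in global mean temperature between the two periods
dF = 1.95     # W/m2, change in radiative forcing
dQ = 0.65     # W/m2, change in Earth system heat uptake

ECS = F_2x * dT / (dF - dQ)   # equation (1): equilibrium/effective sensitivity
TCR = F_2x * dT / dF          # equation (2): transient response, no heat-uptake term

print(round(ECS, 2), round(TCR, 2))
```

Note that because ΔQ is subtracted only in the denominator of equation (1), ECS always exceeds TCR, and a higher heat-uptake estimate pushes the ECS estimate up while leaving TCR untouched.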

A short while ago I drew attention, here, to an energy-budget climate study, Otto et al. (2013), that has just been published in Nature Geoscience, here. Its author list includes fourteen lead/coordinating lead authors of relevant AR5 WG1 chapters, and myself. That study uses the same equations (1) and (2) as above to estimate ECS and TCR. It uses a CMIP5-RCP4.5 multimodel mean of forcings as estimated by general circulation models (GCMs) (Forster et al, 2013), likewise adjusting the aerosol forcing to reflect recent satellite-observation-based estimates – see Supplementary Information (SI) Section S1. Although the CMIP5 forcing estimates embody a lower figure for F (3.44 W/m2) than do those per the RCP4.5 database (F: 3.71 W/m2), TCR estimates from the two different sets of forcing estimates are almost identical, whilst ECS estimates are marginally higher using the CMIP5 forcing estimates[i].

Although the Otto et al. (2013) Nature Geoscience study illustrates estimates based on changes in global mean temperature, forcing and heat uptake between 1860–79 and various recent periods, it states that the estimates based on changes to the decade 2000–09 are arguably the most reliable, since that decade has the strongest forcing and is little affected by the eruption of Mount Pinatubo. Its TCR best estimate and 5–95% range based on changes to 2000–09 are identical to what is implicit in my December study: 1.3°C (uncertainty range 0.9–2.0°C).

While the Otto et al. (2013) TCR best estimate is identical to that implicit in my December study, its ECS best estimate and 5–95% range based on changes between 1860–79 and 2000–09 is 2.0°C (1.2–3.9°C), somewhat higher than the 1.6°C (1.0–2.9°C) per my study, which was based on changes between 1871–80 and 2002–11. About 0.1°C of the difference is probably accounted for by roundings and the difference in F values due to the different forcing bases. But, given the identical TCR estimates, differences in the heat-uptake estimates used must account for most of the remaining 0.3°C difference between the two ECS estimates.

Both my study and Otto et al. (2013) used the pentadal estimates of 0–2000-m deep-layer ocean heat content (OHC) updated from Levitus et al. (2012), and made allowances in line with the recent studies for heat uptake in the deeper ocean and elsewhere. The two studies’ heat uptake estimates differed mainly due to the treatment of the 0–700-m layer of the ocean. I used the estimate included in the Levitus 0–2000-m pentadal data, whereas Otto et al. (2013) subtracted the Levitus 0–700-m pentadal estimates from that data and then added 3-year running mean estimates of 0–700-m OHC updated from Domingues et al (2008).
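The substitution just described amounts to simple arithmetic on the layer estimates. A hedged sketch (the W/m2 figures below are invented for illustration, not the studies' actual values):

```python
# Reconstruct a 0-2000 m heat uptake estimate the way Otto et al. (2013) did:
# take the Levitus 0-2000 m figure, remove its 0-700 m component, and
# substitute the Domingues-based 0-700 m estimate. All values illustrative.

levitus_0_2000 = 0.39    # W/m2, pentadal 0-2000 m estimate (illustrative)
levitus_0_700 = 0.27     # W/m2, its 0-700 m component (illustrative)
domingues_0_700 = 0.33   # W/m2, 3-yr running mean 0-700 m estimate (illustrative)

otto_style_0_2000 = levitus_0_2000 - levitus_0_700 + domingues_0_700
print(round(otto_style_0_2000, 2))
```

Because the Domingues dataset assesses faster 0–700 m warming, this substitution raises the total heat-uptake estimate, which via equation (1) is what pushes the Otto et al. (2013) ECS figure above mine.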

Since 2000–09, the most recent decade used in Otto et al. (2013), ended more than three years ago, I will instead investigate the effect of differing heat uptake estimates using data for the decade 2003–12 rather than for 2000–09. Doing so has two advantages. First, forcing was stronger during the 2003–12 decade, so a better constrained estimate should be obtained. Secondly, by basing the 0–700-m OHC change on the difference between the 3-year means for 2003–05 and for 2010–12, the influence of the period of switchover to Argo – with its higher error uncertainties – is reduced.

In this study, I will present results using four alternative estimates of total Earth system heat uptake over the most recent decade. Three of the estimates adopt exactly the same approach as in Otto et al. (2013), updating estimates appropriately, and differ only in the source of data used for the 3-year running mean 0–700-m OHC. In one case, I calculate it from the updated Levitus annual data, available from NOAA/NODC here. In the second case I calculate it from updated Lyman et al. (2010) data, available here. In the third case I use the updated Domingues et al. (2008) data archived at the CSIRO Sea Level Rise page in relation to Church et al. (2011), here. Since that data only extends to the mean for 2008–10, I have extended it for two years at a conservative (high) rate of 0.33 W/m2 – which over that period is nearly double the rate of increase per the Levitus dataset, and nearly treble that per the Lyman dataset. The final estimate uses total system heat uptake estimates from Loeb et al. 2012 and Stephens et al. 2012. Those studies melded satellite-based estimates of top-of-atmosphere radiative imbalance with ocean heat content estimates, primarily updated from the Lyman et al. (2010) study. The Loeb 2012 and Stephens 2012 studies estimated average total Earth system heat uptake/radiative imbalance at respectively 0.5 W/m2 over 2000–10 and 0.6 W/m2 over 2005–10. I take the mean of these two figures as applying throughout the 2003–12 period.
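For readers wanting to reproduce this kind of figure: OHC datasets are published in joules, and converting a decadal OHC change into a mean heat-uptake rate in W/m2 only requires dividing by the elapsed time and the Earth's surface area. A sketch with an invented OHC change:

```python
# Convert an ocean heat content change (J) over a decade into a mean
# heat uptake rate in W/m2 of the Earth's surface. The OHC change is
# an invented illustrative figure, not taken from any dataset above.

SECONDS_PER_YEAR = 3.156e7   # approximate
EARTH_AREA = 5.10e14         # m2, total surface area of the Earth

d_ohc_joules = 8.0e22        # illustrative OHC increase over the decade
years = 10.0

uptake_w_m2 = d_ohc_joules / (years * SECONDS_PER_YEAR * EARTH_AREA)
print(round(uptake_w_m2, 2))
```

An 8 × 10²² J rise over a decade thus corresponds to roughly 0.5 W/m2, the same order as the Loeb/Stephens figures quoted above.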

I use the same adjusted CMIP5-RCP4.5 forcings dataset as used in the Otto et al. (2013) study, updating them from 2000–09 to 2003–12, to achieve consistency with that study (data kindly supplied by Piers Forster). Likewise, the uncertainty estimates I use are derived on the same basis as those in Otto et al. (2013).

I am also retaining the 1860–79 base reference period used in Otto et al. (2013). That study followed my December study in deducting 50% of the 0.16 W/m2 estimate of ocean heat uptake (OHU) in the second half of the nineteenth century per Gregory et al. (2002), the best-known of the earlier energy budget studies. The 0.16 W/m2 estimate – half natural, half anthropogenic – seemed reasonable to me, given the low volcanic activity between 1820 and 1880. However, I deducted only 50% of it to compensate for my Levitus 2012-derived estimate of 0–2000-m ocean heat uptake being somewhat lower than that per some other estimates. Although the main reason for making the 50% reduction in the Gregory (2002) OHU estimate for 1861–1900 disappears when considering 0–700-m ocean heat uptake datasets with significantly higher trends than per Levitus 2012, in the present calculations I nevertheless apply the 50% reduction in all cases.

Table 1, below, shows comparisons of ECS and TCR estimates using data for the periods 2000–09 (Otto et al., 2013), 2002–11 (Lewis, 2012 – my December study) and 2003–12 (this study) using the relevant forcings and 0–700 m OHC datasets.

NicLewis_table1

Table 1: ECS and TCR estimates based on last decade and 0.08 W/m2 ocean heat uptake in 1860–79.

Whichever periods and forcings dataset are used, the best estimate of TCR remains 1.3°C. The 5–95% uncertainty range narrows marginally when using changes to 2003–12, giving slightly higher forcing increases, rather than to 2000–09 or 2002–11, rounding to 0.9–1.95°C. The ‘likely’ range (17–83%) is 1.05–1.65°C. (These figures are all rounded to the nearest 0.05°C.) The TCR estimate is unaffected by the choice of OHC dataset.

The ECS estimates using data for 2003–12 reveal the significant effect of using different heat uptake estimates. Lower system heat uptake estimates and the higher forcing estimates resulting from the 3-year roll-forward of the period used both contribute to the ECS estimates being lower than the Otto et al. (2013) ECS estimate, the first factor being the more important.

Although stating that estimates based on 2000–09 are arguably most reliable, Otto et al. (2013) also gives estimates based on changes to 1970–79, 1980–89, 1990–99 and 1970–2009. Forcings during the first two of those periods are too low to provide reasonably well-constrained estimates of ECS or TCR, and estimates based on 1990–99 may be unreliable since this period was affected both by the eruption of Mount Pinatubo and by the exceptionally large 1997–98 El Niño. However, the 1970–2009 period, although having a considerably lower mean forcing than 2000–09 and being more impacted by volcanic activity, should – being much longer – be less affected by internal variability than any single decade. I have therefore repeated the exercise carried out in relation to the final decade, in order to obtain estimates based on the long period 1973–2012.

Table 2, below, shows comparisons of ECS and TCR estimates using data for the periods 1970–2009 (Otto et al., 2013) and 1973–2012 (this study) using the relevant forcings and 0–700-m OHC datasets. The estimates of system heat uptake from two of the sources used for 2003–12 do not cover the longer period. I have replaced them by an estimate based on data, here, updated from Ishii and Kimoto (2009). Using 2003–12 data, the Ishii and Kimoto dataset gives an almost identical ECS best estimate and uncertainty range to the Lyman 2010 dataset, so no separate estimate for it is shown for that period. Accordingly, there are only three ECS estimates given for 1973–2012. Again, the TCR estimates are unaffected by the choice of system heat uptake estimate.

Nic_Lewis_table2

Table 2: ECS and TCR estimates based on last four decades and 0.08 W/m2 ocean heat uptake in 1860–79.

The first thing to note is that the TCR best estimate is almost unchanged from that per Otto et al. (2013): just marginally lower at 1.35°C. That is very close to the TCR best estimate based on data for 2003–12. The 5–95% uncertainty range for TCR is slightly narrower when using data for 1973–2012 rather than 1970–2009, due to the higher mean forcing.

Table 2 shows that ECS estimates over this longer period vary considerably less between the different OHC datasets (two of which do not cover this period) than do estimates using data for 2003–12. As in Table 1, all the 1973–2012 based ECS estimates come in below the Otto et al. (2013) one, both as to best estimate and 95% bound. Giving all three estimates equal weight, a best estimate for ECS of 1.75°C looks reasonable, which compares to 1.9°C per Otto et al. (2013). On a judgemental basis, a 5–95% uncertainty range of 0.9–4.0°C looks sufficiently wide, and represents a reduction of 1.0°C in the 95% bound from that per Otto et al. (2013).

If one applied a similar approach to the four, arguably more reliable, ECS estimates from the 2003–12 data, the overall best estimate would come out at 1.65°C, considerably below the 2.0°C per Otto et al. (2013). The 5–95% uncertainty range calculated from the unweighted average of the PDFs for the four estimates is 1.0–3.1°C, and the 17–83%, ‘likely’, range is 1.3–2.3°C. The corresponding ranges for the Otto et al. (2013) study are 1.2–3.9°C and 1.5–2.8°C. The important 95% bound on ECS is therefore reduced by getting on for 1°C.
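The percentile ranges quoted throughout can be approximated by a toy Monte Carlo: sample ΔT, ΔF and ΔQ from assumed Gaussian distributions and take percentiles of the resulting ECS values. The sketch below is not the actual procedure or input uncertainties of either study; all distribution parameters are invented for illustration:

```python
# Toy Monte Carlo for an ECS uncertainty range. Gaussian input
# uncertainties below are illustrative, not the studies' actual values.
import random

random.seed(0)
F_2x = 3.44  # W/m2, forcing for doubled CO2

def ecs_sample():
    dT = random.gauss(0.75, 0.08)   # K
    dF = random.gauss(1.95, 0.30)   # W/m2
    dQ = random.gauss(0.65, 0.15)   # W/m2
    return F_2x * dT / (dF - dQ)    # equation (1)

samples = sorted(ecs_sample() for _ in range(100000))
lo, med, hi = (samples[int(p * len(samples))] for p in (0.05, 0.5, 0.95))
print(round(lo, 1), round(med, 1), round(hi, 1))
```

One feature this makes visible is the skew: because ΔF − ΔQ sits in the denominator, the resulting ECS distribution has a long upper tail, which is why the 95% bound moves around far more between studies than the median does.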

References

Church, J. A. et al. (2011): Revisiting the Earth’s sea-level and energy budgets from 1961 to 2008. Geophysical Research Letters 38, L18601, doi:10.1029/2011gl048794.

Domingues, C. M. et al. (2008): Improved estimates of upper-ocean warming and multi-decadal sea-level rise. Nature, 453, 1090–1093, doi:10.1038/nature07080.

Forster, P. M., T. Andrews, P. Good, J. M. Gregory, L. S. Jackson, and M. Zelinka (2013): Evaluating adjusted forcing and model spread for historical and future scenarios in the CMIP5 generation of climate models, J. Geophys. Res. Atmos., 118, doi:10.1002/jgrd.50174

Gregory, J. M. et al. (2002): An observationally based estimate of the climate sensitivity. J. Climate, 15, 3117–3121.

Ishii, M. and M. Kimoto (2009): Reevaluation of historical ocean heat content variations with time-varying XBT and MBT depth bias corrections. J. Oceanogr., 65, 287–299.

Levitus, S. et al. (2012): World ocean heat content and thermosteric sea level change (0–2000 m), 1955–2010. Geophysical Research Letters, 39, L10603, doi:10.1029/2012gl051106.

Loeb, NG et al. (2012): Observed changes in top-of-the-atmosphere radiation and upper-ocean heating consistent within uncertainty. Nature Geoscience, 5, 110-113.

Lyman, JM et al. (2010): Robust warming of the global upper ocean. Nature, 465, 334–337. http://www.nature.com/nature/journal/v465/n7296/full/nature09043.html

Meinshausen, M., S. Smith et al. (2011): The RCP greenhouse gas concentrations and their extension from 1765 to 2500. Climatic Change, Special RCP Issue.

Otto, A. et al. (2013): Energy budget constraints on climate response. Nature Geoscience, doi:10.1038/ngeo1836

Stephens, GL et al (2012): An update on Earth’s energy balance in light of the latest global observations. Nature Geoscience, 5, 691-696


[i] Total forcing after adjusting the aerosol forcing to match observational estimates is not far short of total long-lived greenhouse gas (GHG) forcing. Therefore, differing estimates of GHG forcing – assuming that they differ broadly proportionately between the main GHGs – change both the numerator and denominator in equation (2) by roughly the same proportion. Accordingly, differing GHG forcing estimates do not matter very much when estimating TCR, provided that the corresponding F is used to calculate the ECS and TCR estimates, as was the case for both my December study and Otto et al. (2013). ECS estimates will be more sensitive than TCR estimates to differences in F values, since the unvarying deduction for heat uptake means that the (ΔF − ΔQ) factor in equation (1) will be affected proportionately more than the F factor. All other things being equal, the lower CMIP5 F value will lead to ECS estimates based on CMIP5 multimodel mean forcings being nearly 5% higher than those based on RCP4.5 forcings.
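The footnote's claim can be checked numerically: scale a set of forcings from the RCP4.5 basis (F = 3.71 W/m2) down to the CMIP5 basis (F = 3.44 W/m2) and compare the resulting estimates. The ΔT, ΔF and ΔQ values below are illustrative placeholders:

```python
# Effect of rescaling all forcings from the RCP4.5 basis to the CMIP5
# basis: TCR (eq. 2) is unchanged, ECS (eq. 1) rises by a few percent.
# dT, dF_rcp and dQ are illustrative placeholders.

dT, dQ = 0.75, 0.65
F_rcp, dF_rcp = 3.71, 2.00

scale = 3.44 / 3.71                   # proportional rescaling of all forcings
F_cmip, dF_cmip = F_rcp * scale, dF_rcp * scale

tcr_rcp = F_rcp * dT / dF_rcp
tcr_cmip = F_cmip * dT / dF_cmip      # identical: the scale factor cancels

ecs_rcp = F_rcp * dT / (dF_rcp - dQ)
ecs_cmip = F_cmip * dT / (dF_cmip - dQ)  # dQ does not rescale, so ECS shifts

print(round(tcr_cmip - tcr_rcp, 10), round(ecs_cmip / ecs_rcp - 1, 3))
```

With these placeholder inputs the ECS shift comes out at roughly 4%, in line with the "nearly 5%" stated above; the exact figure depends on the relative sizes of ΔF and ΔQ.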

Jim Cripwell

I am sorry, but all these estimates of climate sensitivity are like discussing how many angels can dance on the head of a pin. People have been discussing estimates of climate sensitivity for something like 40 years. In that time, so far as I can make out, little, if any, progress has been made. Until we know the magnitudes and time constants of all naturally occurring events that cause a change in global temperatures, so that we might have a hope of actually measuring what the numeric value is, all these studies are just a waste of time and money. All people are actually doing is just taking another guess. My best guess is that the climate sensitivity of CO2 is indistinguishable from zero.

bobl

I am struggling with a not so related issue that came to me just yesterday. The theory has it that N2 and O2 lack vibrational modes in the infrared, making them incapable of reradiating heat. To me this implies that all IR radiation to space from the atmosphere must be from a greenhouse gas? So if the concentration of greenhouse gases increases then the number of photons released to space must necessarily increase, given that the non-radiating gases transfer their energy by collisions.
Surely this has to increase losses to space overall.
What am I missing?

Stephen Wilde

“To me this implies that all IR radiation to space from the atmosphere must be from a greenhouse gas? So if the concentration of greenhouse gases increases then the number of photons released to space must necessarily increase”
Quite.
GHGs provide an additional radiative window to space that is not provided by a non GHG atmosphere.
Still doesn’t necessarily result in any net thermal effect though once negative system responses are taken into account.
I think that whether the net effect of GHGs is potential warming or potential cooling the air circulation adjusts to negate it.
So an effect of zero or near zero overall, but with a minuscule shift in air circulation.

Your updated result for ECS is the most honest, because it is based on the latest OHC data (Levitus 2012) and the latest temperature data (1973–2012). The result shows that ECS = 1.7 (+1.0/−0.4) degrees C. The UN goal of limiting climate change to 2 degrees C has apparently been met – congratulations!
Can we now drop the ‘C” from CAGW ?

bobl

Jim, I have been making this point for a while. The climate feedbacks are not scalar; they are complex, each has a time dimension, a lag, and they are all different, ranging between milliseconds and decades. Feedbacks cannot be added without accounting for the time (phase) component. The lack of accounting for time means that transient sensitivity can vary wildly from moment to moment depending on the speed and direction of all the feedback effects on multiple timescales.
Achieving a net gain of 3 in the climate therefore requires a completely implausible loop gain of about 0.95.
In support of your point, sensitivity can only be evaluated by modelling each and every feedback effect, including the lags and amplitudes of each effect. In many cases the feedback amplitudes or phases are dependent on the system itself (consider tropical storm non-linear behaviour)! Sensitivity cannot be a simple number, it is a chaotically varying complex number in both time and space, it is to all intents and purposes unknowable.
Climate science attempts to model this as a simple scalar average, without even knowing if the combination of all the feedbacks represents a stationary function; that is, they don’t even know if the mean of the sensitivity is a constant.

John Peter

Jim Cripwell may well be right in stating “I am sorry, but all these estimates of climate sensitivity are like discussing how many angels can dance on the head of a pin.” but such studies done studiously are still important and should be welcomed as the effect may be (as an intermediate stage) to reduce the “consensus” estimate of climate sensitivity from the IPCC median of 3C. A generally accepted ECS of 1.65-1.75C is much to be preferred to 3C and could have enormous consequences for policy decisions. It would mean that a doubling of CO2 would not mean a 2C (or higher) increase in global temperatures and would minimise the concept of the impending “tipping point”. We are moving slowly towards the Lindzen & Co view.

bobl

Stephen, no, it must have an effect, however minuscule; some change needs to drive the air current change. You can have a negative feedback, but there must be a net change to drive the effects.
Nevertheless, more photons to space surely implies cooling rather than warming

Patrick

Human driven climate change alarmists are just a bunch of aerosols imo.

Greg Goodman

All this work on narrowing the range of confidence values is impressive and valuable. That some major IPCC figures seem to be coming along with the process is very encouraging.
However, the whole idea of a simple linear model (and indeed of the much more complex GCMs) seems founded on the idea that the climate system has a linear response to radiative forcing.
The two major volcanic events of the late 20th c. give one of the few discernible features other than the long slow (accelerating) rise.
However, I find it very hard to find evidence in the climate data of the strong negative forcing of these events.
I saw nothing obvious in TLT nor in tropical SST, but I was assured it was visible in land records. So I had a detailed look at CRUTEM4 for the northern hemisphere.
http://climategrog.wordpress.com/?attachment_id=270
Now maybe I’m just not looking in the right place, but there seems to be a problem here. There is no cooling effect to be seen. In fact there are good indications of a short-term warming. There is no indication of the marked, permanent negative offset that a linear response would produce to such a negative forcing.
Now if the response to volcanic forcing is not materialising in the climate record, then the linear model is fundamentally inadequate, and hence current GCMs as well.
If I am overlooking something obvious, looking at the wrong dataset or misinterpreting what to expect, hopefully Nic or someone can point out where.
thanks.

Patrick

“Greg Goodman says:
May 24, 2013 at 7:43 am”
The simple, and correct answer is, no-one actually knows. Once “we” accept that, we can move on!

Jim Cripwell

John Peter, you write “but such studies done studiously are still important”
To a limited extent I agree. My point is that with our current knowledge of the physics of our atmosphere, no-one has the slightest idea of what happens to global temperatures as we add more CO2 to the atmosphere from current levels. Just about the only things we know about the climate sensitivity of CO2 are that it is probably positive, and that it has a maximum value. If these studies were framed in terms of estimating the MAXIMUM value of climate sensitivity, I would not object. But I do object to claims that these estimates are in some sort of way associated with what the real number is.

Phil.

bobl says:
May 24, 2013 at 6:47 am
I am struggling with a not so related issue that came to me just yesterday. The theory has it that N2 and O2 lack vibrational modes in the infrared, making them incapable of reradiating heat. To me this implies that all IR radiation to space from the atmosphere must be from a greenhouse gas?

Correct.
So if the concentration of greenhouse gases increases then the number of photons released to space must necessarily increase, given that the non-radiating gases transfer their energy by collisions.
No, because the atmosphere is optically thick at the GHG wavelengths, i.e. lower in the atmosphere it absorbs more than it emits. Emission to space only occurs above a certain height, and therefore at a certain temperature; as the concentration increases, that height increases and the temperature decreases, and hence emission to space goes down.

Bill Illis

The NODC has updated the Ocean Heat Content numbers for the first quarter of 2013.
Big jump in the OHC numbers in the first quarter of 2013 (and some restating of the older numbers again).
0-2000 metre uptake equates to 0.49 W/m2 in the Argo era.
http://s13.postimg.org/u6al0f6xj/OHC_700_and_2000_M_Q1_2013.png
http://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT/basin_data.html
Equivalent to an average temperature increase of 0.073C in the 0-2000 metre ocean since 1977, 0.135C in the 0-700 metre ocean and 0.222C in the 0-100 metre ocean.
http://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT/basin_avt_data.html

Henry Clark

The climate sensitivity estimate in this article is better than highly overstated ones. Still, though:
1) Properly accounting for GCRs+TSI in solar-related change makes such contribute several times more to past warming than solar irradiance change alone, even aside from an ACRIM versus PMOD model matter on solar irradiance history. Almost whenever cosmic rays are not explicitly mentioned, one can assume someone is implicitly ignoring them entirely and treating them as zero effect, which is highly inaccurate. As Dr. Shaviv has noted:
“Using historic variations in climate and the cosmic ray flux, one can actually quantify empirically the relation between cosmic ray flux variations and global temperature change, and estimate the solar contribution to the 20th century warming. This contribution comes out to be 0.5 +/- 0.2 C out of the observed 0.6 +/- 0.2 C global warming (Shaviv, 2005).”*
That leaves roughly on the order of 0.1 degrees Celsius over the past century for net warming from anthropogenic effects / independent components of the longest types of ocean cycles (with likely a large portion of the apparent 60-year ocean cycle being rather sun & GCR generated as looking at appropriate plots suggests) / etc.
Especially considering logarithmic scaling and diminishing returns, human emissions over this century are not likely to contribute more than tenths of a degree warming if even that, even aside from how a near-future solar Grand Minimum starting another LIA by the mid 21st century looks likely. (A mixture of both cooling and warming effects, influence on water vapor, and other complexities apply).
* General discussion:
http://www.sciencebits.com/CO2orSolar
Related paper:
http://www.phys.huji.ac.il/~shaviv/articles/sensitivity.pdf
Some illustrations I made a while back:
http://s7.postimg.org/69qd0llcr/%20intermediate.gif
NOAA humidity data for decades past got drastically changed already since I started posting the above several months ago, but still it provides a number of illustrations.
2a) Considering how many problems there have been with activist-reported (Hansen, CRU, etc.) surface temperature measurements – despite such being relatively more readily independently verified than 0–700 m ocean heat content – and considering that the latter involves mere hundredths of a degree of change anyway (there is quite a reason that such ocean temperature change over hundreds of metres of depth tends to be reported in joules rather than degrees Celsius or kelvin), OHC has uncertainties, to say the least.
2b) Then there are questions on aerosol data…

thingodonta

One thing we do know is that the human response to climate sensitivity is very high. The positive feedbacks are much stronger than what is going on in the atmosphere.

“I am sorry, but all these estimates of climate sensitivity are like discussing how many angels can dance on the head of a pin. People have been discussing estimates of climate sensitivity for something like 40 years. In that time, so far as I can make out, little, if any, progress has been made. ”
This is factually wrong. The first estimates of sensitivity were made over 100 years ago.
Since then the estimate has followed a downward trajectory; from the first report to the fourth, the central value has crept downward. Nic’s work adds to that body of knowledge.
Let me put the importance of this metric into perspective: every degree of C in uncertainty is worth about 1 trillion dollars a year if you are planning to mitigate.
Jim. I suggest you read some of the history of climate science and read some actual papers and work with some actual data.

Nic,
Nice work. I think it might be instructive for WUWT readers to understand how Anthony’s claims about microsite bias would play into your calculations. For example, if one assumed that the land warming was biased by 0.1C per decade from 1979 to the present, what would that do to the sensitivity calculation? The purpose of course is to show people how they can locate and communicate their doubts WITHIN the framework and language of their opponents.
As you have shown it is much more effective to question the science from the inside rather
than attack the character and motivations of people from the outside. You’ve shown that there IS A DEBATE and you’ve shown people how to join that debate. You’ve shown that the consensus is broader and more uncertain than people think, not by questioning the existence of the consensus but by working with others to demonstrate that some of the core beliefs ( how much will it warm) admit many answers.

Rud Istvan

Nic Lewis, thanks for this post. WE posted a different way to derive basically the same answer. Good to see so many data sets and methods converging on something just over half of the AR4 number. It will be very interesting, and important, to see where AR5 comes out given the Otto co-authors. Either the C gets removed from CAGW, or the process is plainly shown to be utterly corrupted.

Stephen Wilde

“bobl says:
May 24, 2013 at 7:23 am
Stephen, no, must have an effect however miniscule, some change needs to drive the air current change, you can have a negative feedback, but there must be a net change to drive the effects. ”
The effect would be a change in atmospheric heights and in the slope of the lapse rate, which is then compensated for by circulation changes.
Assuming there is a net thermal effect from GHGs in the first place. Some say net warming, others say net cooling.
Doesn’t matter either way. The system negates it by altering the thermal structure and circulation of the atmosphere.
I can’t actually prove that with current data so will just have to wait and see but it seems clear to me from current and past real world observations of climate behaviour.

HR

Jim Cripwell
In order to understand science you need a healthy dose of caution. The limits of our data and understanding mean we must pepper our conclusions with appropriate caveats and/or uncertainty ranges. You seem to completely misunderstand this and instead favour the idea of perfection or nothing. The unfortunate truth is that most of the time science is about being less wrong rather than about being right; you need to moderate your skepticism appropriately.

Gary Pearse

Official ECS estimates since Climategate seem to be roughly following a decay function:
N(t) = N(0)e^(−λt)
N(0) is the initial ECS ~ 3.0 and N(present) ~ 2 after t (to present) = 10 yrs,
which makes λ ≈ 0.04.
To get to an official ECS ~ 1 will take ln(3) = 0.04t; t ≈ 27 yrs.
Hmm, if time elapsed since consensus ECS ~ 3 has been just 5 years, then we would have 13 years to wait for consensus ECS ~ 1. This assumes stalwart resistance and represents an outer limit. Lambda is probably not a constant here – I would go for half the 13 years, ~6 years.
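For what it’s worth, the decay arithmetic in this comment checks out. A quick sketch (taking the commenter’s two data points at face value):

```python
# Fit N(t) = N0 * exp(-lam * t) through ECS ~ 3 at t = 0 and ECS ~ 2 at
# t = 10 years, then ask when N(t) would reach 1. Purely a check of the
# commenter's arithmetic, not a claim about actual ECS trajectories.
import math

N0, N10 = 3.0, 2.0
lam = math.log(N0 / N10) / 10.0       # decay constant, ~0.04 per year
t_to_one = math.log(N0 / 1.0) / lam   # total time for N(t) to fall to 1

print(round(lam, 3), round(t_to_one, 1))
```

This gives λ ≈ 0.041 per year and about 27 years in total, matching the figures above.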
My take: ECS finally turns out to be vanishingly small (i.e. there is a governor on climate responses à la Willis Eschenbach), then TCR is larger than ECS and within a few years it declines to the minor ECS figure and natural variability is basically all that is left. How’s that for a model!

bw

Mechanisms controlling atmospheric CO2 follow both geological and biological processes. Each pathway operates with different time constants and amplitudes over any time scale. The atmosphere has evolved in composition due to biology. Physicists can’t understand how the atmosphere behaves because they don’t include biology. Except for Argon, the atmosphere is completely biological in origin. Biology also alters surface albedo.
All evidence points to those supporting the “essentially zero” climate sensitivity on a planetary scale. The satellite data support zero temperature increase since 1980. Quality surface thermometers also show zero warming, eg the Antarctic science stations, Amundsen-Scott, Vostok, Halley and Davis.
CO2 follows biology, biology follows temperature.

richcar1225

I have trouble reconciling the reality of surface radiation measurements with climate sensitivity calculations based on TOA calculations. BSRN measurements indicate that since 1992 short-wave radiation has increased by 3 W/m² per decade, likely due to global brightening (fewer clouds), while long-wave radiation (including GHG back radiation) has increased by 2 W/m² per decade.
Considering that SW (visible light) is much more easily absorbed by the oceans than thermal long-wave radiation, it would seem that the 0.4 to 0.6 W/m² of ocean flux could be attributed mostly to the short-wave contribution, or simply to changes in cloud cover. AGW proponents will claim that the global brightening is a positive feedback, of course. How much of the 2 W/m² per decade increase in long-wave surface radiation is due to the ocean releasing heat versus GHG back radiation?

Greg Goodman

bw says: All evidence points to those supporting the “essentially zero” climate sensitivity on a planetary scale.
Don’t like terms like “all evidence”, but here is some evidence.
http://climategrog.wordpress.com/?attachment_id=271
http://climategrog.wordpress.com/?attachment_id=270
However, I do agree with Mosh’s last comment: Nic is taking a very wise approach and doing the difficult task of injecting some reason into the thinking in small, digestible pieces. Congratulations on finding the right balance between being honest and being effective 😉

Jim Cripwell

HR, you write “you need to moderate your skepticism appropriately.”
I have absolutely no intention whatsoever of moderating my skepticism. There is no empirical data whatsoever to support the hypothesis of CAGW, and until we get such empirical data, I will continue to believe that CAGW is a hoax. The warmists have been conducting pseudo-science for years, trying to pretend that the estimates they have made on climate sensitivity have a meaning in physics. IMHO, as I have noted, I think these estimates are completely worthless.

Dan Pangburn

Measurements since before 1900 demonstrate that sensitivity is between zero and insignificant.
Natural Climate change has been hiding in plain sight
http://climatechange90.blogspot.com/2013/05/natural-climate-change-has-been.html

george e. smith

“””””…..bobl says:
May 24, 2013 at 6:47 am
I am struggling with a not so related issue that came to me just yesterday. The theory has it that N2 and O2 lack vibrational modes in the infrared, making them incapable of reradiating heat. To me this implies that all IR radiation to space from the atmosphere must be from a greenhouse gas? So if the concentration of greenhouse gases increases then the number of photons released to space must necessarily increase, given that the non-radiating gases transfer their energy by collisions.
Surely this has to increase losses to space overall.
What am I missing?………””””””””””””
Bob, what it is that you are missing is an understanding of the fundamental difference between atomic or molecular line/band spectra emission/absorption radiation, which is entirely a consequence of atomic and molecular structure of SPECIFIC materials; and THERMAL RADIATION which is a continuum spectrum of EM radiation, that is NOT material specific, and depends (spectrally) ONLY on the Temperature of the material. Of course, the level of such emission or absorption depends on the density of the material (atoms/molecules per m^3).
Spectroscopists have known since pre-Cambrian times that the sun emits a broad spectrum of continuum thermal radiation, on top of which, as discovered by Fraunhofer and others, there is a whole flock of narrow atomic or molecular line spectra at very specific frequencies that are characteristic of specific elements or charged ions in the sun.
So-called “Black Body Radiation ” is an example of a thermal continuum spectrum.
I deliberately said “so-called”, because nobody ever observed black body radiation, since the laws of Physics prohibit the existence of any such object.
Well some folks think a black hole might be a black body.
By definition, a black body absorbs 100% of electromagnetic radiation of ANY frequency or wavelength down to, but not including zero; and up to, but not including infinity.
Yet no physical object (sans a black hole) is able to absorb 100% of even ONE single frequency, or wavelength; let alone All frequencies or wavelengths. To do that, the body would have to have a surface refractive index of exactly 1.0, the same as the refractive index of empty space. That would require that the velocity of light in the material be exactly (c).
Now (c) = 1/sqrt(munought x epsilonnought) ; the permeability, and permittivity of free space.
munought = 4pi E-7 Volt seconds per Amp metre.
epsilonnought = 8.85418781762 E-12 Amp seconds per Volt metre.
Both of these, and (c) = 2.99792458 E+8 metres per second, are exact values. They are the only such fundamental physical constants that are exact.
So a material with a product of permeability and permittivity = 1 / c^2 would have a velocity of EM radiation also equal to (c). But that is not sufficient.
Free space vacuum, also has a characteristic impedance = sqrt( munought / epsilonnought) which is approximately 120 pi Ohms, or 377 Ohms.
And when a wave travelling in a medium of 377 Ohms, such as free space, encounters a medium of different impedance, there is a partially transmitted wave, and a partially reflected wave; so no total absorption.
So any real physical medium, must have a permeability of munought, and a permittivity of epsilon nought, at all frequencies and wavelengths, in order to qualify as a black body. It would be indistinguishable from the vacuum of free space.
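As a quick numerical check of the constants above, assuming the classical exact value mu0 = 4π×10⁻⁷ and the standard epsilon0 ≈ 8.854×10⁻¹² F/m:

```python
import math

# Free-space constants (SI); note the 1e-12 exponent on epsilon0.
mu0 = 4 * math.pi * 1e-7            # permeability, V*s/(A*m)
eps0 = 8.854187817e-12              # permittivity, A*s/(V*m)

c = 1 / math.sqrt(mu0 * eps0)       # speed of light, m/s
Z0 = math.sqrt(mu0 / eps0)          # impedance of free space, ohms

print(c)    # ~2.99792458e8 m/s
print(Z0)   # ~376.73 ohms (approximately 120*pi)
```

This recovers both the speed of light and the ~377-ohm characteristic impedance referred to in the comment.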
The point of all this, is that real bodies only approximate what a black body might do, and only do so over narrow ranges of frequency or wavelength, depending on their Temperature.
And in the case of gases like atmospheric nitrogen and oxygen, the molecular density is extremely low, so the EM absorption doesn’t come anywhere near 100%, even for huge thicknesses of atmosphere. But the absorption per molecule is not zero, as some people assume, so even non-IR-active, non-GHG gases do absorb and emit a continuum spectrum of thermal radiation based on the gas temperature.
Experimental practical near black bodies, operate as anechoic cavities, where radiation can enter a small aperture, and then gets bounced around in the cavity and never escapes. Some derivations of the Planck radiation law are based on such cavity radiation.
In the case of a “black body cavity”, the required conditions are that the walls be perfectly reflecting of ALL EM radiation, and also must have zero thermal conductivity so that heat energy cannot leak out through the walls.
Once again, such conditions are a myth, and no ideal black body cavity can exist either.
So we have the weird circumstance, that Blackbody radiation has never been observed by anybody, and simply cannot exist, yet all kinds of effort went into theoretical models of a non-existing non-phenomenon, and gave us one of the crown jewels of modern physics; the Planck radiation formula.

Has anyone looked at/challenged this?
http://www.naturalnews.com/040448_solar_radiation_global_warming_debunked.html
Climate sensitivity may be irrelevant or wrong

John Peter

Steven Mosher says:
May 24, 2013 at 8:47 am
“You’ve shown that the consensus is broader and more uncertain than people think, not by questioning the existence of the consensus but by working with others to demonstrate that some of the core beliefs ( how much will it warm) admit many answers.”
So it is not a consensus after all. Good to see that the 3C consensus is breaking up. We will all benefit from that (other than the rent seekers).
I also applaud the fact that Steven Mosher has transformed into something less cryptic than usual. Long may it continue as he often has something valuable to add when the notion takes him.

Ian H

Clearly estimates of climate sensitivity have had to fall because models based on higher numbers have tracked so poorly they have reached the point of falsification. The greatest pressure is on the TCR value since sufficient time has now passed without significant warming to rule out a high value for this number. The ECS on the other hand makes predictions that cannot be fully falsified for hundreds of years so I expect we’ll see people continuing to defend high numbers here for some time. I expect estimates of TCR and ECS will continue to fall if we see cooling over the next decade. These numbers in any case are still based on a simple forcing model with feedback which I don’t think is at all realistic.
I expect the immediate response of the most alarmed will be to start talking up the ECS and downplaying the TCR. However these ECS values are not really alarming. Over the longer term we are staring down the barrel of the next ice age. I find it reassuring to think that our influence on the planet might allow us to dodge this calamity. In fact I am more concerned that ECS might not be big enough to allow this to happen.
The problem is that ECS is bigger than TCR because of long term feedbacks to warming that depend on slow processes like the melting of ice sheets or warming of the deep oceans. But in the context of a planet that should be heading into an ice age the effect of added CO2 may not be to warm but merely to offset the expected natural cooling. If the greenhouse effect is not actually warming the planet but simply staving off the descent into the next ice age then none of these feedback effects will come into play.

Nic Lewis

Steven Mosher wrote:
“I think it might be instructive for WUWT readers to understand how Anthony’s claims about microsite bias would play into your calculations. For example, if one assumed that the land warming was biased by .1C per decade from 1979-current, what would that do to the sensitivity calculation?”
Good point, Steve. That assumption would reduce the increase in global temperature between the 1860-79 mean and the 2003-12 mean from 0.76 C to about 0.68 C. All the climate sensitivity estimates, and their uncertainty ranges, would then reduce by about 11%. So a sensitivity of 1.7 C would change to just over 1.5 C, for example.
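Nic's scaling here is simple proportionality: in the energy-budget method, ECS = F_2x·ΔT/(ΔF − ΔQ), so the estimate scales linearly with ΔT. A sketch using only the figures quoted in the comment:

```python
# Energy-budget ECS scales linearly with the temperature change dT,
# so the microsite-bias adjustment is a simple ratio. The 1.7 C
# example sensitivity is the one quoted in the comment.
dT_orig, dT_adj = 0.76, 0.68              # K, 1860-79 mean to 2003-12 mean

scale = dT_adj / dT_orig                  # ~0.895
reduction_pct = (1 - scale) * 100         # ~10.5%, i.e. "about 11%"

ecs_orig = 1.7                            # K
ecs_adj = ecs_orig * scale                # K

print(round(reduction_pct, 1))   # 10.5
print(round(ecs_adj, 2))         # 1.52 -> "just over 1.5 C"
```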

Rob

Clive Menzies, I too have seen that; then I ran across this. It makes one wonder if we’re not overcomplicating this: http://www.crh.noaa.gov/dtx/march.php

X Anomaly

What I would like to see is low (below 1 deg C) ECS/TCR, i.e., what minimums over the next decades/century would it take for both estimates to get tickets to the Lindzen and Choi ball game (0.7 deg C)?

Anteaus

“One thing we do know is that the human response to climate sensitivity is very high. The positive feedbacks are much stronger than what is going on in the atmosphere.”
That’s because of the renewables subsidy forcing, which will result in runaway inflation-level rise and economies going under if the propaganda levels exceed 400ppm.

Greg Goodman

In the previous article, Willis questioned why the volcanic forcings were being spread back in time by a running mean filter. It was confirmed by Nic that this was the case but he stated that it was immaterial to the findings of Otto et al 2013. This is probably true.
Now that Nic has kindly linked to a source of the forcings used, I have plotted it up against UAH TLT and TLS and marked in the dates of the two major eruptions.
I chose the SH extra-tropical region since this shows no visible impact from El Chichon and allows us to see the background variation in temperatures that was happening at that time. (Note that stratospheric temps tend to vary in the opposite sense, so I have inverted and scaled to give a ‘second opinion’ on the background variations.)
http://climategrog.wordpress.com/?attachment_id=273
Now we see that the effects of the back-spreading of the forcing data produce a totally false correlation with natural variations of temperature that preceded the eruption. This has nothing to do with forcing or the model and is entirely a result of improper processing. The distorted form of the forcing data just happens to correlate with the natural temperature background around the time of the event.
Incidentally, I remain even more convinced now of my initial assessment that this is a five year running mean, not a three year as suggested by Willis and confirmed by Nic. I would ask Nic to check his source of information because it seems pretty incontrovertible from this, that it is affecting two points either side not one, hence it is a 5 pt filter kernel.
So why was this done? There is no valid reason, and it has to be an intentional act; you can’t accidentally run a filter on one of your primary inputs.
Whoever had the idea to “smooth” the volcanic forcings: are they also introducing this practice elsewhere than Otto et al, where it may be falsely improving the ability of the hindcasts to reproduce key features of the temperature record?
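The 3-point versus 5-point question can be settled on a toy impulse: a centred k-point running mean spreads a single spike (k−1)/2 points either side. A minimal sketch, assuming a simple centred filter that leaves the edges untouched:

```python
def running_mean(x, k):
    """Centred k-point running mean (k odd); edge points left as-is."""
    h = k // 2
    out = list(x)
    for i in range(h, len(x) - h):
        out[i] = sum(x[i - h:i + h + 1]) / k
    return out

# A single-year spike, like an isolated volcanic forcing impulse
impulse = [0.0] * 11
impulse[5] = 1.0

rm3 = running_mean(impulse, 3)
rm5 = running_mean(impulse, 5)

print([i for i, v in enumerate(rm3) if v])   # [4, 5, 6]: one point either side
print([i for i, v in enumerate(rm5) if v])   # [3, 4, 5, 6, 7]: two points either side
```

So a forcing series showing influence two years before an eruption is consistent with a 5-point kernel, not a 3-point one.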

What I love about science are the necessary assumptions that are made in order to carry out a calculation. You know the kind of thing I mean: ‘let’s assume a value for such and such’, or let’s invent a concept like a ‘black body’, which of course cannot exist but is nonetheless useful in carrying out the calculation. Well, here are a couple of observations from ‘real life’ which in my opinion seem to render ‘sensitivity’ calculations almost completely irrelevant…
Let’s assume (see what I did there?) that the increase of CO2 concentration from 350 to 400 ppm does indeed capture sufficient energy to raise the overall temperature of the atmosphere by, say, 1 degree C. Let’s then assume that excess heat is eventually transported by ocean currents towards the polar regions. In the case of the Arctic Ocean in winter, sea ice cover is reduced, thereby allowing larger volumes of warmer water to come into contact with the atmosphere at a time when there is no solar input (indeed, conditions are ideal for heat loss to space).
Could it not then be argued that a slight heating of the atmosphere would cause, and be balanced by, a subsequent polar cooling effect?
Indeed, could it be further argued that Arctic Ocean heat loss could be a self-amplifying effect (a bit like the Warmist ‘feedbacks’), subsequently causing ‘runaway cooling’?

Richard M

Phil. says:
May 24, 2013 at 8:04 am
No, because the atmosphere is optically thick at the GHG wavelengths, i.e. lower in the atmosphere it absorbs more than it emits. Emission to space only occurs above a certain height and therefore at a certain temperature, as the concentration increases then that height increases and the temperature decreases and hence emission to space goes down.

You are oversimplifying the situation.
First, the GHE is real and works off of radiation from the surface. Bobl wasn’t referring to this process.
Second, thermalization and radiation of atmospheric energy (not surface energy) is basic physics. This works in parallel to the GHE, and this is what Bobl was asking about. Since the density of the atmosphere is reduced the higher you go, the average distance the radiation travels until re-absorption (or loss to space) is computable; let’s assume X meters upwards. It behaves like flow through a pipe. Now, if you add more CO2, you increase the probability of these events occurring, which increases the flow of energy at all levels of the atmosphere towards space. Essentially you create a wider pipe. If climate models ignore this process, it’s not surprising they get the wrong answer.
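The "computable average distance" Richard M mentions is a mean free path; under a simple Beer–Lambert assumption it is 1/(n·σ). The number density and cross-section below are illustrative placeholders, not measured values:

```python
# Mean free path of an IR photon before re-absorption, l = 1/(n * sigma).
# Both input numbers are illustrative placeholders, not measured values.
sigma = 5e-25      # effective absorption cross-section per molecule, m^2
n_co2 = 1e22       # CO2 number density, molecules per m^3

mfp = 1.0 / (n_co2 * sigma)              # ~200 m
mfp_doubled = 1.0 / (2 * n_co2 * sigma)  # ~100 m: doubling the concentration
                                         # halves the re-absorption distance

print(mfp, mfp_doubled)
```

Note the sketch only shows how the distance is computed; whether more absorption/emission events per metre nets out as more or less emission to space is exactly the point under debate in these comments.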

John Parsons

Nic Lewis’s work (a significant contribution) and its implications need to be put into perspective. His work doesn’t seem to take into account the paleo record, nor should it necessarily do so. But the extremely short sample period needs to be recognized.
Additionally, from my reading of his results (as well as Dr. Otto’s, apparently), at most we may have a reprieve of ten or fifteen years before the same effects are upon us.
Not exactly a ‘Hallelujah’. JP

Janice Moore

1) “… if one makes the assumption that the evolution of forcing over the long period involved approximates a 70-year ramp. This is reasonable [based on another assumption that] the net forcing has grown substantially faster from the mid-twentieth century on than previously.”
***
2) “… estimates based on changes to the decade to 2000–09 are arguably the most reliable, since that decade has the strongest forcing… .” [assumes the forcing is of any significance at all]
***
3) “…forcing was stronger during the 2003–12 decade…” [assumes significant forcing causation]
***
4) “… Since that data only extends to the mean for 2008–10, I have extended it for two years at a conservative (high) rate of 0.33 W/m2… ”
***
From statements like those quoted above, this well-executed paper appears to be a careful attempt to both: 1) deprogram genuinely brainwashed AGW cult members by gingerly casting doubt upon their core beliefs; and 2) provide a face-saving way for AGW crooks who know better to back down from their lies.
It is not, nevertheless, robust, open, debate.
When a debate opponent has no evidence to back up their conjectures, when that opponent offers only assumptions and speculation, then, no matter how complicated their math, it adds up to no more than “I simply believe this.” There is nothing to debate. The above is only playing their imaginary game. It may get them to change their behavior slightly, but not significantly. It’s like going along with a person having a psychotic episode just enough to get them out of the middle of the road and onto the shoulder. “Yes, yes, my good fellow, those tiny green men most likely do want you to go with them, but, I know that they want you to walk on the shoulder, not down the centerline. There’s a good lad. Just keep to the right of (or left — in U.K.) of that solid white line there. Good luck!”
While it is shrewd not to try to tell them “TINY GREEN MEN DO NOT EXIST,” the above really isn’t a debate.
Conclusion: While scientific discussion is very important, the main goal is to save our economies, thus we must win over the voters. And that debate needs to be simply and powerfully stated. In terms such as:
“All people are actually doing is just taking another guess.” [Jim Cripwell]
“Climate science attempts to model this as a simple scalar average, without even knowing if the combination of all the feedbacks represents a stationary function. That is, they don’t even know if the mean of the sensitivity is a constant.” [bobl]
“Clearly estimates of climate sensitivity have had to fall because models based on higher numbers have tracked so poorly they have reached the point of falsification. ” [IanH]

GO, you wonderful WUWT SCHOLARS — argue with vigor! TRUTH IS ON YOUR SIDE.

Janice Moore

AAAAAaaaack! Please forgive me. I messed up my first after the “]” in first paragraph. Sigh.

Janice Moore

Oh, brother… “my first [end bold]…”

bobl

Thank you Richard, that’s exactly what I was trying to say. I was thinking about how energy lost from the surface by convection is radiated to space, and whether CO2 partial pressure plays into the efficiency of that process.
1. CO2 molecule takes up energy through collision with non radiating gas
2. CO2 molecule emits photon
It seems to me that increasing the CO2 concentration increases the probability of such an interaction, and therefore must increase the emission to space. Does this component, for example, form part of the increased IR emission in the CO2 emission bands seen in the satellite record?
This isn’t much more than a thought at the moment, but it seems to me that this is just a question of conservation, i.e. energy in vs energy out: anything that increases energy out must result in an overall cooling. Granted, it could be stratified (cooling in the upper atmosphere only), but given the convection processes at play… increasing the efficiency of radiation must increase the temperature difference, increasing the rate of convective and conductive heat transport to match.
This question has rocked my world, so to speak. I can’t reconcile this with a warming effect, and to date I have been firmly of the opinion that CO2 warms. That’s still true if one only considers radiation; in that case radiation to space should decrease as GHGs rise, because the radiation never reaches from the surface to height. But likely not if convective heat is radiated to space by GHGs. In that case there is always plenty of energy drawn from the thermal energy of surrounding N2 and O2 to feed into the pipe…
Thoughts on this are welcome.

Janice Moore

left should be right and right left above (I really need a vacation… !!!)

Nic // Why not do a meta-analysis to collapse those wide CI values? The consistency between the various results suggests that the CI is too large.

Tsk Tsk

bobl says:
May 24, 2013 at 5:30 pm
Thank you Richard, that’s exactly what I was trying to say, I was thinking about how energy lost from the surface, by convection is radiated to space, and whether CO2 partial pressure plays into the efficiency of that process.
1. CO2 molecule takes up energy through collision with non radiating gas
2. CO2 molecule emits photon
It seems to me that increasing the CO2 concentration increases the probability of such an interaction, and therefore must increase the emission to space.
———————————————————-
I think you’re also assuming that the radiation always has to be outwards (are you?). The reality is that the CO2 molecule has basically a 50/50 chance of radiating up and out or down and in. The net effect is to increase the transit time of the photon and increase the energy content of the atmosphere and the surface as a result. Of course this is happening at all levels of the atmosphere just to make it more complicated. Finally, it can be directly observed just by measuring the radiation from a dark sky at night.

ruvfsy

So Mosh,
Where is the fine line between denialism and lukewarmerism?
1.2 per doubling of CO2?

ruvfs

Nic:
Could you tell us something about the journey from your first interest, your first calculations, and your first paper to the collaboration towards this paper?
Would be interesting to hear.

Master_Of_Puppets

‘Climate’ and ‘Climate Change’ are interpretations, in part based on the psychological state of the ‘observer’ at any particular time and therefore not physical in any way or form, i.e. fantasies or phantasms.
Fantasies and phantasms have no sensitivity, not even memory, they are only apparitions.

AlecM

bobl 6.47 am: ‘Surely this has to increase losses to space overall.’
The fundamental problem with Climate Alchemy is that it starts from the premise that the ~15 µm CO2 IR band, emitting at ~220 K to space, controls IR energy flux to space, because if you double CO2, it reduces that band’s emitted flux by ~3 W/m^2.
However, at present CO2 level, that band is ~8% of OLR. 92% of the OLR comes from cloud level, the H2O bands and in the atmospheric window, near the surface temperature.
The premise has to shift to accepting that the Earth self-regulates OLR equal to SW energy IN, and the variations about the set point are oscillations as long-time-constant parts of the system adapt.
In other words, CO2-AGW is by definition zero on average.