Why doesn't the AR5 SOD's climate sensitivity range reflect its new aerosol estimates?

This article is a detailed complement to Matt Ridley’s Op-Ed today in the Wall Street Journal:

Cooling Down the Fears of Climate Change

Evidence points to a further rise of just 1°C by 2100. The net effect on the planet may actually be beneficial.

Guest post by Nic Lewis

There has been much discussion on climate blogs of the leaked IPCC AR5 Working Group 1 Second Order Draft (SOD). Now that the SOD is freely available, I can refer to the contents of the leaked documents without breaching confidentiality restrictions.

I consider the most significant – but largely overlooked – revelation to be the substantial reduction since AR4 in estimates of aerosol forcing and uncertainty therein. This reduction has major implications for equilibrium climate sensitivity (ECS). ECS can be estimated using a heat balance approach – comparing the change in global temperature between two periods with the corresponding change in forcing, net of the change in global radiative imbalance. That imbalance is very largely represented by ocean heat uptake (OHU).

Since the time of AR4, neither global mean temperature nor OHU has increased, while the IPCC’s own estimate of the post-1750 change in forcing net of OHU has increased by over 60%. In these circumstances, it is extraordinary that the IPCC can leave its central estimate and ‘likely’ range for ECS unchanged.

I focused on this point in my review comments on the SOD. I showed that using the best observational estimates of forcing given in the SOD, and the most recent observational OHU estimates, a heat balance approach estimates ECS to be 1.6–1.7°C – well below the 2–4.5°C ‘likely’ range that the SOD claims is supported by the observational evidence, and little more than half the best estimate of circa 3°C it gives.

The fact that ECS, as derived using the new aerosol forcing estimates and a heat balance approach, appears to be far lower than claimed in the SOD is highlighted in an article by Matt Ridley in the Wall Street Journal, which uses my calculations. There was not space in that article to go into the details – including the key point that the derived ECS estimate is very well constrained – so I am doing so here.

How does the IPCC arrive at its estimated range for climate sensitivity?

Methods used to estimate ECS range from:

(i) those based wholly on simulations by complex climate models (GCMs), the characteristics of which are only very loosely constrained by climate observations, through

(ii) those using simpler climate models whose key parameters are intended to be constrained as tightly as possible by observations, to

(iii) those that rely wholly or largely on direct observational data.

The IPCC has placed a huge emphasis on GCM simulations, and the ECS range exhibited by GCMs has played a major role in arriving at the IPCC’s 2–4.5°C ‘likely’ range for ECS. I consider that little credence should be given to estimates for ECS derived from GCM simulations, for various reasons, not least because there can be no assurance that any of the GCMs adequately reflect all key climate processes. Indeed, since in general GCMs significantly overestimate aerosol forcing compared with observations, they need to embody a high climate sensitivity or they would underestimate historical warming and be consigned to the scrapheap. Observations, not highly complex and unverifiable models, should be used to estimate the key properties of the climate system.

Most observationally-constrained studies use instrumental data, for good reason. Reliance cannot be placed on paleoclimate proxy-based estimates of ECS – the AR4 WG1 report concluded (Box 10.2) that uncertainties in Last Glacial Maximum studies are just too great, and the only probability density function (PDF) for ECS it gave from a last millennium proxy-based study contained little information.

Because it has historically been difficult to estimate ECS purely from instrumental observations, a standard estimation method is to compare observations of key observable climate variables, such as zonal temperatures and OHU, with simulations of their evolution by a relatively simple climate model with adjustable parameters that represent, or are calibrated to, ECS and other key climate system properties. A number of such ‘inverse’ studies, of varying quality, have been performed; I refer later to various of these. But here I estimate ECS using a simple heat balance approach, which avoids dependence on models and also has the advantage of not involving choices about niceties such as truncation parameters and Bayesian priors, which can have a major impact on ECS estimation.

Aerosol forcing in the SOD – a composite estimate is used, not the best observational estimate

Before going on to estimating ECS using a heat balance approach, I should explain how the SOD treats forcing estimates, in particular those for aerosol forcing. Previous IPCC reports have just given estimates for radiative forcing (RF). Although in a simple world this could be a good measure of the effective warming (or cooling) influence of every type of forcing, some forcings have different efficacies from others. In AR5, this has been formalised into a measure, adjusted forcing (AF), intended better to reflect the total effect of each type of forcing. It is more appropriate to base ECS estimates on AF than on RF.

The main difference between the AF and RF measures relates to aerosols. In addition, the AF uncertainty for well-mixed greenhouse gases (WMGG) is double that for RF. Table 8.7 of the SOD summarises the AR5 RF and AF best estimates and uncertainty ranges for each forcing agent, along with RF estimates from previous IPCC reports. The terminology has changed, with direct aerosol forcing renamed aerosol-radiation interactions (ari) and the cloud albedo (indirect) effect now known as aerosol-cloud interactions (aci).

Table 8.7 shows that the best estimate for total aerosol RF (RFari+aci) has fallen from −1.2 W/m² to −0.7 W/m² since AR4, largely due to a reduction in RFaci, the uncertainty band for which has also been hugely reduced. It gives a higher figure, −0.9 W/m², for AFari+aci. However, −0.9 W/m² is not what the observations indicate: it is a composite of observational, GCM-simulation/aerosol model derived, and inverse estimates. The inverse estimates – where aerosol forcing is derived from its effects on observables such as surface temperatures and OHU – are a mixed bag, but almost all the good studies give a best estimate for AFari+aci well below −0.9 W/m²: see Appendix 1 for a detailed analysis.

It cannot be right, when providing an observationally-based estimate of ECS, to let it be influenced by including GCM-derived estimates for aerosol forcing – a key variable for which there is now substantial observational evidence. To find the IPCC’s best observational (satellite-based) estimate for AFari+aci, one turns to Section 7.5.3 of the SOD, where it is given as −0.73 W/m² with a standard deviation of 0.30 W/m². That is actually the same as the Table 8.7 estimate for RFari+aci, except for the uncertainty range being higher. Table 8.7 only gives estimated AFs for 2011, but Figure 8.18 gives their evolution from 1750 to 2010, so it is possible to derive historical figures using the recent observational AFari+aci estimate as follows.

The values in Figure 8.18 labelled ‘Aer-Rad Int.’ are actually for RFari, but that equals the purely observational estimate for AFari (−0.4 W/m² in 2011), so they can stand. Only the values labelled ‘Aer-Cld Int.’, which are in fact the excess of AFari+aci over RFari, need adjusting (scaling down by (0.73−0.4)/(0.9−0.4), all years) to obtain a forcing dataset based on a purely observational estimate of aerosol AF rather than the IPCC’s composite estimate. It is difficult to digitise the Figure 8.18 values for years affected by volcanic eruptions, so I have also adjusted the widely-used RCP4.5 forcings dataset to reflect the Section 7.5.3 observational estimate of current aerosol forcing, using Figure 8.18 and Table 8.7 data to update the projected RCP4.5 forcings for 2007–2011 where appropriate. The result is shown below.


The adjustment I have made merely brings estimated forcing into line with the IPCC’s best observationally-based estimate for AFari+aci. But one expert on the satellite observations, Prof. Graeme Stephens, has stated that AFaci is at most ‑0.1 W/m², not ‑0.33 W/m² as implied by the IPCC’s best observationally-based estimates: see here and slide 7 of the linked GEWEX presentation. If so, ECS estimates should be lowered further.
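The rescaling described above amounts to a single scale factor applied to the ‘Aer-Cld Int.’ component of the forcing series. A minimal Python sketch, using only the 2011 values quoted in the text (variable names are mine, for illustration):

```python
# Rescale the 'Aer-Cld Int.' values (AFari+aci in excess of RFari) so the 2011
# total matches the Section 7.5.3 observational estimate (-0.73 W/m²) rather
# than the Table 8.7 composite (-0.9 W/m²). RFari (-0.4 W/m²) is left as-is.

AF_ARI_2011 = -0.4          # observational AFari (= RFari) in 2011, W/m²
AF_TOTAL_COMPOSITE = -0.9   # Table 8.7 composite AFari+aci, W/m²
AF_TOTAL_OBS = -0.73        # Section 7.5.3 satellite-based AFari+aci, W/m²

scale = (AF_TOTAL_OBS - AF_ARI_2011) / (AF_TOTAL_COMPOSITE - AF_ARI_2011)

def rescale_aci(af_aci_composite):
    """Scale an 'Aer-Cld Int.' value (any year) onto the observational basis."""
    return af_aci_composite * scale

# 2011 check: the rescaled total recovers the observational estimate
total_2011 = AF_ARI_2011 + rescale_aci(AF_TOTAL_COMPOSITE - AF_ARI_2011)
print(round(scale, 2), round(total_2011, 2))  # 0.66 -0.73
```

Applying `rescale_aci` to every year of the ‘Aer-Cld Int.’ series gives the adjusted forcing dataset used in the calculations that follow.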

Reworking the Gregory et al. (2002) heat-balance estimate of ECS

The best known study estimating ECS by comparing the change in global mean temperature with the corresponding change in forcing, net of that in OHU, is Gregory et al. (2002). This was one of the studies for which an estimated PDF for ECS was given in AR4. Unfortunately, ten years ago observational estimates of aerosol forcing were poor, so Gregory used a GCM-derived estimate. In July 2011 I wrote an open letter to Gabi Hegerl, joint coordinating lead author of the AR4 chapter in which Gregory 02 was featured, pointing out that its PDF was not computed on the basis stated in AR4 (a point since conceded by the IPCC), and also querying the GCM-derived aerosol forcing estimate used in Gregory 02. Some readers may recall my blog post at Climate Etc. featuring that letter, here. Using the GISS forcings dataset, and corrected Levitus et al. (2005) OHU data, the 1861–1900 to 1957–1994 increase in QF (total forcing – OHU) changed from 0.20 to 0.68 W/m². Dividing 0.68 W/m² into ΔT‘, the change in global surface temperature, being 0.335°C, and multiplying by 3.71 W/m² (the estimated forcing from a doubling of CO2 concentration) gives a central estimate (median) for ECS of 1.83°C.

I can now rework my Gregory 02 calculations using the best observational forcing estimates, as reflected in Figure 8.18 with aerosol forcing rescaled as described above. The change in QF becomes 0.85 W/m². That gives a central estimate for ECS of 1.5°C.

An improved ECS estimate from the change in heat balance over 130 years

The 1957–1994 period used in Gregory 02 is now rather dated. Moreover, using long time periods for averaging makes it impossible to avoid major volcanic eruptions, which involve uncertainty as to the large forcing excursions involved and their effects. I will therefore produce an estimate based on decadal mean data, for the decade to 1880 and for the most recent decade, to 2011. Although doing so involves an increased influence of internal climate variability on mean surface temperature, it has several advantages:

a) neither decade was significantly affected by volcanic activity;

b) neither decade encompassed exceptionally large ENSO events, such as the 1997/98 El Nino, and average ENSO conditions were broadly neutral in both decades (arguably with a greater tendency towards warm El Nino conditions in the recent decade); and

c) the two decades are some 130 years apart, and therefore correspond to similar positions in the 60–70 year quasi-periodic AMO cycle (which appears to have a peak-to-peak influence on global mean temperature of the order of 0.1°C).

Since estimates of OHU have become much more accurate during the latest decade, as the ARGO network of diving buoys has come into action, the loss of accuracy by measuring OHU only over the latest decade is modest.

I summarise here my estimates of the changes in decadal mean forcing, heat uptake and global temperature between 1871–1880 and 2002–2011, and related uncertainties. Details of their derivations are given in Appendix 2.

Change in global decadal mean between 1871–1880 and 2002–2011:

                                                                 Mean estimate   Std. deviation   Units
Adjusted forcing: CO2 and other well-mixed greenhouse gases            –              0.29        W/m²
Adjusted forcing: all other sources (balancing error std. dev.)        –              0.34        W/m²
Adjusted forcing: total                                              2.09             0.45        W/m²
Earth’s heat uptake                                                  0.43             0.08        W/m²
Surface temperature                                                  0.73             0.12        °C

Now comes the fun bit, putting all the figures together. The best estimate of the change from 1871–1880 to 2002–2011 in decadal mean adjusted forcing, net of the Earth’s heat uptake, is 2.09 − 0.43 = 1.66 W/m². Dividing that into the estimated temperature change of 0.727°C and multiplying by 3.71 W/m² gives an estimated climate sensitivity of 1.62°C, close to that from reworking Gregory 02.
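The heat balance arithmetic here, and in the Gregory 02 rework earlier, boils down to one formula. A minimal sketch using only figures from the text:

```python
# ECS = F_2xCO2 * dT / (dF - dQ), where dF is the change in adjusted forcing
# and dQ is the change in the Earth's heat uptake.

F2X = 3.71  # W/m², estimated forcing from a doubling of CO2 concentration

def ecs_heat_balance(dT, dF, dQ):
    """Equilibrium climate sensitivity (°C) from a simple heat balance."""
    return F2X * dT / (dF - dQ)

# Main estimate: 1871-1880 to 2002-2011 decadal means
ecs_main = ecs_heat_balance(dT=0.727, dF=2.09, dQ=0.43)

# Reworked Gregory 02: change in QF (total forcing net of OHU) of 0.68 W/m²
ecs_gregory = F2X * 0.335 / 0.68

print(round(ecs_main, 2), round(ecs_gregory, 2))  # 1.62 1.83
```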

Based on the estimated uncertainties, I compute a 5–95% confidence interval for ECS of 1.03‑2.83°C – see Appendix 3. That implies a >95% probability that ECS is below the IPCC’s central estimate of 3°C.

ECS estimates from recent studies – good ones…

As well as this simple estimate from heat balance implying a best estimate for ECS of approximately 1.6°C, and the reworking of the Gregory 02 results suggesting a slightly lower figure, two good quality recent observationally-constrained studies using relatively simple hemispheric-resolving models also point to climate sensitivity being about 1.6°C:

  • Aldrin et al. (2012), an impressively thorough study, gives a most likely estimate for ECS of 1.6°C and a 5–95% range of 1.2–3.5°C.

  • Ring et al. (2012) also estimates ECS as 1.6°C, using the HadCRUT4 temperature record (1.45°C to 2.01°C using other records).

And the only purely observational study featured in AR4, Forster & Gregory (2006), which used satellite observations of radiation entering and leaving the atmosphere, also gave a best estimate of 1.6°C, with a 95% upper bound of 4.1°C.

and poor ones…

Most of the instrumental-observation constrained studies featured in IPCC reports that give PDFs for ECS peaking at significantly over 2°C have some identifiable deficiency. Two such studies were featured in Figure 9.21 of AR4 WG1: Forest 06 and Knutti 02. Forest 06 has several statistical errors (see here) and other problems. Knutti 02 used a weak statistical procedure and an arbitrary combination of five ocean models, and there is little information content in its probabilistic ECS estimate.

Five of the PDFs for ECS from 20th century studies featured in Figure 10.19 of the AR5 SOD peak significantly above 2°C:

  • one is Knutti 02;
  • three are various cases from Libardoni and Forest (2011), a study that suffers the same deficiencies as Forest 06;
  • one is from Olson et al. (2012); the Olson PDF, like Knutti 02’s, is extremely wide and contains almost no information.


In the light of the current observational evidence, in my view 1.75°C would be a more reasonable central estimate for ECS than 3°C, perhaps with a ‘likely’ range of around 1.25–2.75°C.

Nic Lewis


Appendix 1: Inverse estimates of aerosol forcing – the expert range largely reflects the poor studies

The AR5 WG1 SOD composite AFari+aci estimate of −0.9 W/m² is derived from mean estimates from satellite observations (−0.73 W/m²), GCMs (−1.45 W/m² from AR4+AR5 models including secondary processes, −1.08 W/m² from CMIP5/ACCMIP models) and an “expert” range of −0.68 to −1.52 W/m² from combined inverse estimates. These figures correspond to box-plots in the lower panel of Figure 7.19. One of the inverse studies cited hasn’t yet been published and I haven’t been able to obtain it, but I have examined the other twelve studies.

Because aerosol forcing is strongly asymmetric between the northern and southern hemispheres, to estimate it with any accuracy using inverse methods it is essential to use a model that, at a minimum, resolves the two hemispheres separately. Only seven of the twelve studies do so. Of the other five:

  • one is just a survey and derives no estimate itself;
  • one (Gregory 02) merely uses an AOGCM-derived estimate of a circa 100-year change in aerosol forcing, without itself deriving any estimate;
  • three are based on global-mean only data (with two of them assuming an ECS of 3°C when estimating aerosol forcing).

One of the seven potentially useful studies is based on GCM simulations, which I consider to be untrustworthy. A second does not estimate aerosol forcing over 90S–28S, and concludes that over 1976–2007 it has been large and negative over 28S–28N and large and positive over 28N–60N, the opposite of what is generally believed. A third study is Andronova and Schlesinger (2001), which it turns out had a serious code error. Its estimate of −0.54 to ‑1.30 W/m² falls to −0.42 to −0.99 W/m² when using the corrected model (Ring et al., 2012). Three of the other studies, all using four latitude zones, estimate aerosol forcing to be even lower: in the ranges −0.14 to −0.74, −0.3 to −0.95 and −0.19 to −0.83 W/m². The final study estimates it to be around or slightly above ‑1 W/m², but certainly below ‑1.5 W/m². One recent inverse estimate that the SOD omits is −0.7 W/m² (mean – no uncertainty range given) from Aldrin et al. (2012).

In conclusion, I wouldn’t hire the IPCC’s experts if I wanted a fair appraisal of the inverse studies. A range of −0.2 to −1.3 W/m² looks more reasonable – and as it happens, is centred almost exactly on the mean of the estimates derived from satellite observations.


Appendix 2: Derivation of the changes in decadal mean global temperature, forcing and heat uptake

Since it extends back before 1880 and includes a correction to sea surface temperatures in the mid-20th century, I use HadCRUT4 global mean temperature data, available as annual data here. The difference between the mean for the decade 2002–2011 and that for 1871–1880 is 0.727°C. The uncertainty in that temperature change is tricky to work out because the various error sources are differently correlated in time. Adding the relevant years’ total uncertainty estimates for the HadCRUT4 21-year smoothed decadal data (estimated 5–95% ranges 0.17°C and 0.126°C), and very generously assuming the variance uncertainty scales inversely with the number of years averaged, gives an error standard deviation for the change in decadal temperature of 0.08°C (all uncertainty errors are assumed to be normally distributed, and independent except where otherwise stated). There is also uncertainty arising from random fluctuations in the internal state of the climate. Surface temperature simulations from a GCM control run suggest that this error source could add a further error standard deviation of 0.08°C for each decade. However, the matching of the two decades’ characteristics, set out in points a) to c) of the main text, and the fact that some fluctuations will be reflected in OHU, suggest a reduction from the 0.11°C error standard deviation implied by adding two 0.08°C standard deviations in quadrature – say halfway, to 0.095°C. Adding that in quadrature to the observational error standard deviation of 0.08°C gives a total standard deviation of 0.124°C.
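The quadrature combinations in this paragraph can be sketched as follows (all values from the text; the halfway reduction to 0.095°C is the judgment call described above, not a computed quantity):

```python
import math

obs_sd = 0.08        # °C, observational error in the decadal temperature change

internal_per_decade = 0.08   # °C, internal-variability error for each decade
both_decades = math.hypot(internal_per_decade, internal_per_decade)  # ~0.113 °C

# The matching of the two decades (points a-c) argues for a reduction;
# the text takes roughly the halfway point:
internal_sd = 0.095  # °C

total_sd = math.hypot(obs_sd, internal_sd)
print(round(both_decades, 3), round(total_sd, 3))  # 0.113 0.124
```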

The change between 1871‑1880 and 2002–2011 in decadal mean AF, with aerosol forcing scaled to reflect the best recent observational estimates, is 2.09 W/m², taking the average of the Figure 8.18 and RCP4.5 derived estimates (which are both within about 1% of this figure). The total AF uncertainty estimate of ± 0.87 W/m² in Table 8.7 equates to an error standard deviation of 0.44 W/m², which is taken as applying for 2002–2011. Using the observational aerosol forcing error estimate given in Section 7.5.3 instead of the corresponding Table 8.7 uncertainty range gives the same result. Although there would be some uncertainty in the small 1871–1880 mean forcing estimate, the error therein will be strongly correlated with that for 2002–2011. That is because much of the uncertainty relates to the relationships between:

  • concentrations of WMGG and the resulting forcing

  • emissions of aerosols and the resulting forcing,

the respective fractional errors in which are common to both periods. Therefore, the error standard deviation for the change in forcing between 1871–1880 and 2002–2011 could well be smaller than that for the forcing in 2002–2011. However, for simplicity, I assume that it is the same. Finally, I add an error standard deviation of 0.05 W/m² for uncertainty in volcanic forcing in 1871–1880 and a further 0.05 W/m² for uncertainty therein in 2002–2011, small though volcanic forcing was in both decades. Solar forcing uncertainty is included in Table 8.7. Summing the uncertainty variances, the total AF change error standard deviation is 0.45 W/m².

I estimate 2002–2011 OHU from a regression over 2002–2011 of 0–2000 m pentadal ocean heat content estimates per Levitus et al. (2012), inversely weighting observations by their variance. OHU in the 2000–3000 m layer is estimated to be negligible. After conversion from zettajoules/year, the trend equates to 0.433 W/m², averaged over the Earth’s surface. The standard deviation of the OHU estimate as computed from the regression residuals is 0.031 W/m², but because of the autocorrelation implicit in using overlapping pentadal averages the true figure will be much higher. Multiplying the standard deviation by sqrt(5) provides a crude adjustment for the autocorrelation, bringing the standard deviation to 0.068 W/m². There is no alternative to using GCM-derived estimates of OHU for the 1871–1880 period, since there were no measurements then. I adopt the OHU estimate given in Gregory 02 for the bracketing 1861–1900 period of 0.16 W/m², but deduct only 50% of it to compensate for the Levitus et al. (2012) regression trend implying a somewhat lower 2002–2011 OHU than is given in the SOD. Further, to be conservative, I treble Gregory 02’s optimistic-looking standard deviation, to 0.03 W/m². That implies a change in OHU of 0.353 W/m², with a standard deviation of 0.075 W/m², adding the uncertainty variances. Although Gregory 02 ignored non-ocean heat uptake, some allowance should be made for that and also for any increase in ocean heat content below 3000 m. The (slightly garbled) information in Section 3.2.5 of the SOD implies that 0–3000 m ocean warming accounts for 80–85% of the Earth’s total heat uptake, with the error standard deviation for the remainder of the order of 0.03 W/m². Allowing for all components of the Earth’s heat uptake implies an estimated change in total heat uptake of 0.43 W/m² with an error standard deviation of 0.08 W/m².
Natural variability in decadal OHU should be the counterpart of natural variability in decadal global surface temperature, so is not accounted for separately.
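The heat-uptake bookkeeping above can be sketched as follows (all figures are taken from the text; the underlying Levitus regression itself is not reproduced here):

```python
import math

ohu_recent = 0.433                  # W/m², 0-2000 m trend over 2002-2011
sd_recent = 0.031 * math.sqrt(5)    # crude autocorrelation adjustment, ~0.069

ohu_early = 0.5 * 0.16              # 50% of Gregory 02's 1861-1900 estimate
sd_early = 0.03                     # Gregory 02's standard deviation, trebled

d_ohu = ohu_recent - ohu_early                # ~0.353 W/m²
sd_d_ohu = math.hypot(sd_recent, sd_early)    # ~0.075 W/m²

# Non-ocean and below-3000 m heat uptake: ~0.03 W/m² extra uncertainty,
# lifting the estimated change in total heat uptake to ~0.43 W/m²
sd_total = math.hypot(sd_d_ohu, 0.03)         # ~0.08 W/m²
print(round(d_ohu, 3), round(sd_total, 2))
```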


Appendix 3: Derivation of the 5–95% confidence interval

In the table of changes in the variables between 1871–1880 and 2002–2011, I split the AF error standard deviation between that for CO2 and other greenhouse gases (0.291 W/m²), and for all other items (0.343 W/m²). The reason for doing so is this. Almost all the SOD’s 10.2% error standard deviation for greenhouse gas AF relates to the AF magnitude that a given change in the greenhouse gas concentration produces, not to uncertainty as to the change in concentration. When estimating ECS, whatever that error is, it will affect equally the 3.71 W/m² estimated forcing from a doubling of equivalent CO2 concentration used to compute the ECS estimate. Most of the uncertainty in the ratio of AF to concentration is probably common to all greenhouse gases. Insofar as it is not, and the relationship between changes in greenhouse gases is different in the future from the past, the two AF estimation fractional errors will differ. I ignore that here. As most of the past greenhouse gas forcing is due to CO2 and that is expected to be the case in future, any inaccuracy from doing so should be minor.

So, I estimate a 5–95% confidence interval for ECS as follows. Randomly draw a million realisations from each of the following independent Normal(mean, standard deviation) distributions:

a: AF WMGG uncertainty – before scaling – from N(0,1)

b: Total AF without WMGG uncertainty – from N(2.09,0.343)

c: Earth’s heat uptake – from N(0.43,0.08)

d: Surface temperature – from N(0.727,0.124)

and for each quartet of random numbers compute ECS as: 3.71 * (1 + 0.102*a) * d / (0.291*a + b − c).

One then computes a histogram for the million ECS estimates and finds the points below which 5% and 95% of the total estimates lie. The resulting 5–95% range comes out at 1.03 to 2.83°C.
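The Monte Carlo procedure just described can be sketched directly; the random seed is my assumption, added for reproducibility, and is not part of the original calculation:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

a = rng.normal(0.0, 1.0, n)       # WMGG AF uncertainty (unit normal, pre-scaling)
b = rng.normal(2.09, 0.343, n)    # total AF excluding WMGG uncertainty, W/m²
c = rng.normal(0.43, 0.08, n)     # Earth's heat uptake, W/m²
d = rng.normal(0.727, 0.124, n)   # surface temperature change, °C

# ECS for each quartet, as in the formula above
ecs = 3.71 * (1 + 0.102 * a) * d / (0.291 * a + b - c)

p5, p50, p95 = np.percentile(ecs, [5, 50, 95])
print(round(p5, 2), round(p50, 2), round(p95, 2))
```

The 5% and 95% points should come out close to the 1.03–2.83°C range quoted above, with the median near the 1.6°C central estimate.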

UPDATE: Dr. Judith Curry provides her take on the issue, and endorses the leak:





If one takes away nothing else – there is now sufficient data to study climate trends without models, either computer or paleo. Therefore, it makes sense to do so. While computer modeling and paleo reconstructions both have their place, and can lead to valid “science”, policy makers would do well to concentrate on real-data studies, and their implied consequences, for a decade or two. Hundred-year plans of today are about as valid as those of a hundred-plus years ago, which involved much handwringing over uncontrollable mountains of horse-hockey – hmm, often the same topic.

John W. Garrett

Richard Feynman (R.I.P.) is beginning to crack a smile.


Fascinating….. if one takes the “catastrophic” out of CAGW then the public debate should change….. shouldn’t it?
Others will need to assess (‘audit’) the calculations and arguments, but I am glad to see this kind of analysis getting attention. Many warm thanks to Nic Lewis for his important work on this topic, and to the extraordinary blog hosts (Steve Mc, Anthony, Andrew, Judith Curry) who have provided such fertile environments for discussing truly innovative thinking.


Nice. I note that this was based on comparing 1871–1880 to 2002–2011. Considering Willis Eschenbach’s recent article I think it’s likely that future sensitivity will be lower than it has been for the period compared here. The higher the temperature, the more the convection-driven negative feedback kicks in, and hence the lower the sensitivity.

The title is “Why doesn’t the AR5 SOD’s climate sensitivity range reflect its new aerosol estimates?”
The answer is easy. The IPCC is stuck with the baggage of its previous reports. It has been obvious for some time that writing the AR5 was going to be a problem. Either it must omit and ignore a lot of work that has been done since the AR4 which gives a strong indication that previous IPCC conclusions were just plain wrong. Or it must admit that its previous estimates were gross exaggerations. Neither of these alternatives is acceptable to the IPCC.
It remains to be seen what the IPCC will actually do.

Why would the IPCC want to publish a figure of 1.6C for ECS? The figure of 3.0C is much more acceptable, because it exceeds the 2.0C that has been arbitrarily agreed as the target for CAGW. If the IPCC were to publish 1.6C, then there would be no reason for the IPCC to continue to exist. In effect the scientists and bureaucrats involved would have done themselves out of a rather sweet job.
Who in their right mind at the IPCC is going to do that? In effect publish something that says there is no problem, time for us to wrap things up and go home. Imagine hiring an efficiency expert to take a look at your company. How likely is it that the efficiency expert will recommend you fire the efficiency expert to save costs? How much more likely is it that the efficiency expert recommends you fire everyone else and keep the efficiency expert to ensure things are efficient? No matter if you have no sales people, no shipping department, no receivables. At least you will be efficient.
Isn’t that really what the IPCC is saying? Shut down your economies and be efficient. No matter if you can heat your houses or feed your children, that isn’t what matters. What matters is that you live a green lifestyle, leaving all natural resources in the ground where they belong, so that future generations can dig them up and consume them.
But wait, if future generations are going to dig them up and consume the natural resources, what will the future, future generations consume? This means that no generation can ever dig up the natural resources of the planet, and we must leave them for eternity. In which case we might as well dig them up and use them ourselves as no one else will be using them.

David L. Hagen

Compliments on your detailed analysis and identifying major errors that attribute too great a climate sensitivity to CO2 amplification.
Dismal Theorem Correction
To complement your Probability Density Function corrections, I recommend incorporating the major economic projection correction by Ross McKitrick (2012):
Ross McKitrick, Cheering Up the Dismal Theorem, DISCUSSION PAPER 2012-05 University of Guelph DEPARTMENT OF ECONOMICS AND FINANCE

The Weitzman Dismal Theorem (DT) suggests agents today should be willing to pay an unbounded amount to insure against fat-tailed risks of catastrophes such as climate change. . . .Use of the exact measure completely changes the pricing model such that the resulting insurance contract is plausibly small, and cannot be unbounded regardless of the distribution of the assumed climate sensitivity.

Declining Clouds
How does cloud sensitivity affect your analysis/conclusions?
See the evidence of declining cloud cover in Eastman & Warren (2012), where the global average cloud cover declined about 1.56% over 39 years (1971 to 2009), or ~0.4%/decade, primarily in middle latitudes at middle and high levels. A one-percentage-point decrease in albedo (30% to 29%) nominally increases the black-body radiative equilibrium temperature by about 1°C, about equal to a doubling of atmospheric CO2 – e.g., from a 1.5% reduction in clouds, since they form up to 2/3rds of global albedo (IPCC AR4 report, Section 1.5.2, p. 114).
Ryan Eastman, Stephen G. Warren, A 39-Year Survey of Cloud Changes from Land Stations Worldwide 1971-2009 Journal of Climate 2012 ; e-View doi: http://dx.doi.org/10.1175/JCLI-D-12-00280.1
Could this indicate that a portion of warming could be from declining cloud cover, suggesting that the magnification of CO2 warming is even more overrated? (Caution: We only have this recent data, not back to 1871–1880, and thus skewed by the warming portion of the 60 year cycle.)
Now, how do we distinguish which is the cause and which the consequence in clouds vs CO2?
Total Solar Irradiance uncertainty:
IPCC uses the PMOD team’s analysis of Total Solar Irradiance (TSI) BUT it ignores the ACRIM team’s analysis. Nicola Scafetta (2011) shows that alternative reconstructions of the ACRIM Gap in TSI measurements could result in the solar contribution causing 15%, 50% or 60% of the warming; i.e., the TSI uncertainty appears to be very strongly underestimated through the bias of selecting one team’s analysis and ignoring the competing team’s.
Nicola Scafetta (2011) Total Solar Irradiance Satellite Composites and their Phenomenological Effect on Climate. In Evidence-Based Climate Science edited by Don Easterbrook (Elsevier), chap. 12, 289-316.
Technically: The confidence in CO2 climate amplification is very likely overestimated, with climate sensitivity likely to be much lower than IPCC’s AR4 estimates.
Popularly: Lewis + McKitrick + Scafetta = Why worry?
“If the cost of the premium exceeds the cost of the risk, don’t insure.”
Adapt for small changes as they occur!

Rud Istvan

There are additional, quasi independent (all relying on the same observational inputs) ways to reach the same general conclusions about ECS. See the climate change chapter in the new eBook, The Arts of Truth.

John Francis

The focus on average temperature seems wrong to me. Even if this analysis is 100% correct, the likely outcome is daytime highs changing insignificantly, and night time lows warming slightly, due to the T^4 factor. If so, I fail to see any problem at all.


Nice quote from the WSJ article:
“…given the organization’s record of replacing evidence-based policy-making with policy-based evidence-making…”

cui bono

Far from a triumphant grin, it is difficult to repress anger. The waste of time, money and scientific talent. The opportunity cost of all the useless energy systems pushed by the Greens. The propaganda, hype and alarm generated just to turn toddlers into sloganeering anthrophobes.
Almost enough for a large ‘bah humbug’ directed at the responsible parties. Merry Christmas, IPCC.

This estimation of equilibrium climate sensitivity assumes that the climate system is in equilibrium but makes no attempt to verify if this assumption is valid. If there is “warming in the pipeline”, this analysis will underestimate ECS.
This is the enormous advantage of GCM – it is possible to run them until they are in equilibrium.

nice work nic


Or, in four sentences:
The surface absorbes ~200 W/m2 solar (160 directly and 40 via atmospheric reradiation)
The energy transport from the surface is ~500 W/m2, so the system gain is ~2.5.
The next 3.7 W/m2 will see the same 2.5 amplification and will increase the surface emission by 9.25 W/m2. The surface temperature has to increase 1.7 K to get rid of the extra 9.25 watts.
(note no unreliable temp measurements needed)

John Blake

Under Railroad Bill Pachauri, the UN IPCC remains irremediably a New World Order propaganda organ dressed up with pseudo-science a la Rene Blondlot, J.B. Rhine, Immanuel Velikovsky, Trofim Lysenko. Rather than dignify such twits with rational discourse, better to simply cite replicable observations without regard to AGW blowhards’ self-serving Big Government grant monies.

Louis Hooffstetter

My psychic friend Miss Cleo points out the Team’s message leading up to AR5 has been “It’s worse than we thought!” Stephan Rahmstorf and his buddy Tamino clearly proved the planet is still warming according to model projections (thermometers just aren’t recording it because of pesky interference from ENSO and volcanoes):
And once you correct for that, sea levels are rising 60% faster than IPCC projections!:
She says Jim Cripwell is right; there is no way they can admit their previous conclusions were just plain wrong or that previous estimates were gross exaggerations. So Miss Cleo predicts the IPCC is going to double down their SWAGs* in AR5!
*SWAG = Scientific Wild Ass Guess

John F. Hultquist

I read Matt Ridley’s Op-Ed and the comments that followed on-line. As Nic Lewis (Thanks, Nic) shows there is science being done, and also science being ignored in the 2nd Order Draft (SOD). Many of the folks commenting at the WSJ seem clueless and apparently are determined to stay that way. I found this attitude as interesting as the question about equilibrium climate sensitivity (ECS).


richard telford:
At December 19, 2012 at 8:19 am

This estimation of equilibrium climate sensitivity assumes that the climate system is in equilibrium but makes no attempt to verify if this assumption is valid. If there is “warming in the pipeline”, this analysis will underestimate ECS.
This is the enormous advantage of GCM – it is possible to run them until they are in equilibrium.

Equilibrium is reached very fast: i.e. in less than a year.
This is demonstrated by the absence of the “committed warming” predicted (n.b. predicted, not projected) in the AR4 on the basis of GHGs already in the system.
So, allow me to correct your final sentence.
This is the enormous advantage of GCM – it is possible to TUNE them until they SHOW WHATEVER IS DESIRED.

Nic Lewis

Richard Telford wrote:
“If there is “warming in the pipeline”, this analysis will underestimate ECS.”
Not so. The analysis allows for ocean etc. heat uptake, as even a cursory reading of it shows. The “warming in the pipeline” issue relates to part of the increase in forcing going into heating the ocean rather than being radiated into space as a result of surface (and hence atmospheric) temperatures being higher. I suggest you read the Gregory et al. (2002) paper if you don’t believe my explanation.

John West

1) Thank you Nic Lewis!!!!!
2) This is a step in the right direction, not the finale. There’s evidence suggesting even lower ECS (see David L. Hagen comment) perhaps in the 0.3 to 0.7 range.
3) Given that those who seek carbon controls can point to James Hansen’s 2 degrees will be detrimental paper and the range of 1.03 to 2.83 here, they can still insist the precautionary principle demands action, therefore we’ve still got a long way to go.

Theo Goodwin

“The IPCC has placed a huge emphasis on GCM simulations, and the ECS range exhibited by GCMs has played a major role in arriving at the IPCC’s 2–4.5°C ‘likely’ range for ECS. I consider that little credence should be given to estimates for ECS derived from GCM simulations, for various reasons, not least because there can be no assurance that any of the GCMs adequately reflect all key climate processes.”
Warms my heart to read this. In my humble opinion, sceptics deserve credit for the fact that reference to “climate processes” is now taken for granted in sophisticated writing about climate science. In the not too old days, so-called climate scientists were quite happy to base their work on strings of temperature readings without reference to the many and varied climate processes from which those readings were taken. Sceptics have “bent the curve” toward the empirical and genuine science.

Thank you very much Nic, excellent article!


Thank you Nic. Very timely and very precise.
Confirmation that the Skeptical Science Syndrome is not spreading, but rather being laid to waste by folks without optical illusion problems.

Thanks for releasing this Nic 🙂
I’ll just add that the estimated sensitivity would be reduced still further by taking into account the solar forcing beyond TSI that the IPCC now admits to be indicated by the paleo evidence (page 7-43, lines 1-5):

Many empirical relationships have been reported between GCR or cosmogenic isotope archives and some aspects of the climate system (e.g., Bond et al., 2001; Dengel et al., 2009; Ram and Stolz, 1999). The forcing from changes in total solar irradiance alone does not seem to account for these observations, implying the existence of an amplifying mechanism such as the hypothesized GCR-cloud link. We focus here on observed relationships between GCR and aerosol and cloud properties.

The more the forcing behind a given temperature rise the lower the sensitivity, and the sun was a “grand maximum” levels according to Usoskin 2007 from roughly 1920 to 2000, and no, it makes no qualitative difference whether solar activity was merely “high” across this span rather than “exceptional,” as Raimund Muscheler claims. But if you do think it matters, please note that Usoskin thinks Muscheler’s revision is wrong. (I’d look up that last reference too but I have to run!)

P. Solar

Silly SODs at the IPCC seem intent on making a AR5 of themselves.

this is all rubbish
There is no man made global warming:
there never was:
a balance sheet / test result
There is no man made global cooling either
there never was
a balance sheet /test result there either.
All we had is natural warming from 1927
(ignore the “Global” temp. record before that time; they did not even re-calibrate thermometers in those days once they were manufactured)
And now we are on our way to natural cooling since 1998.
note the recent graph of AR5 and see the actual measurements going up from 1992 reaching a maximum in 1998 and now curving down – a real binomial curve clearly visible for anyone who would care to have a good look.
As predicted,
I might add.
Each weather station has its own sine wave, depending on the make up of the chemicals lying on top of the atmosphere.
Global warming.
Global cooling.
Everything is natural….
we are now cooling.
and we will continue to cool, until ca. 2040
Live with it. Prepare for it.


So the simple message is: The observed ECS using IPCC AR5 values is 1.75 degrees. Congratulations are due to the IPCC, the member states, our science community, and concerned citizens of the globe everywhere for having saved the earth through tireless advocacy, study, and innovation. The people of the earth have risen to the challange and beaten our goal. Champagne will served in the lobby, thank you.

I still say this modeling foolishness and obsessing our part of a degree C that is firmly in the error band is simply masturbation with millions of taxpayer dollars. That is simply wrong masturbation should be done by ones self, not in public. This was a nice write up and, thanks Nick.

Bill Treuren

Taking the C out of CAGW is very important. As a member of the oil exploration community it’s clear that peak oil is hard to support, but lifting production by 100% is hard to see being possible, and a bussiness as usual model tends to imply that this will happen.
A sane application of efficient technologies throughout the world where pollution controls focuses on substances that are pollution is what the world needs. so we can all enjoy the fruits of economic growth, not just the elite ruling class that fills the UN.

I think it might be useful to consider what Union General William Tecumseh Sherman said during the American Civil War. WTTE, “A good general gives his opponent two alternatives, both of which are bad.” I am not sure which of us skeptics has achieved this, but I would suggest that now the IPCC, in writing the AR5, is faced with two alternatives, both of which are bad.


If there is significant “warming in the pipeline”, the energy needs to be stored somewhere.for years The atmosphere is not able to hide enough energy without it showing up as heat and radiating out. Humidity alters heat capacity but we watch for that. The deep ocean is often given as the likely storage zone, In principles heat being locked into deep ocean now could come back to the surface decades later. It seems far fetched the energy could sneak to the deep without being spotted on its way down by Argos.
So the “enormous advantage of GCM models” – it is possible to run them using assumptions without supporting observation. The models kind of worked where the training period was warming and the predicted period was also observed to be warming. But now the CO2 is going up and the temp seems fairly, stable and this was not predicted by the GCM in advance. You can always can “improve” a model to account for past failures to predict behavior. Remember Aerosols. However, it is just a glorified gut feeling until the observations back up the assumptions.

D Böehm

Bill Treuren,
I agree with your comment 100%. AGW may exist, but if so, it is only a third order forcing (per Willis Eschenbach’s definition).
There is nothing ‘catastrophic’ about the rise in the (harmless, beneficial) trace gas CO2. Economic growth in the less developed countries is inevitable, therefore CO2 will rise from 0.03% to 0.04%, and the biosphere will be the beneficiary. There is no downside to this minuscule rise in CO2.
Fortunately, the rise in CO2 is truly harmless. Developing countries will continue to generate CO2, regardless. Only the eco-totalitarians oppose economic growth, for their own self-serving, nefarious reasons. Political power and control are their true motivations. Science is nothing but a thin veneer to cover their anti-West agenda.


There are 2 ways to show that a result is wrong:
1. Prove assumptions are wrong.
2. Suppose, assumptions are right and prove computation is wrong.
This estimate follows 2. The good thing about this way is that it is hard to ignore.
Assumptions are most likely wrong as well, because following them, we would still be at the bottom of the little ice age, probably the coldest episode of the last 2000+ years without human interference. Part of this recovery is most likely natural.and sensitivity even lower. Another issue is UHI.

Doug Proctor

There are so many pieces of the puzzle with questionable basis, one wonders what those, like Lewandowsky, mean by having “99%” certainty.
The AR5: what is the “certainty” of the world having at 2100 now, at the 95% level? I’m asking for not a range, but a NUMBER. Or at 2025?
Statistically, what does a high probability of a large range of events mean?


Matt Ridley’s Op-Ed today in the Wall Street Journal was excellent.
Nic Lewis’ post here makes for an even better day. Well done.
And then ferd berple says: December 19, 2012 at 7:48 am on this same post just makes today an even better one.
And then when we have all survived the feared (by some) Mayan apocalypse the weekend will just be absolutely terrific.

Gail Combs

cui bono says:
December 19, 2012 at 8:19 am
Far from a triumphant grin, it is difficult to repress anger…..
I agree and I am old with no children. I can not imagine the anger of a parent when they realize not only have they been fleeced but their children’s future has been mortgaged and trashed just to feed the egos and bank accounts of a bunch of liars and cheats.


@Nic Lewis
“Conclusion: In the light of the current observational evidence, in my view 1.75°C would be a more reasonable central estimate for ECS than 3°C, perhaps with a ‘likely’ range
of around 1.25–2.75°C.”
Wouldn’t that have to be 1.25-2.25°C?
Or am I missing the point completely?

R Barker

Nice work Nic. When model forecasts and observations part company, you have to stay with the observations.

Frank K.

HenryP says:
December 19, 2012 at 9:57 am
“we are now cooling…and we will continue to cool, until ca. 2040
Live with it. Prepare for it.”
Related news story…
Down to -50C: Russians freeze to death as strongest-in-decades winter hits
“Russia is enduring its harshest winter in over 70 years, with temperatures plunging as low as -50 degrees Celsius. Dozens of people have already died, and almost 150 have been hospitalized.”
“The country has not witnessed such a long cold spell since 1938, meteorologists said, with temperatures 10 to 15 degrees lower than the seasonal norm all over Russia.”
“Across the country, 45 people have died due to the cold, and 266 have been taken to hospitals. In total, 542 people were injured due to the freezing temperatures, RIA Novosti reported.”

Mitigation, anyone??


Re: Aldrin et al. 2012.
Is this
* Magne Aldrin et al., 2012, “Bayesian estimation of climate sensitivity based on a simple climate model fitted to observations of hemispheric temperatures and global ocean heat content,” : Environmetrics Volume 23, Issue 3, pages 253–27.
Doesn’t appear to be an empirical estimate!
“The [climate sensitivity] mean is 2.0°C… which is lower than the IPCC estimate from the IPCC Fourth Assessment Report (IPCC, 2007), but this estimate increases if an extra forcing component is added, see the following text. The 95% credible interval (CI) ranges from 1.1°C to 4.3°C, whereas the 90% CI ranges from 1.2°C to 3.5°C.”


Nic, do you have a complete citation handy for
* Ring et al. (2012) also estimates ECS as 1.6°C, using the HadCRUT4 temperature record (1.45°C to 2.01°C using other records).

Mark my, far shorter words. Eventually it will be possible to show that mankind’s contribution to the planetary CO2 budget echos the Reid Bryson effect; ‘You can go outside and spit and have the same effect as doubling carbon dioxide.’


Don’t get me wrong, I do enjoy the articles and the comments although I must confess some of the science is way above my pay grade. I greatly admire those swimming against the tide of ‘scientific consensus’ when it comes to CAGW, discussing the real science behind the climate.
I also feel that a few contributors and commenters on this blog have unrealistic expectations. They seem to believe that when they prove that the science behind the CAGW theory is wrong, people will abandon ‘the project’. I believe that it doesn’t matter anymore what the science says. Global temperatures might flatten while CO2 goes up unabated (As is presently the case), they will still blame anthropogenic CO2 for global warming, even when the data tells them it isn’t warming anymore. Temperatures and CO2 might go up or down, it matters not. For them, Anthropogenic Global Warming is a fact, no matter what the data says.
Hundreds of billions of dollars have been invested by individuals, financial institutions, Governments, oil companies, NGO’s, energy companies, politicians and energy companies in ‘Green Technology’. ‘Green Energy’ and the whole ‘Green Ideology’.
They are not going to change just because the facts are proven wrong.
Compare the CAGW debate with the Hayek- Keynes debate. Proponents and opponents have been debating Hayek versus Keynes for decades. While the reality shows, looking at the economies of Greece, Portugal, Italy, Spain etc., that Keynes isn’t working, people still vote for politicians advocating this model.
The IPCC is part of and is controlled by the biggest quasi- political organisation in the world.
An organisation where democracies have been a minority since 1958.
An organisation that is controlled by a bunch of corrupt, dictatorial countries.
Last year……
Libya, China, Russia, Cuba, Saudi Arabia and Cameroon all had seats on the Human Rights Council.
North Korea had the presidency of the UN Conference on Disarmament
Iran had a seat on the UN’s Commission on the Status of Women.
Pakistan served as acting head of a U.N. body called the Counter-Terrorism Implementation Task Force.
This is an organisation (Of non-elected bureaucrats, politicians and hypocritical self-serving kleptomaniacs) trying to impose a global taxation system.
An organisation trying to limit free speech.
An organisation trying to restrict the flow of information by ‘regulating’ the Internet.
Hardly an organisation you can trust on anything.
And still people support this organisation and take this organisation and its ‘reports’ serious.
So when will CAGW and Keynes be abandoned?
Both theories will be finished when the economy collapses, when there is no more money for their boondoggles and when the sheeple realise they’ve been taken by the hypocritical self-proclaimed elite.

Gail Combs

The bit that stuck out like a sore thumb when I looked at the AR5 table on page 8-39 (hanks Mr Rawls) was the small amount of radiative forcing attributed to H2O. Around 0.1Wm^2 while CO2 gets assigned ~ 1.7Wm^2.
We know are told that H2O is a much stronger greenhouse gas than CO2 and that water vapor is either the same or decreasing [See graph ] so H2O as an ‘amplifier’ of CO2 no longer flies.
So why hasn’t H2O been assigned an amount greater than that of CO2 this time around?
My take home from this whole exercise is everything is assigned a minimum value if positive or a maximum value if negative to amplify the radiative forcing attributed to CO2. This includes leaving out as many other factors as possible.
I can not see how any logical person can believe that AR5 table after even a cursory knowledge of the subject. If that table is bogus then the equilibrium climate sensitivity is also called into question.
I do however appreciate the fact that Nic Lewis has taken the data as presented by the SOD and illustrated where the conclusions drawn are wrong. – Hats off to you.

Gail Combs

Darn the computer hiccuped again.
H2O vs CO2 – http://www.globalwarmingart.com/images/7/7c/Atmospheric_Transmission.png
Global relative Humidity (water vapor) – http://i38.tinypic.com/30bedtg.jpg

Gary Hladik

Alec Rawls says (December 19, 2012 at 9:54 am): “I’ll just add that the estimated sensitivity would be reduced still further by taking into account the solar forcing beyond TSI that the IPCC now admits to be indicated by the paleo evidence (page 7-43, lines 1-5):”
Indeed. If I read the article correctly, if any of the supposed historical temperature increase is caused by factors (solar and/or something else) other than so-called “greenhouse” gasses, then the ECS drops even lower. So the 1.6 degrees per doubling figure is an upper limit.


Isn’t it great that a retired financier who knows some math can set all of the scientists and their graduate students straight? Not sure why anyone would want to bother with a Ph.D. when the the amateurs seem to be so much better. Why don’t the retired finacial wizzards and retired engineers just take over all of those Ph.D. science jobs. US science is the best around the world.. Think of how good we could be if we just replaced all of the know nothing professionals and professors with amateurs. And while we are at it, let’s replace all of those pesky expensive peer-reviewed journals with the WSJ and blogs.

Richard M

It appears the sensitivity is based on a warming of 0.727°C (love the accuracy). If that warming is too high then the sensitivity will also be too high. Given the adjustments to the temperature record I suspect both are true.