They seem to overlook one very important thing. In their method, they look at “variations in yearly global temperatures”. They assume that the envelope created by those variations will reveal an underlying trend, and that a measure of climate sensitivity can then be derived by comparing it to model output. Their analogy in the press release, a weighted spring, reveals their thinking: they regard Earth’s climate as a “constrained system”.

Earth’s climate does have some constraints, but it also has chaos, and the chaotic nature of the myriad forces in Earth’s atmosphere often pushes it beyond what would be considered normal for such constraints. Chaos itself becomes a “forcing”. It is why we get occasional extremes of weather and climate. Edward Lorenz was the first to describe the chaotic nature of the atmosphere, with his “butterfly effect” paper in 1972. http://eaps4.mit.edu/research/Lorenz/Butterfly_1972.pdf

Lorenz describes the evidence that the atmosphere is inherently unstable as “overwhelming”.
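
That instability is easy to demonstrate numerically. The sketch below integrates the standard Lorenz-63 system (the textbook toy model from his 1963 work, nothing specific to the Cox et al. analysis) from two starting points that differ by one part in a million, and prints how quickly the trajectories separate. The parameter values are the usual textbook defaults, and the whole thing is purely illustrative.

```python
# Sketch: sensitive dependence on initial conditions in the Lorenz-63 system.
# Textbook parameters (sigma=10, rho=28, beta=8/3); simple forward-Euler steps
# are good enough to show two nearly identical states diverging.
import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def integrate(state, dt=0.001, steps=40000):
    traj = np.empty((steps, 3))
    for i in range(steps):
        state = state + dt * lorenz63(state)  # forward Euler, fine for a demo
        traj[i] = state
    return traj

a = integrate(np.array([1.0, 1.0, 1.0]))
b = integrate(np.array([1.0, 1.0, 1.000001]))  # perturb one coordinate by 1e-6

separation = np.linalg.norm(a - b, axis=1)
for step in (5000, 10000, 20000, 40000):
    print(f"step {step:6d}: separation = {separation[step - 1]:.6f}")
```

Within a couple of dozen model time units the two runs are as different from each other as two randomly chosen states on the attractor, which is the essence of the “butterfly effect”.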

It’s that instability that they are trying to quantify and put an envelope around, but in my opinion it is a fool’s errand, because there is so much noise in that chaos.

To see why, have a look at this presentation from Stephens et al. 2014.  http://wind.mit.edu/~emanuel/Lorenz/Lorenz_Workshop_Talks/Stephens.pdf

That team asks: “Is Earth’s climate system constrained?”

Their answer is that it is:

– The reflected energy from Earth is highly regulated, and this regulation is by clouds. The most dramatic example of this appears in the hemispheric symmetry of reflected solar radiation

– Hemispheric OLR also appears regulated by clouds

But… Stephens et al. also use the CMIP5 models, and they say this about them:

– Models don’t have the same behavior as the observed Earth – they lack the same degree of regulation and symmetry. Does this really matter? It seems so.

Yes, the problem is clouds. As almost anyone in climatology knows, models don’t do clouds well. If you search the literature you’ll find statements suggesting that clouds limit warming and that clouds enhance warming; there is no good agreement on what effect clouds have actually had on long-term climate trends. But the key component of clouds, water vapor, has been revealed as a primary forcing, as our AGU16 presentation demonstrated:

https://wattsupwiththat.com/2016/12/14/challenging-climate-sensitivity-observational-quantification-of-water-vapor-radiative-forcing-our-agu16-presentation/

In the Cox et al. 2018 paper, they say:

“… the emergent relationship from the historical runs and observational constraint can be combined to provide an emergent constraint on ECS.”

On the face of it, that seems reasonable. However, the flaw lies in how they do it:

“We use an ensemble of climate models to define an emergent relationship.”

First, averaging model output averages the models’ errors along with their predictions. If models don’t do clouds well, and if, as Stephens et al. put it, “Models don’t have the same behavior as the observed Earth – they lack the same degree of regulation and symmetry,” and if the comparison for confirmation is made against the highly biased and adjusted surface temperature record, then all Cox et al. are doing is committing the classic statistical blunder of mistaking correlation for causation. They are looking to the surface temperature record to confirm the forcing in the models, but that record is itself highly dependent on clouds, as well as highly adjusted, and it carries a wide envelope of base noise from the “chaos” that creates weather extremes.
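
For readers who want to see what an “emergent constraint” amounts to mechanically, here is a minimal sketch of the recipe with made-up numbers and a made-up metric name (“psi”); only the procedure mirrors what Cox et al. describe. Each model in the ensemble supplies a value of some observable metric and an ECS; you regress ECS on the metric across the models, then read off the constrained ECS at the observed value of that metric.

```python
# Minimal sketch of the "emergent constraint" recipe (illustrative numbers only):
# 1) across an ensemble of models, regress ECS on an observable metric ("psi"),
# 2) plug the observed value of that metric into the fit to get a constrained ECS.
import numpy as np

# Hypothetical per-model values: variability metric psi (K) and that model's ECS (K).
psi_models = np.array([0.35, 0.40, 0.45, 0.50, 0.55, 0.60, 0.65, 0.70])
ecs_models = np.array([2.1,  2.6,  2.4,  3.1,  3.5,  3.3,  4.0,  4.4])

# Step 1: the "emergent relationship" is just an ordinary least-squares fit.
slope, intercept = np.polyfit(psi_models, ecs_models, 1)

# Step 2: apply the observed metric (also made up here) to that fit.
psi_obs, psi_obs_err = 0.52, 0.05
ecs_central = slope * psi_obs + intercept

# The quoted uncertainty mixes observational error with the regression scatter,
# which is where model error and the smoothing/adjustment of the observations
# feed straight into the final ECS number.
resid_sd = np.std(ecs_models - (slope * psi_models + intercept), ddof=2)
ecs_err = np.hypot(abs(slope) * psi_obs_err, resid_sd)

print(f"constrained ECS ~ {ecs_central:.1f} +/- {ecs_err:.1f} K (illustrative only)")
```

Note where the trouble enters: the regression scatter comes from the models themselves, and the “observed” metric comes from the adjusted, smoothed surface record, so both sources of error flow directly into the final ECS value and its confidence interval.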

This 2013 paper says the following about CMIP5 and clouds: http://onlinelibrary.wiley.com/doi/10.1029/2012JD018575/full

“Despite a variety of discrepancies in the simulated cloud structures, a universal feature is that in all models, the cloud parameterization errors dominate, while the large-scale and the covariation errors are secondary. This finding confirms the deficiency in the current state of knowledge about the governing mechanisms for subgrid cloud processes…”

Really, in my view, all they have done is plot the envelope of possible values, constrain it (Figure 4a), and come up with a new ECS average based on that assumed constraint.

Figure 4 | Sensitivity of the emergent constraint on ECS to window width. a, Central estimate and 66% confidence limits. The thick black bar shows the minimum uncertainty at a window width of 55 yr and the red bar shows the equivalent ‘likely’ IPCC range of 1.5–4.5 K. b, Probabilities of ECS > 4 K (red line and symbols) and ECS < 1.5 K (blue line and symbols).

There’s more noise than signal, and from that they derive a statistical probability for a climate sensitivity of 2.8 °C. I think they are fooling themselves. “The first principle is that you must not fool yourself — and you are the easiest person to fool.” – Richard Feynman

Basically, they are comparing two smoothed time series (HadCRUT4 and CMIP5 model mean) to come up with an ECS value. Statistician William Briggs points out the folly of this: http://wmbriggs.com/post/195/

Now I’m going to tell you the great truth of time series analysis. Ready? Unless the data is measured with error, you never, ever, for no reason, under no threat, SMOOTH the series! And if for some bizarre reason you do smooth it, you absolutely on pain of death do NOT use the smoothed series as input for other analyses! If the data is measured with error, you might attempt to model it (which means smooth it) in an attempt to estimate the measurement error, but even in these rare cases you have to have an outside (the learned word is “exogenous”) estimate of that error, that is, one not based on your current data.

If, in a moment of insanity, you do smooth time series data and you do use it as input to other analyses, you dramatically increase the probability of fooling yourself! This is because smoothing induces spurious signals—signals that look real to other analytical methods.

Figure 1a of Cox et al. 2018 compares smoothed series. The dots represent yearly averages of global temperature.

Figure 1 | Historical global warming. a, Simulated change in global temperature from 16 CMIP5 models (coloured lines), compared to the global temperature anomaly from the HadCRUT4 dataset (black dots). The anomalies are relative to a baseline period of 1961–1990. The model lines are colour-coded, with lower-sensitivity models (λ > 1 W m−2 K−1) shown by green lines and higher-sensitivity models (λ < 1 W m−2 K−1) shown by magenta lines.

The surface temperature record is highly smoothed, and the model mean they chose is smoothed.
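
Briggs’ warning is easy to demonstrate. The sketch below generates pairs of completely independent random series, smooths each with a simple moving average, and compares the correlations before and after smoothing; the series length and window width are arbitrary, chosen only to show the effect.

```python
# Sketch: smoothing two *independent* noise series inflates their apparent correlation.
# This is the "spurious signal" Briggs warns about when smoothed series are fed
# into further analysis. Series length and window width are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)

def moving_average(x, w):
    return np.convolve(x, np.ones(w) / w, mode="valid")

trials, n, window = 1000, 150, 21
raw_r, smooth_r = [], []
for _ in range(trials):
    a, b = rng.standard_normal(n), rng.standard_normal(n)  # independent by construction
    raw_r.append(np.corrcoef(a, b)[0, 1])
    sa, sb = moving_average(a, window), moving_average(b, window)
    smooth_r.append(np.corrcoef(sa, sb)[0, 1])

print(f"mean |r|, raw series:      {np.mean(np.abs(raw_r)):.2f}")
print(f"mean |r|, smoothed series: {np.mean(np.abs(smooth_r)):.2f}")
# Typically roughly 0.07 for the raw pairs versus 0.3 or more once smoothed.
```

The two raw series are unrelated by construction, yet the smoothed versions routinely show correlations several times larger. That is exactly the kind of spurious “signal” that then gets passed into the next stage of an analysis as if it were real.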

And there’s also a bit of cherry-picking on their part:

If we instead use all 39 historical runs in the CMIP5 archive, we find a slightly weaker emergent relationship, but derive a very similar emergent constraint on ECS (Extended Data Table 2).

So why limit the number of models used? Because it lets them believe they are more certain of the ECS value.

I’m also reminded of this quote:

“If your experiment needs statistics, you ought to have done a better experiment.” – Ernest Rutherford

As Dr. Judith Curry says, “Climate [is] a wicked problem”, and that’s why (from the Exeter press release) “…the standard ‘likely’ range of climate sensitivity has remained at 1.5-4.5°C for the last 25 years…”. I don’t think this study has contributed any precision to that problem.

There are other observational estimates of climate sensitivity, and they come up with much lower values.

Willis came up with this:

The results were that the equilibrium climate sensitivity to a change in forcing from a doubling of CO2 (3.7 W/m2) are 0.4°C in the Northern Hemisphere, and 0.2°C in the Southern Hemisphere. This gives us an overall average global equilibrium climate sensitivity of 0.3°C for a doubling of CO2.

https://wattsupwiththat.com/2012/05/29/an-observational-estimate-of-climate-sensitivity/
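
The arithmetic behind a figure like that is simple: a temperature response per W/m² of forcing, scaled by the roughly 3.7 W/m² attributed to a doubling of CO2, then averaged over the two hemispheres. The sketch below just back-calculates the implied slopes from the quoted results; it is not Willis’s actual code or data.

```python
# Back-of-envelope version of the conversion behind the quoted numbers:
# ECS = (temperature response per W/m^2) * (forcing from doubled CO2, ~3.7 W/m^2).
# The per-hemisphere slopes are simply back-calculated from the quoted results.
F_2XCO2 = 3.7                        # W/m^2, standard forcing for doubled CO2

slope_nh = 0.4 / F_2XCO2             # ~0.11 C per W/m^2 implied for the Northern Hemisphere
slope_sh = 0.2 / F_2XCO2             # ~0.05 C per W/m^2 implied for the Southern Hemisphere

ecs_nh = slope_nh * F_2XCO2          # 0.4 C
ecs_sh = slope_sh * F_2XCO2          # 0.2 C
ecs_global = (ecs_nh + ecs_sh) / 2   # 0.3 C, the quoted overall average

print(f"NH {ecs_nh:.1f} C, SH {ecs_sh:.1f} C, global {ecs_global:.1f} C per doubling")
```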

Dr. Roy Spencer came up with this:

In this case, we see that a climate sensitivity of only 1.5 C was required, a 40% reduction in climate sensitivity. Notably, this is at the 1.5C lower limit for ECS that the IPCC claims. Thus, even in the new pause-busting dataset the warming is so weak that it implies a climate sensitivity on the verge of what the IPCC considers “very unlikely”.

http://www.drroyspencer.com/2015/07/new-pause-busting-temperature-dataset-implies-only-1-5-c-climate-sensitivity/

He adds:

The simplicity of the model is not a weakness, as is sometimes alleged by our detractors — it’s actually a strength. Since the simple model time step is monthly, it avoids the potential for “energy leakage” in the numerical finite difference schemes used in big models during long integrations. Great model complexity does not necessarily get you closer to the truth.

In fact, we’ve had 30 years and billions of dollars invested in a marching army of climate modelers, and yet we are no closer to tying down climate sensitivity and thus estimates of future global warming and associated climate change. The latest IPCC report (AR5) gives a range from 1.5 to 4.5 C for a doubling of CO2, not much different from what it was 30 years ago.

Climate sensitivity remains the “Holy Grail” of climate science; a lot of people think they know where it is hidden, but so far it seems nobody has actually found it.
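
As a closing illustration, the kind of “simple model” Spencer refers to above is essentially a one-box forcing-feedback energy balance: a mixed-layer heat capacity, a forcing term, and a feedback parameter, stepped forward monthly. The sketch below is a generic version of that idea with illustrative parameter values; it is not Spencer’s actual model or tuning.

```python
# Generic one-box forcing-feedback energy balance model of the kind Spencer describes:
#   C * dT/dt = F(t) - lambda * T,   stepped monthly.
# All parameter values are illustrative, not Spencer's.
import numpy as np

SECONDS_PER_MONTH = 3.15576e7 / 12   # average month length, in seconds

depth_m = 50.0                       # assumed ocean mixed-layer depth, m
C = 4.18e6 * depth_m                 # heat capacity, J m^-2 K^-1 (water ~4.18 MJ m^-3 K^-1)
lam = 2.5                            # feedback parameter, W m^-2 K^-1 (illustrative)
F_2XCO2 = 3.7                        # W m^-2, forcing for doubled CO2

months = 12 * 200                    # run 200 years
forcing = np.linspace(0.0, F_2XCO2, months)  # ramp the forcing linearly up to a doubling

T = 0.0
for m in range(months):
    dTdt = (forcing[m] - lam * T) / C        # (W m^-2) / (J m^-2 K^-1) -> K s^-1
    T += dTdt * SECONDS_PER_MONTH

print(f"warming at the end of the ramp:        {T:.2f} K")
print(f"equilibrium warming, F_2xCO2 / lambda: {F_2XCO2 / lam:.2f} K")
```

With a feedback parameter of 2.5 W/m²/K the equilibrium warming comes out to about 1.5 °C per doubling, in the neighborhood of the value Spencer reports; the point is only that a handful of lines captures the forcing-feedback bookkeeping that matters for an ECS estimate, which is exactly his argument about simplicity.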