Guest post by Nic Lewis
Many readers will know that I have analysed the Forest et al. 2006 (F06) study in some depth. I’m pleased to report that my paper reanalysing F06 using an improved, objective Bayesian method was accepted by Journal of Climate last month, just before the IPCC deadline for papers to be cited in AR5 WG1, and has now been posted as an Early Online Release, here. The paper is long (8,400 words) and technical, with quite a lot of statistical mathematics, so in this article I’ll just give a flavour of it and summarize its results.
The journey from initially looking into F06 to getting my paper accepted was fairly long and bumpy. I originally submitted the paper last July, fourteen months after first coming across some data that should have matched what was used in F06. The reason it took me that long was partly that I was feeling my way, learning exactly how F06 worked, how to undertake objective statistical inference correctly in its case and how to deal with other issues that I was unfamiliar with. It was also partly because after some months I obtained, from the lead author of a related study, another set of data that should have matched the data used in F06, but which was mostly different from the first set. And it was partly because I was unsuccessful in my attempts to obtain any data or code from Dr Forest.
Fortunately, he released a full set of (semi-processed) data and code after I submitted the paper. Therefore, in a revised version of the paper submitted in December, following a first round of peer review, I was able properly to resolve the data issues and also to take advantage of the final six years of model simulation data, which had not been used in F06. I still faced difficulties with two reviewers – my response to one second review exceeded 9,500 words – but fortunately the editor involved was very fair and helpful, and decided my re-revised paper did not require a further round of peer review.
Forest 2006
First, some details about F06, for those interested. F06 was a ‘Bayesian’ study that estimated climate sensitivity (ECS or Seq) jointly with effective ocean diffusivity (Kv) [1] and aerosol forcing (Faer). F06 used three ‘diagnostics’ (groups of variables whose observed values are compared to model simulations): surface temperature anomalies, global deep-ocean temperature trend, and upper-air temperature changes. The MIT 2D climate model, which has adjustable parameters calibrated in terms of Seq, Kv and Faer, was run several hundred times at different settings of those parameters, producing sets of model-simulated temperature changes. Comparison of these simulated temperature changes with observations provided estimates of how likely the observations were to have occurred at each set of parameter values (taking account of natural internal variability). Bayes’ theorem was then applied: uniform prior distributions for the three parameters were multiplied together, and the resulting uniform joint prior was multiplied by the likelihood function for each diagnostic in turn. The result was a joint posterior probability density function (PDF) for the parameters. The PDF for each individual parameter was then readily derived by integration. These techniques are described in Appendix 9.B of AR4 WG1, here.
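The grid-based scheme just described can be sketched in a few lines. This is a toy illustration only, with one diagnostic and two parameters; the grids, the made-up model response and the Gaussian likelihood below are my assumptions for illustration, not anything taken from F06 or the MIT model:

```python
import numpy as np

# Grids over two of the three parameters (stand-ins for Seq and Kv);
# the ranges and the toy response below are assumptions, not F06's.
seq = np.linspace(0.5, 10.0, 200)   # climate sensitivity (K)
kv = np.linspace(0.1, 8.0, 160)     # effective ocean diffusivity
S, K = np.meshgrid(seq, kv, indexing="ij")

# Hypothetical model-simulated warming at each parameter setting
simulated = 0.3 * np.log(S) + 0.05 * K

observed, sigma = 0.5, 0.1          # observed warming and its uncertainty

# Likelihood of the observation at each grid point (Gaussian errors)
likelihood = np.exp(-0.5 * ((observed - simulated) / sigma) ** 2)

# Uniform joint prior: the posterior is just the normalised likelihood
ds, dk = seq[1] - seq[0], kv[1] - kv[0]
posterior = likelihood / (likelihood.sum() * ds * dk)

# Marginal PDF for Seq, derived by integrating out Kv
pdf_seq = posterior.sum(axis=1) * dk
mode = seq[np.argmax(pdf_seq)]
```

The same machinery extends directly to three parameters and several diagnostics: multiply the likelihoods together before normalising, and integrate out whichever parameters are not of interest.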
Lewis 2013
As noted above, F06 used uniform priors in the parameters. However, the relationship between the parameters and the observations is highly nonlinear, so the use of a uniform parameter prior strongly influences the final PDF. In my paper, therefore, Bayes’ theorem is applied to the data rather than to the parameters: a joint posterior PDF for the observations is obtained from a joint uniform prior in the observations and the likelihood functions. Because the observations have first been ‘whitened’ [2], this uniform prior is noninformative, meaning that the joint posterior PDF is objective and free of bias. Then, using a standard statistical formula, this posterior PDF in the whitened observations can be converted into an objective joint PDF for the climate parameters.
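On the ‘whitening’ of footnote 2: the idea is to transform the observations so that their errors become uncorrelated with unit variance, for example via a Cholesky factor of the error covariance matrix. A minimal sketch, using a made-up two-observation covariance (not F06’s actual error structure):

```python
import numpy as np

# Made-up error covariance for two correlated observations
cov = np.array([[1.0, 0.6],
                [0.6, 1.5]])
obs = np.array([0.4, 0.9])

L = np.linalg.cholesky(cov)          # cov = L @ L.T
white_obs = np.linalg.solve(L, obs)  # whitened observations

# Transforming the covariance the same way yields the identity matrix,
# i.e. the whitened errors are uncorrelated with unit variance
white_cov = np.linalg.solve(L, np.linalg.solve(L, cov).T)
```

In the whitened space a uniform prior has a radially symmetric density, which is what makes it noninformative for the observations.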
The F06 ECS PDF had a mode (most likely value) of 2.9 K (°C) and a 5–95% uncertainty range of 2.1 to 8.9 K. Using the same data, I estimate a climate sensitivity PDF with a mode of 2.4 K and a 5–95% uncertainty range of 2.0–3.6 K, the reduction being primarily due to use of an objective Bayesian approach. Upon incorporating six additional years of model-simulation data, previously unused, and improving diagnostic power by changing how the surface temperature data is used, the central estimate of climate sensitivity using the objective Bayesian method falls to 1.6 K (mode and median), with 5–95% bounds of 1.2–2.2 K. When uncertainties in non-aerosol forcings and in surface temperatures, ignored in F06, are allowed for, the 5–95% range widens to 1.0–3.0 K.
The 1.6 K mode for climate sensitivity I obtain is identical to the modes from Aldrin et al. (2012) and (using the same, HadCRUT4, observational dataset) Ring et al. (2012). It is also the same as the best estimate I obtained in my December non-peer reviewed heat balance (energy budget) study using more recent data, here. In principle, the lack of warming over the last ten to fifteen years shouldn’t really affect estimates of climate sensitivity, as a lower global surface temperature should be compensated for by more heat going into the ocean.
Footnotes
1. Parameterised as its square root
2. Making them uncorrelated, with a radially symmetric joint probability density
The plot below shows how the factor for converting the joint PDF for the whitened observations into a joint PDF for the three climate system parameters (on the vertical axis; units arbitrary) varies with climate sensitivity Seq and ocean diffusivity Kv. This conversion factor is, mathematically, equivalent to a noninformative joint prior for the parameters. The plot is for a slightly different case from that illustrated in the paper, but its shape is almost identical. Aerosol forcing has been set to a fixed value; at other aerosol values the surface scales up or down somewhat, but retains its overall shape.
The key thing to notice is that at high sensitivity values the prior tails off even when ocean diffusivity is low, and at higher Kv values it becomes almost zero. (Ignore the upturn in the front right-hand corner, which is caused by model noise.) The noninformative prior thereby prevents regions where the data respond little to parameter changes from being assigned more probability than the data uncertainty distributions warrant. That is what, correctly, produces better-constrained PDFs than are obtained when uniform priors for the parameters are used.
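This conversion factor can be illustrated numerically: it is the Jacobian determinant of the map from parameters to whitened observations, which is small wherever the data barely respond to the parameters. A sketch with a made-up two-parameter response, chosen purely for illustration (it flattens at high sensitivity, loosely mimicking the shape in the plot; it is not the MIT model):

```python
import numpy as np

def whitened_obs(params):
    # Hypothetical nonlinear response (stand-in for the climate model
    # plus whitening); the data respond less as sensitivity s grows
    s, k = params
    return np.array([np.log(s) + 0.1 * k, np.sqrt(k) / s])

def jacobian_prior(params, eps=1e-6):
    # Noninformative prior ~ |det d(whitened obs)/d(params)|,
    # estimated by central finite differences
    p = np.asarray(params, dtype=float)
    J = np.empty((2, 2))
    for j in range(2):
        dp = np.zeros(2)
        dp[j] = eps
        J[:, j] = (whitened_obs(p + dp) - whitened_obs(p - dp)) / (2 * eps)
    return abs(np.linalg.det(J))

# The prior tails off as sensitivity rises, because the data respond
# less and less to parameter changes there
low_s = jacobian_prior((2.0, 1.0))
high_s = jacobian_prior((9.0, 1.0))
```

For this toy response the prior at sensitivity 9 is roughly a twentieth of its value at sensitivity 2, which is the qualitative behaviour the plot shows.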
Link:
http://www.huffingtonpost.com/2013/04/15/antarctica-summer-ice-melt-antarctic_n_3082750.html
sigh.
Nic // Thank you for posting and informing us of this very interesting and useful research. I regard your result as applicable to the case of zero or at most Stefan-Boltzmann based solar forcing. I base this on the assumption that the MIT model will contain at most a solar forcing based on simple Total Solar Irradiance (i.e., thermal) considerations. That is, we are assuming a prior in which no amplification of simple TSI forcing is admitted, as argued for example in the cosmic ray hypothesis of Svensmark. Therefore, any such non-thermal contribution is subsumed in the other variables and results in a higher climate sensitivity. For example, when a non-thermal component to solar forcing is admitted, Ziskin and Shaviv http://www.phys.huji.ac.il/~shaviv/articles/20thCentury.pdf find that the most likely value of climate sensitivity is 1.0 deg C, with a 95% C.I. (2 sigma) of 0.3–1.6 deg C.
I’d be curious to see what numbers you got if you used UAH or RSS, Nic.
David L. Hagen
“Recommend graphing to show increasing climate sensitivity from left to right as most common graphing to assist understanding.”
Fair point. I’m afraid that when I wrote the code to produce this plot I hadn’t fully mastered R lattice graphics. Actually, I’m not sure that I’ve done so yet.
Tilo Reber
” I’d be curious to see what numbers you got if you used UAH or RSS, Nic.”
Unfortunately, the UAH and RSS temperature records are currently too short to be used in this sort of study.
The intention to use an objective approach is rare in this subject. Warmest congratulations to Nic Lewis. His climate-sensitivity interval is plausible, and his warning that the recent long absence of warming may have no implications for climate sensitivity is timely. I hope to see this valuable addition to the reviewed literature given due weight in the Fifth Assessment Report.
@Monckton: Since there hasn’t been an ‘absence of warming’ it’s hard to see how it could have any implications either way.
Nic, congratulations on the article, and on your various explanatory posts. Seems your intuition from last year about mode 1.6 was about right mathematically.
I find an equally interesting question to be why the IPCC ECS of 3 is so high. I’ve posted on this a number of times, after surviving Dr. Curry’s review over at Climate Etc., plus the underlying GCM errors highlighted in my ebook. Models are tuned to too high a recent temperature due to UHI, homogenization, and all the other problems pointed out by WUWT. Upper-troposphere relative humidity is held roughly constant when observation has it declining, leading to too high a positive water vapor feedback. There is a positive GCM cloud feedback when that is almost certainly neutral or negative. The observational evidence, plus other observational means of getting at ECS, all support something between 1.5 and about 1.9.
Hopefully, your article will have an impact on AR5.
Tonyb
” Can you confirm where the actual data you use is derived from”
The surface temperature data is from the original HadCRUT dataset, as in F06, except when using the additional six years of model-simulation data, which extend beyond the end of the HadCRUT data. For that case I switched to using HadCRUT4 data. So, there are two different versions of Hadley SST involved. The data used goes back to 1907 in the first case and 1902 in the second case. Deep ocean (0-3000 m) temperature data came from the Levitus, 2005 dataset, and covers pentads ending in 1959 to 1995 or 1998. Upper-air data comes from the HadRT dataset and relates to the difference between 1986-95 and 1961-80 averages. The upper-air data has much less effect on parameter inference than do the surface and deep ocean data.
Monckton of Brenchley
Thanks!
I think my paper will be mentioned in AR5 WG1, although I’m doubtful that it will be given much weight.
Nick
Thanks for your reply.
Having investigated the various historic databases covering temperatures of all kinds, I am left thoroughly unimpressed. The least impressive of all are the SST records prior to around 1960.
I wrote about them here
http://judithcurry.com/2011/06/27/unknown-and-uncertain-sea-surface-temperatures/
The records are extremely flimsy and made worse by the substantial amount of interpolation using data that is already very limited. Our knowledge of the deep ocean is very recent and the jury is surely still out on what they are telling us.
I am not criticising you, but basing public policy on data that is so weak is an indictment of the politicians and those advising them.
You may well be right on sensitivity but the existing data is not the basis for proper scientific studies. We need much longer and much more reliable data before we know either way
Tonyb
talldave2 says:
Are you guys sure you’re comparing apples to apples? The difference between like estimates is less than a degree, which doesn’t seem especially large, particularly since he also throws in some changes re the surface data.
The key is the loss of the large tail, not the change in median.
The real bed-wetters get all alarmed because their analysis yields some probability that climate sensitivity is very high. Throw in the precautionary principle, and something must be done now!
Nic shows that, even using their data, their models and their assumptions, there is little to no chance that climate sensitivity is very high. Hence there is no need to panic. The biggest alarmists really won’t want to see that in a peer-reviewed journal.
On top of that there is always the likelihood that their data, their models and their assumptions have been unduly alarmist in the first place.
Rud Istvan
Thanks!
I think that the points you make about why the IPCC ECS of 3 K is so high are valid, although they mainly apply to GCM-derived estimates and (apart from UHI) should in theory not greatly affect observationally-constrained estimates. And in fact, ECS estimates that are constrained by the instrumental temperature record tend to come out well below 3 K. Forest 2006 was the main exception in AR4, leaving aside studies with strange, poorly defined PDFs. Paleo studies come out all over the place, which to my mind reflects the huge uncertainties involved in such studies.
I think it is very unfortunate how much weight is given to GCM-simulation derived ECS estimates in IPCC reports. I cannot see that giving them much weight is justified. But I doubt this will change much in AR5. Many climate scientists seem to place more faith in GCM simulations than in observations of the actual climate system.
Nic,
Glad to hear that your analysis was finally published. Have you considered writing to Andrew Revkin to see if he will post it at dot Earth? You may need to soften the statistics part and explain what it means in the big picture.
Well done, Nic.
It took perseverance to get through the publication maze, but at the end, it can be very satisfying.
Hmm, what seems to be missed here is that climate sensitivity is taken to be a constant. If that were so, then adding data should not substantially alter the result. Since it does, I have to be doubtful about the result.
Excuse my ignorance, but if I read right, you have extracted the sensitivity from a GCM and not real data, so one might conclude that this variation happens because:
A. The models are not representing reality; that is, the truncated data set delivers a prediction significantly different from the longer set, i.e. the short data set fails to project the new data set.
or
B. Climate sensitivity is not a constant. If you look at tropical climate behaviour in general, things saturate. That is, beyond a certain temperature there is a chaotic response to temperature that results in a vast amount of negative feedback (cyclones). This is analogous to an amplifier that has excursed to its power supply: no amount of input forcing will ever produce more output, despite the feedback that might be present. This would occur, for example, when cyclone formation expends all the extra CO2-retained energy. The way I see it, on the available evidence climate sensitivity is very likely to be temperature dependent, and likely chaotic.
If the MIT model used does not involve all of the natural forcings (for example the solar-mediated cosmic ray flux changes), then how does the method distinguish between CO2 forcing and unrecognized other forcings?
Nic, Congrats to you! You should be given a medal for perseverance.
Nice one, Nic – I am no statistician, but it’s nice to see your work based on an existing paper shows even less reason for alarm. Which will probably be more than enough for the Warmists to decry it!
Nic,
Doesn’t this analysis assume that all of the temperature increase in the HadCRUT4 data is due to CO2 tempered by aerosols? Is there a basis for this assumption besides just being a worst case?
Congratulations,
I’ve always found Bayesian analysis rather formidable and have tended to steer clear of it. While I appreciate your argument about uniform priors, which I feel is correct, I would simply ask:
If you add more data to a model, which may be incomplete, and the results change dramatically, are you dealing with an ill-conditioned system? In other words, is the data adequate to support the model? There are methods for investigating this in many systems, but without digging through the maths, can you apply these methods?
Nic Lewis,
Congratulations on successfully getting through peer review.
John
I think climate sensitivity can also be estimated (if it is real) from the work done for the IPCC over the years using a psychologically-based upper limit constraint. If one is a firm believer in CAGW, one’s work is definitely going to accept whatever leeway upwards the limits of decency allow. Some experience helps in knowing how people will exaggerate (especially when they are given the green light to push the envelope by such as the late Dr. Schneider and others). Other data available for the psychological method is the secrecy of the main proponents with their data, algorithms, adjustments, etc. What could there be to hide? Hansen up into the mid to late 1990s was using a 15.5 C average world temp as a base in (1979? anyone?) against which to measure future temps. When the slowdown came, he suddenly began using a 14.5 C past average, chopping a degree off the “cool end” to elevate the difference in a disappointing 1999. The warm end was also enhanced by a stepwise upward adjustment of ~0.5 C. I would say the upper psychological constraint (I like the double meaning) is about half the high-confidence-level temperature. With CS in the 90s calculated at 3-5 C, they did cut it back progressively in the new millennium to 1.5-3 or so. I think they have still given themselves a margin, since the alarms go off at a 2 C rise. Work done by several more neutral researchers arrived at 1-1.5 C. We are getting closer to a sensible number and it is one without alarm.
milodonharlani says: “It finds that climate sensitivity is most likely a lot lower than imagined by IPCC & the alarmosphere, even if slightly higher than maintained by many skeptics who feel that feedbacks like water vapor roughly cancel out the minor effect of increased CO2 concentrations.”
I number myself among those skeptics, but nevertheless I applaud Nic Lewis’s work and congratulate him on having successfully run the peer review gauntlet.
Thanks, too, to Nic and Zeke for clarifying the issue regarding the extra years of data.
Nic Lewis
Between 1987 and 2000 there was a 5% reduction in cloud cover, which resulted in a 0.9 W/m2 forcing and caused 0.3 deg of warming.
A doubling of CO2 would therefore cause about 1.2 deg of warming, assuming the 3.7 W/m2 figure is accurate (which a recent study contests as being too high).
The IPCC and climate modellers continue to ignore the existence of this dominant climate forcing, despite an increase in OLR at the top of the atmosphere which would be impossible for GHGs to produce! Consequently their sensitivities are way out of whack, because the warming caused by this large shortwave forcing is wrongly attributed to a small GHG forcing.
Remove cloud forcing and there is only 0.1 deg C of warming in the 30 years of satellite coverage (vs 0.6 deg C predicted).
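For what it’s worth, the arithmetic in the comment above can be checked directly. The input numbers (0.3 K from 0.9 W/m2 of cloud forcing; 3.7 W/m2 per CO2 doubling) are the commenter’s claims, not established values:

```python
# Back-of-envelope check of the commenter's arithmetic only;
# the input figures are the commenter's claims, not established values.
cloud_forcing = 0.9        # W/m2, claimed 1987-2000 shortwave forcing
cloud_warming = 0.3        # K, claimed resulting warming

sensitivity_per_wm2 = cloud_warming / cloud_forcing   # K per W/m2
co2_doubling_forcing = 3.7                            # W/m2 (contested)

warming_per_doubling = sensitivity_per_wm2 * co2_doubling_forcing
# -> roughly 1.2 K per doubling of CO2, matching the comment's figure
```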
See climate4you.com climate and clouds page for details
Cheers