An objective Bayesian estimate of climate sensitivity

Guest post by Nic Lewis

Many readers will know that I have analysed the Forest et al., 2006, (F06) study in some depth. I’m pleased to report that my paper reanalysing F06 using an improved, objective Bayesian method was accepted by Journal of Climate last month, just before the IPCC deadline for papers to be cited in AR5 WG1, and has now been posted as an Early Online Release, here. The paper is long (8,400 words) and technical, with quite a lot of statistical mathematics, so in this article I’ll just give a flavour of it and summarize its results.

The journey from initially looking into F06 to getting my paper accepted was fairly long and bumpy. I originally submitted the paper last July, fourteen months after first coming across some data that should have matched what was used in F06. The reason it took me that long was partly that I was feeling my way, learning exactly how F06 worked, how to undertake objective statistical inference correctly in its case and how to deal with other issues that I was unfamiliar with. It was also partly because after some months I obtained, from the lead author of a related study, another set of data that should have matched the data used in F06, but which was mostly different from the first set. And it was partly because I was unsuccessful in my attempts to obtain any data or code from Dr Forest.

Fortunately, he released a full set of (semi-processed) data and code after I submitted the paper. Therefore, in a revised version of the paper submitted in December, following a first round of peer review, I was able properly to resolve the data issues and also to take advantage of the final six years of model simulation data, which had not been used in F06. I still faced difficulties with two reviewers – my response to one second review exceeded 9,500 words – but fortunately the editor involved was very fair and helpful, and decided my re-revised paper did not require a further round of peer review.

Forest 2006

First, some details about F06, for those interested. F06 was a ‘Bayesian’ study that estimated climate sensitivity (ECS or Seq) jointly with effective ocean diffusivity (Kv)¹ and aerosol forcing (Faer). F06 used three ‘diagnostics’ (groups of variables whose observed values are compared to model simulations): surface temperature anomalies, global deep-ocean temperature trend, and upper-air temperature changes. The MIT 2D climate model, which has adjustable parameters calibrated in terms of Seq, Kv and Faer, was run several hundred times at different settings of those parameters, producing sets of model-simulated temperature changes. Comparison of these simulated temperature changes to observations provided estimates of how likely the observations were to have occurred at each set of parameter values (taking account of natural internal variability). Bayes’ theorem could then be applied, uniform prior distributions for the three parameters being multiplied together, and the resulting uniform joint prior being multiplied by the likelihood function for each diagnostic in turn. The result was a joint posterior probability density function (PDF) for the parameters. The PDFs for each of the individual parameters were then readily derived by integration. These techniques are described in Appendix 9.B of AR4 WG1, here.
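To make the mechanics concrete, here is a minimal sketch of this kind of grid-based Bayesian updating, reduced to a single toy parameter and a single synthetic observation. The model function and every number below are illustrative inventions, not values from F06.

```python
import numpy as np

# Toy one-parameter version of F06-style grid Bayesian inference.
# The model function and all numbers are illustrative, not from F06.
S_grid = np.linspace(0.5, 10.0, 1000)      # climate sensitivity grid (K)

def model_response(S):
    # Stand-in for the MIT 2D model's simulated warming; the response
    # flattens at high S, mimicking the weak data constraint there.
    return 1.2 * np.log1p(S)

obs, obs_sd = 1.5, 0.25                    # synthetic observation, uncertainty

# Likelihood of the observation at each parameter setting (Gaussian errors).
likelihood = np.exp(-0.5 * ((obs - model_response(S_grid)) / obs_sd) ** 2)

prior = np.ones_like(S_grid)               # uniform prior, as in F06
posterior = prior * likelihood             # Bayes' theorem, up to a constant
posterior /= np.trapz(posterior, S_grid)   # normalise to a proper PDF

cdf = np.cumsum(posterior) * (S_grid[1] - S_grid[0])
mode = S_grid[np.argmax(posterior)]
lo5, hi95 = np.interp([0.05, 0.95], cdf, S_grid)
print(f"mode {mode:.2f} K, 5-95% range {lo5:.2f}-{hi95:.2f} K")
```

With three parameters the grid is three-dimensional, the likelihood for each diagnostic is multiplied in turn, and the marginal PDF for any one parameter is obtained by integrating out the other two.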

Lewis 2013

As noted above, F06 used uniform priors in the parameters. However, the relationship between the parameters and the observations is highly nonlinear and the use of a uniform parameter prior therefore strongly influences the final PDF. Therefore in my paper Bayes’ theorem is applied to the data rather than the parameters: a joint posterior PDF for the observations is obtained from a joint uniform prior in the observations and the likelihood functions. Because the observations have first been ‘whitened’,² this uniform prior is noninformative, meaning that the joint posterior PDF is objective and free of bias. Then, using a standard statistical formula, this posterior PDF in the whitened observations can be converted to an objective joint PDF for the climate parameters.
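The two steps can be sketched in a few lines of code. What follows is a toy illustration under assumed inputs, not the paper's actual computation: whitening is done with a Cholesky factor of an invented error covariance matrix, and the conversion to parameter space uses the standard change-of-variables formula with a numerically estimated Jacobian determinant.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1 - whitening: transform correlated observations y, with error
# covariance C, into uncorrelated unit-variance variables z = L^-1 y,
# where C = L L^T (Cholesky factorisation). C here is invented.
C = np.array([[1.0, 0.6],
              [0.6, 2.0]])
L = np.linalg.cholesky(C)
y = rng.multivariate_normal([0.0, 0.0], C, size=20000)
z = np.linalg.solve(L, y.T).T
print(np.cov(z.T).round(2))                  # ~ identity matrix: whitened

# Step 2 - change of variables: a posterior PDF p_z in whitened
# observation space converts to parameter space theta via
#   p_theta(theta) = p_z(z(theta)) * |det dz/dtheta|.
def z_of_theta(theta):
    # Toy nonlinear map from parameters to whitened model output.
    return np.array([np.log1p(theta[0]), theta[1] / (1.0 + theta[0])])

def jacobian_det(theta, eps=1e-6):
    # |det dz/dtheta| by central finite differences.
    J = np.empty((2, 2))
    for j in range(2):
        d = np.zeros(2)
        d[j] = eps
        J[:, j] = (z_of_theta(theta + d) - z_of_theta(theta - d)) / (2 * eps)
    return abs(np.linalg.det(J))

print(jacobian_det(np.array([2.0, 1.0])))    # conversion factor at one point
```

The determinant factor is the ‘standard statistical formula’ referred to above; as discussed below the figure, it is mathematically equivalent to a noninformative joint prior for the parameters.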

The F06 ECS PDF had a mode (most likely value) of 2.9 K (°C) and a 5–95% uncertainty range of 2.1 to 8.9 K. Using the same data, I estimate a climate sensitivity PDF with a mode of 2.4 K and a 5–95% uncertainty range of 2.0–3.6 K, the reduction being primarily due to use of an objective Bayesian approach. Upon incorporating six additional years of model-simulation data, previously unused, and improving diagnostic power by changing how the surface temperature data is used, the central estimate of climate sensitivity using the objective Bayesian method falls to 1.6 K (mode and median), with 5–95% bounds of 1.2–2.2 K. When uncertainties in non-aerosol forcings and in surface temperatures, ignored in F06, are allowed for, the 5–95% range widens to 1.0–3.0 K.

The 1.6 K mode for climate sensitivity I obtain is identical to the modes from Aldrin et al. (2012) and (using the same HadCRUT4 observational dataset) Ring et al. (2012). It is also the same as the best estimate I obtained in my December non-peer-reviewed heat balance (energy budget) study using more recent data, here. In principle, the lack of warming over the last ten to fifteen years shouldn’t really affect estimates of climate sensitivity, as a lower global surface temperature should be compensated for by more heat going into the ocean.
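For readers unfamiliar with the heat balance approach mentioned here, it takes roughly the following form. This is a generic sketch with placeholder inputs, not the actual numbers of the December study:

```python
# Generic energy-budget (heat balance) estimate of equilibrium climate
# sensitivity: ECS ~= F_2x * dT / (dF - dQ). Placeholder numbers only.
F_2x = 3.7    # forcing from a CO2 doubling, W/m^2 (commonly used value)
dT = 0.75     # change in global mean surface temperature, K
dF = 1.9      # change in total forcing, W/m^2
dQ = 0.2      # change in rate of heat uptake, mostly by the ocean, W/m^2

ECS = F_2x * dT / (dF - dQ)
print(f"ECS ~ {ECS:.2f} K")   # ~1.6 K with these placeholder inputs
```

This also makes the closing point concrete: a flat decade reduces dT, but if it is matched by a rise in ocean heat uptake dQ, the denominator shrinks too and the estimate need not change much.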

Footnotes

  1. Parameterised as its square root
  2. Making them uncorrelated, with a radially symmetric joint probability density

The plot below shows how the factor for converting the joint PDF for the whitened observations into a joint PDF for the three climate system parameters (on the vertical axis – units arbitrary) varies with climate sensitivity Seq and ocean diffusivity Kv. This conversion factor is, mathematically, equivalent to a noninformative joint prior for the parameters. The plot is for a slightly different case to that illustrated in the paper, but its shape is almost identical. Aerosol forcing has been set to a fixed value. At different aerosol values the surface scales up or down somewhat, but retains its overall shape.

[Figure: lewis_2013_fig1 – surface plot of the conversion factor (noninformative joint prior) as a function of Seq and Kv]

The key thing to notice is that at high sensitivity values the prior not only tails off even when ocean diffusivity is low, but becomes almost zero at higher Kv values. (Ignore the upturn in the front right-hand corner, which is caused by model noise.) The noninformative prior thereby prevents more probability being assigned to regions where the data respond little to parameter changes than the data uncertainty distributions warrant. It is this that, correctly, yields better-constrained PDFs than those obtained using uniform priors for the parameters.
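A one-parameter analogue may help intuition here. For Gaussian observation errors of fixed standard deviation, the noninformative (Jeffreys) prior is proportional to |dm/dS|, the sensitivity of the model output m to the parameter S, so it necessarily tails off wherever the data stop responding to the parameter. A toy illustration with an invented response function:

```python
import numpy as np

# Invented model response that saturates at high sensitivity, loosely
# mimicking the weak data response there (not the paper's model).
S = np.linspace(0.5, 10.0, 500)
m = 1.2 * np.log1p(S)

# For Gaussian errors of fixed sd, the Jeffreys prior is prop. to |dm/dS|.
prior = np.abs(np.gradient(m, S))
prior /= np.trapz(prior, S)

# Large where the data respond strongly to S (low S), tailing off toward
# zero where they barely respond (high S) - the behaviour of the surface
# plotted above.
print(prior[0], prior[-1])
```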

Joseph Murphy
April 16, 2013 11:38 am

Does anyone speak this language and possibly provide a quick translation?

John Tillman
April 16, 2013 11:40 am

Sensitivity of 1.6 degrees K for doubling from 280 to 560 ppm CO2, with around 0.7 K observed at current 400 ppm. Sounds about right for a linear increase.

Mpaul
April 16, 2013 11:42 am

Nic, well done. Those of us who are amateur stats enthusiasts will find a lot of great reading in your paper.

milodonharlani
April 16, 2013 11:44 am

Joseph:
It finds that climate sensitivity is most likely a lot lower than imagined by IPCC & the alarmosphere, even if slightly higher than maintained by many skeptics who feel that feedbacks like water vapor roughly cancel out the minor effect of increased CO2 concentrations.

tallbloke
April 16, 2013 11:50 am

Well played on the team’s field, Nic Lewis. Now we need to address the ocean diffusivity issue.

Mac the Knife
April 16, 2013 12:01 pm

Upon incorporating six additional years of model-simulation data, previously unused, and improving diagnostic power by changing how the surface temperature data is used, the central estimate of climate sensitivity using the objective Bayesian method falls to 1.6 K (mode and median), with 5–95% bounds of 1.2–2.2 K.
If a doubling of CO2 leads to just a 1.6 K temperature increase, perhaps it is time to refer to this as Climate Insensitivity.
MtK

Wayne2
April 16, 2013 12:12 pm

My translation: It appears that the temperature increase we expect, based on models, for twice as much CO2 in the atmosphere is about half of what Forest 2006 had calculated. This is the middle-of-the-range increase. Looking at the high end of the range of likely values, this study’s high end is one third of Forest 2006.
Forest focused on parameters of climate models, while this study concentrated on the outputs of the models. Which makes more sense.

Icarus62
April 16, 2013 12:12 pm

Tillman: The current 395ppm CO₂ is a climate forcing of ~1.85W/m². With fast feedbacks alone and a climate sensitivity of 1.6K (0.4K/W/m²) that equates to around 0.45K of transient warming – We’ve actually seen 0.8K, almost twice as much. Therefore a fast feedback climate sensitivity of 1.6K certainly isn’t supported by the magnitude of warming since the pre-industrial. A fast feedback climate sensitivity of 0.75K/W/m² fits the modern observations perfectly, however.

Johan i Kanada
April 16, 2013 12:14 pm

“Sensitivity of 1.6 degrees K for doubling from 280 to 560 ppm CO2, with around 0.7 K observed at current 400 ppm. Sounds about right for a linear increase”
But it’s logarithmic, is it not?
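It is indeed logarithmic under the usual simplified forcing expression. A quick check of the figures in this exchange, using the common approximation F = 5.35 ln(C/C0) W/m² and treating all inputs as illustrative:

```python
import numpy as np

S = 1.6                        # assumed sensitivity, K per CO2 doubling
C0, C = 280.0, 400.0           # pre-industrial and current CO2, ppm

F = 5.35 * np.log(C / C0)      # ~1.9 W/m^2 (~1.85 at 395 ppm)
dT_eq = S * np.log(C / C0) / np.log(2.0)
print(f"F = {F:.2f} W/m^2, equilibrium warming = {dT_eq:.2f} K")
# ~0.8 K at equilibrium; realised (transient) warming is less, since
# part of the response is still being taken up by the ocean.
```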

Berényi Péter
April 16, 2013 12:16 pm

Nic, you are talking about equilibrium climate sensitivity, aren’t you? If so, how long is relaxation time?

Bob_G
April 16, 2013 12:22 pm

The problem as I see it is that the models still use possibly biased surface temperature data (it has been endlessly adjusted) and deep-ocean temperature data that was also recently adjusted. That adds quite a bit of uncertainty. However, the result of the study is at least within what I would consider possible based on past temperature changes during geological ages with varying amounts of CO2.

Zeke Hausfather
April 16, 2013 12:25 pm

Nic,
Good work overall. It’s a huge time commitment to go through the peer review process, especially with hostile reviewers, but at the end of the day it will have much greater impact than a simple blog post.
You mention that “In principle, the lack of warming over the last ten to fifteen years shouldn’t really affect estimates of climate sensitivity”. However, the fact that the addition of only 6 years of data changes the sensitivity estimate so dramatically (such that the 5% estimate of the earlier number is almost at the 95% estimate of the latter) seems to somewhat belie that. The large dependence on short periods of temperatures, which are subject to non-externally-forced factors like ENSO and other decadal and multi-decadal variability, suggests to me that the confidence intervals might be too tight.

Manfred
April 16, 2013 12:35 pm

With due respect to Nic Lewis, much time is spent on this site usefully criticising the wider IPCC reliance upon modeled data for a range of valid reasons. Now, simply because we have a modeled estimate here that yields an arguably more rational and agreeable result, it is of greater interest.
This complex methodology nevertheless remains an estimate, even if “…the observations have first been ‘whitened’”, made ‘free of bias’ and ‘converted to an objective joint PDF for the climate parameters’.
I read recently that it was considered that there was now just about a sufficiency of empirical observations over time to calculate a valid empirical measure of climate sensitivity. So why not?

Editor
April 16, 2013 12:40 pm

Nic
Firstly, sincere congratulations for going down the tortuous route of peer review.
The article is paywalled. Can you confirm where the actual data you use is derived from? For example, are there Hadley SSTs involved, and how far back does the data you used go?
Tonyb

April 16, 2013 1:15 pm

Bayes’ theorem could then be applied, uniform prior distributions for the three parameters being multiplied together, and the resulting uniform joint prior being multiplied by the likelihood function for each diagnostic in turn. The result was a joint posterior probability density function (PDF) for the parameters. The PDFs for each of the individual parameters were then readily derived by integration.
My dear learned friends, I got lost in there.
If I need complex statistics (beyond averages, probability and simple correlation) to understand a natural event, I will happily ignore high-brow statistics. I was once characterized as a ‘man of superior ignorance’; that is my excuse.
However, I will listen with unlimited enthusiasm to what nature has to tell by its simple but fundamental laws of cause and consequence.

Greg House
April 16, 2013 1:21 pm

An objective Bayesian estimate of climate sensitivity Guest post by Nic Lewis: “The 1.6 K mode for climate sensitivity I obtain…”
=======================================================
Your Bayesian estimate is not objective and your 1.6 K cannot be true for physical reasons.
The thermodynamical properties of CO2 are well known beyond “climate science”; adding CO2 at its present concentration to the air would have an effect of something like 0.0001 K, which is negligible.
The alleged CO2-induced warming by returning “back radiation” to the surface, as presented by the IPCC, is physically impossible. “Trapping” IR radiation does not affect the temperature of the source. It must be clear on the theoretical level that otherwise an endless mutual heating would be an inevitable outcome in some cases, which is absurd, and on the experimental level it was demonstrated 100 years ago as well; see the R.W. Wood experiment (1909).

davidmhoffer
April 16, 2013 1:22 pm

Zeke Hausfather;
You mention that “In principle, the lack of warming over the last ten to fifteen years shouldn’t really affect estimates of climate sensitivity”. However, the fact that the addition of only 6 years of data changes the sensitivity estimate so dramatically (such that the 5% estimate of the earlier number is almost at the 95% estimate of the latter) seems to somewhat belie that.
>>>>>>>>>>>>>>>>>
Ohmigosh, I’m about to agree with ZH, who is exactly correct on this matter. The fact that 6 years of data so dramatically alters the sensitivity calculation is evidence (to me at least) that it is wrong. There are other factors affecting temps, which can be either positive or negative, that this (and the IPCC approach) simply are not taking into account. Until there is a mechanism by which ALL forcing factors can be identified and quantified with some degree of precision, it will be impossible to isolate the sensitivity to any given one (such as CO2), for the simple reason that we’ve inadvertently got other “stuff” in the data that is affecting the calculation and producing a wrong result. If the PDO for example were to go wildly negative, pushing down global temps for another 10 years, would we conclude that CO2 sensitivity had declined further? That wouldn’t make sense, would it?

April 16, 2013 1:26 pm

How much time shall we spend trying to calculate “climate sensitivity” using the CO2 rise and the warm phase of the AMO cycle? It makes NO SENSE! If you use 1910–1945 HadCRUT data, you will get a ten times higher climate sensitivity, since using this mechanistic approach you will get 0.7 deg C of warming from a 10 ppm CO2 increase. The CO2 effect is not recognizable looking at CET or GISP2 data, the non-existing hotspot, or whatever else. It is a virtual concept, existing in PC models only.

talldave2
April 16, 2013 1:29 pm

Zeke/davidmhoffer,
Are you guys sure you’re comparing apples? The difference between like estimates is less than a degree, which doesn’t seem especially large, especially since he also throws in some changes re the surface data.
“Using the same data, I estimate a climate sensitivity PDF with a mode of 2.4 K and a 5–95% uncertainty range of 2.0–3.6 K, the reduction being primarily due to use of an objective Bayesian approach. Upon incorporating six additional years of model-simulation data, previously unused, and improving diagnostic power by changing how the surface temperature data is used, the central estimate of climate sensitivity using the objective Bayesian method falls to 1.6 K (mode and median), with 5–95% bounds of 1.2–2.2 K. “

Nic Lewis
April 16, 2013 1:33 pm

Zeke,
Thanks for your comments. As I wrote, the 95% estimate of 2.2 K that you refer to does not allow for all uncertainties, hence my giving also the higher 95% estimate of 3.0 K.
The difference between the two sensitivity estimates is something that was considered in some detail in peer review, as you can imagine. It relates not just to the use of six additional years’ data, but to the redesign of the surface temperature ‘diagnostic’ to improve its power. This is explained in more detail in the paper. If you would like a copy, just let me know. With the original surface diagnostic design and data extending only to the decade ending 1995, the signal-to-noise ratio was insufficient to properly constrain climate sensitivity. Going forwards in time, the signal to noise ratio will improve further, so estimates should become increasingly stable. Yes, ENSO and other decadal and multi-decadal internal variability will still be a problem, but should be less so now that ocean temperatures are better monitored. Updating from HadCRUT (which does not extend to 2001) to HadCRUT4 temperature data may also have had some effect on the ECS estimation. Ring et al (2012) reported a cumulative 0.5 K decrease in its ECS estimate as a result of that change.
In fact, as reported in my paper, using the original data to 1995 and surface diagnostic design, at the best-fit parameter values the MIT 2D model-simulated global mean temperature change between the first twenty and last twenty years of the simulation period (which happens to span about two AMO cycles) is one third or more higher than observed. Using the best-fit parameter values derived using the extended data to 2001 and the revised surface diagnostic design, the corresponding rise in model-simulated temperature is closely in line with that observed. That provides substantial support for thinking that results using the original data to 1995 and surface diagnostic design are flawed, and do not properly reflect the observational data.

Dr Burns
April 16, 2013 1:36 pm

It would be interesting to see a detailed analysis of this, because it very much rings of more modelling nonsense: “take advantage of the final six years of model simulation data”.

April 16, 2013 1:37 pm

Lewis goes wrong, of course, by taking the basic science as “settled”, and thus mistaking model runs, based upon that in fact false science, for factual observations of the real world. I posted the following comment on the Bishop Hill site this morning:
“All of that probability jargon is so much learned idiocy… Anyone who thinks they can get to the heart of the matter through such ornate, but empty, advanced mathematical rhetoric (that is irrelevant, immaterial, and incompetent, in the immortal words of Perry Mason) is an incompetent fool–and climate scientists all fit that bill, like it or not. The bare facts, comparing CO2 to temperature, have long indicated an insubstantial CO2 climate sensitivity (and indeed, a CO2 level dependent upon the temperature, not vice-versa), and the definitive evidence, of a proper Venus/Earth temperatures comparison, over the full range of Earth tropospheric pressures, shows the CO2 climate sensitivity is zero. There will be no sanity, much less real progress, in climate science until that is properly confronted and generally accepted.”

Manfred
April 16, 2013 1:45 pm

“In principle, the lack of warming over the last ten to fifteen years shouldn’t really affect estimates of climate sensitivity, as a lower global surface temperature should be compensated for by more heat going into the ocean.”
In principle, perhaps; according to the last six years’ data, perhaps not. With more PDO-negative data coming in now and AMO-negative data coming soon, and at the same time very small variations in heat content, this sensitivity estimate may still be too high. Data spanning about equally over both PDO and AMO modes would therefore be desirable. And solar amplification is not even considered.
However, good to see that studies get more sophisticated and errors finally corrected.

David L. Hagen
April 16, 2013 1:57 pm

Nic
Recommend graphing to show increasing climate sensitivity from left to right, as is most common in graphing, to assist understanding.
David

April 16, 2013 2:16 pm

Nic // Thank you for posting and informing us of this very interesting and useful research. I regard your result as applicable to the case of zero or at most Stefan-Boltzmann-based solar forcing. I base this on the assumption that the MIT model will contain at most a solar forcing based on simple Total Solar Irradiance (i.e., thermal) considerations. That is, we are assuming a prior in which no amplification of simple TSI forcing is admitted, as argued for example in the cosmic ray hypothesis of Svensmark. Therefore, any such non-thermal contribution is subsumed in the other variables and results in a higher climate sensitivity. For example, when a non-thermal component to solar forcing is admitted, Ziskin and Shaviv http://www.phys.huji.ac.il/~shaviv/articles/20thCentury.pdf find that the most likely value of climate sensitivity is 1.0 deg C, with a 95% C.I. (2 sigma) of 0.3–1.6 deg C.

Tilo Reber
April 16, 2013 2:26 pm

I’d be curious to see what numbers you got if you used UAH or RSS, Nic.

Nic Lewis
April 16, 2013 2:31 pm

David L. Hagen
“Recommend graphing to show increasing climate sensitivity from left to right, as is most common in graphing, to assist understanding.”
Fair point. I’m afraid that when I wrote the code to produce this plot I hadn’t fully mastered R lattice graphics. Actually, I’m not sure that I’ve done so yet.

Nic Lewis
April 16, 2013 2:49 pm

Tilo Reber
” I’d be curious to see what numbers you got if you used UAH or RSS, Nic.”
Unfortunately, the UAH and RSS temperature records are currently too short to be used in this sort of study.

Monckton of Brenchley
April 16, 2013 2:54 pm

The intention to use an objective approach is rare in this subject. Warmest congratulations to Nic Lewis. His climate-sensitivity interval is plausible, and his warning that the recent long absence of warming may have no implications for climate sensitivity is timely. I hope to see this valuable addition to the reviewed literature given due weight in the Fifth Assessment Report.

icarus62
April 16, 2013 2:57 pm

@Monckton: Since there hasn’t been an ‘absence of warming’ it’s hard to see how it could have any implications either way.

Rud Istvan
April 16, 2013 2:59 pm

Nic, congratulations on the article, and on your various explanatory posts. Seems your intuition from last year about mode 1.6 was about right mathematically.
I find an equally interesting question to be why the IPCC ECS of 3 is so high. I have posted on this a number of times, after surviving Dr. Curry’s review over at Climate Etc., plus on the underlying GCM errors highlighted in my ebook. Models are tuned to too high a recent temperature due to UHI, homogenization, and all the other problems pointed out by WUWT. UTrH is held roughly constant when observation has it declining, leading to too high a water vapor positive feedback. There is a GCM positive cloud feedback when that is almost certainly neutral or negative. The observational evidence, plus other observational means of getting at ECS, all support something between 1.5 and about 1.9.
Hopefully, your article will have an impact on AR5.

Nic Lewis
April 16, 2013 3:01 pm

Tonyb
” Can you confirm where the actual data you use is derived from”
The surface temperature data is from the original HadCRUT dataset, as in F06, except when using the additional six years of model-simulation data, which extend beyond the end of the HadCRUT data. For that case I switched to using HadCRUT4 data. So, there are two different versions of Hadley SST involved. The data used goes back to 1907 in the first case and 1902 in the second case. Deep ocean (0-3000 m) temperature data came from the Levitus, 2005 dataset, and covers pentads ending in 1959 to 1995 or 1998. Upper-air data comes from the HadRT dataset and relates to the difference between 1986-95 and 1961-80 averages. The upper-air data has much less effect on parameter inference than do the surface and deep ocean data.

Nic Lewis
April 16, 2013 3:24 pm

Monckton of Brenchley
Thanks!
I think my paper will be mentioned in AR5 WG1, although I’m doubtful that it will be given much weight.

climatereason
Editor
April 16, 2013 3:30 pm

Nic
Thanks for your reply.
Having investigated the various historic databases covering temperatures of all kinds, I am thoroughly unimpressed. The least impressive of all are the SST records prior to around 1960.
I wrote about them here
http://judithcurry.com/2011/06/27/unknown-and-uncertain-sea-surface-temperatures/
The records are extremely flimsy and made worse by the substantial amount of interpolation using data that is already very limited. Our knowledge of the deep ocean is very recent, and the jury is surely still out on what the measurements are telling us.
I am not criticising you, but basing public policy on data that is so weak is an indictment of the politicians and those advising them.
You may well be right on sensitivity, but the existing data is not the basis for proper scientific studies. We need much longer and much more reliable data before we know either way.
Tonyb

Mooloo
April 16, 2013 3:40 pm

talldave2 says:
Are you guys sure you’re comparing apples? The difference between like estimates is less than a degree, which doesn’t seem especially large, especially since he also throws in some changes re the surface data.

The key is the loss of the large tail, not the change in median.
The real bed-wetters get all alarmed because their analysis yields some probability that climate sensitivity is very high. Throw in the precautionary principle, and something must be done now!
Nic shows that, even using their data and their models and their assumptions, there is little to no chance that climate sensitivity is very high. Hence there is no need to panic. The biggest alarmists really won’t want to see that in a peer-reviewed journal.
On top of that there is always the likelihood that their data, their models and their assumptions have been unduly alarmist in the first place.

Nic Lewis
April 16, 2013 3:41 pm

Rud Istvan
Thanks!
I think that the points you make about why the IPCC ECS of 3 K is so high are valid, although they mainly apply to GCM-derived estimates and (apart from UHI) should in theory not greatly affect observationally-constrained estimates. And in fact, ECS estimates that are constrained by the instrumental temperature record tend to come out at well below 3 K. Forest 2006 was the main exception in AR4, leaving out studies with strange, poorly defined PDFs. Paleo studies come out all over the place, which to my mind reflects the huge uncertainties involved in such studies.
I think it is very unfortunate how much weight is given to GCM-simulation derived ECS estimates in IPCC reports. I cannot see that giving them much weight is justified. But I doubt this will change much in AR5. Many climate scientists seem to place more faith in GCM simulations than in observations of the actual climate system.

RHL
April 16, 2013 3:43 pm

Nic,
Glad to hear that your analysis was finally published. Have you considered writing to Andrew Revkin to see if he will post it at Dot Earth? You may need to soften the statistics part and explain what it means in the big picture.

RomanM
April 16, 2013 3:55 pm

Well done, Nic.
It took perseverance to get through the publication maze, but at the end, it can be very satisfying.

bobl
April 16, 2013 3:59 pm

Hmm, what seems to be missed here is that climate sensitivity is taken to be a constant. If that were so, then adding data should not substantially alter the result. Since it does, I have to be doubtful about the result.
Excuse my ignorance, but if I read right, you have extracted the sensitivity from a GCM and not real data, so one might conclude that this variation happens because:
A. The models are not representing reality; that is, the truncated data set delivers a prediction significantly different from the longer set, i.e. the short data set fails to project the new data set.
or
B. Climate sensitivity is not a constant. If you look at tropical climate behaviour in general, things saturate. That is, beyond a certain temperature there is a chaotic response to temperature that results in a vast amount of negative feedback (cyclones). This is analogous to an amplifier that has swung to its power supply rail: no amount of input forcing will ever produce more output, despite the feedback that might be present. This would occur, for example, when cyclone formation expends all the extra CO2-retained energy. The way I see it, on the available evidence climate sensitivity is very likely to be temperature dependent, and likely chaotic.

GlynnMhor
April 16, 2013 3:59 pm

If the MIT model used does not involve all of the natural forcings (for example the solar-mediated cosmic ray flux changes), then how does the method distinguish between CO2 forcing and unrecognized other forcings?

April 16, 2013 4:00 pm

Nic, Congrats to you! You should be given a medal for perseverance.

Alexander K
April 16, 2013 4:06 pm

Nice one, Nic – I am no statistician, but it’s nice to see that your work based on an existing paper shows even less reason for alarm. Which will probably be more than enough for the Warmists to decry it!

Scott Scarborough
April 16, 2013 4:10 pm

Nic,
Doesn’t this analysis assume that all of the temperature increase in the HadCRUT4 data is due to CO2 tempered by aerosols? Is there a basis for this assumption besides just being a worst case?

RCSaumarez
April 16, 2013 4:21 pm

Congratulations,
I’ve always found Bayesian analysis rather formidable and have tended to steer clear of it. While I appreciate your argument about uniform priors, which I feel is correct, I would simply ask:
If you add more data to a model, which may be incomplete, and the results change dramatically, are you dealing with an ill-conditioned system? In other words, is the data adequate to support the model? There are methods for investigating this in many systems, but without digging through the maths, can you apply these methods?

John Whitman
April 16, 2013 4:36 pm

Nic Lewis,
Congratulations on your peer review success.
John

Gary Pearse
April 16, 2013 5:05 pm

I think climate sensitivity can also be estimated (if it is real) from the work done for the IPCC over the years, using a psychologically-based upper limit constraint. If one is a firm believer in CAGW, one’s work is definitely going to accept whatever leeway upwards the limits of decency allow. We have some experience of how people will exaggerate (especially when they are given the green light to push the envelope by the likes of the late Dr. Schneider and others). Other data available for the psychological method is the secrecy of the main proponents with their data, algorithms, adjustments, etc. What could there be to hide? Hansen up into the mid to late 1990s was using a 15.5 C average world temp as a base (in 1979? anyone?) against which to measure future temps. When the slowdown came, he suddenly began using a 14.5 C past average – chopping a degree off the “cool end” to elevate the difference in a disappointing 1999. The warm end was also enhanced by a stepwise upward adjustment of ~0.5 C. I would say the upper psychological constraint (I like the double meaning) is about half the high-confidence-level temperature. With CS in the 90s calculated at 3–5 C, they did cut it back progressively in the new millennium to 1.5–3 or so. I think they have still given themselves a margin, since the alarms go off at a 2 C rise. Work done by several more neutral researchers arrived at 1–1.5 C. We are getting closer to a sensible number and it is one without alarm.

jorgekafkazar
April 16, 2013 5:17 pm

milodonharlani says: “It finds that climate sensitivity is most likely a lot lower than imagined by IPCC & the alarmosphere, even if slightly higher than maintained by many skeptics who feel that feedbacks like water vapor roughly cancel out the minor effect of increased CO2 concentrations.”
I number myself among those skeptics, but nevertheless I applaud Nic Lewis’s work and congratulate him on having successfully run the peer review gauntlet.
Thanks, too, to Nic and Zeke for clarifying the issue regarding the extra years of data.

Rob JM
April 16, 2013 5:28 pm

Nic Lewis
Between 1987 and 2000 there was a 5% reduction in cloud cover, which resulted in a 0.9 W/m2 forcing and caused 0.3 deg of warming.
A doubling of CO2 would therefore cause about 1.2 deg of warming, assuming the 3.7 W/m2 is accurate (which a recent study contests as being too high).
The IPCC and climate modellers continue to ignore the existence of this dominant climate forcing, despite an increase in OLR at the top of the atmosphere which would be impossible for GHGs to produce! Consequently their sensitivities are way out of whack, because the warming caused by this large shortwave forcing is wrongly attributed to a small GHG forcing.
Remove cloud forcing and there is only 0.1 deg C of warming in the 30 years of satellite coverage (vs 0.6 deg C predicted).
See climate4you.com climate and clouds page for details
Cheers

Matthew R Marler
April 16, 2013 5:31 pm

According to Jay Kadane in “Principles of Uncertainty” there is no such thing as an “objective” prior. How do you know that your prior is accurate? That is, how do you justify it as providing an accurate posterior?
Please let us know when the paper comes out from behind the paywall.

Philip Bradley
April 16, 2013 5:32 pm

davidmhoffer says:
April 16, 2013 at 1:22 pm
I’m about to agree with ZH, who is exactly correct on this matter. The fact that 6 years of data so dramatically alters the sensitivity calculation is evidence (to me at least) that it is wrong.

I also agree. If a result is derived from empirical data, and the empirical data changes significantly, then the result will change.
There are other factors affecting temps that can be either positive or negative that this (and the IPCC approach) simply are not taking into account. Until there is a mechanism by which ALL forcing factors can be identified and quantified with some degree of precision, it will be impossible to isolate the sensitivity to any given one (such as CO2) for the simple reason that we’ve inadvertently got other “stuff” in the data that is affecting the calculation and producing a wrong result.
Such a mechanism is, of course, impossible. If n forcings are known, there can never be a proof that an n+1 forcing doesn’t exist. I recall Popper dealt with this issue at length.
Were the known forcings capable of producing highly accurate forecasts, we could say that all significant forcings are known, but we are very far from that state.
Now, off to read up on ocean diffusivity, which seems the key to understanding this paper.

Matthew R Marler
April 16, 2013 5:52 pm

Upon incorporating six additional years of model-simulation data, previously unused, and improving diagnostic power by changing how the surface temperature data is used, the central estimate of climate sensitivity using the objective Bayesian method falls to 1.6 K (mode and median), with 5–95% bounds of 1.2–2.2 K. When uncertainties in non-aerosol forcings and in surface temperatures, ignored in F06, are allowed for, the 5–95% range widens to 1.0–3.0 K.
The 1.6 K mode for climate sensitivity I obtain is identical to the modes from Aldrin et al. (2012) and (using the same, HadCRUT4, observational dataset) Ring et al. (2012). It is also the same as the best estimate I obtained in my December non-peer reviewed heat balance (energy budget) study using more recent data, here. In principle, the lack of warming over the last ten to fifteen years shouldn’t really affect estimates of climate sensitivity, as a lower global surface temperature should be compensated for by more heat going into the ocean.

These are highly model dependent statements, however you estimate the parameters in the model, and whatever priors you choose. If in fact there is an increased cloudiness that blocks sunlight, or if the response of the climate to variations in some aspects of solar output have been mis-specified, then you basically have a computationally intensive method for writing on the water surface.
It appears from your presentation that you disliked the Forest et al procedure because it produced a posterior distribution that was discordant from your personal prior, and you liked your procedure because the formal prior you used produced a posterior distribution that was closer to your personal prior. As an exercise in showing how posterior distributions depend on prior distributions that’s nice; but you have provided no substance for claiming that your credible interval is better than the credible interval that you critiqued. Now that you have practice with the software, you should be able to compute a 95% credible interval for the ECS of a future doubling that is approximately (-2.0 K, 2.0 K). All you need is a prior that puts the right amount of positive probability for ECS on the negative reals.

bw
April 16, 2013 5:54 pm

Paleo-climatology shows pre-industrial temps change on millennial scales. Vostok cores show that CO2 then follows those temp changes. The biological carbon cycle follows temperature.
The total quantity of actual evidence that adding CO2 to the atmosphere causes any warming is zero.
About the only climate where CO2 might have a measurable sensitivity is a land desert, or a planet without water.
Stratospheric CO2 is going up, but the temperature is going down. Reliable surface thermometers show zero warming over decades. Satellite data show no significant warming globally since 1979.
If you wanted to change the temps, change the surface albedo; that’s what plants do.

fjodor
April 16, 2013 6:04 pm

By what mechanism does the heat suddenly decide to disappear into the depths of the ocean, bypassing the atmosphere as it goes?

Crispin in Waterloo but actually in Yogyakarta
April 16, 2013 6:15 pm

Bradley says:
“I also agree. If a result is derived from empirical data, and the empirical data changes significantly, then the result will change.”
Hmm…that is not how I read it. The 6 years’ data were added but the methods used were refined, and the conclusion was that the 6 years (stasis) did not affect the result and will not affect the result in future. I understand the explanation to be that if F06 had been processed more carefully it would have given a different and more constrained result. The addition of another 6 years of data does not change the forcing estimate.
I am with bobl on the idea that the sensitivity changes, but that merely means we are at the stage where the number of forcings is n+m where m is large.
Bobl, things don’t ‘just change’. For every effect there is a cause, save that group of effects which are uncaused. The number of uncaused effects is very small. [See “Minimalism, the new philosophy” by William S Hatcher]

Philip Bradley
April 16, 2013 7:09 pm

Crispin, Nic said,
In principle, the lack of warming over the last ten to fifteen years shouldn’t really affect estimates of climate sensitivity, as a lower global surface temperature should be compensated for by more heat going into the ocean.
That is a rather large assumption. Essentially he is assuming that the forcings in the model are correct. Whereas, I’d suggest that continued lack of warming is (accumulating) evidence the model forcings are too high (overall).

Crispin in Waterloo but actually in Yogyakarta
April 16, 2013 7:22 pm

Bradley
I agree that he accepted their assumptions. To me that is a very strong argument and difficult for the original author to refute because one is accepting their own assumptions. It is thus an examination of the processing method, not an examination of the assumptions. It is a mathematical approach.
Monckton uses the same technique: “Assuming your ‘givens’ are correct, this is what the correct answer should be.” That does not lend credence to the assumptions, just that if they were correct, here is how to properly calculate the answer.

Crispin in Waterloo but still in Yogyakarta
April 16, 2013 7:51 pm

@Matthew R Marler says:
MRM “It appears from your presentation that you disliked the Forest et al procedure because it produced a posterior distribution that was discordant from your personal prior, and you liked your procedure because the formal prior you used produced a posterior prior that was closer to your personal prior.”
I find this a shabby response. It amounts to an accusation that mathematical rigour is susceptible to personal opinion. In my view this work shows that F06 was not executed with the necessary rigour. When executed correctly, the answer is different and better constrained. The reasons for these differences are given.
MRM “As an exercise in showing how posterior distributions depend on prior distributions that’s nice; but you have provided no substance for claiming that your credible interval is better than the credible interval than the credible interval that you critiqued.”
This statement leads me to believe you have not understood the methods applied and the reasons for doing so. You are projecting upon the author a personal motivation that does not appear to exist in either his work or F06. I find such intimations diversionary, and I recommend that both authors ignore them and continue this discussion, as it should be, via published works.
If you, MRM, wish to take up the cause of personal motivations affecting the scientific discussion, I suggest you address the misuse of personal influence by well known people in attempting to suppress the publication of works that conflict with their own opinions as is well documented in the ClimateGate I and II emails. There you will find no shortage of abuse during the editorial process upon which you can fix your gaze.

Philip Bradley
April 16, 2013 7:57 pm

Crispin, nicely summarized. Nic’s article would be easier to understand if he explicitly stated this.

Crispin in Waterloo but still in Yogyakarta
April 16, 2013 9:37 pm

Bradley
Thanks. I felt that it is outlined clearly enough if one can follow the article – certainly for the intended audience. I think that Monckton is more specific in pointing it out because the audience he addresses is not used to a skeptical analysis that accepts the CAGW assumptions even for a theoretical analysis of their methodology. It is why they dislike him so. They have no coherent rebuttals and default to personal attacks, further undermining their own positions.
I do not think MRM should be accorded space to attempt the same thing given zero evidence. The paper is mathematical and accepts as a given the F06 assumptions. Another follow-up paper could challenge the assumptions and perform the same (corrected) calculations to provide yet another result. The follow-up paper could be written by anyone conversant with the subject, hopefully as conversant as is the fine author above.

pottereaton
April 16, 2013 10:02 pm

Re-posted from Bishop Hill:
How refreshing! A scientist/statistician writes a paper. It is published in a respected journal. He comes to a well-known blog, announces it and describes its content in a straightforward way. He then stays online and answers questions from all comers.
By contrast, compare that to the orchestrated media circus that accompanied the release of Marcott et al and Gergis et al, each of which was followed shortly thereafter by the disappearance and unavailability of the principal authors.
theduke

Greg Goodman
April 17, 2013 12:09 am

Nic, thank you for putting in all the hard work and getting this paper published. It is encouraging to see someone who understands these methods and how to apply them publishing in climate science.
Like some others here I was a little surprised that you suggest the recent “pause” does not make much difference. To look at this very simplistically: would this method, despite its complexity, not be subject to error in taking a trough-to-peak period for analysis?
Most climate models are developed principally to reproduce the 1960-1990 period as accurately as possible (even John Kennedy used the word “tuned” in our discussion on Judith’s site http://judithcurry.com/2012/03/15/on-the-adjustments-to-the-hadsst3-data-set-2/ ). So it is perhaps not surprising if they diverge outside that timespan.
I also established there that Hadley processing removed “the majority of the variation from the majority of the record”, this majority being the earlier two thirds. That is not to say they are necessarily wrong, but in view of their speculative basis it is certainly of concern.
If the 60-year pseudo-cyclic variation is due to some real climate driver, it may be expected to continue and would affect the contribution that is currently attributed to ‘climate sensitivity’. Would your method produce different results dependent upon whether an integral number of cycles was included in the study period?
I produced this plot recently to point out the fallacy of a comment that the “recent” warming was steeper than the last century’s warming. It’s the usual trough-to-peak trickery.
http://climategrog.wordpress.com/?attachment_id=209
Would you expect your method to produce comparable results if it was limited to 1950-2010 , for example?

Greg Goodman
April 17, 2013 2:40 am

Philip Bradley says:
“That is a rather large assumption. Essentially he is assuming that the forcings in the model are correct. Whereas, I’d suggest that continued lack of warming is (accumulating) evidence the model forcings are too high (overall).”
I suspect that volcanics are an exaggerated (negative) forcing and CO2 + assumed feedbacks an exaggerated +ve forcing. From 1960–1997 the two errors roughly cancel (since they are “tuned” to fit that period); since 1997 the lack of major volcanoes has shown CO2 + assumed feedbacks to be far too strong.
Nic Lewis’ result is coming a lot closer to CO2 without presumed feedbacks.
Since the Hadley SST data that is a major component of all this has been subject to much smaller ‘correction biases’ since 1950, it would be interesting to see what the same method gives using only data from that date onwards, if that is sufficient. This would presumably increase the 95% confidence range, but its effect on the central value may be illuminating.

Kelvin Vaughan
April 17, 2013 2:55 am

I live in a very simple world. Press letter A on the keyboard and A appears on the screen! I don’t really care what is going on in the computer at the atomic level.
Likewise, in my world 1/2500 of the atmosphere can only intercept 1/2500 of Earth’s outgoing radiation and push up temperatures by 1/5000 of the ambient temperature. (Only half of the intercepted radiation comes back.)

AndyG55
April 17, 2013 3:01 am

I must admit that seeing anyone use pre-satellite Hadcrud as any sort of data is always going to leave me with an icky taste about the result.
I would like to see someone try to find the original, unadjusted temperature readings and use that instead.

AndyG55
April 17, 2013 3:05 am

ps.. apart from using a rather dubious data source, the methodology looks solid, unlike most AGW peer-reviewed stuff.

rgbatduke
April 17, 2013 5:16 am

Sounds like a very interesting paper, but do not miss the meta-point. Three different Bayesian computations are done, two of them on the same data, one on that data plus six more years. They produce three different estimates for climate sensitivity, with three different bounds. The confidence interval of the final one does not really overlap with the confidence intervals of the other two.
What does this tell us of the real confidence that can be placed in any of the results? Only that at least one of the three computations, more likely two, possibly all three, are terribly flawed. It may be that the entire approach is flawed. It means that either the estimation of the 95% confidence interval is mistaken (and it should really be much broader in all three cases, most likely) or we are singularly unlucky, hitting something like a 1 in 400 chance or even less.
Ordinarily, I would consider this sort of result as grounds for rejecting (or at the very least doubting) the entire assumed model (set). There is missing information, missing variability, missing noise, or else an incorrect treatment of probable error in there; otherwise the error ranges of all three computations would OVERLAP instead of being disjoint, with the predicted mode of one outside the range of another and the RANGE of one disjoint from the RANGES of the other two.
Sadly, this otherwise well-done computation seems to perpetuate what I think of as a primary flaw in climate science. Nobody is doing the computation of probable error correctly. Note well: not the computation of statistical error within the assumptions of the model, but the probable REAL error. All we can really conclude from this is that we don’t have any particularly good idea what the climate sensitivity really is, even with a really good job being done estimating it. Who would be surprised if, in six MORE years, the climate sensitivity arrived at in a redo of the exact same computation turned out to be 0.5 C ± 0.6 C, or something else that didn’t even overlap significantly with the range of the third, presumably best result? I’m guessing that all it would take is for the temperature to remain flat and/or actually decrease a tenth or two of a degree over six years, and one would get exactly that. If the temperature dropped any MORE than that, it could even turn out to be negative.
I’ve been pointing out for years that the best estimate for climate sensitivity is dropping (as it should) with the global temperature data. What I haven’t been pointing out (but probably should have been) is that an honest estimation of probable error in the sensitivity should from the beginning have included the entire range down to zero or even negative. We really don’t know what it is. 1.6 C is a lot more reasonable than 3-5 C, but it is 1.6 C and falling. How far will it fall? When will it start to warm again? Or will it cool? What is the predictive value of this number?
Practically nil.
rgb

oldfossil
April 17, 2013 5:21 am

Forest’s method as improved by Lewis is really very very cool, but with one rider. What they’re actually measuring is not climate sensitivity but the stability/robustness/integrity of the model. I’m an amateur mathematician and although stats doesn’t really ring my bell I still admire this approach.

Bill Illis
April 17, 2013 5:27 am

Half of the warming predicted comes from water vapor feedback.
All of the climate models and the theory build in a 7.0% increase in water vapor per 1.0 C increase in temperatures (or more than +1.75 W/m2 per 1.0 C increase in temperatures, which then amplifies into about 1.5 C of water-vapor-caused temperature increase).
However, water vapor is only increasing by between 2.4% and 4.6% per 1.0 C increase; let’s say only half that predicted.
Hadcrut4 vs water vapor back to 1948.
http://s13.postimg.org/7dk4nfh6f/Hadcrut4_vs_TCWV_Scatter.png
UAH/RSS lower troposphere temps vs water vapor back to 1979.
http://s9.postimg.org/y8o23z2rz/UAH_RSS_vs_TCWV_Scatter.png
So with water vapor coming in at around half of the feedback increase expected, total global warming falls into the range right around 1.6 C per doubling, as Nic has calculated. Interesting chart here showing how sensitive the theory is to the assumptions about the feedbacks.
http://s24.postimg.org/7jjj2kcgl/Feedback_Strength.png
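For context, the ~7% per 1.0 C figure cited above is the Clausius–Clapeyron scaling of saturation vapour pressure, d(ln es)/dT = L/(Rv·T²). A quick check with standard constants:

```python
# Clausius-Clapeyron scaling of saturation vapour pressure:
# d(ln es)/dT = L / (Rv * T^2), evaluated with standard constants.
L = 2.5e6      # latent heat of vaporisation of water, J/kg
Rv = 461.5     # specific gas constant for water vapour, J/(kg K)
T = 288.0      # typical near-surface temperature, K

frac_per_K = L / (Rv * T ** 2)
print(f"{100 * frac_per_K:.1f}% per K")   # ~6.5%, i.e. roughly 7% per K
```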

ferdberple
April 17, 2013 5:44 am

davidmhoffer says:
April 16, 2013 at 1:22 pm
If the PDO for example were to go wildly negative, pushing down global temps for another 10 years, would we conclude that CO2 sensitivity had declined further? That wouldn’t make sense, would it?
=============
It would make sense. It would mean that the underlying statistical assumptions about the nature of climate are wrong. Most likely that natural variability is much greater than is allowed for, and specifically that the run-up in temperatures from 1970–2000 was a natural event, no different than the run-up in temperatures from 1910 to 1940.
The underlying problem is that the cycle of ice ages is a natural event and needs to be accounted for in the statistical model of climate. These ice ages, with rapid warming and cooling periods, show us that extreme events are highly likely in climate. Statistical climate models ignore this and assume that climate is relatively stable and changes very slowly. However, this is simply an illusion of recent history and very limited climate data.
We are here – our civilization exists – because we are coincidentally in a period of unusually stable climate. However, the statistics all assume that our present climate is typical of the climate system and will continue into the future. This is an inherently wrong assumption, because we know from the paleo records that interglacial periods such as the present are short and not typical in comparison with the longer record.
Thus, any statistical model that relies on interglacial data alone to estimate climate sensitivity is likely to be misled by natural variability. Natural warming will be incorrectly attributed to CO2, and as a result we will see a rapid change in the calculation of CO2 sensitivity during periods of cooling.
This is exactly the case we see today, where CO2 sensitivity is calculated to be rapidly dropping as temperatures are no longer climbing. This tells us that the underlying statistical assumptions about climate are wrong.

ferdberple
April 17, 2013 5:57 am

rgbatduke says:
April 17, 2013 at 5:16 am
What I haven’t been pointing out (but probably should have been) is that an honest estimation of probable error in the sensitivity should from the beginning have included the entire range down to zero or even negative.
===========
Exactly. Take out the assumption that increased CO2 must cause warming. In an ideal world with linear climate response that might be a reasonable assumption. However, in a non-linear system no such assumption can be made.
The first day the ice cores showed that CO2 lagged temperature was the day the assumption was shown to be likely wrong. If CO2 causes any significant warming then the cycle of ice ages and interglacials would be impossible due to positive feedback. This is now being shown in spades by the rapid decrease in calculated CO2 sensitivity.

ferdberple
April 17, 2013 6:05 am

rgbatduke says:
April 17, 2013 at 5:16 am
We really don’t know what it is. 1.6 C is a lot more reasonable than 3-5 C, but it is 1.6 C and falling. How far will it fall? When will it start to warm again? Or will it cool? What is the predictive value of this number?
===========
An interesting exercise might be to project the current halt in warming another 10 years into the future and see what effect it has on the CO2 sensitivity estimate. Maybe try a couple of different temperature projections to test how sensitive the CO2 sensitivity estimate is to natural variability. If the CO2 sensitivity calculation is highly sensitive to natural variability, then can one really place much faith in the estimate?

Greg Goodman
April 17, 2013 7:17 am

Fred: ” If CO2 causes any significant warming then the cycle of ice ages and interglacials would be impossible due to positive feedback. ”
No. There could be (probably is) a +ve f/b that causes the rapid swing from glacial to interglacial. This is bounded by a stronger -ve f/b that keeps the system stable. This is a very common situation physically.
Fred: ” It means that either the estimation of the 95% confidence interval is mistaken (and it should really be much broader in all three cases, most likely) or we are singularly unlucky, hitting something like a 1 in 400 chance or even less.”
OR that one or more of those calculations was done incorrectly and/or with insufficient data, so that the resulting 95% range was wrong. That is very clearly what Nic Lewis’ paper shows, in a very full and rigorous way.
The only flaw I can see is that it accepts HadCRUT4 as accurate over the range of time used. The pre-1950 part of HadSST3 (in particular) has been subject to considerable speculative adjustments that may cause further refinements of sensitivity to be made in the future.
I hope Nic will be able to comment later on the possibility of running the same method on post-1950 data only.

April 17, 2013 7:53 am

I’m still laughing. Some time back I noted the conformity of ‘climatology’ to the promotion (as per military intelligence psyops) of Bayesian modelled belief systems in which a priori assumptions must not be challenged. Such is the CO2 feedback mechanism: the debate is over determining its strength rather than questioning its very existence.

Matthew R Marler
April 17, 2013 11:55 am

Crispin in Waterloo but still in Yogyakarta: In my view this work shows that F06 was not executed with the necessary rigour.
I don’t dispute that. All Bayesian analyses should be accompanied by analyses of the sensitivity to the prior used, and the best way to do that, as far as I know, is to use many priors that differ dramatically.
I disputed the claims that this particular choice of prior is “objective” and that the resultant posterior distribution is an improvement. Neither claim is justified. The only justifiable claim is that this is an instance where a different prior produced a different posterior distribution; it’s something we know in principle and this is a worked example.

Matthew R Marler
April 17, 2013 12:05 pm

Crispin in Waterloo but still in Yogyakarta: If you, MRM, wish to take up the cause of personal motivations affecting the scientific discussion,
I don’t doubt that there are other instances of personal motivation afflicting climate science. Upon re-reading this post, I admit that my attribution of motivation in this case is weak.

April 17, 2013 5:26 pm

davidmhoffer says:
April 16, 2013 at 1:22 pm
If the PDO, for example, were to go wildly negative, pushing down global temps for another 10 years, would we conclude that CO2 sensitivity had declined further? That wouldn’t make sense, would it?

It would mean either that the claimed CO2 forcing is much too high and is in fact well under 1 C per doubling (and consequently that the late 20th century warming didn’t result from increasing CO2 levels),
OR that the Forcing Model/Theory is wrong, and changes in forcings do not result in the changes in climate it predicts.

April 17, 2013 5:41 pm

I have disassembled your first paragraph under “Lewis 2013”. There are several jumps of logic and statistical theorems that are not explained, not obvious, and not supported by reference. Let me comment on these jumps for the purpose of further discussion.
Therefore in my paper Bayes’ theorem is applied to the data rather than the parameters: a joint posterior PDF for the observations is obtained from a joint uniform prior in the observations and the likelihood functions.
I must admit that this is thinking outside the box. But to what end? Are you therefore using Bayesian theory as an excuse to adjust the data to fit a model? What is the Bayesian approach used for on the observations if not to obtain flexibility in the observational data?
Because the observations have first been ‘whitened’ (i.e., decorrelated),
Why should we believe that observations in a time series, or in a weather map, should be uncorrelated with neighboring points in time and space? Why should the act of whitening not bias the data in some way?
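For what it’s worth, ‘whitening’ is a standard linear-algebra transform rather than an adjustment of the data values: given an estimated covariance matrix of the observations’ noise, the observation vector is multiplied by a matrix square root of the inverse covariance, so the transformed components are uncorrelated with unit variance. A minimal sketch, with an illustrative covariance rather than F06’s:

```python
import numpy as np

# Whitening: decorrelate an observation vector using the (estimated)
# covariance of its noise. Illustrative 3-variable example, not F06's data.
rng = np.random.default_rng(1)
cov = np.array([[1.0, 0.6, 0.3],
                [0.6, 1.0, 0.5],
                [0.3, 0.5, 1.0]])

# Cholesky factor L with cov = L @ L.T; the whitening matrix is inv(L).
L = np.linalg.cholesky(cov)
W = np.linalg.inv(L)

obs = rng.multivariate_normal(np.zeros(3), cov, size=100_000)
white = obs @ W.T                      # whitened observations

print(np.round(np.cov(white, rowvar=False), 2))  # ~ identity matrix
```

The transform is invertible, so no information is added or destroyed; what it does change is the coordinate system in which the uniform prior is placed, which is where Rasey’s question bites.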
this uniform prior is noninformative,
This statement sets off my alarm bells. Frankly, I don’t believe in the non-informative scenario.
For one thing, if a distribution is uniform in one coordinate system, it is not necessarily uniform when transformed into another. Therefore “uniform” is not magically pure: the choice of coordinate system is informative and biased by the choice of model. Secondly, people usually talk of uniform distributions between defined endpoints, and the choice of endpoints is an act of information and an act of bias. Furthermore, when people study a system with Bayesian statistics, some knowledge of the system must already be known – there is a model. To assume that the midpoint of a range is equally as probable as the endpoints is an act of bias toward the extremes and away from acquired knowledge. It is a bias toward the chosen endpoints.
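Rasey’s first point is easy to verify numerically: a density that is uniform in one coordinate system is generally not uniform after a change of variables. A toy sketch (hypothetical numbers), mapping a feedback-like parameter to a sensitivity-like one via s = 1/lambda:

```python
import numpy as np

# A distribution uniform in one coordinate is not uniform in another.
# Illustrative: draw lambda ~ Uniform(0.5, 3.0) (a feedback-like parameter)
# and look at the implied distribution of s = 1/lambda (sensitivity-like).
rng = np.random.default_rng(2)
lam = rng.uniform(0.5, 3.0, size=1_000_000)
s = 1.0 / lam

hist, edges = np.histogram(s, bins=8, range=(1/3, 2.0), density=True)
for lo, hi, d in zip(edges[:-1], edges[1:], hist):
    print(f"s in [{lo:.2f}, {hi:.2f}): density {d:.2f}")
# The density piles up at small s (it goes as 1/s**2): flat in lambda
# is far from flat in s.
```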
One can assume that a six-sided die has a prior distribution that is uniform between 1 and 6 inclusive. However, I once owned a die that contained a 7 (a 6 superimposed with a 1, or 5, or 3) and two 5’s. Had it as a kid; long lost, but a treasured memory. The point being that the distribution of a die is not limited to 1 through 6 inclusive, that a 6 and a 7 are not of equal probability, and that the probability of a 7 is not zero. To assume a uniform distribution between 1 and 6, and zero at 7, is an informative, biased choice that ignores available information. To assume a uniform distribution between 1 and 7 is also to ignore a lot of available information. I don’t think “non-informative” means “blind”.
meaning that the joint posterior PDF is objective and free of bias.
Sorry, I don’t buy it. The decision to make the distribution uniform (with or without unspecified endpoints) and the whitening together make the claim that the result is free of bias a stretch.
Then, using a standard statistical formula,
Which standard statistical formula? Normal Gaussian statistics? Why not say so?
this posterior PDF in the whitened observations can be converted to an objective joint PDF for the climate parameters.
No doubt it CAN be so converted. But why should this be free of bias?
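Presumably the formula meant is the ordinary change-of-variables rule for probability densities, which involves a Jacobian determinant and nothing specifically Gaussian. Stated as a sketch of the standard rule, not a quotation from the paper:

```latex
% Change-of-variables rule for densities: if the whitened observations y
% relate to the parameters theta via a smooth invertible map y = f(theta),
% the density transfers through the Jacobian determinant.
p_{\theta}(\theta) \;=\; p_{y}\!\left(f(\theta)\right)
  \left|\det\frac{\partial f(\theta)}{\partial\theta}\right|
```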
But we get back to “to what end?” Now you have a PDF for the climate parameters. The whole point of Bayesian statistics is that you can take this PDF as a prior distribution and apply new observational data to get a refined posterior distribution. In your case, the second cycle is fundamentally different from the first. Perhaps this is why the addition of six more years of data created such a different distribution.

April 18, 2013 6:45 am

Stephen Rasey says:
April 17, 2013 at 5:41 pm

My stats is only what science undergrads learn, but you have articulated issues, some of which raised flags with me too. Like: why ‘whiten’ the data?
Thanks. Very informative.

April 19, 2013 12:20 pm

While the uniform prior is uninformative, priors that are uninformative and non-uniform are of infinite number. Thus, the posterior PDF lacks the uniqueness that is required by logic.

April 19, 2013 5:21 pm

From Wikipedia

The term “uninformative prior” may be somewhat of a misnomer; often, such a prior might be called a not very informative prior, or an objective prior, i.e. one that’s not subjectively elicited. Uninformative priors can express “objective” information such as “the variable is positive” or “the variable is less than some limit”.

Like I said, I don’t believe in the noninformative scenario. “Uninformative priors” fly under a false flag, for every one of them contains presumptions, not the least of which are the model to which they are applied and parameters such as the choice of endpoints.
On the other hand, if Terry Oldberg is right that a uniform distribution is only one of many possible uninformative distributions, then choosing a uniform distribution over the others is a potential act of bias, not one of indifference. The choice of a uniform distribution is subjective and therefore neither objective nor noninformative.

April 19, 2013 9:58 pm

Stephen Rasey:
It is easy to prove the existence of an infinity of non-informative prior probability density functions over the climate sensitivity, one of which is the uniform prior. I’ve already published a proof of this assertion in the blogosphere. If there is call for it, I’ll publish this proof once again in this thread.

April 19, 2013 11:20 pm

Oldberg,
Whether there are two, three, or an infinite number is, I think, beside the point. So for the sake of argument, we’ll take it as settled that there is always more than one noninformative prior distribution that could be used.
Then, are these candidate noninformative priors interchangeable, so that any could be used and the same posterior distribution would result? I cannot fathom how the same result is possible given different priors. It must be that different priors will result in different posteriors when processed against the same set of observations, at least when the prior and observations are not trivial cases.
If priors are not interchangeable, then the choice of noninformative prior has a bearing on the result. How then can the choice of one prior over another be a noninformative action?
Does one use a Monte Carlo process to randomly (non-informatively?) choose several different priors and perform an analysis for each? If the domain of candidate priors is infinite over an unknown number of dimensions, how do we tell whether we have fairly sampled the domain with the Monte Carlo process?
What is actually meant by the word “noninformative” when one prior distribution is chosen out of many, and that choice is justified and defended by the analyst? Can there be no information guiding the choice?
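The non-interchangeability is easy to demonstrate. A toy sketch (hypothetical numbers throughout, not any published study): the same likelihood updated with two candidate ‘noninformative’ priors, one uniform in s and one uniform in 1/s, yields clearly different posteriors.

```python
import numpy as np

# Same likelihood, two candidate "noninformative" priors, different posteriors.
# Toy setup: the likelihood for s is Gaussian in 1/s (feedback space), as in
# many energy-balance arguments. All numbers are hypothetical.
s = np.linspace(0.1, 10.0, 20_000)
like = np.exp(-0.5 * ((1.0 / s - 0.6) / 0.2) ** 2)  # Gaussian in 1/s

def summarize(prior):
    post = like * prior
    post /= np.trapz(post, s)                 # normalize the posterior
    cdf = np.cumsum(post) * (s[1] - s[0])     # approximate CDF
    return s[np.searchsorted(cdf, 0.5)], s[np.searchsorted(cdf, 0.95)]

for name, prior in [("uniform in s",   np.ones_like(s)),
                    ("uniform in 1/s", 1.0 / s**2)]:   # Jacobian of 1/s
    med, p95 = summarize(prior)
    print(f"{name:15s} -> median {med:.2f}, 95th pct {p95:.2f}")
```

So the choice among candidate noninformative priors is consequential, which is the nub of Rasey’s question.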

Reply to  Stephen Rasey
April 20, 2013 9:03 am

Stephen Rasey
You say:
“If priors are not interchangeable, then the choice of noninformative prior has a bearing on the result. How then can the choice of one prior over another be a noninformative action?”
I answer:
It’s not the choice of prior that is noninformative; rather, each prior in a set containing many priors is noninformative. A “noninformative” prior is one that maximizes the entropy of the associated probability distribution function.
In generating posterior PDFs over the equilibrium climate sensitivity (TECS), climatologists select one of the many equally noninformative priors arbitrarily. According to IPCC Assessment Report 4, the uniform prior is popular with climatologists. However, priors that are equally uninformative but non-uniform can be proved to be of infinite number. Each of the many priors yields a different posterior PDF and public policy prescription. If you think this process is illogical, you are right. The multiplicity of the noninformative priors generates violations of Aristotle’s law of non-contradiction.
As TECS is defined in terms of the change in the equilibrium temperature, and the equilibrium temperature is not an observable, TECS is not a scientifically viable concept. It has been made to appear viable through concealment of the violations of non-contradiction by the authors of the IPCC assessment reports.

April 21, 2013 11:36 am

Terry,
climatologists select one of the many equally noninformative priors arbitrarily
Are you sure the word “equally” applies? That implies that the priors are interchangeable. But we know that the generation of the posterior depends upon the choice of prior, even a noninformative one. Therefore, there must be material differences among the priors.

(ii) The statistical analysis is often required to appear objective. Of course, true objectivity is virtually never attainable, and the prior distribution is usually the least of the problems in terms of objectivity; but use of a subjectively elicited prior significantly reduces the appearance of objectivity. Noninformative priors not only preserve this appearance, but can be argued to result in analyses that are more objective than most classical analyses. [emphasis in original]
(v) …the Jeffreys prior seems to almost always yield a proper posterior distribution. This is magical in that the common constant (or uniform) prior will much more frequently fail to yield a proper posterior. Even better, the reference prior approach… yields surprisingly good performance… (Yang-Berger 1998, pp. 4-5)

For any class of prior distributions, there is at least one prior that maximizes the entropy for that class. This is the noninformative prior from that class. But there are an infinite number of classes. It is a stretch for me to believe that each of these infinitely many noninformative priors has the same entropy level.
Uniform distributions (from endpoints a to b) might be noninformative as a class. But not all noninformative prior distributions are uniform. Some of these, unlike uniforms, have non-zero skew.
I assert that all Bayesian studies of TECS that employ a uniform prior distribution result in a posterior distribution with a positive skew. I’d love to see a counter-example. (I think you could create a negative skew by choosing a uniform distribution (a, b) with a deliberately low b near 2. But I digress.) To employ a zero-skew prior distribution when the result is expected (from prior work) to be positively skewed is an element of bias, and a bias toward the high end of the a-to-b range.
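A toy check of the skew claim (hypothetical numbers, neither a confirmation nor a counter-example from any published study): with a symmetric Gaussian likelihood in the feedback lambda = 1/s and a uniform prior for s on (a, b), the posterior skew in s depends strongly on the endpoint b, flipping sign for a low b much as the parenthetical above suggests.

```python
import numpy as np

# Toy check: symmetric Gaussian likelihood in lambda = 1/s, uniform prior
# for s on (a, b). The posterior skew in s depends on the endpoint b.
def posterior_skew(b, a=0.01, n=200_000):
    s = np.linspace(a, b, n)
    post = np.exp(-0.5 * ((1.0 / s - 0.6) / 0.2) ** 2)  # uniform prior: no reweighting
    post /= np.trapz(post, s)
    mean = np.trapz(s * post, s)
    var = np.trapz((s - mean) ** 2 * post, s)
    return np.trapz((s - mean) ** 3 * post, s) / var ** 1.5

for b in (2.0, 5.0, 10.0, 20.0):
    print(f"uniform prior on (0, {b:4.1f}): posterior skew {posterior_skew(b):+.2f}")
# A low b truncates the upper tail and can flip the skew negative; a high b
# lets the upper tail dominate, giving the familiar positive skew.
```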

I agree with you that Aldrin is the most thorough study, although its use of a uniform prior distribution for climate sensitivity will have pushed up the mean, mainly by making the upper tail of its estimate worse constrained than if an objective Bayesian method with a noninformative prior had been used. – Nic Lewis, Bishop Hill, Jan 12, 2013

Nic Lewis in his 5th paragraph makes the statement (concerning his use of the uniform on the observations)

this uniform prior is noninformative, meaning that the joint posterior PDF is objective and free of bias.

Uniform priors might be noninformative, but it does not follow that the choice of a uniform, and particularly the choice of the (a, b) range of that uniform distribution, is free of bias. If it is not free of bias, it is not necessarily objective. There seems to be an implied claim in many Bayesian papers that “uniform = noninformative = unbiased = objective.” No. There is only the appearance of objectivity. When it comes to estimates of physical constants (even if TECS were one), the uniform prior distribution is seldom if ever the best objective prior.
Perhaps the title of this paper really should be: “A noninformative (Bayesian) estimate of climate sensitivity”

Reply to  Stephen Rasey
April 21, 2013 5:04 pm

Stephen Rasey:
Please find proofs of the infinitude of noninformative priors and noninformativeness of the uniform prior below.
Let T designate the equilibrium climate sensitivity. Let X designate a unique interval in T in which the probability density of T is non-nil. Let Xp(X) designate a partition of X into intervals of infinitesimal length.
Let P(X) designate a function that maps the elements of X to the associated probabilities. By stipulation, P(X) is constant within each element of the partition Xp(X). Maximization of the entropy of P(X) yields the conclusion that P(X) is a constant; let this constant be designated by C.
Let i designate a particular element of Xp(X), and let l(i) designate the length of this element. Within i, the probability density is C/l(i). Partitions of X are of infinite number; it follows that noninformative priors are of infinite number also.
Among the many noninformative priors is the one in which l(i) is constant. This prior is the uniform prior. Thus, the uniform prior is noninformative, but not uniquely so.
Q.E.D.
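A small numerical companion to the proof above (an illustration of the construction, not Terry’s own code): for a fixed partition, equal cell probabilities maximize the discrete entropy, while different partitions of the same interval yield different ‘noninformative’ densities.

```python
import numpy as np

# Companion to the proof above: for a fixed partition of X = [0, 1], equal
# cell probabilities maximize the discrete entropy; but two different
# partitions give two different "noninformative" densities.
def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Fixed 4-cell partition: uniform cell probabilities beat a tilted assignment.
print(entropy(np.full(4, 0.25)))                 # ~1.386 = log(4), the maximum
print(entropy(np.array([0.4, 0.3, 0.2, 0.1])))   # smaller

# Two partitions of [0, 1], equal probability C = 1/4 per cell in each;
# density in cell i is C / l(i), so unequal cell lengths give a non-flat density.
for edges in (np.array([0.0, 0.25, 0.5, 0.75, 1.0]),
              np.array([0.0, 0.1, 0.3, 0.6, 1.0])):
    lengths = np.diff(edges)
    print("densities:", np.round(0.25 / lengths, 2))
```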