So What Happened to the Philosophy?
Guest essay by John Ridgway
Many years ago, when I was studying at university, an old joke was doing the rounds in the corridors of the physics department:
Question: What is the difference between an experimental physicist, a theoretical physicist and a philosopher?
Answer: Experimental physicists need thousands of pounds of equipment to do their work but the theoretical physicists only need a pencil, some paper and a waste paper basket. The philosopher doesn’t need the waste paper basket.
With such jokes in our repertoire how could we fail to attract a mate? But fail we did.
Fortunately, polluting the gene pool was not our main priority at the time. We were satisfied simply to point out that philosophy was a fascinating way to waste a life, and then move on. There was nothing wrong with philosophy that a good waste paper basket couldn’t solve, but it wasn’t something in which a self-respecting physicist should indulge.
I’m sure that there are many of you out there that feel the same way about the IPCC’s output; if the IPCC hadn’t ignored the waste paper basket in the corner, then science would be all the better for it. However, just to be controversial, I’m going to disagree with this analysis. The problem with philosophy is more subtle than I had appreciated in my formative years. Philosophers use their waste paper basket just as readily as theoretical physicists, but it is in the nature of philosophy that no two philosophers will use their basket for the same purpose. This problem is evident in the IPCC’s assessment reports as they deal with the deeply philosophical subjects of probability and uncertainty.
Consensus is King
A great deal has already been said on this website regarding the CAGW campaigners’ obsession with the supposed 97% consensus amongst scientists. It has been quite rightly pointed out that science doesn’t work that way; it isn’t a democracy. Such a misunderstanding is perhaps to be expected amongst journalists, politicians and activists who are just looking for an authoritative endorsement of their views, but it is all the more shocking to see that the IPCC also holds to the belief that consensus can manufacture truth. Whilst you and I may struggle to come to terms with the subtleties of probability and uncertainty, the IPCC appears to have had no trouble in arriving at a brutally simplistic conclusion. In guidelines produced for its lead authors to ensure consistent treatment of uncertainties,1 one can find advice that is tantamount to saying, “A thousand flies can’t all be wrong when they are attracted to the same dung heap”. To be specific, one finds the following statement:
“A level of confidence provides a qualitative synthesis of an author team’s judgment about the validity of a finding; it integrates the evaluation of evidence and agreement in one metric.”
The IPCC is not referring here to agreement between data. As it explains, “The degree of agreement is a measure of the consensus across the scientific community on a given topic.”
Now, I have to admit that here is a classic example of the waste paper basket being egregiously ignored. One could not hope for a clearer enunciation of the belief that consensus is a legitimate dimension when assessing confidence (which here substitutes for uncertainty). The IPCC believes that consensus stands separately from the evaluation of evidence and can be metrically integrated with it. For the avoidance of doubt, the authors go on to provide a table (fig. 2 of section 2.2.2) that illustrates that ‘high agreement’ should explicitly improve one’s level of confidence even in the face of ‘limited evidence’.2
The presupposition that it is evidence, and evidence alone, that should determine levels of uncertainty, and hence confidence, has been brazenly ignored here. Disagreement amongst experts should do nothing more than reflect the quality of evidence available. If it doesn’t, then we may be dealing with political or ideological disagreement that has nothing to do with the measurement of scientific uncertainty. It is a mistake, therefore, to treat disagreement as a form of uncertainty requiring its own column in the balance books. The shame is that, to ensure consistency, these guidelines are meant to apply to all IPCC working groups, so they ensure that the same mistake is made by all.
I have a background in systems safety engineering, and so I am well acquainted with the idea that confidence in the safety of a system has to be gained by developing a body of supporting evidence (the so-called ‘safety case’). If the quality of evidence were ever such that it was open to interpretation, then the case was not made. No one would then be happy to proceed on the basis of a show of hands, and for a very good reason. In climatology, consensus is king. In safety engineering, consensus is a knave; consensus launched the space shuttle Challenger.
Probability to the Rescue
Note that the guidelines described above are meant to apply when the nature of evidence can only support qualitatively stated levels of confidence (using the supposedly ‘calibrated uncertainty language’ of: very low, low, medium, high, and very high). As such, they only form part of the advice offered by the IPCC. The guidelines proceed to explain that there will be circumstances where the “probabilistic quantification of uncertainties” is deemed possible, “based on statistical analysis of observations or model results, or expert judgment”. For such circumstances, an alternative lexicon is prescribed, as follows:
- 99–100% probability = Virtually certain
- 90–100% probability = Very likely
- 66–100% probability = Likely
- 33–66% probability = About as likely as not
- 0–33% probability = Unlikely
- 0–10% probability = Very unlikely
- 0–1% probability = Exceptionally unlikely
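As a sketch only (in Python, with names of my own devising), the lexicon above can be coded as a simple lookup. Because several of the bands overlap, a single probability can attract more than one calibrated term:

```python
# The IPCC's calibrated likelihood bands, as listed above.
# The bands overlap, so one probability may match several
# terms at once; the function returns all of them.
BANDS = [
    ("virtually certain", 0.99, 1.00),
    ("very likely", 0.90, 1.00),
    ("likely", 0.66, 1.00),
    ("about as likely as not", 0.33, 0.66),
    ("unlikely", 0.00, 0.33),
    ("very unlikely", 0.00, 0.10),
    ("exceptionally unlikely", 0.00, 0.01),
]

def calibrated_terms(p):
    """Return every calibrated term whose band contains probability p."""
    return [name for name, lo, hi in BANDS if lo <= p <= hi]
```

For instance, a probability of 0.95 is simultaneously ‘very likely’ and ‘likely’, while 0.05 is at once ‘unlikely’ and ‘very unlikely’.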
The first thing that needs to be appreciated here is that (for no better reason than the ability to quantify) the IPCC has switched from the classification of uncertainty to the classification of likelihood. High likelihood can be equated to low uncertainty, but so can low likelihood (meaning high confidence in the non-event). It is in the mid-range (‘About as likely as not’) that uncertainty is at its highest. Confusingly, however, most of the IPCC’s categories overlap, and their spread of probabilities varies. So, in the end, just what the IPCC is trying to say about uncertainty is quite unclear to me.
However, my main problem here with the guidelines is their implication that risk calculations (which require the likelihood assessments) can only be made when evidence supports a quantitative approach. Once again, with my background in systems safety engineering I find such a restriction to be most odd. Risk should be reduced to levels that are ‘As Low As Reasonably Practicable’ (ALARP), and in that quest both qualitative and quantitative risk assessments are allowed. As a separate issue, the strength of evidence for the achievement of ALARP risk levels should leave decision makers ‘As Confident As Reasonably Practicable’ (ACARP). Once again, confidence levels can be qualitatively or quantitatively assessed, depending upon the nature of the evidence. That’s how the risk management world sees it, so why not the IPCC?
In summary, the IPCC’s approach to questions of risk and uncertainty not only betrays poor philosophical judgement, it entails a methodology for calibrating language that is at best unconventional and at its worst downright wrong. But I’m not finished yet. It is not just the ham-fisted way in which the IPCC guidelines address probabilistic representation that concerns me; it’s also the undue trust it places in the probabilistic modelling of uncertainty. This, I maintain, is what one gets by overusing the philosopher’s waste paper basket.
Philosophical Conservatism and the IPCC
To explain myself I have to provide background that some of you may not need. If this is the case, please bear with me, but I think I need to point out that there is a fundamental ambivalence lying at the centre of the concept of probability that is deeply disquieting.
The concept of probability was first given a firm mathematical basis in the gambling houses of the 17th century, as mathematicians sought to quantify the vagaries of the gaming table. Since then, probability has gone on to provide the foundation for the analysis of any situation characterised by uncertainty. As such, it is hugely important to the work of anyone seeking to gain an understanding of how the world works. And yet, no one can agree just what probability is.
Returning once more to the gaming table: Though the outcome of a single game cannot be stated with certainty, the trend of outcomes over time becomes increasingly predictable. This predictability gives credence to the idea that probability objectively measures a feature of the real world. However, one can equally say that the real issue is that gaps in the gambler’s knowledge are the root cause of the uncertainty. With a perfect knowledge of the physical factors that determine the behaviour of the system of interest (for example, a die being tossed) the outcome of each game would be entirely predictable. It is only because perfect knowledge is unobtainable that probability is needed to calculate the odds. Seen in this light, probability becomes a subjective concept, since individuals with differing insights would calculate different probabilities.3
The idea that probability objectively measures real-world variability (in which fixed but unknown parameters are determined from random data) is referred to as the frequentist orthodoxy.4 The subjective interpretation, in which the parameters are held to be variable and the probabilities for their possible values are determined from the fixed data that have been observed, is known as Bayesianism.5
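The contrast can be made concrete with a toy Bayesian update (entirely my own example, echoing the loaded die mentioned in the footnotes): two observers watch the same rolls but start from different priors, and so finish with different probabilities.

```python
from fractions import Fraction

# Toy example (not from the IPCC guidance): a die is either fair or
# loaded so that a six comes up half the time. Bayes' rule updates
# P(loaded) after observing `sixes` sixes in `rolls` rolls.
def posterior_loaded(prior_loaded, sixes, rolls):
    p_fair, p_loaded = Fraction(1, 6), Fraction(1, 2)
    like_loaded = p_loaded**sixes * (1 - p_loaded)**(rolls - sixes)
    like_fair = p_fair**sixes * (1 - p_fair)**(rolls - sixes)
    num = prior_loaded * like_loaded
    return num / (num + (1 - prior_loaded) * like_fair)
```

After four sixes in six rolls, a sceptic who gave loading a 10% prior and a suspicious observer who gave it 50% both shift towards ‘loaded’, but to different degrees: on this reading the probability is a property of the observer’s knowledge, not of the die alone.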
Despite a long history of animosity existing between the frequentist and Bayesian camps,6 both approaches have enjoyed enormous success, so it is unsurprising to see that both feature in the IPCC’s treatment of uncertainty. However, the argument between frequentists and Bayesians, regarding the correct interpretation of the concept of probability, can be recast as a dispute over the primacy of either aleatoric or epistemic uncertainty, i.e. the question is whether uncertainty is driven principally by the stochastic behaviour of the real world or merely reflects gaps in knowledge. Although the IPCC proposes using subjective probabilities when dealing with expert testimony, the problem arises when the level of ignorance is so profound that even Bayesianism fails to fully capture the subjectivity of the situation. Gaps in knowledge may include gaps in our knowledge as to whether gaps exist; a profound, second-order epistemic uncertainty sometimes referred to as ontological uncertainty. And in the real world, even the smallest unsuspected variations can have a huge impact.
The belief that casino hall statistics can model the uncertainty in the real world has been dubbed the ‘ludic fallacy’ by Nassim Taleb.7 Concepts such as the ludic fallacy suggest that probability and uncertainty are such slippery subjects that one would be unwise to restrict oneself to analytical techniques that have proven effective in one context only. However, I see little in the IPCC output to suggest that this message has been properly taken on board. For example, Monte Carlo simulations feature prominently in the modelling of climate-model uncertainties, suggesting that many climatologists are still philosophically chained to the gaming table.
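For readers who have not met the technique, here is what a Monte Carlo treatment of uncertainty looks like in miniature (a sketch of my own, with a made-up one-line ‘model’ and an assumed Gaussian parameter; nothing here is drawn from any actual climate model):

```python
import random

# Illustrative only: propagate an assumed parameter distribution
# through a hypothetical one-line "model" by repeated sampling.
def toy_model(sensitivity, forcing=3.7):
    # Hypothetical response: warming scales linearly with sensitivity.
    return sensitivity * forcing / 3.7

def monte_carlo(n=100_000, seed=42):
    rng = random.Random(seed)
    # Assume the uncertain parameter is Gaussian: mean 3.0, sd 1.0.
    samples = [toy_model(rng.gauss(3.0, 1.0)) for _ in range(n)]
    mean = sum(samples) / n
    frac_above_4 = sum(s > 4.0 for s in samples) / n
    return mean, frac_above_4
```

The output is a distribution of model responses, from which ‘likelihoods’ are read off. The point of the ludic fallacy is that the tidiness of this procedure says nothing about whether the assumed distribution, or the model itself, resembles the real world.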
Furthermore, I have yet to mention the fuzzy logicians. They say that the question of aleatoric or epistemic uncertainty is missing the point. For them, vagueness is the primary cause of uncertainty, and the fact that there are no absolute truths in the world invalidates the Boolean (i.e. binary) logic upon which probability theory is founded. Probability theory, of whatever caste, is deemed to be a subset of fuzzy logic; it is what you get when you replace the multi-valued truth functions of fuzzy logic with the binary truth values of classical logic. We are told that Aristotle was wrong and that the western mode of thinking, in which the law of the excluded middle is taken as axiomatic, has held us back for millennia. Bayesianism was unpopular enough amongst certain circles because it has a postmodern feel to it. But the fuzzy logicians seem to be saying, “If you thought Bayesianism was postmodern, cop a load of this! Unpalatable though it may be, only by losing our allegiance to Boolean logic may we hope to fully capture real-life uncertainty”.
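The fuzzy logicians’ claim can be illustrated in a few lines (a sketch using Zadeh’s standard min/max operators, which are only one of several fuzzy systems): truth values live anywhere in the interval from 0 to 1, and the law of the excluded middle duly fails.

```python
# Zadeh's standard fuzzy operators: AND = min, OR = max, NOT = complement.
# Restricting truth values to 0 and 1 recovers ordinary Boolean logic.
def f_and(a, b): return min(a, b)
def f_or(a, b): return max(a, b)
def f_not(a): return 1.0 - a
```

With crisp inputs these behave exactly like Boolean connectives, but give a statement the truth value 0.7 and ‘A or not-A’ evaluates to 0.7 rather than 1: the excluded middle is no longer a law.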
Once again, one seeks in vain for any indication that the IPCC appreciates the benefits of multi-valued logic for the modelling of uncertainty. For example, non-probabilistic techniques, such as possibility theory, are notable by their absence. To a certain extent, I can appreciate the conservatism that informs the IPCC’s treatment of uncertainty. After all, some of the techniques to which I am alluding may seem dangerously outré.8 However, gaining an understanding of the true scale of uncertainty could not be more important in climatology and I think the IPCC does itself no favours by insisting that probability is the only means of modelling or quantification.
But the IPCC Knows Best
Laplace once said:
“Essentially probability theory is nothing but good common sense reduced to mathematics. It provides an exact appreciation of what sound minds feel with a kind of instinct, frequently without being able to account for it”.
The truth of this statement can be found in our natural use of the terms ‘probability’ and ‘uncertainty’. Nevertheless, when one tries to tie them down to an exact and calculable meaning, controversy, ambiguity and confusion invariably raise their ugly heads.9 In particular, the many sources of uncertainty (variability, ignorance and vagueness, to name but three) have inspired scholars to develop a wide range of taxonomies and analytical techniques in an effort to capture the elusive nature of the beast. Some of these approaches may not be to everyone’s taste, but that is no excuse for the IPCC to use its authority to proscribe the vast majority of them when trying to deal with a matter of such importance as the future of mankind. And I don’t think that the certitude of consensus should play a role in anyone’s calculation.
Of that I’m certain.
1 Mastrandrea, M. D., K. J. Mach, G.-K. Plattner, O. Edenhofer, T. F. Stocker, C. B. Field, K. L. Ebi, and P. R. Matschoss (2011). The IPCC AR5 guidance note on consistent treatment of uncertainties: a common approach across the working groups. Climatic Change 108, 675–691. doi:10.1007/s10584-011-0178-6. ISSN: 0165-0009, 1573-1480.
2 It is amusing to note that the same table includes the combination of low agreement in the face of strong evidence. So denial is allowed within the IPCC then, is it?
3 As would be the case, for example, with the individual who happens to know that the die is loaded.
4 Frequentism is the notion of probability most likely to be shared by the reader, since it is taught in schools and colleges using concepts such as standard errors, confidence intervals, and p-values.
5 The idea that probability is simply a reflection of personal ignorance was taken up in the 18th century by the Reverend Thomas Bayes who sought to determine how the confidence in a hypothesis should be updated following the introduction of germane information. His ideas were formalised mathematically by the great mathematician and philosopher Pierre-Simon Laplace. Nevertheless, the equation used to update the probabilities is still referred to as the Bayes Equation.
6 The story of the trials and tribulations experienced by the advocates of Bayes’ Theorem during its long road towards general acceptance is too complex to do justice to here. Instead, I would invite you to read a full account, such as that given in Sharon Bertsch McGrayne’s book, ‘The Theory That Would Not Die’, ISBN 978-0-300-16969-0. Suffice it to say, it is a story replete with bitter academic rivalry, the ebb and flow of dominance within the corridors of academia, and the sort of acerbic hyperbole normally only to be found in religious bombast.
7 See Nassim Taleb, ‘The Black Swan: The Impact of the Highly Improbable’, ISBN 978-0141034591.
8 Take, for example, the work of one of fuzzy logic’s luminaries, Bart Kosko. On the evidence of his seminal book, ‘Fuzzy Thinking: The New Science of Fuzzy Logic’, ISBN 978-0006547136, his interest and acumen in control engineering, statistics and probability theory is buried deep within a swamp of Zen wisdom and anti-Western polemic that is guaranteed to turn off the majority of jobbing statisticians.
9 Back in the 1930s, wags at University College London saw fit to suggest that the collective noun for statisticians should be ‘a quarrel’.