Philosophy, uncertainty, probability, consensus, the IPCC, and all that…

So What Happened to the Philosophy?

Guest essay by John Ridgway

Many years ago, when I was studying at university, an old joke was doing the rounds in the corridors of the physics department:

Question: What is the difference between an experimental physicist, a theoretical physicist and a philosopher?

Answer: Experimental physicists need thousands of pounds of equipment to do their work but the theoretical physicists only need a pencil, some paper and a waste paper basket. The philosopher doesn’t need the waste paper basket.

With such jokes in our repertoire how could we fail to attract a mate? But fail we did.

Fortunately, polluting the gene pool was not our main priority at the time. We were satisfied simply to point out that philosophy was a fascinating way to waste a life, and then move on. There was nothing wrong with philosophy that a good waste paper basket couldn’t solve, but it wasn’t something that a self-respecting physicist should indulge.

I’m sure that there are many of you out there who feel the same way about the IPCC’s output; if the IPCC hadn’t ignored the waste paper basket in the corner, then science would be all the better for it. However, just to be controversial, I’m going to disagree with this analysis. The problem with philosophy is more subtle than I had appreciated in my formative years. Philosophers use their waste paper basket just as readily as theoretical physicists, but it is in the nature of philosophy that no two philosophers will use their basket for the same purpose. This problem is evident in the IPCC’s assessment reports as they deal with the deeply philosophical subjects of probability and uncertainty.

Consensus is King

A great deal has already been said on this website regarding the CAGW campaigners’ obsession with the supposed 97% consensus amongst scientists. It has been quite rightly pointed out that science doesn’t work that way; it isn’t a democracy. Such a misunderstanding is perhaps to be expected amongst journalists, politicians and activists who are just looking for an authoritative endorsement of their views, but it is all the more shocking to see that the IPCC also holds to the belief that consensus can manufacture truth. Whilst you and I may struggle to come to terms with the subtleties of probability and uncertainty, the IPCC appears to have had no trouble in arriving at a brutally simplistic conclusion. In guidelines produced for its lead authors to ensure consistent treatment of uncertainties,1 one can find advice that is tantamount to saying, “A thousand flies can’t all be wrong when they are attracted to the same dung heap”. To be specific, one finds the following statement:

“A level of confidence provides a qualitative synthesis of an author team’s judgment about the validity of a finding; it integrates the evaluation of evidence and agreement in one metric.”

The IPCC is not referring here to agreement between data. As it explains, “The degree of agreement is a measure of the consensus across the scientific community on a given topic.”

Now, I have to admit that here is a classic example of the waste paper basket being egregiously ignored. One could not hope for a clearer enunciation of the belief that consensus is a legitimate dimension when assessing confidence (which here substitutes for uncertainty). The IPCC believes that consensus stands separately from the evaluation of evidence and can be metrically integrated with it. For the avoidance of doubt, the authors go on to provide a table (fig. 2 of section 2.2.2) that illustrates that ‘high agreement’ should explicitly improve one’s level of confidence even in the face of ‘limited evidence’.2

The principle that it is evidence, and evidence alone, that should determine levels of uncertainty, and hence confidence, has been brazenly ignored here. Disagreement amongst experts should do nothing more than reflect the quality of the evidence available. If it doesn’t, then we may be dealing with political or ideological disagreement that has nothing to do with the measurement of scientific uncertainty. It is a mistake, therefore, to treat disagreement as a form of uncertainty requiring its own column in the balance books. The shame is that, because these guidelines are meant to apply to all IPCC working groups in the name of consistency, they ensure that the same mistake is made by all.

I have a background in systems safety engineering, and so I am well acquainted with the idea that confidence in the safety of a system has to be gained by developing a body of supporting evidence (the so-called ‘safety case’). If the quality of evidence was ever such that it was open to interpretation then the case was not made. No one would then be happy to proceed on the basis of a show of hands, and for a very good reason. In climatology, consensus is king. In safety engineering, consensus is a knave; consensus launched the space shuttle Challenger.

Probability to the Rescue

Note that the guidelines described above are meant to apply when the nature of evidence can only support qualitatively stated levels of confidence (using the supposedly ‘calibrated uncertainty language’ of: very low, low, medium, high, and very high). As such, they only form part of the advice offered by the IPCC. The guidelines proceed to explain that there will be circumstances where the “probabilistic quantification of uncertainties” is deemed possible, “based on statistical analysis of observations or model results, or expert judgment”. For such circumstances, an alternative lexicon is prescribed, as follows:

  • 99–100% probability = Virtually certain
  • 90–100% probability = Very likely
  • 66–100% probability = Likely
  • 33–66% probability = About as likely as not
  • 0–33% probability = Unlikely
  • 0–10% probability = Very unlikely
  • 0–1% probability = Exceptionally unlikely

The first thing that needs to be appreciated here is that (for no better reason than the ability to quantify) the IPCC has switched from the classification of uncertainty to the classification of likelihood. High likelihood can be equated to low uncertainty, but so can low likelihood (meaning high confidence in the non-event). It is in the mid-range (‘About as likely as not’) that uncertainty is at its highest. Confusingly, however, most of the IPCC’s categories overlap, and their spread of probabilities varies. So, in the end, just what the IPCC is trying to say about uncertainty is quite unclear to me.
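To see just how ambiguous the overlapping bands are, consider the following small sketch. It is entirely my own illustration, not anything published by the IPCC: a single probability estimate can satisfy more than one of the calibrated terms at once.

```python
# A sketch of the AR5 calibrated likelihood scale (my own illustration,
# not IPCC code). Because the probability bands overlap, one estimate
# can fall under several calibrated terms at once.
SCALE = [
    ("Virtually certain", 0.99, 1.00),
    ("Very likely", 0.90, 1.00),
    ("Likely", 0.66, 1.00),
    ("About as likely as not", 0.33, 0.66),
    ("Unlikely", 0.00, 0.33),
    ("Very unlikely", 0.00, 0.10),
    ("Exceptionally unlikely", 0.00, 0.01),
]

def calibrated_terms(p):
    """Return every calibrated term whose band contains probability p."""
    return [name for name, lo, hi in SCALE if lo <= p <= hi]

# A 95% probability is simultaneously 'Very likely' and 'Likely';
# a 5% probability is both 'Unlikely' and 'Very unlikely'.
print(calibrated_terms(0.95))  # ['Very likely', 'Likely']
print(calibrated_terms(0.05))  # ['Unlikely', 'Very unlikely']
```

Which of the applicable terms an author team actually chooses is, as far as I can see, left to judgement, which rather defeats the purpose of a calibrated lexicon.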

However, my main problem here with the guidelines is their implication that risk calculations (which require the likelihood assessments) can only be made when evidence supports a quantitative approach. Once again, with my background in systems safety engineering I find such a restriction to be most odd. Risk should be reduced to levels that are ‘As Low As Reasonably Practicable’ (ALARP), and in that quest both qualitative and quantitative risk assessments are allowed. As a separate issue, the strength of evidence for the achievement of ALARP risk levels should leave decision makers ‘As Confident As Reasonably Practicable’ (ACARP). Once again, confidence levels can be qualitatively or quantitatively assessed, depending upon the nature of the evidence. That’s how the risk management world sees it, so why not the IPCC?

In summary, the IPCC’s approach to questions of risk and uncertainty not only betrays poor philosophical judgement, it entails a methodology for calibrating language that is at best unconventional and at worst downright wrong. But I’m not finished yet. It is not just the ham-fisted way in which the IPCC guidelines address probabilistic representation that concerns me; it’s also the undue trust the IPCC places in the probabilistic modelling of uncertainty. This, I maintain, is what one gets by overusing the philosopher’s waste paper basket.

Philosophical Conservatism and the IPCC

To explain myself I have to provide background that some of you may not need. If this is the case, please bear with me, but I think I need to point out that there is a fundamental ambivalence lying at the centre of the concept of probability that is deeply disquieting.

The concept of probability was first given a firm mathematical basis in the gambling houses of the 17th century, as mathematicians such as Pascal and Fermat sought to quantify the vagaries of the gaming table. Since then, probability has gone on to provide the foundation for the analysis of any situation characterised by uncertainty. As such, it is hugely important to the work of anyone seeking to gain an understanding of how the world works. And yet, no one can agree just what probability is.

Returning once more to the gaming table: Though the outcome of a single game cannot be stated with certainty, the trend of outcomes over time becomes increasingly predictable. This predictability gives credence to the idea that probability objectively measures a feature of the real world. However, one can equally say that the real issue is that gaps in the gambler’s knowledge are the root cause of the uncertainty. With a perfect knowledge of the physical factors that determine the behaviour of the system of interest (for example, a die being tossed) the outcome of each game would be entirely predictable. It is only because perfect knowledge is unobtainable that probability is needed to calculate the odds. Seen in this light, probability becomes a subjective concept, since individuals with differing insights would calculate different probabilities.3

The idea that probability objectively measures real-world variability (in which fixed but unknown parameters are determined from random data) is referred to as the frequentist orthodoxy.4 The subjective interpretation, in which the parameters are held to be variable and the probabilities for their possible values are determined from the fixed data that have been observed, is known as Bayesianism.5
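The loaded-die example makes the Bayesian view concrete. The following sketch is my own illustration (the loaded die’s one-in-two rate of sixes is an arbitrary figure chosen purely for the example): each observed six shifts a subjective probability that the die is loaded.

```python
# A sketch (my own, not drawn from the IPCC guidance) of Bayesian
# updating for the loaded-die example. Two hypotheses: the die is fair
# (a six comes up 1 time in 6) or loaded (1 time in 2 -- an assumed
# figure, for illustration only).
def bayes_update(prior_loaded, rolls, p_six_loaded=0.5, p_six_fair=1/6):
    """rolls is a list of booleans: True means a six was thrown."""
    post = prior_loaded
    for is_six in rolls:
        like_loaded = p_six_loaded if is_six else 1.0 - p_six_loaded
        like_fair = p_six_fair if is_six else 1.0 - p_six_fair
        joint = like_loaded * post
        post = joint / (joint + like_fair * (1.0 - post))
    return post

# Starting agnostic (prior 0.5), four sixes in a row leave the observer
# almost sure the die is loaded (posterior 81/82, about 0.988).
print(bayes_update(0.5, [True, True, True, True]))
```

An observer with different insights (the individual in note 3 who already knows the die is loaded) would simply start from a different prior, which is precisely the subjectivity at issue.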

Despite the long history of animosity between the frequentist and Bayesian camps,6 both approaches have enjoyed enormous success, so it is unsurprising to see that both feature in the IPCC’s treatment of uncertainty. However, the argument between frequentists and Bayesians regarding the correct interpretation of the concept of probability can be recast as a dispute over the primacy of either aleatoric or epistemic uncertainty, i.e. over whether uncertainty is driven principally by the stochastic behaviour of the real world or merely reflects gaps in knowledge. Although the IPCC proposes using subjective probabilities when dealing with expert testimony, a problem arises when the level of ignorance is so profound that even Bayesianism fails to fully capture the subjectivity of the situation. Gaps in knowledge may include gaps in our knowledge as to whether gaps exist: a profound, second-order epistemic uncertainty sometimes referred to as ontological uncertainty. And in the real world, even the smallest unsuspected variations can have a huge impact.

The belief that casino hall statistics can model the uncertainty in the real world has been dubbed the ‘ludic fallacy’ by Nassim Taleb.7 Concepts such as the ludic fallacy suggest that probability and uncertainty are such slippery subjects that one would be unwise to restrict oneself to analytical techniques that have proven effective in only one context. However, I see little in the IPCC output to suggest that this message has been properly taken on board. For example, Monte Carlo simulations feature prominently in the modelling of climate model uncertainties, suggesting that many climatologists are still philosophically chained to the gaming table.
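Taleb’s point can be illustrated with a toy Monte Carlo simulation. This is entirely my own construction, with invented figures: a model that admits only gaming-table randomness converges confidently on an answer that the real process, containing rare shocks the model never sampled, does not obey.

```python
import random

# A toy illustration of the ludic fallacy (my own construction; the
# shock size and frequency are invented numbers). The 'gaming table'
# model knows only dice; the 'real world' process occasionally
# delivers a large shock that the model never contemplates.
random.seed(1)

def gaming_table_model():
    return random.randint(1, 6)    # pure, well-behaved aleatoric risk

def real_world_process():
    x = random.randint(1, 6)
    if random.random() < 0.01:     # rare shock outside the model
        x += 100
    return x

N = 100_000
model_mean = sum(gaming_table_model() for _ in range(N)) / N
world_mean = sum(real_world_process() for _ in range(N)) / N

# The model converges neatly on 3.5; the real process sits near 4.5,
# and the gap is driven entirely by events to which the model assigned
# no probability at all.
print(round(model_mean, 2), round(world_mean, 2))
```

No amount of extra Monte Carlo sampling closes that gap, because the missing ingredient is in the model specification, not the sample size.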

Furthermore, I have yet to mention the fuzzy logicians. They say that the question of aleatoric versus epistemic uncertainty misses the point. For them, vagueness is the primary cause of uncertainty, and the fact that there are no absolute truths in the world invalidates the Boolean (i.e. binary) logic upon which probability theory is founded. Probability theory, of whatever cast, is deemed to be a subset of fuzzy logic; it is what you get when you replace the multi-valued truth functions of fuzzy logic with the binary truth values of predicate logic. We are told that Aristotle was wrong and that the western mode of thinking, in which the law of the excluded middle is taken as axiomatic, has held us back for millennia. Bayesianism was unpopular enough in certain circles because it has a postmodern feel to it. But the fuzzy logicians seem to be saying, “If you thought Bayesianism was postmodern, cop a load of this! Unpalatable though it may be, only by losing our allegiance to Boolean logic may we hope to fully capture real-life uncertainty”.
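For readers unfamiliar with the idea, here is a minimal sketch of the standard fuzzy connectives (my own illustration, not drawn from any IPCC material): restricted to the truth values 0 and 1 they reduce to ordinary Boolean AND, OR and NOT, but with intermediate truth values the law of the excluded middle no longer holds.

```python
# A sketch of Zadeh's standard fuzzy connectives (min/max/complement);
# my own illustration, not taken from any IPCC material.
def f_and(a, b): return min(a, b)
def f_or(a, b):  return max(a, b)
def f_not(a):    return 1.0 - a

# Restricted to {0, 1}, these are ordinary Boolean AND/OR/NOT:
assert f_and(1, 0) == 0 and f_or(1, 0) == 1 and f_not(0) == 1

# With an intermediate truth value the law of the excluded middle
# fails: if a proposition is true only to degree 0.3, then 'true or
# not true' holds only to degree 0.7, not 1.
print(f_or(0.3, f_not(0.3)))  # 0.7
```

That failure of ‘p or not p’ to reach full truth is exactly the departure from Aristotle that the fuzzy logicians are celebrating.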

Once again, one seeks in vain for any indication that the IPCC appreciates the benefits of multi-valued logic for the modelling of uncertainty. For example, non-probabilistic techniques, such as possibility theory, are notable by their absence. To a certain extent, I can appreciate the conservatism that informs the IPCC’s treatment of uncertainty. After all, some of the techniques to which I am alluding may seem dangerously outré.8 However, gaining an understanding of the true scale of uncertainty could not be more important in climatology and I think the IPCC does itself no favours by insisting that probability is the only means of modelling or quantification.

But the IPCC Knows Best

Laplace once said:

“Essentially probability theory is nothing but good common sense reduced to mathematics. It provides an exact appreciation of what sound minds feel with a kind of instinct, frequently without being able to account for it”.

The truth of this statement can be found in our natural use of the terms ‘probability’ and ‘uncertainty’. Nevertheless, when one tries to tie them down to an exact and calculable meaning, controversy, ambiguity and confusion invariably raise their ugly heads.9 In particular, the many sources of uncertainty (variability, ignorance and vagueness, to name but three) have inspired scholars to develop a wide range of taxonomies and analytical techniques in an effort to capture the elusive nature of the beast. Some of these approaches may not be to everyone’s taste, but that is no excuse for the IPCC to use its authority to proscribe the vast majority of them when trying to deal with a matter of such importance as the future of mankind. And I don’t think that the certitude of consensus should play a role in anyone’s calculation.

Of that I’m certain.


Notes:

1 Mastrandrea, M. D., K. J. Mach, G.-K. Plattner, O. Edenhofer, T. F. Stocker, C. B. Field, K. L. Ebi, and P. R. Matschoss (2011). The IPCC AR5 guidance note on consistent treatment of uncertainties: a common approach across the working groups. Climatic Change 108, 675–691. doi: 10.1007/s10584-011-0178-6. ISSN: 0165-0009, 1573-1480.

2 It is amusing to note that the same table includes the combination of low agreement in the face of strong evidence. So denial is allowed within the IPCC then, is it?

3 As would be the case, for example, with the individual who happens to know that the die is loaded.

4 Frequentism is the notion of probability most likely to be familiar to the reader, since it is taught in schools and colleges using concepts such as standard errors, confidence intervals, and p-values.

5 The idea that probability is simply a reflection of personal ignorance was taken up in the 18th century by the Reverend Thomas Bayes, who sought to determine how confidence in a hypothesis should be updated following the introduction of germane information. His ideas were formalised mathematically by the great mathematician and philosopher Pierre-Simon Laplace. Nevertheless, the equation used to update the probabilities is still referred to as Bayes’ equation.

6 The story of the trials and tribulations experienced by the advocates of Bayes’ Theorem during its long road towards general acceptance is too complex to do justice to here. Instead, I would invite you to read a full account, such as that given in Sharon Bertsch McGrayne’s book, ‘The Theory That Would Not Die’, ISBN 978-0-300-16969-0. Suffice it to say, it is a story replete with bitter academic rivalry, the ebb and flow of dominance within the corridors of academia, and the sort of acerbic hyperbole normally found only in religious bombast.

7 See Nassim Taleb, ‘The Black Swan: The Impact of the Highly Improbable’, ISBN 978-0141034591.

8 Take, for example, the work of one of fuzzy logic’s luminaries, Bart Kosko. On the evidence of his seminal book, ‘Fuzzy Thinking: The New Science of Fuzzy Logic’, ISBN 978-0006547136, his interest and acumen in control engineering, statistics and probability theory is buried deep within a swamp of Zen wisdom and anti-Western polemic that is guaranteed to turn off the majority of jobbing statisticians.

9 Back in the 1930s, wags at University College London saw fit to suggest that the collective noun for statisticians should be ‘a quarrel’.


140 Comments
John Mauer
October 28, 2017 9:15 am

Interesting to see a discussion of the ludic fallacy vs. the precautionary principle, both advocated by Taleb.

Trebla
Reply to  John Mauer
October 28, 2017 7:28 pm

No offence meant, but while reading this article, the word “pedantry” kept popping into my mind.

Paul Penrose
Reply to  Trebla
October 28, 2017 8:00 pm

Quite so.

John Ridgway
Reply to  Trebla
October 29, 2017 2:01 am

No offence taken, Trebla. The pedantry is in the philosophy. Don’t shoot the messenger. 🙂

Michael S. Kelly
Reply to  Trebla
October 29, 2017 10:13 am

You accusing him of child molestation? Quick, file a libel suit in DC court!

October 28, 2017 9:15 am

All probability is conditional on the premises held by and the information available to the person doing the evaluation. Decisions are made by doing cost/benefit analyses and are subject to uncertainty because of our limited state of being. Reality is what it is, regardless of our perceptions. The better we align our perceptions to reality, the better we will be.

george e. smith
Reply to  cdquarles
October 28, 2017 11:21 am

In optics, virtual is the opposite of real. A “virtual image” is something you can see even though no radiation actually passes through that image, as happens with a real image. You can’t project a virtual image onto a screen.

So “virtually certain”, may not mean what it is loosely used to mean. It may be quite unreal.

I know of at least one theory for a theoretical derivation of one of the fundamental constants of Physics; the Fine Structure Constant (alpha). This theory predicted the value of the fine structure constant to less than half of the standard deviation for the best experimental measurement of that constant; basically parts in 10^8.

Yet that theory was subsequently shown to be pure numerical Origami, and had NO input data or any other connection to anything in the real universe, yet it got the correct value.

So virtually certain is not to be believed either.

G

Reply to  george e. smith
October 29, 2017 4:29 am

A “virtual image” is something you can see even though no radiation actually passes through that image
1st LoT. You can see. Radiation actually passes through that image.

Latitude
October 28, 2017 9:27 am

“Fortunately, polluting the gene pool was not our main priority at the time.”

great line John!…..made me laugh!

K. Kilty
October 28, 2017 9:27 am

A “quarrel of statisticians”! Excellent.

Reply to  K. Kilty
October 29, 2017 11:31 am

“Back in the 1930s wags at the University College London saw fit to suggest that the collective noun for statisticians should be ‘a quarrel’.”

As in “a murder of crows” and “a charm of butterflies”.

Mike
Reply to  ALLAN MACRAE
October 30, 2017 5:12 am

‘…collective noun for statisticians should be ‘a quarrel’.”

Love it!

October 28, 2017 9:29 am

What’s worse is that what the statisticians are debating over is poorly defined and poorly measured, so the inputs to the statistical tests are themselves not very determinate.
Whether there has been a warming trend (and what correlates to that change in temperature) depends on just how robust GISTEMP or HadCRUT are, or collections of proxies for a longer time frame. There are such large error bands and selection biases in these databases that it is like trying to determine whether a die is loaded in a rather dark room, with the possibility that some participants are palming the die.

Phoenix44
Reply to  Tom Halla
October 29, 2017 3:26 am

Whilst not knowing how many sides the die has

Editor
October 28, 2017 9:32 am

John, thanks for a most interesting post. A comment. You say:

The belief that casino hall statistics can model the uncertainty in the real world has been dubbed the ‘ludic fallacy’ by Nassim Taleb.7

Since all the casino halls I know of are most assuredly in the real world, I fear that this statement makes no sense … what am I missing?

w.

Reply to  Willis Eschenbach
October 28, 2017 9:38 am

There’s a difference between uncertainty and a quantifiable statistical distribution.

george e. smith
Reply to  co2isnotevil
October 28, 2017 11:28 am

Statistics is all about the properties of algorithms. Those algorithms can be applied to ANY finite set of finite real numbers. It is not about the properties of those numbers.

So the meaning of statistical computations is simply what the algorithm designers have defined it to mean.

Nothing in the real universe is even aware of statistics, let alone pays any heed to it.

G

Reply to  george e. smith
October 28, 2017 11:52 am

“Nothing in the real universe is even aware of statistics, let alone pays any heed to it.”

Aren’t people in the real Universe?

Statistical distributions define Quantum Mechanics and this is definitely in the real Universe. What about the Uncertainty Principle?

But as in the gambling case, each possible outcome has a finite probability of occurring. Unless we’re considering the probability of a 1000:1 return from one bet on a roulette wheel, then we’re talking about consensus climate science …

Phoenix44
Reply to  co2isnotevil
October 29, 2017 3:31 am

It is only quantifiable if you know how many quantities there are. As Taleb points out, until we knew there were black swans, the probability of seeing a black swan was zero. Yet they existed.

The trouble with climate science is that it uses its potential ignorance to “prove” its probable correctness. That is a problem not solely of climate science but of other areas too. If you only know say 10% of the numbers on the die you are throwing, your probability calculations about what numbers will be thrown are going to be very wrong.

Reply to  co2isnotevil
October 29, 2017 10:11 am

co2: “Statistical distributions define Quantum Mechanics and this is definitely in the real Universe.”

Like the roll of the dice, we have no way to deal with the finest grain necessary to accurately measure the deterministic nature of the problem of one toss. We can establish probability of tossing dice, but if we had the ability to see the dice in ultra slow motion with the torque applied to them in their orientation on release, how the edges touch the table surface at the angles they do, the speed of the touch to determine how many moves it makes before coming to rest, then place our bets in the nanoseconds before the result…

Statistical distributions “uncover” quantum mechanics, but the infinitesimal details are not available to us (yet?) to determine it in fine grain. The ideal gas laws are similar. We can arrive at a reliable, excellent equation for them as a gross statistical matter (measuring gross relations), but are not privy to the actual data of the individual molecular collisions and their angles and energies that make up their deterministic behavior. Nevertheless, there are real, detailed, multibillions of actions at play.

George Smith is correct. The universe doesn’t care about statistics. Statistics is rather a tool for us to handle our uncertainty about the fine grain of what is going on. It is a linear thinker’s idea that the world itself is variable given identical circumstances and that we are measuring this variability with statistics. Thought experiment: we know in all cases that we have gaps in our knowledge of the issue we are dealing with. This gives uncertainty. Our statistics give us a measure of this uncertainty. It is elitist thought to conclude that, since we have uncertainty, the world itself must be variable in its behavior.

Reply to  Gary Pearse
October 29, 2017 10:43 am

Gary,

My point has just been that there’s a difference between uncertainty and a statistical distribution and that statistical distributions are real and valid representations of reality.

Uncertainty is exactly what GS and you are saying. But this is not the nature of a statistical distribution where there may be uncertainty about a single event, but all possibilities are none the less valid. In the case of pure uncertainty, such as we see all over climate science, there’s only one possible correct result and the infinite number of other results are impossible.

BTW, it seems to me that the largest uncertainties I see are when measurements from different instruments are cross calibrated. In this case, the law of large numbers is irrelevant and no value of N will make this ‘average away to zero’.

sy computing
Reply to  Gary Pearse
October 29, 2017 11:02 am

Like the roll of the dice, we have no way to deal with the finest grain necessary to accurately measure the deterministic nature of the problem of one toss.

Interestingly, a philosopher/mathematician of the 19th C with your same last name made an argument that we can.

Pick any side of the die, doesn’t matter, say “7”. The odds of the die falling on that side are always going to be 1 in 2 (50%) for every roll.

Either it will, or it won’t.

[?? .mod]

sy computing
Reply to  co2isnotevil
October 29, 2017 3:53 pm

“[?? .mod]”

Am I gunning for another rebuking?

[Nay, nay. Merely not understanding your thoughts, your logic trail there. Thus, others likely will not understand it either. .mod]

sy computing
Reply to  co2isnotevil
October 29, 2017 6:24 pm

“[Nay, nay. Merely not understanding your thoughts, your logic trail there. Thus, others likely will not understand it either. .mod]”

That’s not unusual, through every fault of mine own…I’m an odd sort of man…

See Charles S. Pierce, The Philosophical Writings of Pierce, ed. Justus Buchler, (New York, NY: Dover Publications, Inc.), chapters 13 and 14.

Yes, after all these years I botched his name…unbelievable. Anyway…

He argues that, e.g., in a barrel of 5 white balls with 1 blue, the probability of reaching in and grabbing the blue ball isn’t 1 in 6, as most would propose. Rather, it’s 1 in 2, 50% because, either you will get the blue ball or you won’t for each instance you try, which is key.

The evaluation of Truth in this case is all about each individual attempt at grabbing the ball, not the overall attempts. The overall attempts give a murky conclusion, e.g., 1 in 6 attempts will get the blue ball. Well that may or may not be true, depending. For every set of 6 attempts it’s logically possible to pull the blue ball 1, 2, 3, 4, 5 or even 6 times for each set.

But for each *individual* attempt (which it can be argued is what really matters in the real world for most applications of logic) the only possible possibilities are 1 in 2. Either one *will* get the blue ball or one *will not*.

Elegant…or so it seems to me!

John Ridgway
Reply to  Willis Eschenbach
October 28, 2017 10:07 am

Sorry, Willis. Perhaps I should have said, “The belief that casino hall statistics can model all the uncertainties in the real world…”

The point I was trying to make is that not all real-world problems are amenable to the theories that were developed to explain the vagaries of the gaming table.

Reply to  John Ridgway
October 28, 2017 10:24 am

There’s no uncertainty in a Casino. It’s absolutely certain that the house will win in the long run. Again, there’s a big difference between a statistical distribution where all outcomes are possible with distinct probabilities and the uncertainty surrounding the case where only one outcome among many seemingly plausible choices is possible.
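To put a number on that certainty, here is the back-of-envelope arithmetic (standard American-roulette figures; a minimal sketch, nothing more):

```python
# Back-of-envelope arithmetic for the house edge (standard American
# roulette: 38 pockets, a straight-up single-number bet pays 35 to 1).
p_win = 1 / 38
payout = 35                  # units won per unit staked on a win
ev = p_win * payout - (1 - p_win)
print(ev)                    # about -0.0526: ~5.3% lost per unit bet
```

The per-spin outcome is uncertain; the sign of the long-run average is not.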

billw1984
Reply to  John Ridgway
October 28, 2017 11:38 am

Does anyone have a reference to the IPCC using “more than half” as their definition of “most”? I think I remember seeing it in one of the IPCC reports (footnotes) but I could not find it awhile back. And do they still use this definition or was that just for one of the reports?

Reply to  John Ridgway
October 28, 2017 1:01 pm

And yet, I’ve seen many a casino go bankrupt. Perhaps uncertainty primarily lies in the places you can’t calculate or can’t know about.

jon
Reply to  John Ridgway
October 28, 2017 6:48 pm

And running a (Casino) business is not the same skill as running a blackjack table.

Don K
Reply to  Willis Eschenbach
October 28, 2017 10:21 am

Willis, I can’t speak for John. But as it happens, I’ve been rereading Taleb’s “The Black Swan”. As I currently understand it, Taleb’s notion is that the Ludic Fallacy has to do with thinking that the variables you understand and control (or at least evaluate) are the only important unknowns. In his casino example (Chapter 9) Taleb points out that the casino’s “gambling” risks are tightly controlled, and that their principal risks are things not easily subject to statistical evaluation. Cheating. The star in their theater act being mauled by his tiger. A disgruntled employee attempting to blow the building up. Another employee failing for years to file a mandatory tax form.

noaaprogrammer
Reply to  Don K
October 28, 2017 10:24 pm

…or a shooter with (as of yet) unknown motive(s) causing a momentary decrease in gamblers at Las Vegas.

Chris Riley
Reply to  Don K
October 29, 2017 10:47 am

Or me, a lifetime winner against the casinos, earned by winning $450 on my first, and last visit to one of those hell-holes years ago. BTW I have a 100% record of having my letters (actually letter) published in “The Economist”.

exSSNcrew
Reply to  Willis Eschenbach
October 28, 2017 11:19 am

The outcome at the gaming table is deterministic. However the real world outcome for a casino includes far more uncertainty than is covered in a probability model for the gaming tables. What Dr. Taleb discusses in “Antifragile” with respect to the ludic fallacy is that probability models for the gaming table exclude so many things that are real world impacts to the casino, such as losses from entertainers being unable to entertain, insider crime, and impacts to the facility. Attempting to create parameterized linear models of climate is similar in that the climate is subject to far more uncertainty and non-linearity than can be modeled with today’s computers, even the best super computers.

Michael S. Kelly
Reply to  Willis Eschenbach
October 29, 2017 10:20 am

I took it to mean that casino games operate with a finite set of well-defined rules, to which mathematics can be applied directly and unambiguously – and that is not the case with the real world. It’s the same common use of “real” as when someone tells a student “Wait until you get out in the real world,” distinguishing between a sheltered, artificial environment, and the world at large.

Mike
Reply to  Willis Eschenbach
October 30, 2017 5:22 am

Willis E.
“I don’t believe God plays with dice”…Albert E. to Niels B. I believe.

October 28, 2017 9:32 am

Speaking of uncertainty, in what Universe is a climate sensitivity with +/- 50% uncertainty ‘settled’? Add all the additional uncertainty from the fabricated RCP scenarios and their case becomes so absurd it would be laughable if not for the trillions of dollars required to mitigate a problem that can’t even exist.

Making this so much worse is that even with the excessive uncertainty, the low end of the IPCC’s presumed range isn’t even low enough to include the maximum sensitivity as limited by known physics!

StephenP
October 28, 2017 9:33 am

The experimental physicist gets his hands dirty, but shows whether a hypothesis works or not. The other two keep clean and warm, but are in the same class as the mediaeval clerics who argued about how many angels could dance on the head of a pin.

george e. smith
Reply to  StephenP
October 28, 2017 11:35 am

The experimental physicist will not live long enough to conduct every experiment one could contrive. The theoretical physicist can gather them all up in a neat pile so you don’t even have to perform all of them.

In the end the theoretical physicist will be correct as often as the experimental physicist is.

So be careful whom you deem to be on the head of the pin.

G

noaaprogrammer
Reply to  george e. smith
October 28, 2017 10:32 pm

“The theoretical physicist can gather them all up in a neat pile so you don’t even have to perform all of them.” What about “none of them,” as in String Theory – at least thus far?

Don K
Reply to  StephenP
October 28, 2017 12:06 pm

“argued about how many angels could dance on the head of a pin.”

The answer is virtually certain to be 17 … or maybe -6. Why do you ask?

StephenP
Reply to  Don K
October 29, 2017 1:25 am

You need to test the output of a theoretical physicist against reality to see if it agrees with reality. As Richard Feynman said, if it doesn’t, then no matter how beautiful your theory is or how clever you are, it is wrong.

StephenP
Reply to  Don K
October 29, 2017 1:27 am

I thought the answer to everything was 97.

schitzree
Reply to  Don K
October 29, 2017 4:59 pm

Nope.

The answer to everything is 42

<¿<

Mike
Reply to  Don K
October 30, 2017 5:28 am

Don K:
As I recall, at the council of Attica it was determined that… ‘an Angel was of such a size that 9 could dance on the head of a pin’ !

F. Leghorn
Reply to  StephenP
October 29, 2017 5:22 am

And they never even (as far as I know) asked the question “how many angels WANT to dance on the head of a pin”.

schitzree
Reply to  F. Leghorn
October 29, 2017 5:08 pm

Shouldn’t it also depend on what KIND of dancing? You could get a lot more angels doing the Jitterbug than the Electric Slide.

And what if they were putting on the smallest production of Swan Lake ever?
Then you’d be looking for choreography and talent over pure numbers of dancers.

~¿~

Curious George
October 28, 2017 9:35 am

Scientific progress by a popular vote, IPCC style. What does it have to do with science, philosophy, or statistics? Nothing. But they are good at cashing their salaries.

BallBounces
October 28, 2017 9:37 am

Very thought-provoking. Thank you.

October 28, 2017 9:51 am

See IPCC AR5 TS 6, Key Uncertainties.

The large amount of uncertainty in some extremely important areas is very unsettling, scientifically speaking.

Reply to  nickreality65
October 28, 2017 10:46 am

They must add excessive uncertainty to provide wiggle room, otherwise, they have no way to make a case for a climate sensitivity large enough to justify their existence.

Yes, it’s very unsettling.

The Reverend Badger
October 28, 2017 9:51 am

You correctly show how the IPCC is “fudging it”. We know why; that is one of their purposes. It’s a political/ideological exercise, NOT a proper scientific enquiry. The real question is how it can be altered, and who is going to do it?

October 28, 2017 9:52 am

IPCC is a political organisation and their science is what the member countries say it is.

David in Cal (Fellow, Casualty Actu
October 28, 2017 9:55 am

There are uncertainties in nature and uncertainties in the mind. An example of the former is a roulette wheel: we fully understand it, but the value it produces varies with each spin. Our uncertainty is what number will come up on the next spin. Much uncertainty in climate science is of the latter type. The IPCC says climate sensitivity is highly likely to be in the range of 1.5 – 4.5 deg C. This uncertainty is a measure of our ignorance. CS doesn’t vary randomly between 1.5 and 4.5; it presumably is a single fixed number, which has probability 1.0. But we don’t know the true value.

Note that this type of uncertainty has no statistical meaning. There’s no random variable. There’s no conceivable way to do repeated trials.
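The distinction can be sketched in a few lines of Python (the 1.5 – 4.5 deg C interval is the IPCC's stated range; everything else here is illustrative):

```python
import random

random.seed(0)

# Aleatory uncertainty: a fair European roulette wheel (37 pockets).
# The mechanism is fully understood; only the next spin is unknown,
# and repeated trials are well-defined.
spins = [random.randrange(37) for _ in range(10_000)]
mean_spin = sum(spins) / len(spins)  # long-run average approaches 18

# Epistemic uncertainty: a single fixed but unknown constant.
# There is no random variable to sample; an interval like this can only
# express a state of knowledge, not a frequency of outcomes.
sensitivity_low, sensitivity_high = 1.5, 4.5  # deg C
midpoint = (sensitivity_low + sensitivity_high) / 2
# No number of "repeated trials" narrows this interval; only better
# physical understanding does.

print(mean_spin, midpoint)
```

Only the first kind of uncertainty shrinks as you collect more samples; the second kind shrinks only as knowledge improves.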

Solomon Green
Reply to  David in Cal (Fellow, Casualty Actu
October 29, 2017 7:19 am

Agreed but do we even know that “sensitivity” as defined by the Climate cabal exists?

Griff
October 28, 2017 9:59 am

This is the best brief overview of philosophy I know…

Reply to  Griff
October 28, 2017 12:33 pm

😎
Never heard that before.
I’d heard of “fluid thinking” but I never knew what the fluid was.

Warren Blair
Reply to  Griff
October 29, 2017 1:30 am

Thanks for that Griff.
Pity about Eric Idle’s dark plans for anyone that disagrees with him.
And moreover what about antiestablishmentarianism which the Pythons made great!
Man made global warming is the establishment!

October 28, 2017 10:17 am

“Consensus” belongs to religion. Traditionally the church answered questions about the Unknown to fight uncertainty (a consequence of our consciousness). Religion is a firewall against existential fears. Religious theses become true by consensus (plus authority and threats), which is the reason that heretics must be silenced. Scientific theses become true by demonstration: experiments, logical reasoning or deduction.
see: http://www.davdata.nl/math/mentalclimate.html
see: http://www.davdata.nl/math/mentalclimate.html

Reply to  David
October 28, 2017 3:33 pm

Whatever “religions” Man conceives of to explain “god”, the eternal reality is that God is a “consensus” of One.
KJV Isaiah 55:9 For as the heavens are higher than the earth, so are my ways higher than your ways, and my thoughts than your thoughts.

Science or Fiction
Reply to  David
October 28, 2017 4:18 pm

Let us be clear:

«Let’s be clear: the work of science has nothing whatever to do with consensus. Consensus is the business of politics. Science, on the contrary, requires only one investigator who happens to be right, which means that he or she has results that are verifiable by reference to the real world. In science consensus is irrelevant. What is relevant is reproducible results. The greatest scientists in history are great precisely because they broke with the consensus. There is no such thing as consensus science. If it’s consensus, it isn’t science. If it’s science, it isn’t consensus. Period.»
Michael Crichton

“I regard consensus science as an extremely pernicious development that ought to be stopped cold in its tracks. Historically, the claim of consensus has been the first refuge of scoundrels; it is a way to avoid debate by claiming that the matter is already settled. Whenever you hear the consensus of scientists agrees on something or other, reach for your wallet, because you’re being had.”
– Michael Crichton

Alan D McIntire
October 28, 2017 10:18 am

In a similar vein, I remember a line from Raymond Smullyan asking,
“Are you a Mathematician or a Physicist?”
Here’s the test:
You are given a wood stove with kindling, an empty pot, a pump, and a box of matches. How do you get boiling water?
Both the Mathematician and Physicist would say, ” Fill the pot with water from the pump, put the pot on the stove, light a fire with a match, and wait for the water to boil.”

The second part of the test is, “What if you have a pot of water on an unlit stove, some kindling in the stove, and a box of matches?”
The Physicist would say, “Take a match from the box, light the kindling, and wait for the water to boil.”
The Mathematician would say, “Throw out the water. You now have the previous problem which was already solved.”

george e. smith
Reply to  Alan D McIntire
October 28, 2017 11:39 am

Not necessarily.

That water in the pot may be the last drops that could be pumped from that pump, so if you throw it out you have no water to boil.

G

Mike McMillan
Reply to  george e. smith
October 28, 2017 12:43 pm

Isn’t that what they call an externality? The reason the pump is dry is because extensive irrigation pumping has depleted the local aquifer, thus moving the problem from the fields of mathematics and physics into the field of economics.

noaaprogrammer
Reply to  george e. smith
October 28, 2017 10:41 pm

… and the fields of the farmers all dry up as we all go to heaven on a little nanny goat.

Stevan Reddish
October 28, 2017 10:19 am

IPCC: We are certain that we agree that CO2 is causing harmful climate change. Our detractors do not matter because they cannot agree on the degree of certainty of our error.

SR

Stevan Reddish
Reply to  Stevan Reddish
October 28, 2017 11:41 am

Detractor #1: The IPCC is wrong.

Detractor #2: The IPCC is very wrong!

Detractor # 3: The IPCC is very, very wrong!

IPCC: See – our detractors are uncertain in their judgement of the degree of our wrongness. Their level of counter consensus cannot compete with our level of consensus.

SR

Reply to  Stevan Reddish
October 28, 2017 11:57 am

The fact that the IPCC is as wrong as they are is why so many refuse to believe that they are as wrong as they are. The combined effect of their errors and misrepresentations is the difference between an effect that might be inconvenient for some (over-hyped claims of doom notwithstanding) and an effect that in the worst case is only somewhat beneficial for all.

October 28, 2017 10:38 am

If philosophy had been correctly applied to ‘climate change’ it would have fallen at the first fence.

Instead we have thousands of wannabe physicists chasing numbers around.

Philosophy sets the limits of thought.

Reply to  Leo Smith
October 28, 2017 10:47 am

If science had been correctly applied …

Reply to  co2isnotevil
October 28, 2017 12:13 pm

science=natural philosophy

Reply to  co2isnotevil
October 29, 2017 7:41 am

co2isnotevil: Correctly applied? It can’t even be agreed upon which tool to use, never mind using it correctly.

Does the Law of Large Numbers apply to global temperature measurements? Climate scientists say yes, because they believe it gives them the precision they need. Skeptics say no, because the same thing is not being measured many times at the same place and time.

Is the interpolation of temperature data between points across an area with no data a valid method? Climate scientists say yes, because it gives them “data” where they “need” it to create the grids. Skeptics say no, because an interpolation is not data, and so can’t be used as such just because it gives one what they “need.”

Whether a mathematical tool CAN be applied to a set of numbers doesn’t mean it gives one useful information about that set.
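The Law of Large Numbers point above can be sketched directly (the numbers are hypothetical, the statistics are standard): averaging many readings of the *same* quantity shrinks the error; averaging single readings of *different* quantities only estimates the ensemble mean, and makes no individual reading more precise.

```python
import random

random.seed(1)

def measure(true_value, noise=0.5):
    """One thermometer reading with independent random error."""
    return true_value + random.gauss(0, noise)

n = 10_000

# Case 1: the SAME quantity measured n times. Averaging n independent
# readings shrinks the standard error like 1/sqrt(n).
true_temp = 15.0
avg_same = sum(measure(true_temp) for _ in range(n)) / n

# Case 2: DIFFERENT quantities, each measured once. The average estimates
# the ensemble mean, but each site is still known only to +/- one noisy
# reading -- no precision is gained about any individual site.
site_temps = [random.uniform(-30, 40) for _ in range(n)]
readings = [measure(t) for t in site_temps]
avg_diff = sum(readings) / n

print(round(avg_same, 3), round(avg_diff, 3))
```

Which of these two cases a global temperature average resembles is exactly the point in dispute.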

Reply to  James Schrumpf
October 29, 2017 8:45 am

JS,
The concept of homogenization is wrong not because interpolation is not data, but because Hansen/Lebedeff homogenization assumes a normal distribution of sites and this is not a property of the data used.

Notice how Hansen has his fingers in this one in addition to the feedback fubar.
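On the interpolation side of this exchange, a minimal sketch (hypothetical station values; plain linear interpolation, not the Hansen/Lebedeff scheme itself) shows why infilled grid points carry no independent information: every interpolated value is a deterministic function of the endpoint readings.

```python
def interpolate(x0, y0, x1, y1, x):
    """Linear interpolation between two known points."""
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# Two real readings -- (position, temperature), hypothetical values.
station_a = (0.0, 14.0)
station_b = (100.0, 18.0)

# "Infill" five grid points between the stations. Each value is fully
# implied by the two real readings; averaging these in cannot reduce
# uncertainty, because they add no new observations.
grid = [interpolate(*station_a, *station_b, x) for x in (20, 40, 50, 60, 80)]

print(grid)
```

Treating such values as data inflates the apparent sample size without adding a single measurement.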

gnomish
Reply to  Leo Smith
October 28, 2017 10:58 am

and thousands of wannabe philosophers expounding their philosophy on the worthlessness of philosophy – but it’s ok- cuz numerology will guide you thru the concomitant ignorance- stochastic processes being so predictable, you understand. that’s why the i ching was replaced – it could only handle 64 possibilities, 63 of which would not happen on any occasion- but they were all in Teh Book. now we’ve got chi squared!
(cause and effect be damned!)

george e. smith
Reply to  Leo Smith
October 28, 2017 11:42 am

Science is about real (observable) things: it is not about thought.

G

Reply to  george e. smith
October 28, 2017 12:15 pm

there are no real observable things until you have an observer and a language to describe them in.

sy computing
Reply to  george e. smith
October 28, 2017 4:23 pm

there are no real observable things until you have an observer and a language to describe them in.

What qualifies as an “observer”? What qualifies as a “language”?

noaaprogrammer
Reply to  george e. smith
October 28, 2017 10:45 pm

…and if the observation changes what is being observed…?

sy computing
Reply to  Leo Smith
October 28, 2017 4:35 pm

If philosophy had been correctly applied to ‘climate change’ it would have fallen at the first fence.

Could you expound on this please?

gnomish
Reply to  sy computing
October 29, 2017 12:05 pm

in the beginning there was the law of cause and effect. the cause always precedes the effect. the effect always follows the cause.
philosophy. it’s not just for breakfast.

Editor
October 28, 2017 11:49 am

My personal viewpoint agrees with John P. A. Ioannidis who has famously said:

“… for many current scientific fields, claimed research findings [consensus findings] may often be simply accurate measures of the prevailing bias.”

Don K
Reply to  Kip Hansen
October 28, 2017 12:10 pm

Why is it that Ioannidis can criticize medical analytic and statistical practices and pretty much everyone agrees that he has a point? Whereas anyone who raises questions about the even sloppier practices in climate science and related fields gets called names?

The Reverend Badger
Reply to  Don K
October 28, 2017 1:24 pm

Because the alternative causes the scam to collapse so they have no alternative, they have to act like that. This, fundamentally, is the weakness that can be exploited.

Editor
Reply to  Don K
October 28, 2017 2:57 pm

Don K ==> The difference is between real science and the science involved in Modern Scientific Controversies, see my linked series.
CliSci is not the only field in which real science is thrown under the bus…..

Phoenix44
Reply to  Don K
October 29, 2017 3:39 am

Because it’s about intentions, not actions. Thus lots of people (rightly) criticise Pharma companies for the flaws in their research but refuse to criticise AGW scientists for exactly the same flaws, because Pharma is evil and climate scientists are good.

Climate science is the only field of science I know where every paper has been right and without fault – except where it has understated how bad things are.

donald penman
October 28, 2017 11:57 am

The aim of the IPCC, and of those who think that AGW is correct, is to show that there has been a change in average global temperature (or even in local average temperatures) caused by increased CO2. However, it is considered acceptable that this change can be proved by altering historical data because of errors; statistics rules, OK. It is acceptable to rely on statistics to work out the reality of atoms, which we cannot observe, but it is not acceptable to rely on statistics to describe the reality of things that we can observe and have observed, and observations tell us that the earth has changed very little in the last hundred years.

Bruce Cobb
October 28, 2017 12:23 pm

Statistically speaking, what we need to know is the degree of likelihood that the IPCC are made up of a bunch of smoke-blowing, gravy-train riding wankers who have no clue about either climate or what actual science is. It would appear to be in the 99-100% range, so virtually certain.

PiperPaul
Reply to  Bruce Cobb
October 28, 2017 2:47 pm

97% is a much more believable number, though.

tom0mason
October 28, 2017 12:29 pm

The models that are the kingpin of the IPCC’s endeavors have no uncertainties; they just have unreal physical aspects and misplaced degrees of confidence* in believing the models’ sophistication is able to mimic nature’s chaotic, pseudo-cyclic variations mixed with extraterrestrial random events (mostly solar in origin).

*In ‘climate science™’, belief in degrees of confidence for theoretical outcomes (and as evidence of reality) appears to have overtaken actual observations as the preferred method of verification.