So What Happened to Expertise with the IPCC?

Guest essay by John Ridgway

It was late evening on April 19, 1995, when the crestfallen figure of McArthur Wheeler could be found slumped over a table in a Pittsburgh Police Department interrogation room. Forlorn and understandably distressed by his predicament, he could be heard muttering in dumbfounded astonishment at his arrest. “I don’t understand it,” he would repeat, “I wore the juice, I wore the juice!”

Wheeler’s bewilderment was indeed understandable, since he had followed his expert accomplice’s advice to the letter. Fellow felon, Clifton Earl Johnson, was well acquainted with the use of lemon juice as an invisible ink. Surely, reasoned Clifton, by smearing such ‘ink’ on the face, one’s identity would be hidden from the plethora of CCTV cameras judiciously positioned to cause maximum embarrassment to any would-be bank robber. Sadly, even after appearing to have confirmed the theory by taking a Polaroid selfie, Mr Wheeler’s attempts at putting theory into practice had proven a massive disappointment. Contrary to expectation, CCTV coverage of his grinning overconfidence was quickly broadcast on the local news and he was identified, found and arrested before the day was out.

Professor David Dunning of Cornell University will tell you that this sad tale of self-delusion inspired him to further investigate a cognitive bias known as Illusory Superiority, that is to say, the predisposition in us all to think we are above average.1 After collaboration with Justin Kruger of New York University’s Stern School of Business, Professor Dunning was then in a position to declare his own superiority by announcing to the world the Dunning-Kruger effect.2 The effect can be pithily summarised by saying that sometimes people are so stupid that they do not know they are stupid – a point that was considered so obvious by some pundits that it led to the good professor being awarded the 2000 Ig Nobel Prize for pointless research in psychology.3

The idea behind the Dunning-Kruger effect is that those who lack understanding will also usually lack the metacognitive skills required to detect their ignorance. Experts, on the other hand, may have gaps in their knowledge but their expertise provides them with the ability to discern that such gaps exist. Indeed, according to Professor Dunning’s research, experts are prone to underestimate the difficulties they have overcome and so underappreciate their skills. Modesty, it seems, is the hallmark of expertise.

And yet, the current perception of experts is that they persistently make atrocious predictions, endlessly change their advice, and say whatever their paymasters require. This paints a picture of experts whose superiority may be as delusory as that demonstrated by bank robbers gadding about with lemon juice smeared on their faces. If so, they may be fooling no-one but themselves. However, the real problem with experts is that, no matter what one might think of them, we can’t do without them. We would dearly wish to base all our decisions upon solid evidence but this is often unachievable. And where there are gaps in the data, there will always be the need for experts to stick their fat, little expert fingers into the dyke of certitude, lest we be overwhelmed by doubt. Nowhere is this service more important than in the assessment of risk and uncertainty, and nowhere is the assessment of risk and uncertainty more important than in climatology, particularly in view of the concerns for Catastrophic Anthropogenic Global Warming (CAGW). For that reason, I was more than curious as to what the experts of the Intergovernmental Panel on Climate Change (IPCC) had to say in their Fifth Assessment Report (AR5), Chapter 2, “Integrated Risk and Uncertainty Assessment of Climate Change Response Policies”. Does AR5 (2) reassure the reader that the IPCC’s position on climate change is supported by a suitably expert evaluation of risk and uncertainty? Does it even reassure the reader that the IPCC knows what risk and uncertainty are, and knows how to go about quantifying them? Well, let us see what my admittedly jaundiced assessment made of it.

But First, Let Us State the Obvious

I should point out that the credentials carried by the authors of AR5 (2) look very impressive. They are most definitely what you might call, ‘experts’. If it were down to merely examining CVs, I might be tempted to fall to my knees chanting, “I am not worthy”. In fact, not to do so would incur the ‘warmist’ wrath: Who am I to criticise the findings of the world’s finest experts?4 Nevertheless, if one ignores the inevitable damnation one receives whenever deigning to challenge IPCC output, there are plenty of conclusions to be drawn from reading AR5 (2). The first is that the expert authors involved, despite their impressive qualifications, are not averse to indulging in statements of the crushingly obvious. Take, for example:

“There is a growing recognition that today’s policy choices are highly sensitive to uncertainties and risk associated with the climate system and the actions of other decision makers.”

One wonders why such recognition had to grow. Then there is:

“Krosnick et al. (2006) found that perceptions of the seriousness of global warming as a national issue in the United States depended on the degree of certainty of respondents as to whether global warming is occurring and will have negative consequences coupled with their belief that humans are causing the problem and have the ability to solve it.”

Is there no end to the IPCC’s perspicacity?

Now, I appreciate that my sarcasm is unappealing, but this sort of padding and waffle would sit far more comfortably in an undergraduate’s essay than it does a document of supposedly world-changing importance. It’s not a fatal defect but there is quite a lot of it and it adds little value to the document. Fortunately, however, there is plenty within AR5 (2) that is of more substance.

The IPCC Discovers Psychology

To be honest, what I was really looking for in a document that includes in its title the phrase, ‘Integrated Risk and Uncertainty Assessment’ was a thorough account of the concepts of risk and uncertainty and a reassurance that the IPCC is employing best practice for their assessment. What I found was a great deal on the psychology of decision-making (based largely upon the research of Kahneman and Tversky) highlighting the distinction to be made between intuitive and deliberative thinking. In particular, much is made of the importance of loss aversion and ambiguity aversion in influencing organisations and individuals who are contemplating whether or not to take action on climate change. In the process, some pretty flaky theories (in my opinion) are expounded, such as the following assertion regarding intuitive thinking in general:

“. . .for low-probability, high-consequence events. . .intuitive processes for making decisions will most likely lead to maintaining the status quo and focusing on the recent past.”

And specifically on the subject of loss aversion:

“Yet, other contexts fail to elicit loss aversion, as evidenced by the failure of much of the global general public to be alarmed by the prospect of climate change (Weber, 2006). In this and other contexts, loss aversion does not arise because decision makers are not emotionally involved (Loewenstein et al., 2001).”

Well, that doesn’t accord with my understanding of loss aversion, but I don’t want to get drawn into a debate regarding the psychology of decision-making. It’s a fascinating subject but the IPCC’s interest in it seems to be purely motivated by the desire to psychoanalyse their detractors and to take advantage of people’s attitudes to risk and uncertainty in order to manipulate them into supporting green policy. For example, whilst they maintain that:

“Accurately communicating the degree of uncertainty in both climate risks and policy responses is therefore a critically important challenge for climate scientists and policymakers.”

they go on to say:

“. . .campaigns looking to increase the number of citizens contacting elected officials to advocate climate policy action should focus on increasing the belief that global warming is real, human-caused, a serious risk, and solvable.”

This leaves me wondering if they want to increase understanding or are really just looking to increase belief. I suspect that AR5’s concentration on the psychology of risk perception reveals that the IPCC’s primary interest is in the latter. The IPCC seems more interested in exploitation than in education. For example, the authors explain how framing decisions so as to make the green choice the default option takes advantage of a presupposed psychological predilection for the status quo. This may be so, but this is a sales and marketing ploy; it has nothing to do with ‘accurately communicating the degree of uncertainty’.

Given the IPCC’s agenda, such emphasis is understandable, but once one has removed the waffle and the psychology lecture from AR5 (2), what is left? Remember, I’m still looking for a good account on the concepts of risk and uncertainty and how they should be quantified. Such an account would provide a good indication that the IPCC is consulting the right experts.

Another Expert, Another Definition

Let us look at the document’s definitions for risk and uncertainty:

“‘Risk’ refers to the potential for adverse effects on lives, livelihoods, health status, economic, social and cultural assets, services (including environmental), and infrastructure due to uncertain states of the world.”

Nothing too controversial here, although I would prefer to see a definition that emphasises that risk is a function of likelihood and impact and may be assessed as such.

For ‘uncertainty’, we are offered the following definition:

“‘Uncertainty’ denotes a cognitive state of incomplete knowledge that results from a lack of information and / or from disagreement about what is known or even knowable. It has many sources ranging from quantifiable errors in the data to ambiguously defined concepts or terminology to uncertain projections of human behaviour.”

I find this definition more problematic, since it only addresses epistemic uncertainty (a ‘cognitive state of incomplete knowledge’). The document therefore fails to explain how the propagation of uncertainty may take into account an incomplete knowledge of a system (the epistemic uncertainty) combined with its inherent variability (aleatoric uncertainty). The respective roles of epistemic and aleatoric uncertainty are a central and fundamental theme of uncertainty analysis that appears to be completely absent from AR5 (2).5
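To make the omitted distinction concrete, here is a toy two-level Monte Carlo sketch of my own devising (nothing like it appears in AR5, and every number in it is an assumption for illustration): the outer loop samples an epistemically uncertain parameter, while the inner loop samples the aleatoric variability that would remain even if the parameter were known perfectly.

```python
import random
import statistics

# Toy separation of the two kinds of uncertainty: the outer loop draws the
# uncertain parameter mu (epistemic - we simply do not know it), the inner
# loop draws the process noise (aleatoric - random even if mu were known).
random.seed(1)

n_epistemic, n_aleatoric = 200, 2000
inner_means = []
for _ in range(n_epistemic):
    mu = random.uniform(0.5, 1.5)  # epistemic: the parameter itself is uncertain
    samples = [random.gauss(mu, 0.3) for _ in range(n_aleatoric)]
    inner_means.append(statistics.fmean(samples))

# The spread of the inner means reflects epistemic uncertainty alone: it would
# not shrink with more aleatoric samples, only with better knowledge of mu.
print(round(statistics.stdev(inner_means), 3))
```

The point of the nesting is that the two uncertainties answer to different remedies: more data shrinks the inner (aleatoric) scatter of each estimate, but only better knowledge of the parameter shrinks the outer spread.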

Later within the document one finds that:

“These uncertainties include absence of prior agreement on framing of problems and ways to scientifically investigate them (paradigmatic uncertainty), lack of information or knowledge for characterizing phenomena (epistemic uncertainty), and incomplete or conflicting scientific findings (translational uncertainty).”

Not only does this differ from the previously provided definition for uncertainty (a classic example of the ‘ambiguously defined concepts or terminology’ that the authors had been so keen to warn against), it also introduces terms that will be unfamiliar to all but those who have trawled the bowels of the sociology of science. What I would much rather have seen was a definition that one could use to measure uncertainty, for example:

“For a given probability distribution, the uncertainty H = – Σ(pi ln(pi)), where pi is the probability of outcome (i).”

Despite promising to explain how uncertainty may be quantified, there is nothing in AR5 (2) that comes close to doing so. Fifty-six pages of expert wisdom are provided with not a single formula in sight. I suspect that I am not as impressed as the authors expected me to be.
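For what it is worth, the entropy measure I quoted above is trivial to put to work. A minimal sketch (my own illustration, using natural logarithms so the result is in nats):

```python
import math

# Shannon entropy of a discrete distribution: H = -sum(p_i * ln(p_i)).
def entropy(probs):
    """Entropy (in nats) of a discrete probability distribution."""
    assert abs(sum(probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    return -sum(p * math.log(p) for p in probs if p > 0)

# Uncertainty is maximal for a uniform distribution and shrinks as the
# distribution concentrates on a single outcome.
print(entropy([0.25, 0.25, 0.25, 0.25]))  # ln(4) ~= 1.386
print(entropy([0.97, 0.01, 0.01, 0.01]))  # ~= 0.168
```

Nothing this elementary demands fifty-six pages, which is rather the point: a quantitative treatment of uncertainty was both possible and absent.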

Finally, I see nothing in the document that comes near to covering ontological uncertainty, i.e. the concept of the unknown unknown. This is very telling since it is ontological uncertainty that lies at the heart of the Dunning-Kruger effect. Could it be that the good folk of the IPCC are unconcerned by the possibility that their analytical techniques lack metacognitive skill?

However, if the definitions offered for risk and uncertainty left me uneasy, this is nothing compared to the disquiet I experienced when reading what the authors had to say about uncertainty aversion:

“People overweight outcomes they consider certain, relative to outcomes that are merely probable — a phenomenon labelled the certainty effect.”

Embarrassingly enough, this is actually the definition for risk aversion, not uncertainty aversion!6

All of this creates very serious doubts regarding the expertise of the authors of AR5 (2). I’m not accusing them of being charlatans – far from it. But being an expert on risk and uncertainty can mean many things, and those who are experts often specialise in a way that gives them a narrow focus on the subject. This appears to be particularly evident when one looks at what the AR5 (2) authors have to say regarding the tools available to assess risk and uncertainty.

Probability, the Only Game in Town?

A major theme of AR5 (2) is the distinction that exists between deliberative thinking and intuitive thinking. The document (incorrectly, in my opinion) equates this with the distinction to be made between normative decision theory (how people should make their decisions) and descriptive decision theory (how people actually make their decisions). According to AR5 (2):

“Laypersons’ perceptions of climate change risks and uncertainties are often influenced by past experience, as well as by emotional processes that characterize intuitive thinking. This may lead them to overestimate or underestimate the risk. Experts engage in more deliberative thinking than laypersons by utilizing scientific data to estimate the likelihood and consequences of climate change.”

So what is this deliberative thinking that only experts seem able to employ when deciding climate change policy? According to AR5 (2), it entails decision analysis founded upon expected utility theory, combined with cost-benefit analysis and cost-effectiveness analysis. All of these are probabilistic techniques. Unsurprisingly, therefore, when it comes to quantifying uncertainty, AR5 informs us that:

“Probability density functions and parameter intervals are among the most common tools for characterizing uncertainty.”

This is true, but what is common is not necessarily adequate. Anyone who is familiar with the work of IPCC Lead Author, Roger Cooke, will understand the prominence given in AR5 (2) to the application of Structured Expert Judgement methods. These depend upon the solicitation of subjective probabilities and the propagation of uncertainty through the construction of probability distributions. In defence of the probabilistic representation of uncertainty, Cooke has written:7

“Opponents of uncertainty quantification for climate change claim that this uncertainty is ‘deep’ or ‘wicked’ or ‘Knightian’ or just plain unknowable. We don’t know which distribution, we don’t know which model, and we don’t know what we don’t know. Yet, science based uncertainty quantification has always involved experts’ degree of belief, quantified as subjective probabilities. There is nothing to not know.”

I agree that there is no such thing as an unknown probability; probability simply comes in varying degrees of subjectivity. However, it is factually incorrect to state that ‘science based’ uncertainty quantification has always involved degrees of belief quantified as subjective probabilities, and it is a gross misrepresentation to assert that opponents of their use are automatically opposed to uncertainty quantification. Even within climatology one can find scientists using non-probabilistic techniques to quantify climate model uncertainty. For example, modellers applying possibility theory to determine the mapping of parameter uncertainty on output uncertainty have found there to be a 5-fold increase in the ratio as compared to an analysis that used standard probability theory (Held H., von Deimling T.S., 2006).8 This is not a surprising result. Possibility theory was developed specifically for situations of incomplete and/or conflicting data, and the mathematics behind it is designed to ensure that the uncertainty, thereby entailed, is fully accounted for. Other non-probabilistic techniques that have found application in climate change research include Dempster-Shafer Theory, Info-gap analysis and fuzzy logic, none of which get a mention in AR5 (2).9
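To give a flavour of the difference, here is a toy comparison of my own (it is not the Held and von Deimling calculation; the model y = exp(2x) and the triangular description of the parameter are assumptions chosen purely for illustration). The same triangular shape is propagated once as a probability density and once as a possibility distribution:

```python
import math
import random

random.seed(0)

# Model: y = exp(2x), with parameter x described by a triangular shape on
# [0, 1] peaking at 0.5.

# 1) Probabilistic reading: treat the triangle as a density and Monte Carlo it.
ys = sorted(math.exp(2 * random.triangular(0.0, 1.0, 0.5))
            for _ in range(100_000))
p_lo, p_hi = ys[int(0.05 * len(ys))], ys[int(0.95 * len(ys))]  # 90% interval

# 2) Possibilistic reading: the same triangle read as a possibility
#    distribution. Its alpha-cut is an interval; since exp(2x) is monotone,
#    the cut maps through the model endpoint by endpoint.
alpha = 0.1
x_lo = 0.0 + alpha * (0.5 - 0.0)
x_hi = 1.0 - alpha * (1.0 - 0.5)
q_lo, q_hi = math.exp(2 * x_lo), math.exp(2 * x_hi)

print(f"probabilistic 90% interval: [{p_lo:.2f}, {p_hi:.2f}]")
print(f"possibilistic 0.1-cut:      [{q_lo:.2f}, {q_hi:.2f}]")
```

The possibilistic interval comes out wider: possibility theory refuses to let the assumed randomness average itself away, which is precisely the conservatism described above.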

In summary, I find that AR5 (2) places undue confidence in probabilistic techniques and fails woefully in its attempt to survey the quantitative tools available for the assessment of climate uncertainty. At times, it just looks like a group of people who are pushing their pet ideas.

Beware the Confident Expert

The Dunning-Kruger effect warns that there comes a point when stupidity runs blind. Fortunately, however, we will always have our experts, and they are presupposed to be immune to the Dunning-Kruger effect because their background and learning provides them with the metacognitive apparatus to acknowledge and respect their own limitations. As far as Dunning and Kruger are concerned, experts are even better than they think they are. But even if I accepted this (and I don’t) it doesn’t mean that they are as good as they need to be.

AR5 (2) appears to place great store by the experts, especially when their opinions are harnessed by Structured Expert Judgement. However, insofar as experts continue to engage in purely probabilistic assessment of uncertainty, they are at risk of making un-evidenced but critical assumptions regarding probability distributions; assumptions that the non-probabilistic methods avoid. As a result, experts are likely to underestimate the level of uncertainty that is framing their predictions. Even worse, AR5 (2) pays no regard to ontological uncertainty, which is a problem, because ontological uncertainty calls into question the credence placed in expert confidence. All of this could mean that the potential for climate disaster is actually greater than currently assumed. Conversely, the risk may be far less than currently thought. The authors of AR5 (2) wrote at some length about the difference between risk and the perception of risk, and yet they failed to recognise the most pernicious of uncertainties in that respect – ontological uncertainty. In fact, when I hear an IPCC expert say, “There is nothing to not know”, I smell the unmistakable odour of lemon juice.


John Ridgway is a physics graduate who, until recently, worked in the UK as a software quality assurance manager and transport systems analyst. He is not a climate scientist or a member of the IPCC but feels he represents the many educated and rational onlookers who believe that the hysterical denouncement of lay scepticism is both unwarranted and counter-productive.

Notes:

1 This cognitive bias goes by many names, illusory superiority being the worst of them. The bias refers to a delusion rather than an illusion and so the correct term ought to be delusory superiority. However, the term illusory superiority was first used by the researchers Van Yperen and Buunk in 1991, and to this day, none of the experts within the field has seen fit to correct the obvious gaffe. So much for expertise.

2 Kruger J; Dunning D (1999). “Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments”. Journal of Personality and Social Psychology, vol. 77 no. 6, pp: 1121–1134.

3 See http://www.improbable.com/ig/winners.

4 As it happens, I am someone who made a living by (amongst other things) constructing safety cases for safety-critical computer systems. This requires a firm understanding of the concepts of risk and uncertainty and how to form an evidence-based argument for the acceptability of a system, prior to it being commissioned into service. In short, I was a professional forecaster who relied on statistics, facts and logic to make a case. As such, I might even allow you to call me an expert. So, whilst I fully respect the authors of AR5, Chapter 2, I do believe that I am sufficiently qualified to comment on their proclamations on risk and uncertainty.

5 See, for example, “Der Kiureghian A., Ditlevsen O. (2007), Aleatory or epistemic? Does it matter?, Special Workshop on Risk Acceptance and Risk Communication, March 26-27 2007, Stanford University”. If nothing else, please read its conclusions.

6 In expected utility theory, if the value at which a person or organisation would be prepared to sell out rather than take a risk (i.e. the ‘certainty equivalent’) is less than the sum of the probability-weighted outcomes (i.e. the ‘expected value’), then that person or organisation is, by definition, risk averse.
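This definition is easy to make concrete. A sketch assuming a logarithmic utility function (concave, hence risk averse; the utility function, payoffs and probabilities are my own illustrative choices, not anything from AR5):

```python
import math

# A 50/50 gamble, expressed as (probability, payoff) pairs.
outcomes = [(0.5, 50.0), (0.5, 150.0)]

# Expected value: the probability-weighted sum of the payoffs.
expected_value = sum(p * w for p, w in outcomes)

# Certainty equivalent under u(w) = ln(w): invert the expected utility.
expected_utility = sum(p * math.log(w) for p, w in outcomes)
certainty_equivalent = math.exp(expected_utility)  # u^-1(E[u])

print(expected_value)                   # 100.0
print(round(certainty_equivalent, 2))  # 86.6 < 100, hence risk averse
```

Because the certainty equivalent (about 86.6) falls below the expected value (100), a decision maker with this utility function is, by the footnote's definition, risk averse.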

7 Cooke R. M. (2012), Uncertainty analysis comes to integrated assessment models for climate change … and conversely. Climatic Change, DOI 10.1007/s10584-012-0634-y.

8 Held H., von Deimling T.S. (2006), Transformation of possibility functions in a climate model of intermediate complexity. In: Lawry J. et al. (eds), Soft Methods for Integrated Uncertainty Modelling. Advances in Soft Computing, vol 37. Springer, Berlin, Heidelberg.

9 It is not surprising that fuzzy logic is overlooked, since its purpose is to address the uncertainties resulting from vagueness (i.e. uncertainties relating to set membership), and this source of uncertainty seems to have completely escaped the authors of AR5 (2); unless, of course, they are confusing vagueness with ambiguity, which wouldn’t surprise me to be honest.


155 thoughts on “So What Happened to Expertise with the IPCC?”

  1. The Dunning-Kruger effect is alive and well – thanks to IPCC members expertly and repeatedly demonstrating exactly how it functions!

    • How do we know that David Dunning and Justin Kruger didn’t themselves belong to the group of ‘experts’ characterised by what is known as the D-K effect, i.e. the cognitive bias wherein people of low ability suffer from illusory superiority? Or, to put it simply, were these two academics really so much more competent than everyone else at judging the thinking ability of the rest of us?

      • How? Just test their conclusions/predictions.

        … Or were you just asking a rhetorical/sarcastic question?

        Actually, we all fall somewhere along the DKE curve — a spectrum of points between accurate self-awareness and cluelessness. And at different places depending on the topic, I suppose.

      • Personally, with an M.S. in Agriculture – I know a little about a whole lot. Expert at “ground” truth.

      • this is just a further example of the effect of power on the human mind.
        This was well discussed in a book of a similar name.

        In essence, people believe their own BS at some point.

        Brain pathways are opened, as with drug addiction, and can never be closed.

      • “In essence people believe their own BS at some point.”
        … or more often someone else’s BS that they know nothing of, such as the universe is a hologram pulled around by strings ever since it emerged from a singularity to give birth to billions of new black ones, floating around in some other dark stuff.
        To some it is BS, to others beautiful stuff of science, but to the peddlers of it it could be a precious fertiliser good for farming large grants and more often the academia’s life long incomes.

    • Yeap. Climastrological ‘experts’ think they know more physics than a physicist, more chaos theory than a chaos theory expert, more computer science than a computer scientist, more statistics than a statistician, more mathematics than a mathematician and the list goes on and on.

      Funny about that Dunning, though, since psychology is largely a pseudo science and has abysmal reproducibility results.

      And it’s funny about those experts in some of the vaguer fields: they prove to have less expertise than advertised when checked: http://repository.upenn.edu/cgi/viewcontent.cgi?article=1010&context=marketing_papers

      And simply having experts in homeopathy, astrology, alchemy, creationism or climastrology for that matter does not prove they are really above Bozo the clown, no matter what a psychologist says.

      • Mann pontificating on hurricanes, a subject for which he has no expertise, is a classic example of your first paragraph

      • Great quote from the linked paper which explains a lot:

        Mahoney asked 75 reviewers to referee a paper. Two versions of the paper were presented to randomly selected subsamples of reviewers. The papers differed only in the results: one version had results favoring the common wisdom of the day and the other refuted it. A strong bias was found toward accepting the study that agreed with a commonly held hypothesis and rejecting the one that contradicted this hypothesis.

    • The D K effect is psycho babble nonsense.

      There is no mean for this. It’s an epic generalisation of a mythical human, and does not relate to real people whatsoever

      • “It’s an epic generalisation of a mythical human, and does not relate to real people whatsoever”
        —————————-
        I see a lot of ignorant yet boastful people around me, babbling confidently about things they know nothing about, typical of the D K syndrome. They are real, and numerous. If you don’t see any of them “whatsoever”, try to get out of your bubble.

      • They’re called know-it-alls, and we recognized them just fine before Dunning and Kruger’s superfluous self-important pontificating.

      • The “illusory superiority bias” is specific to certain type of person and behavior, NOT the average person. The DK studies were specific to SOME of the test subjects in two distinct categories amongst the test subjects…(who were real people and not mythical at all) and there are numerous validating studies where people were tested/observed and those two groups of people did indeed exhibit this particular cognitive bias more so than those subjects who were NOT in those groups.

        There is nothing “mythical” about the fact that SOME people who are extremely competent actually do underestimate/under-value their own skills and SOME incompetent people overestimate/over value their own. In the studies, the two groups represented the top 1/4th of subjects and the bottom 1/4 th of subjects, and NONE of the studies suggested that ALL of the subjects in both categories suffered from the bias. It also means that more than HALF of the subjects did NOT suffer from it. The studies ALSO indicated that after training, the incompetent actually corrected their bias.

        The fact that your statement sounded really “superior” regarding this bias, AND demonstrated an obvious lack of understanding of even the most basic definition of it….well….does more to support it than refute it.

        Just saying :)

    • Here’s a quote that may be more indicative of what we are seeing:

      “By ‘educated incapacity’ we mean an acquired or learned inability to understand or see a problem, much less a solution. Increasingly the more expert, or at least the more educated, a person is, the more likely he is to be affected by this.”

      Herman Kahn, William Brown, and Leon Martel, The Next 200 Years, William Morrow and Company, 1976, p. 22.

      This was far too good not to share.

  2. That’s why the most important part of scientific training is to realize how easy it is to fool yourself and how to take measures against it.
    Something modern education too often fails to address. Parroting your profs words may help pass the next test, but does not count as scientific expertise.

  3. “For example, the authors explain how framing decisions so as to make the green choice the default option takes advantage of a presupposed psychological predilection for the status quo. This may be so, but this is a sales and marketing ploy”

    Spot on!

    In future, I shall call my bullshit detector the “Ridgway Test”

    • Better keep the psychologists and their word puzzles and Cheshire Cat word definitions out of the real sciences. Wow, it would be impossible to study statistics and probability or the mathematics of chemistry if all the terms and names were wrapped in this gobbledygook and made-up words, or known words with their meanings changed. And can you imagine what they could mess up with a set of engineering specifications and drawings? A design for a simple valve might end up looking like a Citron.

  4. My brother was a geothermal scientist and when we meet we debate climate change. He says, “I know that air with elevated amounts of CO2 will warm up more than ambient air. End of story.” I tell him it is not that simple: the theory of global warming depends on positive feedbacks, and positive feedbacks depend on the tropical hot spot in the atmosphere, and that has never been located.
    Back in the nineties I met and became a friend of John Maunder, a meteorologist from New Zealand, and in casual conversation I said I didn’t believe in CAGW and he said neither did he. I learnt a lot from him and have not seen any convincing proof to change my mind since. Silly climate scare stories that are floated constantly might convince the younger generations, but us old guys have seen it all before. To cap it all off, if the theory was proven beyond doubt no one would rely on consensus to reinforce their case.

  5. “…that is to say, the predisposition in us all to think we are above average.1”

    I’m with Homer on this, we aspire to be ‘average’!

    • the predisposition in us all to think we are above average.

      HA, not so.

      For the past 30+– years the Public School System “edumacators” have been brainwashing their students into believing “everyone is average percentile equal” ….. and that no student actually “fails” a subject, …… that they are just rated in the “lower percentile” of superior achieving intellectuals.

      • OOPS, only the word “average” should have had a strike-thru.

        ….. into believing “everyone is average percentile equal” …..

  6. ” This cognitive bias goes by many names, illusory superiority being the worst of them. The bias refers to a delusion rather than an illusion and so the correct term ought to be delusory superiority. ”

    Dictionaries equate “illusory” to misperception. Since cognitive bias is also a misperception, dictionaries make “illusory superiority” SEEM correct. Since illusory and illusion both refer to a characteristic of the subject being perceived, in this case superiority, illusory would only be correct if others also perceived the superiority. Since they do not, that perception of superiority is a self deception.

    While I agree in substance with your thinking, most dictionaries equate “delusory” to deception of others, while equating “delusion” to deception of self. Your terminology should thus be “delusional superiority”.

    SR

    • Yes, you’re right. Better still to call it “delusional superiority”. I had taken my cue from the fact that the psychologists refer to “illusory superiority” rather than “illusional superiority” and I had not been aware that dictionaries are making a distinction between “delusory” and “delusionary”. That said, I too notice that some dictionaries refuse to accept the distinction you and I would make between an illusion and a delusion.

      • John,

        Delusion tends to be applied with regard to mental illness, and people can have cognitive biases without being mentally ill.

        That you noted that your opinion (and Stevan’s) differs from that of the “experts” in word definitions and applications is a positive thing.

        :)

      • I am pleased that you posted here Aphan, since you provide a timely reminder that psychiatrists do indeed use the term ‘delusion’ in the narrow sense of a mental illness. That would explain why psychologists have avoided the term, ‘delusionary superiority’; they presumably invented their term for the benefit of fellow academics (not the lay public) and therefore would not wish to be accused of assuming that the cognitive bias is a form of mental illness. Your post also provides me with the opportunity to point out another frustrating habit that experts have: They appropriate existing words and imbue them with a more narrow and/or altered meaning in order to create their jargon. They can then chastise the lay public when they fail to pick up on this.
        Meanwhile, in the big, wide world, we non-experts are advised by authorities such as H.W. Fowler, who had this to say in ‘Fowler’s Modern English Usage’:
        “A delusion is a belief that, though false, has been surrendered to and accepted by the whole mind as the truth and so may be expected to influence action. An illusion is an impression that, though false, is entertained provisionally on the recommendation of the senses or the imagination, but awaits full acceptance and may be expected not to influence action.”
        On such a basis, the lay public would find ‘delusionary superiority’ to be the appropriate terminology.
        When I simply observe that the distinction between ‘delusion’ and ‘illusion’ appears to have become blurred in current usage, I am not differing from expert opinion, I am just wryly observing the dumbing down of language. I suspect I may also be speaking here for Stevan.

  7. The IPCC puts the climate sensitivity to increasing atmospheric CO2 at up to ~10 times its true value, and on that basis holds that humanity should beggar the economies of the developed world and deny cheap, reliable energy to the developing world, in response to this fictitious threat.

    As evidence of the IPCC’s utter incompetence, none of their scary scenarios have actually materialized in the decades that they have been in existence. They have a perfectly negative predictive track record, so nobody should believe anything they do or say.

  8. Of course, the IPCC is pervaded by self-delusional superiority, because for the most part the Authors of the Report are not independent Experts, but instead are reviewing and relying upon either their own works (i.e., papers which they themselves have written) or work for which they themselves were reviewers (thanks to the incestuous nature of peer review). The IPCC is essentially witness, judge and jury all rolled into one, and hence there is no objective critical thinking.

    The IPCC is not fit for purpose, since it does not consist of a body of independent Experts charged with ascertaining what is meant by climate, how climate works, what climate change (if any) there has been at regional and global levels, why regional changes have differed, and the causes of such change.

    The IPCC is not a scientific organisation, but rather a political one charged with promoting a particular agenda.

    • “….but rather a political one charged with promoting a particular agenda.”

      As, it seems to me, are several organisations and think tanks which oppose climate science.

      • Griff returns with the same comment, but without the direct mention of ‘Koch’ or ‘fossil fuel industry funded’.

        One-trick pony, Billy Goat Griff.

      • Griff, you talk as if we two sides have the same role and the same tests. But if my role is paying for your expedition to find the lost gold mine, and I have found your map to be unhelpful, I have no responsibility to come up with a better map. I just pull the plug, instead.

        It would be nice to do more, but there is no such contract. In this situation I’m not being paid to do your expert job for you.

      • Griff, even if the organisations which you see as opposed to the CAGW theory are just political, they seem to have a better record at predicting outcomes than the CAGW climate models. Do you ever stop to think about that?

    • Have to disagree with one thing you said there, that the IPCC is “not fit for purpose.” It is, in fact, fit for a purpose – exactly the one you describe in your last paragraph. It is a political body charged with promoting “belief” in AGW to foster submission to economically suicidal (for most, but not for the politically connected cronies positioned to profit enormously) “climate policies.”

  9. “So What Happened to Expertise with the IPCC?”

    From what I can see, anyone with ANY real expertise left after they read the POLITICAL summaries.

    • That comes from Philip Tetlock. He studied the predictions of experts for decades. It turns out that experts have a lousy (worse than a dart throwing monkey) record of predicting the future. Here’s an example about trying to outguess the stock market.

      There are two kinds of expertise. There is expert performance wherein someone is able to repeat, and possibly go a bit beyond, what they have learned to do. An example is a surgeon. Here’s a link to a wonderful survey paper by K. Anders Ericsson. It’s where the 10,000 hour figure cited by Malcolm Gladwell comes from.

      Expert performance is very reliable. Civilization would collapse if that weren’t true. If half the bridges failed because of unreliable engineering we wouldn’t build bridges. etc. etc.

      The other kind of expert is deemed to be so because of her superior education and experience. The problem is that we bestow on such people the respect earned by the expert performers. The truth is that we should take their bloviations cum grano salis.

      The folks pushing CAGW are not expert performers. Their predictions are no better than those of a blindfolded dart throwing monkey and should be treated as such.

      Dunning pointed out that stupid people tend to overestimate their abilities. He also found that experts often claim to know more than they actually do. Tetlock pointed out that experts have a plethora of mechanisms to protect them from acknowledging their failed predictions. Experts are just as prone to human foibles as anyone else; they just have the skills to convince most people that they actually know what they’re talking about.

      • “The other kind of expert is deemed to be so because of her superior education and experience. The problem is that we bestow on such people the respect earned by the expert performers.”

        This is pounding the nail on the head in one stroke. There is a difference between experts and the credentialed. Although a credentialed person can be expert, he can only be so based on thousands of hours of performance. The credentialed without any discernible record of performance is not an expert, just some guy or gal with a diploma.

  10. This is a good essay by one who is clearly well aware of the unreliability of forecasts of highly complex (possibly chaotic), multi-variate, non-linear systems.

    Not only does climatology not know the coefficients, it doesn’t even know the independent variables.

    Most WUWT readers are already well aware of the poor quality (collection, compilation, methods and coverage) of the historic temperature record.

    These known substantial weaknesses in our knowledge of climate don’t exactly accord with most people’s concept of “settled science.”
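    The point about chaotic systems can be made concrete with a toy example (a sketch only; the logistic map below is a textbook chaotic system, not a climate model): two trajectories started a hair apart soon bear no resemblance to each other.

```python
# Logistic map x -> r*x*(1-x) with r = 4, a standard example of chaos.
# Two runs starting a mere 1e-10 apart diverge until they are unrelated,
# illustrating why long-range point forecasts of chaotic systems fail.
def orbit(x0, r=4.0, steps=60):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = orbit(0.2)
b = orbit(0.2 + 1e-10)
gaps = [abs(x - y) for x, y in zip(a, b)]
print(max(gaps[:10]), max(gaps[40:]))  # tiny at first, order-one later
```

    No amount of added precision in the starting point buys more than a few extra steps of agreement, which is the forecasting problem in miniature.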

  11. The inherent contradictions in the progressive worldview are simply mind boggling.

    Be who you are, you be you… think for yourself… question everything, especially authority … be a critical thinker… embrace your own truth… be an individual, not a conformist… trust the experts, not your own cognitive abilities… don’t question what authorities say… don’t think independent of what is acceptable.

    • Where the [pruned] have you seen a progressive saying you must think for yourself and question anything? A progressive is simply unable to see a person; he sees only a representative of a group. You are not you, you are part of a category (male, white, denier, or whatever, depending on the issue).

      A progressive knows he is just a stupid, ignorant man, and that to become a better person is very simple: just stick to the beliefs of the do-gooders. Do NOT think for yourself. Do NOT trust authority. Trust the group, the GOOD group, the group that talks about being good and doing good, whatever it takes (including hating and killing “evil” people).

      • What you are describing is the result, not how progressives see themselves (their identity). The incremental (progressive) steps to get to that result (the hive mind) leveraged and co-opted the ideals of liberalism (some of the things that I described) through a process of mind-phuck (pardon my rather simplistic description, lol) to get to that point.

    • A leftist believes that most people are stupid and thinks, I need to be running these people’s lives.
      A rightist believes that most people are stupid and thinks, I sure don’t want those people running my life.

      • A whatever-ist (like myself) believes that most people are stupid and thinks, “holy phuck Batman, we’re all going to die, so just let everyone live their own lives as they see fit and try to do things that don’t hurt them.”

      • When I said “most people are stupid”, methinks I should have said “most people are stupid (like myself)”

        (p.s. it’s wonderful to not care what people think about you; very liberating). Cheers!

      • MarkW,
        Yes, I think that one of the issues society deals with is the hubris of someone with (at best) a liberal arts background who thinks that they are smarter than everyone else and therefore they have a responsibility to run their lives. Most vocal progressives that I have come across have spoken and behaved as though they thought that they were smarter than those they disagreed with. That is why they often view themselves as Social Justice Warriors, on par with Joan of Arc.

      • Some people are unskilled and can be trained, some are ignorant and can be taught, some are misguided and can be advised, some are foolish… and I fear there’s no remedy for that. Like an incurable disease, we can only let the affliction run its course and hope the sufferer survives.

        And I do not automatically exclude myself from any of the aforementioned groups. :)

  12. “feels he represents the many educated and rational onlookers who believe that the hysterical denouncement of lay scepticism is both unwarranted and counter-productive”

  14. Very interesting and accessible article about a little-known and complicated matter. Lots to be learned here.

  15. Part of the problem is that we love a sort of mini-Authority Fallacy in experts. To use an example often offered in defence of experts: yes, I want an expert to fly my plane. However, I don’t think that expert will be very good at forecasting developments in aircraft, or passenger numbers, or even what the flying will be like next Tuesday.

    Similarly, the IPCC assembled climate experts. Yes, perhaps the best qualified climate scientists in the world. But it then asked them to do something for which they had demonstrated no expertise whatsoever – forecasting the future climate for the next century or so.

    The fact that they are not very good at forecasts doesn’t make them bad climate scientists, because we really don’t have enough knowledge or other capabilities (e.g. computing power) to be good at that at all. But now that they have got themselves into a position where being a good forecaster means being a good climate scientist, they will be judged by their forecasting ability. Thus they will defend their forecasts and perhaps even (unwittingly, in many cases) cross over into manipulating data to show that their forecasts are right.

    This is the corruption and waste we see now. Experts should have said, “We can’t do this very well at all, so these are really just guesses,” and then worked hard to understand the climate so that they could improve those forecasts. Instead, just about every piece of research seems to be about proving that the forecasts are in fact right.

  16. As an author, adviser, and peer reviewer to the IPCC I am in a position to comment on their internal biases. When I have raised difficult questions about such topics as the CO2 output associated with the production of polycrystalline silicon cells, or whether hurricanes are increasing in frequency and intensity (or not), I find myself waved away with vague and unverifiable statements that “that doesn’t really matter”.

    I repeatedly tell my students to Question Everything. But do not try that with the IPCC. Their minds were already made up 20 or more years ago, so new data are simply minor inconveniences.

    And as for that “peer review” thing, most of the reviewers of my chapter in their mighty tome had no background in the field in question, and I found their questions inane. To answer them I had to go back to basics and start in their freshman year. And many of the papers they sent me to review were so far afield from my area of specialization that I had to return them with no comments. Needless to say, the IPCC seems to be less than satisfied with my performance. I guess I should be covered in rue, but I do have professional standards to uphold.

    And I am, I hope, the inverse of the Dunning-Kruger Effect, because I am constantly beset with self-doubt. Maybe that means I have at least a modicum of expertise.

  17. An expert is someone who knows more and more about less and less. Or, alternatively, someone from out of town with a PowerPoint presentation.
    The psychologist Edward de Bono, in his book “A Five Day Course in Lateral Thinking”, described the difference between an expert and an inpert by reference to a story about Edison. Towards the end of his career, Edison was allocated a number of bright young men to assist him. One day he assigned them the task of calculating the volume of a light bulb. The young men took the bulb away, measured every dimension, laboured with paper, pen and slide rule, and came back with an answer. “You’re out by at least 10%,” said Edison, who drilled a small hole in the bulb, filled it with water and poured the water into a measuring jug.
    We need more inperts who understand a subject, and fewer experts.

    • This speaks to my appreciation of an elegant automatic control mechanism that I encountered: a fire door mounted on an inclined track (down to close) held open by a fusible link. That approach of leveraging simple laws of nature (rather than more complicated technological solutions) has influenced my approach to life ever since.

    • But all he got was the interior volume, less the volume of the filament and its supports, and less the volume of the fitting.

      To get the volume of the bulb, all his students needed to do was to immerse the bulb in water, and measure the volume displaced. Simple Archimedes. QED.

      • I agree with Dudley, and displacement is the solution I would have used. However, the problem was to ‘calculate’ the volume, not just to ‘determine’ it.

      • Sorry, I disagree with Dudley. The immersion approach is simple, but it doesn’t measure the interior volume. That may have actually been the goal – if one plans to replace air with some other gas, one reasonably cares how much is needed. That volume is the interior volume, and should exclude the filament and supports. That said, I would do it differently. Start with the immersion to get the gross volume, then break the bulb and use the immersion technique to get the volume of the glass, etc, then subtract to get the net volume of the interior.

      • Dudley/Jsuter – You have identified a classic source of measurement uncertainty; incomplete definition of the measurand.

    • I don’t think Edison was very bright. He could have immersed the bulb in water and measured the volume change.

    • Like the legend of how Alexander the Great, when presented with the Gordian Knot and asked to try to undo it, drew his sword and cut it apart.

  18. “…I am constantly beset with self-doubt. Maybe that means I have at least a modicum of expertise.”

    “(Maybe) I don’t know” is key to all learning, and the converse attitude stops learning. To paraphrase a saying – The more a wise person learns, the more that person realizes how little (s)he knows. And the more a foolish person learns the more that person esteems how knowledgeable (s)he is.

  19. Climate and psychology are both consequential, fascinating subjects with a great deal of uncertainty and an almost equal number of claims to expertise. Economics is perhaps even worse.
    Psychology also bears on why people claim the knowledge to give advice that others should follow when, if forced to spell out the uncertainties of their field, they are really saying that, given a large enough group, their recommendation will have a small positive effect but will fail most of the time for most of the subjects. Not something one could readily sell.
    Why people want certainty, or the illusion of certainty, is a consequential question. Lyndon Johnson’s crack about wanting a “one-armed economist” is nearly a Zen koan in the psychology of politics. Why is there a market for bad advice?

    • You can hardly blame Johnson (or Truman) for that remark. What’s the old saying? Ask 10 economists for an opinion and you’ll get 14 answers?

      • If an economist and a lawyer were both drowning, and you could only save one of them, would you sit and watch, or just walk away?

        One of many economist jokes on the wall in the econ department at UM St. Louis.

  20. And somehow this predisposition and the desired result are aided by the “Precautionary Principle.” It amazes me the number of times those pushing the argument of the supposedly “settled science” also throw in that, even if it is not provable, we must act out of precaution for our children and their children.

    • Reply to Usurbrain:-

      And they have got the “Precautionary Principle” wrong. According to Wikipedia:
      “The precautionary principle (or precautionary approach) to risk management states that if an action or policy has a suspected risk of causing harm to the public, or to the environment, in the absence of scientific consensus (that the action or policy is not harmful), the burden of proof that it is not harmful falls on those taking that action.”

      In the case of CAGW, this means that as all the actions proposed to deal with “Climate Change” have a suspected risk of causing harm to the public, it is up to the proponents of these proposed actions to combat “Global Warming” to demonstrate that these actions are not harmful.

      To date, all the actions proposed by CAGW enthusiasts have been shown to carry a substantial cost to the public, in many cases with actual harm (as in making power prices so great that people do not use heating or cooling when the weather is cold or warm, and die as a result, or businesses finding their electricity costs have grown so high that they can no longer continue in business and therefore make their workers unemployed). The plausible benefits of the actions proposed have been found to be low, and their “projections” of future states of the weather have been found to be incorrect, such that some have now admitted that their models are wrong. (Remember Ben Santer – and I note that Michael E. Mann was a signatory to that paper!)

      • The Precautionary Principle is the darling of environmentalists. However, progressives seem to give it no thought when they advocate for major social changes without precedent. It is sufficient that they think changing the way that things are done will lead to a better, more just world.

    • The Precautionary Principle bites both ways. Who is to say that the cure will not cause more problems than the disease, when the cure has never been tried except on a very limited scale? And where it has been tried, lots of unforeseen problems have resulted. As you scale up, small problems become big problems.

    • Anyone who thinks that a climate sensitivity bounded by +/-50% uncertainty is settled is either a fool or a charlatan. We know with near absolute certainty that extraterrestrial life exists somewhere in the Universe, and yet even this is far from settled. Climate science gets even worse, as the various RCP scenarios add at least an additional +/-50% uncertainty to the anthropogenic component considered equivalent to post-albedo solar forcing. Making this so much worse is that, even with all the uncertainty, the low end of the IPCC’s presumption isn’t even low enough to include the worst-case magnitude of the effect as limited by the laws of physics.
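      How quickly stacked uncertainties blow up can be shown with back-of-envelope arithmetic (a sketch only; the 3 °C nominal value is a placeholder, not a claim about the actual sensitivity):

```python
# Two independent +/-50% uncertainties compound multiplicatively:
# the combined range runs from 0.5*0.5 = 0.25x to 1.5*1.5 = 2.25x
# of the nominal value, a nine-fold spread between the extremes.
nominal = 3.0               # placeholder nominal sensitivity (deg C)
low = nominal * 0.5 * 0.5   # both factors at their low ends
high = nominal * 1.5 * 1.5  # both factors at their high ends
print(low, high, high / low)  # 0.75 6.75 9.0
```

      A nine-fold spread between the low and high ends is hard to square with any reasonable use of the word “settled”.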

  21. I am constantly surprised that anyone is surprised that the IPCC has gotten much science wrong or dabbles in propaganda, data manipulation, and other bad habits. Just take a look at their founding documents. The Intergovernmental Panel on Climate Change (IPCC) and the United Nations Framework Convention on Climate Change (UNFCCC) that followed it were political constructs from the start. All the science summaries and research reported deal strictly with “human-caused” climate change. The conclusion was assumed from the start, and they merely looked only at supporting evidence.

    To them there are no “unknown unknowns”, propaganda is a useful tool in shaping outcomes, and the results are already established.

  22. I remember this guy being held up to ridicule in the British press for this statement, but it always made perfect sense to me, and it’s basically what this piece is saying. It applies to individuals and society equally. “There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don’t know. But there are also unknown unknowns. There are things we don’t know we don’t know”. Donald Rumsfeld.

    • What is missing is that the size of the unknown unknowns is infinite. All the others – what you know, and what you know you don’t know – are finite.
      Where the experts go wrong is in assuming the unknown unknowns are finite, which causes them to overestimate how much they actually know.

    • I clipped that quote from the newspaper when I first read it. I thought it was a brilliant summary, not just for military intelligence, but for every field of endeavor.

  23. “Experts” do not necessarily know more about any subject than non-experts. They are just better at defending their positions on those subjects. They have a larger arsenal of tools and arguments, are better at expressing themselves, and can almost always point to a phalanx of other “experts” who agree with them. Knowledgeable non-experts can be very good at knocking down expert opinion, if they ask the right questions. It is all about understanding the vulnerabilities of the expert opinion, and pointing out the failures of their arguments. And there are ALWAYS vulnerabilities in their arguments.

    This makes the job of the decisionmakers really hard, because it is rare for the decisionmaker to have a comparable “expert” at his/her side to say that the experts are full of $hIt. And it is usually best to have two of your own experts for advice, because if you only have one, he/she will be swamped by the outside “experts”.

    Been there, seen that, in government.

    • rxc: “Experts” do not necessarily know more about any subject than non-experts. They are just better at … expressing themselves”

      Indeed, those with linguistic prowess, who think in language, have the advantage of presenting their arguments in language. Those who do not think in language are burdened with translating their thoughts and knowledge into it. Language is, understandably, how we communicate our thoughts, but this means that linguistic prowess dominates any debate, even when those who possess it cannot recognise the limits of their own thought without language.

      • I have often said that the primary difference between a Mensan and the average person is that the Mensan is able to better articulate their rationalizations for their irrational behavior.

  24. The one [well, one of them…] idea that sticks with me is just how LITTLE we know, just a few hundred years after the Renaissance. We know almost NOTHING yet. And the science is settled?? I tell my children I am jealous of what they will know that I never will.

  25. For a better understanding of how “uncertain” the IPCC experts really are, look at the Technical Summary, section TS.6 ‘Key Uncertainties’ (pages 114-115). This is IMO the most damning two pages in the entire IPCC material. There is almost no area of climate study without high levels of uncertainty, low levels of confidence, or stated challenges to their level of understanding.

    http://www.ipcc.ch/pdf/assessment-report/ar5/wg1/WG1AR5_TS_FINAL.pdf

    Can any of the IPCC supporters, GISS supporters or any other alarmists reconcile this with the perception of certainty which the Summary for Policymakers exudes?

    • I have read these pages before. Amazing how many “Supporters” of AGW there are that have not read these pages. The statements in that section remind me of the “contract” given to the poor guy that buys a car at one of those “Buy-here-Pay-here” dealerships.

    • Michael S,
      The Technical Summary is indeed a very damning document! It should be ‘required reading’ for all with an interest in the topic, but especially for those who frequent this blog who are advocates of the extreme danger to humanity, based on model predictions. Should the Red/Blue Team exercise come to pass, all participants should be provided with a copy of the Technical Summary.

    • Michael S:

      This is my favorite version of your similar observation:

      “In sum, a strategy must recognise what is possible. In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible. The most we can expect to achieve is the prediction of the probability distribution of the system’s future possible states by the generation of ensembles of model solutions. This reduces climate change to the discernment of significant differences in the statistics of such ensembles. The generation of such model ensembles will require the dedication of greatly increased computer resources and the application of new methods of model diagnosis. Addressing adequately the statistical nature of climate is computationally intensive, but such statistical information is essential.”

      http://ipcc.ch/ipccreports/tar/wg1/505.htm

    • Hello Michael S,

      Following your suggestion I have reviewed those key pages and summarised them as follows. As you can see there are no claims of “high confidence”, although there is one claim of “likely”. In addition, there are a couple of mentions of either “little consensus” or “no consensus”.

      <<Section TS.6.1 – Observation of Changes in the Climate System. [12 bullet points (here numbered from 1 to 12 with subordinate ordering indicated by a, b, c, …)]
      Medium confidence: 1a. 2b. => Total of 2.
      Low confidence: 1b, 1c. 2a. 3a. 4a. 5a. => Total of 6.
      Robust conclusions not possible/poorly characterised/sampling too sparse/time series too short/not yet adequately or comprehensively assessed/record remains poor: 6a. 7a. 8a. 10b. 12a, 12b. => Total of 6.
      Coverage limited/limits the quantification/number is limited: 9a, 9c. 10a. => Total of 3.
      Hampers more robust estimates/data inadequate: 9b. 11a => Total of 2.

      Section TS.6.2 – Drivers of Climate Change. [3 bullet points (numbered from 13 to 15)]
      Low confidence: 15a. => Total of 1.
      Uncertainties large/uncertainties dominant: 13a, 13b. => Total of 2.
      Quantification difficult: 14a. => Total of 1.
      Likely: 14a => Total of 1.

      Section TS.6.3 – Climate System & Its Recent Changes. [6 bullet points (numbered from 16 to 21)]
      Remains challenging/challenges persist: 16a. 19b. => Total of 2.
      (Modelling) Uncertainties or limits in understanding hamper attribution: 17a, 17b, 17c. 19a. 21b. => Total of 5.
      Low agreement/Confidence remains low/Ability to simulate is limited/Modelling uncertainties/Less reliably modelled/limiting confidence/observational uncertainties/precludes more confident assessment: 18a, 18b, 18c, 18d. 20a. 21a, 21c. => Total of 7.

      Section TS.6.4 – Projections of Global and Regional Climate Change. [9 bullet points (numbered 22 to 30)]
      Medium confidence: 23a, 28a => Total of 2.
      Low confidence: 24a. 26a. 27a, 27b. 28b. 29a. 30a => Total of 7.
      Limited confidence/low predictability/uncertainty in projections: 22a, 22b, 22c. => Total of 3.
      Not robust: 25a => Total of 1.
      Little or no consensus: 26b, 29b => Total of 2.

      End of file.>>

      I plan to expand the review in the near future by comparing and contrasting the above confidence statements with those in the Summary for Policymakers – that comparison will, I anticipate, make interesting reading.

      Regards,
      Idiot_Wind.

      • Erratum: the analysis should have started like this:-
        < Total of 2.>>

        Sorry for the cut-&-paste error.

        Regards,
        Idiot_Wind.

      • The text looks correct on my screen, but when I post it the initial words are missing; is it because I have used <> quotes?

        Anyway the analysis should start like this:-
        “Section TS.6.1 – Observation of Changes in the Climate System. [12 bullet points (here numbered from 1 to 12 with subordinate ordering indicated by a, b, c, …)]
        Medium confidence: 1a. 2b. => Total of 2. …..”

        I hope it works this time!
        Idiot_Wind.
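      A tally like the one above can also be produced mechanically. A rough sketch (the sample statements are invented stand-ins for TS.6 bullet points, not quotes from the report):

```python
# Classify uncertainty statements by the hedging phrase they contain
# and total each category, mimicking the hand count above.
from collections import Counter

QUALIFIERS = ["high confidence", "medium confidence", "low confidence",
              "likely", "little consensus", "no consensus"]

def classify(statement):
    s = statement.lower()
    for q in QUALIFIERS:          # first matching qualifier wins
        if q in s:
            return q
    return "other"

# Invented examples for illustration only:
statements = [
    "There is low confidence in projections of regional rainfall.",
    "Medium confidence is assigned to the observed trend.",
    "It is likely that the frequency has increased.",
    "There is little consensus on the mechanism.",
    "Low confidence remains in the attribution of drought.",
]
tally = Counter(classify(s) for s in statements)
print(tally)
```

      Run over the real TS.6 text, a classifier like this would make the preponderance of “low confidence” over “high confidence” directly countable.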

  26. President Reagan summed this up quite well, “Well, the trouble with our liberal friends is not that they are ignorant, but that they know so much that isn’t so.”

  27. The misuse of paid experts in courtrooms is also bad, such as when a generalist “expert” like a physician is set against a true subject expert in a case.

  28. Or could it be they suffer from the old adage I heard repeated so many times about very senior PhDs while going to college: “He has specialized his expertise to the point that he now knows everything there is to know about nothing at all.”

    • Reminds me of the old joke:
      What is the difference between a Professor of Economics and a Professor of Finance?
      A Professor of Economics knows nothing about everything.
      A Professor of Finance knows everything about nothing.

  29. The IPCC’s mission was/is to provide cover to force spending on green scams that do not work, at a time of ridiculous, unsustainably high levels of government overspending – not to solve scientific problems.

    The Age of CAGW will make an interesting scientific and sociological study, post mortem, once the key observations that shape people’s gut feelings about AGW change – the planet significantly cools and CO2 levels fall.

  30. A hot-air balloonist is blown off course and lands in a field. He yells over to a man: “Where am I?” and the man replies, “You’ve landed in a large field.”

    The balloonist then tells the man that he must be an accountant (or an economist, or a mathematician, et al.) because the answer was completely accurate and utterly useless.

    Not sure who came up with it (Taleb?), but I believe it’s called IYI: intelligent, yet idiot.

    It’s my job to gather and crunch numbers for bosses who are the real “experts”. I have some math, so I can pretty much tell that x is > y, but it’s their job to fully understand What This Means, a job I gratefully allow them to consider much more important than mine. That way, when the answer is supposed to be x < y, I’ll still have a job, but at least one of them will be fired. Or, as in Seinfeld, moved upstairs where their genius can shine much further afield.

    Oh, and our new Executive Director is a PhD, who came from academe to tell us how wrong we've had it for a few decades. Yes, we'd never think of checking all the easy stuff first. And man, does he love him some InfoGraphics…

    I'll basically be employed for life.

    • ‘The balloonist then tells the man that he must be an accountant (or an economist, or a mathematician, et al.) because the answer was completely accurate and utterly useless. ‘

      Reminds me of the Heisenberg joke. A cop stops Heisenberg for speeding, approaches the car, and asks: COP: “Do you know how fast you were going?” Heisenberg: “No, but I know exactly where I am.”

    • “intelligent, yet idiot.”
      —————————–

      “Intellectual Yet Idiot”, according to Taleb. Intellectuals are indeed mostly idiots. They have no real productive job, so they never learn from challenge and failure and never get a chance to become intelligent.

  31. The most damning quote from the ‘Integrated Risk and Uncertainty Assessment’ chapter must be, “. . . campaigns looking to increase the number of citizens contacting elected officials to advocate climate policy action should focus on increasing the belief that global warming is real, human-caused, a serious risk, and solvable.” That sounds like something out of a Madison Avenue marketing campaign planning meeting. And it followed a paragraph in which they wrote: “. . . found that perceptions of the seriousness of global warming as a national issue in the United States depended on the degree of certainty of respondents as to whether global warming is occurring and will have negative consequences, coupled with their belief that humans are causing the problem and have the ability to solve it.” It is amazing that such statements are included in a chapter ostensibly about the uncertainties of the science. There seems instead to be nothing but certainty on display.

  32. Say an expert knows 10 times as much as a layman. Let x represent what the layman knows and 10x what the expert knows.

    However, the problem is never what you know. It is what you don’t know. The layman doesn’t know infinity − x. The expert doesn’t know infinity − 10x. Both of those values are identical: they equal infinity.
    No matter how much you know, what you don’t know is still the same size as anyone else’s.
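    As a playful aside, the comment’s arithmetic even holds in IEEE-754 floating point, where infinity absorbs the subtraction of any finite quantity (a toy sketch, not a statement about rigorous cardinal arithmetic; the value of x is an arbitrary placeholder):

    ```python
    # IEEE-754 infinity minus any finite number is still infinity.
    INF = float("inf")

    def unknown(known):
        """What you don't know: 'everything' minus what you know."""
        return INF - known

    x = 7.0                      # hypothetical measure of the layman's knowledge
    layman_gap = unknown(x)      # infinity - x
    expert_gap = unknown(10 * x) # infinity - 10x

    # Both gaps are the same size: infinite.
    assert layman_gap == expert_gap == INF
    ```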

  33. In performance, we can have true experts. These are people who show significantly above average skill at a given task. In theoretical subjects, the word ‘expert’ has an entirely different meaning. It is simply a person who can tell a large and/or influential group of people just what they want to hear.

    IPCC ‘experts’ have no record of successful performance; consequently, they fall into the latter category.

  34. A “climate science” expert is one who can impart a minimum amount of information using a maximum amount of words. This fools lots of people, which is the intended effect.

    • I call it “coach talk”, because professional sports coaches have raised talking lots and saying little to a near art form.

  35. I am an ‘expert’ on a particular subject of some interest to the public, but a more authoritative-looking chap dealing with environmental matters in a ‘modern’ way was viewed as the expert on a variety of subjects. He called me when stumped. The press considered him the expert. Our American founders knew about this tendency towards ‘worship of authority.’ This now-deceased chap was a non-radical good guy, an expert in a physical science, but he became famous for his well-meaning and mostly reasonable endeavors for the environment.

    It is easy to see how radical types need their ‘15 minutes of fame’ at various levels over and over and over again. Some never obtain it and some misuse it over and over and over again.

  36. In my experience on this planet I have found that any individual’s intelligence is generally inversely related to their degree of certainty that they are a genius.

  37. I read somewhere that doctors are statistically among the worst investors, because they are convinced that their demonstrated skill relative to medicine translates to skill in other fields, and therefore won’t listen to advice. I’ve also heard this applied to being a private pilot – the smarter they think they are, the less likely to use the checklist, and therefore the more dangerous it is to fly with them.

    Not surprising to see this effect in climate science, as incestuous and self-congratulating a group as was ever constituted.

    • Statistically, the most successful hedge funds are those that acknowledge that they do not know, and instead merely follow the trend. Michael Covel’s book, Trend Following, can be bought here:

    • As my older brother, a trim carpenter, once told me, it’s not the beginning woodworkers who lose fingers. It’s the experienced ones who get caught up in routine and stop paying attention.

  38. “Experts engage in more deliberative thinking”

    Politicians engage in both intuitive and deliberative thinking, or in other words, thinking driven by emotional arguments. Science requires objective thinking driven by logical arguments and the veracity of the scientific method, the most important attribute being the concept of falsification.

  39. I love the question, “is it [climate change] solvable?”

    Of course, that’s not the real question they’re asking at all. What they’re really asking is, can we get the general population to be as worried about climate change as we are? NO. Because the population of IPCC supported “scientists” is agenda driven, politically motivated, generally supportive of leftist causes, and so more than willing to deny people freedom and liberty, all of which causes their risk assessments to be so much BS, completely devoid of a rational cost / benefit assessment of what they’re asking.

    The question “is worry about climate change solvable?” is easily answered: ‘yes.’ Just stop worrying. Stop propagandizing and brainwashing our children. Educate the young and the old, and leftists, on actual science, such as replication, explaining natural climate variability over the long haul. Insist that science is not about politics and eco-lunacy-driven “green” agendas, but about patiently waiting for ambiguity to erode in the face of solid, proven evidence.

    I love a little more CO2 in the air. It will be a boon to the ecosystems they’re claiming to protect. They can learn to love it, too.

  40. I have found that most actual “experts” tend to underrate their own expertise or the sureness of the “knowledge” while others tend to greatly overrate their own expertise or knowledge. Much like when I was 18 years old, I knew I knew just about everything there was to know about almost everything! Now I know much more than I knew then and know I don’t know but a tiny fraction of the stuff I probably should know.

    Cheers!

    Joe

  41. “Accurately communicating the degree of uncertainty in both climate risks and policy responses is therefore a critically important challenge for climate scientists and policymakers.”

    On its face, this is an unambiguous admission that communicating uncertainty is exceedingly difficult for climate scientists to do! I’m sure this uncharacteristic candor wasn’t intended. What the statement is really wrangling with is a way to conceal the magnitude of uncertainty, because that magnitude makes the public not take the urgency seriously. Hence the detour into egghead psychology. He also wants to conceal that he is going to lie about it (95% confidence!).

    The worst revelation, though, is that the author doesn’t appreciate what uncertainty, gaps in knowledge, and even unknowables or unknown unknowns are, and what these do to a thesis! These are not just different colors. These are a picture of how far off the journey to objective knowledge in climate science really is. The gobbledegookery is derived from the terrible legacy left by the late disasterhood peddler Dr Stephen Schneider, who advised fellow climate scientists that they had to decide how much their conscience could withstand in departing from the truth to make their case for CAGW. This is classic neo-left ends-justify-the-(always dubious)-means behavior.

    Yes, the bulk of the American public reacts very rationally to the duct-tape-and-baling-wire science of global warming.

  42. My grandfather wasn’t “educated” but he was intelligent and wise. I will always remember what he said to me after I had won the gold medal for the highest marks in my graduating class (about 50 years ago): “The more you know, the more you know you don’t know.”

  43. I liked the article. To add to all of the great comments above, there is an old saying “to a man who only has a hammer every problem resembles a nail”. I think there is a tendency to make the problem fit the knowledge that one already has because learning new knowledge is so difficult.

    • Joel, you make an excellent point about the tendency to make the problem fit the knowledge one already has, since it was common in my experience working for a high-end engineering company with a lot of experts in different specialties. Often we would see that even outstanding metallurgists would perform a failure analysis for a customer without consulting other specialties.
      Often they would issue a report indicating that the component was overstressed without consulting the mechanical engineer who should perform a stress analysis of the component to verify their claim.
      Clearly, in most cases failure analysis should involve individuals with different expertise to avoid the “tendency to make the problem fit the knowledge one already has.”
      I don’t think it is related to “learning new knowledge is difficult” but rather to a failure to recognize one’s limitations in a complex situation where other expertise is needed.
      Failure analysis requires a team with different skills, in my opinion.

  44. My major professor in grad school impressed me with one rule: if you can explain something to your mother so that she genuinely understands it, then you understand it. Otherwise, you don’t.

    There is so much truth in that aphorism that I have modeled my life after it. I can explain measurement accuracy and precision to anyone so that they understand it (and have had to do so professionally), and I can explain why I find the evidence for “global warming” so unconvincing to the same effect. I note, however, that the alt-Climate crowd obfuscates rather than illuminates. At best, therefore, they don’t understand their own position.

    • … also the ‘elevator explanation’ … explain it in simple terms to someone in the time it takes to ride the elevator.

  45. The whole problem with the world is that fools and fanatics are always so certain of themselves, and wiser people so full of doubts.

    Bertrand Russell

  46. Systematic Bias?
    Dare anyone mention Type B uncertainty (including “systematic bias” or “collective bias”, aka Type B error, aka the “lemming effect”)?

    Why is there no reference in the IPCC’s AR5 to the BIPM’s international standard on uncertainty?
    The Guide to the Expression of Uncertainty in Measurement (GUM) 2008 (JCGM 100) defines:

    2.3.1 standard uncertainty
    uncertainty of the result of a measurement expressed as a standard deviation
    2.3.2 Type A evaluation (of uncertainty)
    method of evaluation of uncertainty by the statistical analysis of series of observations
    2.3.3 Type B evaluation (of uncertainty)
    method of evaluation of uncertainty by means other than the statistical analysis of series of observations. . . .
    3.3.5 . . .Thus a Type A standard uncertainty is obtained from a probability density function (C.2.5) derived from an observed frequency distribution (C.2.18), while a Type B standard uncertainty is obtained from an assumed probability density function based on the degree of belief that an event will occur [often called subjective probability (C.2.1)]. Both approaches employ recognized interpretations of probability.

    In climate:
    On the need for bias correction of regional climate change projections of temperature and precipitation
    Jens H. Christensen et al. 2008, Geophysical Research Letters, Vol. 35, L20709, doi:10.1029/2008GL035694

    the common assumption of bias cancellation (invariance) in climate change projections can have significant limitations when temperatures in the warmest months exceed 4-6 deg C above present day conditions.

    Some systematic bias in climate models is now beginning to be addressed. e.g.
    Revised cloud processes to improve the mean and intraseasonal variability of Indian summer monsoon in climate forecast system: Part 1
    http://onlinelibrary.wiley.com/doi/10.1002/2016MS000819/full

    Evaluating CFSv2 Subseasonal Forecast Skill with an Emphasis on Tropical Convection
    Nicholas J. Weber and Clifford F. Mass 2017 https://doi.org/10.1175/MWR-D-17-0109.1

    There are even papers starting to reference BIPM’s GUM together with climate models and data!
    https://scholar.google.com/scholar?hl=en&as_sdt=0%2C15&as_ylo=2016&q=guide+for+uncertainty+measurement+climate+model+bipm&btnG=

    For the math inclined:
    General Deming Regression for Estimating Systematic Bias and Its Confidence Interval in Method-Comparison Studies, Robert F. Martin, Clinical Chemistry, Jan 2000, Vol. 46, No. 1, 100-104

    When assumptions that underlie a particular regression method are inappropriate for the data, errors in estimated statistics result. . . .
    Conclusion: Only iteratively reweighted general Deming regression produced statistically unbiased estimates of systematic bias and reliable confidence intervals of bias for all cases.
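    The Type A / Type B distinction quoted from the GUM above can be sketched numerically. A minimal illustration (the readings and instrument resolution below are hypothetical placeholders, not from any climate dataset):

    ```python
    import math

    def type_a_uncertainty(observations):
        """Type A (GUM 2.3.2): standard uncertainty of the mean from the
        statistical analysis of repeated observations, s / sqrt(n)."""
        n = len(observations)
        mean = sum(observations) / n
        sample_variance = sum((x - mean) ** 2 for x in observations) / (n - 1)
        return math.sqrt(sample_variance / n)

    def type_b_rectangular(half_width):
        """Type B (GUM 2.3.3): standard uncertainty from an assumed density;
        for a rectangular distribution of half-width a, u = a / sqrt(3)."""
        return half_width / math.sqrt(3)

    def combined_standard_uncertainty(*components):
        """Combine independent components in quadrature (root-sum-square)."""
        return math.sqrt(sum(u ** 2 for u in components))

    # Hypothetical repeated temperature readings and a ±0.05 resolution bound
    readings = [20.1, 19.9, 20.3, 20.0, 20.2]
    u_a = type_a_uncertainty(readings)
    u_b = type_b_rectangular(0.05)
    u_c = combined_standard_uncertainty(u_a, u_b)
    ```

    The point of the sketch is the one the comment makes: a Type A component comes from an observed frequency distribution, while a Type B component must be asserted from a degree-of-belief assumption, and both feed the combined uncertainty.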

  47. I really enjoyed this article … especially, “… Dunning-Kruger effect because their background and learning provides them with the metacognitive apparatus to acknowledge and respect their own limitations. As far as Dunning and Kruger are concerned, experts are even better than they think they are.”
    The D-K effect is what recruits ignorant academics and other acolytes to the CAGW hypothesis. It’s their unwavering belief in ‘experts’, because it supports their belief in their own “acknowledgement and respect [for] their own limitations” within their areas of ‘expertise’. So they MUST be right, RIGHT?

  48. “. . .for low-probability, high-consequence events. . .intuitive processes for making decisions will most likely lead to maintaining the status quo and focusing on the recent past.”

    On the contrary, the IPCC analysis of a low-probability, high-consequence event is demonstrably flawed, and decisions made from that analysis would be the wrong ones. Paradoxically, a mathematically correct analysis of low-probability, high-consequence events shows that maintaining the status quo until the science is right would be the right decision. The elephant in the room not addressed in this discussion is whether any steps taken by mankind would have a significant effect on future climate.

    The fallacy of the IPCC analysis is that probability distributions have two tails. A correct analysis must consider the entire distribution, not just the extreme high value. The IPCC’s findings ignore the low-probability, high-consequence cooling event. The consequences of a warming earth are no greater than the consequences of a cooling earth. Policies appropriate for the warming case would be diametrically opposite to those appropriate for the cooling case. Under this reality, promulgating environmental regulations with too little information is illogical. The likely damage from acting on the wrong premise, a warming or a cooling planet, nullifies arguments for either action until the science is right. The goal of climate research should be to successfully predict global mean temperatures within a range of values that is narrow enough to guide public policy decisions.

    “Accurately communicating the degree of uncertainty in both climate risks and policy responses is therefore a critically important challenge for climate scientists and policymakers.”

    The IPCC has failed miserably in the communication of degree of uncertainty and policy responses. Temperature databases and GCMs are not sufficiently robust to reliably estimate future long-term temperatures. At best, current technology can only predict future temperatures within a wide range of values, which is not sufficient to warrant spending trillions of dollars going down the wrong road.

    A few notes on the practical probability analysis of imperfect data-sets:

    A probability distribution of an imperfect data-set can be approximated by a triangular distribution, often called the three-point estimate.

    Figure 1. Three ways to represent the probability distribution of the same data-set. The area under the continuous function is 1.0, and the curve can be expressed by an equation. The discrete function is represented by a table of values or a bar graph. The triangular distribution function is a special case of a continuous function defined by three vertices and the connecting straight lines. The values A, B and C could be mean global temperature estimates.

    Figure 2. Calculation of best estimate from a triangular distribution function. To work around unavoidable prediction errors because of incorrect assumptions and limited data, petroleum scientists often present predictions as a range of expected values or best estimates. An expected value converges to an exact prediction as the available data increase and the methodology improves. In the absence of a large data-set of independent predictions, a mathematically rigorous approximation of a best estimate can be calculated from a triangular distribution function.

    Figure 3: Determination of the likely range of a future temperature from a three-point estimate. To calculate the probability-weighted maximum high temperature, the high estimate and the mode are the same. To calculate the probability-weighted maximum low estimate, the low estimate and the mode are the same. The best estimate will lie within the range of the probability-weighted high and low estimates. All values within the range are equally likely. The end-values define a rectangular distribution. Climate scientists should focus on research that will reduce the range to a value useful for guiding environmental policies.
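    The three-point construction described in Figures 1–3 can be sketched in a few lines. The values below are hypothetical placeholders, and the “probability-weighted” endpoints follow the comment’s own recipe of collapsing the mode onto the low or high estimate:

    ```python
    import random

    def triangular_best_estimate(low, mode, high):
        """Expected value of a triangular distribution defined by the
        three-point estimate (low, most-likely, high): (a + b + c) / 3."""
        return (low + mode + high) / 3.0

    def probability_weighted_range(low, mode, high):
        """Endpoints per the Figure 3 recipe: for the low end the mode
        collapses onto the low estimate; for the high end, onto the high."""
        return (triangular_best_estimate(low, low, high),
                triangular_best_estimate(low, high, high))

    # Hypothetical three-point temperature estimates, for illustration only
    low, mode, high = 0.5, 1.5, 4.5
    best = triangular_best_estimate(low, mode, high)
    lo_w, hi_w = probability_weighted_range(low, mode, high)

    # Cross-check the analytic mean against Monte Carlo sampling
    random.seed(0)
    mc = sum(random.triangular(low, high, mode) for _ in range(200_000)) / 200_000
    ```

    The best estimate always lies between the two probability-weighted endpoints, which is what makes the construction usable as a range-narrowing target for further research.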

  49. However, the real problem with experts is that, no matter what one might think of them, we can’t do without them.

    In the broadest sense, maybe. If I crash my car and break the chassis, I need an expert to weld it back together.
    But who was the first person who ever thought “I need a climate expert to fix this problem?”

    Correct. Nobody ever did. They are “experts” in search of victims.

  50. The IPCC is done; they served their masters’ purpose and no longer have a role. The problem Al Gore and the global warmies have is that some of the scientists woke up to how they were being used. “The science is settled” was the way the scary warmist promoters wanted to silence credible witnesses.
    We will not hear from the IPCC in any meaningful way again. The lie factory has been shut down.
