First of three posts examining papers by Lewandowsky and co-authors that predate his ‘conspiracy ideation’ claims. These papers warn of cognitive bias effects, all of which occur within the CAGW Consensus, confirming it is heavily biased. Can’t admit this? Skeptics exposing the dilemma? So… push skeptics beyond the pale, minimizing cognitive dissonance.
Psychologist Stephan Lewandowsky’s ‘conspiracy ideation’ papers (‘Moon Hoax’ and ‘Recursive Fury’), which link climate skeptics to generic belief in ‘way out there’ conspiracies, have generated a great deal of traffic in the climate blogosphere and the media. Not least regarding the pretty much inarguable challenges to their methodology and data collection, to the legitimacy of such approval procedures as occurred, and even to the ethics of the papers; essentially to the entire validity of these works. Indeed ‘Recursive Fury’ was eventually withdrawn from the journal Frontiers in Psychology on ethical grounds.
However, earlier papers from Lewandowsky (with various co-authors) identify and warn us about a list of major cognitive bias effects in society, all of which occur within the social phenomenon† of Catastrophic Anthropogenic Global Warming (CAGW) and strongly contribute to the dominance of that phenomenon. Constrained so tightly by his own findings, wrapped if you will in Lew papers, yet apparently possessing a worldview that is highly challenged by any questioning of the climate change ‘Consensus’ (note: a challenge to worldview is itself one of the warnings), any attempt by Lewandowsky to analyze rising world skepticism was very likely to result in one of two polarized outcomes. Either a wholesale rejection of the climate Consensus, based upon the belated realization that all of the above warnings apply to CAGW and must always have applied; or an attempt to place skeptics beyond the pale, which might preserve a pre-existing worldview and prevent the head-on intellectual and emotional crash of the bias list with the behavior of the Consensus. It seems that the latter course was taken. While I have some sympathy for anyone caught in such an excruciating position, and the resultant behavior in these circumstances is typically not fully conscious, the debacle described in the previous paragraph looks very much like a desperate and sustained attempt to reduce cognitive dissonance.
This short series of posts does not delve further into the tangle surrounding Lewandowsky’s recent jaunt into conspiracy ideation, represented by Moon Hoax / Recursive Fury and other papers. Instead I explore, in detail, the warnings about cognitive bias that came mainly before that jaunt. In this first post, each of the warnings is detailed by type. In the second post, the excellent applicability of each warning to CAGW is demonstrated. And finally, the clash of these warnings with preconceptions is examined in the third post; a clash that psychologists, and academia generally, should heed regarding climate change perceptions. The three posts together form an extensive look at climate psychologization, using Lewandowsky’s work and stance as a prominent example case and framework. They demonstrate that bias has blinded the discipline of psychology and prevented it from applying established principles and past findings (even about bias!) to the climate domain, which in turn has led to grossly erroneous conclusions. Along the way we glimpse the root causes of, and flawed treatments for, climate depression aka eco-anxiety aka apocalypse fatigue, and open a useful window onto the fundamental workings of Consensus culture itself.
Note: no original work on psychology is contributed in these posts; the conclusions of the prior papers from Lewandowsky and associated authors are taken at face value, with explanatory comment but without significant critique. And no prior knowledge of psychology is required to follow this series, which is (hopefully!) broken down into a logical trail of modest steps that folks can follow. This first and shorter post introduces the list of cognitive biases.
The first warning type can be termed ‘worldview bias’. The paper Misinformation and Its Correction: Continued Influence and Successful Debiasing by Lewandowsky et al. (L2012) is about the spread of misinformation, plus strategies to correct this misinformation and counteract its damaging effects. Yet the paper spends a lot of time on the role that an individual’s pre-existing worldview plays in absorbing misinformation in the first place, and likewise in being resistant to later concerted attempts at correction. In a short article summarizing the paper at the Association for Psychological Science online, the authors state that: “Individuals’ pre-existing attitudes and worldviews can influence how they respond to certain types of information”. By ‘information’ here they are referring both to the original misinformation and also to corrected information. L2012 contains numerous quotes about how one’s worldview can seriously bias the processing of incoming information, including the rejection of information that challenges the held worldview, the retention of misinformation that supports it, and even (via a ‘backfire’ effect) the perception of misinformation corrections as a justification or reinforcement of the held worldview. E.g.
“It is possible that one’s worldview forms a frame of reference for determining, in Piaget’s (1928) terms, whether to assimilate information or to accommodate it. If one’s investment in a consistent worldview is strong, changing that worldview to accommodate inconsistencies may be too costly or effortful. In a sense, the worldview may serve as a schema for processing related information (Bartlett, 1977/1932), such that relevant factual information may be discarded or misinformation preserved.”
And also this:
“Thus far, we have reviewed copious evidence about people’s inability to update their memories in light of corrective information and have shown how worldview can override fact and corrections can backfire.”
As long as this is taken in a lightweight form (worldview likely doesn’t have an overwhelming influence in all cases; there’s evidence that within some topic domains at least, particular worldviews appear to introduce only modest bias), this position seems both consistent with other literature and not particularly controversial. Indeed it seems like common sense when considered as part of the expression below, also from L2012 (and if we take worldview as a subset of ‘assumed truths’):
“As numerous studies in the literature on social judgment and persuasion have shown, information is more likely to be accepted by people when it is consistent with other things they assume to be true (for reviews, see McGuire, 1972; Wyer, 1974).”
While it appears that more discovery on the underlying psychological mechanisms is required, and no doubt there are challenges to this position, let’s just take it as read here that L2012 is right and that one’s worldview has a strong influence (the paper is emphatic at various places) on the information one does or does not accept as ‘true’. Whatever the reader’s opinion, the important thing in this post is what the opinion of psychologists is, with Lewandowsky and co-authors as our main example :) So the type one warning amounts to: ‘beware of the bias from one’s worldview’.
Type two concerns incoming information that contains a significant emotional component. The paper Theoretical and empirical evidence for the impact of inductive biases on cultural evolution by Griffiths et al (G2008, one of the other authors is Lewandowsky), includes this paragraph:
“Sperber (1996, p. 84) states that ‘the ease with which a particular representation can be memorized’ will affect its transmission, and Boyer (1994, 1998) and Atran (2001) emphasize the effects of inductive biases on memory. This idea has some empirical support. For example, Nichols (2004) showed that social conventions based on disgust were more likely to survive several decades of cultural transmission than those without this emotional component. This advantage is consonant with the large body of research showing that emotional events are often remembered better than comparable events that are lacking an emotional component (for a review, see Buchanan 2007).”
This quote is a little hard to grasp outside the context of the paper, but it says that (mental representations of) social conventions or cultural concepts with an emotional component are easier to memorize, which appears to result in them also being retained for longer and better transmitted to others in society than would be the case for concepts without the emotional load. Social conventions based on ‘disgust’ are the example explored by Nichols, but other literature referred to in this quote (and also elsewhere) suggests that the same effect occurs for a range of different emotive stimuli which can be carried within generic information. The word ‘advantage’ applied to the Nichols example presumably refers to the enhanced prospering of the concept itself, and perhaps also to the fact that ‘disgust’ would typically accompany concepts deemed unhealthy for society. So in short, concepts that include an emotive load will possess an arbitrary bias in their favor.
The same point also appears back within L2012, which posits (quoting Peters et al. in support) that because an emotive load strongly affects the prospects of generic information being passed on, the same should hold for misinformation, the main theme of L2012. I.e. an emotive load should have an effect on the degree to which misinformation both spreads and persists.
“Concerning emotion, we have discussed how misinformation effects arise independently of the emotiveness of the information (Ecker, Lewandowsky, & Apai, 2011). But we have also noted that the likelihood that people will pass on information is based strongly on the likelihood of its eliciting an emotional response in the recipient, rather than its truth value (e.g., K. Peters et al., 2009), which means that the emotiveness of misinformation may have an indirect effect on the degree to which it spreads (and persists). Moreover, the effects of worldview that we reviewed earlier in this article provide an obvious departure point for future work on the link between emotion and misinformation effects, because challenges to people’s worldviews tend to elicit highly emotional defense mechanisms (cf. E. M. Peters, Burraston, & Mertz, 2004).”
There is an important extra observation at the end of this quote regarding the inter-relatedness of emotive content and worldview. To some extent, the emotive load is in the eye of the beholder. Information (or misinformation) that strongly challenges a specific worldview may produce an emotive response in one individual but not in another, arousing in the former ‘highly emotional defense mechanisms’. What the quote doesn’t say is that this is only one side of the effect. Information (or misinformation) that powerfully promotes a specific worldview may likewise produce a strong emotive response, e.g. euphoria, self-justification, enhanced feelings of security and identity [relative to worldview-aligned social entities]. While no doubt future work is required as the paper suggests, the implication is that these implicit emotional responses will add to or subtract from any explicit emotional content, aiding transmission still further in the case of a worldview alignment, and attenuating it in the case of a worldview clash (and possibly, for the latter, spawning the transmission of countering information). This would cause a social amplification of any polarization that already existed regarding the perceived ‘truth’ of the original information. (Note: I used inverted commas around the word truth only to remind folks that for many speculative and / or complex concepts not examined long in retrospect, there would already be some fuzziness and interpretation regarding the level of truth, even without any emotional interference.)
I mention in passing that this part of the above quote: ‘But we have also noted that the likelihood that people will pass on information is based strongly on the likelihood of its eliciting an emotional response in the recipient, rather than its truth value (e.g., K. Peters et al., 2009)’, is a major contributor to the spread of memes; in terms of narrative success, emotive punch is rewarded more than veracity. The lens of memetics is extremely useful for examining this whole area, but I digress and we are staying with Lewandowsky’s work here.
So, the type two warning amounts to: ‘beware of the bias from emotive content’, to which we might add a rider for any particular piece of information: is there implied emotional content, essentially via a powerful type one reaction, which may enhance or attenuate any explicit emotive bias?
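To make the transmission mechanism above concrete, here is a minimal toy sketch of my own (an illustration, not anything from the papers) in which the chance of a message being passed on depends on its emotive load rather than its truth value, per the K. Peters et al. finding quoted above. The message labels, probabilities and population numbers are all assumptions chosen purely for illustration.

```python
# Toy model (illustration only, not from the papers): a message is passed on
# with a probability driven by its emotive load, not by its truth value.
import random

random.seed(1)

MESSAGES = [
    # (label, is_true, emotive_load) -- emotive_load in [0, 1], assumed values
    ("dry but true claim",    True,  0.1),
    ("lurid but false claim", False, 0.9),
]

def pass_on_probability(emotive_load):
    """Share probability grows with emotive load; truth plays no part."""
    base = 0.05                        # chance of sharing a totally flat message
    return base + 0.6 * emotive_load   # assumed linear response, for illustration

def simulate_spread(emotive_load, generations=5, initial_holders=10):
    """Each generation every current holder exposes one new person, who
    adopts (and will later pass on) the message with the above probability."""
    holders = initial_holders
    for _ in range(generations):
        new_holders = sum(
            random.random() < pass_on_probability(emotive_load)
            for _ in range(holders)
        )
        holders += new_holders
    return holders

for label, is_true, load in MESSAGES:
    reach = simulate_spread(load)
    print(f"{label} (true={is_true}): reached ~{reach} people")
```

Run repeatedly, the emotive but false message reliably out-spreads the dry but true one; that is the arbitrary bias in favor of emotive content which the type two warning describes, reduced to its barest mechanics.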
Type three concerns the Continued Influence Effect (CIE). The paper Explicit warnings reduce but do not eliminate the continued influence of misinformation by Ecker et al (E2010, one of the other authors is Lewandowsky), neatly explains the CIE in this paragraph:
‘For example, H. M. Johnson and Seifert (1994) presented participants with a story about a fictitious warehouse fire, allegedly caused by volatile materials stored carelessly in a closet. Participants were later told that the closet had actually been empty. Although participants later remembered this retraction, they still used the outdated misinformation to make inferences; for example, people might argue that the fire was particularly intense because of the volatile materials or that an insurance claim may be refused due to negligence. H. M. Johnson and Seifert (1994) termed this reliance on misinformation the continued influence effect (CIE). The CIE is robust and occurs in a variety of contexts, regardless of the particular story being presented and regardless of the test applied (Ecker et al., in press; H. M. Johnson & Seifert 1994, 1998; Wilkes & Reynolds, 1999).’
The paper is available here: http://rd.springer.com/content/pdf/10.3758%2FMC.38.8.1087.pdf (you may need to cut and paste this link into your browser; for some reason it doesn’t work directly for me). Ecker works at the ‘cogsci’ cognitive science lab of the University of Western Australia, where Lewandowsky also worked before moving to the University of Bristol in the UK.
E2010 demonstrates the robustness and persistence of the CIE via reference to various real-world examples, such as reports relating to Weapons of Mass Destruction in the Iraq conflict, reports relating to the alleged link between autism and vaccines, a New York Times article suggesting that China had directly benefitted from a crisis in the US economy, and the pseudo real-world example of laboratory analogues of court proceedings. E.g.
‘The continued influence of misinformation is also detectable in real-world settings. For example, during the 2003 invasion of Iraq, the public was exposed to countless hints that weapons of mass destruction (WMDs) had been discovered in Iraq. Even though no such report was ever confirmed, these constant hints were powerful enough to engender, in a substantial proportion of the U.S. public, a longstanding belief in the presence of WMDs that has persisted, even after the nonexistence of WMDs became fully evident (Kull, Ramsay, & Lewis, 2003; Lewandowsky, Stritzke, Oberauer, & Morales, 2005). Unconfirmed hints can thus engender false memories in the public (analogous to the “sleep” example presented at the outset) that resist subsequent correction (analogous to the warehouse fire example above).’
I don’t intend to pursue here the exploration of potential mechanisms for the CIE within E2010 or other papers, except to include this summarizing paragraph:
‘The CIE typically has been explained by reference to a mental-event model that people build when trying to understand an unfolding event (H. M. Johnson & Seifert, 1994; van Oostendorp, 1996; Wilkes & Leatherbarrow, 1988). On this view, a retraction of central information creates a gap in the model, and—because people are apparently more willing to accept inconsistencies than they are voids in their event model—they continue to rely on misinformation. That is, people prefer to retain some information in crucial model positions (e.g., what caused something to happen or who was involved), even if that information is known to be discredited (H. M. Johnson & Seifert, 1994; van Oostendorp & Bonebakker, 1999).’
It is notable that the above quote is directly followed by this:
‘Previous efforts [i.e. before the experiments described in this paper] to reduce the CIE have been pursued along various lines, most of which have remained unsuccessful.’
The CIE would appear to be extremely tenacious, and remains influential even when considerable efforts to negate it are undertaken, for instance via repeated high-profile retractions or corrections of information that was later found to be wrong. E2010 further states [my underline]:
‘Contrary to the ease with which false memories can be created and true memories altered, the elimination of memories for information that is later revealed to be false—we refer to this as misinformation—has proven to be considerably more difficult. Misinformation continues to affect behavior, even if people explicitly acknowledge that this information has been retracted, invalidated, or corrected (Ecker, Lewandowsky, & Apai, in press; Ecker, Lewandowsky, Swire, & Chang, 2010; Gilbert, Krull, & Malone, 1990; Gilbert, Tafarodi, & Malone, 1993; H. M. Johnson & Seifert, 1994, 1998; Seifert, 2002; van Oostendorp, 1996; van Oostendorp & Bonebakker, 1999; Wilkes & Leatherbarrow, 1988; Wilkes & Reynolds, 1999).’
E2010 describes two modest experiments aimed at combating the CIE, run on 125 and 92 test subjects respectively. The following quote from the abstract for the paper summarizes the results of those tests:
‘The present study investigated whether the continued influence of misinformation can be reduced by explicitly warning people at the outset that they may be misled. A specific warning—giving detailed information about the continued influence effect (CIE)—succeeded in reducing the continued reliance on outdated information but did not eliminate it. A more general warning—reminding people that facts are not always properly checked before information is disseminated—was even less effective. In an additional experiment, a specific warning was combined with the provision of a plausible alternative explanation for the retracted information. This combined manipulation further reduced the CIE but still failed to eliminate it altogether.’
So, even when subjects are explicitly warned beforehand that such a thing as the CIE exists, and are then also told afterwards that certain information given to them in the experiment was false, along with a clear and plausible explanation as to why the original information was false, the CIE is still not eliminated. I.e. subjects still displayed some level of belief in the false information they’d received. The mention of fact checking in the above quote is also very important; more generically it emphasizes that uncertainties within information, whatever their source, should be clearly communicated up front, although this has to happen in conjunction with other efforts in order to be truly effective against the CIE should the information later prove to be partially or wholly in error. This concept is crucial within the CAGW debate, and so we’ll return to it later.
One factor that can help to reduce the CIE is suspicion of the source(s) of information that may later turn out to be false; in other words, possessing a healthy skepticism (e.g. regarding the potential politicization of the source). It seems that a skeptical stance considerably reduces the CIE. Two other papers with Lewandowsky as lead author are referenced by E2010 in support of this finding, and please take a moment to truly absorb the underlined text at the bottom of this quote [my underline]:
‘The second factor that seems to reduce the CIE is suspicion toward the source of the misinformation. In the WMD studies discussed earlier, belief in the existence of WMDs in Iraq was correlated with support for the war and was especially pronounced in those people who obtained news from sources that supported the invasion (e.g., Fox News; Kull et al., 2003). Lewandowsky et al. (2005) uncovered a more direct link between suspicion and the ability to update misinformation related to the Iraq War. They operationalized suspicion as the extent to which respondents doubted the official WMD-related reasons for the invasion. Lewandowsky et al. (2005) found that, when this measure was used as a predictor variable, it explained nearly a third of the variance in people’s belief in misinformation. Moreover, once suspicion was entered as a predictor, previously striking mean differences between respondents in the U.S. and two other countries (Germany and Australia) disappeared and were, instead, found to reflect differing degrees of suspicion between those countries. Lewandowsky, Stritzke, Oberauer, and Morales (2009) extended the notion of suspicion by suggesting that it may be related to a more stable personality trait of skepticism—skeptics will generally tend to question the motives behind the dissemination of information.’
Yes, that’s right. Lewandowsky et al. are suggesting here that skepticism is a stable personality trait which makes those who possess it less subject to the influence of misinformation and more able to update their position in the light of corrections; a finding that can only mean skepticism is in fact a positive and healthy trait. Lewandowsky echoes this in L2012 [underline = section heading]:
‘Skepticism: A key to accuracy. We have reviewed how worldview and prior beliefs can exert a distorting influence on information processing. However, some attitudes can also safeguard against misinformation effects. In particular, skepticism can reduce susceptibility to misinformation effects if it prompts people to question the origins of information that may later turn out to be false.’
Well, that makes sense. But given the position of Lewandowsky in the climate debate, plus his attempted use of psychology to paint climate-catastrophe skeptics as ‘deniers’ and way-out conspiracy theorists, this insight is highly ironic to say the least.
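As an aside on the statistics in the Lewandowsky et al. (2005) finding quoted above: the idea that a single predictor such as ‘suspicion’ can explain around a third of the variance in belief, and in doing so absorb what looked like striking between-country differences, is easy to illustrate with synthetic data. The sketch below is my own illustration under assumed numbers (the country means, slope and noise levels are all invented); it is not the authors’ data or analysis, only the shape of the argument.

```python
# Synthetic-data sketch: a 'suspicion' predictor explains much of the variance
# in misinformation belief, and residual country differences largely vanish.
# All numbers are invented for illustration; this is not the 2005 data set.
import numpy as np

rng = np.random.default_rng(0)

# Assume the three countries differ mainly in their average level of suspicion.
country_mean_suspicion = {"US": 0.3, "Germany": 0.6, "Australia": 0.6}

samples = []
for country, mu in country_mean_suspicion.items():
    suspicion = np.clip(rng.normal(mu, 0.15, 300), 0, 1)
    # Belief in the misinformation falls as suspicion rises (assumed slope).
    belief = 0.8 - 0.7 * suspicion + rng.normal(0, 0.15, 300)
    samples.append((country, suspicion, belief))

# Raw country means look strikingly different...
for country, suspicion, belief in samples:
    print(f"{country:>9}: mean belief = {belief.mean():.2f}")

# ...but once belief is regressed on suspicion, the predictor captures a large
# share of the variance and the per-country residuals sit close to zero.
suspicion_all = np.concatenate([s for _, s, _ in samples])
belief_all = np.concatenate([b for _, _, b in samples])
slope, intercept = np.polyfit(suspicion_all, belief_all, 1)
residuals = belief_all - (intercept + slope * suspicion_all)
print(f"R^2 for suspicion alone: {1 - residuals.var() / belief_all.var():.2f}")

start = 0
for country, suspicion, _ in samples:
    n = len(suspicion)
    print(f"{country:>9}: mean residual = {residuals[start:start + n].mean():+.2f}")
    start += n
```

How much variance ‘suspicion’ captures in real data obviously depends on the real numbers; the point of the sketch is only to show how a between-country difference can turn out to be a difference in the predictor rather than in the countries themselves.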
In the paper Correcting false information in memory: Manipulating the strength of misinformation encoding and its retraction by Ecker et al (E2011, one of the other authors is Lewandowsky), we are further warned that the CIE cannot wholly be eliminated by any retraction method known to date:
‘…however, the finding that retractions never eliminate continued influence altogether is pervasive and robust.’
Worse still, E2011 also finds that if there is a cognitive load at the time of absorbing any retraction (i.e. the subject’s attention is divided), then the retraction will be much less effective. It occurs to me that this opens an avenue for those who are compelled to retract (e.g. by the law) yet actively seek to lessen the retraction’s impact, for instance by choosing the style of delivery. Further, mixed messages in the retraction itself may create an ‘automatic’ cognitive load, and explicit emotive content might also add to or subtract from the retraction’s effectiveness, per the sections above, as desired. One cannot always assume that retractions themselves will be neutral, even if they are supposed to be.
E2011 concludes that [my underline]:
‘The practical implications of the present research are clear: If misinformation is encoded strongly, the level of continued influence will significantly increase, unless the misinformation is also retracted strongly. Hence, if information that has had a lot of news coverage is found to be incorrect, the retraction will need to be circulated with equal vigor, or else continued influence will persist at high levels. Of course, in reality, initial reports of an event, which may include misinformation (e.g., that a person of interest has committed a crime or that a country seeks to hide WMDs), may attract more interest than their retraction. Moreover, retractions apparently need full attentional resources to become effective; hence, retractions processed during conditions of divided attention (e.g., when listening to the news while driving a car) may remain ineffective.’
I think ‘significantly increase’ in this context essentially means ‘spread strongly within society’.
So, the type three warning amounts to: ‘beware of the bias from the CIE’, which it appears can never be wholly eliminated. Further, we are told that unless specific warnings are given in advance about uncertainty in the information (e.g. from lax fact checking, or implied from any other source) and about the possibility of being misled, the resulting bias from any information that turns out not to be wholly true will be significant. And any retraction will need to be circulated with equal vigor, otherwise the created bias will not be significantly reduced.
Warning type four concerns the repetition of persuasive messages and the ‘third person’ effect.
From the abstract for the video presentation Scientific Uncertainty in Public Discourse: The Case for Leakage Into the Scientific Community (L2014), given by Lewandowsky as part of the AGU Chapman Conference on Communicating Climate Science: A Historic Look to the Future, comes this concerning the ‘third-person effect’ and the constant repetition of a message:
‘To illustrate with an example, the well-known “third-person effect” refers to the fact that people generally think that others (i.e., third persons) are affected more by a persuasive message than they are themselves, even though this is not necessarily the case. Scientists may therefore think that they are impervious to “skeptic” messages in the media, but in fact they are likely to be affected by the constant drumbeat of propaganda.’
You can find the video along with the text of the abstract at Watts Up With That here.
Yes, yes I know, Lewandowsky cites ‘(climate) skeptic propaganda’ as his proffered example of the third-person effect; I’ll return to the context and validity of this in the later posts. But the point is that L2014 cites the third-person effect as having a real influence upon folks, an influence that typically will be increased by narrative repetition. Plus it is indeed a well-known effect, which therefore will receive no further explanation here, yet does play a part in the dominance of the climate Consensus.
That’s probably enough causes of major bias. To summarize these as warnings for an individual seeking to avoid bias, the various papers by Lewandowsky and associated authors include the following wisdom:
Type 1: Beware of the bias from one’s worldview.
Type 2: Beware of the bias caused by explicit emotive content.
Type 2A: Beware of implied emotional content, which via a powerful type 1 reaction may enhance or attenuate Type 2 (essentially an interaction of 1 & 2).
Type 3: Beware of the bias from the CIE, which can never be wholly eliminated.
Type 3A: Beware of information that does not come with health warnings.
Type 3B: Try to be aware of corrections / retractions; be suspicious if these are not on a par with the vigor of the original information transmission.
Type 3C: Be healthily skeptical; suspicions based on innate skepticism reduce the CIE.
Type 4: Beware of the ‘third person effect’, especially for oft repeated / saturating information.
I figure that by this point, a lot of readers can already see how this list of warnings about cognitive bias is directly applicable to the dominant environmental culture promoted by the CAGW Consensus :) Yet given that this biased dominant culture will slip towards every possible means to misunderstand objective analysis, explanations have to be both very clear and very thorough, including implications, and also based as far as is possible (I think I’ve managed this exclusively) upon data from the Consensus itself, hence avoiding um… denial. So that is the job of the next two posts.
Before signing off on this post, I’ll point out that I have not looked into any of the experimental methods or math described in some of the referenced papers (and such basic skills as I once had in statistics atrophied decades ago anyhow!). As mentioned above the conclusions are taken at face value, and while to some extent what matters in this series is simply that Lewandowsky and associated authors believe them, in my limited experience the conclusions appear to mesh reasonably with other literature and are not far out on a limb, as one can only conclude the Moon Hoax / Recursive Fury papers are. And while the variety of real-world examples invoked is I suppose rather narrow (for instance E2010, E2011, L2012 and other Lewandowsky contributions all feature information about ‘Weapons of Mass Destruction’ during the Iraq war), which may allow in some bias by the back door, there nevertheless seems to be a laudable attempt at objectivity, plus conclusions that do not appear to all come out weighted in a single direction. For instance L2012 contains various statements that might surprise those only familiar with Lewandowsky’s conspiracy ideation work and climate-related articles (which generally align strongly with alarmist positions from governments, NGOs, and academic press releases, and include attempts to characterize climate skeptics as beyond the pale). Here are some of those statements:
“Governments and politicians can be powerful sources of misinformation, whether inadvertently or by design.”
“The magnitude of opposition to GM foods seems disproportionate to their actual risks as portrayed by expert bodies (e.g., World Health Organization, 2005), and it appears that people often rely on NGOs, such as Greenpeace, that are critical of peer-reviewed science on the issue to form their opinions about GM foods (Einsele, 2007). These alternative sources have been roundly criticized for spreading misinformation (e.g., Parrott, 2010).”
“For example, after a study forecasting future global extinctions as a result of climate change was published in Nature, it was widely misrepresented by news media reports, which made the consequences seem more catastrophic and the timescale shorter than actually projected (Ladle, Jepson, & Whittaker, 2005). These mischaracterizations of scientific results imply that scientists need to take care to communicate their results clearly and unambiguously, and that press releases need to be meticulously constructed to avoid misunderstandings by the media (e.g., Riesch & Spiegelhalter, 2011).”
Added to which list, the important insight about skepticism from the same paper is worth repeating:
‘Skepticism: A key to accuracy. We have reviewed how worldview and prior beliefs can exert a distorting influence on information processing. However, some attitudes can also safeguard against misinformation effects. In particular, skepticism can reduce susceptibility to misinformation effects if it prompts people to question the origins of information that may later turn out to be false.’
Given an understanding that governments and NGOs can be potent sources of misinformation regarding, say, weapons of mass destruction or GM crops, it seems at best highly inconsistent to give them a free pass regarding the cultural juggernaut of CAGW. And the third quote is even related to climate change! So once upon a time at least, it seems Lewandowsky acknowledged that bias towards the catastrophic point of view can occur within this domain. Yet what proportion of climate-change-related academic or NGO press releases actually take heed of the above advice in L2012? I guess WUWT on its own has probably logged hundreds that are most certainly ambiguous, resulting frequently in mischaracterized results, sometimes wildly so (even with the IPCC technical reports as ‘the norm’). And what proportion are well written and accurate, especially where doing so would disadvantage the Consensus, or at least not promote it? Given the cumulative feedback effect of the totality of press releases upon the course of climate science over decades, how can psychologists believe that all those poor ones won’t be causing very significant bias?
None of the social effects occurring within the domain of CAGW are new, and the cognitive mechanisms underlying these effects are bequeathed to us by the evolutionary trajectory of Homo sapiens sapiens. Psychology has made slow but useful progress in understanding these mechanisms over the last 150 years. But as confirmed in the follow-on posts, possessing knowledge of cognitive bias by no means guarantees protection against it; to avoid their own collision of worldviews, Lewandowsky and many of his colleagues simply don’t apply these hard-won findings to the entire social landscape of the climate change domain. Instead, they appear to assume that the dominant paradigm is magically free of bias, and focus only upon negative reactions to that paradigm, namely inaction and skepticism; within the latter, of course, some bias will indeed be found. However, most psychologists soon find themselves mired deep in tar regarding public inaction, the apparently inexplicable riddle of a largely unmoved rump of the public, a kind of ‘inert skeptics’. They find only puzzlement, or a string of secondary-strength explanations like psychological distancing or issue fatigue, and many more (we find in this series the real reason). At least the practitioners who stop at this point realize that such large numbers of people can’t be dismissed as ‘out there’, nor can the tiny community of ‘active skeptics’ really be driving them all. A few practitioners nevertheless persist in trying to assign various degrees of villainy, fingering the ‘deniers’. Lewandowsky has travelled the extra mile into the mythic, beyond the merchants-of-doubt meme and on to a highly improbable theory of conspiracy ideation. One psychologist at least, PhD candidate in Social Psychology Jose Duarte, has been brave enough to call out ‘Moon Hoax’ and ‘Recursive Fury’ in the strongest terms (‘this is fraud’, ‘wildly unethical’). I wonder how many of his silent colleagues in the discipline are ‘only’ biased, or instead, afraid of condemnation for stepping out of the Consensus line?
Probing deep into Consensus culture and peeling off the surface of climate psychologization, the rest of this series presents extensive evidence supporting the above paragraph, examining Lew and crew’s list of cognitive biases presented here in relation to the entire social domain of climate change. It becomes very clear that this list is in fact excellently applicable to the dominant culture of the catastrophic within that domain; clear also that acknowledging the truth of this would cause a severe clash of worldviews with reality for many folks, including Lewandowsky and, it seems (at least from a first-pass search), almost all psychologists who have stuck their fingers into the muddy mess of climate change mind-sets.
Andy West : www.wearenarrative.wordpress.com
Footnote
† The social phenomenon of CAGW is largely independent of anything that is happening in the climate, and whether this is good, bad or indifferent. CO2 worries acted as a trigger, but once triggered the social processes have a developmental trajectory of their own. While scientific uncertainties surrounding the wicked problem of understanding the climate system remain broad, science will be unable to constrain or shut down these social processes, which are currently dominated by a culture of catastrophe.
Main Reference Papers
L2014 = abstract for the video presentation Scientific Uncertainty in Public Discourse: The Case for Leakage Into the Scientific Community, by Lewandowsky. Video and text of the abstract at WUWT.
L2012 = Misinformation and Its Correction: Continued Influence and Successful Debiasing, by Lewandowsky et al.
E2011 = Correcting false information in memory: Manipulating the strength of misinformation encoding and its retraction, by Ecker et al (one of the other authors is Lewandowsky).
E2010 = Explicit warnings reduce but do not eliminate the continued influence of misinformation, by Ecker et al (one of the other authors is Lewandowsky). You may need to cut and paste this link into your browser: http://rd.springer.com/content/pdf/10.3758%2FMC.38.8.1087.pdf
G2008 = Theoretical and empirical evidence for the impact of inductive biases on cultural evolution, by Griffiths et al (one of the other authors is Lewandowsky).

“The greatest challenge facing mankind is the challenge of distinguishing reality from fantasy, truth from propaganda. Perceiving the truth has always been a challenge to mankind, but in the information age (or as I think of it, the disinformation age) it takes on a special urgency and importance.”
– Michael Crichton, 2003